| Column | Type | Values |
|---|---|---|
| problem_id | string | lengths 18-22 |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | lengths 13-58 |
| prompt | string | lengths 1.1k-10.2k |
| golden_diff | string | lengths 151-4.94k |
| verification_info | string | lengths 582-21k |
| num_tokens | int64 | 271-2.05k |
| num_tokens_diff | int64 | 47-1.02k |
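The rows below follow this schema. As a quick orientation, here is a minimal sketch of how such rows might be loaded and iterated with the Hugging Face `datasets` library. The repository id (taken from the `source` column) and the split name are assumptions rather than facts stated on this page, and `verification_info` is only assumed to be a JSON string based on how it renders in the rows.

```python
import json

from datasets import load_dataset

# Hypothetical repository id and split; substitute the actual ones for this dataset.
ds = load_dataset("rasdani/github-patches", split="train")

for row in ds:
    prompt = row["prompt"]            # issue text plus the relevant (buggy) file contents
    gold_patch = row["golden_diff"]   # reference fix in `git diff` format
    # verification_info looks like a JSON string in the rows below; parse it if so.
    meta = json.loads(row["verification_info"])
    print(row["problem_id"], row["in_source_id"],
          row["num_tokens"], row["num_tokens_diff"])
```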
gh_patches_debug_6867 | rasdani/github-patches | git_diff | python-poetry__poetry-1621 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`poetry shell` puts Terminal in broken state and does not function
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
- **OS version and name**: Mac OS Mojave (10.14.6)
- **Poetry version**: 1.0.0b5
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/orokusaki/0750bd0dfef13324353d302d74a48254
## Further environment notes
- Python 2.7.17 and Python 3.7.5 installed via Homebrew
- Poetry installed via `curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | POETRY_PREVIEW=1 python`
## Issue
Upon using `poetry shell -vvv` (also tried without `-vvv` flag) the shell appears to spawn, but when I attempt to type any command, no text appears in the Terminal, and when I hit <kbd>return</kbd> I get what you can see in the screenshot I attached (the screenshot reflects the state after I typed a few characters and then hit <kbd>return</kbd> twice). If I send `SIGINT` to the shell (<kbd>CTRL</kbd> + <kbd>C</kbd>), the Terminal drops to a new line with the same output and lack of responsiveness, and upon sending `SIGINT` many times I'm still left with the Terminal in an unusable state. If I attempt to close Terminal, I get "*Closing this tab will terminate the running processes: bash, Python.*", which indicates that some code in Poetry is still hung up.
### Screenshot
<img src="https://user-images.githubusercontent.com/97720/69014062-6a16bf80-0954-11ea-9717-7ff259875eea.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/utils/shell.py`
Content:
```
1 import os
2 import signal
3 import sys
4
5 import pexpect
6
7 from clikit.utils.terminal import Terminal
8 from shellingham import ShellDetectionFailure
9 from shellingham import detect_shell
10
11 from ._compat import WINDOWS
12 from .env import VirtualEnv
13
14
15 class Shell:
16 """
17 Represents the current shell.
18 """
19
20 _shell = None
21
22 def __init__(self, name, path): # type: (str, str) -> None
23 self._name = name
24 self._path = path
25
26 @property
27 def name(self): # type: () -> str
28 return self._name
29
30 @property
31 def path(self): # type: () -> str
32 return self._path
33
34 @classmethod
35 def get(cls): # type: () -> Shell
36 """
37 Retrieve the current shell.
38 """
39 if cls._shell is not None:
40 return cls._shell
41
42 try:
43 name, path = detect_shell(os.getpid())
44 except (RuntimeError, ShellDetectionFailure):
45 raise RuntimeError("Unable to detect the current shell.")
46
47 cls._shell = cls(name, path)
48
49 return cls._shell
50
51 def activate(self, env): # type: (VirtualEnv) -> None
52 if WINDOWS:
53 return env.execute(self.path)
54
55 terminal = Terminal()
56 with env.temp_environ():
57 c = pexpect.spawn(
58 self._path, ["-i"], dimensions=(terminal.height, terminal.width)
59 )
60
61 c.setecho(False)
62 activate_script = self._get_activate_script()
63 bin_dir = "Scripts" if WINDOWS else "bin"
64 activate_path = env.path / bin_dir / activate_script
65 c.sendline("{} {}".format(self._get_source_command(), activate_path))
66
67 def resize(sig, data):
68 terminal = Terminal()
69 c.setwinsize(terminal.height, terminal.width)
70
71 signal.signal(signal.SIGWINCH, resize)
72
73 # Interact with the new shell.
74 c.interact(escape_character=None)
75 c.close()
76
77 sys.exit(c.exitstatus)
78
79 def _get_activate_script(self):
80 if "fish" == self._name:
81 suffix = ".fish"
82 elif "csh" == self._name:
83 suffix = ".csh"
84 else:
85 suffix = ""
86
87 return "activate" + suffix
88
89 def _get_source_command(self):
90 if "fish" == self._name:
91 return "source"
92 elif "csh" == self._name:
93 return "source"
94
95 return "."
96
97 def __repr__(self): # type: () -> str
98 return '{}("{}", "{}")'.format(self.__class__.__name__, self._name, self._path)
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/utils/shell.py b/poetry/utils/shell.py
--- a/poetry/utils/shell.py
+++ b/poetry/utils/shell.py
@@ -58,7 +58,9 @@
self._path, ["-i"], dimensions=(terminal.height, terminal.width)
)
- c.setecho(False)
+ if not self._name == "bash":
+ c.setecho(False)
+
activate_script = self._get_activate_script()
bin_dir = "Scripts" if WINDOWS else "bin"
activate_path = env.path / bin_dir / activate_script
| {"golden_diff": "diff --git a/poetry/utils/shell.py b/poetry/utils/shell.py\n--- a/poetry/utils/shell.py\n+++ b/poetry/utils/shell.py\n@@ -58,7 +58,9 @@\n self._path, [\"-i\"], dimensions=(terminal.height, terminal.width)\n )\n \n- c.setecho(False)\n+ if not self._name == \"bash\":\n+ c.setecho(False)\n+\n activate_script = self._get_activate_script()\n bin_dir = \"Scripts\" if WINDOWS else \"bin\"\n activate_path = env.path / bin_dir / activate_script\n", "issue": "`poetry shell` puts Terminal in broken state and does not function\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n- **OS version and name**: Mac OS Mojave (10.14.6)\r\n- **Poetry version**: 1.0.0b5\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/orokusaki/0750bd0dfef13324353d302d74a48254\r\n\r\n## Further environment notes\r\n\r\n - Python 2.7.17 and Python 3.7.5 installed via Homebrew\r\n - Poetry installed via `curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | POETRY_PREVIEW=1 python`\r\n\r\n## Issue\r\n\r\nUpon using `poetry shell -vvv` (also tried without `-vvv` flag) the shell appears to spawn, but when I attempt to type any command, no text appears in the Terminal, and when I hit <kbd>return</kbd> I get what you can see in the screenshot I attached (the screenshot reflects the state after I typed a few characters and then hit <kbd>return</kbd> twice). If I send `SIGINT` to the shell (<kbd>CTRL</kbd> + <kbd>C</kbd>), the Terminal drops to a new line with the same output and lack of responsiveness, and upon sending `SIGINT` many times I'm still left with the Terminal in an unusable state. 
If I attempt to close Terminal, I get \"*Closing this tab will terminate the running processes: bash, Python.*\", which indicates that some code in Poetry is still hung up.\r\n\r\n### Screenshot\r\n\r\n<img src=\"https://user-images.githubusercontent.com/97720/69014062-6a16bf80-0954-11ea-9717-7ff259875eea.png\">\n", "before_files": [{"content": "import os\nimport signal\nimport sys\n\nimport pexpect\n\nfrom clikit.utils.terminal import Terminal\nfrom shellingham import ShellDetectionFailure\nfrom shellingham import detect_shell\n\nfrom ._compat import WINDOWS\nfrom .env import VirtualEnv\n\n\nclass Shell:\n \"\"\"\n Represents the current shell.\n \"\"\"\n\n _shell = None\n\n def __init__(self, name, path): # type: (str, str) -> None\n self._name = name\n self._path = path\n\n @property\n def name(self): # type: () -> str\n return self._name\n\n @property\n def path(self): # type: () -> str\n return self._path\n\n @classmethod\n def get(cls): # type: () -> Shell\n \"\"\"\n Retrieve the current shell.\n \"\"\"\n if cls._shell is not None:\n return cls._shell\n\n try:\n name, path = detect_shell(os.getpid())\n except (RuntimeError, ShellDetectionFailure):\n raise RuntimeError(\"Unable to detect the current shell.\")\n\n cls._shell = cls(name, path)\n\n return cls._shell\n\n def activate(self, env): # type: (VirtualEnv) -> None\n if WINDOWS:\n return env.execute(self.path)\n\n terminal = Terminal()\n with env.temp_environ():\n c = pexpect.spawn(\n self._path, [\"-i\"], dimensions=(terminal.height, terminal.width)\n )\n\n c.setecho(False)\n activate_script = self._get_activate_script()\n bin_dir = \"Scripts\" if WINDOWS else \"bin\"\n activate_path = env.path / bin_dir / activate_script\n c.sendline(\"{} {}\".format(self._get_source_command(), activate_path))\n\n def resize(sig, data):\n terminal = Terminal()\n c.setwinsize(terminal.height, terminal.width)\n\n signal.signal(signal.SIGWINCH, resize)\n\n # Interact with the new shell.\n c.interact(escape_character=None)\n c.close()\n\n sys.exit(c.exitstatus)\n\n def _get_activate_script(self):\n if \"fish\" == self._name:\n suffix = \".fish\"\n elif \"csh\" == self._name:\n suffix = \".csh\"\n else:\n suffix = \"\"\n\n return \"activate\" + suffix\n\n def _get_source_command(self):\n if \"fish\" == self._name:\n return \"source\"\n elif \"csh\" == self._name:\n return \"source\"\n\n return \".\"\n\n def __repr__(self): # type: () -> str\n return '{}(\"{}\", \"{}\")'.format(self.__class__.__name__, self._name, self._path)\n", "path": "poetry/utils/shell.py"}], "after_files": [{"content": "import os\nimport signal\nimport sys\n\nimport pexpect\n\nfrom clikit.utils.terminal import Terminal\nfrom shellingham import ShellDetectionFailure\nfrom shellingham import detect_shell\n\nfrom ._compat import WINDOWS\nfrom .env import VirtualEnv\n\n\nclass Shell:\n \"\"\"\n Represents the current shell.\n \"\"\"\n\n _shell = None\n\n def __init__(self, name, path): # type: (str, str) -> None\n self._name = name\n self._path = path\n\n @property\n def name(self): # type: () -> str\n return self._name\n\n @property\n def path(self): # type: () -> str\n return self._path\n\n @classmethod\n def get(cls): # type: () -> Shell\n \"\"\"\n Retrieve the current shell.\n \"\"\"\n if cls._shell is not None:\n return cls._shell\n\n try:\n name, path = detect_shell(os.getpid())\n except (RuntimeError, ShellDetectionFailure):\n raise RuntimeError(\"Unable to detect the current shell.\")\n\n cls._shell = cls(name, path)\n\n return cls._shell\n\n def activate(self, env): # 
type: (VirtualEnv) -> None\n if WINDOWS:\n return env.execute(self.path)\n\n terminal = Terminal()\n with env.temp_environ():\n c = pexpect.spawn(\n self._path, [\"-i\"], dimensions=(terminal.height, terminal.width)\n )\n\n if not self._name == \"bash\":\n c.setecho(False)\n\n activate_script = self._get_activate_script()\n bin_dir = \"Scripts\" if WINDOWS else \"bin\"\n activate_path = env.path / bin_dir / activate_script\n c.sendline(\"{} {}\".format(self._get_source_command(), activate_path))\n\n def resize(sig, data):\n terminal = Terminal()\n c.setwinsize(terminal.height, terminal.width)\n\n signal.signal(signal.SIGWINCH, resize)\n\n # Interact with the new shell.\n c.interact(escape_character=None)\n c.close()\n\n sys.exit(c.exitstatus)\n\n def _get_activate_script(self):\n if \"fish\" == self._name:\n suffix = \".fish\"\n elif \"csh\" == self._name:\n suffix = \".csh\"\n else:\n suffix = \"\"\n\n return \"activate\" + suffix\n\n def _get_source_command(self):\n if \"fish\" == self._name:\n return \"source\"\n elif \"csh\" == self._name:\n return \"source\"\n\n return \".\"\n\n def __repr__(self): # type: () -> str\n return '{}(\"{}\", \"{}\")'.format(self.__class__.__name__, self._name, self._path)\n", "path": "poetry/utils/shell.py"}]} | 1,639 | 134 |
gh_patches_debug_30769 | rasdani/github-patches | git_diff | napari__napari-873 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
new zarr release / numcodecs
## 🐛 Bug
Looks like zarr's new release requires numcodecs > 0.6.4, but we pinned to exclude it (see discussion #666). I think we need to resolve this ASAP and then make the 0.2.10 release (which also includes the #866 bug fix). Thoughts @tlambert03 @jni? Has the 0.6.4 numcodecs install problem been resolved? You can see our failing tests in #867.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/utils/io.py`
Content:
```
1 import os
2
3 from glob import glob
4 from pathlib import Path
5
6 import numpy as np
7 from skimage import io
8 from skimage.io.collection import alphanumeric_key
9
10 from dask import delayed
11 from dask import array as da
12 import zarr
13
14
15 def magic_imread(filenames, *, use_dask=None, stack=True):
16 """Dispatch the appropriate reader given some files.
17
18 The files are assumed to all have the same shape.
19
20 Parameters
21 -------
22 filenames : list
23 List of filenames or directories to be opened.
24 A list of `pathlib.Path` objects and a single filename or `Path` object
25 are also accepted.
26 use_dask : bool
27 Whether to use dask to create a lazy array, rather than NumPy.
28 Default of None will resolve to True if filenames contains more than
29 one image, False otherwise.
30 stack : bool
31 Whether to stack the images in multiple files into a single array. If
32 False, a list of arrays will be returned.
33
34 Returns
35 -------
36 image : array-like
37 Array or list of images
38 """
39 # cast Path to string
40 if isinstance(filenames, Path):
41 filenames = filenames.as_posix()
42
43 if len(filenames) == 0:
44 return None
45 if isinstance(filenames, str):
46 filenames = [filenames] # ensure list
47
48 # replace folders with their contents
49 filenames_expanded = []
50 for filename in filenames:
51 ext = os.path.splitext(filename)[-1]
52 # zarr files are folders, but should be read as 1 file
53 if os.path.isdir(filename) and not ext == '.zarr':
54 dir_contents = sorted(
55 glob(os.path.join(filename, '*.*')), key=alphanumeric_key
56 )
57 # remove subdirectories
58 dir_contents_files = filter(
59 lambda f: not os.path.isdir(f), dir_contents
60 )
61 filenames_expanded.extend(dir_contents_files)
62 else:
63 filenames_expanded.append(filename)
64
65 if use_dask is None:
66 use_dask = len(filenames_expanded) > 1
67
68 # then, read in images
69 images = []
70 shape = None
71 for filename in filenames_expanded:
72 ext = os.path.splitext(filename)[-1]
73 if ext == '.zarr':
74 image, zarr_shape = read_zarr_dataset(filename)
75 if shape is None:
76 shape = zarr_shape
77 else:
78 if shape is None:
79 image = io.imread(filename)
80 shape = image.shape
81 dtype = image.dtype
82 if use_dask:
83 image = da.from_delayed(
84 delayed(io.imread)(filename), shape=shape, dtype=dtype
85 )
86 elif len(images) > 0: # not read by shape clause
87 image = io.imread(filename)
88 images.append(image)
89 if len(images) == 1:
90 image = images[0]
91 else:
92 if stack:
93 if use_dask:
94 image = da.stack(images)
95 else:
96 image = np.stack(images)
97 else:
98 image = images # return a list
99 return image
100
101
102 def read_zarr_dataset(filename):
103 """Read a zarr dataset, including an array or a group of arrays.
104
105 Parameters
106 --------
107 filename : str
108 Path to file ending in '.zarr'. File can contain either an array
109 or a group of arrays in the case of pyramid data.
110 Returns
111 -------
112 image : array-like
113 Array or list of arrays
114 shape : tuple
115 Shape of array or first array in list
116 """
117 zr = zarr.open(filename, mode='r')
118 if isinstance(zr, zarr.core.Array):
119 # load zarr array
120 image = da.from_zarr(filename)
121 shape = image.shape
122 else:
123 # else load zarr all arrays inside file, useful for pyramid data
124 image = [da.from_zarr(filename, component=c) for c, a in zr.arrays()]
125 shape = image[0].shape
126 return image, shape
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/utils/io.py b/napari/utils/io.py
--- a/napari/utils/io.py
+++ b/napari/utils/io.py
@@ -9,7 +9,6 @@
from dask import delayed
from dask import array as da
-import zarr
def magic_imread(filenames, *, use_dask=None, stack=True):
@@ -99,13 +98,13 @@
return image
-def read_zarr_dataset(filename):
+def read_zarr_dataset(path):
"""Read a zarr dataset, including an array or a group of arrays.
Parameters
--------
- filename : str
- Path to file ending in '.zarr'. File can contain either an array
+ path : str
+ Path to directory ending in '.zarr'. Path can contain either an array
or a group of arrays in the case of pyramid data.
Returns
-------
@@ -114,13 +113,17 @@
shape : tuple
Shape of array or first array in list
"""
- zr = zarr.open(filename, mode='r')
- if isinstance(zr, zarr.core.Array):
+ if os.path.exists(os.path.join(path, '.zarray')):
# load zarr array
- image = da.from_zarr(filename)
+ image = da.from_zarr(path)
shape = image.shape
- else:
+ elif os.path.exists(os.path.join(path, '.zgroup')):
# else load zarr all arrays inside file, useful for pyramid data
- image = [da.from_zarr(filename, component=c) for c, a in zr.arrays()]
+ image = []
+ for subpath in sorted(os.listdir(path)):
+ if not subpath.startswith('.'):
+ image.append(read_zarr_dataset(os.path.join(path, subpath))[0])
shape = image[0].shape
+ else:
+ raise ValueError(f"Not a zarr dataset or group: {path}")
return image, shape
| {"golden_diff": "diff --git a/napari/utils/io.py b/napari/utils/io.py\n--- a/napari/utils/io.py\n+++ b/napari/utils/io.py\n@@ -9,7 +9,6 @@\n \n from dask import delayed\n from dask import array as da\n-import zarr\n \n \n def magic_imread(filenames, *, use_dask=None, stack=True):\n@@ -99,13 +98,13 @@\n return image\n \n \n-def read_zarr_dataset(filename):\n+def read_zarr_dataset(path):\n \"\"\"Read a zarr dataset, including an array or a group of arrays.\n \n Parameters\n --------\n- filename : str\n- Path to file ending in '.zarr'. File can contain either an array\n+ path : str\n+ Path to directory ending in '.zarr'. Path can contain either an array\n or a group of arrays in the case of pyramid data.\n Returns\n -------\n@@ -114,13 +113,17 @@\n shape : tuple\n Shape of array or first array in list\n \"\"\"\n- zr = zarr.open(filename, mode='r')\n- if isinstance(zr, zarr.core.Array):\n+ if os.path.exists(os.path.join(path, '.zarray')):\n # load zarr array\n- image = da.from_zarr(filename)\n+ image = da.from_zarr(path)\n shape = image.shape\n- else:\n+ elif os.path.exists(os.path.join(path, '.zgroup')):\n # else load zarr all arrays inside file, useful for pyramid data\n- image = [da.from_zarr(filename, component=c) for c, a in zr.arrays()]\n+ image = []\n+ for subpath in sorted(os.listdir(path)):\n+ if not subpath.startswith('.'):\n+ image.append(read_zarr_dataset(os.path.join(path, subpath))[0])\n shape = image[0].shape\n+ else:\n+ raise ValueError(f\"Not a zarr dataset or group: {path}\")\n return image, shape\n", "issue": "new zarr release / numcodecs\n## \ud83d\udc1b Bug\r\n\r\nLooks like zarr's new release requires numcodecs > 0.6.4, but we pinned to exclude it see discussion #666. I think we need to resolve this ASAP and then make the 0.2.10 release (which also includes the #866 bug fix). Thoughts @tlambert03 @jni? Has the 0.6.4 numcodecs install problem been resolved? You can see our failing tests in #867. \n", "before_files": [{"content": "import os\n\nfrom glob import glob\nfrom pathlib import Path\n\nimport numpy as np\nfrom skimage import io\nfrom skimage.io.collection import alphanumeric_key\n\nfrom dask import delayed\nfrom dask import array as da\nimport zarr\n\n\ndef magic_imread(filenames, *, use_dask=None, stack=True):\n \"\"\"Dispatch the appropriate reader given some files.\n\n The files are assumed to all have the same shape.\n\n Parameters\n -------\n filenames : list\n List of filenames or directories to be opened.\n A list of `pathlib.Path` objects and a single filename or `Path` object\n are also accepted.\n use_dask : bool\n Whether to use dask to create a lazy array, rather than NumPy.\n Default of None will resolve to True if filenames contains more than\n one image, False otherwise.\n stack : bool\n Whether to stack the images in multiple files into a single array. 
If\n False, a list of arrays will be returned.\n\n Returns\n -------\n image : array-like\n Array or list of images\n \"\"\"\n # cast Path to string\n if isinstance(filenames, Path):\n filenames = filenames.as_posix()\n\n if len(filenames) == 0:\n return None\n if isinstance(filenames, str):\n filenames = [filenames] # ensure list\n\n # replace folders with their contents\n filenames_expanded = []\n for filename in filenames:\n ext = os.path.splitext(filename)[-1]\n # zarr files are folders, but should be read as 1 file\n if os.path.isdir(filename) and not ext == '.zarr':\n dir_contents = sorted(\n glob(os.path.join(filename, '*.*')), key=alphanumeric_key\n )\n # remove subdirectories\n dir_contents_files = filter(\n lambda f: not os.path.isdir(f), dir_contents\n )\n filenames_expanded.extend(dir_contents_files)\n else:\n filenames_expanded.append(filename)\n\n if use_dask is None:\n use_dask = len(filenames_expanded) > 1\n\n # then, read in images\n images = []\n shape = None\n for filename in filenames_expanded:\n ext = os.path.splitext(filename)[-1]\n if ext == '.zarr':\n image, zarr_shape = read_zarr_dataset(filename)\n if shape is None:\n shape = zarr_shape\n else:\n if shape is None:\n image = io.imread(filename)\n shape = image.shape\n dtype = image.dtype\n if use_dask:\n image = da.from_delayed(\n delayed(io.imread)(filename), shape=shape, dtype=dtype\n )\n elif len(images) > 0: # not read by shape clause\n image = io.imread(filename)\n images.append(image)\n if len(images) == 1:\n image = images[0]\n else:\n if stack:\n if use_dask:\n image = da.stack(images)\n else:\n image = np.stack(images)\n else:\n image = images # return a list\n return image\n\n\ndef read_zarr_dataset(filename):\n \"\"\"Read a zarr dataset, including an array or a group of arrays.\n\n Parameters\n --------\n filename : str\n Path to file ending in '.zarr'. File can contain either an array\n or a group of arrays in the case of pyramid data.\n Returns\n -------\n image : array-like\n Array or list of arrays\n shape : tuple\n Shape of array or first array in list\n \"\"\"\n zr = zarr.open(filename, mode='r')\n if isinstance(zr, zarr.core.Array):\n # load zarr array\n image = da.from_zarr(filename)\n shape = image.shape\n else:\n # else load zarr all arrays inside file, useful for pyramid data\n image = [da.from_zarr(filename, component=c) for c, a in zr.arrays()]\n shape = image[0].shape\n return image, shape\n", "path": "napari/utils/io.py"}], "after_files": [{"content": "import os\n\nfrom glob import glob\nfrom pathlib import Path\n\nimport numpy as np\nfrom skimage import io\nfrom skimage.io.collection import alphanumeric_key\n\nfrom dask import delayed\nfrom dask import array as da\n\n\ndef magic_imread(filenames, *, use_dask=None, stack=True):\n \"\"\"Dispatch the appropriate reader given some files.\n\n The files are assumed to all have the same shape.\n\n Parameters\n -------\n filenames : list\n List of filenames or directories to be opened.\n A list of `pathlib.Path` objects and a single filename or `Path` object\n are also accepted.\n use_dask : bool\n Whether to use dask to create a lazy array, rather than NumPy.\n Default of None will resolve to True if filenames contains more than\n one image, False otherwise.\n stack : bool\n Whether to stack the images in multiple files into a single array. 
If\n False, a list of arrays will be returned.\n\n Returns\n -------\n image : array-like\n Array or list of images\n \"\"\"\n # cast Path to string\n if isinstance(filenames, Path):\n filenames = filenames.as_posix()\n\n if len(filenames) == 0:\n return None\n if isinstance(filenames, str):\n filenames = [filenames] # ensure list\n\n # replace folders with their contents\n filenames_expanded = []\n for filename in filenames:\n ext = os.path.splitext(filename)[-1]\n # zarr files are folders, but should be read as 1 file\n if os.path.isdir(filename) and not ext == '.zarr':\n dir_contents = sorted(\n glob(os.path.join(filename, '*.*')), key=alphanumeric_key\n )\n # remove subdirectories\n dir_contents_files = filter(\n lambda f: not os.path.isdir(f), dir_contents\n )\n filenames_expanded.extend(dir_contents_files)\n else:\n filenames_expanded.append(filename)\n\n if use_dask is None:\n use_dask = len(filenames_expanded) > 1\n\n # then, read in images\n images = []\n shape = None\n for filename in filenames_expanded:\n ext = os.path.splitext(filename)[-1]\n if ext == '.zarr':\n image, zarr_shape = read_zarr_dataset(filename)\n if shape is None:\n shape = zarr_shape\n else:\n if shape is None:\n image = io.imread(filename)\n shape = image.shape\n dtype = image.dtype\n if use_dask:\n image = da.from_delayed(\n delayed(io.imread)(filename), shape=shape, dtype=dtype\n )\n elif len(images) > 0: # not read by shape clause\n image = io.imread(filename)\n images.append(image)\n if len(images) == 1:\n image = images[0]\n else:\n if stack:\n if use_dask:\n image = da.stack(images)\n else:\n image = np.stack(images)\n else:\n image = images # return a list\n return image\n\n\ndef read_zarr_dataset(path):\n \"\"\"Read a zarr dataset, including an array or a group of arrays.\n\n Parameters\n --------\n path : str\n Path to directory ending in '.zarr'. Path can contain either an array\n or a group of arrays in the case of pyramid data.\n Returns\n -------\n image : array-like\n Array or list of arrays\n shape : tuple\n Shape of array or first array in list\n \"\"\"\n if os.path.exists(os.path.join(path, '.zarray')):\n # load zarr array\n image = da.from_zarr(path)\n shape = image.shape\n elif os.path.exists(os.path.join(path, '.zgroup')):\n # else load zarr all arrays inside file, useful for pyramid data\n image = []\n for subpath in sorted(os.listdir(path)):\n if not subpath.startswith('.'):\n image.append(read_zarr_dataset(os.path.join(path, subpath))[0])\n shape = image[0].shape\n else:\n raise ValueError(f\"Not a zarr dataset or group: {path}\")\n return image, shape\n", "path": "napari/utils/io.py"}]} | 1,529 | 449 |
gh_patches_debug_10983 | rasdani/github-patches | git_diff | goauthentik__authentik-4957 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Launch URL in Application UI Settings can't be entered for some domains
**Describe the bug**
When I try to add a fixed link to an application, it will return an error with null text.
I think this is happening only for subdomains that contain a dash character in the subdomain portion of the name,
e.g. https://tbb-assets.domain.com
**Screenshots**
This one gets saved without any problems:
https://application.com

But if i edit this domain to something else like:
https://tbb-assets.easyfoodsin.com

**Logs**
Output of docker-compose logs or kubectl logs respectively.
I can't find anything in the logs; it seems that nothing is submitted, as it is a validation error within the application edit screen.
**Version and Deployment (please complete the following information):**
- authentik version: 2023.3.0
- Deployment: docker-compose
**Additional context**
This error is not happening on version (2023.2.2) because I created a few applications recently that have many urls that have a dash on the subdomain.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/lib/models.py`
Content:
```
1 """Generic models"""
2 import re
3
4 from django.core.validators import URLValidator
5 from django.db import models
6 from django.utils.regex_helper import _lazy_re_compile
7 from model_utils.managers import InheritanceManager
8 from rest_framework.serializers import BaseSerializer
9
10
11 class SerializerModel(models.Model):
12 """Base Abstract Model which has a serializer"""
13
14 @property
15 def serializer(self) -> type[BaseSerializer]:
16 """Get serializer for this model"""
17 raise NotImplementedError
18
19 class Meta:
20 abstract = True
21
22
23 class CreatedUpdatedModel(models.Model):
24 """Base Abstract Model to save created and update"""
25
26 created = models.DateTimeField(auto_now_add=True)
27 last_updated = models.DateTimeField(auto_now=True)
28
29 class Meta:
30 abstract = True
31
32
33 class InheritanceAutoManager(InheritanceManager):
34 """Object manager which automatically selects the subclass"""
35
36 def get_queryset(self):
37 return super().get_queryset().select_subclasses()
38
39
40 class InheritanceForwardManyToOneDescriptor(models.fields.related.ForwardManyToOneDescriptor):
41 """Forward ManyToOne Descriptor that selects subclass. Requires InheritanceAutoManager."""
42
43 def get_queryset(self, **hints):
44 return self.field.remote_field.model.objects.db_manager(hints=hints).select_subclasses()
45
46
47 class InheritanceForeignKey(models.ForeignKey):
48 """Custom ForeignKey that uses InheritanceForwardManyToOneDescriptor"""
49
50 forward_related_accessor_class = InheritanceForwardManyToOneDescriptor
51
52
53 class DomainlessURLValidator(URLValidator):
54 """Subclass of URLValidator which doesn't check the domain
55 (to allow hostnames without domain)"""
56
57 def __init__(self, *args, **kwargs) -> None:
58 super().__init__(*args, **kwargs)
59 self.host_re = "(" + self.hostname_re + self.domain_re + "|localhost)"
60 self.regex = _lazy_re_compile(
61 r"^(?:[a-z0-9.+-]*)://" # scheme is validated separately
62 r"(?:[^\s:@/]+(?::[^\s:@/]*)?@)?" # user:pass authentication
63 r"(?:" + self.ipv4_re + "|" + self.ipv6_re + "|" + self.host_re + ")"
64 r"(?::\d{2,5})?" # port
65 r"(?:[/?#][^\s]*)?" # resource path
66 r"\Z",
67 re.IGNORECASE,
68 )
69 self.schemes = ["http", "https", "blank"] + list(self.schemes)
70
71 def __call__(self, value: str):
72 # Check if the scheme is valid.
73 scheme = value.split("://")[0].lower()
74 if scheme not in self.schemes:
75 value = "default" + value
76 super().__call__(value)
77
78
79 class DomainlessFormattedURLValidator(DomainlessURLValidator):
80 """URL validator which allows for python format strings"""
81
82 def __init__(self, *args, **kwargs) -> None:
83 super().__init__(*args, **kwargs)
84 self.host_re = r"([%\(\)a-zA-Z])+" + self.domain_re + self.domain_re
85 self.regex = _lazy_re_compile(
86 r"^(?:[a-z0-9.+-]*)://" # scheme is validated separately
87 r"(?:[^\s:@/]+(?::[^\s:@/]*)?@)?" # user:pass authentication
88 r"(?:" + self.ipv4_re + "|" + self.ipv6_re + "|" + self.host_re + ")"
89 r"(?::\d{2,5})?" # port
90 r"(?:[/?#][^\s]*)?" # resource path
91 r"\Z",
92 re.IGNORECASE,
93 )
94 self.schemes = ["http", "https", "blank"] + list(self.schemes)
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/lib/models.py b/authentik/lib/models.py
--- a/authentik/lib/models.py
+++ b/authentik/lib/models.py
@@ -81,7 +81,8 @@
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
- self.host_re = r"([%\(\)a-zA-Z])+" + self.domain_re + self.domain_re
+ self.formatter_re = r"([%\(\)a-zA-Z])*"
+ self.host_re = "(" + self.formatter_re + self.hostname_re + self.domain_re + "|localhost)"
self.regex = _lazy_re_compile(
r"^(?:[a-z0-9.+-]*)://" # scheme is validated separately
r"(?:[^\s:@/]+(?::[^\s:@/]*)?@)?" # user:pass authentication
| {"golden_diff": "diff --git a/authentik/lib/models.py b/authentik/lib/models.py\n--- a/authentik/lib/models.py\n+++ b/authentik/lib/models.py\n@@ -81,7 +81,8 @@\n \n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n- self.host_re = r\"([%\\(\\)a-zA-Z])+\" + self.domain_re + self.domain_re\n+ self.formatter_re = r\"([%\\(\\)a-zA-Z])*\"\n+ self.host_re = \"(\" + self.formatter_re + self.hostname_re + self.domain_re + \"|localhost)\"\n self.regex = _lazy_re_compile(\n r\"^(?:[a-z0-9.+-]*)://\" # scheme is validated separately\n r\"(?:[^\\s:@/]+(?::[^\\s:@/]*)?@)?\" # user:pass authentication\n", "issue": "Launch URL in Application UI Settings can't be entered for some domains\n**Describe the bug**\r\nWhen I try to add a fixed link to an application, it will return an error with null text.\r\nI think this is happening only for any subdomain that has a dash character on the subdomain portion of the name:\r\nej: https://tbb-assets.domain.com\r\n\r\n**Screenshots**\r\nThis one gets saved without any problems:\r\nhttps://application.com\r\n\r\n\r\nBut if i edit this domain to something else like:\r\nhttps://tbb-assets.easyfoodsin.com\r\n\r\n\r\n**Logs**\r\nOutput of docker-compose logs or kubectl logs respectively.\r\nI can't find anything on the logs it seems that nothing is submitted is a validation error within the application edit screen.\r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik version: 2023.3.0\r\n - Deployment: docker-compose\r\n\r\n**Additional context**\r\nThis error is not happening on version (2023.2.2) because I created a few applications recently that have many urls that have a dash on the subdomain.\n", "before_files": [{"content": "\"\"\"Generic models\"\"\"\nimport re\n\nfrom django.core.validators import URLValidator\nfrom django.db import models\nfrom django.utils.regex_helper import _lazy_re_compile\nfrom model_utils.managers import InheritanceManager\nfrom rest_framework.serializers import BaseSerializer\n\n\nclass SerializerModel(models.Model):\n \"\"\"Base Abstract Model which has a serializer\"\"\"\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n \"\"\"Get serializer for this model\"\"\"\n raise NotImplementedError\n\n class Meta:\n abstract = True\n\n\nclass CreatedUpdatedModel(models.Model):\n \"\"\"Base Abstract Model to save created and update\"\"\"\n\n created = models.DateTimeField(auto_now_add=True)\n last_updated = models.DateTimeField(auto_now=True)\n\n class Meta:\n abstract = True\n\n\nclass InheritanceAutoManager(InheritanceManager):\n \"\"\"Object manager which automatically selects the subclass\"\"\"\n\n def get_queryset(self):\n return super().get_queryset().select_subclasses()\n\n\nclass InheritanceForwardManyToOneDescriptor(models.fields.related.ForwardManyToOneDescriptor):\n \"\"\"Forward ManyToOne Descriptor that selects subclass. 
Requires InheritanceAutoManager.\"\"\"\n\n def get_queryset(self, **hints):\n return self.field.remote_field.model.objects.db_manager(hints=hints).select_subclasses()\n\n\nclass InheritanceForeignKey(models.ForeignKey):\n \"\"\"Custom ForeignKey that uses InheritanceForwardManyToOneDescriptor\"\"\"\n\n forward_related_accessor_class = InheritanceForwardManyToOneDescriptor\n\n\nclass DomainlessURLValidator(URLValidator):\n \"\"\"Subclass of URLValidator which doesn't check the domain\n (to allow hostnames without domain)\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.host_re = \"(\" + self.hostname_re + self.domain_re + \"|localhost)\"\n self.regex = _lazy_re_compile(\n r\"^(?:[a-z0-9.+-]*)://\" # scheme is validated separately\n r\"(?:[^\\s:@/]+(?::[^\\s:@/]*)?@)?\" # user:pass authentication\n r\"(?:\" + self.ipv4_re + \"|\" + self.ipv6_re + \"|\" + self.host_re + \")\"\n r\"(?::\\d{2,5})?\" # port\n r\"(?:[/?#][^\\s]*)?\" # resource path\n r\"\\Z\",\n re.IGNORECASE,\n )\n self.schemes = [\"http\", \"https\", \"blank\"] + list(self.schemes)\n\n def __call__(self, value: str):\n # Check if the scheme is valid.\n scheme = value.split(\"://\")[0].lower()\n if scheme not in self.schemes:\n value = \"default\" + value\n super().__call__(value)\n\n\nclass DomainlessFormattedURLValidator(DomainlessURLValidator):\n \"\"\"URL validator which allows for python format strings\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.host_re = r\"([%\\(\\)a-zA-Z])+\" + self.domain_re + self.domain_re\n self.regex = _lazy_re_compile(\n r\"^(?:[a-z0-9.+-]*)://\" # scheme is validated separately\n r\"(?:[^\\s:@/]+(?::[^\\s:@/]*)?@)?\" # user:pass authentication\n r\"(?:\" + self.ipv4_re + \"|\" + self.ipv6_re + \"|\" + self.host_re + \")\"\n r\"(?::\\d{2,5})?\" # port\n r\"(?:[/?#][^\\s]*)?\" # resource path\n r\"\\Z\",\n re.IGNORECASE,\n )\n self.schemes = [\"http\", \"https\", \"blank\"] + list(self.schemes)\n", "path": "authentik/lib/models.py"}], "after_files": [{"content": "\"\"\"Generic models\"\"\"\nimport re\n\nfrom django.core.validators import URLValidator\nfrom django.db import models\nfrom django.utils.regex_helper import _lazy_re_compile\nfrom model_utils.managers import InheritanceManager\nfrom rest_framework.serializers import BaseSerializer\n\n\nclass SerializerModel(models.Model):\n \"\"\"Base Abstract Model which has a serializer\"\"\"\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n \"\"\"Get serializer for this model\"\"\"\n raise NotImplementedError\n\n class Meta:\n abstract = True\n\n\nclass CreatedUpdatedModel(models.Model):\n \"\"\"Base Abstract Model to save created and update\"\"\"\n\n created = models.DateTimeField(auto_now_add=True)\n last_updated = models.DateTimeField(auto_now=True)\n\n class Meta:\n abstract = True\n\n\nclass InheritanceAutoManager(InheritanceManager):\n \"\"\"Object manager which automatically selects the subclass\"\"\"\n\n def get_queryset(self):\n return super().get_queryset().select_subclasses()\n\n\nclass InheritanceForwardManyToOneDescriptor(models.fields.related.ForwardManyToOneDescriptor):\n \"\"\"Forward ManyToOne Descriptor that selects subclass. 
Requires InheritanceAutoManager.\"\"\"\n\n def get_queryset(self, **hints):\n return self.field.remote_field.model.objects.db_manager(hints=hints).select_subclasses()\n\n\nclass InheritanceForeignKey(models.ForeignKey):\n \"\"\"Custom ForeignKey that uses InheritanceForwardManyToOneDescriptor\"\"\"\n\n forward_related_accessor_class = InheritanceForwardManyToOneDescriptor\n\n\nclass DomainlessURLValidator(URLValidator):\n \"\"\"Subclass of URLValidator which doesn't check the domain\n (to allow hostnames without domain)\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.host_re = \"(\" + self.hostname_re + self.domain_re + \"|localhost)\"\n self.regex = _lazy_re_compile(\n r\"^(?:[a-z0-9.+-]*)://\" # scheme is validated separately\n r\"(?:[^\\s:@/]+(?::[^\\s:@/]*)?@)?\" # user:pass authentication\n r\"(?:\" + self.ipv4_re + \"|\" + self.ipv6_re + \"|\" + self.host_re + \")\"\n r\"(?::\\d{2,5})?\" # port\n r\"(?:[/?#][^\\s]*)?\" # resource path\n r\"\\Z\",\n re.IGNORECASE,\n )\n self.schemes = [\"http\", \"https\", \"blank\"] + list(self.schemes)\n\n def __call__(self, value: str):\n # Check if the scheme is valid.\n scheme = value.split(\"://\")[0].lower()\n if scheme not in self.schemes:\n value = \"default\" + value\n super().__call__(value)\n\n\nclass DomainlessFormattedURLValidator(DomainlessURLValidator):\n \"\"\"URL validator which allows for python format strings\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.formatter_re = r\"([%\\(\\)a-zA-Z])*\"\n self.host_re = \"(\" + self.formatter_re + self.hostname_re + self.domain_re + \"|localhost)\"\n self.regex = _lazy_re_compile(\n r\"^(?:[a-z0-9.+-]*)://\" # scheme is validated separately\n r\"(?:[^\\s:@/]+(?::[^\\s:@/]*)?@)?\" # user:pass authentication\n r\"(?:\" + self.ipv4_re + \"|\" + self.ipv6_re + \"|\" + self.host_re + \")\"\n r\"(?::\\d{2,5})?\" # port\n r\"(?:[/?#][^\\s]*)?\" # resource path\n r\"\\Z\",\n re.IGNORECASE,\n )\n self.schemes = [\"http\", \"https\", \"blank\"] + list(self.schemes)\n", "path": "authentik/lib/models.py"}]} | 1,598 | 203 |
gh_patches_debug_6104 | rasdani/github-patches | git_diff | pre-commit__pre-commit-949 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cspell hook install fails due to pre-commit assumptions regarding npm packages
I am raising this bug here as cspell is still unusable as a pre-commit hook even after the author made additional changes, and I am afraid that the root cause is no longer inside the cspell package.
Mainly, cspell is a TypeScript project that is published on npm and you cannot run it without building it first. Apparently pre-commit does not know about this concept (or I am not aware of it).
More information can be found on https://github.com/Jason3S/cspell/issues/53#issuecomment-402562237
To enabled cspell hook it should be enough to add this:
```
- repo: https://github.com/Jason3S/cspell.git
rev: v3.2.2
hooks:
- id: cspell
```
Still, once you run pre-commit you soon end up with something like:
```
cspell...................................................................Failed
hookid: cspell
internal/modules/cjs/loader.js:611
throw err;
^
Error: Cannot find module './dist/app'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:609:15)
at Function.Module._load (internal/modules/cjs/loader.js:535:25)
at Module.require (internal/modules/cjs/loader.js:663:17)
at require (internal/modules/cjs/helpers.js:20:18)
at Object.<anonymous> (/Users/ssbarnea/.cache/pre-commit/repolvipoC/bin.js:5:1)
at Module._compile (internal/modules/cjs/loader.js:734:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:745:10)
at Module.load (internal/modules/cjs/loader.js:626:32)
at tryModuleLoad (internal/modules/cjs/loader.js:566:12)
at Function.Module._load (internal/modules/cjs/loader.js:558:3)
internal/modules/cjs/loader.js:611
throw err;
^
```
The maintainer of cspell mentioned that the project was not designed to run from source, and the expected behavior is to install the npm package. I have to say that I kinda agree with his view.
How can we address this issue?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/node.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import os
5 import sys
6
7 import pre_commit.constants as C
8 from pre_commit.envcontext import envcontext
9 from pre_commit.envcontext import Var
10 from pre_commit.languages import helpers
11 from pre_commit.languages.python import bin_dir
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.util import cmd_output
14
15
16 ENVIRONMENT_DIR = 'node_env'
17 get_default_version = helpers.basic_get_default_version
18 healthy = helpers.basic_healthy
19
20
21 def _envdir(prefix, version):
22 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
23 return prefix.path(directory)
24
25
26 def get_env_patch(venv):
27 if sys.platform == 'cygwin': # pragma: no cover
28 _, win_venv, _ = cmd_output('cygpath', '-w', venv)
29 install_prefix = r'{}\bin'.format(win_venv.strip())
30 elif sys.platform == 'win32': # pragma: no cover
31 install_prefix = bin_dir(venv)
32 else: # pragma: windows no cover
33 install_prefix = venv
34 return (
35 ('NODE_VIRTUAL_ENV', venv),
36 ('NPM_CONFIG_PREFIX', install_prefix),
37 ('npm_config_prefix', install_prefix),
38 ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),
39 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
40 )
41
42
43 @contextlib.contextmanager
44 def in_env(prefix, language_version):
45 with envcontext(get_env_patch(_envdir(prefix, language_version))):
46 yield
47
48
49 def install_environment(prefix, version, additional_dependencies):
50 additional_dependencies = tuple(additional_dependencies)
51 assert prefix.exists('package.json')
52 envdir = _envdir(prefix, version)
53
54 # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath
55 if sys.platform == 'win32': # pragma: no cover
56 envdir = '\\\\?\\' + os.path.normpath(envdir)
57 with clean_path_on_failure(envdir):
58 cmd = [
59 sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir,
60 ]
61 if version != C.DEFAULT:
62 cmd.extend(['-n', version])
63 cmd_output(*cmd)
64
65 with in_env(prefix, version):
66 helpers.run_setup_cmd(
67 prefix,
68 ('npm', 'install', '-g', '.') + additional_dependencies,
69 )
70
71
72 def run_hook(hook, file_args):
73 with in_env(hook.prefix, hook.language_version):
74 return helpers.run_xargs(hook, helpers.to_cmd(hook), file_args)
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -62,10 +62,11 @@
cmd.extend(['-n', version])
cmd_output(*cmd)
+ dep = 'git+file:///{}'.format(prefix.prefix_dir)
with in_env(prefix, version):
helpers.run_setup_cmd(
prefix,
- ('npm', 'install', '-g', '.') + additional_dependencies,
+ ('npm', 'install', '-g', dep) + additional_dependencies,
)
| {"golden_diff": "diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py\n--- a/pre_commit/languages/node.py\n+++ b/pre_commit/languages/node.py\n@@ -62,10 +62,11 @@\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n \n+ dep = 'git+file:///{}'.format(prefix.prefix_dir)\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix,\n- ('npm', 'install', '-g', '.') + additional_dependencies,\n+ ('npm', 'install', '-g', dep) + additional_dependencies,\n )\n", "issue": "cspell hook install fails due pre-commit assumptions regarding npm packages\nI am raising this bug here as cspell is still unusable as a pre-commit hook even after the author made additional changes and I am afraid that the root cause is no longer inside cspell package.\r\n\r\nMainly cspell is a typescript project that is published on npm and you cannot run it without building it first. Apparently pre-commit does not know about this concenpt (or I am not aware about it).'\r\n\r\nMore information can be found on https://github.com/Jason3S/cspell/issues/53#issuecomment-402562237\r\n\r\nTo enabled cspell hook it should be enough to add this:\r\n```\r\n - repo: https://github.com/Jason3S/cspell.git\r\n rev: v3.2.2\r\n hooks:\r\n - id: cspell\r\n```\r\n\r\nStill, once you run pre-precommit you soon endup with something like:\r\n```\r\ncspell...................................................................Failed\r\nhookid: cspell\r\n\r\ninternal/modules/cjs/loader.js:611\r\n throw err;\r\n ^\r\n\r\nError: Cannot find module './dist/app'\r\n at Function.Module._resolveFilename (internal/modules/cjs/loader.js:609:15)\r\n at Function.Module._load (internal/modules/cjs/loader.js:535:25)\r\n at Module.require (internal/modules/cjs/loader.js:663:17)\r\n at require (internal/modules/cjs/helpers.js:20:18)\r\n at Object.<anonymous> (/Users/ssbarnea/.cache/pre-commit/repolvipoC/bin.js:5:1)\r\n at Module._compile (internal/modules/cjs/loader.js:734:30)\r\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:745:10)\r\n at Module.load (internal/modules/cjs/loader.js:626:32)\r\n at tryModuleLoad (internal/modules/cjs/loader.js:566:12)\r\n at Function.Module._load (internal/modules/cjs/loader.js:558:3)\r\ninternal/modules/cjs/loader.js:611\r\n throw err;\r\n ^\r\n```\r\n\r\nThe maintainer of cspell mentioned that the project was not designed to run from source, and the expected behavior is to install the npm package. I have to say that I kinda agree with his view.\r\n\r\nHow can we address this issue? 
\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport os\nimport sys\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'node_env'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef _envdir(prefix, version):\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n return prefix.path(directory)\n\n\ndef get_env_patch(venv):\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = r'{}\\bin'.format(win_venv.strip())\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n else: # pragma: windows no cover\n install_prefix = venv\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix, language_version):\n with envcontext(get_env_patch(_envdir(prefix, language_version))):\n yield\n\n\ndef install_environment(prefix, version, additional_dependencies):\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n envdir = _envdir(prefix, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = '\\\\\\\\?\\\\' + os.path.normpath(envdir)\n with clean_path_on_failure(envdir):\n cmd = [\n sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir,\n ]\n if version != C.DEFAULT:\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix,\n ('npm', 'install', '-g', '.') + additional_dependencies,\n )\n\n\ndef run_hook(hook, file_args):\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, helpers.to_cmd(hook), file_args)\n", "path": "pre_commit/languages/node.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport os\nimport sys\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'node_env'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef _envdir(prefix, version):\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n return prefix.path(directory)\n\n\ndef get_env_patch(venv):\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = r'{}\\bin'.format(win_venv.strip())\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n else: # pragma: windows no cover\n install_prefix = venv\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),\n ('PATH', 
(bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix, language_version):\n with envcontext(get_env_patch(_envdir(prefix, language_version))):\n yield\n\n\ndef install_environment(prefix, version, additional_dependencies):\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n envdir = _envdir(prefix, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = '\\\\\\\\?\\\\' + os.path.normpath(envdir)\n with clean_path_on_failure(envdir):\n cmd = [\n sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir,\n ]\n if version != C.DEFAULT:\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n\n dep = 'git+file:///{}'.format(prefix.prefix_dir)\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix,\n ('npm', 'install', '-g', dep) + additional_dependencies,\n )\n\n\ndef run_hook(hook, file_args):\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, helpers.to_cmd(hook), file_args)\n", "path": "pre_commit/languages/node.py"}]} | 1,534 | 133 |
gh_patches_debug_16802 | rasdani/github-patches | git_diff | chainer__chainer-658 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TestConvolution2D.test_(forward, backward)_gpu_im2col does not check use_cudnn=False case
`TestConvolution2D.test_forward_gpu_im2col` and `TestConvolution2D.test_backward_gpu_im2col` are expected to test that `Convolution2DFunction.backward_gpu` works correctly when CuDNN is disabled.
To achieve this, these test fixtures set the `self.use_cudnn` attribute of the `Convolution2D` instance to `False`. But what is actually passed to the `convolution_2d` function as the `use_cudnn` option is the `use_cudnn` argument of `__init__`, not the attribute `self.use_cudnn` (see [here](https://github.com/pfnet/chainer/blob/af1f11d4e50b322286a041c416eddd4e0ee63d30/chainer/links/connection/convolution_2d.py#L75)).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/links/connection/convolution_2d.py`
Content:
```
1 import numpy
2
3 from chainer.functions.connection import convolution_2d
4 from chainer import link
5
6
7 class Convolution2D(link.Link):
8
9 """Two-dimensional convolutional layer.
10
11 This link wraps the :func:`~chainer.functions.convolution_2d` function and
12 holds the filter weight and bias vector as parameters.
13
14 Args:
15 in_channels (int): Number of channels of input arrays.
16 out_channels (int): Number of channels of output arrays.
17 ksize (int or (int, int)): Size of filters (a.k.a. kernels).
18 ``ksize=k`` and ``ksize=(k, k)`` are equivalent.
19 stride (int or (int, int)): Stride of filter applications.
20 ``stride=s`` and ``stride=(s, s)`` are equivalent.
21 pad (int or (int, int)): Spatial padding width for input arrays.
22 ``pad=p`` and ``pad=(p, p)`` are equivalent.
23 wscale (float): Scaling factor of the initial weight.
24 bias (float): Initial bias value.
25 nobias (bool): If True, then this link does not use the bias term.
26 use_cudnn (bool): If True, then this link uses CuDNN if available.
27 initialW (4-D array): Initial weight value. If ``None``, then this
28 function uses to initialize ``wscale``.
29 initial_bias (1-D array): Initial bias value. If ``None``, then this
30 function uses to initialize ``bias``.
31
32 .. seealso::
33 See :func:`chainer.functions.convolution_2d` for the definition of
34 two-dimensional convolution.
35
36 Attributes:
37 W (~chainer.Variable): Weight parameter.
38 b (~chainer.Variable): Bias parameter.
39
40 """
41 def __init__(self, in_channels, out_channels, ksize, stride=1, pad=0,
42 wscale=1, bias=0, nobias=False, use_cudnn=True,
43 initialW=None, initial_bias=None):
44 kh, kw = _pair(ksize)
45 self._conv_arg = (stride, pad, use_cudnn)
46
47 W_shape = (out_channels, in_channels, kh, kw)
48 super(Convolution2D, self).__init__(W=W_shape)
49
50 if initialW is not None:
51 self.W.data[...] = initialW
52 else:
53 std = wscale * numpy.sqrt(1. / (kh * kw * in_channels))
54 self.W.data[...] = numpy.random.normal(0, std, W_shape)
55
56 if nobias:
57 self.b = None
58 else:
59 self.add_param('b', out_channels)
60 if initial_bias is None:
61 initial_bias = bias
62 self.b.data[...] = initial_bias
63
64 def __call__(self, x):
65 """Applies the convolution layer.
66
67 Args:
68 x (~chainer.Variable): Input image.
69
70 Returns:
71 ~chainer.Variable: Output of the convolution.
72
73 """
74 return convolution_2d.convolution_2d(
75 x, self.W, self.b, *self._conv_arg)
76
77
78 def _pair(x):
79 if hasattr(x, '__getitem__'):
80 return x
81 return (x, x)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/links/connection/convolution_2d.py b/chainer/links/connection/convolution_2d.py
--- a/chainer/links/connection/convolution_2d.py
+++ b/chainer/links/connection/convolution_2d.py
@@ -42,7 +42,9 @@
wscale=1, bias=0, nobias=False, use_cudnn=True,
initialW=None, initial_bias=None):
kh, kw = _pair(ksize)
- self._conv_arg = (stride, pad, use_cudnn)
+ self.stride = _pair(stride)
+ self.pad = _pair(pad)
+ self.use_cudnn = use_cudnn
W_shape = (out_channels, in_channels, kh, kw)
super(Convolution2D, self).__init__(W=W_shape)
@@ -72,7 +74,7 @@
"""
return convolution_2d.convolution_2d(
- x, self.W, self.b, *self._conv_arg)
+ x, self.W, self.b, self.stride, self.pad, self.use_cudnn)
def _pair(x):
| {"golden_diff": "diff --git a/chainer/links/connection/convolution_2d.py b/chainer/links/connection/convolution_2d.py\n--- a/chainer/links/connection/convolution_2d.py\n+++ b/chainer/links/connection/convolution_2d.py\n@@ -42,7 +42,9 @@\n wscale=1, bias=0, nobias=False, use_cudnn=True,\n initialW=None, initial_bias=None):\n kh, kw = _pair(ksize)\n- self._conv_arg = (stride, pad, use_cudnn)\n+ self.stride = _pair(stride)\n+ self.pad = _pair(pad)\n+ self.use_cudnn = use_cudnn\n \n W_shape = (out_channels, in_channels, kh, kw)\n super(Convolution2D, self).__init__(W=W_shape)\n@@ -72,7 +74,7 @@\n \n \"\"\"\n return convolution_2d.convolution_2d(\n- x, self.W, self.b, *self._conv_arg)\n+ x, self.W, self.b, self.stride, self.pad, self.use_cudnn)\n \n \n def _pair(x):\n", "issue": "TestConvolution2D.test_(forward, backward)_gpu_im2col does not check use_cudnn=False case\n`TestConvolution2D.test_forward_gpu_im2col` and `TestConvolution2D.test_backward_gpu_im2col` are expected to test `Convolution2DFunction.backward_gpu` works correctly when CuDNN is disabled.\n\nTo achieve this, these test fixtures set `self.use_cudnn` attribute of the instance of `Convolution2D` to `False` . But what is actually passed to `convoluton_2d` function as `use_cudnn` option is the `use_cudnn` argument of `__init__` , not the attribute `self.use_cudnn` (See [here](https://github.com/pfnet/chainer/blob/af1f11d4e50b322286a041c416eddd4e0ee63d30/chainer/links/connection/convolution_2d.py#L75)).\n\n", "before_files": [{"content": "import numpy\n\nfrom chainer.functions.connection import convolution_2d\nfrom chainer import link\n\n\nclass Convolution2D(link.Link):\n\n \"\"\"Two-dimensional convolutional layer.\n\n This link wraps the :func:`~chainer.functions.convolution_2d` function and\n holds the filter weight and bias vector as parameters.\n\n Args:\n in_channels (int): Number of channels of input arrays.\n out_channels (int): Number of channels of output arrays.\n ksize (int or (int, int)): Size of filters (a.k.a. kernels).\n ``ksize=k`` and ``ksize=(k, k)`` are equivalent.\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n wscale (float): Scaling factor of the initial weight.\n bias (float): Initial bias value.\n nobias (bool): If True, then this link does not use the bias term.\n use_cudnn (bool): If True, then this link uses CuDNN if available.\n initialW (4-D array): Initial weight value. If ``None``, then this\n function uses to initialize ``wscale``.\n initial_bias (1-D array): Initial bias value. If ``None``, then this\n function uses to initialize ``bias``.\n\n .. seealso::\n See :func:`chainer.functions.convolution_2d` for the definition of\n two-dimensional convolution.\n\n Attributes:\n W (~chainer.Variable): Weight parameter.\n b (~chainer.Variable): Bias parameter.\n\n \"\"\"\n def __init__(self, in_channels, out_channels, ksize, stride=1, pad=0,\n wscale=1, bias=0, nobias=False, use_cudnn=True,\n initialW=None, initial_bias=None):\n kh, kw = _pair(ksize)\n self._conv_arg = (stride, pad, use_cudnn)\n\n W_shape = (out_channels, in_channels, kh, kw)\n super(Convolution2D, self).__init__(W=W_shape)\n\n if initialW is not None:\n self.W.data[...] = initialW\n else:\n std = wscale * numpy.sqrt(1. / (kh * kw * in_channels))\n self.W.data[...] 
= numpy.random.normal(0, std, W_shape)\n\n if nobias:\n self.b = None\n else:\n self.add_param('b', out_channels)\n if initial_bias is None:\n initial_bias = bias\n self.b.data[...] = initial_bias\n\n def __call__(self, x):\n \"\"\"Applies the convolution layer.\n\n Args:\n x (~chainer.Variable): Input image.\n\n Returns:\n ~chainer.Variable: Output of the convolution.\n\n \"\"\"\n return convolution_2d.convolution_2d(\n x, self.W, self.b, *self._conv_arg)\n\n\ndef _pair(x):\n if hasattr(x, '__getitem__'):\n return x\n return (x, x)\n", "path": "chainer/links/connection/convolution_2d.py"}], "after_files": [{"content": "import numpy\n\nfrom chainer.functions.connection import convolution_2d\nfrom chainer import link\n\n\nclass Convolution2D(link.Link):\n\n \"\"\"Two-dimensional convolutional layer.\n\n This link wraps the :func:`~chainer.functions.convolution_2d` function and\n holds the filter weight and bias vector as parameters.\n\n Args:\n in_channels (int): Number of channels of input arrays.\n out_channels (int): Number of channels of output arrays.\n ksize (int or (int, int)): Size of filters (a.k.a. kernels).\n ``ksize=k`` and ``ksize=(k, k)`` are equivalent.\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n wscale (float): Scaling factor of the initial weight.\n bias (float): Initial bias value.\n nobias (bool): If True, then this link does not use the bias term.\n use_cudnn (bool): If True, then this link uses CuDNN if available.\n initialW (4-D array): Initial weight value. If ``None``, then this\n function uses to initialize ``wscale``.\n initial_bias (1-D array): Initial bias value. If ``None``, then this\n function uses to initialize ``bias``.\n\n .. seealso::\n See :func:`chainer.functions.convolution_2d` for the definition of\n two-dimensional convolution.\n\n Attributes:\n W (~chainer.Variable): Weight parameter.\n b (~chainer.Variable): Bias parameter.\n\n \"\"\"\n def __init__(self, in_channels, out_channels, ksize, stride=1, pad=0,\n wscale=1, bias=0, nobias=False, use_cudnn=True,\n initialW=None, initial_bias=None):\n kh, kw = _pair(ksize)\n self.stride = _pair(stride)\n self.pad = _pair(pad)\n self.use_cudnn = use_cudnn\n\n W_shape = (out_channels, in_channels, kh, kw)\n super(Convolution2D, self).__init__(W=W_shape)\n\n if initialW is not None:\n self.W.data[...] = initialW\n else:\n std = wscale * numpy.sqrt(1. / (kh * kw * in_channels))\n self.W.data[...] = numpy.random.normal(0, std, W_shape)\n\n if nobias:\n self.b = None\n else:\n self.add_param('b', out_channels)\n if initial_bias is None:\n initial_bias = bias\n self.b.data[...] = initial_bias\n\n def __call__(self, x):\n \"\"\"Applies the convolution layer.\n\n Args:\n x (~chainer.Variable): Input image.\n\n Returns:\n ~chainer.Variable: Output of the convolution.\n\n \"\"\"\n return convolution_2d.convolution_2d(\n x, self.W, self.b, self.stride, self.pad, self.use_cudnn)\n\n\ndef _pair(x):\n if hasattr(x, '__getitem__'):\n return x\n return (x, x)\n", "path": "chainer/links/connection/convolution_2d.py"}]} | 1,352 | 261 |
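A short sketch of why the golden diff above unpacks the packed `_conv_arg` tuple into separate attributes: with the tuple, reassigning `use_cudnn` after construction never reaches the call, while attribute storage honours the override. This is an illustration under simplified assumptions, not the chainer code or its tests.

```python
# Sketch (not the chainer test itself): packed tuple vs. attributes.
class PackedLink:
    def __init__(self, use_cudnn=True):
        self._conv_arg = (1, 0, use_cudnn)      # frozen at construction time

    def call_args(self):
        return self._conv_arg

class AttrLink:
    def __init__(self, use_cudnn=True):
        self.stride, self.pad, self.use_cudnn = 1, 0, use_cudnn

    def call_args(self):
        return (self.stride, self.pad, self.use_cudnn)

packed, attr = PackedLink(), AttrLink()
packed.use_cudnn = False                        # has no effect on call_args()
attr.use_cudnn = False                          # changes what gets passed on
assert packed.call_args()[-1] is True
assert attr.call_args()[-1] is False
```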
gh_patches_debug_835 | rasdani/github-patches | git_diff | scikit-hep__pyhf-336 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bumpversion missing from setup.py[develop]
# Description
As titled, `bumpversion` is not in the list of develop dependencies.
# Expected Behavior
Installing `pyhf` installs `bumpversion`.
# Actual Behavior
It does not install `bumpversion`.
# Steps to Reproduce
`pip install pyhf[develop]`
# Checklist
- [x] Run `git fetch` to get the most up to date version of `master`
- [x] Searched through existing Issues to confirm this is not a duplicate issue
- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4
5 extras_require = {
6 'tensorflow': [
7 'tensorflow>=1.10.0',
8 'tensorflow-probability==0.3.0',
9 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
10 'setuptools<=39.1.0',
11 ],
12 'torch': ['torch>=0.4.0'],
13 'mxnet': [
14 'mxnet>=1.0.0',
15 'requests<2.19.0,>=2.18.4',
16 'numpy<1.15.0,>=1.8.2',
17 'requests<2.19.0,>=2.18.4',
18 ],
19 # 'dask': [
20 # 'dask[array]'
21 # ],
22 'xmlimport': ['uproot'],
23 'minuit': ['iminuit'],
24 'develop': [
25 'pyflakes',
26 'pytest>=3.5.1',
27 'pytest-cov>=2.5.1',
28 'pytest-benchmark[histogram]',
29 'pytest-console-scripts',
30 'python-coveralls',
31 'coverage>=4.0', # coveralls
32 'matplotlib',
33 'jupyter',
34 'nbdime',
35 'uproot>=3.0.0',
36 'papermill',
37 'graphviz',
38 'sphinx',
39 'sphinxcontrib-bibtex',
40 'sphinxcontrib-napoleon',
41 'sphinx_rtd_theme',
42 'nbsphinx',
43 'm2r',
44 'jsonpatch',
45 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now
46 'pre-commit',
47 'black;python_version>="3.6"', # Black is Python3 only
48 ],
49 }
50 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
51
52 setup(
53 name='pyhf',
54 version='0.0.15',
55 description='(partial) pure python histfactory implementation',
56 url='https://github.com/diana-hep/pyhf',
57 author='Lukas Heinrich',
58 author_email='[email protected]',
59 license='Apache',
60 keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',
61 classifiers=[
62 "Programming Language :: Python :: 2",
63 "Programming Language :: Python :: 2.7",
64 "Programming Language :: Python :: 3",
65 "Programming Language :: Python :: 3.6",
66 ],
67 packages=find_packages(),
68 include_package_data=True,
69 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*",
70 install_requires=[
71 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet
72 'click>=6.0', # for console scripts,
73 'tqdm', # for readxml
74 'six', # for modifiers
75 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
76 'jsonpatch',
77 ],
78 extras_require=extras_require,
79 entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
80 dependency_links=[],
81 )
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -35,6 +35,7 @@
'uproot>=3.0.0',
'papermill',
'graphviz',
+ 'bumpversion',
'sphinx',
'sphinxcontrib-bibtex',
'sphinxcontrib-napoleon',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -35,6 +35,7 @@\n 'uproot>=3.0.0',\n 'papermill',\n 'graphviz',\n+ 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n", "issue": "bumpversion missing from setup.py[develop]\n# Description\r\n\r\nAs titled, `bumpversion` is not in list of develop dependencies.\r\n\r\n# Expected Behavior\r\n\r\nInstalling `pyhf` installs `bumpversion`.\r\n\r\n# Actual Behavior\r\n\r\nIt does not install `bumpversion`.\r\n\r\n# Steps to Reproduce\r\n\r\n`pip install pyhf[develop]`\r\n\r\n# Checklist\r\n\r\n- [x] Run `git fetch` to get the most up to date version of `master`\r\n- [x] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow>=1.10.0',\n 'tensorflow-probability==0.3.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'torch': ['torch>=0.4.0'],\n 'mxnet': [\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlimport': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot>=3.0.0',\n 'papermill',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='pyhf',\n version='0.0.15',\n description='(partial) pure python histfactory implementation',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n ],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow>=1.10.0',\n 'tensorflow-probability==0.3.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n 
],\n 'torch': ['torch>=0.4.0'],\n 'mxnet': [\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlimport': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot>=3.0.0',\n 'papermill',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='pyhf',\n version='0.0.15',\n description='(partial) pure python histfactory implementation',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n ],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n)\n", "path": "setup.py"}]} | 1,337 | 83 |
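A small sketch of how the `complete` extra in the setup.py above is assembled: it flattens every other extras list, so adding `bumpversion` to `develop` automatically propagates to `complete`. The dictionary below is a trimmed stand-in, not the full pyhf extras table.

```python
# Sketch: the 'complete' extra is built by flattening all other extras.
extras_require = {
    'xmlimport': ['uproot'],
    'develop': ['pyflakes', 'bumpversion'],     # 'bumpversion' is the new entry
}
extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
assert 'bumpversion' in extras_require['complete']
assert extras_require['complete'] == sorted(extras_require['complete'])
```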
gh_patches_debug_13653 | rasdani/github-patches | git_diff | mars-project__mars-210 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG][TENSOR] TensorZeros generated in TensorDiag.tile have the same key even if they have different shapes
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
`TensorDiag.tile` may generate chunks whose op is TensorZeros; those chunks will have the same key even if their shapes are different.
**To Reproduce**
```python
In [94]: a = mt.arange(5, chunk_size=2)
In [95]: d = mt.diag(a)
In [96]: d.tiles()
Out[96]: <mars.tensor.core.Tensor at 0x136df1dc8>
In [99]: d.chunks[1].shape, d.chunks[1].op.key
Out[99]: ((2, 2), 'd6d8d339b2cbac64ae65cb29ff3f6785')
In [100]: d.chunks[2].shape, d.chunks[1].op.key
Out[100]: ((2, 1), 'd6d8d339b2cbac64ae65cb29ff3f6785')
```
**Expected behavior**
Chunks of TensorZeros should have different keys if their shapes are different. This is handled correctly in `TensorZeros.tile`, but when the TensorZeros op is created manually, this bug can happen.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/tensor/expressions/datasource/core.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import itertools
18
19 import numpy as np
20
21 from .... import opcodes as OperandDef
22 from ....operands import DataSource
23 from ....compat import izip
24 from ....config import options
25 from ..utils import normalize_shape, decide_chunk_sizes
26 from ..core import TensorOperandMixin
27
28
29 class TensorDataSource(DataSource, TensorOperandMixin):
30 """
31 Tensor data source base class, provide universal tile logic,
32 subclass can overwrite tile method.
33 """
34
35 __slots__ = ()
36
37 def to_chunk_op(self, *args):
38 chunk_shape, idx, chunk_size = args
39 chunk_op = self.copy().reset_key()
40 chunk_op.params = {'size': chunk_shape, 'index': idx} # to make op key different
41 return chunk_op
42
43 @classmethod
44 def tile(cls, op):
45 tensor = op.outputs[0]
46
47 chunk_size = tensor.params.raw_chunk_size or options.tensor.chunk_size
48 chunk_size = decide_chunk_sizes(tensor.shape, chunk_size, tensor.dtype.itemsize)
49 chunk_size_idxes = (range(len(size)) for size in chunk_size)
50
51 out_chunks = []
52 for chunk_shape, chunk_idx in izip(itertools.product(*chunk_size),
53 itertools.product(*chunk_size_idxes)):
54 chunk_op = op.to_chunk_op(chunk_shape, chunk_idx, chunk_size)
55 out_chunk = chunk_op.new_chunk(None, chunk_shape, index=chunk_idx)
56 out_chunks.append(out_chunk)
57
58 new_op = op.copy()
59 return new_op.new_tensors(op.inputs, tensor.shape, chunks=out_chunks, nsplits=chunk_size)
60
61
62 class TensorNoInput(TensorDataSource):
63 """
64 Tensor operand with no inputs.
65 """
66
67 def check_inputs(self, inputs):
68 # no inputs
69 if inputs and len(inputs) > 0:
70 raise ValueError("Tensor data source has no inputs")
71
72 def calc_shape(self, *inputs_shape):
73 return self.outputs[0].shape
74
75 def __call__(self, shape, chunk_size=None):
76 shape = normalize_shape(shape)
77 return self.new_tensor(None, shape, raw_chunk_size=chunk_size)
78
79
80 class TensorHasInput(TensorDataSource):
81 """
82 Tensor operand with a single input.
83 """
84
85 @property
86 def input(self):
87 return self._input
88
89 def check_inputs(self, inputs):
90 # no inputs
91 if len(inputs) != 1:
92 raise ValueError("Tensor can only have 1 input")
93
94 def _set_inputs(self, inputs):
95 super(TensorHasInput, self)._set_inputs(inputs)
96 self._input = self._inputs[0]
97
98 @classmethod
99 def tile(cls, op):
100 out_chunks = []
101 for c in op.input.chunks:
102 out_chunk = op.copy().reset_key().new_chunk([c], c.shape, index=c.index)
103 out_chunks.append(out_chunk)
104
105 new_op = op.copy()
106 return new_op.new_tensors(op.inputs, op.outputs[0].shape, chunks=out_chunks,
107 nsplits=op.input.nsplits)
108
109 def calc_shape(self, *inputs_shape):
110 return inputs_shape[0]
111
112 def __call__(self, a):
113 return self.new_tensor([a], a.shape)
114
115
116 class TensorLike(TensorHasInput):
117 def _set_inputs(self, inputs):
118 super(TensorLike, self)._set_inputs(inputs)
119 if self.dtype is None:
120 self._dtype = self.input.dtype
121 if self.gpu is None:
122 self._gpu = self.input.op.gpu
123
124 # FIXME: remove when cupy supports other dtypes
125 if self._gpu and self._dtype not in (np.float32, np.float64):
126 raise NotImplementedError('Sparse tensor on GPU only supports float32 and float64')
127
128
129 class TensorFetch(TensorNoInput):
130 _op_type_ = OperandDef.FETCH
131
132 def __init__(self, dtype=None, **kw):
133 super(TensorFetch, self).__init__(_dtype=dtype, **kw)
134
135 @classmethod
136 def tile(cls, op):
137 raise NotImplementedError('Fetch tile cannot be handled by operand itself')
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mars/tensor/expressions/datasource/core.py b/mars/tensor/expressions/datasource/core.py
--- a/mars/tensor/expressions/datasource/core.py
+++ b/mars/tensor/expressions/datasource/core.py
@@ -72,6 +72,14 @@
def calc_shape(self, *inputs_shape):
return self.outputs[0].shape
+ def _new_chunks(self, inputs, shape, **kw):
+ self.params['shape'] = shape # set shape to make the operand key different
+ return super(TensorNoInput, self)._new_chunks(inputs, shape, **kw)
+
+ def _new_entities(self, inputs, shape, **kw):
+ self.params['shape'] = shape # set shape to make the operand key different
+ return super(TensorNoInput, self)._new_entities(inputs, shape, **kw)
+
def __call__(self, shape, chunk_size=None):
shape = normalize_shape(shape)
return self.new_tensor(None, shape, raw_chunk_size=chunk_size)
| {"golden_diff": "diff --git a/mars/tensor/expressions/datasource/core.py b/mars/tensor/expressions/datasource/core.py\n--- a/mars/tensor/expressions/datasource/core.py\n+++ b/mars/tensor/expressions/datasource/core.py\n@@ -72,6 +72,14 @@\n def calc_shape(self, *inputs_shape):\n return self.outputs[0].shape\n \n+ def _new_chunks(self, inputs, shape, **kw):\n+ self.params['shape'] = shape # set shape to make the operand key different\n+ return super(TensorNoInput, self)._new_chunks(inputs, shape, **kw)\n+\n+ def _new_entities(self, inputs, shape, **kw):\n+ self.params['shape'] = shape # set shape to make the operand key different\n+ return super(TensorNoInput, self)._new_entities(inputs, shape, **kw)\n+\n def __call__(self, shape, chunk_size=None):\n shape = normalize_shape(shape)\n return self.new_tensor(None, shape, raw_chunk_size=chunk_size)\n", "issue": "[BUG][TENSOR] TensorZeros generated in TensorDiag.tile have the same key even if they have different shapes\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\n\r\n`TensorDiag.tile` may generate chunks whose op is TensorZeros, they will have the same key even if their shape are different.\r\n\r\n**To Reproduce**\r\n\r\n```python\r\nIn [94]: a = mt.arange(5, chunk_size=2) \r\n\r\nIn [95]: d = mt.diag(a) \r\n\r\nIn [96]: d.tiles() \r\nOut[96]: <mars.tensor.core.Tensor at 0x136df1dc8>\r\n\r\nIn [99]: d.chunks[1].shape, d.chunks[1].op.key \r\nOut[99]: ((2, 2), 'd6d8d339b2cbac64ae65cb29ff3f6785')\r\n\r\nIn [100]: d.chunks[2].shape, d.chunks[1].op.key \r\nOut[100]: ((2, 1), 'd6d8d339b2cbac64ae65cb29ff3f6785')\r\n```\r\n\r\n**Expected behavior**\r\n\r\nChunks of TensorZeros should have different keys if their shapes are different, this is rightly handled for TensorZeros.tile, but when the TensorZeros op is created manually, this bug could happen.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport itertools\n\nimport numpy as np\n\nfrom .... 
import opcodes as OperandDef\nfrom ....operands import DataSource\nfrom ....compat import izip\nfrom ....config import options\nfrom ..utils import normalize_shape, decide_chunk_sizes\nfrom ..core import TensorOperandMixin\n\n\nclass TensorDataSource(DataSource, TensorOperandMixin):\n \"\"\"\n Tensor data source base class, provide universal tile logic,\n subclass can overwrite tile method.\n \"\"\"\n\n __slots__ = ()\n\n def to_chunk_op(self, *args):\n chunk_shape, idx, chunk_size = args\n chunk_op = self.copy().reset_key()\n chunk_op.params = {'size': chunk_shape, 'index': idx} # to make op key different\n return chunk_op\n\n @classmethod\n def tile(cls, op):\n tensor = op.outputs[0]\n\n chunk_size = tensor.params.raw_chunk_size or options.tensor.chunk_size\n chunk_size = decide_chunk_sizes(tensor.shape, chunk_size, tensor.dtype.itemsize)\n chunk_size_idxes = (range(len(size)) for size in chunk_size)\n\n out_chunks = []\n for chunk_shape, chunk_idx in izip(itertools.product(*chunk_size),\n itertools.product(*chunk_size_idxes)):\n chunk_op = op.to_chunk_op(chunk_shape, chunk_idx, chunk_size)\n out_chunk = chunk_op.new_chunk(None, chunk_shape, index=chunk_idx)\n out_chunks.append(out_chunk)\n\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, tensor.shape, chunks=out_chunks, nsplits=chunk_size)\n\n\nclass TensorNoInput(TensorDataSource):\n \"\"\"\n Tensor operand with no inputs.\n \"\"\"\n\n def check_inputs(self, inputs):\n # no inputs\n if inputs and len(inputs) > 0:\n raise ValueError(\"Tensor data source has no inputs\")\n\n def calc_shape(self, *inputs_shape):\n return self.outputs[0].shape\n\n def __call__(self, shape, chunk_size=None):\n shape = normalize_shape(shape)\n return self.new_tensor(None, shape, raw_chunk_size=chunk_size)\n\n\nclass TensorHasInput(TensorDataSource):\n \"\"\"\n Tensor operand with a single input.\n \"\"\"\n\n @property\n def input(self):\n return self._input\n\n def check_inputs(self, inputs):\n # no inputs\n if len(inputs) != 1:\n raise ValueError(\"Tensor can only have 1 input\")\n\n def _set_inputs(self, inputs):\n super(TensorHasInput, self)._set_inputs(inputs)\n self._input = self._inputs[0]\n\n @classmethod\n def tile(cls, op):\n out_chunks = []\n for c in op.input.chunks:\n out_chunk = op.copy().reset_key().new_chunk([c], c.shape, index=c.index)\n out_chunks.append(out_chunk)\n\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, op.outputs[0].shape, chunks=out_chunks,\n nsplits=op.input.nsplits)\n\n def calc_shape(self, *inputs_shape):\n return inputs_shape[0]\n\n def __call__(self, a):\n return self.new_tensor([a], a.shape)\n\n\nclass TensorLike(TensorHasInput):\n def _set_inputs(self, inputs):\n super(TensorLike, self)._set_inputs(inputs)\n if self.dtype is None:\n self._dtype = self.input.dtype\n if self.gpu is None:\n self._gpu = self.input.op.gpu\n\n # FIXME: remove when cupy supports other dtypes\n if self._gpu and self._dtype not in (np.float32, np.float64):\n raise NotImplementedError('Sparse tensor on GPU only supports float32 and float64')\n\n\nclass TensorFetch(TensorNoInput):\n _op_type_ = OperandDef.FETCH\n\n def __init__(self, dtype=None, **kw):\n super(TensorFetch, self).__init__(_dtype=dtype, **kw)\n\n @classmethod\n def tile(cls, op):\n raise NotImplementedError('Fetch tile cannot be handled by operand itself')\n", "path": "mars/tensor/expressions/datasource/core.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under 
the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport itertools\n\nimport numpy as np\n\nfrom .... import opcodes as OperandDef\nfrom ....operands import DataSource\nfrom ....compat import izip\nfrom ....config import options\nfrom ..utils import normalize_shape, decide_chunk_sizes\nfrom ..core import TensorOperandMixin\n\n\nclass TensorDataSource(DataSource, TensorOperandMixin):\n \"\"\"\n Tensor data source base class, provide universal tile logic,\n subclass can overwrite tile method.\n \"\"\"\n\n __slots__ = ()\n\n def to_chunk_op(self, *args):\n chunk_shape, idx, chunk_size = args\n chunk_op = self.copy().reset_key()\n chunk_op.params = {'size': chunk_shape, 'index': idx} # to make op key different\n return chunk_op\n\n @classmethod\n def tile(cls, op):\n tensor = op.outputs[0]\n\n chunk_size = tensor.params.raw_chunk_size or options.tensor.chunk_size\n chunk_size = decide_chunk_sizes(tensor.shape, chunk_size, tensor.dtype.itemsize)\n chunk_size_idxes = (range(len(size)) for size in chunk_size)\n\n out_chunks = []\n for chunk_shape, chunk_idx in izip(itertools.product(*chunk_size),\n itertools.product(*chunk_size_idxes)):\n chunk_op = op.to_chunk_op(chunk_shape, chunk_idx, chunk_size)\n out_chunk = chunk_op.new_chunk(None, chunk_shape, index=chunk_idx)\n out_chunks.append(out_chunk)\n\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, tensor.shape, chunks=out_chunks, nsplits=chunk_size)\n\n\nclass TensorNoInput(TensorDataSource):\n \"\"\"\n Tensor operand with no inputs.\n \"\"\"\n\n def check_inputs(self, inputs):\n # no inputs\n if inputs and len(inputs) > 0:\n raise ValueError(\"Tensor data source has no inputs\")\n\n def calc_shape(self, *inputs_shape):\n return self.outputs[0].shape\n\n def _new_chunks(self, inputs, shape, **kw):\n self.params['shape'] = shape # set shape to make the operand key different\n return super(TensorNoInput, self)._new_chunks(inputs, shape, **kw)\n\n def _new_entities(self, inputs, shape, **kw):\n self.params['shape'] = shape # set shape to make the operand key different\n return super(TensorNoInput, self)._new_entities(inputs, shape, **kw)\n\n def __call__(self, shape, chunk_size=None):\n shape = normalize_shape(shape)\n return self.new_tensor(None, shape, raw_chunk_size=chunk_size)\n\n\nclass TensorHasInput(TensorDataSource):\n \"\"\"\n Tensor operand with a single input.\n \"\"\"\n\n @property\n def input(self):\n return self._input\n\n def check_inputs(self, inputs):\n # no inputs\n if len(inputs) != 1:\n raise ValueError(\"Tensor can only have 1 input\")\n\n def _set_inputs(self, inputs):\n super(TensorHasInput, self)._set_inputs(inputs)\n self._input = self._inputs[0]\n\n @classmethod\n def tile(cls, op):\n out_chunks = []\n for c in op.input.chunks:\n out_chunk = op.copy().reset_key().new_chunk([c], c.shape, index=c.index)\n out_chunks.append(out_chunk)\n\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, op.outputs[0].shape, chunks=out_chunks,\n nsplits=op.input.nsplits)\n\n def calc_shape(self, *inputs_shape):\n return inputs_shape[0]\n\n def 
__call__(self, a):\n return self.new_tensor([a], a.shape)\n\n\nclass TensorLike(TensorHasInput):\n def _set_inputs(self, inputs):\n super(TensorLike, self)._set_inputs(inputs)\n if self.dtype is None:\n self._dtype = self.input.dtype\n if self.gpu is None:\n self._gpu = self.input.op.gpu\n\n # FIXME: remove when cupy supports other dtypes\n if self._gpu and self._dtype not in (np.float32, np.float64):\n raise NotImplementedError('Sparse tensor on GPU only supports float32 and float64')\n\n\nclass TensorFetch(TensorNoInput):\n _op_type_ = OperandDef.FETCH\n\n def __init__(self, dtype=None, **kw):\n super(TensorFetch, self).__init__(_dtype=dtype, **kw)\n\n @classmethod\n def tile(cls, op):\n raise NotImplementedError('Fetch tile cannot be handled by operand itself')\n", "path": "mars/tensor/expressions/datasource/core.py"}]} | 1,954 | 240 |
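A toy sketch of the keying problem the golden diff above fixes: if the chunk shape is not part of the operand params, two zero-chunks of different shapes end up with identical keys. The md5-over-params function below is only a stand-in for mars' real operand-key logic, not the actual implementation.

```python
# Toy sketch: operand keys collide when shape is missing from params.
import hashlib
import json

def op_key(params):
    return hashlib.md5(json.dumps(params, sort_keys=True).encode()).hexdigest()

params_2x2 = {'dtype': 'float64'}            # chunk of shape (2, 2)
params_2x1 = {'dtype': 'float64'}            # chunk of shape (2, 1)
assert op_key(params_2x2) == op_key(params_2x1)              # same key: the bug

params_2x2['shape'] = [2, 2]                 # the fix stamps shape into params
params_2x1['shape'] = [2, 1]
assert op_key(params_2x2) != op_key(params_2x1)              # keys now differ
```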
gh_patches_debug_60953 | rasdani/github-patches | git_diff | voicepaw__so-vits-svc-fork-336 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"TypedStorage is deprecated" while Training
**Describe the bug**
Spammy "TypedStorage is deprecated" warning on every epoch.
```
[23:52:12] WARNING [23:52:12] C:\omited\venv\lib\site-packages\torch\_utils.py:776: UserWarning: warnings.py:109
TypedStorage is deprecated. It will be removed in the future and UntypedStorage will
be the only storage class. This should only matter to you if you are using storages
directly. To access UntypedStorage directly, use tensor.untyped_storage() instead
of tensor.storage()
return self.fget.__get__(instance, owner)()
```
**To Reproduce**
Simply train a voice.
**Additional context**
I updated to 3.6.1 today and started seeing the issue. Unfortunately, I don't know what the last known-good version was.
I'm training a voice using CREPE F0 predictor and using PyTorch 2.0.0 in Windows 11 if that matters.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/so_vits_svc_fork/logger.py`
Content:
```
1 import os
2 import sys
3 from logging import (
4 DEBUG,
5 INFO,
6 FileHandler,
7 StreamHandler,
8 basicConfig,
9 captureWarnings,
10 getLogger,
11 )
12 from pathlib import Path
13
14 from rich.logging import RichHandler
15
16 LOGGER_INIT = False
17
18
19 def init_logger() -> None:
20 global LOGGER_INIT
21 if LOGGER_INIT:
22 return
23
24 IS_TEST = "test" in Path.cwd().stem
25 package_name = sys.modules[__name__].__package__
26 basicConfig(
27 level=INFO,
28 format="%(asctime)s %(message)s",
29 datefmt="[%X]",
30 handlers=[
31 StreamHandler() if is_notebook() else RichHandler(),
32 FileHandler(f"{package_name}.log"),
33 ],
34 )
35 if IS_TEST:
36 getLogger(package_name).setLevel(DEBUG)
37 captureWarnings(True)
38 LOGGER_INIT = True
39
40
41 def is_notebook():
42 try:
43 from IPython import get_ipython
44
45 if "IPKernelApp" not in get_ipython().config: # pragma: no cover
46 raise ImportError("console")
47 return False
48 if "VSCODE_PID" in os.environ: # pragma: no cover
49 raise ImportError("vscode")
50 return False
51 except Exception:
52 return False
53 else: # pragma: no cover
54 return True
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/so_vits_svc_fork/logger.py b/src/so_vits_svc_fork/logger.py
--- a/src/so_vits_svc_fork/logger.py
+++ b/src/so_vits_svc_fork/logger.py
@@ -1,5 +1,6 @@
import os
import sys
+import warnings
from logging import (
DEBUG,
INFO,
@@ -35,6 +36,9 @@
if IS_TEST:
getLogger(package_name).setLevel(DEBUG)
captureWarnings(True)
+ warnings.filterwarnings(
+ "ignore", category=UserWarning, message="TypedStorage is deprecated"
+ )
LOGGER_INIT = True
| {"golden_diff": "diff --git a/src/so_vits_svc_fork/logger.py b/src/so_vits_svc_fork/logger.py\n--- a/src/so_vits_svc_fork/logger.py\n+++ b/src/so_vits_svc_fork/logger.py\n@@ -1,5 +1,6 @@\n import os\n import sys\n+import warnings\n from logging import (\n DEBUG,\n INFO,\n@@ -35,6 +36,9 @@\n if IS_TEST:\n getLogger(package_name).setLevel(DEBUG)\n captureWarnings(True)\n+ warnings.filterwarnings(\n+ \"ignore\", category=UserWarning, message=\"TypedStorage is deprecated\"\n+ )\n LOGGER_INIT = True\n", "issue": "\"TypedStorage is deprecated\" while Training\n**Describe the bug**\r\nSpammy \"TypedStorage is deprecated\" warning on every epoch.\r\n\r\n```\r\n[23:52:12] WARNING [23:52:12] C:\\omited\\venv\\lib\\site-packages\\torch\\_utils.py:776: UserWarning: warnings.py:109\r\n TypedStorage is deprecated. It will be removed in the future and UntypedStorage will\r\n be the only storage class. This should only matter to you if you are using storages\r\n directly. To access UntypedStorage directly, use tensor.untyped_storage() instead\r\n of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\n```\r\n\r\n**To Reproduce**\r\nSimply train a voice.\r\n\r\n**Additional context**\r\nI updated to 3.6.1 today and start seeing the issue. Unfortunately I didn't know what was last good known version.\r\n\r\nI'm training a voice using CREPE F0 predictor and using PyTorch 2.0.0 in Windows 11 if that matters.\r\n\n", "before_files": [{"content": "import os\nimport sys\nfrom logging import (\n DEBUG,\n INFO,\n FileHandler,\n StreamHandler,\n basicConfig,\n captureWarnings,\n getLogger,\n)\nfrom pathlib import Path\n\nfrom rich.logging import RichHandler\n\nLOGGER_INIT = False\n\n\ndef init_logger() -> None:\n global LOGGER_INIT\n if LOGGER_INIT:\n return\n\n IS_TEST = \"test\" in Path.cwd().stem\n package_name = sys.modules[__name__].__package__\n basicConfig(\n level=INFO,\n format=\"%(asctime)s %(message)s\",\n datefmt=\"[%X]\",\n handlers=[\n StreamHandler() if is_notebook() else RichHandler(),\n FileHandler(f\"{package_name}.log\"),\n ],\n )\n if IS_TEST:\n getLogger(package_name).setLevel(DEBUG)\n captureWarnings(True)\n LOGGER_INIT = True\n\n\ndef is_notebook():\n try:\n from IPython import get_ipython\n\n if \"IPKernelApp\" not in get_ipython().config: # pragma: no cover\n raise ImportError(\"console\")\n return False\n if \"VSCODE_PID\" in os.environ: # pragma: no cover\n raise ImportError(\"vscode\")\n return False\n except Exception:\n return False\n else: # pragma: no cover\n return True\n", "path": "src/so_vits_svc_fork/logger.py"}], "after_files": [{"content": "import os\nimport sys\nimport warnings\nfrom logging import (\n DEBUG,\n INFO,\n FileHandler,\n StreamHandler,\n basicConfig,\n captureWarnings,\n getLogger,\n)\nfrom pathlib import Path\n\nfrom rich.logging import RichHandler\n\nLOGGER_INIT = False\n\n\ndef init_logger() -> None:\n global LOGGER_INIT\n if LOGGER_INIT:\n return\n\n IS_TEST = \"test\" in Path.cwd().stem\n package_name = sys.modules[__name__].__package__\n basicConfig(\n level=INFO,\n format=\"%(asctime)s %(message)s\",\n datefmt=\"[%X]\",\n handlers=[\n StreamHandler() if is_notebook() else RichHandler(),\n FileHandler(f\"{package_name}.log\"),\n ],\n )\n if IS_TEST:\n getLogger(package_name).setLevel(DEBUG)\n captureWarnings(True)\n warnings.filterwarnings(\n \"ignore\", category=UserWarning, message=\"TypedStorage is deprecated\"\n )\n LOGGER_INIT = True\n\n\ndef is_notebook():\n try:\n from IPython import get_ipython\n\n if \"IPKernelApp\" not in 
get_ipython().config: # pragma: no cover\n raise ImportError(\"console\")\n return False\n if \"VSCODE_PID\" in os.environ: # pragma: no cover\n raise ImportError(\"vscode\")\n return False\n except Exception:\n return False\n else: # pragma: no cover\n return True\n", "path": "src/so_vits_svc_fork/logger.py"}]} | 892 | 144 |
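A minimal sketch of the core of the fix above: register a filter that ignores only the specific `TypedStorage is deprecated` UserWarning rather than all warnings. The `message` argument is a regular expression matched against the beginning of the warning text; the example warnings below are illustrative only.

```python
# Sketch: silence only the "TypedStorage is deprecated" UserWarning.
import warnings

warnings.filterwarnings(
    "ignore", category=UserWarning, message="TypedStorage is deprecated"
)

warnings.warn("TypedStorage is deprecated. It will be removed ...", UserWarning)  # silenced
warnings.warn("some unrelated warning", UserWarning)                              # still emitted
```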
gh_patches_debug_12843 | rasdani/github-patches | git_diff | cobbler__cobbler-3598 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Backport] [scm-track] Fix commit command
### Original feature issue
- PR: #3021
### Target release
- [x] release33
- [ ] release32
- [ ] release30
### Reason
Stabilization of Cobbler 3.3.4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/modules/scm_track.py`
Content:
```
1 """
2 (C) 2009, Red Hat Inc.
3 Michael DeHaan <michael.dehaan AT gmail>
4
5 This program is free software; you can redistribute it and/or modify
6 it under the terms of the GNU General Public License as published by
7 the Free Software Foundation; either version 2 of the License, or
8 (at your option) any later version.
9
10 This program is distributed in the hope that it will be useful,
11 but WITHOUT ANY WARRANTY; without even the implied warranty of
12 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 GNU General Public License for more details.
14
15 You should have received a copy of the GNU General Public License
16 along with this program; if not, write to the Free Software
17 Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
18 02110-1301 USA
19 """
20
21
22 import os
23
24 import cobbler.utils as utils
25
26 from cobbler.cexceptions import CX
27
28
29 def register() -> str:
30 """
31 This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
32 indicates the trigger type
33 :return: Always: ``/var/lib/cobbler/triggers/change/*``
34 """
35
36 return "/var/lib/cobbler/triggers/change/*"
37
38
39 def run(api, args):
40 """
41 Runs the trigger, meaning in this case track any changed which happen to a config or data file.
42
43 :param api: The api instance of the Cobbler server. Used to look up if scm_track_enabled is true.
44 :param args: The parameter is currently unused for this trigger.
45 :return: 0 on success, otherwise an exception is risen.
46 """
47 settings = api.settings()
48
49 if not settings.scm_track_enabled:
50 # feature disabled
51 return 0
52
53 mode = str(settings.scm_track_mode).lower()
54 author = str(settings.scm_track_author)
55 push_script = str(settings.scm_push_script)
56
57 if mode == "git":
58 old_dir = os.getcwd()
59 os.chdir("/var/lib/cobbler")
60 if os.getcwd() != "/var/lib/cobbler":
61 raise CX("danger will robinson")
62
63 if not os.path.exists("/var/lib/cobbler/.git"):
64 utils.subprocess_call(["git", "init"], shell=False)
65
66 # FIXME: If we know the remote user of an XMLRPC call use them as the author
67 utils.subprocess_call(["git", "add", "--all", "collections"], shell=False)
68 utils.subprocess_call(["git", "add", "--all", "templates"], shell=False)
69 utils.subprocess_call(["git", "add", "--all", "snippets"], shell=False)
70 utils.subprocess_call(["git", "commit", "-m", "API", "update", "--author", author], shell=False)
71
72 if push_script:
73 utils.subprocess_call([push_script], shell=False)
74
75 os.chdir(old_dir)
76 return 0
77
78 elif mode == "hg":
79 # use mercurial
80 old_dir = os.getcwd()
81 os.chdir("/var/lib/cobbler")
82 if os.getcwd() != "/var/lib/cobbler":
83 raise CX("danger will robinson")
84
85 if not os.path.exists("/var/lib/cobbler/.hg"):
86 utils.subprocess_call(["hg", "init"], shell=False)
87
88 # FIXME: If we know the remote user of an XMLRPC call use them as the user
89 utils.subprocess_call(["hg", "add collections"], shell=False)
90 utils.subprocess_call(["hg", "add templates"], shell=False)
91 utils.subprocess_call(["hg", "add snippets"], shell=False)
92 utils.subprocess_call(["hg", "commit", "-m", "API", "update", "--user", author], shell=False)
93
94 if push_script:
95 utils.subprocess_call([push_script], shell=False)
96
97 os.chdir(old_dir)
98 return 0
99
100 else:
101 raise CX("currently unsupported SCM type: %s" % mode)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cobbler/modules/scm_track.py b/cobbler/modules/scm_track.py
--- a/cobbler/modules/scm_track.py
+++ b/cobbler/modules/scm_track.py
@@ -67,7 +67,7 @@
utils.subprocess_call(["git", "add", "--all", "collections"], shell=False)
utils.subprocess_call(["git", "add", "--all", "templates"], shell=False)
utils.subprocess_call(["git", "add", "--all", "snippets"], shell=False)
- utils.subprocess_call(["git", "commit", "-m", "API", "update", "--author", author], shell=False)
+ utils.subprocess_call(["git", "commit", "-m", "API update", "--author", author], shell=False)
if push_script:
utils.subprocess_call([push_script], shell=False)
| {"golden_diff": "diff --git a/cobbler/modules/scm_track.py b/cobbler/modules/scm_track.py\n--- a/cobbler/modules/scm_track.py\n+++ b/cobbler/modules/scm_track.py\n@@ -67,7 +67,7 @@\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"collections\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"templates\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"snippets\"], shell=False)\n- utils.subprocess_call([\"git\", \"commit\", \"-m\", \"API\", \"update\", \"--author\", author], shell=False)\n+ utils.subprocess_call([\"git\", \"commit\", \"-m\", \"API update\", \"--author\", author], shell=False)\n \n if push_script:\n utils.subprocess_call([push_script], shell=False)\n", "issue": "[Backport] [scm-track] Fix commit command\n### Original feature issue\r\n\r\n- PR: #3021\r\n\r\n### Target release\r\n\r\n- [x] release33\r\n- [ ] release32\r\n- [ ] release30\r\n\r\n### Reason\r\n\r\nStabilizations of Cobbler 3.3.4\r\n\n", "before_files": [{"content": "\"\"\"\n(C) 2009, Red Hat Inc.\nMichael DeHaan <michael.dehaan AT gmail>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\n\nimport os\n\nimport cobbler.utils as utils\n\nfrom cobbler.cexceptions import CX\n\n\ndef register() -> str:\n \"\"\"\n This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method\n indicates the trigger type\n :return: Always: ``/var/lib/cobbler/triggers/change/*``\n \"\"\"\n\n return \"/var/lib/cobbler/triggers/change/*\"\n\n\ndef run(api, args):\n \"\"\"\n Runs the trigger, meaning in this case track any changed which happen to a config or data file.\n\n :param api: The api instance of the Cobbler server. 
Used to look up if scm_track_enabled is true.\n :param args: The parameter is currently unused for this trigger.\n :return: 0 on success, otherwise an exception is risen.\n \"\"\"\n settings = api.settings()\n\n if not settings.scm_track_enabled:\n # feature disabled\n return 0\n\n mode = str(settings.scm_track_mode).lower()\n author = str(settings.scm_track_author)\n push_script = str(settings.scm_push_script)\n\n if mode == \"git\":\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.git\"):\n utils.subprocess_call([\"git\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the author\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"collections\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"templates\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"snippets\"], shell=False)\n utils.subprocess_call([\"git\", \"commit\", \"-m\", \"API\", \"update\", \"--author\", author], shell=False)\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n elif mode == \"hg\":\n # use mercurial\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.hg\"):\n utils.subprocess_call([\"hg\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the user\n utils.subprocess_call([\"hg\", \"add collections\"], shell=False)\n utils.subprocess_call([\"hg\", \"add templates\"], shell=False)\n utils.subprocess_call([\"hg\", \"add snippets\"], shell=False)\n utils.subprocess_call([\"hg\", \"commit\", \"-m\", \"API\", \"update\", \"--user\", author], shell=False)\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n else:\n raise CX(\"currently unsupported SCM type: %s\" % mode)\n", "path": "cobbler/modules/scm_track.py"}], "after_files": [{"content": "\"\"\"\n(C) 2009, Red Hat Inc.\nMichael DeHaan <michael.dehaan AT gmail>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\n\nimport os\n\nimport cobbler.utils as utils\n\nfrom cobbler.cexceptions import CX\n\n\ndef register() -> str:\n \"\"\"\n This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method\n indicates the trigger type\n :return: Always: ``/var/lib/cobbler/triggers/change/*``\n \"\"\"\n\n return \"/var/lib/cobbler/triggers/change/*\"\n\n\ndef run(api, args):\n \"\"\"\n Runs the trigger, meaning in this case track any changed which happen to a config or data file.\n\n :param api: The api instance of the Cobbler server. 
Used to look up if scm_track_enabled is true.\n :param args: The parameter is currently unused for this trigger.\n :return: 0 on success, otherwise an exception is risen.\n \"\"\"\n settings = api.settings()\n\n if not settings.scm_track_enabled:\n # feature disabled\n return 0\n\n mode = str(settings.scm_track_mode).lower()\n author = str(settings.scm_track_author)\n push_script = str(settings.scm_push_script)\n\n if mode == \"git\":\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.git\"):\n utils.subprocess_call([\"git\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the author\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"collections\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"templates\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"snippets\"], shell=False)\n utils.subprocess_call([\"git\", \"commit\", \"-m\", \"API update\", \"--author\", author], shell=False)\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n elif mode == \"hg\":\n # use mercurial\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.hg\"):\n utils.subprocess_call([\"hg\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the user\n utils.subprocess_call([\"hg\", \"add collections\"], shell=False)\n utils.subprocess_call([\"hg\", \"add templates\"], shell=False)\n utils.subprocess_call([\"hg\", \"add snippets\"], shell=False)\n utils.subprocess_call([\"hg\", \"commit\", \"-m\", \"API\", \"update\", \"--user\", author], shell=False)\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n else:\n raise CX(\"currently unsupported SCM type: %s\" % mode)\n", "path": "cobbler/modules/scm_track.py"}]} | 1,410 | 191 |
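A short sketch of why the golden diff above joins the commit message into one list element: each list item becomes a single argv entry for git, so the broken form passes "update" as a stray positional argument after `-m API` instead of committing with the message "API update". The author string below is illustrative.

```python
# Sketch: argv layout of the broken vs. fixed git commit invocation.
author = "cobbler <cobbler@localhost>"
broken = ["git", "commit", "-m", "API", "update", "--author", author]
fixed = ["git", "commit", "-m", "API update", "--author", author]
assert broken[3:5] == ["API", "update"]     # message split across two arguments
assert fixed[3] == "API update"             # message kept as one argument
```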
gh_patches_debug_10112 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-1624 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ifcfg parser to support for MASTER and TEAM_MASTER keys slave type
We need to update the ifcfg parser to support the TEAMING and BONDING slave types in the configuration file, so that we can use the 'MASTER' and 'TEAM_MASTER' keys in raw format.
For example, `obj['MASTER']="'bond0'"` or `obj['TEAM_MASTER']="'team0'"`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/parsers/ifcfg.py`
Content:
```
1 """
2 IfCFG - files ``/etc/sysconfig/network-scripts/ifcfg-*``
3 ========================================================
4
5 IfCFG is a parser for the network interface definition files in
6 ``/etc/sysconfig/network-scripts``. These are pulled into the network
7 scripts using ``source``, so they are mainly ``bash`` environment
8 declarations of the form **KEY=value**. These are stored in the ``data``
9 property as a dictionary. Quotes surrounding the value
10
11 Three options are handled differently:
12
13 * ``BONDING_OPTS`` is usually a quoted list of key=value arguments separated
14 by spaces.
15 * ``TEAM_CONFIG`` and ``TEAM_PORT_CONFIG`` are treated as JSON stored as a
16 single string. Double quotes within the string are escaped using double
17 back slashes, and these are removed so that the quoting is preserved.
18
19 Because this parser reads multiple files, the interfaces are stored as a
20 list within the parser and need to be iterated through in order to find
21 specific interfaces.
22
23 Sample configuration from a teamed interface in file ``/etc/sysconfig/network-scripts/ifcfg-team1``::
24
25 DEVICE=team1
26 DEVICETYPE=Team
27 ONBOOT=yes
28 NETMASK=255.255.252.0
29 IPADDR=192.168.0.1
30 TEAM_CONFIG='{"runner": {"name": "lacp", "active": "true", "tx_hash": ["eth", "ipv4"]}, "tx_balancer": {"name": "basic"}, "link_watch": {"name": "ethtool"}}'
31
32 Examples:
33
34 >>> for nic in shared[IfCFG]: # Parser contains list of all interfaces
35 ... print 'NIC:', nic.iname
36 ... print 'IP address:', nic['IPADDR']
37 ... if 'TEAM_CONFIG' in nic:
38 ... print 'Team runner name:', nic['TEAM_CONFIG']['runner']['name']
39 ...
40 NIC: team1
41 IP addresss: 192.168.0.1
42 Team runner name: lacp
43
44 """
45
46 import json
47 import re
48 from collections import OrderedDict
49 from .. import parser, get_active_lines, LegacyItemAccess, CommandParser
50 from insights.specs import Specs
51
52 JSON_FIELDS = ["TEAM_CONFIG", "TEAM_PORT_CONFIG"]
53
54 QUOTES = "\"'"
55
56 bond_mode_map = {
57 'balance-rr': 0,
58 'active-backup': 1,
59 'balance-xor': 2,
60 'broadcast': 3,
61 '802.3ad': 4,
62 'balance-tlb': 5,
63 'balance-alb': 6
64 }
65
66
67 @parser(Specs.ifcfg)
68 class IfCFG(LegacyItemAccess, CommandParser):
69 """
70 Parse `ifcfg-` file,return a dict contain ifcfg config file info.
71 "iface" key is interface name parse from file name
72 `TEAM_CONFIG`, `TEAM_PORT_CONFIG` will return a dict with user config dict
73 `BONDING_OPTS` also will return a dict
74
75 Properties:
76 ifname (str): The interface name as defined in the name of the file
77 (i.e. the part after ``ifcfg-``).
78 """
79
80 def __init__(self, context):
81 super(IfCFG, self).__init__(context)
82 self.data["iface"] = context.path.rsplit("-", 1)[1]
83 self.ifname = self.data['iface']
84 self._has_empty_line = any(l.strip() == '' for l in context.content)
85
86 def parse_content(self, content):
87 self.data = {}
88 for line in get_active_lines(content):
89 if "=" not in line:
90 continue
91 key, value = line.split("=", 1)
92 # Since keys are variable names in bash, stripping quotes and
93 # spaces off them makes no sense.
94 key = key.strip().strip(QUOTES).upper()
95
96 # In some cases we want to know what the actual value-side
97 # of the key is before dequoting and stripping.
98 if key in ["DEVICE", "MASTER", "BONDING_OPTS"]:
99 self.data["raw_{0}_value".format(key.split('_')[0].lower())] = value
100 if key != "DEVICE":
101 value = value.strip().strip(QUOTES)
102 if key in JSON_FIELDS:
103 value = json.loads(value.replace("\\", ""))
104 if key == "BONDING_OPTS":
105 value_map = OrderedDict()
106 value = re.sub(r'\s*=\s*', '=', value)
107 for key_value_pair in value.split():
108 sub_key, sub_value = [
109 s.strip() for s in key_value_pair.split("=", 1)
110 ]
111 value_map[sub_key] = sub_value
112 value = value_map
113 self.data[key] = value
114
115 @property
116 def bonding_mode(self):
117 """
118 (int) the numeric value of bonding mode, or `None` if no bonding
119 mode is found.
120 """
121 if "BONDING_OPTS" not in self or 'mode' not in self['BONDING_OPTS']:
122 return None
123
124 m = self["BONDING_OPTS"]["mode"]
125 if m.isdigit():
126 return int(m)
127 if m in bond_mode_map:
128 return bond_mode_map[m]
129 return None
130
131 @property
132 def has_empty_line(self):
133 """
134 (bool) `True` if the file has empty line else `False`.
135 """
136 return self._has_empty_line
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/insights/parsers/ifcfg.py b/insights/parsers/ifcfg.py
--- a/insights/parsers/ifcfg.py
+++ b/insights/parsers/ifcfg.py
@@ -95,7 +95,7 @@
# In some cases we want to know what the actual value-side
# of the key is before dequoting and stripping.
- if key in ["DEVICE", "MASTER", "BONDING_OPTS"]:
+ if key in ["DEVICE", "MASTER", "TEAM_MASTER", "BONDING_OPTS"]:
self.data["raw_{0}_value".format(key.split('_')[0].lower())] = value
if key != "DEVICE":
value = value.strip().strip(QUOTES)
| {"golden_diff": "diff --git a/insights/parsers/ifcfg.py b/insights/parsers/ifcfg.py\n--- a/insights/parsers/ifcfg.py\n+++ b/insights/parsers/ifcfg.py\n@@ -95,7 +95,7 @@\n \n # In some cases we want to know what the actual value-side\n # of the key is before dequoting and stripping.\n- if key in [\"DEVICE\", \"MASTER\", \"BONDING_OPTS\"]:\n+ if key in [\"DEVICE\", \"MASTER\", \"TEAM_MASTER\", \"BONDING_OPTS\"]:\n self.data[\"raw_{0}_value\".format(key.split('_')[0].lower())] = value\n if key != \"DEVICE\":\n value = value.strip().strip(QUOTES)\n", "issue": "ifcfg parser to support for MASTER and TEAM_MASTER keys slave type\nWe need to update the ifcfg parser to support TEAMING and BONDING slave type in the configuration file, so, that we can use MASTER' and 'TEAM_MASTER' keys in raw format.\r\n\r\nFor ex- `obj['MASTER']=\"'bond0'\"` or `obj['TEAM_MASTER']=\"'team0'\"`\n", "before_files": [{"content": "\"\"\"\nIfCFG - files ``/etc/sysconfig/network-scripts/ifcfg-*``\n========================================================\n\nIfCFG is a parser for the network interface definition files in\n``/etc/sysconfig/network-scripts``. These are pulled into the network\nscripts using ``source``, so they are mainly ``bash`` environment\ndeclarations of the form **KEY=value**. These are stored in the ``data``\nproperty as a dictionary. Quotes surrounding the value\n\nThree options are handled differently:\n\n* ``BONDING_OPTS`` is usually a quoted list of key=value arguments separated\n by spaces.\n* ``TEAM_CONFIG`` and ``TEAM_PORT_CONFIG`` are treated as JSON stored as a\n single string. Double quotes within the string are escaped using double\n back slashes, and these are removed so that the quoting is preserved.\n\nBecause this parser reads multiple files, the interfaces are stored as a\nlist within the parser and need to be iterated through in order to find\nspecific interfaces.\n\nSample configuration from a teamed interface in file ``/etc/sysconfig/network-scripts/ifcfg-team1``::\n\n DEVICE=team1\n DEVICETYPE=Team\n ONBOOT=yes\n NETMASK=255.255.252.0\n IPADDR=192.168.0.1\n TEAM_CONFIG='{\"runner\": {\"name\": \"lacp\", \"active\": \"true\", \"tx_hash\": [\"eth\", \"ipv4\"]}, \"tx_balancer\": {\"name\": \"basic\"}, \"link_watch\": {\"name\": \"ethtool\"}}'\n\nExamples:\n\n >>> for nic in shared[IfCFG]: # Parser contains list of all interfaces\n ... print 'NIC:', nic.iname\n ... print 'IP address:', nic['IPADDR']\n ... if 'TEAM_CONFIG' in nic:\n ... print 'Team runner name:', nic['TEAM_CONFIG']['runner']['name']\n ...\n NIC: team1\n IP addresss: 192.168.0.1\n Team runner name: lacp\n\n\"\"\"\n\nimport json\nimport re\nfrom collections import OrderedDict\nfrom .. import parser, get_active_lines, LegacyItemAccess, CommandParser\nfrom insights.specs import Specs\n\nJSON_FIELDS = [\"TEAM_CONFIG\", \"TEAM_PORT_CONFIG\"]\n\nQUOTES = \"\\\"'\"\n\nbond_mode_map = {\n 'balance-rr': 0,\n 'active-backup': 1,\n 'balance-xor': 2,\n 'broadcast': 3,\n '802.3ad': 4,\n 'balance-tlb': 5,\n 'balance-alb': 6\n}\n\n\n@parser(Specs.ifcfg)\nclass IfCFG(LegacyItemAccess, CommandParser):\n \"\"\"\n Parse `ifcfg-` file,return a dict contain ifcfg config file info.\n \"iface\" key is interface name parse from file name\n `TEAM_CONFIG`, `TEAM_PORT_CONFIG` will return a dict with user config dict\n `BONDING_OPTS` also will return a dict\n\n Properties:\n ifname (str): The interface name as defined in the name of the file\n (i.e. 
the part after ``ifcfg-``).\n \"\"\"\n\n def __init__(self, context):\n super(IfCFG, self).__init__(context)\n self.data[\"iface\"] = context.path.rsplit(\"-\", 1)[1]\n self.ifname = self.data['iface']\n self._has_empty_line = any(l.strip() == '' for l in context.content)\n\n def parse_content(self, content):\n self.data = {}\n for line in get_active_lines(content):\n if \"=\" not in line:\n continue\n key, value = line.split(\"=\", 1)\n # Since keys are variable names in bash, stripping quotes and\n # spaces off them makes no sense.\n key = key.strip().strip(QUOTES).upper()\n\n # In some cases we want to know what the actual value-side\n # of the key is before dequoting and stripping.\n if key in [\"DEVICE\", \"MASTER\", \"BONDING_OPTS\"]:\n self.data[\"raw_{0}_value\".format(key.split('_')[0].lower())] = value\n if key != \"DEVICE\":\n value = value.strip().strip(QUOTES)\n if key in JSON_FIELDS:\n value = json.loads(value.replace(\"\\\\\", \"\"))\n if key == \"BONDING_OPTS\":\n value_map = OrderedDict()\n value = re.sub(r'\\s*=\\s*', '=', value)\n for key_value_pair in value.split():\n sub_key, sub_value = [\n s.strip() for s in key_value_pair.split(\"=\", 1)\n ]\n value_map[sub_key] = sub_value\n value = value_map\n self.data[key] = value\n\n @property\n def bonding_mode(self):\n \"\"\"\n (int) the numeric value of bonding mode, or `None` if no bonding\n mode is found.\n \"\"\"\n if \"BONDING_OPTS\" not in self or 'mode' not in self['BONDING_OPTS']:\n return None\n\n m = self[\"BONDING_OPTS\"][\"mode\"]\n if m.isdigit():\n return int(m)\n if m in bond_mode_map:\n return bond_mode_map[m]\n return None\n\n @property\n def has_empty_line(self):\n \"\"\"\n (bool) `True` if the file has empty line else `False`.\n \"\"\"\n return self._has_empty_line\n", "path": "insights/parsers/ifcfg.py"}], "after_files": [{"content": "\"\"\"\nIfCFG - files ``/etc/sysconfig/network-scripts/ifcfg-*``\n========================================================\n\nIfCFG is a parser for the network interface definition files in\n``/etc/sysconfig/network-scripts``. These are pulled into the network\nscripts using ``source``, so they are mainly ``bash`` environment\ndeclarations of the form **KEY=value**. These are stored in the ``data``\nproperty as a dictionary. Quotes surrounding the value\n\nThree options are handled differently:\n\n* ``BONDING_OPTS`` is usually a quoted list of key=value arguments separated\n by spaces.\n* ``TEAM_CONFIG`` and ``TEAM_PORT_CONFIG`` are treated as JSON stored as a\n single string. Double quotes within the string are escaped using double\n back slashes, and these are removed so that the quoting is preserved.\n\nBecause this parser reads multiple files, the interfaces are stored as a\nlist within the parser and need to be iterated through in order to find\nspecific interfaces.\n\nSample configuration from a teamed interface in file ``/etc/sysconfig/network-scripts/ifcfg-team1``::\n\n DEVICE=team1\n DEVICETYPE=Team\n ONBOOT=yes\n NETMASK=255.255.252.0\n IPADDR=192.168.0.1\n TEAM_CONFIG='{\"runner\": {\"name\": \"lacp\", \"active\": \"true\", \"tx_hash\": [\"eth\", \"ipv4\"]}, \"tx_balancer\": {\"name\": \"basic\"}, \"link_watch\": {\"name\": \"ethtool\"}}'\n\nExamples:\n\n >>> for nic in shared[IfCFG]: # Parser contains list of all interfaces\n ... print 'NIC:', nic.iname\n ... print 'IP address:', nic['IPADDR']\n ... if 'TEAM_CONFIG' in nic:\n ... 
print 'Team runner name:', nic['TEAM_CONFIG']['runner']['name']\n ...\n NIC: team1\n IP addresss: 192.168.0.1\n Team runner name: lacp\n\n\"\"\"\n\nimport json\nimport re\nfrom collections import OrderedDict\nfrom .. import parser, get_active_lines, LegacyItemAccess, CommandParser\nfrom insights.specs import Specs\n\nJSON_FIELDS = [\"TEAM_CONFIG\", \"TEAM_PORT_CONFIG\"]\n\nQUOTES = \"\\\"'\"\n\nbond_mode_map = {\n 'balance-rr': 0,\n 'active-backup': 1,\n 'balance-xor': 2,\n 'broadcast': 3,\n '802.3ad': 4,\n 'balance-tlb': 5,\n 'balance-alb': 6\n}\n\n\n@parser(Specs.ifcfg)\nclass IfCFG(LegacyItemAccess, CommandParser):\n \"\"\"\n Parse `ifcfg-` file,return a dict contain ifcfg config file info.\n \"iface\" key is interface name parse from file name\n `TEAM_CONFIG`, `TEAM_PORT_CONFIG` will return a dict with user config dict\n `BONDING_OPTS` also will return a dict\n\n Properties:\n ifname (str): The interface name as defined in the name of the file\n (i.e. the part after ``ifcfg-``).\n \"\"\"\n\n def __init__(self, context):\n super(IfCFG, self).__init__(context)\n self.data[\"iface\"] = context.path.rsplit(\"-\", 1)[1]\n self.ifname = self.data['iface']\n self._has_empty_line = any(l.strip() == '' for l in context.content)\n\n def parse_content(self, content):\n self.data = {}\n for line in get_active_lines(content):\n if \"=\" not in line:\n continue\n key, value = line.split(\"=\", 1)\n # Since keys are variable names in bash, stripping quotes and\n # spaces off them makes no sense.\n key = key.strip().strip(QUOTES).upper()\n\n # In some cases we want to know what the actual value-side\n # of the key is before dequoting and stripping.\n if key in [\"DEVICE\", \"MASTER\", \"TEAM_MASTER\", \"BONDING_OPTS\"]:\n self.data[\"raw_{0}_value\".format(key.split('_')[0].lower())] = value\n if key != \"DEVICE\":\n value = value.strip().strip(QUOTES)\n if key in JSON_FIELDS:\n value = json.loads(value.replace(\"\\\\\", \"\"))\n if key == \"BONDING_OPTS\":\n value_map = OrderedDict()\n value = re.sub(r'\\s*=\\s*', '=', value)\n for key_value_pair in value.split():\n sub_key, sub_value = [\n s.strip() for s in key_value_pair.split(\"=\", 1)\n ]\n value_map[sub_key] = sub_value\n value = value_map\n self.data[key] = value\n\n @property\n def bonding_mode(self):\n \"\"\"\n (int) the numeric value of bonding mode, or `None` if no bonding\n mode is found.\n \"\"\"\n if \"BONDING_OPTS\" not in self or 'mode' not in self['BONDING_OPTS']:\n return None\n\n m = self[\"BONDING_OPTS\"][\"mode\"]\n if m.isdigit():\n return int(m)\n if m in bond_mode_map:\n return bond_mode_map[m]\n return None\n\n @property\n def has_empty_line(self):\n \"\"\"\n (bool) `True` if the file has empty line else `False`.\n \"\"\"\n return self._has_empty_line\n", "path": "insights/parsers/ifcfg.py"}]} | 1,835 | 164 |
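
For a quick feel for what the TEAM_MASTER change above does, the key-handling loop can be reduced to a small standalone sketch. The helper function and sample lines below are invented for illustration; the real parser also filters input through `get_active_lines` and takes the interface name from the file path.

```python
# Standalone sketch of the key-handling loop from the parser above, using the
# patched key list, to show what changes for TEAM_MASTER. Sample data is invented.
QUOTES = "\"'"

def parse_ifcfg_lines(lines):
    data = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        key = key.strip().strip(QUOTES).upper()
        # Patched list: TEAM_MASTER now also keeps its raw (still-quoted) value.
        if key in ("DEVICE", "MASTER", "TEAM_MASTER", "BONDING_OPTS"):
            data["raw_{0}_value".format(key.split("_")[0].lower())] = value
        if key != "DEVICE":
            value = value.strip().strip(QUOTES)
        data[key] = value
    return data

parsed = parse_ifcfg_lines(['DEVICE=eth0', 'TEAM_MASTER="team0"', 'ONBOOT=yes'])
print(parsed["TEAM_MASTER"])      # team0   (dequoted)
print(parsed["raw_team_value"])   # "team0" (raw value, quotes preserved)
```

With `TEAM_MASTER` in the raw-value list, both the dequoted value and the original quoted string become available, mirroring how `MASTER` was already handled.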
gh_patches_debug_19868 | rasdani/github-patches | git_diff | google__mobly-258 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exceptions from `CallbackHandler` should include timeout value
Right now some timeout exceptions thrown by `CallbackHandler` do not include how long the timeout was, making debugging more difficult.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mobly/controllers/android_device_lib/callback_handler.py`
Content:
```
1 # Copyright 2017 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import time
16
17 from mobly.controllers.android_device_lib import snippet_event
18
19 # The max timeout cannot be larger than the max time the socket waits for a
20 # response message. Otherwise, the socket would timeout before the Rpc call
21 # does, leaving both server and client in unknown states.
22 MAX_TIMEOUT = 60 * 10
23 DEFAULT_TIMEOUT = 120 # two minutes
24
25
26 class Error(Exception):
27 pass
28
29
30 class TimeoutError(Error):
31 pass
32
33
34 class CallbackHandler(object):
35 """The class used to handle a specific group of callback events.
36
37 All the events handled by a CallbackHandler are originally triggered by one
38 async Rpc call. All the events are tagged with a callback_id specific to a
39 call to an AsyncRpc method defined on the server side.
40
41 The raw message representing an event looks like:
42 {
43 'callbackId': <string, callbackId>,
44 'name': <string, name of the event>,
45 'time': <long, epoch time of when the event was created on the server
46 side>,
47 'data': <dict, extra data from the callback on the server side>
48 }
49
50 Each message is then used to create a SnippetEvent object on the client
51 side.
52
53 Attributes:
54 ret_value: The direct return value of the async Rpc call.
55 """
56
57 def __init__(self, callback_id, event_client, ret_value, method_name):
58 self._id = callback_id
59 self._event_client = event_client
60 self.ret_value = ret_value
61 self._method_name = method_name
62
63 def waitAndGet(self, event_name, timeout=DEFAULT_TIMEOUT):
64 """Blocks until an event of the specified name has been received and
65 return the event, or timeout.
66
67 Args:
68 event_name: string, name of the event to get.
69 timeout: float, the number of seconds to wait before giving up.
70
71 Returns:
72 SnippetEvent, the oldest entry of the specified event.
73
74 Raises:
75 Error: If the specified timeout is longer than the max timeout
76 supported.
77 TimeoutError: The expected event does not occur within time limit.
78 """
79 if timeout:
80 if timeout > MAX_TIMEOUT:
81 raise Error(
82 'Specified timeout %s is longer than max timeout %s.' %
83 (timeout, MAX_TIMEOUT))
84 timeout *= 1000 # convert to milliseconds for java side
85 try:
86 raw_event = self._event_client.eventWaitAndGet(self._id,
87 event_name, timeout)
88 except Exception as e:
89 if 'EventSnippetException: timeout.' in str(e):
90 raise TimeoutError(
91 'Timeout waiting for event "%s" triggered by %s (%s).' %
92 (event_name, self._method_name, self._id))
93 raise
94 return snippet_event.from_dict(raw_event)
95
96 def waitForEvent(self, event_name, predicate, timeout=DEFAULT_TIMEOUT):
97 """Wait for an event of a specific name that satisfies the predicate.
98
99 This call will block until the expected event has been received or time
100 out.
101
102 The predicate function defines the condition the event is expected to
103 satisfy. It takes an event and returns True if the condition is
104 satisfied, False otherwise.
105
106 Note all events of the same name that are received but don't satisfy
107 the predicate will be discarded and not be available for further
108 consumption.
109
110 Args:
111 event_name: string, the name of the event to wait for.
112 predicate: function, a function that takes an event (dictionary) and
113 returns a bool.
114 timeout: float, default is 120s.
115
116 Returns:
117 dictionary, the event that satisfies the predicate if received.
118
119 Raises:
120 TimeoutError: raised if no event that satisfies the predicate is
121 received after timeout seconds.
122 """
123 deadline = time.time() + timeout
124 while time.time() <= deadline:
125 # Calculate the max timeout for the next event rpc call.
126 rpc_timeout = deadline - time.time()
127 if rpc_timeout < 0:
128 break
129 try:
130 event = self.waitAndGet(event_name, rpc_timeout)
131 except TimeoutError:
132 # Ignoring TimeoutError since we need to throw one with a more
133 # specific message.
134 break
135 if predicate(event):
136 return event
137 raise TimeoutError(
138 'Timed out after %ss waiting for an "%s" event that satisfies the '
139 'predicate "%s".' % (timeout, event_name, predicate.__name__))
140
141 def getAll(self, event_name):
142 """Gets all the events of a certain name that have been received so
143 far. This is a non-blocking call.
144
145 Args:
146 callback_id: The id of the callback.
147 event_name: string, the name of the event to get.
148
149 Returns:
150 A list of SnippetEvent, each representing an event from the Java
151 side.
152 """
153 raw_events = self._event_client.eventGetAll(self._id, event_name)
154 return [snippet_event.from_dict(msg) for msg in raw_events]
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mobly/controllers/android_device_lib/callback_handler.py b/mobly/controllers/android_device_lib/callback_handler.py
--- a/mobly/controllers/android_device_lib/callback_handler.py
+++ b/mobly/controllers/android_device_lib/callback_handler.py
@@ -83,13 +83,14 @@
(timeout, MAX_TIMEOUT))
timeout *= 1000 # convert to milliseconds for java side
try:
- raw_event = self._event_client.eventWaitAndGet(self._id,
- event_name, timeout)
+ raw_event = self._event_client.eventWaitAndGet(
+ self._id, event_name, timeout)
except Exception as e:
if 'EventSnippetException: timeout.' in str(e):
raise TimeoutError(
- 'Timeout waiting for event "%s" triggered by %s (%s).' %
- (event_name, self._method_name, self._id))
+ 'Timed out after waiting %ss for event "%s" triggered by'
+ ' %s (%s).' % (timeout, event_name, self._method_name,
+ self._id))
raise
return snippet_event.from_dict(raw_event)
| {"golden_diff": "diff --git a/mobly/controllers/android_device_lib/callback_handler.py b/mobly/controllers/android_device_lib/callback_handler.py\n--- a/mobly/controllers/android_device_lib/callback_handler.py\n+++ b/mobly/controllers/android_device_lib/callback_handler.py\n@@ -83,13 +83,14 @@\n (timeout, MAX_TIMEOUT))\n timeout *= 1000 # convert to milliseconds for java side\n try:\n- raw_event = self._event_client.eventWaitAndGet(self._id,\n- event_name, timeout)\n+ raw_event = self._event_client.eventWaitAndGet(\n+ self._id, event_name, timeout)\n except Exception as e:\n if 'EventSnippetException: timeout.' in str(e):\n raise TimeoutError(\n- 'Timeout waiting for event \"%s\" triggered by %s (%s).' %\n- (event_name, self._method_name, self._id))\n+ 'Timed out after waiting %ss for event \"%s\" triggered by'\n+ ' %s (%s).' % (timeout, event_name, self._method_name,\n+ self._id))\n raise\n return snippet_event.from_dict(raw_event)\n", "issue": "Exceptions from `CallbackHandler` should include timeout value\nRight now some timeout exceptions thrown by `CallbackHandler` do not include how long the timeout was, making debugging more difficult.\n", "before_files": [{"content": "# Copyright 2017 Google Inc.\n# \n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport time\n\nfrom mobly.controllers.android_device_lib import snippet_event\n\n# The max timeout cannot be larger than the max time the socket waits for a\n# response message. Otherwise, the socket would timeout before the Rpc call\n# does, leaving both server and client in unknown states.\nMAX_TIMEOUT = 60 * 10\nDEFAULT_TIMEOUT = 120 # two minutes\n\n\nclass Error(Exception):\n pass\n\n\nclass TimeoutError(Error):\n pass\n\n\nclass CallbackHandler(object):\n \"\"\"The class used to handle a specific group of callback events.\n\n All the events handled by a CallbackHandler are originally triggered by one\n async Rpc call. 
All the events are tagged with a callback_id specific to a\n call to an AsyncRpc method defined on the server side.\n\n The raw message representing an event looks like:\n {\n 'callbackId': <string, callbackId>,\n 'name': <string, name of the event>,\n 'time': <long, epoch time of when the event was created on the server\n side>,\n 'data': <dict, extra data from the callback on the server side>\n }\n\n Each message is then used to create a SnippetEvent object on the client\n side.\n\n Attributes:\n ret_value: The direct return value of the async Rpc call.\n \"\"\"\n\n def __init__(self, callback_id, event_client, ret_value, method_name):\n self._id = callback_id\n self._event_client = event_client\n self.ret_value = ret_value\n self._method_name = method_name\n\n def waitAndGet(self, event_name, timeout=DEFAULT_TIMEOUT):\n \"\"\"Blocks until an event of the specified name has been received and\n return the event, or timeout.\n\n Args:\n event_name: string, name of the event to get.\n timeout: float, the number of seconds to wait before giving up.\n\n Returns:\n SnippetEvent, the oldest entry of the specified event.\n\n Raises:\n Error: If the specified timeout is longer than the max timeout\n supported.\n TimeoutError: The expected event does not occur within time limit.\n \"\"\"\n if timeout:\n if timeout > MAX_TIMEOUT:\n raise Error(\n 'Specified timeout %s is longer than max timeout %s.' %\n (timeout, MAX_TIMEOUT))\n timeout *= 1000 # convert to milliseconds for java side\n try:\n raw_event = self._event_client.eventWaitAndGet(self._id,\n event_name, timeout)\n except Exception as e:\n if 'EventSnippetException: timeout.' in str(e):\n raise TimeoutError(\n 'Timeout waiting for event \"%s\" triggered by %s (%s).' %\n (event_name, self._method_name, self._id))\n raise\n return snippet_event.from_dict(raw_event)\n\n def waitForEvent(self, event_name, predicate, timeout=DEFAULT_TIMEOUT):\n \"\"\"Wait for an event of a specific name that satisfies the predicate.\n\n This call will block until the expected event has been received or time\n out.\n\n The predicate function defines the condition the event is expected to\n satisfy. It takes an event and returns True if the condition is\n satisfied, False otherwise.\n\n Note all events of the same name that are received but don't satisfy\n the predicate will be discarded and not be available for further\n consumption.\n\n Args:\n event_name: string, the name of the event to wait for.\n predicate: function, a function that takes an event (dictionary) and\n returns a bool.\n timeout: float, default is 120s.\n\n Returns:\n dictionary, the event that satisfies the predicate if received.\n\n Raises:\n TimeoutError: raised if no event that satisfies the predicate is\n received after timeout seconds.\n \"\"\"\n deadline = time.time() + timeout\n while time.time() <= deadline:\n # Calculate the max timeout for the next event rpc call.\n rpc_timeout = deadline - time.time()\n if rpc_timeout < 0:\n break\n try:\n event = self.waitAndGet(event_name, rpc_timeout)\n except TimeoutError:\n # Ignoring TimeoutError since we need to throw one with a more\n # specific message.\n break\n if predicate(event):\n return event\n raise TimeoutError(\n 'Timed out after %ss waiting for an \"%s\" event that satisfies the '\n 'predicate \"%s\".' % (timeout, event_name, predicate.__name__))\n\n def getAll(self, event_name):\n \"\"\"Gets all the events of a certain name that have been received so\n far. 
This is a non-blocking call.\n\n Args:\n callback_id: The id of the callback.\n event_name: string, the name of the event to get.\n\n Returns:\n A list of SnippetEvent, each representing an event from the Java\n side.\n \"\"\"\n raw_events = self._event_client.eventGetAll(self._id, event_name)\n return [snippet_event.from_dict(msg) for msg in raw_events]\n", "path": "mobly/controllers/android_device_lib/callback_handler.py"}], "after_files": [{"content": "# Copyright 2017 Google Inc.\n# \n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport time\n\nfrom mobly.controllers.android_device_lib import snippet_event\n\n# The max timeout cannot be larger than the max time the socket waits for a\n# response message. Otherwise, the socket would timeout before the Rpc call\n# does, leaving both server and client in unknown states.\nMAX_TIMEOUT = 60 * 10\nDEFAULT_TIMEOUT = 120 # two minutes\n\n\nclass Error(Exception):\n pass\n\n\nclass TimeoutError(Error):\n pass\n\n\nclass CallbackHandler(object):\n \"\"\"The class used to handle a specific group of callback events.\n\n All the events handled by a CallbackHandler are originally triggered by one\n async Rpc call. All the events are tagged with a callback_id specific to a\n call to an AsyncRpc method defined on the server side.\n\n The raw message representing an event looks like:\n {\n 'callbackId': <string, callbackId>,\n 'name': <string, name of the event>,\n 'time': <long, epoch time of when the event was created on the server\n side>,\n 'data': <dict, extra data from the callback on the server side>\n }\n\n Each message is then used to create a SnippetEvent object on the client\n side.\n\n Attributes:\n ret_value: The direct return value of the async Rpc call.\n \"\"\"\n\n def __init__(self, callback_id, event_client, ret_value, method_name):\n self._id = callback_id\n self._event_client = event_client\n self.ret_value = ret_value\n self._method_name = method_name\n\n def waitAndGet(self, event_name, timeout=DEFAULT_TIMEOUT):\n \"\"\"Blocks until an event of the specified name has been received and\n return the event, or timeout.\n\n Args:\n event_name: string, name of the event to get.\n timeout: float, the number of seconds to wait before giving up.\n\n Returns:\n SnippetEvent, the oldest entry of the specified event.\n\n Raises:\n Error: If the specified timeout is longer than the max timeout\n supported.\n TimeoutError: The expected event does not occur within time limit.\n \"\"\"\n if timeout:\n if timeout > MAX_TIMEOUT:\n raise Error(\n 'Specified timeout %s is longer than max timeout %s.' %\n (timeout, MAX_TIMEOUT))\n timeout *= 1000 # convert to milliseconds for java side\n try:\n raw_event = self._event_client.eventWaitAndGet(\n self._id, event_name, timeout)\n except Exception as e:\n if 'EventSnippetException: timeout.' in str(e):\n raise TimeoutError(\n 'Timed out after waiting %ss for event \"%s\" triggered by'\n ' %s (%s).' 
% (timeout, event_name, self._method_name,\n self._id))\n raise\n return snippet_event.from_dict(raw_event)\n\n def waitForEvent(self, event_name, predicate, timeout=DEFAULT_TIMEOUT):\n \"\"\"Wait for an event of a specific name that satisfies the predicate.\n\n This call will block until the expected event has been received or time\n out.\n\n The predicate function defines the condition the event is expected to\n satisfy. It takes an event and returns True if the condition is\n satisfied, False otherwise.\n\n Note all events of the same name that are received but don't satisfy\n the predicate will be discarded and not be available for further\n consumption.\n\n Args:\n event_name: string, the name of the event to wait for.\n predicate: function, a function that takes an event (dictionary) and\n returns a bool.\n timeout: float, default is 120s.\n\n Returns:\n dictionary, the event that satisfies the predicate if received.\n\n Raises:\n TimeoutError: raised if no event that satisfies the predicate is\n received after timeout seconds.\n \"\"\"\n deadline = time.time() + timeout\n while time.time() <= deadline:\n # Calculate the max timeout for the next event rpc call.\n rpc_timeout = deadline - time.time()\n if rpc_timeout < 0:\n break\n try:\n event = self.waitAndGet(event_name, rpc_timeout)\n except TimeoutError:\n # Ignoring TimeoutError since we need to throw one with a more\n # specific message.\n break\n if predicate(event):\n return event\n raise TimeoutError(\n 'Timed out after %ss waiting for an \"%s\" event that satisfies the '\n 'predicate \"%s\".' % (timeout, event_name, predicate.__name__))\n\n def getAll(self, event_name):\n \"\"\"Gets all the events of a certain name that have been received so\n far. This is a non-blocking call.\n\n Args:\n callback_id: The id of the callback.\n event_name: string, the name of the event to get.\n\n Returns:\n A list of SnippetEvent, each representing an event from the Java\n side.\n \"\"\"\n raw_events = self._event_client.eventGetAll(self._id, event_name)\n return [snippet_event.from_dict(msg) for msg in raw_events]\n", "path": "mobly/controllers/android_device_lib/callback_handler.py"}]} | 1,882 | 253 |
gh_patches_debug_8343 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-530 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add RQ subclass of HerokuWorker
The "Using RQ on Heroku" docs section ( https://python-rq.org/patterns/ ) shows using a subclass of `Worker` specialized for Heroku. Unfortunateely using that, rather than the Scout RQ Worker subclass means that scout isn't instrumented. We should also provide a `ScoutHerokuWorker` class.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/rq.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import datetime as dt
5
6 import wrapt
7 from rq import SimpleWorker as RqSimpleWorker
8 from rq import Worker as RqWorker
9 from rq.job import Job
10
11 import scout_apm.core
12 from scout_apm.core.tracked_request import TrackedRequest
13
14 install_attempted = False
15 installed = None
16
17
18 def ensure_scout_installed():
19 global install_attempted, installed
20
21 if not install_attempted:
22 install_attempted = True
23 installed = scout_apm.core.install()
24
25
26 class WorkerMixin(object):
27 def __init__(self, *args, **kwargs):
28 global installed
29 ensure_scout_installed()
30 if installed:
31 ensure_job_instrumented()
32 super(WorkerMixin, self).__init__(*args, **kwargs)
33
34
35 class Worker(WorkerMixin, RqWorker):
36 pass
37
38
39 class SimpleWorker(WorkerMixin, RqSimpleWorker):
40 pass
41
42
43 job_instrumented = False
44
45
46 def ensure_job_instrumented():
47 global job_instrumented
48 if job_instrumented:
49 return
50 job_instrumented = True
51 Job.perform = wrap_perform(Job.perform)
52
53
54 @wrapt.decorator
55 def wrap_perform(wrapped, instance, args, kwargs):
56 global installed
57 if not installed:
58 return wrapped(*args, **kwargs)
59
60 tracked_request = TrackedRequest.instance()
61 tracked_request.is_real_request = True
62 tracked_request.tag("task_id", instance.get_id())
63 tracked_request.tag("queue", instance.origin)
64 queue_time = (dt.datetime.utcnow() - instance.enqueued_at).total_seconds()
65 tracked_request.tag("queue_time", queue_time)
66 tracked_request.start_span(operation="Job/{}".format(instance.func_name))
67 try:
68 return wrapped(*args, **kwargs)
69 except Exception:
70 tracked_request.tag("error", "true")
71 raise
72 finally:
73 tracked_request.stop_span()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/rq.py b/src/scout_apm/rq.py
--- a/src/scout_apm/rq.py
+++ b/src/scout_apm/rq.py
@@ -7,6 +7,7 @@
from rq import SimpleWorker as RqSimpleWorker
from rq import Worker as RqWorker
from rq.job import Job
+from rq.worker import HerokuWorker as RqHerokuWorker
import scout_apm.core
from scout_apm.core.tracked_request import TrackedRequest
@@ -40,6 +41,10 @@
pass
+class HerokuWorker(WorkerMixin, RqHerokuWorker):
+ pass
+
+
job_instrumented = False
| {"golden_diff": "diff --git a/src/scout_apm/rq.py b/src/scout_apm/rq.py\n--- a/src/scout_apm/rq.py\n+++ b/src/scout_apm/rq.py\n@@ -7,6 +7,7 @@\n from rq import SimpleWorker as RqSimpleWorker\n from rq import Worker as RqWorker\n from rq.job import Job\n+from rq.worker import HerokuWorker as RqHerokuWorker\n \n import scout_apm.core\n from scout_apm.core.tracked_request import TrackedRequest\n@@ -40,6 +41,10 @@\n pass\n \n \n+class HerokuWorker(WorkerMixin, RqHerokuWorker):\n+ pass\n+\n+\n job_instrumented = False\n", "issue": "Add RQ subclass of HerokuWorker\nThe \"Using RQ on Heroku\" docs section ( https://python-rq.org/patterns/ ) shows using a subclass of `Worker` specialized for Heroku. Unfortunateely using that, rather than the Scout RQ Worker subclass means that scout isn't instrumented. We should also provide a `ScoutHerokuWorker` class.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nimport wrapt\nfrom rq import SimpleWorker as RqSimpleWorker\nfrom rq import Worker as RqWorker\nfrom rq.job import Job\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ninstall_attempted = False\ninstalled = None\n\n\ndef ensure_scout_installed():\n global install_attempted, installed\n\n if not install_attempted:\n install_attempted = True\n installed = scout_apm.core.install()\n\n\nclass WorkerMixin(object):\n def __init__(self, *args, **kwargs):\n global installed\n ensure_scout_installed()\n if installed:\n ensure_job_instrumented()\n super(WorkerMixin, self).__init__(*args, **kwargs)\n\n\nclass Worker(WorkerMixin, RqWorker):\n pass\n\n\nclass SimpleWorker(WorkerMixin, RqSimpleWorker):\n pass\n\n\njob_instrumented = False\n\n\ndef ensure_job_instrumented():\n global job_instrumented\n if job_instrumented:\n return\n job_instrumented = True\n Job.perform = wrap_perform(Job.perform)\n\n\[email protected]\ndef wrap_perform(wrapped, instance, args, kwargs):\n global installed\n if not installed:\n return wrapped(*args, **kwargs)\n\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n tracked_request.tag(\"task_id\", instance.get_id())\n tracked_request.tag(\"queue\", instance.origin)\n queue_time = (dt.datetime.utcnow() - instance.enqueued_at).total_seconds()\n tracked_request.tag(\"queue_time\", queue_time)\n tracked_request.start_span(operation=\"Job/{}\".format(instance.func_name))\n try:\n return wrapped(*args, **kwargs)\n except Exception:\n tracked_request.tag(\"error\", \"true\")\n raise\n finally:\n tracked_request.stop_span()\n", "path": "src/scout_apm/rq.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nimport wrapt\nfrom rq import SimpleWorker as RqSimpleWorker\nfrom rq import Worker as RqWorker\nfrom rq.job import Job\nfrom rq.worker import HerokuWorker as RqHerokuWorker\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ninstall_attempted = False\ninstalled = None\n\n\ndef ensure_scout_installed():\n global install_attempted, installed\n\n if not install_attempted:\n install_attempted = True\n installed = scout_apm.core.install()\n\n\nclass WorkerMixin(object):\n def __init__(self, *args, **kwargs):\n global installed\n ensure_scout_installed()\n if installed:\n ensure_job_instrumented()\n super(WorkerMixin, self).__init__(*args, **kwargs)\n\n\nclass 
Worker(WorkerMixin, RqWorker):\n pass\n\n\nclass SimpleWorker(WorkerMixin, RqSimpleWorker):\n pass\n\n\nclass HerokuWorker(WorkerMixin, RqHerokuWorker):\n pass\n\n\njob_instrumented = False\n\n\ndef ensure_job_instrumented():\n global job_instrumented\n if job_instrumented:\n return\n job_instrumented = True\n Job.perform = wrap_perform(Job.perform)\n\n\[email protected]\ndef wrap_perform(wrapped, instance, args, kwargs):\n global installed\n if not installed:\n return wrapped(*args, **kwargs)\n\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n tracked_request.tag(\"task_id\", instance.get_id())\n tracked_request.tag(\"queue\", instance.origin)\n queue_time = (dt.datetime.utcnow() - instance.enqueued_at).total_seconds()\n tracked_request.tag(\"queue_time\", queue_time)\n tracked_request.start_span(operation=\"Job/{}\".format(instance.func_name))\n try:\n return wrapped(*args, **kwargs)\n except Exception:\n tracked_request.tag(\"error\", \"true\")\n raise\n finally:\n tracked_request.stop_span()\n", "path": "src/scout_apm/rq.py"}]} | 915 | 159 |
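
Assuming the patched `scout_apm.rq` module above (which now exposes a `HerokuWorker` mixing `WorkerMixin` into rq's Heroku worker), a worker process could be started roughly as follows. The queue name and Redis URL are placeholders, not values taken from the report.

```python
# Rough usage sketch: run an RQ worker that is both Heroku-aware and
# Scout-instrumented. Assumes redis, rq and the patched scout_apm are installed.
from redis import Redis
from rq import Queue
from scout_apm.rq import HerokuWorker  # class added by the patch above

redis_conn = Redis.from_url("redis://localhost:6379/0")  # placeholder URL
queue = Queue("default", connection=redis_conn)          # placeholder queue name

# Handles dyno shutdown like rq's own HerokuWorker, while WorkerMixin makes
# sure Scout is installed and Job.perform is instrumented before work starts.
worker = HerokuWorker([queue], connection=redis_conn)
worker.work()
```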
gh_patches_debug_8529 | rasdani/github-patches | git_diff | conan-io__conan-center-index-16999 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] fakeit/*: Package id ignores options
### Description
The `fakeit` option for integration is meant to select the correct header file for the matching integration, as there are different header files based on the integration chosen, e.g. `gtest`, `boost`, `standalone`.
These options can be seen in the recipe.
The package step in the recipe copies a different header based on the `integration` option.
The linked source release shows the separate header files under the `single_header` folder: https://github.com/eranpeer/FakeIt/releases/tag/2.3.2
The problem is that there is only one package and it contains the header for the `standalone` `integration` option only.
At least part of the cause of the problem can be seen in the recipe file, in the `package_id()` method.
The package id for fakeit ignores the option `integration`, which changes which header file is used for the package (and package id).
Currently the recipe specifies:
```
def package_id(self):
self.info.header_only()
```
But `header_only` is designed to ignore options, which is incorrect in this case, as we have a different header file to package based on the integrated test library, e.g. gtest or boost (or standalone).
```
def header_only(self):
self.settings.clear()
self.options.clear()
self.requires.clear()
```
### Package and Environment Details
* Package Name/Version: **fakeit/\***
* Operating System+version: **All**
* Compiler+version: **All**
* Docker image: **All**
* Conan version: **All**
* Python version: **All**
### Conan profile
[settings]
os=Windows
os_build=Windows
arch=x86_64
arch_build=x86_64
compiler=Visual Studio
compiler.version=16
build_type=Debug
[options]
[conf]
[build_requires]
[env]
### Steps to reproduce
conan install .
### Logs
<details><summary>Click to expand log</summary>
```
Build requirements
fakeit/2.3.2 from 'conan-center' - Cache
gtest/1.11.0 from 'conan-center' - Cache
Build requirements packages
fakeit/2.3.2:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache
gtest/1.11.0:875c67f4d8a79bdd002908b75efce119eb82836d - Cache
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/fakeit/all/conanfile.py`
Content:
```
1 from conan import ConanFile
2 from conan.errors import ConanInvalidConfiguration
3 from conan.tools.build import check_min_cppstd
4 from conan.tools.files import apply_conandata_patches, export_conandata_patches, get, copy
5 from conan.tools.layout import basic_layout
6 import os
7
8
9 required_conan_version = ">=1.52.0"
10
11 class FakeItConan(ConanFile):
12 name = "fakeit"
13 description = "C++ mocking made easy. A simple yet very expressive, headers only library for c++ mocking."
14 topics = ("mock", "fake", "spy")
15 license = "MIT"
16 homepage = "https://github.com/eranpeer/FakeIt"
17 url = "https://github.com/conan-io/conan-center-index"
18 package_type = "header-library"
19 settings = "os", "arch", "compiler", "build_type"
20 options = {
21 "integration": ["boost", "catch", "cute", "gtest", "mettle", "nunit", "mstest", "qtest", "standalone", "tpunit"]
22 }
23 default_options = {"integration": "standalone"}
24 no_copy_source = True
25
26 @property
27 def _min_cppstd(self):
28 return 11
29
30 def export_sources(self):
31 export_conandata_patches(self)
32
33 def layout(self):
34 basic_layout(self, src_folder="src")
35
36 def requirements(self):
37 if self.options.integration == "boost":
38 self.requires("boost/1.79.0")
39 elif self.options.integration == "catch":
40 self.requires("catch2/2.13.9")
41 elif self.options.integration == "gtest":
42 self.requires("gtest/1.11.0")
43 elif self.options.integration == "qtest":
44 self.requires("qt/6.3.0")
45 elif self.options.integration == "standalone":
46 pass
47 else:
48 raise ConanInvalidConfiguration("%s is not (yet) available on cci" % self.options.integration)
49
50 def package_id(self):
51 self.info.clear()
52
53 def validate(self):
54 if self.settings.compiler.get_safe("cppstd"):
55 check_min_cppstd(self, self._min_cppstd)
56
57 def source(self):
58 get(self, **self.conan_data["sources"][self.version], strip_root=True)
59
60 def build(self):
61 apply_conandata_patches(self)
62
63 def package(self):
64 copy(self, pattern="LICENSE", dst=os.path.join(self.package_folder, "licenses"), src=self.source_folder)
65 copy(
66 self,
67 pattern="fakeit.hpp",
68 dst=os.path.join(self.package_folder, "include"),
69 src=os.path.join(self.source_folder, "single_header", str(self.options.integration)),
70 )
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/fakeit/all/conanfile.py b/recipes/fakeit/all/conanfile.py
--- a/recipes/fakeit/all/conanfile.py
+++ b/recipes/fakeit/all/conanfile.py
@@ -48,7 +48,10 @@
raise ConanInvalidConfiguration("%s is not (yet) available on cci" % self.options.integration)
def package_id(self):
- self.info.clear()
+ # The "integration" option must be kept because it will impact which header is packaged,
+ # therefor self.info.clear() cannot be used.
+ self.info.settings.clear()
+ self.info.requires.clear()
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
| {"golden_diff": "diff --git a/recipes/fakeit/all/conanfile.py b/recipes/fakeit/all/conanfile.py\n--- a/recipes/fakeit/all/conanfile.py\n+++ b/recipes/fakeit/all/conanfile.py\n@@ -48,7 +48,10 @@\n raise ConanInvalidConfiguration(\"%s is not (yet) available on cci\" % self.options.integration)\n \n def package_id(self):\n- self.info.clear()\n+ # The \"integration\" option must be kept because it will impact which header is packaged,\n+ # therefor self.info.clear() cannot be used.\n+ self.info.settings.clear()\n+ self.info.requires.clear()\n \n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n", "issue": "[package] fakeit/*: Package id ignores options\n### Description\r\n\r\nThe `fakeit` option for integration is meant to select the correct header file for the matching integration, as there are different header files based on the integration chosen e.g. `gtest`, `boost`, `standalone`.\r\n\r\nThese options can be seen in the recipe.\r\nIncluding the package step in the recipe which copies a different header based on the `integration` option\r\n\r\nThe link for the source shows the separate header files in it under the `single_header` folder: https://github.com/eranpeer/FakeIt/releases/tag/2.3.2\r\n\r\nThe problem is that there is only one package and it contains the header for the `standalone` `integration` option only.\r\n\r\nAt least part of the cause of the problem can be seen in the recipe file with the `package_id()`\r\n\r\nThe package id for fakeit is ignore the option `integration` which changes which header file is used for the package (and package id)\r\nCurrently the recipe specifies:\r\n```\r\n def package_id(self):\r\n self.info.header_only()\r\n```\r\n\r\nBut the header_only is designed to ignore options, which is incorrect in this case, as we have a different header filee to package based on the integrated test library e.g. gtest or boost (or standalone).\r\n\r\n```\r\n def header_only(self):\r\n self.settings.clear()\r\n self.options.clear()\r\n self.requires.clear()\r\n```\r\n\r\n\r\n### Package and Environment Details\r\n\r\n* Package Name/Version: **fakeit/\\***\r\n* Operating System+version: **All**\r\n* Compiler+version: **All**\r\n* Docker image: **All**\r\n* Conan version: **All**\r\n* Python version: **All**\r\n\r\n\r\n### Conan profile\r\n\r\n[settings]\r\nos=Windows\r\nos_build=Windows\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=Visual Studio\r\ncompiler.version=16\r\nbuild_type=Debug\r\n[options]\r\n[conf]\r\n[build_requires]\r\n[env]\r\n\r\n### Steps to reproduce\r\n\r\nconan install .\r\n\r\n### Logs\r\n\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nBuild requirements\r\n fakeit/2.3.2 from 'conan-center' - Cache\r\n gtest/1.11.0 from 'conan-center' - Cache\r\nBuild requirements packages\r\n fakeit/2.3.2:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache\r\n gtest/1.11.0:875c67f4d8a79bdd002908b75efce119eb82836d - Cache\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.build import check_min_cppstd\nfrom conan.tools.files import apply_conandata_patches, export_conandata_patches, get, copy\nfrom conan.tools.layout import basic_layout\nimport os\n\n\nrequired_conan_version = \">=1.52.0\"\n\nclass FakeItConan(ConanFile):\n name = \"fakeit\"\n description = \"C++ mocking made easy. 
A simple yet very expressive, headers only library for c++ mocking.\"\n topics = (\"mock\", \"fake\", \"spy\")\n license = \"MIT\"\n homepage = \"https://github.com/eranpeer/FakeIt\"\n url = \"https://github.com/conan-io/conan-center-index\"\n package_type = \"header-library\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"integration\": [\"boost\", \"catch\", \"cute\", \"gtest\", \"mettle\", \"nunit\", \"mstest\", \"qtest\", \"standalone\", \"tpunit\"]\n }\n default_options = {\"integration\": \"standalone\"}\n no_copy_source = True\n\n @property\n def _min_cppstd(self):\n return 11\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def layout(self):\n basic_layout(self, src_folder=\"src\")\n\n def requirements(self):\n if self.options.integration == \"boost\":\n self.requires(\"boost/1.79.0\")\n elif self.options.integration == \"catch\":\n self.requires(\"catch2/2.13.9\")\n elif self.options.integration == \"gtest\":\n self.requires(\"gtest/1.11.0\")\n elif self.options.integration == \"qtest\":\n self.requires(\"qt/6.3.0\")\n elif self.options.integration == \"standalone\":\n pass\n else:\n raise ConanInvalidConfiguration(\"%s is not (yet) available on cci\" % self.options.integration)\n\n def package_id(self):\n self.info.clear()\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n check_min_cppstd(self, self._min_cppstd)\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n\n def build(self):\n apply_conandata_patches(self)\n\n def package(self):\n copy(self, pattern=\"LICENSE\", dst=os.path.join(self.package_folder, \"licenses\"), src=self.source_folder)\n copy(\n self,\n pattern=\"fakeit.hpp\",\n dst=os.path.join(self.package_folder, \"include\"),\n src=os.path.join(self.source_folder, \"single_header\", str(self.options.integration)),\n )\n", "path": "recipes/fakeit/all/conanfile.py"}], "after_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.build import check_min_cppstd\nfrom conan.tools.files import apply_conandata_patches, export_conandata_patches, get, copy\nfrom conan.tools.layout import basic_layout\nimport os\n\n\nrequired_conan_version = \">=1.52.0\"\n\nclass FakeItConan(ConanFile):\n name = \"fakeit\"\n description = \"C++ mocking made easy. 
A simple yet very expressive, headers only library for c++ mocking.\"\n topics = (\"mock\", \"fake\", \"spy\")\n license = \"MIT\"\n homepage = \"https://github.com/eranpeer/FakeIt\"\n url = \"https://github.com/conan-io/conan-center-index\"\n package_type = \"header-library\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"integration\": [\"boost\", \"catch\", \"cute\", \"gtest\", \"mettle\", \"nunit\", \"mstest\", \"qtest\", \"standalone\", \"tpunit\"]\n }\n default_options = {\"integration\": \"standalone\"}\n no_copy_source = True\n\n @property\n def _min_cppstd(self):\n return 11\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def layout(self):\n basic_layout(self, src_folder=\"src\")\n\n def requirements(self):\n if self.options.integration == \"boost\":\n self.requires(\"boost/1.79.0\")\n elif self.options.integration == \"catch\":\n self.requires(\"catch2/2.13.9\")\n elif self.options.integration == \"gtest\":\n self.requires(\"gtest/1.11.0\")\n elif self.options.integration == \"qtest\":\n self.requires(\"qt/6.3.0\")\n elif self.options.integration == \"standalone\":\n pass\n else:\n raise ConanInvalidConfiguration(\"%s is not (yet) available on cci\" % self.options.integration)\n\n def package_id(self):\n # The \"integration\" option must be kept because it will impact which header is packaged,\n # therefor self.info.clear() cannot be used.\n self.info.settings.clear()\n self.info.requires.clear()\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n check_min_cppstd(self, self._min_cppstd)\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n\n def build(self):\n apply_conandata_patches(self)\n\n def package(self):\n copy(self, pattern=\"LICENSE\", dst=os.path.join(self.package_folder, \"licenses\"), src=self.source_folder)\n copy(\n self,\n pattern=\"fakeit.hpp\",\n dst=os.path.join(self.package_folder, \"include\"),\n src=os.path.join(self.source_folder, \"single_header\", str(self.options.integration)),\n )\n", "path": "recipes/fakeit/all/conanfile.py"}]} | 1,585 | 164 |
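
To make the effect of that fakeit fix explicit, here is a small illustrative contrast between the old and the patched `package_id` logic, taken from the recipe and diff above. It is a sketch only, not a complete Conan recipe.

```python
# Illustrative only: the two package_id variants from the fakeit recipe.
class PackageIdSketch:
    def package_id_old(self):
        # Old behaviour: settings, options and requirements were all dropped,
        # so the gtest/boost/standalone flavours collapsed into one package id.
        self.info.clear()

    def package_id_fixed(self):
        # Patched behaviour: keep the "integration" option (it decides which
        # fakeit.hpp is packaged) and clear only settings and requirements.
        self.info.settings.clear()
        self.info.requires.clear()
```

With the option retained, a consumer selecting, say, `integration=gtest` gets a package id distinct from the standalone one.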
gh_patches_debug_55064 | rasdani/github-patches | git_diff | secdev__scapy-1402 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
man page spelling error
intances should be instances.
It would be nice if this weren't gz-compressed in the source; otherwise I'd have done a pull request.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2
3 """
4 Distutils setup file for Scapy.
5 """
6
7
8 from distutils import archive_util
9 from distutils import sysconfig
10 from distutils.core import setup
11 from distutils.command.sdist import sdist
12 import os
13
14
15 EZIP_HEADER = """#! /bin/sh
16 PYTHONPATH=$0/%s exec python -m scapy
17 """
18
19
20 def make_ezipfile(base_name, base_dir, verbose=0, dry_run=0, **kwargs):
21 fname = archive_util.make_zipfile(base_name, base_dir, verbose, dry_run)
22 ofname = fname + ".old"
23 os.rename(fname, ofname)
24 of = open(ofname)
25 f = open(fname, "w")
26 f.write(EZIP_HEADER % base_dir)
27 while True:
28 data = of.read(8192)
29 if not data:
30 break
31 f.write(data)
32 f.close()
33 os.system("zip -A '%s'" % fname)
34 of.close()
35 os.unlink(ofname)
36 os.chmod(fname, 0o755)
37 return fname
38
39
40 archive_util.ARCHIVE_FORMATS["ezip"] = (
41 make_ezipfile, [], 'Executable ZIP file')
42
43 SCRIPTS = ['bin/scapy', 'bin/UTscapy']
44 # On Windows we also need additional batch files to run the above scripts
45 if os.name == "nt":
46 SCRIPTS += ['bin/scapy.bat', 'bin/UTscapy.bat']
47
48 setup(
49 name='scapy',
50 version=__import__('scapy').VERSION,
51 packages=[
52 'scapy',
53 'scapy/arch',
54 'scapy/arch/bpf',
55 'scapy/arch/windows',
56 'scapy/contrib',
57 'scapy/layers',
58 'scapy/layers/tls',
59 'scapy/layers/tls/crypto',
60 'scapy/modules',
61 'scapy/modules/krack',
62 'scapy/asn1',
63 'scapy/tools',
64 ],
65 scripts=SCRIPTS,
66 data_files=[('share/man/man1', ["doc/scapy.1.gz"])],
67 package_data={
68 'scapy': ['VERSION'],
69 },
70
71 # Metadata
72 author='Philippe BIONDI',
73 author_email='phil(at)secdev.org',
74 maintainer='Pierre LALET, Guillaume VALADON',
75 description='Scapy: interactive packet manipulation tool',
76 license='GPLv2',
77 url='http://www.secdev.org/projects/scapy',
78 download_url='https://github.com/secdev/scapy/tarball/master',
79 keywords=["network"],
80 classifiers=[
81 "Development Status :: 5 - Production/Stable",
82 "Environment :: Console",
83 "Intended Audience :: Developers",
84 "Intended Audience :: Information Technology",
85 "Intended Audience :: Science/Research",
86 "Intended Audience :: System Administrators",
87 "Intended Audience :: Telecommunications Industry",
88 "License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
89 "Programming Language :: Python :: 2",
90 "Programming Language :: Python :: 2.7",
91 "Programming Language :: Python :: 3",
92 "Programming Language :: Python :: 3.4",
93 "Programming Language :: Python :: 3.5",
94 "Programming Language :: Python :: 3.6",
95 "Topic :: Security",
96 "Topic :: System :: Networking",
97 "Topic :: System :: Networking :: Monitoring",
98 ]
99 )
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -63,7 +63,7 @@
'scapy/tools',
],
scripts=SCRIPTS,
- data_files=[('share/man/man1', ["doc/scapy.1.gz"])],
+ data_files=[('share/man/man1', ["doc/scapy.1"])],
package_data={
'scapy': ['VERSION'],
},
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -63,7 +63,7 @@\n 'scapy/tools',\n ],\n scripts=SCRIPTS,\n- data_files=[('share/man/man1', [\"doc/scapy.1.gz\"])],\n+ data_files=[('share/man/man1', [\"doc/scapy.1\"])],\n package_data={\n 'scapy': ['VERSION'],\n },\n", "issue": "man page spelling error\nintances should be instances.\r\n\r\nIt would be nice if this wasn't gz compressed in the source, otherwise I'd have done a pull request.\n", "before_files": [{"content": "#! /usr/bin/env python\n\n\"\"\"\nDistutils setup file for Scapy.\n\"\"\"\n\n\nfrom distutils import archive_util\nfrom distutils import sysconfig\nfrom distutils.core import setup\nfrom distutils.command.sdist import sdist\nimport os\n\n\nEZIP_HEADER = \"\"\"#! /bin/sh\nPYTHONPATH=$0/%s exec python -m scapy\n\"\"\"\n\n\ndef make_ezipfile(base_name, base_dir, verbose=0, dry_run=0, **kwargs):\n fname = archive_util.make_zipfile(base_name, base_dir, verbose, dry_run)\n ofname = fname + \".old\"\n os.rename(fname, ofname)\n of = open(ofname)\n f = open(fname, \"w\")\n f.write(EZIP_HEADER % base_dir)\n while True:\n data = of.read(8192)\n if not data:\n break\n f.write(data)\n f.close()\n os.system(\"zip -A '%s'\" % fname)\n of.close()\n os.unlink(ofname)\n os.chmod(fname, 0o755)\n return fname\n\n\narchive_util.ARCHIVE_FORMATS[\"ezip\"] = (\n make_ezipfile, [], 'Executable ZIP file')\n\nSCRIPTS = ['bin/scapy', 'bin/UTscapy']\n# On Windows we also need additional batch files to run the above scripts\nif os.name == \"nt\":\n SCRIPTS += ['bin/scapy.bat', 'bin/UTscapy.bat']\n\nsetup(\n name='scapy',\n version=__import__('scapy').VERSION,\n packages=[\n 'scapy',\n 'scapy/arch',\n 'scapy/arch/bpf',\n 'scapy/arch/windows',\n 'scapy/contrib',\n 'scapy/layers',\n 'scapy/layers/tls',\n 'scapy/layers/tls/crypto',\n 'scapy/modules',\n 'scapy/modules/krack',\n 'scapy/asn1',\n 'scapy/tools',\n ],\n scripts=SCRIPTS,\n data_files=[('share/man/man1', [\"doc/scapy.1.gz\"])],\n package_data={\n 'scapy': ['VERSION'],\n },\n\n # Metadata\n author='Philippe BIONDI',\n author_email='phil(at)secdev.org',\n maintainer='Pierre LALET, Guillaume VALADON',\n description='Scapy: interactive packet manipulation tool',\n license='GPLv2',\n url='http://www.secdev.org/projects/scapy',\n download_url='https://github.com/secdev/scapy/tarball/master',\n keywords=[\"network\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Information Technology\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: System Administrators\",\n \"Intended Audience :: Telecommunications Industry\",\n \"License :: OSI Approved :: GNU General Public License v2 (GPLv2)\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Security\",\n \"Topic :: System :: Networking\",\n \"Topic :: System :: Networking :: Monitoring\",\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#! /usr/bin/env python\n\n\"\"\"\nDistutils setup file for Scapy.\n\"\"\"\n\n\nfrom distutils import archive_util\nfrom distutils import sysconfig\nfrom distutils.core import setup\nfrom distutils.command.sdist import sdist\nimport os\n\n\nEZIP_HEADER = \"\"\"#! 
/bin/sh\nPYTHONPATH=$0/%s exec python -m scapy\n\"\"\"\n\n\ndef make_ezipfile(base_name, base_dir, verbose=0, dry_run=0, **kwargs):\n fname = archive_util.make_zipfile(base_name, base_dir, verbose, dry_run)\n ofname = fname + \".old\"\n os.rename(fname, ofname)\n of = open(ofname)\n f = open(fname, \"w\")\n f.write(EZIP_HEADER % base_dir)\n while True:\n data = of.read(8192)\n if not data:\n break\n f.write(data)\n f.close()\n os.system(\"zip -A '%s'\" % fname)\n of.close()\n os.unlink(ofname)\n os.chmod(fname, 0o755)\n return fname\n\n\narchive_util.ARCHIVE_FORMATS[\"ezip\"] = (\n make_ezipfile, [], 'Executable ZIP file')\n\nSCRIPTS = ['bin/scapy', 'bin/UTscapy']\n# On Windows we also need additional batch files to run the above scripts\nif os.name == \"nt\":\n SCRIPTS += ['bin/scapy.bat', 'bin/UTscapy.bat']\n\nsetup(\n name='scapy',\n version=__import__('scapy').VERSION,\n packages=[\n 'scapy',\n 'scapy/arch',\n 'scapy/arch/bpf',\n 'scapy/arch/windows',\n 'scapy/contrib',\n 'scapy/layers',\n 'scapy/layers/tls',\n 'scapy/layers/tls/crypto',\n 'scapy/modules',\n 'scapy/modules/krack',\n 'scapy/asn1',\n 'scapy/tools',\n ],\n scripts=SCRIPTS,\n data_files=[('share/man/man1', [\"doc/scapy.1\"])],\n package_data={\n 'scapy': ['VERSION'],\n },\n\n # Metadata\n author='Philippe BIONDI',\n author_email='phil(at)secdev.org',\n maintainer='Pierre LALET, Guillaume VALADON',\n description='Scapy: interactive packet manipulation tool',\n license='GPLv2',\n url='http://www.secdev.org/projects/scapy',\n download_url='https://github.com/secdev/scapy/tarball/master',\n keywords=[\"network\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Information Technology\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: System Administrators\",\n \"Intended Audience :: Telecommunications Industry\",\n \"License :: OSI Approved :: GNU General Public License v2 (GPLv2)\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Security\",\n \"Topic :: System :: Networking\",\n \"Topic :: System :: Networking :: Monitoring\",\n ]\n)\n", "path": "setup.py"}]} | 1,230 | 99 |
gh_patches_debug_40795 | rasdani/github-patches | git_diff | goauthentik__authentik-3556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `x5c` and `x5t`to the `jwks` response
**Is your feature request related to a problem? Please describe.**
I am trying to use Authentik as the identity provider for netbird via OAuth2/OIDC
**Describe the solution you'd like**
netbird expects the JWKS endpoint, which is `/application/o/<provider name>/jwks/`, to have a property for the `x5c`. The `x5c` (X.509 certificate chain) Header Parameter contains the X.509 public key certificate or certificate chain corresponding to the key used to digitally sign the JWS (JSON Web Signature).
**Describe alternatives you've considered**
n/a
**Additional context**
For the OAuth2 Provider, I specified a signing key which populated the `jwks` endpoint response with the following values:
```
{
"keys": [
{
"kty": "RSA",
"alg": "RS256",
"use": "sig",
"kid": "*REDACTED*",
"n": "*REDACTED*",
"e": "AQAB"
}
]
}
```
Comparing it to the example here: https://example.eu.auth0.com/.well-known/jwks.json , it is missing the `x5t` and `x5c` properties.
--- END ISSUE ---
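As an illustrative aside (not part of the original report and not authentik's actual implementation), the following is a minimal, generic sketch of how the `x5c` and `x5t` JWK members can be derived from a certificate with the `cryptography` package; the helper name is an assumption made for the example:

```python
from base64 import b64encode, urlsafe_b64encode

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import Encoding


def jwk_x5_members(cert: x509.Certificate) -> dict:
    """Build the x5c/x5t members of a JWK (RFC 7517, section 4) from a certificate."""
    der = cert.public_bytes(Encoding.DER)
    return {
        # x5c: standard (not URL-safe) base64 of the DER-encoded certificate
        "x5c": [b64encode(der).decode("ascii")],
        # x5t: unpadded base64url of the SHA-1 thumbprint of the DER certificate
        "x5t": urlsafe_b64encode(cert.fingerprint(hashes.SHA1())).decode("ascii").rstrip("="),
        # x5t#S256: the same thumbprint idea, but with SHA-256
        "x5t#S256": urlsafe_b64encode(cert.fingerprint(hashes.SHA256())).decode("ascii").rstrip("="),
    }
```

Consumers such as netbird typically only need `x5c`, but publishing both thumbprint variants is harmless and matches what providers like the Auth0 example above expose.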
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/providers/oauth2/views/jwks.py`
Content:
```
1 """authentik OAuth2 JWKS Views"""
2 from base64 import urlsafe_b64encode
3 from typing import Optional
4
5 from cryptography.hazmat.primitives.asymmetric.ec import (
6 EllipticCurvePrivateKey,
7 EllipticCurvePublicKey,
8 )
9 from cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey, RSAPublicKey
10 from django.http import HttpRequest, HttpResponse, JsonResponse
11 from django.shortcuts import get_object_or_404
12 from django.views import View
13
14 from authentik.core.models import Application
15 from authentik.crypto.models import CertificateKeyPair
16 from authentik.providers.oauth2.models import JWTAlgorithms, OAuth2Provider
17
18
19 def b64_enc(number: int) -> str:
20 """Convert number to base64-encoded octet-value"""
21 length = ((number).bit_length() + 7) // 8
22 number_bytes = number.to_bytes(length, "big")
23 final = urlsafe_b64encode(number_bytes).rstrip(b"=")
24 return final.decode("ascii")
25
26
27 class JWKSView(View):
28 """Show RSA Key data for Provider"""
29
30 def get_jwk_for_key(self, key: CertificateKeyPair) -> Optional[dict]:
31 """Convert a certificate-key pair into JWK"""
32 private_key = key.private_key
33 if not private_key:
34 return None
35 if isinstance(private_key, RSAPrivateKey):
36 public_key: RSAPublicKey = private_key.public_key()
37 public_numbers = public_key.public_numbers()
38 return {
39 "kty": "RSA",
40 "alg": JWTAlgorithms.RS256,
41 "use": "sig",
42 "kid": key.kid,
43 "n": b64_enc(public_numbers.n),
44 "e": b64_enc(public_numbers.e),
45 }
46 if isinstance(private_key, EllipticCurvePrivateKey):
47 public_key: EllipticCurvePublicKey = private_key.public_key()
48 public_numbers = public_key.public_numbers()
49 return {
50 "kty": "EC",
51 "alg": JWTAlgorithms.ES256,
52 "use": "sig",
53 "kid": key.kid,
54 "n": b64_enc(public_numbers.n),
55 "e": b64_enc(public_numbers.e),
56 }
57 return None
58
59 def get(self, request: HttpRequest, application_slug: str) -> HttpResponse:
60 """Show JWK Key data for Provider"""
61 application = get_object_or_404(Application, slug=application_slug)
62 provider: OAuth2Provider = get_object_or_404(OAuth2Provider, pk=application.provider_id)
63 signing_key: CertificateKeyPair = provider.signing_key
64
65 response_data = {}
66
67 if signing_key:
68 jwk = self.get_jwk_for_key(signing_key)
69 if jwk:
70 response_data["keys"] = [jwk]
71
72 response = JsonResponse(response_data)
73 response["Access-Control-Allow-Origin"] = "*"
74
75 return response
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/providers/oauth2/views/jwks.py b/authentik/providers/oauth2/views/jwks.py
--- a/authentik/providers/oauth2/views/jwks.py
+++ b/authentik/providers/oauth2/views/jwks.py
@@ -1,12 +1,14 @@
"""authentik OAuth2 JWKS Views"""
-from base64 import urlsafe_b64encode
+from base64 import b64encode, urlsafe_b64encode
from typing import Optional
+from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ec import (
EllipticCurvePrivateKey,
EllipticCurvePublicKey,
)
from cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey, RSAPublicKey
+from cryptography.hazmat.primitives.serialization import Encoding
from django.http import HttpRequest, HttpResponse, JsonResponse
from django.shortcuts import get_object_or_404
from django.views import View
@@ -30,12 +32,13 @@
def get_jwk_for_key(self, key: CertificateKeyPair) -> Optional[dict]:
"""Convert a certificate-key pair into JWK"""
private_key = key.private_key
+ key_data = None
if not private_key:
- return None
+ return key_data
if isinstance(private_key, RSAPrivateKey):
public_key: RSAPublicKey = private_key.public_key()
public_numbers = public_key.public_numbers()
- return {
+ key_data = {
"kty": "RSA",
"alg": JWTAlgorithms.RS256,
"use": "sig",
@@ -43,10 +46,10 @@
"n": b64_enc(public_numbers.n),
"e": b64_enc(public_numbers.e),
}
- if isinstance(private_key, EllipticCurvePrivateKey):
+ elif isinstance(private_key, EllipticCurvePrivateKey):
public_key: EllipticCurvePublicKey = private_key.public_key()
public_numbers = public_key.public_numbers()
- return {
+ key_data = {
"kty": "EC",
"alg": JWTAlgorithms.ES256,
"use": "sig",
@@ -54,7 +57,20 @@
"n": b64_enc(public_numbers.n),
"e": b64_enc(public_numbers.e),
}
- return None
+ else:
+ return key_data
+ key_data["x5c"] = [b64encode(key.certificate.public_bytes(Encoding.DER)).decode("utf-8")]
+ key_data["x5t"] = (
+ urlsafe_b64encode(key.certificate.fingerprint(hashes.SHA1())) # nosec
+ .decode("utf-8")
+ .rstrip("=")
+ )
+ key_data["x5t#S256"] = (
+ urlsafe_b64encode(key.certificate.fingerprint(hashes.SHA256()))
+ .decode("utf-8")
+ .rstrip("=")
+ )
+ return key_data
def get(self, request: HttpRequest, application_slug: str) -> HttpResponse:
"""Show JWK Key data for Provider"""
| {"golden_diff": "diff --git a/authentik/providers/oauth2/views/jwks.py b/authentik/providers/oauth2/views/jwks.py\n--- a/authentik/providers/oauth2/views/jwks.py\n+++ b/authentik/providers/oauth2/views/jwks.py\n@@ -1,12 +1,14 @@\n \"\"\"authentik OAuth2 JWKS Views\"\"\"\n-from base64 import urlsafe_b64encode\n+from base64 import b64encode, urlsafe_b64encode\n from typing import Optional\n \n+from cryptography.hazmat.primitives import hashes\n from cryptography.hazmat.primitives.asymmetric.ec import (\n EllipticCurvePrivateKey,\n EllipticCurvePublicKey,\n )\n from cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey, RSAPublicKey\n+from cryptography.hazmat.primitives.serialization import Encoding\n from django.http import HttpRequest, HttpResponse, JsonResponse\n from django.shortcuts import get_object_or_404\n from django.views import View\n@@ -30,12 +32,13 @@\n def get_jwk_for_key(self, key: CertificateKeyPair) -> Optional[dict]:\n \"\"\"Convert a certificate-key pair into JWK\"\"\"\n private_key = key.private_key\n+ key_data = None\n if not private_key:\n- return None\n+ return key_data\n if isinstance(private_key, RSAPrivateKey):\n public_key: RSAPublicKey = private_key.public_key()\n public_numbers = public_key.public_numbers()\n- return {\n+ key_data = {\n \"kty\": \"RSA\",\n \"alg\": JWTAlgorithms.RS256,\n \"use\": \"sig\",\n@@ -43,10 +46,10 @@\n \"n\": b64_enc(public_numbers.n),\n \"e\": b64_enc(public_numbers.e),\n }\n- if isinstance(private_key, EllipticCurvePrivateKey):\n+ elif isinstance(private_key, EllipticCurvePrivateKey):\n public_key: EllipticCurvePublicKey = private_key.public_key()\n public_numbers = public_key.public_numbers()\n- return {\n+ key_data = {\n \"kty\": \"EC\",\n \"alg\": JWTAlgorithms.ES256,\n \"use\": \"sig\",\n@@ -54,7 +57,20 @@\n \"n\": b64_enc(public_numbers.n),\n \"e\": b64_enc(public_numbers.e),\n }\n- return None\n+ else:\n+ return key_data\n+ key_data[\"x5c\"] = [b64encode(key.certificate.public_bytes(Encoding.DER)).decode(\"utf-8\")]\n+ key_data[\"x5t\"] = (\n+ urlsafe_b64encode(key.certificate.fingerprint(hashes.SHA1())) # nosec\n+ .decode(\"utf-8\")\n+ .rstrip(\"=\")\n+ )\n+ key_data[\"x5t#S256\"] = (\n+ urlsafe_b64encode(key.certificate.fingerprint(hashes.SHA256()))\n+ .decode(\"utf-8\")\n+ .rstrip(\"=\")\n+ )\n+ return key_data\n \n def get(self, request: HttpRequest, application_slug: str) -> HttpResponse:\n \"\"\"Show JWK Key data for Provider\"\"\"\n", "issue": "Add `x5c` and `x5t`to the `jwks` response\n**Is your feature request related to a problem? Please describe.**\r\nI am trying to use Authentik as the identity provider for netbird via OAuth2/OIDC\r\n\r\n**Describe the solution you'd like**\r\nnetbird expects the JWKS endpoint which is `/application/o/<provider name>/jwks/` to have a property for the `x5c`. 
The `x5c` (X.509 certificate chain) Header Parameter contains the X.509 public key certificate or certificate chain corresponding to the key used to digitally sign the JWS (JSON Web Signature).\r\n\r\n**Describe alternatives you've considered**\r\nn/a\r\n\r\n**Additional context**\r\nFor the OAuth2 Provider, I specified a signing key which populated the `jwks` endpoint response with the following values:\r\n```\r\n{\r\n \"keys\": [\r\n {\r\n \"kty\": \"RSA\",\r\n \"alg\": \"RS256\",\r\n \"use\": \"sig\",\r\n \"kid\": \"*REDACTED*\",\r\n \"n\": \"*REDACTED*\",\r\n \"e\": \"AQAB\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\nComparing it to the example here: https://example.eu.auth0.com/.well-known/jwks.json , it is missing the `x5t` and `x5c` properties.\n", "before_files": [{"content": "\"\"\"authentik OAuth2 JWKS Views\"\"\"\nfrom base64 import urlsafe_b64encode\nfrom typing import Optional\n\nfrom cryptography.hazmat.primitives.asymmetric.ec import (\n EllipticCurvePrivateKey,\n EllipticCurvePublicKey,\n)\nfrom cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey, RSAPublicKey\nfrom django.http import HttpRequest, HttpResponse, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.views import View\n\nfrom authentik.core.models import Application\nfrom authentik.crypto.models import CertificateKeyPair\nfrom authentik.providers.oauth2.models import JWTAlgorithms, OAuth2Provider\n\n\ndef b64_enc(number: int) -> str:\n \"\"\"Convert number to base64-encoded octet-value\"\"\"\n length = ((number).bit_length() + 7) // 8\n number_bytes = number.to_bytes(length, \"big\")\n final = urlsafe_b64encode(number_bytes).rstrip(b\"=\")\n return final.decode(\"ascii\")\n\n\nclass JWKSView(View):\n \"\"\"Show RSA Key data for Provider\"\"\"\n\n def get_jwk_for_key(self, key: CertificateKeyPair) -> Optional[dict]:\n \"\"\"Convert a certificate-key pair into JWK\"\"\"\n private_key = key.private_key\n if not private_key:\n return None\n if isinstance(private_key, RSAPrivateKey):\n public_key: RSAPublicKey = private_key.public_key()\n public_numbers = public_key.public_numbers()\n return {\n \"kty\": \"RSA\",\n \"alg\": JWTAlgorithms.RS256,\n \"use\": \"sig\",\n \"kid\": key.kid,\n \"n\": b64_enc(public_numbers.n),\n \"e\": b64_enc(public_numbers.e),\n }\n if isinstance(private_key, EllipticCurvePrivateKey):\n public_key: EllipticCurvePublicKey = private_key.public_key()\n public_numbers = public_key.public_numbers()\n return {\n \"kty\": \"EC\",\n \"alg\": JWTAlgorithms.ES256,\n \"use\": \"sig\",\n \"kid\": key.kid,\n \"n\": b64_enc(public_numbers.n),\n \"e\": b64_enc(public_numbers.e),\n }\n return None\n\n def get(self, request: HttpRequest, application_slug: str) -> HttpResponse:\n \"\"\"Show JWK Key data for Provider\"\"\"\n application = get_object_or_404(Application, slug=application_slug)\n provider: OAuth2Provider = get_object_or_404(OAuth2Provider, pk=application.provider_id)\n signing_key: CertificateKeyPair = provider.signing_key\n\n response_data = {}\n\n if signing_key:\n jwk = self.get_jwk_for_key(signing_key)\n if jwk:\n response_data[\"keys\"] = [jwk]\n\n response = JsonResponse(response_data)\n response[\"Access-Control-Allow-Origin\"] = \"*\"\n\n return response\n", "path": "authentik/providers/oauth2/views/jwks.py"}], "after_files": [{"content": "\"\"\"authentik OAuth2 JWKS Views\"\"\"\nfrom base64 import b64encode, urlsafe_b64encode\nfrom typing import Optional\n\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.ec 
import (\n EllipticCurvePrivateKey,\n EllipticCurvePublicKey,\n)\nfrom cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey, RSAPublicKey\nfrom cryptography.hazmat.primitives.serialization import Encoding\nfrom django.http import HttpRequest, HttpResponse, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.views import View\n\nfrom authentik.core.models import Application\nfrom authentik.crypto.models import CertificateKeyPair\nfrom authentik.providers.oauth2.models import JWTAlgorithms, OAuth2Provider\n\n\ndef b64_enc(number: int) -> str:\n \"\"\"Convert number to base64-encoded octet-value\"\"\"\n length = ((number).bit_length() + 7) // 8\n number_bytes = number.to_bytes(length, \"big\")\n final = urlsafe_b64encode(number_bytes).rstrip(b\"=\")\n return final.decode(\"ascii\")\n\n\nclass JWKSView(View):\n \"\"\"Show RSA Key data for Provider\"\"\"\n\n def get_jwk_for_key(self, key: CertificateKeyPair) -> Optional[dict]:\n \"\"\"Convert a certificate-key pair into JWK\"\"\"\n private_key = key.private_key\n key_data = None\n if not private_key:\n return key_data\n if isinstance(private_key, RSAPrivateKey):\n public_key: RSAPublicKey = private_key.public_key()\n public_numbers = public_key.public_numbers()\n key_data = {\n \"kty\": \"RSA\",\n \"alg\": JWTAlgorithms.RS256,\n \"use\": \"sig\",\n \"kid\": key.kid,\n \"n\": b64_enc(public_numbers.n),\n \"e\": b64_enc(public_numbers.e),\n }\n elif isinstance(private_key, EllipticCurvePrivateKey):\n public_key: EllipticCurvePublicKey = private_key.public_key()\n public_numbers = public_key.public_numbers()\n key_data = {\n \"kty\": \"EC\",\n \"alg\": JWTAlgorithms.ES256,\n \"use\": \"sig\",\n \"kid\": key.kid,\n \"n\": b64_enc(public_numbers.n),\n \"e\": b64_enc(public_numbers.e),\n }\n else:\n return key_data\n key_data[\"x5c\"] = [b64encode(key.certificate.public_bytes(Encoding.DER)).decode(\"utf-8\")]\n key_data[\"x5t\"] = (\n urlsafe_b64encode(key.certificate.fingerprint(hashes.SHA1())) # nosec\n .decode(\"utf-8\")\n .rstrip(\"=\")\n )\n key_data[\"x5t#S256\"] = (\n urlsafe_b64encode(key.certificate.fingerprint(hashes.SHA256()))\n .decode(\"utf-8\")\n .rstrip(\"=\")\n )\n return key_data\n\n def get(self, request: HttpRequest, application_slug: str) -> HttpResponse:\n \"\"\"Show JWK Key data for Provider\"\"\"\n application = get_object_or_404(Application, slug=application_slug)\n provider: OAuth2Provider = get_object_or_404(OAuth2Provider, pk=application.provider_id)\n signing_key: CertificateKeyPair = provider.signing_key\n\n response_data = {}\n\n if signing_key:\n jwk = self.get_jwk_for_key(signing_key)\n if jwk:\n response_data[\"keys\"] = [jwk]\n\n response = JsonResponse(response_data)\n response[\"Access-Control-Allow-Origin\"] = \"*\"\n\n return response\n", "path": "authentik/providers/oauth2/views/jwks.py"}]} | 1,349 | 717 |
gh_patches_debug_38206 | rasdani/github-patches | git_diff | fossasia__open-event-server-9030 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Store check-in kiosk id to mark association
Allow organiser to create station name for each event
- station name
- location (based on the locations available for the venue) - if registration is selected, location can be empty
- type (registration / daily / check in / check out)
--- END ISSUE ---
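As a non-authoritative illustration of the data structure this feature request describes (the real model lives in the project's codebase and may differ), a station record could be sketched as below; every table and column name here is an assumption made for the example:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Station(Base):
    """Hypothetical sketch of a check-in station record."""

    __tablename__ = "station"

    id = Column(Integer, primary_key=True)
    station_name = Column(String, nullable=False)
    # one of: registration / daily / check in / check out
    station_type = Column(String, nullable=False)
    # location may be left empty when the station type is "registration"
    microlocation_id = Column(Integer, ForeignKey("microlocation.id"), nullable=True)
    event_id = Column(Integer, ForeignKey("events.id"), nullable=False)
```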
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/station.py`
Content:
```
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from flask_rest_jsonapi.exceptions import ObjectNotFound
3
4 from app.api.helpers.db import safe_query_kwargs
5 from app.api.helpers.errors import UnprocessableEntityError
6 from app.api.helpers.permission_manager import has_access
7 from app.api.helpers.permissions import jwt_required
8 from app.api.helpers.utilities import require_relationship
9 from app.api.schema.station import StationSchema
10 from app.models import db
11 from app.models.event import Event
12 from app.models.microlocation import Microlocation
13 from app.models.station import Station
14
15
16 class StationList(ResourceList):
17 """Create and List Station"""
18
19 def query(self, view_kwargs):
20 """
21 query method for different view_kwargs
22 :param view_kwargs:
23 :return:
24 """
25 query_ = self.session.query(Station)
26 if view_kwargs.get('event_id'):
27 event = safe_query_kwargs(Event, view_kwargs, 'event_id')
28 query_ = query_.filter_by(event_id=event.id)
29
30 elif view_kwargs.get('microlocation_id'):
31 event = safe_query_kwargs(Microlocation, view_kwargs, 'microlocation_id')
32 query_ = query_.filter_by(microlocation_id=event.id)
33
34 return query_
35
36 view_kwargs = True
37 schema = StationSchema
38 data_layer = {
39 'session': db.session,
40 'model': Station,
41 'methods': {'query': query},
42 }
43
44
45 class StationDetail(ResourceDetail):
46 """Station detail by id"""
47
48 @staticmethod
49 def before_patch(args, kwargs, data):
50 """
51 before patch method
52 :param args:
53 :param kwargs:
54 :param data:
55 :return:
56 """
57 require_relationship(['event'], data)
58 if not has_access('is_coorganizer', event_id=data['event']):
59 raise ObjectNotFound(
60 {'parameter': 'event'},
61 f"Event: {data['event']} not found {args} {kwargs}",
62 )
63
64 if data.get('microlocation'):
65 require_relationship(['microlocation'], data)
66 else:
67 if data['station_type'] in ('check in', 'check out', 'daily'):
68 raise ObjectNotFound(
69 {'parameter': 'microlocation'},
70 "Microlocation: microlocation_id is missing from your request.",
71 )
72 station = Station.query.filter_by(
73 station_type=data.get('station_type'),
74 microlocation_id=data.get('microlocation'),
75 event_id=data.get('event'),
76 ).first()
77 if station:
78 raise UnprocessableEntityError(
79 {
80 'station_type': data.get('station_type'),
81 'microlocation_id': data.get('microlocation'),
82 'event_id': data.get('event'),
83 },
84 "A Station already exists for the provided Event ID"
85 ", Microlocation ID and Station type",
86 )
87
88 schema = StationSchema
89 data_layer = {
90 'session': db.session,
91 'model': Station,
92 }
93
94
95 class StationRelationship(ResourceRelationship):
96 """Station Relationship (Required)"""
97
98 decorators = (jwt_required,)
99 methods = ['GET', 'PATCH']
100 schema = StationSchema
101 data_layer = {'session': db.session, 'model': Station}
102
103
104 class StationListPost(ResourceList):
105 """Create and List Station"""
106
107 @staticmethod
108 def before_post(args, kwargs, data):
109 """
110 method to check for required relationship with event and microlocation
111 :param data:
112 :param args:
113 :param kwargs:
114 :return:
115 """
116 require_relationship(['event'], data)
117 if not has_access('is_coorganizer', event_id=data['event']):
118 raise ObjectNotFound(
119 {'parameter': 'event'},
120 f"Event: {data['event']} not found {args} {kwargs}",
121 )
122
123 if data.get('microlocation'):
124 require_relationship(['microlocation'], data)
125 else:
126 if data['station_type'] in ('check in', 'check out', 'daily'):
127 raise ObjectNotFound(
128 {'parameter': 'microlocation'},
129 "Microlocation: missing from your request.",
130 )
131
132 def before_create_object(self, data, view_kwargs):
133 """
134 function to check if station already exist
135 @param data:
136 @param view_kwargs:
137 """
138 station = (
139 self.session.query(Station)
140 .filter_by(
141 station_type=data.get('station_type'),
142 microlocation_id=data.get('microlocation'),
143 event_id=data.get('event'),
144 )
145 .first()
146 )
147 if station:
148 raise UnprocessableEntityError(
149 {
150 'station_type': data.get('station_type'),
151 'microlocation_id': data.get('microlocation'),
152 'event_id': data.get('event'),
153 'view_kwargs': view_kwargs,
154 },
155 "A Station already exists for the provided Event ID"
156 ", Microlocation ID and Station type",
157 )
158
159 schema = StationSchema
160 methods = [
161 'POST',
162 ]
163 data_layer = {
164 'session': db.session,
165 'model': Station,
166 'methods': {'before_create_object': before_create_object},
167 }
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/station.py b/app/api/station.py
--- a/app/api/station.py
+++ b/app/api/station.py
@@ -2,7 +2,6 @@
from flask_rest_jsonapi.exceptions import ObjectNotFound
from app.api.helpers.db import safe_query_kwargs
-from app.api.helpers.errors import UnprocessableEntityError
from app.api.helpers.permission_manager import has_access
from app.api.helpers.permissions import jwt_required
from app.api.helpers.utilities import require_relationship
@@ -69,21 +68,6 @@
{'parameter': 'microlocation'},
"Microlocation: microlocation_id is missing from your request.",
)
- station = Station.query.filter_by(
- station_type=data.get('station_type'),
- microlocation_id=data.get('microlocation'),
- event_id=data.get('event'),
- ).first()
- if station:
- raise UnprocessableEntityError(
- {
- 'station_type': data.get('station_type'),
- 'microlocation_id': data.get('microlocation'),
- 'event_id': data.get('event'),
- },
- "A Station already exists for the provided Event ID"
- ", Microlocation ID and Station type",
- )
schema = StationSchema
data_layer = {
@@ -129,33 +113,6 @@
"Microlocation: missing from your request.",
)
- def before_create_object(self, data, view_kwargs):
- """
- function to check if station already exist
- @param data:
- @param view_kwargs:
- """
- station = (
- self.session.query(Station)
- .filter_by(
- station_type=data.get('station_type'),
- microlocation_id=data.get('microlocation'),
- event_id=data.get('event'),
- )
- .first()
- )
- if station:
- raise UnprocessableEntityError(
- {
- 'station_type': data.get('station_type'),
- 'microlocation_id': data.get('microlocation'),
- 'event_id': data.get('event'),
- 'view_kwargs': view_kwargs,
- },
- "A Station already exists for the provided Event ID"
- ", Microlocation ID and Station type",
- )
-
schema = StationSchema
methods = [
'POST',
@@ -163,5 +120,4 @@
data_layer = {
'session': db.session,
'model': Station,
- 'methods': {'before_create_object': before_create_object},
}
| {"golden_diff": "diff --git a/app/api/station.py b/app/api/station.py\n--- a/app/api/station.py\n+++ b/app/api/station.py\n@@ -2,7 +2,6 @@\n from flask_rest_jsonapi.exceptions import ObjectNotFound\n \n from app.api.helpers.db import safe_query_kwargs\n-from app.api.helpers.errors import UnprocessableEntityError\n from app.api.helpers.permission_manager import has_access\n from app.api.helpers.permissions import jwt_required\n from app.api.helpers.utilities import require_relationship\n@@ -69,21 +68,6 @@\n {'parameter': 'microlocation'},\n \"Microlocation: microlocation_id is missing from your request.\",\n )\n- station = Station.query.filter_by(\n- station_type=data.get('station_type'),\n- microlocation_id=data.get('microlocation'),\n- event_id=data.get('event'),\n- ).first()\n- if station:\n- raise UnprocessableEntityError(\n- {\n- 'station_type': data.get('station_type'),\n- 'microlocation_id': data.get('microlocation'),\n- 'event_id': data.get('event'),\n- },\n- \"A Station already exists for the provided Event ID\"\n- \", Microlocation ID and Station type\",\n- )\n \n schema = StationSchema\n data_layer = {\n@@ -129,33 +113,6 @@\n \"Microlocation: missing from your request.\",\n )\n \n- def before_create_object(self, data, view_kwargs):\n- \"\"\"\n- function to check if station already exist\n- @param data:\n- @param view_kwargs:\n- \"\"\"\n- station = (\n- self.session.query(Station)\n- .filter_by(\n- station_type=data.get('station_type'),\n- microlocation_id=data.get('microlocation'),\n- event_id=data.get('event'),\n- )\n- .first()\n- )\n- if station:\n- raise UnprocessableEntityError(\n- {\n- 'station_type': data.get('station_type'),\n- 'microlocation_id': data.get('microlocation'),\n- 'event_id': data.get('event'),\n- 'view_kwargs': view_kwargs,\n- },\n- \"A Station already exists for the provided Event ID\"\n- \", Microlocation ID and Station type\",\n- )\n-\n schema = StationSchema\n methods = [\n 'POST',\n@@ -163,5 +120,4 @@\n data_layer = {\n 'session': db.session,\n 'model': Station,\n- 'methods': {'before_create_object': before_create_object},\n }\n", "issue": "Store check in kiosk id to mark association\nAllow organiser to create station name for each event\r\n\r\n- station name\r\n- location (based on the locations available for the venue) - if registration is selected, location can be empty\r\n- type (registration / daily / check in / check out )\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.db import safe_query_kwargs\nfrom app.api.helpers.errors import UnprocessableEntityError\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.station import StationSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.microlocation import Microlocation\nfrom app.models.station import Station\n\n\nclass StationList(ResourceList):\n \"\"\"Create and List Station\"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Station)\n if view_kwargs.get('event_id'):\n event = safe_query_kwargs(Event, view_kwargs, 'event_id')\n query_ = query_.filter_by(event_id=event.id)\n\n elif view_kwargs.get('microlocation_id'):\n event = safe_query_kwargs(Microlocation, 
view_kwargs, 'microlocation_id')\n query_ = query_.filter_by(microlocation_id=event.id)\n\n return query_\n\n view_kwargs = True\n schema = StationSchema\n data_layer = {\n 'session': db.session,\n 'model': Station,\n 'methods': {'query': query},\n }\n\n\nclass StationDetail(ResourceDetail):\n \"\"\"Station detail by id\"\"\"\n\n @staticmethod\n def before_patch(args, kwargs, data):\n \"\"\"\n before patch method\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound(\n {'parameter': 'event'},\n f\"Event: {data['event']} not found {args} {kwargs}\",\n )\n\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n if data['station_type'] in ('check in', 'check out', 'daily'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: microlocation_id is missing from your request.\",\n )\n station = Station.query.filter_by(\n station_type=data.get('station_type'),\n microlocation_id=data.get('microlocation'),\n event_id=data.get('event'),\n ).first()\n if station:\n raise UnprocessableEntityError(\n {\n 'station_type': data.get('station_type'),\n 'microlocation_id': data.get('microlocation'),\n 'event_id': data.get('event'),\n },\n \"A Station already exists for the provided Event ID\"\n \", Microlocation ID and Station type\",\n )\n\n schema = StationSchema\n data_layer = {\n 'session': db.session,\n 'model': Station,\n }\n\n\nclass StationRelationship(ResourceRelationship):\n \"\"\"Station Relationship (Required)\"\"\"\n\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = StationSchema\n data_layer = {'session': db.session, 'model': Station}\n\n\nclass StationListPost(ResourceList):\n \"\"\"Create and List Station\"\"\"\n\n @staticmethod\n def before_post(args, kwargs, data):\n \"\"\"\n method to check for required relationship with event and microlocation\n :param data:\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound(\n {'parameter': 'event'},\n f\"Event: {data['event']} not found {args} {kwargs}\",\n )\n\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n if data['station_type'] in ('check in', 'check out', 'daily'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: missing from your request.\",\n )\n\n def before_create_object(self, data, view_kwargs):\n \"\"\"\n function to check if station already exist\n @param data:\n @param view_kwargs:\n \"\"\"\n station = (\n self.session.query(Station)\n .filter_by(\n station_type=data.get('station_type'),\n microlocation_id=data.get('microlocation'),\n event_id=data.get('event'),\n )\n .first()\n )\n if station:\n raise UnprocessableEntityError(\n {\n 'station_type': data.get('station_type'),\n 'microlocation_id': data.get('microlocation'),\n 'event_id': data.get('event'),\n 'view_kwargs': view_kwargs,\n },\n \"A Station already exists for the provided Event ID\"\n \", Microlocation ID and Station type\",\n )\n\n schema = StationSchema\n methods = [\n 'POST',\n ]\n data_layer = {\n 'session': db.session,\n 'model': Station,\n 'methods': {'before_create_object': before_create_object},\n }\n", "path": "app/api/station.py"}], "after_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom 
flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.db import safe_query_kwargs\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.station import StationSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.microlocation import Microlocation\nfrom app.models.station import Station\n\n\nclass StationList(ResourceList):\n \"\"\"Create and List Station\"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Station)\n if view_kwargs.get('event_id'):\n event = safe_query_kwargs(Event, view_kwargs, 'event_id')\n query_ = query_.filter_by(event_id=event.id)\n\n elif view_kwargs.get('microlocation_id'):\n event = safe_query_kwargs(Microlocation, view_kwargs, 'microlocation_id')\n query_ = query_.filter_by(microlocation_id=event.id)\n\n return query_\n\n view_kwargs = True\n schema = StationSchema\n data_layer = {\n 'session': db.session,\n 'model': Station,\n 'methods': {'query': query},\n }\n\n\nclass StationDetail(ResourceDetail):\n \"\"\"Station detail by id\"\"\"\n\n @staticmethod\n def before_patch(args, kwargs, data):\n \"\"\"\n before patch method\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound(\n {'parameter': 'event'},\n f\"Event: {data['event']} not found {args} {kwargs}\",\n )\n\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n if data['station_type'] in ('check in', 'check out', 'daily'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: microlocation_id is missing from your request.\",\n )\n\n schema = StationSchema\n data_layer = {\n 'session': db.session,\n 'model': Station,\n }\n\n\nclass StationRelationship(ResourceRelationship):\n \"\"\"Station Relationship (Required)\"\"\"\n\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = StationSchema\n data_layer = {'session': db.session, 'model': Station}\n\n\nclass StationListPost(ResourceList):\n \"\"\"Create and List Station\"\"\"\n\n @staticmethod\n def before_post(args, kwargs, data):\n \"\"\"\n method to check for required relationship with event and microlocation\n :param data:\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound(\n {'parameter': 'event'},\n f\"Event: {data['event']} not found {args} {kwargs}\",\n )\n\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n if data['station_type'] in ('check in', 'check out', 'daily'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: missing from your request.\",\n )\n\n schema = StationSchema\n methods = [\n 'POST',\n ]\n data_layer = {\n 'session': db.session,\n 'model': Station,\n }\n", "path": "app/api/station.py"}]} | 1,794 | 558 |
gh_patches_debug_9868 | rasdani/github-patches | git_diff | ckan__ckan-7906 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replacing MD5 hashing algorithm with SHA512
In file: common_middleware.py, method: __call__, the hashing algorithm used is no longer considered secure because it is possible to produce collisions. This can lead to brute-force attempts to find two or more inputs that produce the same hash. iCR suggested that safer alternative hash algorithms, such as SHA-256, SHA-512, or SHA-3, be used.
In the file, MD5 is used to generate a key based on several parameters, which is inserted into the database as `user_key`. In that case, it's recommended to use a more secure, less collision-prone hash function such as SHA-256 or SHA-512.
### Sponsorship and Support:
This work is done by the security researchers from OpenRefactory and is supported by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/): [Project Alpha-Omega](https://alpha-omega.dev/). Alpha-Omega is a project partnering with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code - and get them fixed – to improve global software supply chain security.
The bug is found by running the Intelligent Code Repair (iCR) tool by OpenRefactory and then manually triaging the results.
--- END ISSUE ---
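As an illustrative aside (not CKAN code), a minimal sketch of the SHA-512 approach the issue suggests might look like the following; the helper name and signature are assumptions made for the example:

```python
import hashlib


def anonymized_user_key(user_agent: str, remote_addr: str,
                        accept_language: str = "", accept_encoding: str = "") -> str:
    """Derive a stable, anonymized visitor key without relying on MD5."""
    raw = "".join([user_agent, remote_addr, accept_language, accept_encoding])
    # SHA-512 (or SHA-256) avoids the collision concerns raised for MD5
    return hashlib.sha512(raw.encode("utf-8")).hexdigest()
```

Since the key is only used to de-duplicate clicks rather than for authentication, another option on Python 3.9+ is `hashlib.new("md5", usedforsecurity=False)`, which keeps the stored value format unchanged while marking the use as non-security-critical.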
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext/tracking/middleware.py`
Content:
```
1 import hashlib
2
3 from urllib.parse import unquote
4
5 from ckan.model.meta import engine
6 from ckan.common import request
7 from ckan.types import Response
8
9
10 def track_request(response: Response) -> Response:
11 path = request.environ.get('PATH_INFO')
12 method = request.environ.get('REQUEST_METHOD')
13 if path == '/_tracking' and method == 'POST':
14 # wsgi.input is a BytesIO object
15 payload = request.environ['wsgi.input'].read().decode()
16 parts = payload.split('&')
17 data = {}
18 for part in parts:
19 k, v = part.split('=')
20 data[k] = unquote(v)
21
22 # we want a unique anonomized key for each user so that we do
23 # not count multiple clicks from the same user.
24 key = ''.join([
25 request.environ['HTTP_USER_AGENT'],
26 request.environ['REMOTE_ADDR'],
27 request.environ.get('HTTP_ACCEPT_LANGUAGE', ''),
28 request.environ.get('HTTP_ACCEPT_ENCODING', ''),
29 ])
30 key = hashlib.md5(key.encode()).hexdigest()
31 # store key/data here
32 sql = '''INSERT INTO tracking_raw
33 (user_key, url, tracking_type)
34 VALUES (%s, %s, %s)'''
35 engine.execute( # type: ignore
36 sql, key, data.get('url'), data.get('type')
37 )
38 return response
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext/tracking/middleware.py b/ckanext/tracking/middleware.py
--- a/ckanext/tracking/middleware.py
+++ b/ckanext/tracking/middleware.py
@@ -27,7 +27,9 @@
request.environ.get('HTTP_ACCEPT_LANGUAGE', ''),
request.environ.get('HTTP_ACCEPT_ENCODING', ''),
])
- key = hashlib.md5(key.encode()).hexdigest()
+ # raises a type error on python<3.9
+ h = hashlib.new('md5', usedforsecurity=False) # type: ignore
+ key = h.update(key.encode()).hexdigest()
# store key/data here
sql = '''INSERT INTO tracking_raw
(user_key, url, tracking_type)
| {"golden_diff": "diff --git a/ckanext/tracking/middleware.py b/ckanext/tracking/middleware.py\n--- a/ckanext/tracking/middleware.py\n+++ b/ckanext/tracking/middleware.py\n@@ -27,7 +27,9 @@\n request.environ.get('HTTP_ACCEPT_LANGUAGE', ''),\n request.environ.get('HTTP_ACCEPT_ENCODING', ''),\n ])\n- key = hashlib.md5(key.encode()).hexdigest()\n+ # raises a type error on python<3.9\n+ h = hashlib.new('md5', usedforsecurity=False) # type: ignore\n+ key = h.update(key.encode()).hexdigest()\n # store key/data here\n sql = '''INSERT INTO tracking_raw\n (user_key, url, tracking_type)\n", "issue": "Replacing MD5 hashing algorithm with SHA512\nIn file: common_middleware.py, method: __call__, the used hashing algorithm is no longer considered secure because it is possible to have collisions. This can lead to brute force attempt to find two or more inputs that produce the same hash. iCR suggested that safer alternative hash algorithms, such as SHA-256, SHA-512, SHA-3 are used. \n\nIn the file, MD5 is used to generate a key based on several parameters and inserted into the database as `user_key`. In that case, it's recommended to use a more secure, less collision prone hash function such as- SHA256 or SHA512.\n\n\n### Sponsorship and Support:\n\nThis work is done by the security researchers from OpenRefactory and is supported by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/): [Project Alpha-Omega](https://alpha-omega.dev/). Alpha-Omega is a project partnering with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code - and get them fixed \u2013 to improve global software supply chain security.\n\nThe bug is found by running the Intelligent Code Repair (iCR) tool by OpenRefactory and then manually triaging the results.\n", "before_files": [{"content": "import hashlib\n\nfrom urllib.parse import unquote\n\nfrom ckan.model.meta import engine\nfrom ckan.common import request\nfrom ckan.types import Response\n\n\ndef track_request(response: Response) -> Response:\n path = request.environ.get('PATH_INFO')\n method = request.environ.get('REQUEST_METHOD')\n if path == '/_tracking' and method == 'POST':\n # wsgi.input is a BytesIO object\n payload = request.environ['wsgi.input'].read().decode()\n parts = payload.split('&')\n data = {}\n for part in parts:\n k, v = part.split('=')\n data[k] = unquote(v)\n\n # we want a unique anonomized key for each user so that we do\n # not count multiple clicks from the same user.\n key = ''.join([\n request.environ['HTTP_USER_AGENT'],\n request.environ['REMOTE_ADDR'],\n request.environ.get('HTTP_ACCEPT_LANGUAGE', ''),\n request.environ.get('HTTP_ACCEPT_ENCODING', ''),\n ])\n key = hashlib.md5(key.encode()).hexdigest()\n # store key/data here\n sql = '''INSERT INTO tracking_raw\n (user_key, url, tracking_type)\n VALUES (%s, %s, %s)'''\n engine.execute( # type: ignore\n sql, key, data.get('url'), data.get('type')\n )\n return response\n", "path": "ckanext/tracking/middleware.py"}], "after_files": [{"content": "import hashlib\n\nfrom urllib.parse import unquote\n\nfrom ckan.model.meta import engine\nfrom ckan.common import request\nfrom ckan.types import Response\n\n\ndef track_request(response: Response) -> Response:\n path = request.environ.get('PATH_INFO')\n method = request.environ.get('REQUEST_METHOD')\n if path == '/_tracking' and method == 'POST':\n # wsgi.input is a BytesIO object\n payload = request.environ['wsgi.input'].read().decode()\n parts = payload.split('&')\n data = 
{}\n for part in parts:\n k, v = part.split('=')\n data[k] = unquote(v)\n\n # we want a unique anonomized key for each user so that we do\n # not count multiple clicks from the same user.\n key = ''.join([\n request.environ['HTTP_USER_AGENT'],\n request.environ['REMOTE_ADDR'],\n request.environ.get('HTTP_ACCEPT_LANGUAGE', ''),\n request.environ.get('HTTP_ACCEPT_ENCODING', ''),\n ])\n # raises a type error on python<3.9\n h = hashlib.new('md5', usedforsecurity=False) # type: ignore\n key = h.update(key.encode()).hexdigest()\n # store key/data here\n sql = '''INSERT INTO tracking_raw\n (user_key, url, tracking_type)\n VALUES (%s, %s, %s)'''\n engine.execute( # type: ignore\n sql, key, data.get('url'), data.get('type')\n )\n return response\n", "path": "ckanext/tracking/middleware.py"}]} | 897 | 166 |
gh_patches_debug_13520 | rasdani/github-patches | git_diff | rucio__rucio-2801 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
1.20.4rc2 storm protocol bug
Motivation
----------
The Storm protocol in RSEManager returns the input lfn as the pfn in lfns2pfns. This causes a crash as an InternalScope is then used as a dictionary key in list_replicas.
Modification
------------
The lfns dictionary should be sanitised so that scope is returned as an external string.
--- END ISSUE ---
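As an illustrative aside (not the project's actual patch), sanitising the lfns so that the scope becomes an external string could be sketched as below; it assumes the scope object exposes an `external` attribute, as the issue context describes:

```python
def externalize_scopes(lfns):
    """Return a copy of lfns (a dict or a list of dicts) with scope objects
    replaced by their external string representation."""
    def _fix(lfn):
        out = dict(lfn)
        scope = out.get("scope")
        if scope is not None and hasattr(scope, "external"):
            out["scope"] = scope.external
        return out

    if isinstance(lfns, dict):
        return _fix(lfns)
    return [_fix(lfn) for lfn in lfns]
```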
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/rse/protocols/storm.py`
Content:
```
1 # Copyright European Organization for Nuclear Research (CERN)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # You may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Authors:
9 # - Tomas Javor Javurek, <[email protected]>, 2019
10
11
12 import os
13
14 from exceptions import NotImplementedError
15 from xml.dom import minidom
16
17 from rucio.common import exception
18 from rucio.common.utils import run_cmd_process
19 from rucio.rse.protocols import protocol
20
21
22 class Default(protocol.RSEProtocol):
23 """ Implementing access to RSEs using the local filesystem."""
24
25 def __init__(self, protocol_attr, rse_settings):
26 """ Initializes the object with information about the referred RSE.
27
28 :param props Properties derived from the RSE Repository
29 """
30 super(Default, self).__init__(protocol_attr, rse_settings)
31 self.attributes.pop('determinism_type', None)
32 self.files = []
33
34 def _get_path(self, scope, name):
35 """ Transforms the physical file name into the local URI in the referred RSE.
36 Suitable for sites implementoing the RUCIO naming convention.
37
38 :param name: filename
39 :param scope: scope
40
41 :returns: RSE specific URI of the physical file
42 """
43 return '%s/%s' % (scope, name)
44
45 def lfns2pfns(self, lfns):
46 """ In this case, just returns back lfn. """
47 return lfns
48
49 def path2pfn(self, path):
50 """
51 Retruns a fully qualified PFN for the file referred by path.
52
53 :param path: The path to the file.
54
55 :returns: Fully qualified PFN.
56
57 """
58 return ''.join([self.rse['scheme'], '://%s' % self.rse['hostname'], path])
59
60 def exists(self, pfn):
61 """ Checks if the requested file is known by the referred RSE.
62
63 :param pfn Physical file name
64
65 :returns: True if the file exists, False if it doesn't
66
67 :raise ServiceUnavailable
68 """
69 raise NotImplementedError
70
71 def connect(self):
72 """ Establishes the actual connection to the referred RSE.
73
74 :param credentials Provide all necessary information to establish a connection
75 to the referred storage system. Some is loaded from the repository inside the
76 RSE class and some must be provided specific for the SFTP protocol like
77 username, password, private_key, private_key_pass, port.
78 For details about possible additional parameters and details about their usage
79 see the pysftp.Connection() documentation.
80 NOTE: the host parametrer is overwritten with the value provided by the repository
81
82 :raise RSEAccessDenied
83 """
84 pass
85
86 def close(self):
87 """ Closes the connection to RSE."""
88 pass
89
90 def get(self, pfn, dest, transfer_timeout=None):
91 """ Provides access to files stored inside connected the RSE.
92
93 :param pfn Physical file name of requested file
94 :param dest Name and path of the files when stored at the client
95 :param transfer_timeout Transfer timeout (in seconds)
96
97 :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound
98 """
99
100 # storm prefix needs to be replaced by davs in order to get etag
101 pfn = 'davs' + pfn[5:]
102
103 # retrieve the TURL from the webdav etag, TODO: make it configurable
104 cmd = 'davix-http --capath /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/etc/grid-security-emi/certificates --cert $X509_USER_PROXY -X PROPFIND %s' % pfn
105 try:
106 rcode, output = run_cmd_process(cmd, timeout=10)
107 except Exception as e:
108 raise exception.ServiceUnavailable('Could not retrieve STORM WebDAV ETag: %s' % str(e))
109 p_output = minidom.parseString(output)
110
111 # we need to strip off the quotation marks and the <timestamp> from the etag
112 # but since we can have multiple underscores, we have to rely on the uniqueness
113 # of the full LFN to make the split
114 target = p_output.getElementsByTagName('d:getetag')[0].childNodes[0].nodeValue.replace('"', '')
115 target_ending = '_' + target.split('_')[-1]
116 target = target.split(target_ending)[0]
117
118 # make the symlink
119 try:
120 os.symlink(target, dest)
121 except Exception as e:
122 exception.ServiceUnavailable('Could not create symlink: %s for target %s' % (str(e), str(target)))
123
124 def put(self, source, target, source_dir=None, transfer_timeout=None):
125 """ Allows to store files inside the referred RSE.
126
127 :param source Physical file name
128 :param target Name of the file on the storage system e.g. with prefixed scope
129 :param source_dir Path where the to be transferred files are stored in the local file system
130 :param transfer_timeout Transfer timeout (in seconds)
131
132 :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound
133 """
134 raise NotImplementedError
135
136 def delete(self, pfn):
137 """ Deletes a file from the connected RSE.
138
139 :param pfn Physical file name
140
141 :raises ServiceUnavailable, SourceNotFound
142 """
143 raise NotImplementedError
144
145 def rename(self, pfn, new_pfn):
146 """ Allows to rename a file stored inside the connected RSE.
147
148 :param pfn Current physical file name
149 :param new_pfn New physical file name
150
151 :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound
152 """
153 raise NotImplementedError
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/rucio/rse/protocols/storm.py b/lib/rucio/rse/protocols/storm.py
--- a/lib/rucio/rse/protocols/storm.py
+++ b/lib/rucio/rse/protocols/storm.py
@@ -43,8 +43,23 @@
return '%s/%s' % (scope, name)
def lfns2pfns(self, lfns):
- """ In this case, just returns back lfn. """
- return lfns
+ """ In this case, just returns back lfn with external scope. """
+ if type(lfns) == dict:
+ val = lfns.copy()
+ if 'scope' in val and val['scope'] is not None:
+ val['scope'] = val['scope'].external
+
+ elif type(lfns) == list:
+ val = []
+ for l in lfns:
+ v = l.copy()
+ if 'scope' in v and v['scope'] is not None:
+ v['scope'] = v['scope'].external
+ val.append(v)
+
+ else:
+ val = lfns
+ return val
def path2pfn(self, path):
"""
| {"golden_diff": "diff --git a/lib/rucio/rse/protocols/storm.py b/lib/rucio/rse/protocols/storm.py\n--- a/lib/rucio/rse/protocols/storm.py\n+++ b/lib/rucio/rse/protocols/storm.py\n@@ -43,8 +43,23 @@\n return '%s/%s' % (scope, name)\n \n def lfns2pfns(self, lfns):\n- \"\"\" In this case, just returns back lfn. \"\"\"\n- return lfns\n+ \"\"\" In this case, just returns back lfn with external scope. \"\"\"\n+ if type(lfns) == dict:\n+ val = lfns.copy()\n+ if 'scope' in val and val['scope'] is not None:\n+ val['scope'] = val['scope'].external\n+\n+ elif type(lfns) == list:\n+ val = []\n+ for l in lfns:\n+ v = l.copy()\n+ if 'scope' in v and v['scope'] is not None:\n+ v['scope'] = v['scope'].external\n+ val.append(v)\n+\n+ else:\n+ val = lfns\n+ return val\n \n def path2pfn(self, path):\n \"\"\"\n", "issue": "1.20.4rc2 storm protocol bug\nMotivation\r\n----------\r\nThe Storm protocol in RSEManager returns the input lfn as the pfn in lfns2pfns. This causes a crash as an InternalScope is then used as a dictionary key in list_replicas.\r\n\r\nModification\r\n------------\r\nThe lfns dictionary should be sanitised so that scope is returned as an external string.\r\n\n", "before_files": [{"content": "# Copyright European Organization for Nuclear Research (CERN)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# You may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Authors:\n# - Tomas Javor Javurek, <[email protected]>, 2019\n\n\nimport os\n\nfrom exceptions import NotImplementedError\nfrom xml.dom import minidom\n\nfrom rucio.common import exception\nfrom rucio.common.utils import run_cmd_process\nfrom rucio.rse.protocols import protocol\n\n\nclass Default(protocol.RSEProtocol):\n \"\"\" Implementing access to RSEs using the local filesystem.\"\"\"\n\n def __init__(self, protocol_attr, rse_settings):\n \"\"\" Initializes the object with information about the referred RSE.\n\n :param props Properties derived from the RSE Repository\n \"\"\"\n super(Default, self).__init__(protocol_attr, rse_settings)\n self.attributes.pop('determinism_type', None)\n self.files = []\n\n def _get_path(self, scope, name):\n \"\"\" Transforms the physical file name into the local URI in the referred RSE.\n Suitable for sites implementoing the RUCIO naming convention.\n\n :param name: filename\n :param scope: scope\n\n :returns: RSE specific URI of the physical file\n \"\"\"\n return '%s/%s' % (scope, name)\n\n def lfns2pfns(self, lfns):\n \"\"\" In this case, just returns back lfn. \"\"\"\n return lfns\n\n def path2pfn(self, path):\n \"\"\"\n Retruns a fully qualified PFN for the file referred by path.\n\n :param path: The path to the file.\n\n :returns: Fully qualified PFN.\n\n \"\"\"\n return ''.join([self.rse['scheme'], '://%s' % self.rse['hostname'], path])\n\n def exists(self, pfn):\n \"\"\" Checks if the requested file is known by the referred RSE.\n\n :param pfn Physical file name\n\n :returns: True if the file exists, False if it doesn't\n\n :raise ServiceUnavailable\n \"\"\"\n raise NotImplementedError\n\n def connect(self):\n \"\"\" Establishes the actual connection to the referred RSE.\n\n :param credentials Provide all necessary information to establish a connection\n to the referred storage system. 
Some is loaded from the repository inside the\n RSE class and some must be provided specific for the SFTP protocol like\n username, password, private_key, private_key_pass, port.\n For details about possible additional parameters and details about their usage\n see the pysftp.Connection() documentation.\n NOTE: the host parametrer is overwritten with the value provided by the repository\n\n :raise RSEAccessDenied\n \"\"\"\n pass\n\n def close(self):\n \"\"\" Closes the connection to RSE.\"\"\"\n pass\n\n def get(self, pfn, dest, transfer_timeout=None):\n \"\"\" Provides access to files stored inside connected the RSE.\n\n :param pfn Physical file name of requested file\n :param dest Name and path of the files when stored at the client\n :param transfer_timeout Transfer timeout (in seconds)\n\n :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound\n \"\"\"\n\n # storm prefix needs to be replaced by davs in order to get etag\n pfn = 'davs' + pfn[5:]\n\n # retrieve the TURL from the webdav etag, TODO: make it configurable\n cmd = 'davix-http --capath /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/etc/grid-security-emi/certificates --cert $X509_USER_PROXY -X PROPFIND %s' % pfn\n try:\n rcode, output = run_cmd_process(cmd, timeout=10)\n except Exception as e:\n raise exception.ServiceUnavailable('Could not retrieve STORM WebDAV ETag: %s' % str(e))\n p_output = minidom.parseString(output)\n\n # we need to strip off the quotation marks and the <timestamp> from the etag\n # but since we can have multiple underscores, we have to rely on the uniqueness\n # of the full LFN to make the split\n target = p_output.getElementsByTagName('d:getetag')[0].childNodes[0].nodeValue.replace('\"', '')\n target_ending = '_' + target.split('_')[-1]\n target = target.split(target_ending)[0]\n\n # make the symlink\n try:\n os.symlink(target, dest)\n except Exception as e:\n exception.ServiceUnavailable('Could not create symlink: %s for target %s' % (str(e), str(target)))\n\n def put(self, source, target, source_dir=None, transfer_timeout=None):\n \"\"\" Allows to store files inside the referred RSE.\n\n :param source Physical file name\n :param target Name of the file on the storage system e.g. 
with prefixed scope\n :param source_dir Path where the to be transferred files are stored in the local file system\n :param transfer_timeout Transfer timeout (in seconds)\n\n :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound\n \"\"\"\n raise NotImplementedError\n\n def delete(self, pfn):\n \"\"\" Deletes a file from the connected RSE.\n\n :param pfn Physical file name\n\n :raises ServiceUnavailable, SourceNotFound\n \"\"\"\n raise NotImplementedError\n\n def rename(self, pfn, new_pfn):\n \"\"\" Allows to rename a file stored inside the connected RSE.\n\n :param pfn Current physical file name\n :param new_pfn New physical file name\n\n :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound\n \"\"\"\n raise NotImplementedError\n", "path": "lib/rucio/rse/protocols/storm.py"}], "after_files": [{"content": "# Copyright European Organization for Nuclear Research (CERN)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# You may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Authors:\n# - Tomas Javor Javurek, <[email protected]>, 2019\n\n\nimport os\n\nfrom exceptions import NotImplementedError\nfrom xml.dom import minidom\n\nfrom rucio.common import exception\nfrom rucio.common.utils import run_cmd_process\nfrom rucio.rse.protocols import protocol\n\n\nclass Default(protocol.RSEProtocol):\n \"\"\" Implementing access to RSEs using the local filesystem.\"\"\"\n\n def __init__(self, protocol_attr, rse_settings):\n \"\"\" Initializes the object with information about the referred RSE.\n\n :param props Properties derived from the RSE Repository\n \"\"\"\n super(Default, self).__init__(protocol_attr, rse_settings)\n self.attributes.pop('determinism_type', None)\n self.files = []\n\n def _get_path(self, scope, name):\n \"\"\" Transforms the physical file name into the local URI in the referred RSE.\n Suitable for sites implementoing the RUCIO naming convention.\n\n :param name: filename\n :param scope: scope\n\n :returns: RSE specific URI of the physical file\n \"\"\"\n return '%s/%s' % (scope, name)\n\n def lfns2pfns(self, lfns):\n \"\"\" In this case, just returns back lfn with external scope. \"\"\"\n if type(lfns) == dict:\n val = lfns.copy()\n if 'scope' in val and val['scope'] is not None:\n val['scope'] = val['scope'].external\n\n elif type(lfns) == list:\n val = []\n for l in lfns:\n v = l.copy()\n if 'scope' in v and v['scope'] is not None:\n v['scope'] = v['scope'].external\n val.append(v)\n\n else:\n val = lfns\n return val\n\n def path2pfn(self, path):\n \"\"\"\n Retruns a fully qualified PFN for the file referred by path.\n\n :param path: The path to the file.\n\n :returns: Fully qualified PFN.\n\n \"\"\"\n return ''.join([self.rse['scheme'], '://%s' % self.rse['hostname'], path])\n\n def exists(self, pfn):\n \"\"\" Checks if the requested file is known by the referred RSE.\n\n :param pfn Physical file name\n\n :returns: True if the file exists, False if it doesn't\n\n :raise ServiceUnavailable\n \"\"\"\n raise NotImplementedError\n\n def connect(self):\n \"\"\" Establishes the actual connection to the referred RSE.\n\n :param credentials Provide all necessary information to establish a connection\n to the referred storage system. 
Some is loaded from the repository inside the\n RSE class and some must be provided specific for the SFTP protocol like\n username, password, private_key, private_key_pass, port.\n For details about possible additional parameters and details about their usage\n see the pysftp.Connection() documentation.\n NOTE: the host parametrer is overwritten with the value provided by the repository\n\n :raise RSEAccessDenied\n \"\"\"\n pass\n\n def close(self):\n \"\"\" Closes the connection to RSE.\"\"\"\n pass\n\n def get(self, pfn, dest, transfer_timeout=None):\n \"\"\" Provides access to files stored inside connected the RSE.\n\n :param pfn Physical file name of requested file\n :param dest Name and path of the files when stored at the client\n :param transfer_timeout Transfer timeout (in seconds)\n\n :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound\n \"\"\"\n\n # storm prefix needs to be replaced by davs in order to get etag\n pfn = 'davs' + pfn[5:]\n\n # retrieve the TURL from the webdav etag, TODO: make it configurable\n cmd = 'davix-http --capath /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/etc/grid-security-emi/certificates --cert $X509_USER_PROXY -X PROPFIND %s' % pfn\n try:\n rcode, output = run_cmd_process(cmd, timeout=10)\n except Exception as e:\n raise exception.ServiceUnavailable('Could not retrieve STORM WebDAV ETag: %s' % str(e))\n p_output = minidom.parseString(output)\n\n # we need to strip off the quotation marks and the <timestamp> from the etag\n # but since we can have multiple underscores, we have to rely on the uniqueness\n # of the full LFN to make the split\n target = p_output.getElementsByTagName('d:getetag')[0].childNodes[0].nodeValue.replace('\"', '')\n target_ending = '_' + target.split('_')[-1]\n target = target.split(target_ending)[0]\n\n # make the symlink\n try:\n os.symlink(target, dest)\n except Exception as e:\n exception.ServiceUnavailable('Could not create symlink: %s for target %s' % (str(e), str(target)))\n\n def put(self, source, target, source_dir=None, transfer_timeout=None):\n \"\"\" Allows to store files inside the referred RSE.\n\n :param source Physical file name\n :param target Name of the file on the storage system e.g. with prefixed scope\n :param source_dir Path where the to be transferred files are stored in the local file system\n :param transfer_timeout Transfer timeout (in seconds)\n\n :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound\n \"\"\"\n raise NotImplementedError\n\n def delete(self, pfn):\n \"\"\" Deletes a file from the connected RSE.\n\n :param pfn Physical file name\n\n :raises ServiceUnavailable, SourceNotFound\n \"\"\"\n raise NotImplementedError\n\n def rename(self, pfn, new_pfn):\n \"\"\" Allows to rename a file stored inside the connected RSE.\n\n :param pfn Current physical file name\n :param new_pfn New physical file name\n\n :raises DestinationNotAccessible, ServiceUnavailable, SourceNotFound\n \"\"\"\n raise NotImplementedError\n", "path": "lib/rucio/rse/protocols/storm.py"}]} | 1,989 | 278 |
gh_patches_debug_24799 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-645 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deliver fallback of missing imprint translations in API
### Motivation
<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->
The imprint is mandatory for all regions and languages.
### Proposed Solution
<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->
Always return a result in the [imprint API](https://github.com/Integreat/integreat-cms/blob/develop/src/api/v3/imprint.py). If the translation is missing, deliver the imprint in the region's default language.
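A rough sketch of that fallback, reusing the existing `get_public_translation` helper (the wrapper function name here is hypothetical):

```python
def get_imprint_translation_with_fallback(region, language_code):
    """Return the public imprint translation, falling back to the region's default language."""
    translation = region.imprint.get_public_translation(language_code)
    if not translation and region.default_language:
        # Fall back to the region's default language if the requested one is missing.
        translation = region.imprint.get_public_translation(region.default_language.code)
    return translation
```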
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, and why you're proposed solution is better. -->
### Additional Context
<!-- Add any other information or screenshots about the feature request here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/api/v3/imprint.py`
Content:
```
1 """
2 imprint API endpoint
3 """
4 from django.http import JsonResponse
5
6 from backend.settings import BASE_URL
7 from cms.models import Region
8
9 from ..decorators import json_response
10
11
12 def transform_imprint(imprint_translation):
13 """
14 Function to create a JSON from a single imprint_translation object.
15
16 :param imprint_translation: single page translation object
17 :type imprint_translation: ~cms.models.pages.page_translation.PageTranslation
18
19 :return: return data necessary for API
20 :rtype: dict
21 """
22 if imprint_translation.page.icon:
23 thumbnail = BASE_URL + imprint_translation.page.icon.url
24 else:
25 thumbnail = None
26 return {
27 "id": imprint_translation.id,
28 "url": imprint_translation.permalink,
29 "title": imprint_translation.title,
30 "modified_gmt": imprint_translation.last_updated,
31 "excerpt": imprint_translation.text,
32 "content": imprint_translation.text,
33 "parent": None,
34 "available_languages": imprint_translation.available_languages,
35 "thumbnail": thumbnail,
36 "hash": None,
37 }
38
39
40 @json_response
41 # pylint: disable=unused-argument
42 def imprint(request, region_slug, language_code):
43 """
44 Get imprint for language and return JSON object to client
45
46 :param request: Django request
47 :type request: ~django.http.HttpRequest
48 :param region_slug: slug of a region
49 :type region_slug: str
50 :param language_code: language code
51 :type language_code: str
52
53 :return: JSON object according to APIv3 imprint endpoint definition
54 :rtype: ~django.http.JsonResponse
55 """
56 region = Region.get_current_region(request)
57 if hasattr(region, "imprint"):
58 imprint_translation = region.imprint.get_public_translation(language_code)
59 if imprint_translation:
60 return JsonResponse(transform_imprint(imprint_translation))
61 # If imprint does not exist, return an empty response. Turn off Safe-Mode to allow serializing arrays
62 return JsonResponse([], safe=False)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/api/v3/imprint.py b/src/api/v3/imprint.py
--- a/src/api/v3/imprint.py
+++ b/src/api/v3/imprint.py
@@ -41,7 +41,9 @@
# pylint: disable=unused-argument
def imprint(request, region_slug, language_code):
"""
- Get imprint for language and return JSON object to client
+ Get imprint for language and return JSON object to client. If no imprint translation
+ is available in the selected language, try to return the translation in the region
+ default language.
:param request: Django request
:type request: ~django.http.HttpRequest
@@ -58,5 +60,11 @@
imprint_translation = region.imprint.get_public_translation(language_code)
if imprint_translation:
return JsonResponse(transform_imprint(imprint_translation))
+ if region.default_language:
+ imprint_default_translation = region.imprint.get_public_translation(
+ region.default_language.code
+ )
+ if imprint_default_translation:
+ return JsonResponse(transform_imprint(imprint_default_translation))
# If imprint does not exist, return an empty response. Turn off Safe-Mode to allow serializing arrays
return JsonResponse([], safe=False)
| {"golden_diff": "diff --git a/src/api/v3/imprint.py b/src/api/v3/imprint.py\n--- a/src/api/v3/imprint.py\n+++ b/src/api/v3/imprint.py\n@@ -41,7 +41,9 @@\n # pylint: disable=unused-argument\n def imprint(request, region_slug, language_code):\n \"\"\"\n- Get imprint for language and return JSON object to client\n+ Get imprint for language and return JSON object to client. If no imprint translation\n+ is available in the selected language, try to return the translation in the region\n+ default language.\n \n :param request: Django request\n :type request: ~django.http.HttpRequest\n@@ -58,5 +60,11 @@\n imprint_translation = region.imprint.get_public_translation(language_code)\n if imprint_translation:\n return JsonResponse(transform_imprint(imprint_translation))\n+ if region.default_language:\n+ imprint_default_translation = region.imprint.get_public_translation(\n+ region.default_language.code\n+ )\n+ if imprint_default_translation:\n+ return JsonResponse(transform_imprint(imprint_default_translation))\n # If imprint does not exist, return an empty response. Turn off Safe-Mode to allow serializing arrays\n return JsonResponse([], safe=False)\n", "issue": "Deliver fallback of missing imprint translations in API\n### Motivation\r\n<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->\r\nThe imprint is mandatory for all regions and languages.\r\n\r\n### Proposed Solution\r\n<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->\r\nAlways return a result in the [imprint API](https://github.com/Integreat/integreat-cms/blob/develop/src/api/v3/imprint.py). If the translation is missing, deliver the imprint in the region's default language.\r\n\r\n### Alternatives\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, and why you're proposed solution is better. -->\r\n\r\n\r\n### Additional Context\r\n<!-- Add any other information or screenshots about the feature request here. 
-->\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nimprint API endpoint\n\"\"\"\nfrom django.http import JsonResponse\n\nfrom backend.settings import BASE_URL\nfrom cms.models import Region\n\nfrom ..decorators import json_response\n\n\ndef transform_imprint(imprint_translation):\n \"\"\"\n Function to create a JSON from a single imprint_translation object.\n\n :param imprint_translation: single page translation object\n :type imprint_translation: ~cms.models.pages.page_translation.PageTranslation\n\n :return: return data necessary for API\n :rtype: dict\n \"\"\"\n if imprint_translation.page.icon:\n thumbnail = BASE_URL + imprint_translation.page.icon.url\n else:\n thumbnail = None\n return {\n \"id\": imprint_translation.id,\n \"url\": imprint_translation.permalink,\n \"title\": imprint_translation.title,\n \"modified_gmt\": imprint_translation.last_updated,\n \"excerpt\": imprint_translation.text,\n \"content\": imprint_translation.text,\n \"parent\": None,\n \"available_languages\": imprint_translation.available_languages,\n \"thumbnail\": thumbnail,\n \"hash\": None,\n }\n\n\n@json_response\n# pylint: disable=unused-argument\ndef imprint(request, region_slug, language_code):\n \"\"\"\n Get imprint for language and return JSON object to client\n\n :param request: Django request\n :type request: ~django.http.HttpRequest\n :param region_slug: slug of a region\n :type region_slug: str\n :param language_code: language code\n :type language_code: str\n\n :return: JSON object according to APIv3 imprint endpoint definition\n :rtype: ~django.http.JsonResponse\n \"\"\"\n region = Region.get_current_region(request)\n if hasattr(region, \"imprint\"):\n imprint_translation = region.imprint.get_public_translation(language_code)\n if imprint_translation:\n return JsonResponse(transform_imprint(imprint_translation))\n # If imprint does not exist, return an empty response. Turn off Safe-Mode to allow serializing arrays\n return JsonResponse([], safe=False)\n", "path": "src/api/v3/imprint.py"}], "after_files": [{"content": "\"\"\"\nimprint API endpoint\n\"\"\"\nfrom django.http import JsonResponse\n\nfrom backend.settings import BASE_URL\nfrom cms.models import Region\n\nfrom ..decorators import json_response\n\n\ndef transform_imprint(imprint_translation):\n \"\"\"\n Function to create a JSON from a single imprint_translation object.\n\n :param imprint_translation: single page translation object\n :type imprint_translation: ~cms.models.pages.page_translation.PageTranslation\n\n :return: return data necessary for API\n :rtype: dict\n \"\"\"\n if imprint_translation.page.icon:\n thumbnail = BASE_URL + imprint_translation.page.icon.url\n else:\n thumbnail = None\n return {\n \"id\": imprint_translation.id,\n \"url\": imprint_translation.permalink,\n \"title\": imprint_translation.title,\n \"modified_gmt\": imprint_translation.last_updated,\n \"excerpt\": imprint_translation.text,\n \"content\": imprint_translation.text,\n \"parent\": None,\n \"available_languages\": imprint_translation.available_languages,\n \"thumbnail\": thumbnail,\n \"hash\": None,\n }\n\n\n@json_response\n# pylint: disable=unused-argument\ndef imprint(request, region_slug, language_code):\n \"\"\"\n Get imprint for language and return JSON object to client. 
If no imprint translation\n is available in the selected language, try to return the translation in the region\n default language.\n\n :param request: Django request\n :type request: ~django.http.HttpRequest\n :param region_slug: slug of a region\n :type region_slug: str\n :param language_code: language code\n :type language_code: str\n\n :return: JSON object according to APIv3 imprint endpoint definition\n :rtype: ~django.http.JsonResponse\n \"\"\"\n region = Region.get_current_region(request)\n if hasattr(region, \"imprint\"):\n imprint_translation = region.imprint.get_public_translation(language_code)\n if imprint_translation:\n return JsonResponse(transform_imprint(imprint_translation))\n if region.default_language:\n imprint_default_translation = region.imprint.get_public_translation(\n region.default_language.code\n )\n if imprint_default_translation:\n return JsonResponse(transform_imprint(imprint_default_translation))\n # If imprint does not exist, return an empty response. Turn off Safe-Mode to allow serializing arrays\n return JsonResponse([], safe=False)\n", "path": "src/api/v3/imprint.py"}]} | 973 | 268 |
gh_patches_debug_11379 | rasdani/github-patches | git_diff | networkx__networkx-1045 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need JSON format description
The page on JSON serialization lacks information about the actual structure of the produced data. This makes it hard to see whether networkx is a suitable tool to back an already existing JavaScript front end.
http://networkx.lanl.gov/reference/readwrite.json_graph.html
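For orientation, the node-link serialiser returns a plain, JSON-serializable dict roughly of the following shape (the exact shape varies between networkx versions — e.g. older releases use integer indices for link endpoints):

```python
import networkx as nx
from networkx.readwrite import json_graph

G = nx.Graph()
G.add_edge("a", "b", weight=3)
data = json_graph.node_link_data(G)
# data is roughly of the form:
# {"directed": False, "multigraph": False, "graph": {},
#  "nodes": [{"id": "a"}, {"id": "b"}],
#  "links": [{"weight": 3, "source": "a", "target": "b"}]}
```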
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `networkx/readwrite/json_graph/__init__.py`
Content:
```
1 """
2 *********
3 JSON data
4 *********
5 Generate and parse JSON serializable data for NetworkX graphs.
6 """
7 from networkx.readwrite.json_graph.node_link import *
8 from networkx.readwrite.json_graph.adjacency import *
9 from networkx.readwrite.json_graph.tree import *
10 from networkx.readwrite.json_graph.serialize import *
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/networkx/readwrite/json_graph/__init__.py b/networkx/readwrite/json_graph/__init__.py
--- a/networkx/readwrite/json_graph/__init__.py
+++ b/networkx/readwrite/json_graph/__init__.py
@@ -1,8 +1,16 @@
"""
*********
-JSON data
+JSON data
*********
Generate and parse JSON serializable data for NetworkX graphs.
+
+These formats are suitable for use with the d3.js examples http://d3js.org/
+
+The three formats that you can generate with NetworkX are:
+
+ - node-link like in the d3.js example http://bl.ocks.org/mbostock/4062045
+ - tree like in the d3.js example http://bl.ocks.org/mbostock/4063550
+ - adjacency like in the d3.js example http://bost.ocks.org/mike/miserables/
"""
from networkx.readwrite.json_graph.node_link import *
from networkx.readwrite.json_graph.adjacency import *
| {"golden_diff": "diff --git a/networkx/readwrite/json_graph/__init__.py b/networkx/readwrite/json_graph/__init__.py\n--- a/networkx/readwrite/json_graph/__init__.py\n+++ b/networkx/readwrite/json_graph/__init__.py\n@@ -1,8 +1,16 @@\n \"\"\"\n *********\n-JSON data \n+JSON data\n *********\n Generate and parse JSON serializable data for NetworkX graphs.\n+\n+These formats are suitable for use with the d3.js examples http://d3js.org/\n+\n+The three formats that you can generate with NetworkX are:\n+\n+ - node-link like in the d3.js example http://bl.ocks.org/mbostock/4062045\n+ - tree like in the d3.js example http://bl.ocks.org/mbostock/4063550\n+ - adjacency like in the d3.js example http://bost.ocks.org/mike/miserables/\n \"\"\"\n from networkx.readwrite.json_graph.node_link import *\n from networkx.readwrite.json_graph.adjacency import *\n", "issue": "Need JSON format description\nThe page on JSON serialization lacks information about the actual structure of produced data. This make it hard to see if networkx is a suitable tool for a backend of already existing JavaScript front.\n\nhttp://networkx.lanl.gov/reference/readwrite.json_graph.html\n\n", "before_files": [{"content": "\"\"\"\n*********\nJSON data \n*********\nGenerate and parse JSON serializable data for NetworkX graphs.\n\"\"\"\nfrom networkx.readwrite.json_graph.node_link import *\nfrom networkx.readwrite.json_graph.adjacency import *\nfrom networkx.readwrite.json_graph.tree import *\nfrom networkx.readwrite.json_graph.serialize import *\n", "path": "networkx/readwrite/json_graph/__init__.py"}], "after_files": [{"content": "\"\"\"\n*********\nJSON data\n*********\nGenerate and parse JSON serializable data for NetworkX graphs.\n\nThese formats are suitable for use with the d3.js examples http://d3js.org/\n\nThe three formats that you can generate with NetworkX are:\n\n - node-link like in the d3.js example http://bl.ocks.org/mbostock/4062045\n - tree like in the d3.js example http://bl.ocks.org/mbostock/4063550\n - adjacency like in the d3.js example http://bost.ocks.org/mike/miserables/\n\"\"\"\nfrom networkx.readwrite.json_graph.node_link import *\nfrom networkx.readwrite.json_graph.adjacency import *\nfrom networkx.readwrite.json_graph.tree import *\nfrom networkx.readwrite.json_graph.serialize import *\n", "path": "networkx/readwrite/json_graph/__init__.py"}]} | 397 | 229 |
gh_patches_debug_8211 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-1975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show banner throughout application when "live demo mode" is turned on.
We should show a banner at the top of the screen on all pages explaining that Mathesar is in live demo mode, that each session has its own copy of the demo data, and that the data will be deleted regularly.
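A minimal way to expose such a flag to every page is a Django context processor along these lines (a sketch; it assumes a `MATHESAR_LIVE_DEMO` setting that templates can check when rendering the banner):

```python
from django.conf import settings


def live_demo_settings(request):
    # Make the flag available to all templates so the banner can be rendered globally.
    return {'live_demo_mode': getattr(settings, 'MATHESAR_LIVE_DEMO', False)}
```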
Assigning this to @mathemancer to make sure it gets implemented at some point, @ghislaineguerin for the design, and @pavish for the frontend.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/context_processors.py`
Content:
```
1 from django.conf import settings
2
3 from mathesar.utils.frontend import get_manifest_data
4
5
6 def frontend_settings(request):
7 frontend_settings = {
8 'development_mode': settings.MATHESAR_MODE == 'DEVELOPMENT',
9 'manifest_data': get_manifest_data()
10 }
11 # Only include development URL if we're in development mode.
12 if frontend_settings['development_mode'] is True:
13 frontend_settings['client_dev_url'] = settings.MATHESAR_CLIENT_DEV_URL
14 return frontend_settings
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/config/context_processors.py b/config/context_processors.py
--- a/config/context_processors.py
+++ b/config/context_processors.py
@@ -6,7 +6,8 @@
def frontend_settings(request):
frontend_settings = {
'development_mode': settings.MATHESAR_MODE == 'DEVELOPMENT',
- 'manifest_data': get_manifest_data()
+ 'manifest_data': get_manifest_data(),
+ 'live_demo_mode': getattr(settings, 'MATHESAR_LIVE_DEMO', False)
}
# Only include development URL if we're in development mode.
if frontend_settings['development_mode'] is True:
| {"golden_diff": "diff --git a/config/context_processors.py b/config/context_processors.py\n--- a/config/context_processors.py\n+++ b/config/context_processors.py\n@@ -6,7 +6,8 @@\n def frontend_settings(request):\n frontend_settings = {\n 'development_mode': settings.MATHESAR_MODE == 'DEVELOPMENT',\n- 'manifest_data': get_manifest_data()\n+ 'manifest_data': get_manifest_data(),\n+ 'live_demo_mode': getattr(settings, 'MATHESAR_LIVE_DEMO', False)\n }\n # Only include development URL if we're in development mode.\n if frontend_settings['development_mode'] is True:\n", "issue": "Show banner throughout application when \"live demo mode\" is turned on.\nWe should show a banner at the top of the screen on all pages that explains that Mathesar is in live demo mode and that each session has its own copy of demo data and that data will be deleted regularly.\r\n\r\nAssigning this to @mathemancer to make sure it gets implemented at some point, @ghislaineguerin for the design, and @pavish for the frontend.\n", "before_files": [{"content": "from django.conf import settings\n\nfrom mathesar.utils.frontend import get_manifest_data\n\n\ndef frontend_settings(request):\n frontend_settings = {\n 'development_mode': settings.MATHESAR_MODE == 'DEVELOPMENT',\n 'manifest_data': get_manifest_data()\n }\n # Only include development URL if we're in development mode.\n if frontend_settings['development_mode'] is True:\n frontend_settings['client_dev_url'] = settings.MATHESAR_CLIENT_DEV_URL\n return frontend_settings\n", "path": "config/context_processors.py"}], "after_files": [{"content": "from django.conf import settings\n\nfrom mathesar.utils.frontend import get_manifest_data\n\n\ndef frontend_settings(request):\n frontend_settings = {\n 'development_mode': settings.MATHESAR_MODE == 'DEVELOPMENT',\n 'manifest_data': get_manifest_data(),\n 'live_demo_mode': getattr(settings, 'MATHESAR_LIVE_DEMO', False)\n }\n # Only include development URL if we're in development mode.\n if frontend_settings['development_mode'] is True:\n frontend_settings['client_dev_url'] = settings.MATHESAR_CLIENT_DEV_URL\n return frontend_settings\n", "path": "config/context_processors.py"}]} | 479 | 133 |
gh_patches_debug_27688 | rasdani/github-patches | git_diff | google__turbinia-802 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Plaso hangs on VSS prompt
We should set `--vss_stores none` by default and also pass the `--unattended` flag.
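A sketch of what the default command construction would then look like (illustrative only; the real task builds its argument list from the evidence config):

```python
vss = evidence.config.get('vss') if evidence.config else None
if not vss:
    vss = 'none'  # suppress the interactive VSS prompt by default

cmd = [
    'log2timeline.py', '--status_view', 'none', '--hashers', 'all',
    '--partition', 'all', '--unattended',
]
cmd.extend(['--vss_stores', vss])
```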
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/workers/plaso.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20 from tempfile import NamedTemporaryFile
21
22 from turbinia import config
23 from turbinia.evidence import APFSEncryptedDisk
24 from turbinia.evidence import EvidenceState as state
25 from turbinia.evidence import PlasoFile
26 from turbinia.workers import TurbiniaTask
27
28
29 class PlasoTask(TurbiniaTask):
30 """Task to run Plaso (log2timeline)."""
31
32 # Plaso requires the Disk to be attached, but doesn't require it be mounted.
33 REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]
34
35 def run(self, evidence, result):
36 """Task that process data with Plaso.
37
38 Args:
39 evidence (Evidence object): The evidence we will process.
40 result (TurbiniaTaskResult): The object to place task results into.
41
42 Returns:
43 TurbiniaTaskResult object.
44 """
45 config.LoadConfig()
46
47 # TODO: Convert to using real recipes after
48 # https://github.com/google/turbinia/pull/486 is in. For now we're just
49 # using the --recipe_config flag, and this can be used with colon separated
50 # values like:
51 # --recipe_config='artifact_filters=BrowserFoo:BrowserBar,parsers=foo:bar'
52 if evidence.config and evidence.config.get('artifact_filters'):
53 artifact_filters = evidence.config.get('artifact_filters')
54 artifact_filters = artifact_filters.replace(':', ',')
55 else:
56 artifact_filters = None
57
58 if evidence.config and evidence.config.get('parsers'):
59 parsers = evidence.config.get('parsers')
60 parsers = parsers.replace(':', ',')
61 else:
62 parsers = None
63
64 if evidence.config and evidence.config.get('file_filters'):
65 file_filters = evidence.config.get('file_filters')
66 file_filter_file = os.path.join(self.tmp_dir, 'file_filter.txt')
67 try:
68 with open(file_filter_file, 'wb') as file_filter_fh:
69 for filter_ in file_filters.split(':'):
70 file_filter_fh.write(filter_.encode('utf-8') + b'\n')
71 except IOError as exception:
72 message = 'Cannot write to filter file {0:s}: {1!s}'.format(
73 file_filter_file, exception)
74 result.close(self, success=False, status=message)
75 return result
76 else:
77 file_filters = None
78 file_filter_file = None
79
80 if evidence.config and evidence.config.get('vss'):
81 vss = evidence.config.get('vss')
82 else:
83 vss = None
84
85 if evidence.config and evidence.config.get('yara_rules'):
86 yara_rules = evidence.config.get('yara_rules')
87 with NamedTemporaryFile(dir=self.tmp_dir, delete=False, mode='w') as fh:
88 yara_file_path = fh.name
89 fh.write(yara_rules)
90 else:
91 yara_rules = None
92
93 # Write plaso file into tmp_dir because sqlite has issues with some shared
94 # filesystems (e.g NFS).
95 plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))
96 plaso_evidence = PlasoFile(source_path=plaso_file)
97 plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))
98
99 # TODO(aarontp): Move these flags into a recipe
100 cmd = (
101 'log2timeline.py --status_view none --hashers all '
102 '--partition all').split()
103 if config.DEBUG_TASKS or evidence.config.get('debug_tasks'):
104 cmd.append('-d')
105 if artifact_filters:
106 cmd.extend(['--artifact_filters', artifact_filters])
107 if parsers:
108 cmd.extend(['--parsers', parsers])
109 if file_filters:
110 cmd.extend(['--file_filter', file_filter_file])
111 if vss:
112 cmd.extend(['--vss_stores', vss])
113 if yara_rules:
114 cmd.extend(['--yara_rules', yara_file_path])
115
116 # TODO(dfjxs): This can be removed once APFS encryption is implemented
117 # natively in Turbinia
118 if isinstance(evidence, APFSEncryptedDisk):
119 if evidence.recovery_key:
120 cmd.extend([
121 '--credential', 'recovery_password:{0:s}'.format(
122 evidence.recovery_key)
123 ])
124 elif evidence.password:
125 cmd.extend(['--credential', 'password:{0:s}'.format(evidence.password)])
126 else:
127 result.close(
128 self, False, 'No credentials were provided '
129 'for a bitlocker disk.')
130 return result
131
132 if evidence.credentials:
133 for credential in evidence.credentials:
134 credential_type = credential['credential_type']
135 credential_data = credential['credential_data']
136 cmd.extend([
137 '--credential', '{0:s}:{1:s}'.format(
138 credential_type, credential_data)
139 ])
140
141 cmd.extend(['--temporary_directory', self.tmp_dir])
142 cmd.extend(['--logfile', plaso_log])
143 cmd.extend([plaso_file, evidence.local_path])
144
145 result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
146
147 self.execute(
148 cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],
149 close=True)
150
151 return result
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py
--- a/turbinia/workers/plaso.py
+++ b/turbinia/workers/plaso.py
@@ -80,7 +80,7 @@
if evidence.config and evidence.config.get('vss'):
vss = evidence.config.get('vss')
else:
- vss = None
+ vss = 'none'
if evidence.config and evidence.config.get('yara_rules'):
yara_rules = evidence.config.get('yara_rules')
@@ -99,7 +99,7 @@
# TODO(aarontp): Move these flags into a recipe
cmd = (
'log2timeline.py --status_view none --hashers all '
- '--partition all').split()
+ '--partition all -u').split()
if config.DEBUG_TASKS or evidence.config.get('debug_tasks'):
cmd.append('-d')
if artifact_filters:
@@ -108,10 +108,9 @@
cmd.extend(['--parsers', parsers])
if file_filters:
cmd.extend(['--file_filter', file_filter_file])
- if vss:
- cmd.extend(['--vss_stores', vss])
if yara_rules:
cmd.extend(['--yara_rules', yara_file_path])
+ cmd.extend(['--vss_stores', vss])
# TODO(dfjxs): This can be removed once APFS encryption is implemented
# natively in Turbinia
| {"golden_diff": "diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py\n--- a/turbinia/workers/plaso.py\n+++ b/turbinia/workers/plaso.py\n@@ -80,7 +80,7 @@\n if evidence.config and evidence.config.get('vss'):\n vss = evidence.config.get('vss')\n else:\n- vss = None\n+ vss = 'none'\n \n if evidence.config and evidence.config.get('yara_rules'):\n yara_rules = evidence.config.get('yara_rules')\n@@ -99,7 +99,7 @@\n # TODO(aarontp): Move these flags into a recipe\n cmd = (\n 'log2timeline.py --status_view none --hashers all '\n- '--partition all').split()\n+ '--partition all -u').split()\n if config.DEBUG_TASKS or evidence.config.get('debug_tasks'):\n cmd.append('-d')\n if artifact_filters:\n@@ -108,10 +108,9 @@\n cmd.extend(['--parsers', parsers])\n if file_filters:\n cmd.extend(['--file_filter', file_filter_file])\n- if vss:\n- cmd.extend(['--vss_stores', vss])\n if yara_rules:\n cmd.extend(['--yara_rules', yara_file_path])\n+ cmd.extend(['--vss_stores', vss])\n \n # TODO(dfjxs): This can be removed once APFS encryption is implemented\n # natively in Turbinia\n", "issue": "Plaso hangs on VSS prompt\nWe should set `--vss_stores none` by default and also pass the `--unattended` flag.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nfrom tempfile import NamedTemporaryFile\n\nfrom turbinia import config\nfrom turbinia.evidence import APFSEncryptedDisk\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n config.LoadConfig()\n\n # TODO: Convert to using real recipes after\n # https://github.com/google/turbinia/pull/486 is in. 
For now we're just\n # using the --recipe_config flag, and this can be used with colon separated\n # values like:\n # --recipe_config='artifact_filters=BrowserFoo:BrowserBar,parsers=foo:bar'\n if evidence.config and evidence.config.get('artifact_filters'):\n artifact_filters = evidence.config.get('artifact_filters')\n artifact_filters = artifact_filters.replace(':', ',')\n else:\n artifact_filters = None\n\n if evidence.config and evidence.config.get('parsers'):\n parsers = evidence.config.get('parsers')\n parsers = parsers.replace(':', ',')\n else:\n parsers = None\n\n if evidence.config and evidence.config.get('file_filters'):\n file_filters = evidence.config.get('file_filters')\n file_filter_file = os.path.join(self.tmp_dir, 'file_filter.txt')\n try:\n with open(file_filter_file, 'wb') as file_filter_fh:\n for filter_ in file_filters.split(':'):\n file_filter_fh.write(filter_.encode('utf-8') + b'\\n')\n except IOError as exception:\n message = 'Cannot write to filter file {0:s}: {1!s}'.format(\n file_filter_file, exception)\n result.close(self, success=False, status=message)\n return result\n else:\n file_filters = None\n file_filter_file = None\n\n if evidence.config and evidence.config.get('vss'):\n vss = evidence.config.get('vss')\n else:\n vss = None\n\n if evidence.config and evidence.config.get('yara_rules'):\n yara_rules = evidence.config.get('yara_rules')\n with NamedTemporaryFile(dir=self.tmp_dir, delete=False, mode='w') as fh:\n yara_file_path = fh.name\n fh.write(yara_rules)\n else:\n yara_rules = None\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n # TODO(aarontp): Move these flags into a recipe\n cmd = (\n 'log2timeline.py --status_view none --hashers all '\n '--partition all').split()\n if config.DEBUG_TASKS or evidence.config.get('debug_tasks'):\n cmd.append('-d')\n if artifact_filters:\n cmd.extend(['--artifact_filters', artifact_filters])\n if parsers:\n cmd.extend(['--parsers', parsers])\n if file_filters:\n cmd.extend(['--file_filter', file_filter_file])\n if vss:\n cmd.extend(['--vss_stores', vss])\n if yara_rules:\n cmd.extend(['--yara_rules', yara_file_path])\n\n # TODO(dfjxs): This can be removed once APFS encryption is implemented\n # natively in Turbinia\n if isinstance(evidence, APFSEncryptedDisk):\n if evidence.recovery_key:\n cmd.extend([\n '--credential', 'recovery_password:{0:s}'.format(\n evidence.recovery_key)\n ])\n elif evidence.password:\n cmd.extend(['--credential', 'password:{0:s}'.format(evidence.password)])\n else:\n result.close(\n self, False, 'No credentials were provided '\n 'for a bitlocker disk.')\n return result\n\n if evidence.credentials:\n for credential in evidence.credentials:\n credential_type = credential['credential_type']\n credential_data = credential['credential_data']\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend([plaso_file, evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}], "after_files": [{"content": "# -*- coding: utf-8 
-*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nfrom tempfile import NamedTemporaryFile\n\nfrom turbinia import config\nfrom turbinia.evidence import APFSEncryptedDisk\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n config.LoadConfig()\n\n # TODO: Convert to using real recipes after\n # https://github.com/google/turbinia/pull/486 is in. For now we're just\n # using the --recipe_config flag, and this can be used with colon separated\n # values like:\n # --recipe_config='artifact_filters=BrowserFoo:BrowserBar,parsers=foo:bar'\n if evidence.config and evidence.config.get('artifact_filters'):\n artifact_filters = evidence.config.get('artifact_filters')\n artifact_filters = artifact_filters.replace(':', ',')\n else:\n artifact_filters = None\n\n if evidence.config and evidence.config.get('parsers'):\n parsers = evidence.config.get('parsers')\n parsers = parsers.replace(':', ',')\n else:\n parsers = None\n\n if evidence.config and evidence.config.get('file_filters'):\n file_filters = evidence.config.get('file_filters')\n file_filter_file = os.path.join(self.tmp_dir, 'file_filter.txt')\n try:\n with open(file_filter_file, 'wb') as file_filter_fh:\n for filter_ in file_filters.split(':'):\n file_filter_fh.write(filter_.encode('utf-8') + b'\\n')\n except IOError as exception:\n message = 'Cannot write to filter file {0:s}: {1!s}'.format(\n file_filter_file, exception)\n result.close(self, success=False, status=message)\n return result\n else:\n file_filters = None\n file_filter_file = None\n\n if evidence.config and evidence.config.get('vss'):\n vss = evidence.config.get('vss')\n else:\n vss = 'none'\n\n if evidence.config and evidence.config.get('yara_rules'):\n yara_rules = evidence.config.get('yara_rules')\n with NamedTemporaryFile(dir=self.tmp_dir, delete=False, mode='w') as fh:\n yara_file_path = fh.name\n fh.write(yara_rules)\n else:\n yara_rules = None\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n # TODO(aarontp): Move these flags into a recipe\n cmd = (\n 'log2timeline.py --status_view none --hashers all '\n '--partition all 
-u').split()\n if config.DEBUG_TASKS or evidence.config.get('debug_tasks'):\n cmd.append('-d')\n if artifact_filters:\n cmd.extend(['--artifact_filters', artifact_filters])\n if parsers:\n cmd.extend(['--parsers', parsers])\n if file_filters:\n cmd.extend(['--file_filter', file_filter_file])\n if yara_rules:\n cmd.extend(['--yara_rules', yara_file_path])\n cmd.extend(['--vss_stores', vss])\n\n # TODO(dfjxs): This can be removed once APFS encryption is implemented\n # natively in Turbinia\n if isinstance(evidence, APFSEncryptedDisk):\n if evidence.recovery_key:\n cmd.extend([\n '--credential', 'recovery_password:{0:s}'.format(\n evidence.recovery_key)\n ])\n elif evidence.password:\n cmd.extend(['--credential', 'password:{0:s}'.format(evidence.password)])\n else:\n result.close(\n self, False, 'No credentials were provided '\n 'for a bitlocker disk.')\n return result\n\n if evidence.credentials:\n for credential in evidence.credentials:\n credential_type = credential['credential_type']\n credential_data = credential['credential_data']\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend([plaso_file, evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}]} | 1,926 | 340 |
gh_patches_debug_5782 | rasdani/github-patches | git_diff | googleapis__python-bigquery-79 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unit tests fail in Python 2.7, 3.5 (dependency issue)
The unit test checks fail on Python 2.7 and Python 3.5 because not all of the dependencies can be installed.
#### Environment details
- OS type and version: Linux (and possibly others?)
- Python version: 2.7, 3.5
- pip version: `pip --version`: 20.0.2
- `google-cloud-bigquery` version: 1.24.0
#### Steps to reproduce
1. Run the unit tests session for Python 2.7 or 3.5, e.g.:
```
nox -f noxfile.py -s unit-2.7
```
2. Tests do not run; an error occurs while installing the dependencies.
#### Code example
```python
# example
```
#### Stack trace
```
Building wheels for collected packages: llvmlite
...
RuntimeError: Building llvmlite requires LLVM 7.0.x, 7.1.x or 8.0.x, got '11.0.0'. Be sure to set LLVM_CONFIG to the right executable path.
```
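One illustrative mitigation (an assumption, not necessarily the project's actual fix) is to cap the transitive `llvmlite` dependency pulled in via the `fastparquet` extra so that a release with prebuilt wheels for these interpreters is selected:

```python
extras = {
    "fastparquet": [
        "fastparquet",
        "python-snappy",
        # Hypothetical pin: select an llvmlite release that still ships wheels
        # for Python 2.7 / 3.5 instead of building against the system LLVM.
        "llvmlite <= 0.31.0",
    ],
}
```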
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25 version = "1.24.0"
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 'enum34; python_version < "3.4"',
33 "google-auth >= 1.9.0, < 2.0dev",
34 "google-api-core >= 1.15.0, < 2.0dev",
35 "google-cloud-core >= 1.1.0, < 2.0dev",
36 "google-resumable-media >= 0.5.0, < 0.6dev",
37 "protobuf >= 3.6.0",
38 "six >=1.13.0,< 2.0.0dev",
39 ]
40 extras = {
41 "bqstorage": [
42 "google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev",
43 "pyarrow>=0.16.0, < 2.0dev",
44 ],
45 "pandas": ["pandas>=0.17.1"],
46 # Exclude PyArrow dependency from Windows Python 2.7.
47 'pyarrow: platform_system != "Windows" or python_version >= "3.4"': [
48 # Bad Linux release for 0.14.0.
49 # https://issues.apache.org/jira/browse/ARROW-5868
50 "pyarrow>=0.4.1, != 0.14.0"
51 ],
52 "tqdm": ["tqdm >= 4.0.0, <5.0.0dev"],
53 "fastparquet": ["fastparquet", "python-snappy"],
54 }
55
56 all_extras = []
57
58 for extra in extras:
59 if extra == "fastparquet":
60 # Skip fastparquet from "all" because it is redundant with pyarrow and
61 # creates a dependency on pre-release versions of numpy. See:
62 # https://github.com/googleapis/google-cloud-python/issues/8549
63 continue
64 all_extras.extend(extras[extra])
65
66 extras["all"] = all_extras
67
68 # Setup boilerplate below this line.
69
70 package_root = os.path.abspath(os.path.dirname(__file__))
71
72 readme_filename = os.path.join(package_root, "README.rst")
73 with io.open(readme_filename, encoding="utf-8") as readme_file:
74 readme = readme_file.read()
75
76 # Only include packages under the 'google' namespace. Do not include tests,
77 # benchmarks, etc.
78 packages = [
79 package for package in setuptools.find_packages() if package.startswith("google")
80 ]
81
82 # Determine which namespaces are needed.
83 namespaces = ["google"]
84 if "google.cloud" in packages:
85 namespaces.append("google.cloud")
86
87
88 setuptools.setup(
89 name=name,
90 version=version,
91 description=description,
92 long_description=readme,
93 author="Google LLC",
94 author_email="[email protected]",
95 license="Apache 2.0",
96 url="https://github.com/googleapis/python-bigquery",
97 classifiers=[
98 release_status,
99 "Intended Audience :: Developers",
100 "License :: OSI Approved :: Apache Software License",
101 "Programming Language :: Python",
102 "Programming Language :: Python :: 2",
103 "Programming Language :: Python :: 2.7",
104 "Programming Language :: Python :: 3",
105 "Programming Language :: Python :: 3.5",
106 "Programming Language :: Python :: 3.6",
107 "Programming Language :: Python :: 3.7",
108 "Programming Language :: Python :: 3.8",
109 "Operating System :: OS Independent",
110 "Topic :: Internet",
111 ],
112 platforms="Posix; MacOS X; Windows",
113 packages=packages,
114 namespace_packages=namespaces,
115 install_requires=dependencies,
116 extras_require=extras,
117 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
118 include_package_data=True,
119 zip_safe=False,
120 )
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,7 +50,14 @@
"pyarrow>=0.4.1, != 0.14.0"
],
"tqdm": ["tqdm >= 4.0.0, <5.0.0dev"],
- "fastparquet": ["fastparquet", "python-snappy"],
+ "fastparquet": [
+ "fastparquet",
+ "python-snappy",
+ # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below
+ # (building the wheel fails), thus needs to be restricted.
+ # See: https://github.com/googleapis/python-bigquery/issues/78
+ "llvmlite <= 0.31.0",
+ ],
}
all_extras = []
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,7 +50,14 @@\n \"pyarrow>=0.4.1, != 0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n- \"fastparquet\": [\"fastparquet\", \"python-snappy\"],\n+ \"fastparquet\": [\n+ \"fastparquet\",\n+ \"python-snappy\",\n+ # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below\n+ # (building the wheel fails), thus needs to be restricted.\n+ # See: https://github.com/googleapis/python-bigquery/issues/78\n+ \"llvmlite <= 0.31.0\",\n+ ],\n }\n \n all_extras = []\n", "issue": "Unit tests fail in Python 2.7, 3.5 (dependency issue)\nUnit tests check fails on Python 2.7 and Python 3.5, because not all dependencies can be installed.\r\n\r\n#### Environment details\r\n\r\n - OS type and version: Linux (and possibly others?)\r\n - Python version: 2.7, 3.5\r\n - pip version: `pip --version`: 20.0.2\r\n - `google-cloud-bigquery` version: 1.24.0\r\n\r\n#### Steps to reproduce\r\n\r\n 1. Run uni tests session for Python 2.7 or 3.5, e.g.:\r\n ```\r\n nox -f noxfile.py -s unit-2.7 \r\n ```\r\n 2. Test do not run, an error occurs when installing dependencies.\r\n\r\n#### Code example\r\n\r\n```python\r\n# example\r\n```\r\n\r\n#### Stack trace\r\n```\r\nBuilding wheels for collected packages: llvmlite\r\n...\r\nRuntimeError: Building llvmlite requires LLVM 7.0.x, 7.1.x or 8.0.x, got '11.0.0'. Be sure to set LLVM_CONFIG to the right executable path.\r\n```\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.24.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-auth >= 1.9.0, < 2.0dev\",\n \"google-api-core >= 1.15.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 0.6dev\",\n \"protobuf >= 3.6.0\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n 'pyarrow: platform_system != \"Windows\" or python_version >= \"3.4\"': [\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.4.1, != 0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\"fastparquet\", \"python-snappy\"],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra == \"fastparquet\":\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. 
See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.24.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-auth >= 1.9.0, < 2.0dev\",\n \"google-api-core >= 1.15.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 0.6dev\",\n \"protobuf >= 3.6.0\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n 'pyarrow: platform_system != \"Windows\" or python_version >= \"3.4\"': [\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.4.1, != 
0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\n \"fastparquet\",\n \"python-snappy\",\n # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below\n # (building the wheel fails), thus needs to be restricted.\n # See: https://github.com/googleapis/python-bigquery/issues/78\n \"llvmlite <= 0.31.0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra == \"fastparquet\":\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 1,829 | 207 |
gh_patches_debug_12637 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-10551 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API: allow remote repo full name query
The new dashboard is still using the API v2 remote repo API, which does not allow for expansion on project results and doesn't have all of the fields that I'd like to use in the results listing. The API v3 needs the v2 API's implementation of searching by full_name; the current pattern of searching `full_name` with icontains on the v2 API works okay for now.
I didn't want to alter the v2 API further, as we should really be moving towards the v3 API, but if it's just easier to add expansion there for some reason, that is also fine.
Note: this also gives expansion on the nested projects in the result, so we can get fields like the avatar_url, etc. The current v2 search only returns the project slug and a link to the project dashboard.
--- END ISSUE ---
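As a minimal sketch of the lookup being described — assuming a `RemoteRepository` model with a `full_name` field, as in the filter module shown below — the v3 filter could be a plain `icontains` CharFilter; the snippet is illustrative, not the project's actual code.

```python
import django_filters.rest_framework as filters


class RemoteRepositoryFilter(filters.FilterSet):
    # Case-insensitive substring match, e.g. ?full_name=readthedocs/readthedocs.org
    full_name = filters.CharFilter(field_name="full_name", lookup_expr="icontains")
```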
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/api/v3/filters.py`
Content:
```
1 import django_filters.rest_framework as filters
2
3 from readthedocs.builds.constants import BUILD_FINAL_STATES
4 from readthedocs.builds.models import Build, Version
5 from readthedocs.oauth.models import RemoteOrganization, RemoteRepository
6 from readthedocs.projects.models import Project
7
8
9 class ProjectFilter(filters.FilterSet):
10
11 # TODO this is copying the patterns from other filter sets, where the fields
12 # are all ``icontains`` lookups by default. We discussed reversing this
13 # pattern in the future though, see:
14 # https://github.com/readthedocs/readthedocs.org/issues/9862
15 name = filters.CharFilter(lookup_expr="icontains")
16 slug = filters.CharFilter(lookup_expr="icontains")
17
18 class Meta:
19 model = Project
20 fields = [
21 "name",
22 "slug",
23 "language",
24 "programming_language",
25 ]
26
27
28 class VersionFilter(filters.FilterSet):
29 slug = filters.CharFilter(lookup_expr='icontains')
30 verbose_name = filters.CharFilter(lookup_expr='icontains')
31
32 class Meta:
33 model = Version
34 fields = [
35 'verbose_name',
36 'privacy_level',
37 'active',
38 'built',
39 'uploaded',
40 'slug',
41 'type',
42 ]
43
44
45 class BuildFilter(filters.FilterSet):
46 running = filters.BooleanFilter(method='get_running')
47
48 class Meta:
49 model = Build
50 fields = [
51 'commit',
52 'running',
53 ]
54
55 def get_running(self, queryset, name, value):
56 if value:
57 return queryset.exclude(state__in=BUILD_FINAL_STATES)
58
59 return queryset.filter(state__in=BUILD_FINAL_STATES)
60
61
62 class RemoteRepositoryFilter(filters.FilterSet):
63 name = filters.CharFilter(field_name='name', lookup_expr='icontains')
64 organization = filters.CharFilter(field_name='organization__slug')
65
66 class Meta:
67 model = RemoteRepository
68 fields = [
69 'name',
70 'vcs_provider',
71 'organization',
72 ]
73
74
75 class RemoteOrganizationFilter(filters.FilterSet):
76 name = filters.CharFilter(field_name='name', lookup_expr='icontains')
77
78 class Meta:
79 model = RemoteOrganization
80 fields = [
81 'name',
82 'vcs_provider',
83 ]
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/api/v3/filters.py b/readthedocs/api/v3/filters.py
--- a/readthedocs/api/v3/filters.py
+++ b/readthedocs/api/v3/filters.py
@@ -60,15 +60,17 @@
class RemoteRepositoryFilter(filters.FilterSet):
- name = filters.CharFilter(field_name='name', lookup_expr='icontains')
- organization = filters.CharFilter(field_name='organization__slug')
+ name = filters.CharFilter(field_name="name", lookup_expr="icontains")
+ full_name = filters.CharFilter(field_name="full_name", lookup_expr="icontains")
+ organization = filters.CharFilter(field_name="organization__slug")
class Meta:
model = RemoteRepository
fields = [
- 'name',
- 'vcs_provider',
- 'organization',
+ "name",
+ "full_name",
+ "vcs_provider",
+ "organization",
]
| {"golden_diff": "diff --git a/readthedocs/api/v3/filters.py b/readthedocs/api/v3/filters.py\n--- a/readthedocs/api/v3/filters.py\n+++ b/readthedocs/api/v3/filters.py\n@@ -60,15 +60,17 @@\n \n \n class RemoteRepositoryFilter(filters.FilterSet):\n- name = filters.CharFilter(field_name='name', lookup_expr='icontains')\n- organization = filters.CharFilter(field_name='organization__slug')\n+ name = filters.CharFilter(field_name=\"name\", lookup_expr=\"icontains\")\n+ full_name = filters.CharFilter(field_name=\"full_name\", lookup_expr=\"icontains\")\n+ organization = filters.CharFilter(field_name=\"organization__slug\")\n \n class Meta:\n model = RemoteRepository\n fields = [\n- 'name',\n- 'vcs_provider',\n- 'organization',\n+ \"name\",\n+ \"full_name\",\n+ \"vcs_provider\",\n+ \"organization\",\n ]\n", "issue": "API: allow remote repo full name query\nThe new dashboard is still using the API v2 remote repo API, which does not allow for expansion on project results and doesn't have all of the fields that I'd like to use in the results listing. The API v3 needs the v2 API implementation for searching by full_name, the current pattern for searching `full_name` by icontains on the v2 API works okay for now.\r\n\r\nI didn't want to alter the v2 API further, as we should really be moving towards the v3 API, but if it's just easier to add expansion there for some reason, that is also fine.\r\n\r\nNote: this also gives expansion on the nested projects in the result, so we can get fields like the avatar_url, etc. The current v2 search only returns the project slug and a link to the project dashboard.\n", "before_files": [{"content": "import django_filters.rest_framework as filters\n\nfrom readthedocs.builds.constants import BUILD_FINAL_STATES\nfrom readthedocs.builds.models import Build, Version\nfrom readthedocs.oauth.models import RemoteOrganization, RemoteRepository\nfrom readthedocs.projects.models import Project\n\n\nclass ProjectFilter(filters.FilterSet):\n\n # TODO this is copying the patterns from other filter sets, where the fields\n # are all ``icontains`` lookups by default. 
We discussed reversing this\n # pattern in the future though, see:\n # https://github.com/readthedocs/readthedocs.org/issues/9862\n name = filters.CharFilter(lookup_expr=\"icontains\")\n slug = filters.CharFilter(lookup_expr=\"icontains\")\n\n class Meta:\n model = Project\n fields = [\n \"name\",\n \"slug\",\n \"language\",\n \"programming_language\",\n ]\n\n\nclass VersionFilter(filters.FilterSet):\n slug = filters.CharFilter(lookup_expr='icontains')\n verbose_name = filters.CharFilter(lookup_expr='icontains')\n\n class Meta:\n model = Version\n fields = [\n 'verbose_name',\n 'privacy_level',\n 'active',\n 'built',\n 'uploaded',\n 'slug',\n 'type',\n ]\n\n\nclass BuildFilter(filters.FilterSet):\n running = filters.BooleanFilter(method='get_running')\n\n class Meta:\n model = Build\n fields = [\n 'commit',\n 'running',\n ]\n\n def get_running(self, queryset, name, value):\n if value:\n return queryset.exclude(state__in=BUILD_FINAL_STATES)\n\n return queryset.filter(state__in=BUILD_FINAL_STATES)\n\n\nclass RemoteRepositoryFilter(filters.FilterSet):\n name = filters.CharFilter(field_name='name', lookup_expr='icontains')\n organization = filters.CharFilter(field_name='organization__slug')\n\n class Meta:\n model = RemoteRepository\n fields = [\n 'name',\n 'vcs_provider',\n 'organization',\n ]\n\n\nclass RemoteOrganizationFilter(filters.FilterSet):\n name = filters.CharFilter(field_name='name', lookup_expr='icontains')\n\n class Meta:\n model = RemoteOrganization\n fields = [\n 'name',\n 'vcs_provider',\n ]\n", "path": "readthedocs/api/v3/filters.py"}], "after_files": [{"content": "import django_filters.rest_framework as filters\n\nfrom readthedocs.builds.constants import BUILD_FINAL_STATES\nfrom readthedocs.builds.models import Build, Version\nfrom readthedocs.oauth.models import RemoteOrganization, RemoteRepository\nfrom readthedocs.projects.models import Project\n\n\nclass ProjectFilter(filters.FilterSet):\n\n # TODO this is copying the patterns from other filter sets, where the fields\n # are all ``icontains`` lookups by default. 
We discussed reversing this\n # pattern in the future though, see:\n # https://github.com/readthedocs/readthedocs.org/issues/9862\n name = filters.CharFilter(lookup_expr=\"icontains\")\n slug = filters.CharFilter(lookup_expr=\"icontains\")\n\n class Meta:\n model = Project\n fields = [\n \"name\",\n \"slug\",\n \"language\",\n \"programming_language\",\n ]\n\n\nclass VersionFilter(filters.FilterSet):\n slug = filters.CharFilter(lookup_expr='icontains')\n verbose_name = filters.CharFilter(lookup_expr='icontains')\n\n class Meta:\n model = Version\n fields = [\n 'verbose_name',\n 'privacy_level',\n 'active',\n 'built',\n 'uploaded',\n 'slug',\n 'type',\n ]\n\n\nclass BuildFilter(filters.FilterSet):\n running = filters.BooleanFilter(method='get_running')\n\n class Meta:\n model = Build\n fields = [\n 'commit',\n 'running',\n ]\n\n def get_running(self, queryset, name, value):\n if value:\n return queryset.exclude(state__in=BUILD_FINAL_STATES)\n\n return queryset.filter(state__in=BUILD_FINAL_STATES)\n\n\nclass RemoteRepositoryFilter(filters.FilterSet):\n name = filters.CharFilter(field_name=\"name\", lookup_expr=\"icontains\")\n full_name = filters.CharFilter(field_name=\"full_name\", lookup_expr=\"icontains\")\n organization = filters.CharFilter(field_name=\"organization__slug\")\n\n class Meta:\n model = RemoteRepository\n fields = [\n \"name\",\n \"full_name\",\n \"vcs_provider\",\n \"organization\",\n ]\n\n\nclass RemoteOrganizationFilter(filters.FilterSet):\n name = filters.CharFilter(field_name='name', lookup_expr='icontains')\n\n class Meta:\n model = RemoteOrganization\n fields = [\n 'name',\n 'vcs_provider',\n ]\n", "path": "readthedocs/api/v3/filters.py"}]} | 1,074 | 209 |
gh_patches_debug_330 | rasdani/github-patches | git_diff | Pylons__pyramid-3272 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump Sphinx to >=1.7.2
Would anyone be opposed to bumping Sphinx to >=1.7.2, != 1.7.3 in `setup.py`? I really want our PDFs to have `emphasize-lines` support, at long last, and to bring in support for Unicode characters in PDFs via xelatex.
Refs:
* #667
* #2572
* https://github.com/rtfd/readthedocs.org/issues/4015
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 ##############################################################################
2 #
3 # Copyright (c) 2008-2013 Agendaless Consulting and Contributors.
4 # All Rights Reserved.
5 #
6 # This software is subject to the provisions of the BSD-like license at
7 # http://www.repoze.org/LICENSE.txt. A copy of the license should accompany
8 # this distribution. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL
9 # EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,
10 # THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND
11 # FITNESS FOR A PARTICULAR PURPOSE
12 #
13 ##############################################################################
14
15 import os
16
17 from setuptools import setup, find_packages
18
19 here = os.path.abspath(os.path.dirname(__file__))
20 try:
21 with open(os.path.join(here, 'README.rst')) as f:
22 README = f.read()
23 with open(os.path.join(here, 'CHANGES.txt')) as f:
24 CHANGES = f.read()
25 except IOError:
26 README = CHANGES = ''
27
28 install_requires = [
29 'setuptools',
30 'WebOb >= 1.7.0', # Response.has_body
31 'repoze.lru >= 0.4', # py3 compat
32 'zope.interface >= 3.8.0', # has zope.interface.registry
33 'zope.deprecation >= 3.5.0', # py3 compat
34 'venusian >= 1.0a3', # ``ignore``
35 'translationstring >= 0.4', # py3 compat
36 'PasteDeploy >= 1.5.0', # py3 compat
37 'plaster',
38 'plaster_pastedeploy',
39 'hupper',
40 ]
41
42 tests_require = [
43 'WebTest >= 1.3.1', # py3 compat
44 'zope.component >= 4.0', # py3 compat
45 ]
46
47
48 docs_extras = [
49 'Sphinx >= 1.3.5, != 1.7.3',
50 'docutils',
51 'repoze.sphinx.autointerface',
52 'pylons_sphinx_latesturl',
53 'pylons-sphinx-themes',
54 'sphinxcontrib-autoprogram',
55 ]
56
57 testing_extras = tests_require + [
58 'nose',
59 'coverage',
60 'virtualenv', # for scaffolding tests
61 ]
62
63 setup(name='pyramid',
64 version='1.9.2',
65 description='The Pyramid Web Framework, a Pylons project',
66 long_description=README + '\n\n' + CHANGES,
67 classifiers=[
68 "Development Status :: 6 - Mature",
69 "Intended Audience :: Developers",
70 "Programming Language :: Python",
71 "Programming Language :: Python :: 2.7",
72 "Programming Language :: Python :: 3",
73 "Programming Language :: Python :: 3.4",
74 "Programming Language :: Python :: 3.5",
75 "Programming Language :: Python :: 3.6",
76 "Programming Language :: Python :: Implementation :: CPython",
77 "Programming Language :: Python :: Implementation :: PyPy",
78 "Framework :: Pyramid",
79 "Topic :: Internet :: WWW/HTTP",
80 "Topic :: Internet :: WWW/HTTP :: WSGI",
81 "License :: Repoze Public License",
82 ],
83 keywords='web wsgi pylons pyramid',
84 author="Chris McDonough, Agendaless Consulting",
85 author_email="[email protected]",
86 url="https://trypyramid.com",
87 license="BSD-derived (http://www.repoze.org/LICENSE.txt)",
88 packages=find_packages(),
89 include_package_data=True,
90 zip_safe=False,
91 python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
92 install_requires=install_requires,
93 extras_require={
94 'testing': testing_extras,
95 'docs': docs_extras,
96 },
97 tests_require=tests_require,
98 test_suite="pyramid.tests",
99 entry_points="""\
100 [pyramid.scaffold]
101 starter=pyramid.scaffolds:StarterProjectTemplate
102 zodb=pyramid.scaffolds:ZODBProjectTemplate
103 alchemy=pyramid.scaffolds:AlchemyProjectTemplate
104 [pyramid.pshell_runner]
105 python=pyramid.scripts.pshell:python_shell_runner
106 [console_scripts]
107 pcreate = pyramid.scripts.pcreate:main
108 pserve = pyramid.scripts.pserve:main
109 pshell = pyramid.scripts.pshell:main
110 proutes = pyramid.scripts.proutes:main
111 pviews = pyramid.scripts.pviews:main
112 ptweens = pyramid.scripts.ptweens:main
113 prequest = pyramid.scripts.prequest:main
114 pdistreport = pyramid.scripts.pdistreport:main
115 [paste.server_runner]
116 wsgiref = pyramid.scripts.pserve:wsgiref_server_runner
117 cherrypy = pyramid.scripts.pserve:cherrypy_server_runner
118 """
119 )
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,7 +46,7 @@
docs_extras = [
- 'Sphinx >= 1.3.5, != 1.7.3',
+ 'Sphinx >= 1.7.4',
'docutils',
'repoze.sphinx.autointerface',
'pylons_sphinx_latesturl',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,7 +46,7 @@\n \n \n docs_extras = [\n- 'Sphinx >= 1.3.5, != 1.7.3',\n+ 'Sphinx >= 1.7.4',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n", "issue": "Bump Sphinx to >=1.7.2\nWould anyone be opposed to bumping Sphinx to >=1.7.2, != 1.7.3 in `setup.py`? I really want our PDFs to have `emphasize-lines` support, at long last, and bring in support for Unicode characters in PDFs via xelatex.\r\n\r\nRefs:\r\n* #667\r\n* #2572\r\n* https://github.com/rtfd/readthedocs.org/issues/4015\r\n\n", "before_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires = [\n 'setuptools',\n 'WebOb >= 1.7.0', # Response.has_body\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n 'plaster',\n 'plaster_pastedeploy',\n 'hupper',\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n 'zope.component >= 4.0', # py3 compat\n ]\n\n\ndocs_extras = [\n 'Sphinx >= 1.3.5, != 1.7.3',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-autoprogram',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.9.2',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Development Status :: 6 - Mature\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"https://trypyramid.com\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n 
python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': docs_extras,\n },\n tests_require=tests_require,\n test_suite=\"pyramid.tests\",\n entry_points=\"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [pyramid.pshell_runner]\n python=pyramid.scripts.pshell:python_shell_runner\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n", "path": "setup.py"}], "after_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires = [\n 'setuptools',\n 'WebOb >= 1.7.0', # Response.has_body\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n 'plaster',\n 'plaster_pastedeploy',\n 'hupper',\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n 'zope.component >= 4.0', # py3 compat\n ]\n\n\ndocs_extras = [\n 'Sphinx >= 1.7.4',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-autoprogram',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.9.2',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Development Status :: 6 - Mature\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: 
WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"https://trypyramid.com\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': docs_extras,\n },\n tests_require=tests_require,\n test_suite=\"pyramid.tests\",\n entry_points=\"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [pyramid.pshell_runner]\n python=pyramid.scripts.pshell:python_shell_runner\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n", "path": "setup.py"}]} | 1,699 | 98 |
gh_patches_debug_23277 | rasdani/github-patches | git_diff | fidals__shopelectro-1006 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search shows products with no category
It should not, of course

Search link: https://www.shopelectro.ru/search/?term=MK1215NC
Link to the product: https://www.shopelectro.ru/catalog/products/7608/
--- END ISSUE ---
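A minimal sketch of the intended behaviour, assuming the project's `Product.category` foreign key can be NULL: the queryset feeding the product search entity simply excludes uncategorised rows. The helper name is hypothetical.

```python
# Sketch only: orphaned products (no category) should never reach search.
from shopelectro.models import Product


def searchable_products():
    return Product.objects.active().exclude(category__isnull=True)
```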
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/views/search.py`
Content:
```
1 from django.conf import settings
2
3 from search import views as search_views, search as search_engine
4
5 from pages.models import Page
6
7 from shopelectro.models import Category, Product
8
9
10 class Search(search_views.SearchView):
11 def get_redirect_search_entity(self):
12 return next(s for s in self.search_entities if s.name == 'product')
13
14 # ignore CPDBear
15 search_entities = [
16 search_engine.Search(
17 name='category',
18 qs=Category.objects.active(),
19 fields=['name'], # Ignore CPDBear
20 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
21 ),
22 search_engine.Search(
23 name='product',
24 qs=Product.objects.active(),
25 fields=['name'],
26 redirect_field='vendor_code',
27 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
28 ),
29 search_engine.Search(
30 name='page', # Ignore CPDBear
31 qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),
32 fields=['name'],
33 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
34 )
35 ]
36
37 redirect_field = 'vendor_code'
38
39
40 class Autocomplete(search_views.AutocompleteView):
41
42 # ignore CPDBear
43 search_entities = [
44 search_engine.Search(
45 name='category',
46 qs=Category.objects.filter(page__is_active=True),
47 fields=['name', 'id'],
48 template_fields=['name', 'url'],
49 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
50 ),
51 search_engine.Search(
52 name='product',
53 qs=Product.objects.active(),
54 fields=['name', 'id', 'vendor_code'],
55 template_fields=['name', 'price', 'url'], # Ignore CPDBear
56 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
57 ),
58 search_engine.Search(
59 name='pages',
60 qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),
61 fields=['name'],
62 template_fields=['name', 'url'],
63 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
64 )
65 ]
66
67 see_all_label = settings.SEARCH_SEE_ALL_LABEL
68
69
70 class AdminAutocomplete(search_views.AdminAutocompleteView):
71
72 # ignore CPDBear
73 search_entities = [
74 search_engine.Search(
75 name='category',
76 qs=Category.objects.filter(page__is_active=True),
77 fields=['name'],
78 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
79 ),
80 search_engine.Search(
81 name='product',
82 qs=Product.objects.active(),
83 fields=['name'],
84 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
85 ),
86 search_engine.Search(
87 name='pages',
88 qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),
89 fields=['name'],
90 min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
91 )
92 ]
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/views/search.py b/shopelectro/views/search.py
--- a/shopelectro/views/search.py
+++ b/shopelectro/views/search.py
@@ -1,9 +1,7 @@
from django.conf import settings
-from search import views as search_views, search as search_engine
-
from pages.models import Page
-
+from search import views as search_views, search as search_engine
from shopelectro.models import Category, Product
@@ -21,14 +19,14 @@
),
search_engine.Search(
name='product',
- qs=Product.objects.active(),
+ qs=Product.objects.active().exclude(category__isnull=True),
fields=['name'],
redirect_field='vendor_code',
min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
),
search_engine.Search(
name='page', # Ignore CPDBear
- qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),
+ qs=Page.objects.active().exclude(type=Page.MODEL_TYPE),
fields=['name'],
min_similarity=settings.TRIGRAM_MIN_SIMILARITY,
)
| {"golden_diff": "diff --git a/shopelectro/views/search.py b/shopelectro/views/search.py\n--- a/shopelectro/views/search.py\n+++ b/shopelectro/views/search.py\n@@ -1,9 +1,7 @@\n from django.conf import settings\n \n-from search import views as search_views, search as search_engine\n-\n from pages.models import Page\n-\n+from search import views as search_views, search as search_engine\n from shopelectro.models import Category, Product\n \n \n@@ -21,14 +19,14 @@\n ),\n search_engine.Search(\n name='product',\n- qs=Product.objects.active(),\n+ qs=Product.objects.active().exclude(category__isnull=True),\n fields=['name'],\n redirect_field='vendor_code',\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='page', # Ignore CPDBear\n- qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),\n+ qs=Page.objects.active().exclude(type=Page.MODEL_TYPE),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n )\n", "issue": "Search shows products with no category\nIt should not, of course\r\n\r\n\r\n\r\nSearch link: https://www.shopelectro.ru/search/?term=MK1215NC\r\nLink to the product: https://www.shopelectro.ru/catalog/products/7608/\r\n\n", "before_files": [{"content": "from django.conf import settings\n\nfrom search import views as search_views, search as search_engine\n\nfrom pages.models import Page\n\nfrom shopelectro.models import Category, Product\n\n\nclass Search(search_views.SearchView):\n def get_redirect_search_entity(self):\n return next(s for s in self.search_entities if s.name == 'product')\n\n # ignore CPDBear\n search_entities = [\n search_engine.Search(\n name='category',\n qs=Category.objects.active(),\n fields=['name'], # Ignore CPDBear\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='product',\n qs=Product.objects.active(),\n fields=['name'],\n redirect_field='vendor_code',\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='page', # Ignore CPDBear\n qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n )\n ]\n\n redirect_field = 'vendor_code'\n\n\nclass Autocomplete(search_views.AutocompleteView):\n\n # ignore CPDBear\n search_entities = [\n search_engine.Search(\n name='category',\n qs=Category.objects.filter(page__is_active=True),\n fields=['name', 'id'],\n template_fields=['name', 'url'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='product',\n qs=Product.objects.active(),\n fields=['name', 'id', 'vendor_code'],\n template_fields=['name', 'price', 'url'], # Ignore CPDBear\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='pages',\n qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),\n fields=['name'],\n template_fields=['name', 'url'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n )\n ]\n\n see_all_label = settings.SEARCH_SEE_ALL_LABEL\n\n\nclass AdminAutocomplete(search_views.AdminAutocompleteView):\n\n # ignore CPDBear\n search_entities = [\n search_engine.Search(\n name='category',\n qs=Category.objects.filter(page__is_active=True),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='product',\n qs=Product.objects.active(),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='pages',\n 
qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n )\n ]\n", "path": "shopelectro/views/search.py"}], "after_files": [{"content": "from django.conf import settings\n\nfrom pages.models import Page\nfrom search import views as search_views, search as search_engine\nfrom shopelectro.models import Category, Product\n\n\nclass Search(search_views.SearchView):\n def get_redirect_search_entity(self):\n return next(s for s in self.search_entities if s.name == 'product')\n\n # ignore CPDBear\n search_entities = [\n search_engine.Search(\n name='category',\n qs=Category.objects.active(),\n fields=['name'], # Ignore CPDBear\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='product',\n qs=Product.objects.active().exclude(category__isnull=True),\n fields=['name'],\n redirect_field='vendor_code',\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='page', # Ignore CPDBear\n qs=Page.objects.active().exclude(type=Page.MODEL_TYPE),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n )\n ]\n\n redirect_field = 'vendor_code'\n\n\nclass Autocomplete(search_views.AutocompleteView):\n\n # ignore CPDBear\n search_entities = [\n search_engine.Search(\n name='category',\n qs=Category.objects.filter(page__is_active=True),\n fields=['name', 'id'],\n template_fields=['name', 'url'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='product',\n qs=Product.objects.active(),\n fields=['name', 'id', 'vendor_code'],\n template_fields=['name', 'price', 'url'], # Ignore CPDBear\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='pages',\n qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),\n fields=['name'],\n template_fields=['name', 'url'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n )\n ]\n\n see_all_label = settings.SEARCH_SEE_ALL_LABEL\n\n\nclass AdminAutocomplete(search_views.AdminAutocompleteView):\n\n # ignore CPDBear\n search_entities = [\n search_engine.Search(\n name='category',\n qs=Category.objects.filter(page__is_active=True),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='product',\n qs=Product.objects.active(),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n ),\n search_engine.Search(\n name='pages',\n qs=Page.objects.filter(is_active=True).exclude(type=Page.MODEL_TYPE),\n fields=['name'],\n min_similarity=settings.TRIGRAM_MIN_SIMILARITY,\n )\n ]\n", "path": "shopelectro/views/search.py"}]} | 1,167 | 249 |
gh_patches_debug_14820 | rasdani/github-patches | git_diff | crytic__slither-786 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'StructureTopLevel' object has no attribute 'contract'
On 0x0cf55d57d241161e0ec68e72cbb175dbfe84173a
Here there should be a different case for top-level elements and non-top-level:
https://github.com/crytic/slither/blob/c0c581b3ba830b6ce8dc3f4be82592a7a42e9752/slither/core/solidity_types/user_defined_type.py#L65-L66
--- END ISSUE ---
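A short sketch of the distinction being pointed at: only contract-scoped declarations carry a `contract` attribute, so stringification has to branch on the concrete classes rather than on the generic `Enum`/`Structure` bases. The class names match those used elsewhere in Slither; the helper function itself is hypothetical.

```python
from slither.core.declarations.enum_contract import EnumContract
from slither.core.declarations.structure_contract import StructureContract


def user_defined_type_name(type_used):
    # Contract-level enums/structs are qualified with their parent contract;
    # top-level declarations have no parent, so only their own name is used.
    if isinstance(type_used, (EnumContract, StructureContract)):
        return "{}.{}".format(type_used.contract, type_used.name)
    return str(type_used.name)
```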
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/core/solidity_types/user_defined_type.py`
Content:
```
1 from typing import Union, TYPE_CHECKING, Tuple
2 import math
3
4 from slither.core.solidity_types.type import Type
5 from slither.exceptions import SlitherException
6
7 if TYPE_CHECKING:
8 from slither.core.declarations.structure import Structure
9 from slither.core.declarations.enum import Enum
10 from slither.core.declarations.contract import Contract
11
12 # pylint: disable=import-outside-toplevel
13 class UserDefinedType(Type):
14 def __init__(self, t):
15 from slither.core.declarations.structure import Structure
16 from slither.core.declarations.enum import Enum
17 from slither.core.declarations.contract import Contract
18
19 assert isinstance(t, (Contract, Enum, Structure))
20 super().__init__()
21 self._type = t
22
23 @property
24 def type(self) -> Union["Contract", "Enum", "Structure"]:
25 return self._type
26
27 @property
28 def storage_size(self) -> Tuple[int, bool]:
29 from slither.core.declarations.structure import Structure
30 from slither.core.declarations.enum import Enum
31 from slither.core.declarations.contract import Contract
32
33 if isinstance(self._type, Contract):
34 return 20, False
35 if isinstance(self._type, Enum):
36 return int(math.ceil(math.log2(len(self._type.values)) / 8)), False
37 if isinstance(self._type, Structure):
38 # todo there's some duplicate logic here and slither_core, can we refactor this?
39 slot = 0
40 offset = 0
41 for elem in self._type.elems_ordered:
42 size, new_slot = elem.type.storage_size
43 if new_slot:
44 if offset > 0:
45 slot += 1
46 offset = 0
47 elif size + offset > 32:
48 slot += 1
49 offset = 0
50
51 if new_slot:
52 slot += math.ceil(size / 32)
53 else:
54 offset += size
55 if offset > 0:
56 slot += 1
57 return slot * 32, True
58 to_log = f"{self} does not have storage size"
59 raise SlitherException(to_log)
60
61 def __str__(self):
62 from slither.core.declarations.structure import Structure
63 from slither.core.declarations.enum import Enum
64
65 if isinstance(self.type, (Enum, Structure)):
66 return str(self.type.contract) + "." + str(self.type.name)
67 return str(self.type.name)
68
69 def __eq__(self, other):
70 if not isinstance(other, UserDefinedType):
71 return False
72 return self.type == other.type
73
74 def __hash__(self):
75 return hash(str(self))
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/core/solidity_types/user_defined_type.py b/slither/core/solidity_types/user_defined_type.py
--- a/slither/core/solidity_types/user_defined_type.py
+++ b/slither/core/solidity_types/user_defined_type.py
@@ -59,12 +59,13 @@
raise SlitherException(to_log)
def __str__(self):
- from slither.core.declarations.structure import Structure
- from slither.core.declarations.enum import Enum
+ from slither.core.declarations.structure_contract import StructureContract
+ from slither.core.declarations.enum_contract import EnumContract
- if isinstance(self.type, (Enum, Structure)):
- return str(self.type.contract) + "." + str(self.type.name)
- return str(self.type.name)
+ type_used = self.type
+ if isinstance(type_used, (EnumContract, StructureContract)):
+ return str(type_used.contract) + "." + str(type_used.name)
+ return str(type_used.name)
def __eq__(self, other):
if not isinstance(other, UserDefinedType):
| {"golden_diff": "diff --git a/slither/core/solidity_types/user_defined_type.py b/slither/core/solidity_types/user_defined_type.py\n--- a/slither/core/solidity_types/user_defined_type.py\n+++ b/slither/core/solidity_types/user_defined_type.py\n@@ -59,12 +59,13 @@\n raise SlitherException(to_log)\n \n def __str__(self):\n- from slither.core.declarations.structure import Structure\n- from slither.core.declarations.enum import Enum\n+ from slither.core.declarations.structure_contract import StructureContract\n+ from slither.core.declarations.enum_contract import EnumContract\n \n- if isinstance(self.type, (Enum, Structure)):\n- return str(self.type.contract) + \".\" + str(self.type.name)\n- return str(self.type.name)\n+ type_used = self.type\n+ if isinstance(type_used, (EnumContract, StructureContract)):\n+ return str(type_used.contract) + \".\" + str(type_used.name)\n+ return str(type_used.name)\n \n def __eq__(self, other):\n if not isinstance(other, UserDefinedType):\n", "issue": "AttributeError: 'StructureTopLevel' object has no attribute 'contract'\nOn 0x0cf55d57d241161e0ec68e72cbb175dbfe84173a\r\n\r\nHere there should be a different case for top-level elements and non-top-level:\r\n\r\nhttps://github.com/crytic/slither/blob/c0c581b3ba830b6ce8dc3f4be82592a7a42e9752/slither/core/solidity_types/user_defined_type.py#L65-L66\nAttributeError: 'StructureTopLevel' object has no attribute 'contract'\nOn 0x0cf55d57d241161e0ec68e72cbb175dbfe84173a\r\n\r\nHere there should be a different case for top-level elements and non-top-level:\r\n\r\nhttps://github.com/crytic/slither/blob/c0c581b3ba830b6ce8dc3f4be82592a7a42e9752/slither/core/solidity_types/user_defined_type.py#L65-L66\n", "before_files": [{"content": "from typing import Union, TYPE_CHECKING, Tuple\nimport math\n\nfrom slither.core.solidity_types.type import Type\nfrom slither.exceptions import SlitherException\n\nif TYPE_CHECKING:\n from slither.core.declarations.structure import Structure\n from slither.core.declarations.enum import Enum\n from slither.core.declarations.contract import Contract\n\n# pylint: disable=import-outside-toplevel\nclass UserDefinedType(Type):\n def __init__(self, t):\n from slither.core.declarations.structure import Structure\n from slither.core.declarations.enum import Enum\n from slither.core.declarations.contract import Contract\n\n assert isinstance(t, (Contract, Enum, Structure))\n super().__init__()\n self._type = t\n\n @property\n def type(self) -> Union[\"Contract\", \"Enum\", \"Structure\"]:\n return self._type\n\n @property\n def storage_size(self) -> Tuple[int, bool]:\n from slither.core.declarations.structure import Structure\n from slither.core.declarations.enum import Enum\n from slither.core.declarations.contract import Contract\n\n if isinstance(self._type, Contract):\n return 20, False\n if isinstance(self._type, Enum):\n return int(math.ceil(math.log2(len(self._type.values)) / 8)), False\n if isinstance(self._type, Structure):\n # todo there's some duplicate logic here and slither_core, can we refactor this?\n slot = 0\n offset = 0\n for elem in self._type.elems_ordered:\n size, new_slot = elem.type.storage_size\n if new_slot:\n if offset > 0:\n slot += 1\n offset = 0\n elif size + offset > 32:\n slot += 1\n offset = 0\n\n if new_slot:\n slot += math.ceil(size / 32)\n else:\n offset += size\n if offset > 0:\n slot += 1\n return slot * 32, True\n to_log = f\"{self} does not have storage size\"\n raise SlitherException(to_log)\n\n def __str__(self):\n from slither.core.declarations.structure import 
Structure\n from slither.core.declarations.enum import Enum\n\n if isinstance(self.type, (Enum, Structure)):\n return str(self.type.contract) + \".\" + str(self.type.name)\n return str(self.type.name)\n\n def __eq__(self, other):\n if not isinstance(other, UserDefinedType):\n return False\n return self.type == other.type\n\n def __hash__(self):\n return hash(str(self))\n", "path": "slither/core/solidity_types/user_defined_type.py"}], "after_files": [{"content": "from typing import Union, TYPE_CHECKING, Tuple\nimport math\n\nfrom slither.core.solidity_types.type import Type\nfrom slither.exceptions import SlitherException\n\nif TYPE_CHECKING:\n from slither.core.declarations.structure import Structure\n from slither.core.declarations.enum import Enum\n from slither.core.declarations.contract import Contract\n\n# pylint: disable=import-outside-toplevel\nclass UserDefinedType(Type):\n def __init__(self, t):\n from slither.core.declarations.structure import Structure\n from slither.core.declarations.enum import Enum\n from slither.core.declarations.contract import Contract\n\n assert isinstance(t, (Contract, Enum, Structure))\n super().__init__()\n self._type = t\n\n @property\n def type(self) -> Union[\"Contract\", \"Enum\", \"Structure\"]:\n return self._type\n\n @property\n def storage_size(self) -> Tuple[int, bool]:\n from slither.core.declarations.structure import Structure\n from slither.core.declarations.enum import Enum\n from slither.core.declarations.contract import Contract\n\n if isinstance(self._type, Contract):\n return 20, False\n if isinstance(self._type, Enum):\n return int(math.ceil(math.log2(len(self._type.values)) / 8)), False\n if isinstance(self._type, Structure):\n # todo there's some duplicate logic here and slither_core, can we refactor this?\n slot = 0\n offset = 0\n for elem in self._type.elems_ordered:\n size, new_slot = elem.type.storage_size\n if new_slot:\n if offset > 0:\n slot += 1\n offset = 0\n elif size + offset > 32:\n slot += 1\n offset = 0\n\n if new_slot:\n slot += math.ceil(size / 32)\n else:\n offset += size\n if offset > 0:\n slot += 1\n return slot * 32, True\n to_log = f\"{self} does not have storage size\"\n raise SlitherException(to_log)\n\n def __str__(self):\n from slither.core.declarations.structure_contract import StructureContract\n from slither.core.declarations.enum_contract import EnumContract\n\n type_used = self.type\n if isinstance(type_used, (EnumContract, StructureContract)):\n return str(type_used.contract) + \".\" + str(type_used.name)\n return str(type_used.name)\n\n def __eq__(self, other):\n if not isinstance(other, UserDefinedType):\n return False\n return self.type == other.type\n\n def __hash__(self):\n return hash(str(self))\n", "path": "slither/core/solidity_types/user_defined_type.py"}]} | 1,252 | 240 |
gh_patches_debug_31896 | rasdani/github-patches | git_diff | rootpy__rootpy-785 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
basestring
Hi there
I found the following issue:
If I'm using the F1 object from rootpy.plotting and try to access a parameter using [parnr] (the __getitem__ method), I get the following error:
`NameError: name 'basestring' is not defined`
I'm using Python 3.6, which doesn't have the basestring data type anymore.
https://github.com/rootpy/rootpy/blob/457e074056a916fff848978ef68b7f5107856e47/rootpy/plotting/func.py#L63
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rootpy/plotting/func.py`
Content:
```
1 from __future__ import absolute_import
2
3 from .. import QROOT
4 from ..decorators import snake_case_methods
5 from .base import Plottable
6 from ..base import NameOnlyObject
7
8
9 __all__ = [
10 'F1',
11 'F2',
12 'F3',
13 ]
14
15 class BaseFunction(object):
16 class ParProxy(object):
17 def __init__(self, fcn, idx):
18 self.fcn_ = fcn
19 self.idx_ = idx
20
21 @property
22 def index(self):
23 return self.idx_
24
25 @property
26 def name(self):
27 return self.fcn_.GetParName(self.idx_)
28
29 @name.setter
30 def name(self, val):
31 return self.fcn_.SetParName(self.idx_, val)
32
33 @property
34 def value(self):
35 return self.fcn_.GetParameter(self.idx_)
36
37 @value.setter
38 def value(self, val):
39 self.fcn_.SetParameter(self.idx_, val)
40
41 @property
42 def error(self):
43 return self.fcn_.GetParError(self.idx_)
44
45 @error.setter
46 def error(self, val):
47 return self.fcn_.SetParError(self.idx_, val)
48
49 @property
50 def limits(self):
51 m = QROOT.Double()
52 M = QROOT.Double()
53 self.fcn_.GetParLimits(self.idx_, m, M)
54 return float(m), float(M)
55
56 @limits.setter
57 def limits(self, val):
58 if not hastattr(val, '__len__') and len(val) != 2:
59 raise RuntimeError('Function limits must be a tuple size 2')
60 self.fcn_.SetParLimits(self.idx_, val[0], val[1])
61
62 def __getitem__(self, value):
63 if isinstance(value, basestring):
64 idx = self.GetParNumber(value)
65 elif isinstance(value, int):
66 idx = value
67 else:
68 raise ValueError('Function index must be a integer or a string')
69 return BaseFunction.ParProxy(self, idx)
70
71
72 @snake_case_methods
73 class F1(Plottable, NameOnlyObject, BaseFunction, QROOT.TF1):
74 _ROOT = QROOT.TF1
75
76 def __init__(self, *args, **kwargs):
77 name = kwargs.pop('name', None)
78 super(F1, self).__init__(*args, name=name)
79 self._post_init(**kwargs)
80
81
82 @snake_case_methods
83 class F2(Plottable, NameOnlyObject, BaseFunction, QROOT.TF2):
84 _ROOT = QROOT.TF2
85
86 def __init__(self, *args, **kwargs):
87 name = kwargs.pop('name', None)
88 super(F2, self).__init__(*args, name=name)
89 self._post_init(**kwargs)
90
91
92 @snake_case_methods
93 class F3(Plottable, NameOnlyObject, BaseFunction, QROOT.TF3):
94 _ROOT = QROOT.TF3
95
96 def __init__(self, *args, **kwargs):
97 name = kwargs.pop('name', None)
98 super(F3, self).__init__(*args, name=name)
99 self._post_init(**kwargs)
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rootpy/plotting/func.py b/rootpy/plotting/func.py
--- a/rootpy/plotting/func.py
+++ b/rootpy/plotting/func.py
@@ -4,7 +4,7 @@
from ..decorators import snake_case_methods
from .base import Plottable
from ..base import NameOnlyObject
-
+import six
__all__ = [
'F1',
@@ -17,7 +17,7 @@
def __init__(self, fcn, idx):
self.fcn_ = fcn
self.idx_ = idx
-
+
@property
def index(self):
return self.idx_
@@ -25,7 +25,7 @@
@property
def name(self):
return self.fcn_.GetParName(self.idx_)
-
+
@name.setter
def name(self, val):
return self.fcn_.SetParName(self.idx_, val)
@@ -55,14 +55,14 @@
@limits.setter
def limits(self, val):
- if not hastattr(val, '__len__') and len(val) != 2:
+ if not hasattr(val, '__len__') and len(val) != 2:
raise RuntimeError('Function limits must be a tuple size 2')
self.fcn_.SetParLimits(self.idx_, val[0], val[1])
def __getitem__(self, value):
- if isinstance(value, basestring):
+ if isinstance(value, six.string_types):
idx = self.GetParNumber(value)
- elif isinstance(value, int):
+ elif isinstance(value, six.integer_types):
idx = value
else:
raise ValueError('Function index must be a integer or a string')
| {"golden_diff": "diff --git a/rootpy/plotting/func.py b/rootpy/plotting/func.py\n--- a/rootpy/plotting/func.py\n+++ b/rootpy/plotting/func.py\n@@ -4,7 +4,7 @@\n from ..decorators import snake_case_methods\n from .base import Plottable\n from ..base import NameOnlyObject\n-\n+import six\n \n __all__ = [\n 'F1',\n@@ -17,7 +17,7 @@\n def __init__(self, fcn, idx):\n self.fcn_ = fcn\n self.idx_ = idx\n- \n+\n @property\n def index(self):\n return self.idx_\n@@ -25,7 +25,7 @@\n @property\n def name(self):\n return self.fcn_.GetParName(self.idx_)\n- \n+\n @name.setter\n def name(self, val):\n return self.fcn_.SetParName(self.idx_, val)\n@@ -55,14 +55,14 @@\n \n @limits.setter\n def limits(self, val):\n- if not hastattr(val, '__len__') and len(val) != 2:\n+ if not hasattr(val, '__len__') and len(val) != 2:\n raise RuntimeError('Function limits must be a tuple size 2')\n self.fcn_.SetParLimits(self.idx_, val[0], val[1])\n \n def __getitem__(self, value):\n- if isinstance(value, basestring):\n+ if isinstance(value, six.string_types):\n idx = self.GetParNumber(value)\n- elif isinstance(value, int):\n+ elif isinstance(value, six.integer_types):\n idx = value\n else:\n raise ValueError('Function index must be a integer or a string')\n", "issue": "basestring\nHi there\r\nI found the following issue:\r\nIf I'm using the F1 object from rootpy.plotting and try to access a parameter using [parnr] (the __getitem__) methode, I get the following error:\r\n`NameError: name 'basestring' is not defined`\r\nI'm using python 3.6 which doesn't has the basestring data type anymore..\r\n\r\nhttps://github.com/rootpy/rootpy/blob/457e074056a916fff848978ef68b7f5107856e47/rootpy/plotting/func.py#L63\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom .. 
import QROOT\nfrom ..decorators import snake_case_methods\nfrom .base import Plottable\nfrom ..base import NameOnlyObject\n\n\n__all__ = [\n 'F1',\n 'F2',\n 'F3',\n]\n\nclass BaseFunction(object):\n class ParProxy(object):\n def __init__(self, fcn, idx):\n self.fcn_ = fcn\n self.idx_ = idx\n \n @property\n def index(self):\n return self.idx_\n\n @property\n def name(self):\n return self.fcn_.GetParName(self.idx_)\n \n @name.setter\n def name(self, val):\n return self.fcn_.SetParName(self.idx_, val)\n\n @property\n def value(self):\n return self.fcn_.GetParameter(self.idx_)\n\n @value.setter\n def value(self, val):\n self.fcn_.SetParameter(self.idx_, val)\n\n @property\n def error(self):\n return self.fcn_.GetParError(self.idx_)\n\n @error.setter\n def error(self, val):\n return self.fcn_.SetParError(self.idx_, val)\n\n @property\n def limits(self):\n m = QROOT.Double()\n M = QROOT.Double()\n self.fcn_.GetParLimits(self.idx_, m, M)\n return float(m), float(M)\n\n @limits.setter\n def limits(self, val):\n if not hastattr(val, '__len__') and len(val) != 2:\n raise RuntimeError('Function limits must be a tuple size 2')\n self.fcn_.SetParLimits(self.idx_, val[0], val[1])\n\n def __getitem__(self, value):\n if isinstance(value, basestring):\n idx = self.GetParNumber(value)\n elif isinstance(value, int):\n idx = value\n else:\n raise ValueError('Function index must be a integer or a string')\n return BaseFunction.ParProxy(self, idx)\n\n\n@snake_case_methods\nclass F1(Plottable, NameOnlyObject, BaseFunction, QROOT.TF1):\n _ROOT = QROOT.TF1\n\n def __init__(self, *args, **kwargs):\n name = kwargs.pop('name', None)\n super(F1, self).__init__(*args, name=name)\n self._post_init(**kwargs)\n\n\n@snake_case_methods\nclass F2(Plottable, NameOnlyObject, BaseFunction, QROOT.TF2):\n _ROOT = QROOT.TF2\n\n def __init__(self, *args, **kwargs):\n name = kwargs.pop('name', None)\n super(F2, self).__init__(*args, name=name)\n self._post_init(**kwargs)\n\n\n@snake_case_methods\nclass F3(Plottable, NameOnlyObject, BaseFunction, QROOT.TF3):\n _ROOT = QROOT.TF3\n\n def __init__(self, *args, **kwargs):\n name = kwargs.pop('name', None)\n super(F3, self).__init__(*args, name=name)\n self._post_init(**kwargs)\n", "path": "rootpy/plotting/func.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom .. 
import QROOT\nfrom ..decorators import snake_case_methods\nfrom .base import Plottable\nfrom ..base import NameOnlyObject\nimport six\n\n__all__ = [\n 'F1',\n 'F2',\n 'F3',\n]\n\nclass BaseFunction(object):\n class ParProxy(object):\n def __init__(self, fcn, idx):\n self.fcn_ = fcn\n self.idx_ = idx\n\n @property\n def index(self):\n return self.idx_\n\n @property\n def name(self):\n return self.fcn_.GetParName(self.idx_)\n\n @name.setter\n def name(self, val):\n return self.fcn_.SetParName(self.idx_, val)\n\n @property\n def value(self):\n return self.fcn_.GetParameter(self.idx_)\n\n @value.setter\n def value(self, val):\n self.fcn_.SetParameter(self.idx_, val)\n\n @property\n def error(self):\n return self.fcn_.GetParError(self.idx_)\n\n @error.setter\n def error(self, val):\n return self.fcn_.SetParError(self.idx_, val)\n\n @property\n def limits(self):\n m = QROOT.Double()\n M = QROOT.Double()\n self.fcn_.GetParLimits(self.idx_, m, M)\n return float(m), float(M)\n\n @limits.setter\n def limits(self, val):\n if not hasattr(val, '__len__') and len(val) != 2:\n raise RuntimeError('Function limits must be a tuple size 2')\n self.fcn_.SetParLimits(self.idx_, val[0], val[1])\n\n def __getitem__(self, value):\n if isinstance(value, six.string_types):\n idx = self.GetParNumber(value)\n elif isinstance(value, six.integer_types):\n idx = value\n else:\n raise ValueError('Function index must be a integer or a string')\n return BaseFunction.ParProxy(self, idx)\n\n\n@snake_case_methods\nclass F1(Plottable, NameOnlyObject, BaseFunction, QROOT.TF1):\n _ROOT = QROOT.TF1\n\n def __init__(self, *args, **kwargs):\n name = kwargs.pop('name', None)\n super(F1, self).__init__(*args, name=name)\n self._post_init(**kwargs)\n\n\n@snake_case_methods\nclass F2(Plottable, NameOnlyObject, BaseFunction, QROOT.TF2):\n _ROOT = QROOT.TF2\n\n def __init__(self, *args, **kwargs):\n name = kwargs.pop('name', None)\n super(F2, self).__init__(*args, name=name)\n self._post_init(**kwargs)\n\n\n@snake_case_methods\nclass F3(Plottable, NameOnlyObject, BaseFunction, QROOT.TF3):\n _ROOT = QROOT.TF3\n\n def __init__(self, *args, **kwargs):\n name = kwargs.pop('name', None)\n super(F3, self).__init__(*args, name=name)\n self._post_init(**kwargs)\n", "path": "rootpy/plotting/func.py"}]} | 1,288 | 389 |
gh_patches_debug_19990 | rasdani/github-patches | git_diff | Parsl__parsl-201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Passing Files breaks over IPP
The new File class contains a dictionary that maps DataFutures for each site to which it is being staged, and it holds a reference to the DataManager. Neither of these is pickle-able.
So if we do something like this :+1:
```
data = File("foo.txt")
fu = remote_app(inputs=[data])
fu.result() # <--- We'll get an error from here
```
Here's the relevant piece from the exception traceback:
```
File "/usr/local/lib/python3.5/dist-packages/ipyparallel/serialize/serialize.py", line 112, in serialize_object
buffers.insert(0, pickle.dumps(cobj, PICKLE_PROTOCOL))
TypeError: can't pickle _thread.lock objects
```
I believe that the File object is the best place to hold the Future information about itself, and that would give us the opportunity to do smarter file staging in the future. So I propose that we fix this with a custom pickler for the File class.
This is a blocker for 0.5.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/data_provider/files.py`
Content:
```
1 """Define the File Type.
2
3 The primary purpose of the File object is to track the protocol to be used
4 to transfer the file as well as to give the appropriate filepath depending
5 on where(client-side, remote-side, intermediary-side) the File.filepath is
6 being called from
7 """
8
9 import os
10 import logging
11 from urllib.parse import urlparse
12 from parsl.data_provider.data_manager import DataManager
13
14
15 logger = logging.getLogger(__name__)
16
17
18 class File(str):
19 """The Parsl File Class.
20
21 This is planned to be a very simple class that simply
22 captures various attributes of a file, and relies on client-side and worker-side
23 systems to enable to appropriate transfer of files.
24 """
25
26 def __init__(self, url, dman=None, cache=False, caching_dir=".", staging='direct'):
27 """Construct a File object from a url string.
28
29 Args:
30 - url (string) : url string of the file e.g.
31 - 'input.txt'
32 - 'file:///scratch/proj101/input.txt'
33 - 'globus://go#ep1/~/data/input.txt'
34 - 'globus://ddb59aef-6d04-11e5-ba46-22000b92c6ec/home/johndoe/data/input.txt'
35 - dman (DataManager) : data manager
36 """
37 self.url = url
38 parsed_url = urlparse(self.url)
39 self.scheme = parsed_url.scheme if parsed_url.scheme else 'file'
40 self.netloc = parsed_url.netloc
41 self.path = parsed_url.path
42 self.filename = os.path.basename(self.path)
43 self.dman = dman if dman else DataManager.get_data_manager()
44 self.data_future = {}
45 if self.scheme != 'file':
46 self.dman.add_file(self)
47
48 self.cache = cache
49 self.caching_dir = caching_dir
50 self.staging = staging
51
52 def __str__(self):
53 return self.filepath
54
55 def __repr__(self):
56 return self.__str__()
57
58 def __fspath__(self):
59 return self.filepath
60
61 @property
62 def filepath(self):
63 """Return the resolved filepath on the side where it is called from.
64
65 The appropriate filepath will be returned when called from within
66 an app running remotely as well as regular python on the client side.
67
68 Args:
69 - self
70 Returns:
71 - filepath (string)
72 """
73 if self.scheme == 'globus':
74 if hasattr(self, 'local_path'):
75 return self.local_path
76
77 if 'exec_site' not in globals() or self.staging == 'direct':
78 # Assume local and direct
79 return self.path
80 else:
81 # Return self.path for now
82 return self.path
83
84 def stage_in(self, site=None):
85 """Transport file from the site of origin to local site."""
86 return self.dman.stage_in(self, site)
87
88 def stage_out(self):
89 """Transport file from local filesystem to origin site."""
90 return self.dman.stage_out(self)
91
92 def set_data_future(self, df, site=None):
93 self.data_future[site] = df
94
95 def get_data_future(self, site):
96 return self.data_future.get(site)
97
98
99 if __name__ == '__main__':
100
101 x = File('./files.py')
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsl/data_provider/files.py b/parsl/data_provider/files.py
--- a/parsl/data_provider/files.py
+++ b/parsl/data_provider/files.py
@@ -95,6 +95,34 @@
def get_data_future(self, site):
return self.data_future.get(site)
+ def __getstate__(self):
+ """ Overriding the default pickling method.
+
+ The File object get's pickled and transmitted to remote sites during app
+ execution. This enables pickling while retaining the lockable resources
+ to the DFK/Client side.
+ """
+
+ state = self.__dict__.copy()
+
+ # We have already made a copy of the future objects, they are now no longer
+ # reliable as means to wait for the staging events
+ for site in state["data_future"]:
+ # This is assumed to be safe, since the data_future represents staging to a specific site
+ # and a site will only have one filepath.
+ state["data_future"][site] = state["data_future"][site].filepath
+
+ state["dman"] = None
+
+ return state
+
+ def __setstate__(self, state):
+ """ Overloading the default pickle method to reconstruct a File from serialized form
+
+ This might require knowledge of whethere a DataManager is already present in the context.
+ """
+ self.__dict__.update(state)
+
if __name__ == '__main__':
| {"golden_diff": "diff --git a/parsl/data_provider/files.py b/parsl/data_provider/files.py\n--- a/parsl/data_provider/files.py\n+++ b/parsl/data_provider/files.py\n@@ -95,6 +95,34 @@\n def get_data_future(self, site):\n return self.data_future.get(site)\n \n+ def __getstate__(self):\n+ \"\"\" Overriding the default pickling method.\n+\n+ The File object get's pickled and transmitted to remote sites during app\n+ execution. This enables pickling while retaining the lockable resources\n+ to the DFK/Client side.\n+ \"\"\"\n+\n+ state = self.__dict__.copy()\n+\n+ # We have already made a copy of the future objects, they are now no longer\n+ # reliable as means to wait for the staging events\n+ for site in state[\"data_future\"]:\n+ # This is assumed to be safe, since the data_future represents staging to a specific site\n+ # and a site will only have one filepath.\n+ state[\"data_future\"][site] = state[\"data_future\"][site].filepath\n+\n+ state[\"dman\"] = None\n+\n+ return state\n+\n+ def __setstate__(self, state):\n+ \"\"\" Overloading the default pickle method to reconstruct a File from serialized form\n+\n+ This might require knowledge of whethere a DataManager is already present in the context.\n+ \"\"\"\n+ self.__dict__.update(state)\n+\n \n if __name__ == '__main__':\n", "issue": "Passing Files breaks over IPP\nThe new File class contains a dictionary that maps DataFutures for each site to which it is being staged and contains a reference to the DataManager. Neither of these are pickle-able.\r\n\r\nSo if we do something like this :+1: \r\n```\r\ndata = File(\"foo.txt\")\r\nfu = remote_app(inputs=[data])\r\nfu.result() # <--- We'll get an error from here\r\n```\r\nHere's the relevant piece from the exception traceback :\r\n```\r\nFile \"/usr/local/lib/python3.5/dist-packages/ipyparallel/serialize/serialize.py\", line 112, in serialize_object\r\n buffers.insert(0, pickle.dumps(cobj, PICKLE_PROTOCOL))\r\nTypeError: can't pickle _thread.lock objects\r\n```\r\n\r\nI believe that the File object is the best place to hold the Future information about itself, and that would give us the opportunity to do smarter file staging in the future. 
So I propose that we fix this with a custom pickler for the File class.\r\n\r\nThis is blocker for 0.5.0.\n", "before_files": [{"content": "\"\"\"Define the File Type.\n\nThe primary purpose of the File object is to track the protocol to be used\nto transfer the file as well as to give the appropriate filepath depending\non where(client-side, remote-side, intermediary-side) the File.filepath is\nbeing called from\n\"\"\"\n\nimport os\nimport logging\nfrom urllib.parse import urlparse\nfrom parsl.data_provider.data_manager import DataManager\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass File(str):\n \"\"\"The Parsl File Class.\n\n This is planned to be a very simple class that simply\n captures various attributes of a file, and relies on client-side and worker-side\n systems to enable to appropriate transfer of files.\n \"\"\"\n\n def __init__(self, url, dman=None, cache=False, caching_dir=\".\", staging='direct'):\n \"\"\"Construct a File object from a url string.\n\n Args:\n - url (string) : url string of the file e.g.\n - 'input.txt'\n - 'file:///scratch/proj101/input.txt'\n - 'globus://go#ep1/~/data/input.txt'\n - 'globus://ddb59aef-6d04-11e5-ba46-22000b92c6ec/home/johndoe/data/input.txt'\n - dman (DataManager) : data manager\n \"\"\"\n self.url = url\n parsed_url = urlparse(self.url)\n self.scheme = parsed_url.scheme if parsed_url.scheme else 'file'\n self.netloc = parsed_url.netloc\n self.path = parsed_url.path\n self.filename = os.path.basename(self.path)\n self.dman = dman if dman else DataManager.get_data_manager()\n self.data_future = {}\n if self.scheme != 'file':\n self.dman.add_file(self)\n\n self.cache = cache\n self.caching_dir = caching_dir\n self.staging = staging\n\n def __str__(self):\n return self.filepath\n\n def __repr__(self):\n return self.__str__()\n\n def __fspath__(self):\n return self.filepath\n\n @property\n def filepath(self):\n \"\"\"Return the resolved filepath on the side where it is called from.\n\n The appropriate filepath will be returned when called from within\n an app running remotely as well as regular python on the client side.\n\n Args:\n - self\n Returns:\n - filepath (string)\n \"\"\"\n if self.scheme == 'globus':\n if hasattr(self, 'local_path'):\n return self.local_path\n\n if 'exec_site' not in globals() or self.staging == 'direct':\n # Assume local and direct\n return self.path\n else:\n # Return self.path for now\n return self.path\n\n def stage_in(self, site=None):\n \"\"\"Transport file from the site of origin to local site.\"\"\"\n return self.dman.stage_in(self, site)\n\n def stage_out(self):\n \"\"\"Transport file from local filesystem to origin site.\"\"\"\n return self.dman.stage_out(self)\n\n def set_data_future(self, df, site=None):\n self.data_future[site] = df\n\n def get_data_future(self, site):\n return self.data_future.get(site)\n\n\nif __name__ == '__main__':\n\n x = File('./files.py')\n", "path": "parsl/data_provider/files.py"}], "after_files": [{"content": "\"\"\"Define the File Type.\n\nThe primary purpose of the File object is to track the protocol to be used\nto transfer the file as well as to give the appropriate filepath depending\non where(client-side, remote-side, intermediary-side) the File.filepath is\nbeing called from\n\"\"\"\n\nimport os\nimport logging\nfrom urllib.parse import urlparse\nfrom parsl.data_provider.data_manager import DataManager\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass File(str):\n \"\"\"The Parsl File Class.\n\n This is planned to be a very simple class that simply\n captures 
various attributes of a file, and relies on client-side and worker-side\n systems to enable to appropriate transfer of files.\n \"\"\"\n\n def __init__(self, url, dman=None, cache=False, caching_dir=\".\", staging='direct'):\n \"\"\"Construct a File object from a url string.\n\n Args:\n - url (string) : url string of the file e.g.\n - 'input.txt'\n - 'file:///scratch/proj101/input.txt'\n - 'globus://go#ep1/~/data/input.txt'\n - 'globus://ddb59aef-6d04-11e5-ba46-22000b92c6ec/home/johndoe/data/input.txt'\n - dman (DataManager) : data manager\n \"\"\"\n self.url = url\n parsed_url = urlparse(self.url)\n self.scheme = parsed_url.scheme if parsed_url.scheme else 'file'\n self.netloc = parsed_url.netloc\n self.path = parsed_url.path\n self.filename = os.path.basename(self.path)\n self.dman = dman if dman else DataManager.get_data_manager()\n self.data_future = {}\n if self.scheme != 'file':\n self.dman.add_file(self)\n\n self.cache = cache\n self.caching_dir = caching_dir\n self.staging = staging\n\n def __str__(self):\n return self.filepath\n\n def __repr__(self):\n return self.__str__()\n\n def __fspath__(self):\n return self.filepath\n\n @property\n def filepath(self):\n \"\"\"Return the resolved filepath on the side where it is called from.\n\n The appropriate filepath will be returned when called from within\n an app running remotely as well as regular python on the client side.\n\n Args:\n - self\n Returns:\n - filepath (string)\n \"\"\"\n if self.scheme == 'globus':\n if hasattr(self, 'local_path'):\n return self.local_path\n\n if 'exec_site' not in globals() or self.staging == 'direct':\n # Assume local and direct\n return self.path\n else:\n # Return self.path for now\n return self.path\n\n def stage_in(self, site=None):\n \"\"\"Transport file from the site of origin to local site.\"\"\"\n return self.dman.stage_in(self, site)\n\n def stage_out(self):\n \"\"\"Transport file from local filesystem to origin site.\"\"\"\n return self.dman.stage_out(self)\n\n def set_data_future(self, df, site=None):\n self.data_future[site] = df\n\n def get_data_future(self, site):\n return self.data_future.get(site)\n\n def __getstate__(self):\n \"\"\" Overriding the default pickling method.\n\n The File object get's pickled and transmitted to remote sites during app\n execution. This enables pickling while retaining the lockable resources\n to the DFK/Client side.\n \"\"\"\n\n state = self.__dict__.copy()\n\n # We have already made a copy of the future objects, they are now no longer\n # reliable as means to wait for the staging events\n for site in state[\"data_future\"]:\n # This is assumed to be safe, since the data_future represents staging to a specific site\n # and a site will only have one filepath.\n state[\"data_future\"][site] = state[\"data_future\"][site].filepath\n\n state[\"dman\"] = None\n\n return state\n\n def __setstate__(self, state):\n \"\"\" Overloading the default pickle method to reconstruct a File from serialized form\n\n This might require knowledge of whethere a DataManager is already present in the context.\n \"\"\"\n self.__dict__.update(state)\n\n\nif __name__ == '__main__':\n\n x = File('./files.py')\n", "path": "parsl/data_provider/files.py"}]} | 1,402 | 332 |
gh_patches_debug_64419 | rasdani/github-patches | git_diff | pwndbg__pwndbg-584 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
r2: 'NoneType' object has no attribute 'cast' (<class 'AttributeError'>)
### Description
This happens when I initiate r2 after loading a binary in pwndbg.
I have tested this in both WSL and on a 64-bit Ubuntu machine, with roughly the same behavior.
### Steps to reproduce
1. Load a binary
2. Run r2
Exception occured: r2: 'NoneType' object has no attribute 'cast' (<class 'AttributeError'>)
Traceback (most recent call last):
File "/root/reverse/pwndbg/pwndbg/commands/__init__.py", line 135, in __call__
return self.function(*args, **kwargs)
File "/root/reverse/pwndbg/pwndbg/commands/__init__.py", line 215, in _OnlyWithFile
return function(*a, **kw)
File "/root/reverse/pwndbg/pwndbg/commands/radare2.py", line 28, in r2
addr = pwndbg.regs.pc
File "/root/reverse/pwndbg/pwndbg/memoize.py", line 48, in __call__
value = self.func(*args, **kwargs)
File "/root/reverse/pwndbg/pwndbg/regs.py", line 280, in __getattr__
value = value.cast(pwndbg.typeinfo.ptrdiff)
AttributeError: 'NoneType' object has no attribute 'cast'
### My setup
Gdb: 7.11.1
Python: 3.5.2 (default, Nov 12 2018, 13:43:14) [GCC 5.4.0 20160609]
Pwndbg: 1.1.0 build: 054f209
Capstone: 4.0.1024
Unicorn: 1.0.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/radare2.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6 from __future__ import unicode_literals
7
8 import argparse
9 import subprocess
10
11 import pwndbg.commands
12
13 parser = argparse.ArgumentParser(description='Launches radare2',
14 epilog="Example: r2 -- -S -AA")
15 parser.add_argument('--no-seek', action='store_true',
16 help='Do not seek to current pc')
17 parser.add_argument('arguments', nargs='*', type=str,
18 help='Arguments to pass to radare')
19
20
21 @pwndbg.commands.ArgparsedCommand(parser)
22 @pwndbg.commands.OnlyWithFile
23 def r2(arguments, no_seek=False):
24 filename = pwndbg.file.get_file(pwndbg.proc.exe)
25
26 # Build up the command line to run
27 cmd = ['radare2', filename]
28 addr = pwndbg.regs.pc
29 if pwndbg.elf.get_elf_info(filename).is_pie:
30 addr -= pwndbg.elf.exe().address
31 if not no_seek and pwndbg.proc.alive:
32 cmd.extend(['-s', hex(addr)])
33 cmd += arguments
34
35 try:
36 subprocess.call(cmd)
37 except Exception:
38 print("Could not run radare2. Please ensure it's installed and in $PATH.")
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/radare2.py b/pwndbg/commands/radare2.py
--- a/pwndbg/commands/radare2.py
+++ b/pwndbg/commands/radare2.py
@@ -25,11 +25,12 @@
# Build up the command line to run
cmd = ['radare2', filename]
- addr = pwndbg.regs.pc
- if pwndbg.elf.get_elf_info(filename).is_pie:
- addr -= pwndbg.elf.exe().address
- if not no_seek and pwndbg.proc.alive:
- cmd.extend(['-s', hex(addr)])
+ if pwndbg.proc.alive:
+ addr = pwndbg.regs.pc
+ if pwndbg.elf.get_elf_info(filename).is_pie:
+ addr -= pwndbg.elf.exe().address
+ if not no_seek:
+ cmd.extend(['-s', hex(addr)])
cmd += arguments
try:
| {"golden_diff": "diff --git a/pwndbg/commands/radare2.py b/pwndbg/commands/radare2.py\n--- a/pwndbg/commands/radare2.py\n+++ b/pwndbg/commands/radare2.py\n@@ -25,11 +25,12 @@\n \n # Build up the command line to run\n cmd = ['radare2', filename]\n- addr = pwndbg.regs.pc\n- if pwndbg.elf.get_elf_info(filename).is_pie:\n- addr -= pwndbg.elf.exe().address\n- if not no_seek and pwndbg.proc.alive:\n- cmd.extend(['-s', hex(addr)])\n+ if pwndbg.proc.alive:\n+ addr = pwndbg.regs.pc\n+ if pwndbg.elf.get_elf_info(filename).is_pie:\n+ addr -= pwndbg.elf.exe().address\n+ if not no_seek:\n+ cmd.extend(['-s', hex(addr)])\n cmd += arguments\n \n try:\n", "issue": "r2: 'NoneType' object has no attribute 'cast' (<class 'AttributeError'>)\n### Description\r\n\r\n\r\nThis happens when i initiate r2 after loading a binary in pwndbg \r\nI have tested both in wsl and a 64bit ubuntu machine same behavior sorta \r\n\r\n\r\n### Steps to reproduce\r\n\r\n\r\n1. Load a binary \r\n2. Run r2 \r\nException occured: r2: 'NoneType' object has no attribute 'cast' (<class 'AttributeError'>)\r\nTraceback (most recent call last):\r\n File \"/root/reverse/pwndbg/pwndbg/commands/__init__.py\", line 135, in __call__\r\n return self.function(*args, **kwargs)\r\n File \"/root/reverse/pwndbg/pwndbg/commands/__init__.py\", line 215, in _OnlyWithFile\r\n return function(*a, **kw)\r\n File \"/root/reverse/pwndbg/pwndbg/commands/radare2.py\", line 28, in r2\r\n addr = pwndbg.regs.pc\r\n File \"/root/reverse/pwndbg/pwndbg/memoize.py\", line 48, in __call__\r\n value = self.func(*args, **kwargs)\r\n File \"/root/reverse/pwndbg/pwndbg/regs.py\", line 280, in __getattr__\r\n value = value.cast(pwndbg.typeinfo.ptrdiff)\r\nAttributeError: 'NoneType' object has no attribute 'cast'\r\n\r\n\r\n\r\n\r\n\r\n### My setup\r\n\r\n\r\nGdb: 7.11.1\r\nPython: 3.5.2 (default, Nov 12 2018, 13:43:14) [GCC 5.4.0 20160609]\r\nPwndbg: 1.1.0 build: 054f209\r\nCapstone: 4.0.1024\r\nUnicorn: 1.0.1\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport argparse\nimport subprocess\n\nimport pwndbg.commands\n\nparser = argparse.ArgumentParser(description='Launches radare2',\n epilog=\"Example: r2 -- -S -AA\")\nparser.add_argument('--no-seek', action='store_true',\n help='Do not seek to current pc')\nparser.add_argument('arguments', nargs='*', type=str,\n help='Arguments to pass to radare')\n\n\[email protected](parser)\[email protected]\ndef r2(arguments, no_seek=False):\n filename = pwndbg.file.get_file(pwndbg.proc.exe)\n\n # Build up the command line to run\n cmd = ['radare2', filename]\n addr = pwndbg.regs.pc\n if pwndbg.elf.get_elf_info(filename).is_pie:\n addr -= pwndbg.elf.exe().address\n if not no_seek and pwndbg.proc.alive:\n cmd.extend(['-s', hex(addr)])\n cmd += arguments\n\n try:\n subprocess.call(cmd)\n except Exception:\n print(\"Could not run radare2. 
Please ensure it's installed and in $PATH.\")\n", "path": "pwndbg/commands/radare2.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport argparse\nimport subprocess\n\nimport pwndbg.commands\n\nparser = argparse.ArgumentParser(description='Launches radare2',\n epilog=\"Example: r2 -- -S -AA\")\nparser.add_argument('--no-seek', action='store_true',\n help='Do not seek to current pc')\nparser.add_argument('arguments', nargs='*', type=str,\n help='Arguments to pass to radare')\n\n\[email protected](parser)\[email protected]\ndef r2(arguments, no_seek=False):\n filename = pwndbg.file.get_file(pwndbg.proc.exe)\n\n # Build up the command line to run\n cmd = ['radare2', filename]\n if pwndbg.proc.alive:\n addr = pwndbg.regs.pc\n if pwndbg.elf.get_elf_info(filename).is_pie:\n addr -= pwndbg.elf.exe().address\n if not no_seek:\n cmd.extend(['-s', hex(addr)])\n cmd += arguments\n\n try:\n subprocess.call(cmd)\n except Exception:\n print(\"Could not run radare2. Please ensure it's installed and in $PATH.\")\n", "path": "pwndbg/commands/radare2.py"}]} | 1,067 | 232 |
gh_patches_debug_19134 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1521 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Executable `prettier` not found
hello hello!
As discussed on discord, I'm having an issue running prettier via pre-commit:
```bash
$ pre-commit --version
pre-commit 2.5.1
$ cat .pre-commit-config.yaml
repos:
- repo: https://github.com/prettier/prettier
rev: 2.0.5
hooks:
- id: prettier
$ pre-commit clean
Cleaned /home/rkm/.cache/pre-commit.
> pre-commit run prettier --files README.md
[INFO] Initializing environment for https://github.com/prettier/prettier.
[INFO] Installing environment for https://github.com/prettier/prettier.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
prettier.................................................................Failed
- hook id: prettier
- exit code: 1
Executable `prettier` not found
```
it seems like prettier is installed correctly, but the symlink to it is not created:
```bash
$ find ~/.cache/pre-commit/ -name prettier.js
/home/rkm/.cache/pre-commit/repoes79dg4v/bin/prettier.js
$ ls -l $(find ~/.cache/pre-commit/ -name node_env-default)/bin
total 70376
-rwxr-xr-x. 1 rkm rkm 3702 Jun 17 17:30 activate
-rwxr-xr-x. 1 rkm rkm 3964 Jun 17 17:30 activate.fish
-rwxr-xr-x. 1 rkm rkm 72052312 Jun 2 14:33 node
lrwxrwxrwx. 1 rkm rkm 4 Jun 17 17:30 nodejs -> node
lrwxrwxrwx. 1 rkm rkm 38 Jun 17 17:30 npm -> ../lib/node_modules/npm/bin/npm-cli.js
lrwxrwxrwx. 1 rkm rkm 38 Jun 17 17:30 npx -> ../lib/node_modules/npm/bin/npx-cli.js
-rwxr-xr-x. 1 rkm rkm 355 Jun 17 17:30 shim
```
(doing the same in a docker container results in a `prettier` symlink being created there).
I suspect my VM may be borked somehow, but not sure how to debug this further. Any thoughts? Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/node.py`
Content:
```
1 import contextlib
2 import functools
3 import os
4 import sys
5 from typing import Generator
6 from typing import Sequence
7 from typing import Tuple
8
9 import pre_commit.constants as C
10 from pre_commit import parse_shebang
11 from pre_commit.envcontext import envcontext
12 from pre_commit.envcontext import PatchesT
13 from pre_commit.envcontext import Var
14 from pre_commit.hook import Hook
15 from pre_commit.languages import helpers
16 from pre_commit.languages.python import bin_dir
17 from pre_commit.prefix import Prefix
18 from pre_commit.util import clean_path_on_failure
19 from pre_commit.util import cmd_output
20 from pre_commit.util import cmd_output_b
21
22 ENVIRONMENT_DIR = 'node_env'
23 healthy = helpers.basic_healthy
24
25
26 @functools.lru_cache(maxsize=1)
27 def get_default_version() -> str:
28 # nodeenv does not yet support `-n system` on windows
29 if sys.platform == 'win32':
30 return C.DEFAULT
31 # if node is already installed, we can save a bunch of setup time by
32 # using the installed version
33 elif all(parse_shebang.find_executable(exe) for exe in ('node', 'npm')):
34 return 'system'
35 else:
36 return C.DEFAULT
37
38
39 def _envdir(prefix: Prefix, version: str) -> str:
40 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
41 return prefix.path(directory)
42
43
44 def get_env_patch(venv: str) -> PatchesT:
45 if sys.platform == 'cygwin': # pragma: no cover
46 _, win_venv, _ = cmd_output('cygpath', '-w', venv)
47 install_prefix = fr'{win_venv.strip()}\bin'
48 lib_dir = 'lib'
49 elif sys.platform == 'win32': # pragma: no cover
50 install_prefix = bin_dir(venv)
51 lib_dir = 'Scripts'
52 else: # pragma: win32 no cover
53 install_prefix = venv
54 lib_dir = 'lib'
55 return (
56 ('NODE_VIRTUAL_ENV', venv),
57 ('NPM_CONFIG_PREFIX', install_prefix),
58 ('npm_config_prefix', install_prefix),
59 ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),
60 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
61 )
62
63
64 @contextlib.contextmanager
65 def in_env(
66 prefix: Prefix,
67 language_version: str,
68 ) -> Generator[None, None, None]:
69 with envcontext(get_env_patch(_envdir(prefix, language_version))):
70 yield
71
72
73 def install_environment(
74 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
75 ) -> None:
76 additional_dependencies = tuple(additional_dependencies)
77 assert prefix.exists('package.json')
78 envdir = _envdir(prefix, version)
79
80 # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath
81 if sys.platform == 'win32': # pragma: no cover
82 envdir = fr'\\?\{os.path.normpath(envdir)}'
83 with clean_path_on_failure(envdir):
84 cmd = [
85 sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir,
86 ]
87 if version != C.DEFAULT:
88 cmd.extend(['-n', version])
89 cmd_output_b(*cmd)
90
91 with in_env(prefix, version):
92 # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449
93 # install as if we installed from git
94 helpers.run_setup_cmd(prefix, ('npm', 'install'))
95 helpers.run_setup_cmd(
96 prefix,
97 ('npm', 'install', '-g', '.', *additional_dependencies),
98 )
99
100
101 def run_hook(
102 hook: Hook,
103 file_args: Sequence[str],
104 color: bool,
105 ) -> Tuple[int, bytes]:
106 with in_env(hook.prefix, hook.language_version):
107 return helpers.run_xargs(hook, hook.cmd, file_args, color=color)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -10,6 +10,7 @@
from pre_commit import parse_shebang
from pre_commit.envcontext import envcontext
from pre_commit.envcontext import PatchesT
+from pre_commit.envcontext import UNSET
from pre_commit.envcontext import Var
from pre_commit.hook import Hook
from pre_commit.languages import helpers
@@ -56,6 +57,8 @@
('NODE_VIRTUAL_ENV', venv),
('NPM_CONFIG_PREFIX', install_prefix),
('npm_config_prefix', install_prefix),
+ ('NPM_CONFIG_USERCONFIG', UNSET),
+ ('npm_config_userconfig', UNSET),
('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),
('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
)
| {"golden_diff": "diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py\n--- a/pre_commit/languages/node.py\n+++ b/pre_commit/languages/node.py\n@@ -10,6 +10,7 @@\n from pre_commit import parse_shebang\n from pre_commit.envcontext import envcontext\n from pre_commit.envcontext import PatchesT\n+from pre_commit.envcontext import UNSET\n from pre_commit.envcontext import Var\n from pre_commit.hook import Hook\n from pre_commit.languages import helpers\n@@ -56,6 +57,8 @@\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n+ ('NPM_CONFIG_USERCONFIG', UNSET),\n+ ('npm_config_userconfig', UNSET),\n ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n", "issue": "Executable `prettier` not found\nhello hello!\r\n\r\nAs discussed on discord, I'm having an issue running prettier via pre-commit:\r\n\r\n```bash\r\n$ pre-commit --version\r\npre-commit 2.5.1\r\n\r\n$ cat .pre-commit-config.yaml\r\nrepos:\r\n - repo: https://github.com/prettier/prettier\r\n rev: 2.0.5\r\n hooks:\r\n - id: prettier\r\n\r\n$ pre-commit clean\r\nCleaned /home/rkm/.cache/pre-commit.\r\n\r\n> pre-commit run prettier --files README.md\r\n[INFO] Initializing environment for https://github.com/prettier/prettier.\r\n[INFO] Installing environment for https://github.com/prettier/prettier.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\nprettier.................................................................Failed\r\n- hook id: prettier\r\n- exit code: 1\r\n\r\nExecutable `prettier` not found\r\n```\r\n\r\nit seems like prettier is installed correctly, but the symlink to it is not created:\r\n\r\n```bash\r\n$ find ~/.cache/pre-commit/ -name prettier.js\r\n/home/rkm/.cache/pre-commit/repoes79dg4v/bin/prettier.js\r\n\r\n$ ls -l $(find ~/.cache/pre-commit/ -name node_env-default)/bin\r\ntotal 70376\r\n-rwxr-xr-x. 1 rkm rkm 3702 Jun 17 17:30 activate\r\n-rwxr-xr-x. 1 rkm rkm 3964 Jun 17 17:30 activate.fish\r\n-rwxr-xr-x. 1 rkm rkm 72052312 Jun 2 14:33 node\r\nlrwxrwxrwx. 1 rkm rkm 4 Jun 17 17:30 nodejs -> node\r\nlrwxrwxrwx. 1 rkm rkm 38 Jun 17 17:30 npm -> ../lib/node_modules/npm/bin/npm-cli.js\r\nlrwxrwxrwx. 1 rkm rkm 38 Jun 17 17:30 npx -> ../lib/node_modules/npm/bin/npx-cli.js\r\n-rwxr-xr-x. 1 rkm rkm 355 Jun 17 17:30 shim \r\n```\r\n\r\n(doing the same in a docker container results in a `prettier` symlink being created there).\r\n\r\nI suspect my VM may be borked somehow, but not sure how to debug this further. Any thoughts? 
Thanks!\r\n\n", "before_files": [{"content": "import contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Generator\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit import parse_shebang\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'node_env'\nhealthy = helpers.basic_healthy\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n # nodeenv does not yet support `-n system` on windows\n if sys.platform == 'win32':\n return C.DEFAULT\n # if node is already installed, we can save a bunch of setup time by\n # using the installed version\n elif all(parse_shebang.find_executable(exe) for exe in ('node', 'npm')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef _envdir(prefix: Prefix, version: str) -> str:\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n return prefix.path(directory)\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = fr'{win_venv.strip()}\\bin'\n lib_dir = 'lib'\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n lib_dir = 'Scripts'\n else: # pragma: win32 no cover\n install_prefix = venv\n lib_dir = 'lib'\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(\n prefix: Prefix,\n language_version: str,\n) -> Generator[None, None, None]:\n with envcontext(get_env_patch(_envdir(prefix, language_version))):\n yield\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None:\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n envdir = _envdir(prefix, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = fr'\\\\?\\{os.path.normpath(envdir)}'\n with clean_path_on_failure(envdir):\n cmd = [\n sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir,\n ]\n if version != C.DEFAULT:\n cmd.extend(['-n', version])\n cmd_output_b(*cmd)\n\n with in_env(prefix, version):\n # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449\n # install as if we installed from git\n helpers.run_setup_cmd(prefix, ('npm', 'install'))\n helpers.run_setup_cmd(\n prefix,\n ('npm', 'install', '-g', '.', *additional_dependencies),\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n", "path": "pre_commit/languages/node.py"}], "after_files": [{"content": "import contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Generator\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport 
pre_commit.constants as C\nfrom pre_commit import parse_shebang\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'node_env'\nhealthy = helpers.basic_healthy\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n # nodeenv does not yet support `-n system` on windows\n if sys.platform == 'win32':\n return C.DEFAULT\n # if node is already installed, we can save a bunch of setup time by\n # using the installed version\n elif all(parse_shebang.find_executable(exe) for exe in ('node', 'npm')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef _envdir(prefix: Prefix, version: str) -> str:\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n return prefix.path(directory)\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = fr'{win_venv.strip()}\\bin'\n lib_dir = 'lib'\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n lib_dir = 'Scripts'\n else: # pragma: win32 no cover\n install_prefix = venv\n lib_dir = 'lib'\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NPM_CONFIG_USERCONFIG', UNSET),\n ('npm_config_userconfig', UNSET),\n ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(\n prefix: Prefix,\n language_version: str,\n) -> Generator[None, None, None]:\n with envcontext(get_env_patch(_envdir(prefix, language_version))):\n yield\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None:\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n envdir = _envdir(prefix, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = fr'\\\\?\\{os.path.normpath(envdir)}'\n with clean_path_on_failure(envdir):\n cmd = [\n sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir,\n ]\n if version != C.DEFAULT:\n cmd.extend(['-n', version])\n cmd_output_b(*cmd)\n\n with in_env(prefix, version):\n # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449\n # install as if we installed from git\n helpers.run_setup_cmd(prefix, ('npm', 'install'))\n helpers.run_setup_cmd(\n prefix,\n ('npm', 'install', '-g', '.', *additional_dependencies),\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n", "path": "pre_commit/languages/node.py"}]} | 1,958 | 213 |
gh_patches_debug_648 | rasdani/github-patches | git_diff | pex-tool__pex-2000 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.117
On the docket:
+ [x] Published pex on github no longer works with PyPy since 2.1.109 #1995
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.116"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.116"
+__version__ = "2.1.117"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.116\"\n+__version__ = \"2.1.117\"\n", "issue": "Release 2.1.117\nOn the docket:\r\n+ [x] Published pex on github no longer works with PyPy since 2.1.109 #1995\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.116\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.117\"\n", "path": "pex/version.py"}]} | 353 | 98 |
gh_patches_debug_10025 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5170 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_DOCKER_11 false positive when `--platform` is used
**Describe the issue**
CKV_DOCKER_11 false positive when `--platform` is used (possibly other arguments as well)
For reference: _"CKV_DOCKER_11: "Ensure From Alias are unique for multistage builds."_ In other words, make sure you add `as myAlias` at the end of your `FROM` line
**Examples**
This will PASS as expected:
`FROM node:16 as build`
Now, add `--platform` and it will FAIL:
`FROM --platform=linux/amd64 node:16 as build`
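To see why this trips the check, here is a small illustration (a sketch, not taken from the report or from the checkov sources; the check's actual code is quoted further below) of what a plain whitespace split produces for such a line:

```python
value = "--platform=linux/amd64 node:16 as build"

parts = value.split()
# parts == ['--platform=linux/amd64', 'node:16', 'as', 'build']
parts[2]                      # 'as' -- the extra flag shifts the tokens, so index 2 is no longer the alias
value.rsplit(maxsplit=1)[-1]  # 'build' -- the actual alias, regardless of leading flags
```

With more than one stage written this way, every stage records `'as'` as its alias, so the uniqueness check sees duplicates and fails.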
**Version (please complete the following information):**
```
> checkov -v
2.3.240
```
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/dockerfile/checks/AliasIsUnique.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from checkov.common.models.enums import CheckCategories, CheckResult
6 from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
7
8 if TYPE_CHECKING:
9 from dockerfile_parse.parser import _Instruction
10
11
12 class AliasIsUnique(BaseDockerfileCheck):
13 def __init__(self) -> None:
14 """
15 Ensure From Alias are unique for multistage builds.
16 """
17 name = "Ensure From Alias are unique for multistage builds."
18 id = "CKV_DOCKER_11"
19 supported_instructions = ("FROM",)
20 categories = (CheckCategories.CONVENTION,)
21 super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
22
23 def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:
24 alias = []
25 for instruction in conf:
26 if " as " in instruction["value"]:
27 temp = instruction["value"].split()
28 alias += [temp[2]]
29
30 if len(alias) == len(set(alias)):
31 return CheckResult.PASSED, None
32 else:
33 return CheckResult.FAILED, [conf[0]]
34
35
36 check = AliasIsUnique()
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/dockerfile/checks/AliasIsUnique.py b/checkov/dockerfile/checks/AliasIsUnique.py
--- a/checkov/dockerfile/checks/AliasIsUnique.py
+++ b/checkov/dockerfile/checks/AliasIsUnique.py
@@ -24,13 +24,12 @@
alias = []
for instruction in conf:
if " as " in instruction["value"]:
- temp = instruction["value"].split()
- alias += [temp[2]]
+ alias.append(instruction["value"].rsplit(maxsplit=1)[-1])
if len(alias) == len(set(alias)):
return CheckResult.PASSED, None
- else:
- return CheckResult.FAILED, [conf[0]]
+
+ return CheckResult.FAILED, [conf[0]]
check = AliasIsUnique()
| {"golden_diff": "diff --git a/checkov/dockerfile/checks/AliasIsUnique.py b/checkov/dockerfile/checks/AliasIsUnique.py\n--- a/checkov/dockerfile/checks/AliasIsUnique.py\n+++ b/checkov/dockerfile/checks/AliasIsUnique.py\n@@ -24,13 +24,12 @@\n alias = []\n for instruction in conf:\n if \" as \" in instruction[\"value\"]:\n- temp = instruction[\"value\"].split()\n- alias += [temp[2]]\n+ alias.append(instruction[\"value\"].rsplit(maxsplit=1)[-1])\n \n if len(alias) == len(set(alias)):\n return CheckResult.PASSED, None\n- else:\n- return CheckResult.FAILED, [conf[0]]\n+\n+ return CheckResult.FAILED, [conf[0]]\n \n \n check = AliasIsUnique()\n", "issue": "CKV_DOCKER_11 false positive when `--platform` is used\n**Describe the issue**\r\n\r\nCKV_DOCKER_11 false positive when `--platform` is used (possibly other arguments as well)\r\n\r\nFor reference: _\"CKV_DOCKER_11: \"Ensure From Alias are unique for multistage builds.\"_ In other words, make sure you add `as myAlias` at the end of your `FROM` line\r\n\r\n**Examples**\r\n\r\nThis will PASS as expected:\r\n`FROM node:16 as build`\r\n\r\nNow, add `--platform` and it will FAIL:\r\n`FROM --platform=linux/amd64 node:16 as build`\r\n\r\n**Version (please complete the following information):**\r\n```\r\n> checkov -v \r\n2.3.240\r\n```\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\nif TYPE_CHECKING:\n from dockerfile_parse.parser import _Instruction\n\n\nclass AliasIsUnique(BaseDockerfileCheck):\n def __init__(self) -> None:\n \"\"\"\n Ensure From Alias are unique for multistage builds.\n \"\"\"\n name = \"Ensure From Alias are unique for multistage builds.\"\n id = \"CKV_DOCKER_11\"\n supported_instructions = (\"FROM\",)\n categories = (CheckCategories.CONVENTION,)\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:\n alias = []\n for instruction in conf:\n if \" as \" in instruction[\"value\"]:\n temp = instruction[\"value\"].split()\n alias += [temp[2]]\n\n if len(alias) == len(set(alias)):\n return CheckResult.PASSED, None\n else:\n return CheckResult.FAILED, [conf[0]]\n\n\ncheck = AliasIsUnique()\n", "path": "checkov/dockerfile/checks/AliasIsUnique.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\nif TYPE_CHECKING:\n from dockerfile_parse.parser import _Instruction\n\n\nclass AliasIsUnique(BaseDockerfileCheck):\n def __init__(self) -> None:\n \"\"\"\n Ensure From Alias are unique for multistage builds.\n \"\"\"\n name = \"Ensure From Alias are unique for multistage builds.\"\n id = \"CKV_DOCKER_11\"\n supported_instructions = (\"FROM\",)\n categories = (CheckCategories.CONVENTION,)\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:\n alias = []\n for instruction in conf:\n if \" as \" in instruction[\"value\"]:\n 
alias.append(instruction[\"value\"].rsplit(maxsplit=1)[-1])\n\n if len(alias) == len(set(alias)):\n return CheckResult.PASSED, None\n\n return CheckResult.FAILED, [conf[0]]\n\n\ncheck = AliasIsUnique()\n", "path": "checkov/dockerfile/checks/AliasIsUnique.py"}]} | 797 | 187 |
gh_patches_debug_37821 | rasdani/github-patches | git_diff | sunpy__sunpy-3056 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AIA FITS headers have inaccurate HGS coordinates
AIA FITS headers apparently have inaccurate Heliographic Stonyhurst (HGS) coordinates (`HGLN_OBS` and `HGLT_OBS`). For example, the distance from Earth center does not match the orbit radius. We currently use these keywords to generate the `observer_coordinate` for an AIA map, so we provide an inaccurate observer location.
The headers also have Heliographic Aries Ecliptic (HAE) coordinates (`HAEX_OBS`, `HAEY_OBS`, and `HAEZ_OBS`), and the HAE coordinates are inconsistent with the HGS coordinates in the same header. We have previously verified the accuracy of SunPy's transformation from HAE to HGS (e.g., https://github.com/sunpy/sunpy/issues/2445#issuecomment-364531159). The HAE coordinates appear to be credible, and likely should be trusted over the HGS coordinates.
My recommendation is for `AIAMap` to override the generation of `observer_coordinate` to use the HAE coordinates. Discuss.
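As a rough sketch of what that override could look like (assuming an astropy version that provides `HeliocentricMeanEcliptic`; the numeric values and `date` string below are placeholders, not real header values, and `sunpy.coordinates` must be imported so the heliographic frames are registered):

```python
import astropy.units as u
from astropy.coordinates import CartesianRepresentation, HeliocentricMeanEcliptic, SkyCoord
import sunpy.coordinates  # noqa: F401  (registers the heliographic_stonyhurst frame used below)

# Stand-ins for the header's HAEX_OBS / HAEY_OBS / HAEZ_OBS (metres) and observation date
haex_obs, haey_obs, haez_obs = -2.7e10, 1.44e11, 2.0e7
date = "2014-01-01T00:00:00"

vector = CartesianRepresentation(haex_obs, haey_obs, haez_obs)
hae = SkyCoord(vector * u.m, frame=HeliocentricMeanEcliptic, obstime=date)
observer = hae.heliographic_stonyhurst  # HGS observer coordinate derived from the HAE values
```

The diff shown later in this entry takes the same approach inside a new `AIAMap.observer_coordinate` property.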
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/map/sources/sdo.py`
Content:
```
1 """SDO Map subclass definitions"""
2 #pylint: disable=W0221,W0222,E1101,E1121
3
4 __author__ = "Keith Hughitt"
5 __email__ = "[email protected]"
6
7 import matplotlib.pyplot as plt
8
9 from astropy.visualization.mpl_normalize import ImageNormalize
10 from astropy.visualization import AsinhStretch
11
12 from sunpy.map import GenericMap
13 from sunpy.map.sources.source_type import source_stretch
14
15 __all__ = ['AIAMap', 'HMIMap']
16
17
18 class AIAMap(GenericMap):
19 """AIA Image Map.
20
21 The Atmospheric Imaging Assembly is a set of four telescopes that employ
22 normal-incidence, multi-layer coated optics to provide narrow-band imaging
23 of the Sun. It provides high resolution full-disk images of the corona and
24 transition region up to 0.5 solar radii above the solar limb with 1.5
25 arcsecond angular resolution and 12-second temporal resolution. It observes
26 the Sun in the following seven extreme ultraviolet bandpasses: 94 A
27 (Fe XVIII), 131 A (Fe VIII, XXI), 171 A (Fe IX), 193 A (Fe XII, XXIV),
28 211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes
29 in the visible 1600 A (C IV) and the nearby continuun (1700 A).
30
31 References
32 ----------
33 * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_
34 * `Instrument Page <https://aia.lmsal.com>`_
35 * `Fits Header keywords <http://jsoc.stanford.edu/doc/keywords/AIA/AIA02840_A_AIA-SDO_FITS_Keyword_Documents.pdf>`_
36 * `Analysis Guide <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/>`_
37 * `Instrument Paper <https://doi.org/10.1007/s11207-011-9776-8>`_
38 * `wavelengths and temperature response reference <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/figures/aia_tel_resp.png>`_
39 """
40
41 def __init__(self, data, header, **kwargs):
42 GenericMap.__init__(self, data, header, **kwargs)
43
44 # Fill in some missing info
45 self.meta['detector'] = "AIA"
46 self._nickname = self.detector
47 self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())
48 self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))
49
50 @property
51 def observatory(self):
52 """
53 Returns the observatory.
54 """
55 return self.meta['telescop'].split('/')[0]
56
57 @classmethod
58 def is_datasource_for(cls, data, header, **kwargs):
59 """Determines if header corresponds to an AIA image"""
60 return header.get('instrume', '').startswith('AIA')
61
62
63 class HMIMap(GenericMap):
64 """HMI Image Map.
65
66 HMI consists of a refracting telescope, a polarization selector,
67 an image stabilization system, a narrow band tunable filter
68 and two 4096 pixel CCD cameras. It observes the full solar disk in the Fe I
69 absorption line at 6173 Angstrom with a resolution of 1 arc-second.
70 HMI takes images in a sequence of tuning and polarizations at a 4-second
71 cadence for each camera. One camera is dedicated to a 45 s Doppler and
72 line-of-sight field sequence while the other to a 90 s vector field
73 sequence.
74
75 References
76 ----------
77 * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_
78 * `Instrument Page <http://hmi.stanford.edu>`_
79 * `Analysis Guide <http://hmi.stanford.edu/doc/magnetic/guide.pdf>`_
80 """
81 def __init__(self, data, header, **kwargs):
82
83 GenericMap.__init__(self, data, header, **kwargs)
84
85 self.meta['detector'] = "HMI"
86 # self.meta['instrme'] = "HMI"
87 # self.meta['obsrvtry'] = "SDO"
88 self._nickname = self.detector
89
90 @property
91 def measurement(self):
92 """
93 Returns the measurement type.
94 """
95 return self.meta['content'].split(" ")[0].lower()
96
97 @property
98 def observatory(self):
99 """
100 Returns the observatory.
101 """
102 return self.meta['telescop'].split('/')[0]
103
104 @classmethod
105 def is_datasource_for(cls, data, header, **kwargs):
106 """Determines if header corresponds to an HMI image"""
107 return header.get('instrume', '').startswith('HMI')
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/map/sources/sdo.py b/sunpy/map/sources/sdo.py
--- a/sunpy/map/sources/sdo.py
+++ b/sunpy/map/sources/sdo.py
@@ -6,6 +6,14 @@
import matplotlib.pyplot as plt
+from astropy.coordinates import CartesianRepresentation, SkyCoord
+# Versions of Astropy that do not have HeliocentricMeanEcliptic have the same frame
+# with the incorrect name HeliocentricTrueEcliptic
+try:
+ from astropy.coordinates import HeliocentricMeanEcliptic
+except ImportError:
+ from astropy.coordinates import HeliocentricTrueEcliptic as HeliocentricMeanEcliptic
+import astropy.units as u
from astropy.visualization.mpl_normalize import ImageNormalize
from astropy.visualization import AsinhStretch
@@ -28,6 +36,13 @@
211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes
in the visible 1600 A (C IV) and the nearby continuun (1700 A).
+ Notes
+ -----
+ Observer location: The standard AIA FITS header provides the spacecraft location in multiple
+ coordinate systems, including Heliocentric Aries Ecliptic (HAE) and Heliographic Stonyhurst
+ (HGS). SunPy uses the provided HAE coordinates due to accuracy concerns with the provided
+ HGS coordinates, but other software packages may make different choices.
+
References
----------
* `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_
@@ -47,6 +62,45 @@
self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())
self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))
+ @property
+ def observer_coordinate(self):
+ """
+ The Heliographic Stonyhurst Coordinate of the observer.
+
+ This coordinate is determined using the Heliocentric Aries Ecliptic (HAE) coordinates
+ in the header.
+ """
+ vector = CartesianRepresentation(self.meta['haex_obs'],
+ self.meta['haey_obs'],
+ self.meta['haez_obs'])
+ coord = SkyCoord(vector * u.m, frame=HeliocentricMeanEcliptic, obstime=self.date)
+ return coord.heliographic_stonyhurst
+
+ @property
+ def heliographic_latitude(self):
+ """Heliographic latitude."""
+ return self.observer_coordinate.lat
+
+ @property
+ def heliographic_longitude(self):
+ """Heliographic longitude."""
+ return self.observer_coordinate.lon
+
+ @property
+ def carrington_latitude(self):
+ """Carrington latitude."""
+ return self.observer_coordinate.heliographic_carrington.lat
+
+ @property
+ def carrington_longitude(self):
+ """Carrington longitude."""
+ return self.observer_coordinate.heliographic_carrington.lon
+
+ @property
+ def dsun(self):
+ """The observer distance from the Sun."""
+ return self.observer_coordinate.radius.to('m')
+
@property
def observatory(self):
"""
| {"golden_diff": "diff --git a/sunpy/map/sources/sdo.py b/sunpy/map/sources/sdo.py\n--- a/sunpy/map/sources/sdo.py\n+++ b/sunpy/map/sources/sdo.py\n@@ -6,6 +6,14 @@\n \n import matplotlib.pyplot as plt\n \n+from astropy.coordinates import CartesianRepresentation, SkyCoord\n+# Versions of Astropy that do not have HeliocentricMeanEcliptic have the same frame\n+# with the incorrect name HeliocentricTrueEcliptic\n+try:\n+ from astropy.coordinates import HeliocentricMeanEcliptic\n+except ImportError:\n+ from astropy.coordinates import HeliocentricTrueEcliptic as HeliocentricMeanEcliptic\n+import astropy.units as u\n from astropy.visualization.mpl_normalize import ImageNormalize\n from astropy.visualization import AsinhStretch\n \n@@ -28,6 +36,13 @@\n 211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes\n in the visible 1600 A (C IV) and the nearby continuun (1700 A).\n \n+ Notes\n+ -----\n+ Observer location: The standard AIA FITS header provides the spacecraft location in multiple\n+ coordinate systems, including Heliocentric Aries Ecliptic (HAE) and Heliographic Stonyhurst\n+ (HGS). SunPy uses the provided HAE coordinates due to accuracy concerns with the provided\n+ HGS coordinates, but other software packages may make different choices.\n+\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n@@ -47,6 +62,45 @@\n self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())\n self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))\n \n+ @property\n+ def observer_coordinate(self):\n+ \"\"\"\n+ The Heliographic Stonyhurst Coordinate of the observer.\n+\n+ This coordinate is determined using the Heliocentric Aries Ecliptic (HAE) coordinates\n+ in the header.\n+ \"\"\"\n+ vector = CartesianRepresentation(self.meta['haex_obs'],\n+ self.meta['haey_obs'],\n+ self.meta['haez_obs'])\n+ coord = SkyCoord(vector * u.m, frame=HeliocentricMeanEcliptic, obstime=self.date)\n+ return coord.heliographic_stonyhurst\n+\n+ @property\n+ def heliographic_latitude(self):\n+ \"\"\"Heliographic latitude.\"\"\"\n+ return self.observer_coordinate.lat\n+\n+ @property\n+ def heliographic_longitude(self):\n+ \"\"\"Heliographic longitude.\"\"\"\n+ return self.observer_coordinate.lon\n+\n+ @property\n+ def carrington_latitude(self):\n+ \"\"\"Carrington latitude.\"\"\"\n+ return self.observer_coordinate.heliographic_carrington.lat\n+\n+ @property\n+ def carrington_longitude(self):\n+ \"\"\"Carrington longitude.\"\"\"\n+ return self.observer_coordinate.heliographic_carrington.lon\n+\n+ @property\n+ def dsun(self):\n+ \"\"\"The observer distance from the Sun.\"\"\"\n+ return self.observer_coordinate.radius.to('m')\n+\n @property\n def observatory(self):\n \"\"\"\n", "issue": "AIA FITS headers have inaccurate HGS coordinates\nAIA FITS headers apparently have inaccurate Heliographic Stonyhurst (HGS) coordinates (`HGLN_OBS` and `HGLT_OBS`). For example, the distance from Earth center does not match the orbit radius. We currently use these keywords to generate the `observer_coordinate` for an AIA map, so we provide an inaccurate observer location.\r\n\r\nThe headers also have Heliographic Aries Ecliptic (HAE) coordinates (`HAEX_OBS`, `HAEY_OBS`, and `HAEZ_OBS`), and the HAE coordinates are inconsistent with the HGS coordinates in the same header. We have previously verified the accuracy of SunPy's transformation from HAE to HGS (e.g., https://github.com/sunpy/sunpy/issues/2445#issuecomment-364531159). 
The HAE coordinates appear to be credible, and likely should be trusted over the HGS coordinates.\r\n\r\nMy recommendation is for `AIAMap` to override the generation of `observer_coordinate` to use the HAE coordinates. Discuss.\n", "before_files": [{"content": "\"\"\"SDO Map subclass definitions\"\"\"\n#pylint: disable=W0221,W0222,E1101,E1121\n\n__author__ = \"Keith Hughitt\"\n__email__ = \"[email protected]\"\n\nimport matplotlib.pyplot as plt\n\nfrom astropy.visualization.mpl_normalize import ImageNormalize\nfrom astropy.visualization import AsinhStretch\n\nfrom sunpy.map import GenericMap\nfrom sunpy.map.sources.source_type import source_stretch\n\n__all__ = ['AIAMap', 'HMIMap']\n\n\nclass AIAMap(GenericMap):\n \"\"\"AIA Image Map.\n\n The Atmospheric Imaging Assembly is a set of four telescopes that employ\n normal-incidence, multi-layer coated optics to provide narrow-band imaging\n of the Sun. It provides high resolution full-disk images of the corona and\n transition region up to 0.5 solar radii above the solar limb with 1.5\n arcsecond angular resolution and 12-second temporal resolution. It observes\n the Sun in the following seven extreme ultraviolet bandpasses: 94 A\n (Fe XVIII), 131 A (Fe VIII, XXI), 171 A (Fe IX), 193 A (Fe XII, XXIV),\n 211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes\n in the visible 1600 A (C IV) and the nearby continuun (1700 A).\n\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n * `Instrument Page <https://aia.lmsal.com>`_\n * `Fits Header keywords <http://jsoc.stanford.edu/doc/keywords/AIA/AIA02840_A_AIA-SDO_FITS_Keyword_Documents.pdf>`_\n * `Analysis Guide <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/>`_\n * `Instrument Paper <https://doi.org/10.1007/s11207-011-9776-8>`_\n * `wavelengths and temperature response reference <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/figures/aia_tel_resp.png>`_\n \"\"\"\n\n def __init__(self, data, header, **kwargs):\n GenericMap.__init__(self, data, header, **kwargs)\n\n # Fill in some missing info\n self.meta['detector'] = \"AIA\"\n self._nickname = self.detector\n self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())\n self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))\n\n @property\n def observatory(self):\n \"\"\"\n Returns the observatory.\n \"\"\"\n return self.meta['telescop'].split('/')[0]\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an AIA image\"\"\"\n return header.get('instrume', '').startswith('AIA')\n\n\nclass HMIMap(GenericMap):\n \"\"\"HMI Image Map.\n\n HMI consists of a refracting telescope, a polarization selector,\n an image stabilization system, a narrow band tunable filter\n and two 4096 pixel CCD cameras. It observes the full solar disk in the Fe I\n absorption line at 6173 Angstrom with a resolution of 1 arc-second.\n HMI takes images in a sequence of tuning and polarizations at a 4-second\n cadence for each camera. 
One camera is dedicated to a 45 s Doppler and\n line-of-sight field sequence while the other to a 90 s vector field\n sequence.\n\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n * `Instrument Page <http://hmi.stanford.edu>`_\n * `Analysis Guide <http://hmi.stanford.edu/doc/magnetic/guide.pdf>`_\n \"\"\"\n def __init__(self, data, header, **kwargs):\n\n GenericMap.__init__(self, data, header, **kwargs)\n\n self.meta['detector'] = \"HMI\"\n# self.meta['instrme'] = \"HMI\"\n# self.meta['obsrvtry'] = \"SDO\"\n self._nickname = self.detector\n\n @property\n def measurement(self):\n \"\"\"\n Returns the measurement type.\n \"\"\"\n return self.meta['content'].split(\" \")[0].lower()\n\n @property\n def observatory(self):\n \"\"\"\n Returns the observatory.\n \"\"\"\n return self.meta['telescop'].split('/')[0]\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an HMI image\"\"\"\n return header.get('instrume', '').startswith('HMI')\n", "path": "sunpy/map/sources/sdo.py"}], "after_files": [{"content": "\"\"\"SDO Map subclass definitions\"\"\"\n#pylint: disable=W0221,W0222,E1101,E1121\n\n__author__ = \"Keith Hughitt\"\n__email__ = \"[email protected]\"\n\nimport matplotlib.pyplot as plt\n\nfrom astropy.coordinates import CartesianRepresentation, SkyCoord\n# Versions of Astropy that do not have HeliocentricMeanEcliptic have the same frame\n# with the incorrect name HeliocentricTrueEcliptic\ntry:\n from astropy.coordinates import HeliocentricMeanEcliptic\nexcept ImportError:\n from astropy.coordinates import HeliocentricTrueEcliptic as HeliocentricMeanEcliptic\nimport astropy.units as u\nfrom astropy.visualization.mpl_normalize import ImageNormalize\nfrom astropy.visualization import AsinhStretch\n\nfrom sunpy.map import GenericMap\nfrom sunpy.map.sources.source_type import source_stretch\n\n__all__ = ['AIAMap', 'HMIMap']\n\n\nclass AIAMap(GenericMap):\n \"\"\"AIA Image Map.\n\n The Atmospheric Imaging Assembly is a set of four telescopes that employ\n normal-incidence, multi-layer coated optics to provide narrow-band imaging\n of the Sun. It provides high resolution full-disk images of the corona and\n transition region up to 0.5 solar radii above the solar limb with 1.5\n arcsecond angular resolution and 12-second temporal resolution. It observes\n the Sun in the following seven extreme ultraviolet bandpasses: 94 A\n (Fe XVIII), 131 A (Fe VIII, XXI), 171 A (Fe IX), 193 A (Fe XII, XXIV),\n 211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes\n in the visible 1600 A (C IV) and the nearby continuun (1700 A).\n\n Notes\n -----\n Observer location: The standard AIA FITS header provides the spacecraft location in multiple\n coordinate systems, including Heliocentric Aries Ecliptic (HAE) and Heliographic Stonyhurst\n (HGS). 
SunPy uses the provided HAE coordinates due to accuracy concerns with the provided\n HGS coordinates, but other software packages may make different choices.\n\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n * `Instrument Page <https://aia.lmsal.com>`_\n * `Fits Header keywords <http://jsoc.stanford.edu/doc/keywords/AIA/AIA02840_A_AIA-SDO_FITS_Keyword_Documents.pdf>`_\n * `Analysis Guide <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/>`_\n * `Instrument Paper <https://doi.org/10.1007/s11207-011-9776-8>`_\n * `wavelengths and temperature response reference <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/figures/aia_tel_resp.png>`_\n \"\"\"\n\n def __init__(self, data, header, **kwargs):\n GenericMap.__init__(self, data, header, **kwargs)\n\n # Fill in some missing info\n self.meta['detector'] = \"AIA\"\n self._nickname = self.detector\n self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())\n self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))\n\n @property\n def observer_coordinate(self):\n \"\"\"\n The Heliographic Stonyhurst Coordinate of the observer.\n\n This coordinate is determined using the Heliocentric Aries Ecliptic (HAE) coordinates\n in the header.\n \"\"\"\n vector = CartesianRepresentation(self.meta['haex_obs'],\n self.meta['haey_obs'],\n self.meta['haez_obs'])\n coord = SkyCoord(vector * u.m, frame=HeliocentricMeanEcliptic, obstime=self.date)\n return coord.heliographic_stonyhurst\n\n @property\n def heliographic_latitude(self):\n \"\"\"Heliographic latitude.\"\"\"\n return self.observer_coordinate.lat\n\n @property\n def heliographic_longitude(self):\n \"\"\"Heliographic longitude.\"\"\"\n return self.observer_coordinate.lon\n\n @property\n def carrington_latitude(self):\n \"\"\"Carrington latitude.\"\"\"\n return self.observer_coordinate.heliographic_carrington.lat\n\n @property\n def carrington_longitude(self):\n \"\"\"Carrington longitude.\"\"\"\n return self.observer_coordinate.heliographic_carrington.lon\n\n @property\n def dsun(self):\n \"\"\"The observer distance from the Sun.\"\"\"\n return self.observer_coordinate.radius.to('m')\n\n @property\n def observatory(self):\n \"\"\"\n Returns the observatory.\n \"\"\"\n return self.meta['telescop'].split('/')[0]\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an AIA image\"\"\"\n return header.get('instrume', '').startswith('AIA')\n\n\nclass HMIMap(GenericMap):\n \"\"\"HMI Image Map.\n\n HMI consists of a refracting telescope, a polarization selector,\n an image stabilization system, a narrow band tunable filter\n and two 4096 pixel CCD cameras. It observes the full solar disk in the Fe I\n absorption line at 6173 Angstrom with a resolution of 1 arc-second.\n HMI takes images in a sequence of tuning and polarizations at a 4-second\n cadence for each camera. 
One camera is dedicated to a 45 s Doppler and\n line-of-sight field sequence while the other to a 90 s vector field\n sequence.\n\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n * `Instrument Page <http://hmi.stanford.edu>`_\n * `Analysis Guide <http://hmi.stanford.edu/doc/magnetic/guide.pdf>`_\n \"\"\"\n def __init__(self, data, header, **kwargs):\n\n GenericMap.__init__(self, data, header, **kwargs)\n\n self.meta['detector'] = \"HMI\"\n# self.meta['instrme'] = \"HMI\"\n# self.meta['obsrvtry'] = \"SDO\"\n self._nickname = self.detector\n\n @property\n def measurement(self):\n \"\"\"\n Returns the measurement type.\n \"\"\"\n return self.meta['content'].split(\" \")[0].lower()\n\n @property\n def observatory(self):\n \"\"\"\n Returns the observatory.\n \"\"\"\n return self.meta['telescop'].split('/')[0]\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an HMI image\"\"\"\n return header.get('instrume', '').startswith('HMI')\n", "path": "sunpy/map/sources/sdo.py"}]} | 1,858 | 761 |
gh_patches_debug_17639 | rasdani/github-patches | git_diff | wagtail__wagtail-715 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
search fields can't be overridden
In the past, you were able to override a search field of a parent class by redefining it. This functionality appears to be broken in Wagtail 0.7.
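For reference, the kind of redefinition described here looks roughly like this (a sketch; the model name and field options are illustrative, and `index` refers to the `wagtail.wagtailsearch.index` module quoted below):

```python
from wagtail.wagtailcore.models import Page
from wagtail.wagtailsearch import index

class BlogPage(Page):
    search_fields = Page.search_fields + (
        # Redefine the parent's 'title' field, expecting this later definition to win
        index.SearchField('title', partial_match=True, boost=2),
    )
```

The patch further below restores this behaviour by collapsing `search_fields` on (field type, field name), so a field redefined later in the tuple replaces the earlier definition.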
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailsearch/index.py`
Content:
```
1 import warnings
2
3 from six import string_types
4
5 from django.db import models
6
7
8 class Indexed(object):
9 @classmethod
10 def indexed_get_parent(cls, require_model=True):
11 for base in cls.__bases__:
12 if issubclass(base, Indexed) and (issubclass(base, models.Model) or require_model is False):
13 return base
14
15 @classmethod
16 def indexed_get_content_type(cls):
17 # Work out content type
18 content_type = (cls._meta.app_label + '_' + cls.__name__).lower()
19
20 # Get parent content type
21 parent = cls.indexed_get_parent()
22 if parent:
23 parent_content_type = parent.indexed_get_content_type()
24 return parent_content_type + '_' + content_type
25 else:
26 return content_type
27
28 @classmethod
29 def indexed_get_toplevel_content_type(cls):
30 # Get parent content type
31 parent = cls.indexed_get_parent()
32 if parent:
33 return parent.indexed_get_content_type()
34 else:
35 # At toplevel, return this content type
36 return (cls._meta.app_label + '_' + cls.__name__).lower()
37
38 @classmethod
39 def get_search_fields(cls):
40 return cls.search_fields
41
42 @classmethod
43 def get_searchable_search_fields(cls):
44 return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())
45
46 @classmethod
47 def get_filterable_search_fields(cls):
48 return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())
49
50 @classmethod
51 def get_indexed_objects(cls):
52 return cls.objects.all()
53
54 search_fields = ()
55
56
57 class BaseField(object):
58 suffix = ''
59
60 def __init__(self, field_name, **kwargs):
61 self.field_name = field_name
62 self.kwargs = kwargs
63
64 def get_field(self, cls):
65 return cls._meta.get_field_by_name(self.field_name)[0]
66
67 def get_attname(self, cls):
68 try:
69 field = self.get_field(cls)
70 return field.attname
71 except models.fields.FieldDoesNotExist:
72 return self.field_name
73
74 def get_index_name(self, cls):
75 return self.get_attname(cls) + self.suffix
76
77 def get_type(self, cls):
78 if 'type' in self.kwargs:
79 return self.kwargs['type']
80
81 try:
82 field = self.get_field(cls)
83 return field.get_internal_type()
84 except models.fields.FieldDoesNotExist:
85 return 'CharField'
86
87 def get_value(self, obj):
88 try:
89 field = self.get_field(obj.__class__)
90 return field._get_val_from_obj(obj)
91 except models.fields.FieldDoesNotExist:
92 value = getattr(obj, self.field_name, None)
93 if hasattr(value, '__call__'):
94 value = value()
95 return value
96
97 def __repr__(self):
98 return '<%s: %s>' % (self.__class__.__name__, self.field_name)
99
100
101 class SearchField(BaseField):
102 def __init__(self, field_name, boost=None, partial_match=False, **kwargs):
103 super(SearchField, self).__init__(field_name, **kwargs)
104 self.boost = boost
105 self.partial_match = partial_match
106
107
108 class FilterField(BaseField):
109 suffix = '_filter'
110
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailsearch/index.py b/wagtail/wagtailsearch/index.py
--- a/wagtail/wagtailsearch/index.py
+++ b/wagtail/wagtailsearch/index.py
@@ -37,15 +37,26 @@
@classmethod
def get_search_fields(cls):
- return cls.search_fields
+ search_fields = {}
+
+ for field in cls.search_fields:
+ search_fields[(type(field), field.field_name)] = field
+
+ return list(search_fields.values())
@classmethod
def get_searchable_search_fields(cls):
- return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())
+ return [
+ field for field in cls.get_search_fields()
+ if isinstance(field, SearchField)
+ ]
@classmethod
def get_filterable_search_fields(cls):
- return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())
+ return [
+ field for field in cls.get_search_fields()
+ if isinstance(field, FilterField)
+ ]
@classmethod
def get_indexed_objects(cls):
| {"golden_diff": "diff --git a/wagtail/wagtailsearch/index.py b/wagtail/wagtailsearch/index.py\n--- a/wagtail/wagtailsearch/index.py\n+++ b/wagtail/wagtailsearch/index.py\n@@ -37,15 +37,26 @@\n \n @classmethod\n def get_search_fields(cls):\n- return cls.search_fields\n+ search_fields = {}\n+\n+ for field in cls.search_fields:\n+ search_fields[(type(field), field.field_name)] = field\n+\n+ return list(search_fields.values())\n \n @classmethod\n def get_searchable_search_fields(cls):\n- return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())\n+ return [\n+ field for field in cls.get_search_fields()\n+ if isinstance(field, SearchField)\n+ ]\n \n @classmethod\n def get_filterable_search_fields(cls):\n- return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())\n+ return [\n+ field for field in cls.get_search_fields()\n+ if isinstance(field, FilterField)\n+ ]\n \n @classmethod\n def get_indexed_objects(cls):\n", "issue": "search fields can't be overridden\nIn the past, you were able to override a search field of a parent class by redefining it. This functionality appears to be broken in Wagtail 0.7\n\n", "before_files": [{"content": "import warnings\n\nfrom six import string_types\n\nfrom django.db import models\n\n\nclass Indexed(object):\n @classmethod\n def indexed_get_parent(cls, require_model=True):\n for base in cls.__bases__:\n if issubclass(base, Indexed) and (issubclass(base, models.Model) or require_model is False):\n return base\n\n @classmethod\n def indexed_get_content_type(cls):\n # Work out content type\n content_type = (cls._meta.app_label + '_' + cls.__name__).lower()\n\n # Get parent content type\n parent = cls.indexed_get_parent()\n if parent:\n parent_content_type = parent.indexed_get_content_type()\n return parent_content_type + '_' + content_type\n else:\n return content_type\n\n @classmethod\n def indexed_get_toplevel_content_type(cls):\n # Get parent content type\n parent = cls.indexed_get_parent()\n if parent:\n return parent.indexed_get_content_type()\n else:\n # At toplevel, return this content type\n return (cls._meta.app_label + '_' + cls.__name__).lower()\n\n @classmethod\n def get_search_fields(cls):\n return cls.search_fields\n\n @classmethod\n def get_searchable_search_fields(cls):\n return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())\n\n @classmethod\n def get_filterable_search_fields(cls):\n return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())\n\n @classmethod\n def get_indexed_objects(cls):\n return cls.objects.all()\n\n search_fields = ()\n\n\nclass BaseField(object):\n suffix = ''\n\n def __init__(self, field_name, **kwargs):\n self.field_name = field_name\n self.kwargs = kwargs\n\n def get_field(self, cls):\n return cls._meta.get_field_by_name(self.field_name)[0]\n\n def get_attname(self, cls):\n try:\n field = self.get_field(cls)\n return field.attname\n except models.fields.FieldDoesNotExist:\n return self.field_name\n\n def get_index_name(self, cls):\n return self.get_attname(cls) + self.suffix\n\n def get_type(self, cls):\n if 'type' in self.kwargs:\n return self.kwargs['type']\n\n try:\n field = self.get_field(cls)\n return field.get_internal_type()\n except models.fields.FieldDoesNotExist:\n return 'CharField'\n\n def get_value(self, obj):\n try:\n field = self.get_field(obj.__class__)\n return field._get_val_from_obj(obj)\n except models.fields.FieldDoesNotExist:\n value = getattr(obj, self.field_name, None)\n if hasattr(value, 
'__call__'):\n value = value()\n return value\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.field_name)\n\n\nclass SearchField(BaseField):\n def __init__(self, field_name, boost=None, partial_match=False, **kwargs):\n super(SearchField, self).__init__(field_name, **kwargs)\n self.boost = boost\n self.partial_match = partial_match\n\n\nclass FilterField(BaseField):\n suffix = '_filter'\n\n", "path": "wagtail/wagtailsearch/index.py"}], "after_files": [{"content": "import warnings\n\nfrom six import string_types\n\nfrom django.db import models\n\n\nclass Indexed(object):\n @classmethod\n def indexed_get_parent(cls, require_model=True):\n for base in cls.__bases__:\n if issubclass(base, Indexed) and (issubclass(base, models.Model) or require_model is False):\n return base\n\n @classmethod\n def indexed_get_content_type(cls):\n # Work out content type\n content_type = (cls._meta.app_label + '_' + cls.__name__).lower()\n\n # Get parent content type\n parent = cls.indexed_get_parent()\n if parent:\n parent_content_type = parent.indexed_get_content_type()\n return parent_content_type + '_' + content_type\n else:\n return content_type\n\n @classmethod\n def indexed_get_toplevel_content_type(cls):\n # Get parent content type\n parent = cls.indexed_get_parent()\n if parent:\n return parent.indexed_get_content_type()\n else:\n # At toplevel, return this content type\n return (cls._meta.app_label + '_' + cls.__name__).lower()\n\n @classmethod\n def get_search_fields(cls):\n search_fields = {}\n\n for field in cls.search_fields:\n search_fields[(type(field), field.field_name)] = field\n\n return list(search_fields.values())\n\n @classmethod\n def get_searchable_search_fields(cls):\n return [\n field for field in cls.get_search_fields()\n if isinstance(field, SearchField)\n ]\n\n @classmethod\n def get_filterable_search_fields(cls):\n return [\n field for field in cls.get_search_fields()\n if isinstance(field, FilterField)\n ]\n\n @classmethod\n def get_indexed_objects(cls):\n return cls.objects.all()\n\n search_fields = ()\n\n\nclass BaseField(object):\n suffix = ''\n\n def __init__(self, field_name, **kwargs):\n self.field_name = field_name\n self.kwargs = kwargs\n\n def get_field(self, cls):\n return cls._meta.get_field_by_name(self.field_name)[0]\n\n def get_attname(self, cls):\n try:\n field = self.get_field(cls)\n return field.attname\n except models.fields.FieldDoesNotExist:\n return self.field_name\n\n def get_index_name(self, cls):\n return self.get_attname(cls) + self.suffix\n\n def get_type(self, cls):\n if 'type' in self.kwargs:\n return self.kwargs['type']\n\n try:\n field = self.get_field(cls)\n return field.get_internal_type()\n except models.fields.FieldDoesNotExist:\n return 'CharField'\n\n def get_value(self, obj):\n try:\n field = self.get_field(obj.__class__)\n return field._get_val_from_obj(obj)\n except models.fields.FieldDoesNotExist:\n value = getattr(obj, self.field_name, None)\n if hasattr(value, '__call__'):\n value = value()\n return value\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.field_name)\n\n\nclass SearchField(BaseField):\n def __init__(self, field_name, boost=None, partial_match=False, **kwargs):\n super(SearchField, self).__init__(field_name, **kwargs)\n self.boost = boost\n self.partial_match = partial_match\n\n\nclass FilterField(BaseField):\n suffix = '_filter'\n\n", "path": "wagtail/wagtailsearch/index.py"}]} | 1,233 | 252 |
gh_patches_debug_14214 | rasdani/github-patches | git_diff | sublimelsp__LSP-2376 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Provide a way to save all modified files after applying workspace edits
**Is your feature request related to a problem? Please describe.**
When applying refactorings (like renames) it's often the case that multiple files are modified. It's a chore to then have to find and save all those modified files.
**Describe the solution you'd like**
We could provide a dialog after more than one file was modified asking the user whether all modified files should be saved. Note that some people don't like dialogs (#1922).
Or maybe even show a "tree view" in a sheet, showing all modified files and allowing the user to navigate to them and save all at once.
**Describe alternatives you've considered**
- Saving all edited files one by one.
- Using "save all" command but that fails when there is some unsaved buffer in the window (for example I like to have some to keep notes in them)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/save_command.py`
Content:
```
1 from .core.registry import LspTextCommand
2 from .core.settings import userprefs
3 from .core.typing import Callable, List, Type
4 from abc import ABCMeta, abstractmethod
5 import sublime
6 import sublime_plugin
7
8
9 class SaveTask(metaclass=ABCMeta):
10 """
11 Base class for tasks that run on save.
12
13 Note: The whole task runs on the async thread.
14 """
15
16 @classmethod
17 @abstractmethod
18 def is_applicable(cls, view: sublime.View) -> bool:
19 pass
20
21 def __init__(self, task_runner: LspTextCommand, on_done: Callable[[], None]):
22 self._task_runner = task_runner
23 self._on_done = on_done
24 self._completed = False
25 self._cancelled = False
26 self._status_key = type(self).__name__
27
28 def run_async(self) -> None:
29 self._erase_view_status()
30 sublime.set_timeout_async(self._on_timeout, userprefs().on_save_task_timeout_ms)
31
32 def _on_timeout(self) -> None:
33 if not self._completed and not self._cancelled:
34 self._set_view_status('LSP: Timeout processing {}'.format(self.__class__.__name__))
35 self._cancelled = True
36 self._on_done()
37
38 def cancel(self) -> None:
39 self._cancelled = True
40
41 def _set_view_status(self, text: str) -> None:
42 self._task_runner.view.set_status(self._status_key, text)
43 sublime.set_timeout_async(self._erase_view_status, 5000)
44
45 def _erase_view_status(self) -> None:
46 self._task_runner.view.erase_status(self._status_key)
47
48 def _on_complete(self) -> None:
49 assert not self._completed
50 self._completed = True
51 if not self._cancelled:
52 self._on_done()
53
54 def _purge_changes_async(self) -> None:
55 # Supermassive hack that will go away later.
56 listeners = sublime_plugin.view_event_listeners.get(self._task_runner.view.id(), [])
57 for listener in listeners:
58 if listener.__class__.__name__ == 'DocumentSyncListener':
59 listener.purge_changes_async() # type: ignore
60 break
61
62
63 class LspSaveCommand(LspTextCommand):
64 """
65 A command used as a substitute for native save command. Runs code actions and document
66 formatting before triggering the native save command.
67 """
68 _tasks = [] # type: List[Type[SaveTask]]
69
70 @classmethod
71 def register_task(cls, task: Type[SaveTask]) -> None:
72 assert task not in cls._tasks
73 cls._tasks.append(task)
74
75 def __init__(self, view: sublime.View) -> None:
76 super().__init__(view)
77 self._pending_tasks = [] # type: List[SaveTask]
78
79 def run(self, edit: sublime.Edit) -> None:
80 if self._pending_tasks:
81 for task in self._pending_tasks:
82 task.cancel()
83 self._pending_tasks = []
84 sublime.set_timeout_async(self._trigger_on_pre_save_async)
85 for Task in self._tasks:
86 if Task.is_applicable(self.view):
87 self._pending_tasks.append(Task(self, self._on_task_completed_async))
88 if self._pending_tasks:
89 sublime.set_timeout_async(self._run_next_task_async)
90 else:
91 self._trigger_native_save()
92
93 def _trigger_on_pre_save_async(self) -> None:
94 # Supermassive hack that will go away later.
95 listeners = sublime_plugin.view_event_listeners.get(self.view.id(), [])
96 for listener in listeners:
97 if listener.__class__.__name__ == 'DocumentSyncListener':
98 listener.trigger_on_pre_save_async() # type: ignore
99 break
100
101 def _run_next_task_async(self) -> None:
102 current_task = self._pending_tasks[0]
103 current_task.run_async()
104
105 def _on_task_completed_async(self) -> None:
106 self._pending_tasks.pop(0)
107 if self._pending_tasks:
108 # Even though we are on the async thread already, we want to give ST a chance to notify us about
109 # potential document changes.
110 sublime.set_timeout_async(self._run_next_task_async)
111 else:
112 self._trigger_native_save()
113
114 def _trigger_native_save(self) -> None:
115 # Triggered from set_timeout to preserve original semantics of on_pre_save handling
116 sublime.set_timeout(lambda: self.view.run_command('save', {"async": True}))
117
118
119 class LspSaveAllCommand(sublime_plugin.WindowCommand):
120 def run(self) -> None:
121 done = set()
122 for view in self.window.views():
123 buffer_id = view.buffer_id()
124 if buffer_id in done:
125 continue
126 if not view.is_dirty():
127 continue
128 done.add(buffer_id)
129 view.run_command("lsp_save", None)
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/save_command.py b/plugin/save_command.py
--- a/plugin/save_command.py
+++ b/plugin/save_command.py
@@ -117,7 +117,7 @@
class LspSaveAllCommand(sublime_plugin.WindowCommand):
- def run(self) -> None:
+ def run(self, only_files: bool = False) -> None:
done = set()
for view in self.window.views():
buffer_id = view.buffer_id()
@@ -125,5 +125,7 @@
continue
if not view.is_dirty():
continue
+ if only_files and view.file_name() is None:
+ continue
done.add(buffer_id)
view.run_command("lsp_save", None)
| {"golden_diff": "diff --git a/plugin/save_command.py b/plugin/save_command.py\n--- a/plugin/save_command.py\n+++ b/plugin/save_command.py\n@@ -117,7 +117,7 @@\n \n \n class LspSaveAllCommand(sublime_plugin.WindowCommand):\n- def run(self) -> None:\n+ def run(self, only_files: bool = False) -> None:\n done = set()\n for view in self.window.views():\n buffer_id = view.buffer_id()\n@@ -125,5 +125,7 @@\n continue\n if not view.is_dirty():\n continue\n+ if only_files and view.file_name() is None:\n+ continue\n done.add(buffer_id)\n view.run_command(\"lsp_save\", None)\n", "issue": "Provide a way to save all modified files after applying workspace edits\n**Is your feature request related to a problem? Please describe.**\r\n\r\nWhen applying refactorings (like renames) it's often the case that multiple files are modified. It's a chore to then have to find and save all those modified files.\r\n\r\n**Describe the solution you'd like**\r\n\r\nWe could provide a dialog after more than one file was modified asking the user whether all modified files should be saved. Note that some people don't like dialogs (#1922).\r\n\r\nOr maybe even show a \"tree view\" in a sheet, showing all modified files and allowing the user to navigate to them and save all at once.\r\n\r\n**Describe alternatives you've considered**\r\n\r\n- Saving all edited files one by one.\r\n- Using \"save all\" command but that fails when there is some unsaved buffer in the window (for example I like to have some to keep notes in them)\r\n\n", "before_files": [{"content": "from .core.registry import LspTextCommand\nfrom .core.settings import userprefs\nfrom .core.typing import Callable, List, Type\nfrom abc import ABCMeta, abstractmethod\nimport sublime\nimport sublime_plugin\n\n\nclass SaveTask(metaclass=ABCMeta):\n \"\"\"\n Base class for tasks that run on save.\n\n Note: The whole task runs on the async thread.\n \"\"\"\n\n @classmethod\n @abstractmethod\n def is_applicable(cls, view: sublime.View) -> bool:\n pass\n\n def __init__(self, task_runner: LspTextCommand, on_done: Callable[[], None]):\n self._task_runner = task_runner\n self._on_done = on_done\n self._completed = False\n self._cancelled = False\n self._status_key = type(self).__name__\n\n def run_async(self) -> None:\n self._erase_view_status()\n sublime.set_timeout_async(self._on_timeout, userprefs().on_save_task_timeout_ms)\n\n def _on_timeout(self) -> None:\n if not self._completed and not self._cancelled:\n self._set_view_status('LSP: Timeout processing {}'.format(self.__class__.__name__))\n self._cancelled = True\n self._on_done()\n\n def cancel(self) -> None:\n self._cancelled = True\n\n def _set_view_status(self, text: str) -> None:\n self._task_runner.view.set_status(self._status_key, text)\n sublime.set_timeout_async(self._erase_view_status, 5000)\n\n def _erase_view_status(self) -> None:\n self._task_runner.view.erase_status(self._status_key)\n\n def _on_complete(self) -> None:\n assert not self._completed\n self._completed = True\n if not self._cancelled:\n self._on_done()\n\n def _purge_changes_async(self) -> None:\n # Supermassive hack that will go away later.\n listeners = sublime_plugin.view_event_listeners.get(self._task_runner.view.id(), [])\n for listener in listeners:\n if listener.__class__.__name__ == 'DocumentSyncListener':\n listener.purge_changes_async() # type: ignore\n break\n\n\nclass LspSaveCommand(LspTextCommand):\n \"\"\"\n A command used as a substitute for native save command. 
Runs code actions and document\n formatting before triggering the native save command.\n \"\"\"\n _tasks = [] # type: List[Type[SaveTask]]\n\n @classmethod\n def register_task(cls, task: Type[SaveTask]) -> None:\n assert task not in cls._tasks\n cls._tasks.append(task)\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self._pending_tasks = [] # type: List[SaveTask]\n\n def run(self, edit: sublime.Edit) -> None:\n if self._pending_tasks:\n for task in self._pending_tasks:\n task.cancel()\n self._pending_tasks = []\n sublime.set_timeout_async(self._trigger_on_pre_save_async)\n for Task in self._tasks:\n if Task.is_applicable(self.view):\n self._pending_tasks.append(Task(self, self._on_task_completed_async))\n if self._pending_tasks:\n sublime.set_timeout_async(self._run_next_task_async)\n else:\n self._trigger_native_save()\n\n def _trigger_on_pre_save_async(self) -> None:\n # Supermassive hack that will go away later.\n listeners = sublime_plugin.view_event_listeners.get(self.view.id(), [])\n for listener in listeners:\n if listener.__class__.__name__ == 'DocumentSyncListener':\n listener.trigger_on_pre_save_async() # type: ignore\n break\n\n def _run_next_task_async(self) -> None:\n current_task = self._pending_tasks[0]\n current_task.run_async()\n\n def _on_task_completed_async(self) -> None:\n self._pending_tasks.pop(0)\n if self._pending_tasks:\n # Even though we are on the async thread already, we want to give ST a chance to notify us about\n # potential document changes.\n sublime.set_timeout_async(self._run_next_task_async)\n else:\n self._trigger_native_save()\n\n def _trigger_native_save(self) -> None:\n # Triggered from set_timeout to preserve original semantics of on_pre_save handling\n sublime.set_timeout(lambda: self.view.run_command('save', {\"async\": True}))\n\n\nclass LspSaveAllCommand(sublime_plugin.WindowCommand):\n def run(self) -> None:\n done = set()\n for view in self.window.views():\n buffer_id = view.buffer_id()\n if buffer_id in done:\n continue\n if not view.is_dirty():\n continue\n done.add(buffer_id)\n view.run_command(\"lsp_save\", None)\n", "path": "plugin/save_command.py"}], "after_files": [{"content": "from .core.registry import LspTextCommand\nfrom .core.settings import userprefs\nfrom .core.typing import Callable, List, Type\nfrom abc import ABCMeta, abstractmethod\nimport sublime\nimport sublime_plugin\n\n\nclass SaveTask(metaclass=ABCMeta):\n \"\"\"\n Base class for tasks that run on save.\n\n Note: The whole task runs on the async thread.\n \"\"\"\n\n @classmethod\n @abstractmethod\n def is_applicable(cls, view: sublime.View) -> bool:\n pass\n\n def __init__(self, task_runner: LspTextCommand, on_done: Callable[[], None]):\n self._task_runner = task_runner\n self._on_done = on_done\n self._completed = False\n self._cancelled = False\n self._status_key = type(self).__name__\n\n def run_async(self) -> None:\n self._erase_view_status()\n sublime.set_timeout_async(self._on_timeout, userprefs().on_save_task_timeout_ms)\n\n def _on_timeout(self) -> None:\n if not self._completed and not self._cancelled:\n self._set_view_status('LSP: Timeout processing {}'.format(self.__class__.__name__))\n self._cancelled = True\n self._on_done()\n\n def cancel(self) -> None:\n self._cancelled = True\n\n def _set_view_status(self, text: str) -> None:\n self._task_runner.view.set_status(self._status_key, text)\n sublime.set_timeout_async(self._erase_view_status, 5000)\n\n def _erase_view_status(self) -> None:\n 
self._task_runner.view.erase_status(self._status_key)\n\n def _on_complete(self) -> None:\n assert not self._completed\n self._completed = True\n if not self._cancelled:\n self._on_done()\n\n def _purge_changes_async(self) -> None:\n # Supermassive hack that will go away later.\n listeners = sublime_plugin.view_event_listeners.get(self._task_runner.view.id(), [])\n for listener in listeners:\n if listener.__class__.__name__ == 'DocumentSyncListener':\n listener.purge_changes_async() # type: ignore\n break\n\n\nclass LspSaveCommand(LspTextCommand):\n \"\"\"\n A command used as a substitute for native save command. Runs code actions and document\n formatting before triggering the native save command.\n \"\"\"\n _tasks = [] # type: List[Type[SaveTask]]\n\n @classmethod\n def register_task(cls, task: Type[SaveTask]) -> None:\n assert task not in cls._tasks\n cls._tasks.append(task)\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self._pending_tasks = [] # type: List[SaveTask]\n\n def run(self, edit: sublime.Edit) -> None:\n if self._pending_tasks:\n for task in self._pending_tasks:\n task.cancel()\n self._pending_tasks = []\n sublime.set_timeout_async(self._trigger_on_pre_save_async)\n for Task in self._tasks:\n if Task.is_applicable(self.view):\n self._pending_tasks.append(Task(self, self._on_task_completed_async))\n if self._pending_tasks:\n sublime.set_timeout_async(self._run_next_task_async)\n else:\n self._trigger_native_save()\n\n def _trigger_on_pre_save_async(self) -> None:\n # Supermassive hack that will go away later.\n listeners = sublime_plugin.view_event_listeners.get(self.view.id(), [])\n for listener in listeners:\n if listener.__class__.__name__ == 'DocumentSyncListener':\n listener.trigger_on_pre_save_async() # type: ignore\n break\n\n def _run_next_task_async(self) -> None:\n current_task = self._pending_tasks[0]\n current_task.run_async()\n\n def _on_task_completed_async(self) -> None:\n self._pending_tasks.pop(0)\n if self._pending_tasks:\n # Even though we are on the async thread already, we want to give ST a chance to notify us about\n # potential document changes.\n sublime.set_timeout_async(self._run_next_task_async)\n else:\n self._trigger_native_save()\n\n def _trigger_native_save(self) -> None:\n # Triggered from set_timeout to preserve original semantics of on_pre_save handling\n sublime.set_timeout(lambda: self.view.run_command('save', {\"async\": True}))\n\n\nclass LspSaveAllCommand(sublime_plugin.WindowCommand):\n def run(self, only_files: bool = False) -> None:\n done = set()\n for view in self.window.views():\n buffer_id = view.buffer_id()\n if buffer_id in done:\n continue\n if not view.is_dirty():\n continue\n if only_files and view.file_name() is None:\n continue\n done.add(buffer_id)\n view.run_command(\"lsp_save\", None)\n", "path": "plugin/save_command.py"}]} | 1,780 | 162 |
gh_patches_debug_29535 | rasdani/github-patches | git_diff | conan-io__conan-center-index-7032 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] imgui/1.84.1: Shared library does not automatically import global data symbols
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **imgui/1.84.1**
* Operating System+version: **Windows 10 21H1 Build 19043.1165**
* Compiler+version: **Visual Studio 16 (2019)**
* Docker image: **N/A**
* Conan version: **conan 1.39.0**
* Python version: **Python 3.9.6**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
[settings]
os=Windows
os_build=Windows
arch=x86_64
arch_build=x86_64
compiler=Visual Studio
compiler.version=16
build_type=Release
[options]
[conf]
[build_requires]
[env]
```
### Steps to reproduce (Include if Applicable)
Try to reference any code that uses global data symbols since those need to use `__declspec(dllimport)` when using [`WINDOWS_EXPORT_ALL_SYMBOLS`](https://cmake.org/cmake/help/latest/prop_tgt/WINDOWS_EXPORT_ALL_SYMBOLS.html#windows-export-all-symbols). One example could be using [`ImGuiTextBuffer`](https://github.com/ocornut/imgui/blob/v1.84.1/imgui.h#L2078) (which has `IMGUI_API static char EmptyString[1];`).
The following diff is for ImGui's [`test_package.cpp`](https://github.com/conan-io/conan-center-index/blob/master/recipes/imgui/all/test_package/test_package.cpp) and can reproduce this issue.
```
--- a/recipes/imgui/all/test_package/test_package.cpp
+++ b/recipes/imgui/all/test_package/test_package.cpp
@@ -5,6 +5,9 @@ int main(int, char**)
{
ImGuiContext* context =ImGui::CreateContext();
ImGuiIO& io = ImGui::GetIO();
+
+ ImGuiTextBuffer textBuffer;
+ textBuffer.append("Hello, ImGui");
// Build atlas
unsigned char* tex_pixels = NULL;
@@ -20,6 +23,7 @@ int main(int, char**)
static float f = 0.0f;
ImGui::Text("Hello, world!");
+ ImGui::Text(textBuffer.begin());
ImGui::SliderFloat("float", &f, 0.0f, 1.0f);
ImGui::Text("Application average %.3f ms/frame (%.1f FPS)", 1000.0f / io.Framerate, io.Framerate);
ImGui::ShowDemoWindow(NULL);
```
Then, try to create the package with `conan create . imgui/1.84.1@ -o imgui:shared=True`.
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
test_package.obj : error LNK2019: unresolved external symbol "public: static char * ImGuiTextBuffer::EmptyString" (?Emp
tyString@ImGuiTextBuffer@@2PADA) referenced in function main
```
</details>
I think the simplest solution would be to add something like this
```
if self.options.shared and self.settings.os == "Windows":
self.cpp_info.defines.append("IMGUI_API=__declspec(dllimport)")
```
I'd be happy to open a PR with this change.
--- END ISSUE ---
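Before the code listings below, here is a compact recipe-shaped sketch of the issue author's proposal. Hedges: the class is hypothetical and trimmed to the relevant option/setting, and the accepted patch further down takes a different route (it appends `IMGUI_USER_CONFIG="imgui_user_config.h"` unconditionally and lets that header choose the right `IMGUI_API` definition).
```python
# Illustrative only: a minimal Conan 1.x recipe skeleton carrying the proposed fix.
from conans import ConanFile


class ImguiSketchConan(ConanFile):
    settings = "os", "arch", "compiler", "build_type"
    options = {"shared": [True, False]}
    default_options = {"shared": False}

    def package_info(self):
        self.cpp_info.libs = ["imgui"]
        if self.settings.os == "Windows" and self.options.shared:
            # Even with WINDOWS_EXPORT_ALL_SYMBOLS, consumers must still mark global
            # data (e.g. ImGuiTextBuffer::EmptyString) as dllimport; functions link fine.
            self.cpp_info.defines.append("IMGUI_API=__declspec(dllimport)")
```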
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/imgui/all/conanfile.py`
Content:
```
1 from conans import ConanFile, CMake, tools
2 import os
3
4 required_conan_version = ">=1.33.0"
5
6
7 class IMGUIConan(ConanFile):
8 name = "imgui"
9 url = "https://github.com/conan-io/conan-center-index"
10 homepage = "https://github.com/ocornut/imgui"
11 description = "Bloat-free Immediate Mode Graphical User interface for C++ with minimal dependencies"
12 topics = ("conan", "imgui", "gui", "graphical")
13 license = "MIT"
14
15 exports_sources = ["CMakeLists.txt"]
16 generators = "cmake"
17
18 settings = "os", "arch", "compiler", "build_type"
19 options = {
20 "shared": [True, False],
21 "fPIC": [True, False]
22 }
23 default_options = {
24 "shared": False,
25 "fPIC": True
26 }
27
28 _cmake = None
29
30 @property
31 def _source_subfolder(self):
32 return "source_subfolder"
33
34 def config_options(self):
35 if self.settings.os == "Windows":
36 del self.options.fPIC
37
38 def configure(self):
39 if self.options.shared:
40 del self.options.fPIC
41
42 def source(self):
43 tools.get(**self.conan_data["sources"][self.version],
44 destination=self._source_subfolder, strip_root=True)
45
46 def _configure_cmake(self):
47 if self._cmake:
48 return self._cmake
49 self._cmake = CMake(self)
50 self._cmake.configure()
51 return self._cmake
52
53 def build(self):
54 cmake = self._configure_cmake()
55 cmake.build()
56
57 def package(self):
58 self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
59 backends_folder = src=os.path.join(
60 self._source_subfolder,
61 "backends" if tools.Version(self.version) >= "1.80" else "examples"
62 )
63 self.copy(pattern="imgui_impl_*",
64 dst=os.path.join("res", "bindings"),
65 src=backends_folder)
66 cmake = self._configure_cmake()
67 cmake.install()
68
69 def package_info(self):
70 self.cpp_info.libs = ["imgui"]
71 if self.settings.os == "Linux":
72 self.cpp_info.system_libs.append("m")
73 self.cpp_info.srcdirs = [os.path.join("res", "bindings")]
74
75 bin_path = os.path.join(self.package_folder, "bin")
76 self.output.info("Appending PATH env var with : {}".format(bin_path))
77 self.env_info.PATH.append(bin_path)
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/imgui/all/conanfile.py b/recipes/imgui/all/conanfile.py
--- a/recipes/imgui/all/conanfile.py
+++ b/recipes/imgui/all/conanfile.py
@@ -12,19 +12,18 @@
topics = ("conan", "imgui", "gui", "graphical")
license = "MIT"
- exports_sources = ["CMakeLists.txt"]
- generators = "cmake"
-
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
- "fPIC": [True, False]
+ "fPIC": [True, False],
}
default_options = {
"shared": False,
- "fPIC": True
+ "fPIC": True,
}
+ exports_sources = "CMakeLists.txt"
+ generators = "cmake"
_cmake = None
@property
@@ -56,7 +55,7 @@
def package(self):
self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
- backends_folder = src=os.path.join(
+ backends_folder = os.path.join(
self._source_subfolder,
"backends" if tools.Version(self.version) >= "1.80" else "examples"
)
@@ -68,6 +67,7 @@
def package_info(self):
self.cpp_info.libs = ["imgui"]
+ self.cpp_info.defines.append("IMGUI_USER_CONFIG=\"imgui_user_config.h\"")
if self.settings.os == "Linux":
self.cpp_info.system_libs.append("m")
self.cpp_info.srcdirs = [os.path.join("res", "bindings")]
| {"golden_diff": "diff --git a/recipes/imgui/all/conanfile.py b/recipes/imgui/all/conanfile.py\n--- a/recipes/imgui/all/conanfile.py\n+++ b/recipes/imgui/all/conanfile.py\n@@ -12,19 +12,18 @@\n topics = (\"conan\", \"imgui\", \"gui\", \"graphical\")\n license = \"MIT\"\n \n- exports_sources = [\"CMakeLists.txt\"]\n- generators = \"cmake\"\n-\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n- \"fPIC\": [True, False]\n+ \"fPIC\": [True, False],\n }\n default_options = {\n \"shared\": False,\n- \"fPIC\": True\n+ \"fPIC\": True,\n }\n \n+ exports_sources = \"CMakeLists.txt\"\n+ generators = \"cmake\"\n _cmake = None\n \n @property\n@@ -56,7 +55,7 @@\n \n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n- backends_folder = src=os.path.join(\n+ backends_folder = os.path.join(\n self._source_subfolder,\n \"backends\" if tools.Version(self.version) >= \"1.80\" else \"examples\"\n )\n@@ -68,6 +67,7 @@\n \n def package_info(self):\n self.cpp_info.libs = [\"imgui\"]\n+ self.cpp_info.defines.append(\"IMGUI_USER_CONFIG=\\\"imgui_user_config.h\\\"\")\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"m\")\n self.cpp_info.srcdirs = [os.path.join(\"res\", \"bindings\")]\n", "issue": "[package] imgui/1.84.1: Shared library does not automatically import global data symbols\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **imgui/1.84.1**\r\n * Operating System+version: **Windows 10 21H1 Build 19043.1165**\r\n * Compiler+version: **Visual Studio 16 (2019)**\r\n * Docker image: **N/A**\r\n * Conan version: **conan 1.39.0**\r\n * Python version: **Python 3.9.6**\r\n\r\n\r\n### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)\r\n```\r\n[settings]\r\nos=Windows\r\nos_build=Windows\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=Visual Studio\r\ncompiler.version=16\r\nbuild_type=Release\r\n[options]\r\n[conf]\r\n[build_requires]\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nTry to reference any code that uses global data symbols since those need to use `__declspec(dllimport)` when using [`WINDOWS_EXPORT_ALL_SYMBOLS`](https://cmake.org/cmake/help/latest/prop_tgt/WINDOWS_EXPORT_ALL_SYMBOLS.html#windows-export-all-symbols). 
One example could be using [`ImGuiTextBuffer`](https://github.com/ocornut/imgui/blob/v1.84.1/imgui.h#L2078) (which has `IMGUI_API static char EmptyString[1];`).\r\nThe following diff is for ImGui's [`test_package.cpp`](https://github.com/conan-io/conan-center-index/blob/master/recipes/imgui/all/test_package/test_package.cpp) and can reproduce this issue.\r\n\r\n```\r\n--- a/recipes/imgui/all/test_package/test_package.cpp\r\n+++ b/recipes/imgui/all/test_package/test_package.cpp\r\n@@ -5,6 +5,9 @@ int main(int, char**)\r\n {\r\n ImGuiContext* context =ImGui::CreateContext();\r\n ImGuiIO& io = ImGui::GetIO();\r\n+\t\r\n+ ImGuiTextBuffer textBuffer;\r\n+ textBuffer.append(\"Hello, ImGui\");\r\n \r\n // Build atlas\r\n unsigned char* tex_pixels = NULL;\r\n@@ -20,6 +23,7 @@ int main(int, char**)\r\n \r\n static float f = 0.0f;\r\n ImGui::Text(\"Hello, world!\");\r\n+ ImGui::Text(textBuffer.begin());\r\n ImGui::SliderFloat(\"float\", &f, 0.0f, 1.0f);\r\n ImGui::Text(\"Application average %.3f ms/frame (%.1f FPS)\", 1000.0f / io.Framerate, io.Framerate);\r\n ImGui::ShowDemoWindow(NULL);\r\n```\r\n\r\nThen, try to create the package with `conan create . imgui/1.84.1@ -o imgui:shared=True`.\r\n\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\ntest_package.obj : error LNK2019: unresolved external symbol \"public: static char * ImGuiTextBuffer::EmptyString\" (?Emp\r\ntyString@ImGuiTextBuffer@@2PADA) referenced in function main\r\n```\r\n\r\n</details>\r\n\r\nI think the simplest solution would be to add something like this\r\n\r\n```\r\nif self.options.shared and self.settings.os == \"Windows\":\r\n self.cpp_info.defines.append(\"IMGUI_API=__declspec(dllimport)\")\r\n```\r\n\r\nI'd be happy to open a PR with this change.\n", "before_files": [{"content": "from conans import ConanFile, CMake, tools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass IMGUIConan(ConanFile):\n name = \"imgui\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/ocornut/imgui\"\n description = \"Bloat-free Immediate Mode Graphical User interface for C++ with minimal dependencies\"\n topics = (\"conan\", \"imgui\", \"gui\", \"graphical\")\n license = \"MIT\"\n\n exports_sources = [\"CMakeLists.txt\"]\n generators = \"cmake\"\n\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.configure()\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n backends_folder = src=os.path.join(\n self._source_subfolder,\n \"backends\" if tools.Version(self.version) >= \"1.80\" else \"examples\"\n )\n self.copy(pattern=\"imgui_impl_*\",\n dst=os.path.join(\"res\", \"bindings\"),\n src=backends_folder)\n cmake = self._configure_cmake()\n 
cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"imgui\"]\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"m\")\n self.cpp_info.srcdirs = [os.path.join(\"res\", \"bindings\")]\n\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH env var with : {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/imgui/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, CMake, tools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass IMGUIConan(ConanFile):\n name = \"imgui\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/ocornut/imgui\"\n description = \"Bloat-free Immediate Mode Graphical User interface for C++ with minimal dependencies\"\n topics = (\"conan\", \"imgui\", \"gui\", \"graphical\")\n license = \"MIT\"\n\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n }\n\n exports_sources = \"CMakeLists.txt\"\n generators = \"cmake\"\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.configure()\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n backends_folder = os.path.join(\n self._source_subfolder,\n \"backends\" if tools.Version(self.version) >= \"1.80\" else \"examples\"\n )\n self.copy(pattern=\"imgui_impl_*\",\n dst=os.path.join(\"res\", \"bindings\"),\n src=backends_folder)\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"imgui\"]\n self.cpp_info.defines.append(\"IMGUI_USER_CONFIG=\\\"imgui_user_config.h\\\"\")\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"m\")\n self.cpp_info.srcdirs = [os.path.join(\"res\", \"bindings\")]\n\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH env var with : {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/imgui/all/conanfile.py"}]} | 1,767 | 396 |
gh_patches_debug_36505 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2690 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/gemini/gemini_context.py`
Content:
```
1 from enum import EnumMeta
2
3
4 class GeminiMemoryManager(object):
5
6 def __init__(self, states_cls: EnumMeta):
7 super().__init__()
8 self.states_cls = states_cls
9 self._cnter = 0 # the counter of instances
10
11 self.total_mem = dict()
12 self.state_mem = dict()
13 self.state_mem['cpu'] = dict()
14 self.state_mem['cuda'] = dict()
15
16 self.reset()
17
18 @property
19 def total_number(self):
20 return self._cnter
21
22 def reset(self):
23 self._cnter = 0 # the counter of instances
24
25 self.total_mem['cpu'] = 0 # memory occupation of instances in cpu
26 self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda
27
28 # memory conditions for all states
29 for state in self.states_cls:
30 self.state_mem['cpu'][state] = 0
31 self.state_mem['cuda'][state] = 0
32
33 def register_new_instance(self):
34 self._cnter += 1
35
36 def delete_instance(self):
37 self._cnter -= 1
38
39 def print_info(self):
40 print(f"Total number: {self.total_number}",
41 f"Total CPU memory occupation: {self.total_mem['cpu']}",
42 f"Total CUDA memory occupation: {self.total_mem['cuda']}\n",
43 sep='\n')
44
45 for state in self.states_cls:
46 print(f"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}",
47 f"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\n",
48 sep='\n')
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/colossalai/gemini/gemini_context.py b/colossalai/gemini/gemini_context.py
--- a/colossalai/gemini/gemini_context.py
+++ b/colossalai/gemini/gemini_context.py
@@ -1,48 +1,48 @@
-from enum import EnumMeta
-
-
-class GeminiMemoryManager(object):
-
- def __init__(self, states_cls: EnumMeta):
- super().__init__()
- self.states_cls = states_cls
- self._cnter = 0 # the counter of instances
-
- self.total_mem = dict()
- self.state_mem = dict()
- self.state_mem['cpu'] = dict()
- self.state_mem['cuda'] = dict()
-
- self.reset()
-
- @property
- def total_number(self):
- return self._cnter
-
- def reset(self):
- self._cnter = 0 # the counter of instances
-
- self.total_mem['cpu'] = 0 # memory occupation of instances in cpu
- self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda
-
- # memory conditions for all states
- for state in self.states_cls:
- self.state_mem['cpu'][state] = 0
- self.state_mem['cuda'][state] = 0
-
- def register_new_instance(self):
- self._cnter += 1
-
- def delete_instance(self):
- self._cnter -= 1
-
- def print_info(self):
- print(f"Total number: {self.total_number}",
- f"Total CPU memory occupation: {self.total_mem['cpu']}",
- f"Total CUDA memory occupation: {self.total_mem['cuda']}\n",
- sep='\n')
-
- for state in self.states_cls:
- print(f"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}",
- f"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\n",
- sep='\n')
+from enum import EnumMeta
+
+
+class GeminiMemoryManager(object):
+
+ def __init__(self, states_cls: EnumMeta):
+ super().__init__()
+ self.states_cls = states_cls
+ self._cnter = 0 # the counter of instances
+
+ self.total_mem = dict()
+ self.state_mem = dict()
+ self.state_mem['cpu'] = dict()
+ self.state_mem['cuda'] = dict()
+
+ self.reset()
+
+ @property
+ def total_number(self):
+ return self._cnter
+
+ def reset(self):
+ self._cnter = 0 # the counter of instances
+
+ self.total_mem['cpu'] = 0 # memory occupation of instances in cpu
+ self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda
+
+ # memory conditions for all states
+ for state in self.states_cls:
+ self.state_mem['cpu'][state] = 0
+ self.state_mem['cuda'][state] = 0
+
+ def register_new_instance(self):
+ self._cnter += 1
+
+ def delete_instance(self):
+ self._cnter -= 1
+
+ def print_info(self):
+ print(f"Total number: {self.total_number}",
+ f"Total CPU memory occupation: {self.total_mem['cpu']}",
+ f"Total CUDA memory occupation: {self.total_mem['cuda']}\n",
+ sep='\n')
+
+ for state in self.states_cls:
+ print(f"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}",
+ f"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\n",
+ sep='\n')
| {"golden_diff": "diff --git a/colossalai/gemini/gemini_context.py b/colossalai/gemini/gemini_context.py\n--- a/colossalai/gemini/gemini_context.py\n+++ b/colossalai/gemini/gemini_context.py\n@@ -1,48 +1,48 @@\n-from enum import EnumMeta\r\n-\r\n-\r\n-class GeminiMemoryManager(object):\r\n-\r\n- def __init__(self, states_cls: EnumMeta):\r\n- super().__init__()\r\n- self.states_cls = states_cls\r\n- self._cnter = 0 # the counter of instances\r\n-\r\n- self.total_mem = dict()\r\n- self.state_mem = dict()\r\n- self.state_mem['cpu'] = dict()\r\n- self.state_mem['cuda'] = dict()\r\n-\r\n- self.reset()\r\n-\r\n- @property\r\n- def total_number(self):\r\n- return self._cnter\r\n-\r\n- def reset(self):\r\n- self._cnter = 0 # the counter of instances\r\n-\r\n- self.total_mem['cpu'] = 0 # memory occupation of instances in cpu\r\n- self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda\r\n-\r\n- # memory conditions for all states\r\n- for state in self.states_cls:\r\n- self.state_mem['cpu'][state] = 0\r\n- self.state_mem['cuda'][state] = 0\r\n-\r\n- def register_new_instance(self):\r\n- self._cnter += 1\r\n-\r\n- def delete_instance(self):\r\n- self._cnter -= 1\r\n-\r\n- def print_info(self):\r\n- print(f\"Total number: {self.total_number}\",\r\n- f\"Total CPU memory occupation: {self.total_mem['cpu']}\",\r\n- f\"Total CUDA memory occupation: {self.total_mem['cuda']}\\n\",\r\n- sep='\\n')\r\n-\r\n- for state in self.states_cls:\r\n- print(f\"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}\",\r\n- f\"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\\n\",\r\n- sep='\\n')\r\n+from enum import EnumMeta\n+\n+\n+class GeminiMemoryManager(object):\n+\n+ def __init__(self, states_cls: EnumMeta):\n+ super().__init__()\n+ self.states_cls = states_cls\n+ self._cnter = 0 # the counter of instances\n+\n+ self.total_mem = dict()\n+ self.state_mem = dict()\n+ self.state_mem['cpu'] = dict()\n+ self.state_mem['cuda'] = dict()\n+\n+ self.reset()\n+\n+ @property\n+ def total_number(self):\n+ return self._cnter\n+\n+ def reset(self):\n+ self._cnter = 0 # the counter of instances\n+\n+ self.total_mem['cpu'] = 0 # memory occupation of instances in cpu\n+ self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda\n+\n+ # memory conditions for all states\n+ for state in self.states_cls:\n+ self.state_mem['cpu'][state] = 0\n+ self.state_mem['cuda'][state] = 0\n+\n+ def register_new_instance(self):\n+ self._cnter += 1\n+\n+ def delete_instance(self):\n+ self._cnter -= 1\n+\n+ def print_info(self):\n+ print(f\"Total number: {self.total_number}\",\n+ f\"Total CPU memory occupation: {self.total_mem['cpu']}\",\n+ f\"Total CUDA memory occupation: {self.total_mem['cuda']}\\n\",\n+ sep='\\n')\n+\n+ for state in self.states_cls:\n+ print(f\"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}\",\n+ f\"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\\n\",\n+ sep='\\n')\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from enum import EnumMeta\r\n\r\n\r\nclass GeminiMemoryManager(object):\r\n\r\n def __init__(self, states_cls: EnumMeta):\r\n super().__init__()\r\n self.states_cls = states_cls\r\n self._cnter = 0 # the counter of instances\r\n\r\n self.total_mem = dict()\r\n self.state_mem = dict()\r\n self.state_mem['cpu'] = dict()\r\n self.state_mem['cuda'] = dict()\r\n\r\n self.reset()\r\n\r\n @property\r\n def total_number(self):\r\n return 
self._cnter\r\n\r\n def reset(self):\r\n self._cnter = 0 # the counter of instances\r\n\r\n self.total_mem['cpu'] = 0 # memory occupation of instances in cpu\r\n self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda\r\n\r\n # memory conditions for all states\r\n for state in self.states_cls:\r\n self.state_mem['cpu'][state] = 0\r\n self.state_mem['cuda'][state] = 0\r\n\r\n def register_new_instance(self):\r\n self._cnter += 1\r\n\r\n def delete_instance(self):\r\n self._cnter -= 1\r\n\r\n def print_info(self):\r\n print(f\"Total number: {self.total_number}\",\r\n f\"Total CPU memory occupation: {self.total_mem['cpu']}\",\r\n f\"Total CUDA memory occupation: {self.total_mem['cuda']}\\n\",\r\n sep='\\n')\r\n\r\n for state in self.states_cls:\r\n print(f\"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}\",\r\n f\"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\\n\",\r\n sep='\\n')\r\n", "path": "colossalai/gemini/gemini_context.py"}], "after_files": [{"content": "from enum import EnumMeta\n\n\nclass GeminiMemoryManager(object):\n\n def __init__(self, states_cls: EnumMeta):\n super().__init__()\n self.states_cls = states_cls\n self._cnter = 0 # the counter of instances\n\n self.total_mem = dict()\n self.state_mem = dict()\n self.state_mem['cpu'] = dict()\n self.state_mem['cuda'] = dict()\n\n self.reset()\n\n @property\n def total_number(self):\n return self._cnter\n\n def reset(self):\n self._cnter = 0 # the counter of instances\n\n self.total_mem['cpu'] = 0 # memory occupation of instances in cpu\n self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda\n\n # memory conditions for all states\n for state in self.states_cls:\n self.state_mem['cpu'][state] = 0\n self.state_mem['cuda'][state] = 0\n\n def register_new_instance(self):\n self._cnter += 1\n\n def delete_instance(self):\n self._cnter -= 1\n\n def print_info(self):\n print(f\"Total number: {self.total_number}\",\n f\"Total CPU memory occupation: {self.total_mem['cpu']}\",\n f\"Total CUDA memory occupation: {self.total_mem['cuda']}\\n\",\n sep='\\n')\n\n for state in self.states_cls:\n print(f\"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}\",\n f\"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\\n\",\n sep='\\n')\n", "path": "colossalai/gemini/gemini_context.py"}]} | 744 | 870 |
gh_patches_debug_363 | rasdani/github-patches | git_diff | mozilla__bugbug-3921 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[model:regressor] AttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'
https://community-tc.services.mozilla.com/tasks/HncpjvKKRcSnxL_GJ8PV9A/runs/0/logs/public/logs/live.log
```
Traceback (most recent call last):
File "/usr/local/bin/bugbug-train", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/site-packages/scripts/trainer.py", line 141, in main
retriever.go(args)
File "/usr/local/lib/python3.10/site-packages/scripts/trainer.py", line 41, in go
metrics = model_obj.train(limit=args.limit)
File "/usr/local/lib/python3.10/site-packages/bugbug/model.py", line 418, in train
logger.info("Number of features: %d", self.clf.steps[-1][1].n_features_in_)
AttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'
```
--- END ISSUE ---
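Before the file listing, a self-contained toy example of the delegation pattern that resolves this error (the accepted patch below adds an equivalent `n_features_in_` property to `IsotonicRegressionCalibrator`); the class and data here are invented for illustration.
```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.linear_model import LogisticRegression


class DelegatingWrapper(BaseEstimator, ClassifierMixin):
    """Toy wrapper: expose the wrapped estimator's n_features_in_ so that
    pipeline introspection like clf.steps[-1][1].n_features_in_ keeps working."""

    def __init__(self, base_clf):
        self.base_clf = base_clf

    def fit(self, X, y):
        self.base_clf.fit(X, y)
        return self

    def predict(self, X):
        return self.base_clf.predict(X)

    @property
    def n_features_in_(self):
        # Delegate to the fitted inner estimator instead of defining it ourselves.
        return self.base_clf.n_features_in_


X = np.arange(60, dtype=float).reshape(20, 3)
y = np.array([0, 1] * 10)
wrapped = DelegatingWrapper(LogisticRegression()).fit(X, y)
print(wrapped.n_features_in_)  # 3, delegated from the fitted LogisticRegression
```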
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/model_calibration.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 from sklearn.base import BaseEstimator, ClassifierMixin
7 from sklearn.calibration import CalibratedClassifierCV
8 from sklearn.model_selection import train_test_split
9
10
11 class IsotonicRegressionCalibrator(BaseEstimator, ClassifierMixin):
12 def __init__(self, base_clf):
13 self.base_clf = base_clf
14 self.calibrated_clf = CalibratedClassifierCV(
15 base_clf, cv="prefit", method="isotonic"
16 )
17
18 def fit(self, X_train, y_train):
19 X_train, X_val, y_train, y_val = train_test_split(
20 X_train, y_train, test_size=0.2, random_state=42
21 )
22 self.base_clf.fit(X_train, y_train)
23 self.calibrated_clf.fit(X_val, y_val)
24
25 def predict(self, X):
26 return self.calibrated_clf.predict(X)
27
28 def predict_proba(self, X):
29 return self.calibrated_clf.predict_proba(X)
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bugbug/model_calibration.py b/bugbug/model_calibration.py
--- a/bugbug/model_calibration.py
+++ b/bugbug/model_calibration.py
@@ -27,3 +27,7 @@
def predict_proba(self, X):
return self.calibrated_clf.predict_proba(X)
+
+ @property
+ def n_features_in_(self):
+ return self.base_clf.n_features_in_
| {"golden_diff": "diff --git a/bugbug/model_calibration.py b/bugbug/model_calibration.py\n--- a/bugbug/model_calibration.py\n+++ b/bugbug/model_calibration.py\n@@ -27,3 +27,7 @@\n \n def predict_proba(self, X):\n return self.calibrated_clf.predict_proba(X)\n+\n+ @property\n+ def n_features_in_(self):\n+ return self.base_clf.n_features_in_\n", "issue": "[model:regressor] AttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'\nhttps://community-tc.services.mozilla.com/tasks/HncpjvKKRcSnxL_GJ8PV9A/runs/0/logs/public/logs/live.log\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/bugbug-train\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.10/site-packages/scripts/trainer.py\", line 141, in main\r\n retriever.go(args)\r\n File \"/usr/local/lib/python3.10/site-packages/scripts/trainer.py\", line 41, in go\r\n metrics = model_obj.train(limit=args.limit)\r\n File \"/usr/local/lib/python3.10/site-packages/bugbug/model.py\", line 418, in train\r\n logger.info(\"Number of features: %d\", self.clf.steps[-1][1].n_features_in_)\r\nAttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nfrom sklearn.base import BaseEstimator, ClassifierMixin\nfrom sklearn.calibration import CalibratedClassifierCV\nfrom sklearn.model_selection import train_test_split\n\n\nclass IsotonicRegressionCalibrator(BaseEstimator, ClassifierMixin):\n def __init__(self, base_clf):\n self.base_clf = base_clf\n self.calibrated_clf = CalibratedClassifierCV(\n base_clf, cv=\"prefit\", method=\"isotonic\"\n )\n\n def fit(self, X_train, y_train):\n X_train, X_val, y_train, y_val = train_test_split(\n X_train, y_train, test_size=0.2, random_state=42\n )\n self.base_clf.fit(X_train, y_train)\n self.calibrated_clf.fit(X_val, y_val)\n\n def predict(self, X):\n return self.calibrated_clf.predict(X)\n\n def predict_proba(self, X):\n return self.calibrated_clf.predict_proba(X)\n", "path": "bugbug/model_calibration.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nfrom sklearn.base import BaseEstimator, ClassifierMixin\nfrom sklearn.calibration import CalibratedClassifierCV\nfrom sklearn.model_selection import train_test_split\n\n\nclass IsotonicRegressionCalibrator(BaseEstimator, ClassifierMixin):\n def __init__(self, base_clf):\n self.base_clf = base_clf\n self.calibrated_clf = CalibratedClassifierCV(\n base_clf, cv=\"prefit\", method=\"isotonic\"\n )\n\n def fit(self, X_train, y_train):\n X_train, X_val, y_train, y_val = train_test_split(\n X_train, y_train, test_size=0.2, random_state=42\n )\n self.base_clf.fit(X_train, y_train)\n self.calibrated_clf.fit(X_val, y_val)\n\n def predict(self, X):\n return self.calibrated_clf.predict(X)\n\n def predict_proba(self, X):\n return self.calibrated_clf.predict_proba(X)\n\n @property\n def n_features_in_(self):\n return self.base_clf.n_features_in_\n", "path": "bugbug/model_calibration.py"}]} | 823 | 94 |
gh_patches_debug_30439 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-38 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
support blocking pull in PS so clients don't need to retry in a loop
--- END ISSUE ---
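A minimal, self-contained sketch of what "blocking pull" means here — waiting on a condition variable until the parameter-server step catches up, instead of having clients poll and retry. It mirrors the `threading.Condition` approach taken by the accepted patch below, but the class and method names are illustrative, not ElasticDL's API.
```python
import threading


class StepGate:
    """Tiny illustration of a blocking pull keyed on a monotonically increasing step."""

    def __init__(self):
        self._step = 0
        self._cv = threading.Condition()

    def advance(self):
        # Called by the update thread after applying a round of gradients.
        with self._cv:
            self._step += 1
            self._cv.notify_all()

    def pull(self, min_step=0, blocking=True, timeout=None):
        with self._cv:
            ok = self._cv.wait_for(
                lambda: not blocking or self._step >= min_step, timeout=timeout)
            if not ok or self._step < min_step:
                raise LookupError('Required step is not ready yet: %s' % min_step)
            return self._step


if __name__ == "__main__":
    gate = StepGate()
    threading.Timer(0.1, gate.advance).start()
    print(gate.pull(min_step=1, timeout=1.0))  # blocks briefly, then prints 1
```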
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow/ps/ps.py`
Content:
```
1 import tensorflow as tf
2 tf.enable_eager_execution()
3 import tensorflow.contrib.eager as tfe
4 import numpy as np
5 import queue
6 import threading
7
8
9 class ParameterServer(object):
10 def __init__(self, optimizer, vars):
11 self._opt = optimizer
12 self._vars = {}
13 for k, v in vars.items():
14 if (not isinstance(v, np.ndarray)
15 or v.dtype not in (np.float32, np.float64)):
16 raise ValueError(
17 'Initial value for variable %s is not of float type ndarray' %
18 k)
19 self._vars[k] = tfe.Variable(v, name=k)
20 self._step = 0
21 self._grad_q = queue.Queue()
22 self._lock = threading.Lock()
23 self._runner = threading.Thread(target=self._run, name='ps-runner')
24 self._exiting = False
25
26 def pull(self, min_step=0, names=None):
27 with self._lock:
28 if min_step > self._step:
29 raise LookupError('Required step is not ready yet: %s' % min_step)
30 if names:
31 res = {k: self._vars[k].numpy() for k in names}
32 else:
33 res = {k: v.numpy() for k, v in self._vars.items()}
34 return self._step, res
35
36 def push(self, base_step, sub_step, grads):
37 with self._lock:
38 if base_step > self._step:
39 raise ValueError(
40 'Illegal base step %s, parameter server step is %s' %
41 (base_step, self._step))
42
43 if sub_step < 0:
44 raise ValueError('Illegal sub step %s' % sub_step)
45
46 for k, g in grads.items():
47 v = self._vars[k]
48 if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:
49 raise ValueError('Incompatible gradient for variable %s' % k)
50 # TODO(l.zou): use @dataclass when python 3.7 is available.
51 self._grad_q.put((base_step, sub_step, grads))
52
53 def _compute(self, grads):
54 grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
55 with self._lock:
56 self._opt.apply_gradients(grads_vars)
57 self._step += 1
58
59 def _run(self):
60 while not self._exiting:
61 # TODO(l.zou): How to properly accumulate and decay grads?
62 try:
63 base_step, sub_step, grads = self._grad_q.get(timeout=1.0)
64 self._compute(grads)
65 except queue.Empty:
66 pass
67
68 def start(self):
69 self._runner.start()
70
71 def join(self):
72 self._exiting = True
73 self._runner.join()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensorflow/ps/ps.py b/tensorflow/ps/ps.py
--- a/tensorflow/ps/ps.py
+++ b/tensorflow/ps/ps.py
@@ -1,9 +1,9 @@
+import threading
+import queue
+import numpy as np
+import tensorflow.contrib.eager as tfe
import tensorflow as tf
tf.enable_eager_execution()
-import tensorflow.contrib.eager as tfe
-import numpy as np
-import queue
-import threading
class ParameterServer(object):
@@ -22,11 +22,18 @@
self._lock = threading.Lock()
self._runner = threading.Thread(target=self._run, name='ps-runner')
self._exiting = False
+ self._min_step_cv = threading.Condition()
- def pull(self, min_step=0, names=None):
+ def pull(self, names=None, min_step=0, blocking=True, timeout=None):
+ with self._min_step_cv:
+ self._min_step_cv.wait_for(
+ lambda: not blocking or min_step <= self._step,
+ timeout=timeout)
with self._lock:
if min_step > self._step:
- raise LookupError('Required step is not ready yet: %s' % min_step)
+ raise LookupError(
+ 'Required step is not ready yet: %s' %
+ min_step)
if names:
res = {k: self._vars[k].numpy() for k in names}
else:
@@ -54,7 +61,9 @@
grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
with self._lock:
self._opt.apply_gradients(grads_vars)
+ with self._min_step_cv:
self._step += 1
+ self._min_step_cv.notify_all()
def _run(self):
while not self._exiting:
| {"golden_diff": "diff --git a/tensorflow/ps/ps.py b/tensorflow/ps/ps.py\n--- a/tensorflow/ps/ps.py\n+++ b/tensorflow/ps/ps.py\n@@ -1,9 +1,9 @@\n+import threading\n+import queue\n+import numpy as np\n+import tensorflow.contrib.eager as tfe\n import tensorflow as tf\n tf.enable_eager_execution()\n-import tensorflow.contrib.eager as tfe\n-import numpy as np\n-import queue\n-import threading\n \n \n class ParameterServer(object):\n@@ -22,11 +22,18 @@\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n+ self._min_step_cv = threading.Condition()\n \n- def pull(self, min_step=0, names=None):\n+ def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n+ with self._min_step_cv:\n+ self._min_step_cv.wait_for(\n+ lambda: not blocking or min_step <= self._step,\n+ timeout=timeout)\n with self._lock:\n if min_step > self._step:\n- raise LookupError('Required step is not ready yet: %s' % min_step)\n+ raise LookupError(\n+ 'Required step is not ready yet: %s' %\n+ min_step)\n if names:\n res = {k: self._vars[k].numpy() for k in names}\n else:\n@@ -54,7 +61,9 @@\n grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n self._opt.apply_gradients(grads_vars)\n+ with self._min_step_cv:\n self._step += 1\n+ self._min_step_cv.notify_all()\n \n def _run(self):\n while not self._exiting:\n", "issue": "support blocking pull in PS so client don't need to retry in a loop\n\n", "before_files": [{"content": "import tensorflow as tf\ntf.enable_eager_execution()\nimport tensorflow.contrib.eager as tfe\nimport numpy as np\nimport queue\nimport threading\n\n\nclass ParameterServer(object):\n def __init__(self, optimizer, vars):\n self._opt = optimizer\n self._vars = {}\n for k, v in vars.items():\n if (not isinstance(v, np.ndarray)\n or v.dtype not in (np.float32, np.float64)):\n raise ValueError(\n 'Initial value for variable %s is not of float type ndarray' %\n k)\n self._vars[k] = tfe.Variable(v, name=k)\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n\n def pull(self, min_step=0, names=None):\n with self._lock:\n if min_step > self._step:\n raise LookupError('Required step is not ready yet: %s' % min_step)\n if names:\n res = {k: self._vars[k].numpy() for k in names}\n else:\n res = {k: v.numpy() for k, v in self._vars.items()}\n return self._step, res\n\n def push(self, base_step, sub_step, grads):\n with self._lock:\n if base_step > self._step:\n raise ValueError(\n 'Illegal base step %s, parameter server step is %s' %\n (base_step, self._step))\n\n if sub_step < 0:\n raise ValueError('Illegal sub step %s' % sub_step)\n\n for k, g in grads.items():\n v = self._vars[k]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n\n def _compute(self, grads):\n grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n self._opt.apply_gradients(grads_vars)\n self._step += 1\n\n def _run(self):\n while not self._exiting:\n # TODO(l.zou): How to properly accumulate and decay grads?\n try:\n base_step, sub_step, grads = self._grad_q.get(timeout=1.0)\n self._compute(grads)\n except queue.Empty:\n pass\n\n def start(self):\n self._runner.start()\n\n def join(self):\n self._exiting = 
True\n self._runner.join()\n", "path": "tensorflow/ps/ps.py"}], "after_files": [{"content": "import threading\nimport queue\nimport numpy as np\nimport tensorflow.contrib.eager as tfe\nimport tensorflow as tf\ntf.enable_eager_execution()\n\n\nclass ParameterServer(object):\n def __init__(self, optimizer, vars):\n self._opt = optimizer\n self._vars = {}\n for k, v in vars.items():\n if (not isinstance(v, np.ndarray)\n or v.dtype not in (np.float32, np.float64)):\n raise ValueError(\n 'Initial value for variable %s is not of float type ndarray' %\n k)\n self._vars[k] = tfe.Variable(v, name=k)\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n self._min_step_cv = threading.Condition()\n\n def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n with self._min_step_cv:\n self._min_step_cv.wait_for(\n lambda: not blocking or min_step <= self._step,\n timeout=timeout)\n with self._lock:\n if min_step > self._step:\n raise LookupError(\n 'Required step is not ready yet: %s' %\n min_step)\n if names:\n res = {k: self._vars[k].numpy() for k in names}\n else:\n res = {k: v.numpy() for k, v in self._vars.items()}\n return self._step, res\n\n def push(self, base_step, sub_step, grads):\n with self._lock:\n if base_step > self._step:\n raise ValueError(\n 'Illegal base step %s, parameter server step is %s' %\n (base_step, self._step))\n\n if sub_step < 0:\n raise ValueError('Illegal sub step %s' % sub_step)\n\n for k, g in grads.items():\n v = self._vars[k]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n\n def _compute(self, grads):\n grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n self._opt.apply_gradients(grads_vars)\n with self._min_step_cv:\n self._step += 1\n self._min_step_cv.notify_all()\n\n def _run(self):\n while not self._exiting:\n # TODO(l.zou): How to properly accumulate and decay grads?\n try:\n base_step, sub_step, grads = self._grad_q.get(timeout=1.0)\n self._compute(grads)\n except queue.Empty:\n pass\n\n def start(self):\n self._runner.start()\n\n def join(self):\n self._exiting = True\n self._runner.join()\n", "path": "tensorflow/ps/ps.py"}]} | 1,020 | 422 |
gh_patches_debug_9871 | rasdani/github-patches | git_diff | OCA__social-623 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[13.0] [BUG]"base_search_mail_content" module > Getting bug with "hr" (Employees) module
module: base_search_mail_content
version: 13.0
**Context :**
OCB 13.0 Odoo Server up to date [(08/30/2020),]
Virgin database, to reproduce the issue faced on my test environment.
Also: the same bug occurs on runbot: http://3437172-13-0-56e0a2.runbot2-2.odoo-community.org
**Steps to reproduce**
- Install the "base_search_mail_content" and "hr" (Employees) native Odoo modules together, then try to access "hr" (Employees)
**Current behavior** (model=hr.employee&view_type=kanban, or tree)
When I try to access the "Employees" menu, the following message appears:
> Something went wrong !
Only types ['many2one'] are supported for category (found type text)
**Current resolution**
I uninstall "base_search_mail_content" to retrieve access to "hr" (Employees)
-----------------------------------------------------------------------------------------------------------------------


--- END ISSUE ---
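To make the failure mode concrete before the file listing, a small self-contained `lxml` sketch of the xpath distinction that the fix below relies on; the sample view arch is invented for illustration and is not the real `hr.employee` search view.
```python
# "//field[last()]" also matches <field> nodes nested inside other search-view
# elements (such as a <searchpanel>), so the injected message_content field can
# land somewhere it is not a valid child. "/search/field[last()]" only considers
# direct children of <search>.
from lxml import etree

arch = etree.XML(
    "<search>"
    "<field name='name'/>"
    "<searchpanel>"
    "<field name='department_id'/>"
    "<field name='category_ids'/>"
    "</searchpanel>"
    "</search>"
)
print([f.get("name") for f in arch.xpath("//field[last()]")])
# ['name', 'category_ids']  -> addnext() would insert inside <searchpanel>
print([f.get("name") for f in arch.xpath("/search/field[last()]")])
# ['name']                  -> insertion stays at the top level of <search>
```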
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `base_search_mail_content/models/mail_thread.py`
Content:
```
1 # Copyright 2016-17 Eficent Business and IT Consulting Services S.L.
2 # (http://www.eficent.com)
3 # Copyright 2016 Serpent Consulting Services Pvt. Ltd.
4 # (<http://www.serpentcs.com>)
5 # License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl.html).
6
7 from lxml import etree
8
9 from odoo import _, api, fields, models
10 from odoo.osv import expression
11
12
13 class MailThread(models.AbstractModel):
14 _inherit = "mail.thread"
15
16 def _search_message_content(self, operator, value):
17 model_domain = [("model", "=", self._name)]
18 if operator not in expression.NEGATIVE_TERM_OPERATORS:
19 model_domain += ["|"] * 4
20 model_domain += [
21 ("record_name", operator, value),
22 ("subject", operator, value),
23 ("body", operator, value),
24 ("email_from", operator, value),
25 ("reply_to", operator, value),
26 ]
27 recs = self.env["mail.message"].search(model_domain)
28 return [("id", "in", recs.mapped("res_id"))]
29
30 message_content = fields.Text(
31 string="Message Content",
32 help="Message content, to be used only in searches",
33 compute=lambda self: False,
34 search="_search_message_content",
35 )
36
37 @api.model
38 def fields_view_get(
39 self, view_id=None, view_type="form", toolbar=False, submenu=False
40 ):
41 """
42 Override to add message_content field in all the objects
43 that inherits mail.thread
44 """
45 res = super(MailThread, self).fields_view_get(
46 view_id=view_id, view_type=view_type, toolbar=toolbar, submenu=submenu
47 )
48 if view_type == "search" and self._fields.get("message_content"):
49 doc = etree.XML(res["arch"])
50 res["fields"].update(
51 {"message_content": {"type": "char", "string": _("Message Content")}}
52 )
53
54 for node in doc.xpath("//field[last()]"):
55 # Add message_content in search view
56 elem = etree.Element("field", {"name": "message_content"})
57 node.addnext(elem)
58 res["arch"] = etree.tostring(doc)
59 return res
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/base_search_mail_content/models/mail_thread.py b/base_search_mail_content/models/mail_thread.py
--- a/base_search_mail_content/models/mail_thread.py
+++ b/base_search_mail_content/models/mail_thread.py
@@ -50,8 +50,7 @@
res["fields"].update(
{"message_content": {"type": "char", "string": _("Message Content")}}
)
-
- for node in doc.xpath("//field[last()]"):
+ for node in doc.xpath("/search/field[last()]"):
# Add message_content in search view
elem = etree.Element("field", {"name": "message_content"})
node.addnext(elem)
| {"golden_diff": "diff --git a/base_search_mail_content/models/mail_thread.py b/base_search_mail_content/models/mail_thread.py\n--- a/base_search_mail_content/models/mail_thread.py\n+++ b/base_search_mail_content/models/mail_thread.py\n@@ -50,8 +50,7 @@\n res[\"fields\"].update(\n {\"message_content\": {\"type\": \"char\", \"string\": _(\"Message Content\")}}\n )\n-\n- for node in doc.xpath(\"//field[last()]\"):\n+ for node in doc.xpath(\"/search/field[last()]\"):\n # Add message_content in search view\n elem = etree.Element(\"field\", {\"name\": \"message_content\"})\n node.addnext(elem)\n", "issue": "[13.0] [BUG]\"base_search_mail_content\" module > Getting bug with \"hr\" (Employees) module\nmodule: base_search_mail_content\r\nversion: 13.0\r\n\r\n**Context :**\r\nOCB 13.0 Odoo Server up to date [(08/30/2020),]\r\nVirgin database , to reproduce issue faced on my test environnement.\r\nAlso !! >> Get same bug on runbot : http://3437172-13-0-56e0a2.runbot2-2.odoo-community.org\r\n\r\n**Steps to reproduce**\r\n- Install together \"base_search_mail_content\" & \"hr\" (Employees) native odoo module, and try to access to : hr\" (Employees)\r\n\r\n**Current behavior** (model=hr.employee&view_type=kanban, or tree)\r\nWhen i try to access to menu \"Employees\"There is this following message : \r\n> Something went wrong !\r\nOnly types ['many2one'] are supported for category (found type text)\r\n\r\n**Current resolution**\r\ni uninstall \"base_search_mail_content\" to retreive access to hr\" (Employees)\r\n\r\n-----------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2016-17 Eficent Business and IT Consulting Services S.L.\n# (http://www.eficent.com)\n# Copyright 2016 Serpent Consulting Services Pvt. 
Ltd.\n# (<http://www.serpentcs.com>)\n# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl.html).\n\nfrom lxml import etree\n\nfrom odoo import _, api, fields, models\nfrom odoo.osv import expression\n\n\nclass MailThread(models.AbstractModel):\n _inherit = \"mail.thread\"\n\n def _search_message_content(self, operator, value):\n model_domain = [(\"model\", \"=\", self._name)]\n if operator not in expression.NEGATIVE_TERM_OPERATORS:\n model_domain += [\"|\"] * 4\n model_domain += [\n (\"record_name\", operator, value),\n (\"subject\", operator, value),\n (\"body\", operator, value),\n (\"email_from\", operator, value),\n (\"reply_to\", operator, value),\n ]\n recs = self.env[\"mail.message\"].search(model_domain)\n return [(\"id\", \"in\", recs.mapped(\"res_id\"))]\n\n message_content = fields.Text(\n string=\"Message Content\",\n help=\"Message content, to be used only in searches\",\n compute=lambda self: False,\n search=\"_search_message_content\",\n )\n\n @api.model\n def fields_view_get(\n self, view_id=None, view_type=\"form\", toolbar=False, submenu=False\n ):\n \"\"\"\n Override to add message_content field in all the objects\n that inherits mail.thread\n \"\"\"\n res = super(MailThread, self).fields_view_get(\n view_id=view_id, view_type=view_type, toolbar=toolbar, submenu=submenu\n )\n if view_type == \"search\" and self._fields.get(\"message_content\"):\n doc = etree.XML(res[\"arch\"])\n res[\"fields\"].update(\n {\"message_content\": {\"type\": \"char\", \"string\": _(\"Message Content\")}}\n )\n\n for node in doc.xpath(\"//field[last()]\"):\n # Add message_content in search view\n elem = etree.Element(\"field\", {\"name\": \"message_content\"})\n node.addnext(elem)\n res[\"arch\"] = etree.tostring(doc)\n return res\n", "path": "base_search_mail_content/models/mail_thread.py"}], "after_files": [{"content": "# Copyright 2016-17 Eficent Business and IT Consulting Services S.L.\n# (http://www.eficent.com)\n# Copyright 2016 Serpent Consulting Services Pvt. 
Ltd.\n# (<http://www.serpentcs.com>)\n# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl.html).\n\nfrom lxml import etree\n\nfrom odoo import _, api, fields, models\nfrom odoo.osv import expression\n\n\nclass MailThread(models.AbstractModel):\n _inherit = \"mail.thread\"\n\n def _search_message_content(self, operator, value):\n model_domain = [(\"model\", \"=\", self._name)]\n if operator not in expression.NEGATIVE_TERM_OPERATORS:\n model_domain += [\"|\"] * 4\n model_domain += [\n (\"record_name\", operator, value),\n (\"subject\", operator, value),\n (\"body\", operator, value),\n (\"email_from\", operator, value),\n (\"reply_to\", operator, value),\n ]\n recs = self.env[\"mail.message\"].search(model_domain)\n return [(\"id\", \"in\", recs.mapped(\"res_id\"))]\n\n message_content = fields.Text(\n string=\"Message Content\",\n help=\"Message content, to be used only in searches\",\n compute=lambda self: False,\n search=\"_search_message_content\",\n )\n\n @api.model\n def fields_view_get(\n self, view_id=None, view_type=\"form\", toolbar=False, submenu=False\n ):\n \"\"\"\n Override to add message_content field in all the objects\n that inherits mail.thread\n \"\"\"\n res = super(MailThread, self).fields_view_get(\n view_id=view_id, view_type=view_type, toolbar=toolbar, submenu=submenu\n )\n if view_type == \"search\" and self._fields.get(\"message_content\"):\n doc = etree.XML(res[\"arch\"])\n res[\"fields\"].update(\n {\"message_content\": {\"type\": \"char\", \"string\": _(\"Message Content\")}}\n )\n for node in doc.xpath(\"/search/field[last()]\"):\n # Add message_content in search view\n elem = etree.Element(\"field\", {\"name\": \"message_content\"})\n node.addnext(elem)\n res[\"arch\"] = etree.tostring(doc)\n return res\n", "path": "base_search_mail_content/models/mail_thread.py"}]} | 1,252 | 141 |
gh_patches_debug_13356 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2791 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Modify page routing to allow for any database name
## Current behavior
- Many of our pages have URLs that begin with the database name.
- We also have routes that begin with things like `administration` and `auth`.
- Those routing rules produce an ambiguous routing grammar making it impossible to use Mathesar with a database named "administration" (for example).
## Desired behavior
We should change `/<db_name>/` to `/db/<db_name>`
--- END ISSUE ---
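A short sketch of what the disambiguated routing could look like, using a stub view so the snippet stands alone; the stub and the pattern names are placeholders — the real change to `mathesar/urls.py` is in the patch below.
```python
# Fixed prefixes ("administration/", "auth/", "db/") remove the ambiguity:
# database pages now live under "db/", so a database may be named "administration".
from django.http import HttpResponse
from django.urls import path, re_path


def stub_view(request, **kwargs):  # placeholder for the real Mathesar views
    return HttpResponse("ok")


urlpatterns = [
    path("administration/", stub_view, name="admin_home"),
    path("auth/", stub_view, name="auth_home"),
    path("db/", stub_view, name="db_home"),
    path("db/<db_name>/", stub_view, name="schemas"),
    re_path(r"^db/(?P<db_name>\w+)/(?P<schema_id>\w+)/", stub_view, name="schema_home"),
]
```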
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/urls.py`
Content:
```
1 from django.contrib.auth.views import LoginView
2 from django.urls import include, path, re_path
3 from rest_framework_nested import routers
4
5 from mathesar import views
6 from mathesar.api.db import viewsets as db_viewsets
7 from mathesar.api.ui import viewsets as ui_viewsets
8 from mathesar.users.password_reset import MathesarPasswordResetConfirmView
9
10 db_router = routers.DefaultRouter()
11 db_router.register(r'tables', db_viewsets.TableViewSet, basename='table')
12 db_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')
13 db_router.register(r'links', db_viewsets.LinkViewSet, basename='links')
14 db_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')
15 db_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')
16 db_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')
17
18 db_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')
19 db_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')
20 db_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')
21 db_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')
22 db_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')
23
24 ui_router = routers.DefaultRouter()
25 ui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')
26 ui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')
27 ui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')
28 ui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')
29 ui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')
30
31 ui_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')
32 ui_table_router.register(r'records', ui_viewsets.RecordViewSet, basename='table-record')
33
34 urlpatterns = [
35 path('api/db/v0/', include(db_router.urls)),
36 path('api/db/v0/', include(db_table_router.urls)),
37 path('api/ui/v0/', include(ui_router.urls)),
38 path('api/ui/v0/', include(ui_table_router.urls)),
39 path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),
40 path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),
41 path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),
42 path('auth/', include('django.contrib.auth.urls')),
43 path('', views.home, name='home'),
44 path('profile/', views.profile, name='profile'),
45 path('administration/', views.admin_home, name='admin_home'),
46 path('administration/users/', views.admin_home, name='admin_users_home'),
47 path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),
48 path('administration/update/', views.admin_home, name='admin_update'),
49 path('<db_name>/', views.schemas, name='schemas'),
50 re_path(
51 r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
52 views.schema_home,
53 name='schema_home'
54 ),
55 ]
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/urls.py b/mathesar/urls.py
--- a/mathesar/urls.py
+++ b/mathesar/urls.py
@@ -46,9 +46,10 @@
path('administration/users/', views.admin_home, name='admin_users_home'),
path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),
path('administration/update/', views.admin_home, name='admin_update'),
- path('<db_name>/', views.schemas, name='schemas'),
+ path('db/', views.home, name='db_home'),
+ path('db/<db_name>/', views.schemas, name='schemas'),
re_path(
- r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
+ r'^db/(?P<db_name>\w+)/(?P<schema_id>\w+)/',
views.schema_home,
name='schema_home'
),
| {"golden_diff": "diff --git a/mathesar/urls.py b/mathesar/urls.py\n--- a/mathesar/urls.py\n+++ b/mathesar/urls.py\n@@ -46,9 +46,10 @@\n path('administration/users/', views.admin_home, name='admin_users_home'),\n path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n path('administration/update/', views.admin_home, name='admin_update'),\n- path('<db_name>/', views.schemas, name='schemas'),\n+ path('db/', views.home, name='db_home'),\n+ path('db/<db_name>/', views.schemas, name='schemas'),\n re_path(\n- r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n+ r'^db/(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n", "issue": "Modify page routing to allow for any database name\n## Current behavior\r\n\r\n- Many of our pages have URLs that begin with the database name.\r\n- We also have routes that begin with things like `administration` and `auth`.\r\n- Those routing rules produce an ambiguous routing grammar making it impossible to use Mathesar with a database named \"administration\" (for example).\r\n\r\n## Desired behavior\r\n\r\nWe should change `/<db_name>/` to `/db/<db_name>`\r\n\r\n\n", "before_files": [{"content": "from django.contrib.auth.views import LoginView\nfrom django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\nfrom mathesar.users.password_reset import MathesarPasswordResetConfirmView\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nui_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\nui_table_router.register(r'records', ui_viewsets.RecordViewSet, basename='table-record')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('api/ui/v0/', include(ui_table_router.urls)),\n path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), 
name='password_reset_confirm'),\n path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),\n path('auth/', include('django.contrib.auth.urls')),\n path('', views.home, name='home'),\n path('profile/', views.profile, name='profile'),\n path('administration/', views.admin_home, name='admin_home'),\n path('administration/users/', views.admin_home, name='admin_users_home'),\n path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n path('administration/update/', views.admin_home, name='admin_update'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}], "after_files": [{"content": "from django.contrib.auth.views import LoginView\nfrom django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\nfrom mathesar.users.password_reset import MathesarPasswordResetConfirmView\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nui_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\nui_table_router.register(r'records', ui_viewsets.RecordViewSet, basename='table-record')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('api/ui/v0/', include(ui_table_router.urls)),\n path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),\n path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),\n path('auth/', include('django.contrib.auth.urls')),\n path('', views.home, name='home'),\n path('profile/', views.profile, name='profile'),\n path('administration/', views.admin_home, name='admin_home'),\n path('administration/users/', views.admin_home, name='admin_users_home'),\n 
path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n path('administration/update/', views.admin_home, name='admin_update'),\n path('db/', views.home, name='db_home'),\n path('db/<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^db/(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}]} | 1,160 | 204 |
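Editorial aside (not part of the record above): a hedged sketch of how the reordered routes in this patch could be sanity-checked. It assumes a Django project configured with the patched `mathesar/urls.py` as its root URLconf and the route names shown in the diff; it is an illustration, not part of the Mathesar test suite.

```python
# Hypothetical check against the patched URLconf (requires configured Django settings).
from django.urls import resolve

match = resolve("/administration/")        # still routes to the admin pages
print(match.url_name)                      # expected: "admin_home"

match = resolve("/db/administration/")     # a database literally named "administration"
print(match.url_name)                      # expected: "schemas"
print(match.kwargs)                        # expected: {"db_name": "administration"}
```

With the `db/` prefix in place, database names no longer compete with reserved prefixes such as `administration` or `auth`, which is the ambiguity the issue describes.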
gh_patches_debug_7778 | rasdani/github-patches | git_diff | nipy__nipype-2096 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
interfaces.camino.convert.FSL2Scheme does not show up in documentation
Diagram on front page of docs contains a typo
"Idiosynchratic" should be "Idiosyncratic"
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/build_interface_docs.py`
Content:
```
1 #!/usr/bin/env python
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 """Script to auto-generate interface docs.
5 """
6 from __future__ import print_function, unicode_literals
7 # stdlib imports
8 import os
9 import sys
10
11 # *****************************************************************************
12 if __name__ == '__main__':
13 nipypepath = os.path.abspath('..')
14 sys.path.insert(1, nipypepath)
15 # local imports
16 from interfacedocgen import InterfaceHelpWriter
17 package = 'nipype'
18 outdir = os.path.join('interfaces', 'generated')
19 docwriter = InterfaceHelpWriter(package)
20 # Packages that should not be included in generated API docs.
21 docwriter.package_skip_patterns += ['\.external$',
22 '\.fixes$',
23 '\.utils$',
24 '\.pipeline',
25 '\.testing',
26 '\.caching',
27 '\.scripts',
28 ]
29 # Modules that should not be included in generated API docs.
30 docwriter.module_skip_patterns += ['\.version$',
31 '\.interfaces\.base$',
32 '\.interfaces\.matlab$',
33 '\.interfaces\.rest$',
34 '\.interfaces\.pymvpa$',
35 '\.interfaces\.slicer\.generate_classes$',
36 '\.interfaces\.spm\.base$',
37 '\.interfaces\.traits',
38 '\.pipeline\.alloy$',
39 '\.pipeline\.s3_node_wrapper$',
40 '\.testing',
41 '\.scripts',
42 ]
43 docwriter.class_skip_patterns += ['AFNICommand',
44 'ANTS',
45 'FSL',
46 'FS',
47 'Info',
48 '^SPM',
49 'Tester',
50 'Spec$',
51 'Numpy'
52 # NipypeTester raises an
53 # exception when instantiated in
54 # InterfaceHelpWriter.generate_api_doc
55 'NipypeTester',
56 ]
57 docwriter.write_api_docs(outdir)
58 docwriter.write_index(outdir, 'gen', relative_to='interfaces')
59 print('%d files written' % len(docwriter.written_modules))
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/build_interface_docs.py b/tools/build_interface_docs.py
--- a/tools/build_interface_docs.py
+++ b/tools/build_interface_docs.py
@@ -42,7 +42,7 @@
]
docwriter.class_skip_patterns += ['AFNICommand',
'ANTS',
- 'FSL',
+ 'FSLCommand',
'FS',
'Info',
'^SPM',
| {"golden_diff": "diff --git a/tools/build_interface_docs.py b/tools/build_interface_docs.py\n--- a/tools/build_interface_docs.py\n+++ b/tools/build_interface_docs.py\n@@ -42,7 +42,7 @@\n ]\n docwriter.class_skip_patterns += ['AFNICommand',\n 'ANTS',\n- 'FSL',\n+ 'FSLCommand',\n 'FS',\n 'Info',\n '^SPM',\n", "issue": "interfaces.camino.convert.FSL2Scheme does not show up in documentation\n\nDiagram on front page of docs contains a typo\n\"Idiosynchratic\" should be \"Idiosyncratic\"\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"Script to auto-generate interface docs.\n\"\"\"\nfrom __future__ import print_function, unicode_literals\n# stdlib imports\nimport os\nimport sys\n\n# *****************************************************************************\nif __name__ == '__main__':\n nipypepath = os.path.abspath('..')\n sys.path.insert(1, nipypepath)\n # local imports\n from interfacedocgen import InterfaceHelpWriter\n package = 'nipype'\n outdir = os.path.join('interfaces', 'generated')\n docwriter = InterfaceHelpWriter(package)\n # Packages that should not be included in generated API docs.\n docwriter.package_skip_patterns += ['\\.external$',\n '\\.fixes$',\n '\\.utils$',\n '\\.pipeline',\n '\\.testing',\n '\\.caching',\n '\\.scripts',\n ]\n # Modules that should not be included in generated API docs.\n docwriter.module_skip_patterns += ['\\.version$',\n '\\.interfaces\\.base$',\n '\\.interfaces\\.matlab$',\n '\\.interfaces\\.rest$',\n '\\.interfaces\\.pymvpa$',\n '\\.interfaces\\.slicer\\.generate_classes$',\n '\\.interfaces\\.spm\\.base$',\n '\\.interfaces\\.traits',\n '\\.pipeline\\.alloy$',\n '\\.pipeline\\.s3_node_wrapper$',\n '\\.testing',\n '\\.scripts',\n ]\n docwriter.class_skip_patterns += ['AFNICommand',\n 'ANTS',\n 'FSL',\n 'FS',\n 'Info',\n '^SPM',\n 'Tester',\n 'Spec$',\n 'Numpy'\n # NipypeTester raises an\n # exception when instantiated in\n # InterfaceHelpWriter.generate_api_doc\n 'NipypeTester',\n ]\n docwriter.write_api_docs(outdir)\n docwriter.write_index(outdir, 'gen', relative_to='interfaces')\n print('%d files written' % len(docwriter.written_modules))\n", "path": "tools/build_interface_docs.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"Script to auto-generate interface docs.\n\"\"\"\nfrom __future__ import print_function, unicode_literals\n# stdlib imports\nimport os\nimport sys\n\n# *****************************************************************************\nif __name__ == '__main__':\n nipypepath = os.path.abspath('..')\n sys.path.insert(1, nipypepath)\n # local imports\n from interfacedocgen import InterfaceHelpWriter\n package = 'nipype'\n outdir = os.path.join('interfaces', 'generated')\n docwriter = InterfaceHelpWriter(package)\n # Packages that should not be included in generated API docs.\n docwriter.package_skip_patterns += ['\\.external$',\n '\\.fixes$',\n '\\.utils$',\n '\\.pipeline',\n '\\.testing',\n '\\.caching',\n '\\.scripts',\n ]\n # Modules that should not be included in generated API docs.\n docwriter.module_skip_patterns += ['\\.version$',\n '\\.interfaces\\.base$',\n '\\.interfaces\\.matlab$',\n '\\.interfaces\\.rest$',\n '\\.interfaces\\.pymvpa$',\n '\\.interfaces\\.slicer\\.generate_classes$',\n '\\.interfaces\\.spm\\.base$',\n '\\.interfaces\\.traits',\n 
'\\.pipeline\\.alloy$',\n '\\.pipeline\\.s3_node_wrapper$',\n '\\.testing',\n '\\.scripts',\n ]\n docwriter.class_skip_patterns += ['AFNICommand',\n 'ANTS',\n 'FSLCommand',\n 'FS',\n 'Info',\n '^SPM',\n 'Tester',\n 'Spec$',\n 'Numpy'\n # NipypeTester raises an\n # exception when instantiated in\n # InterfaceHelpWriter.generate_api_doc\n 'NipypeTester',\n ]\n docwriter.write_api_docs(outdir)\n docwriter.write_index(outdir, 'gen', relative_to='interfaces')\n print('%d files written' % len(docwriter.written_modules))\n", "path": "tools/build_interface_docs.py"}]} | 873 | 88 |
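Editorial aside (not part of the record above): a standalone illustration of why the original skip pattern hid `FSL2Scheme` from the generated docs. It assumes, plausibly but without checking `interfacedocgen`, that `class_skip_patterns` entries are applied as regular-expression searches against class names.

```python
import re

class_names = ["FSL2Scheme", "FSLCommand", "FSLCommandInputSpec"]

for pattern in ("FSL", "FSLCommand"):
    skipped = [name for name in class_names if re.search(pattern, name)]
    print(pattern, "skips", skipped)

# The broad "FSL" pattern matches every name above, including FSL2Scheme, so it was
# dropped from the docs; narrowing the pattern to "FSLCommand" keeps FSL2Scheme visible.
```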
gh_patches_debug_2400 | rasdani/github-patches | git_diff | dask__distributed-2975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dask.distributed.progress no longer callable in 2.3.0?
We've used the progress() function from dask.distributed a bunch in the past to display a progress bar in JupyterLab, but it seems to have stopped working after upgrading to Dask 2.3.0:
```
from dask.distributed import Client, progress
import dask.dataframe as dd
df = dd.demo.make_timeseries('2010', '2016',
{'value': float, 'name': str, 'id': int},
freq='10s', partition_freq='7d', seed=1)
df = df.persist()
progress(df)
```
Executing this in a single cell in JupyterLab (with an existing Dask cluster already running) results in:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-16af814d7204> in <module>
7
8 df = df.persist()
----> 9 progress(df)
TypeError: 'module' object is not callable
```
Let me know if I can provide any more info. Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/__init__.py`
Content:
```
1 from . import config
2 from dask.config import config
3 from .actor import Actor, ActorFuture
4 from .core import connect, rpc
5 from .deploy import LocalCluster, Adaptive, SpecCluster
6 from .diagnostics import progress
7 from .client import (
8 Client,
9 Executor,
10 CompatibleExecutor,
11 wait,
12 as_completed,
13 default_client,
14 fire_and_forget,
15 Future,
16 futures_of,
17 get_task_stream,
18 )
19 from .lock import Lock
20 from .nanny import Nanny
21 from .pubsub import Pub, Sub
22 from .queues import Queue
23 from .scheduler import Scheduler
24 from .threadpoolexecutor import rejoin
25 from .utils import sync
26 from .variable import Variable
27 from .worker import Worker, get_worker, get_client, secede, Reschedule
28 from .worker_client import local_client, worker_client
29
30 from tornado.gen import TimeoutError
31
32 from ._version import get_versions
33
34 versions = get_versions()
35 __version__ = versions["version"]
36 __git_revision__ = versions["full-revisionid"]
37 del get_versions, versions
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/distributed/__init__.py b/distributed/__init__.py
--- a/distributed/__init__.py
+++ b/distributed/__init__.py
@@ -3,7 +3,7 @@
from .actor import Actor, ActorFuture
from .core import connect, rpc
from .deploy import LocalCluster, Adaptive, SpecCluster
-from .diagnostics import progress
+from .diagnostics.progressbar import progress
from .client import (
Client,
Executor,
| {"golden_diff": "diff --git a/distributed/__init__.py b/distributed/__init__.py\n--- a/distributed/__init__.py\n+++ b/distributed/__init__.py\n@@ -3,7 +3,7 @@\n from .actor import Actor, ActorFuture\n from .core import connect, rpc\n from .deploy import LocalCluster, Adaptive, SpecCluster\n-from .diagnostics import progress\n+from .diagnostics.progressbar import progress\n from .client import (\n Client,\n Executor,\n", "issue": "dask.distributed.progress no longer callable in 2.3.0?\nWe've used the progress() function from dask.distributed a bunch in the past to display a progress bar in JupyterLab, but it seems to have stopped working after upgrading to Dask 2.3.0:\r\n\r\n```\r\nfrom dask.distributed import Client, progress\r\nimport dask.dataframe as dd\r\n\r\ndf = dd.demo.make_timeseries('2010', '2016',\r\n {'value': float, 'name': str, 'id': int},\r\n freq='10s', partition_freq='7d', seed=1)\r\n\r\ndf = df.persist()\r\nprogress(df)\r\n```\r\n\r\nExecuting this in a single cell in JupyterLab (with an existing Dask cluster already running) results in:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-1-16af814d7204> in <module>\r\n 7 \r\n 8 df = df.persist()\r\n----> 9 progress(df)\r\n\r\nTypeError: 'module' object is not callable\r\n```\r\n\r\nLet me know if I can provide any more info. Thanks!\n", "before_files": [{"content": "from . import config\nfrom dask.config import config\nfrom .actor import Actor, ActorFuture\nfrom .core import connect, rpc\nfrom .deploy import LocalCluster, Adaptive, SpecCluster\nfrom .diagnostics import progress\nfrom .client import (\n Client,\n Executor,\n CompatibleExecutor,\n wait,\n as_completed,\n default_client,\n fire_and_forget,\n Future,\n futures_of,\n get_task_stream,\n)\nfrom .lock import Lock\nfrom .nanny import Nanny\nfrom .pubsub import Pub, Sub\nfrom .queues import Queue\nfrom .scheduler import Scheduler\nfrom .threadpoolexecutor import rejoin\nfrom .utils import sync\nfrom .variable import Variable\nfrom .worker import Worker, get_worker, get_client, secede, Reschedule\nfrom .worker_client import local_client, worker_client\n\nfrom tornado.gen import TimeoutError\n\nfrom ._version import get_versions\n\nversions = get_versions()\n__version__ = versions[\"version\"]\n__git_revision__ = versions[\"full-revisionid\"]\ndel get_versions, versions\n", "path": "distributed/__init__.py"}], "after_files": [{"content": "from . import config\nfrom dask.config import config\nfrom .actor import Actor, ActorFuture\nfrom .core import connect, rpc\nfrom .deploy import LocalCluster, Adaptive, SpecCluster\nfrom .diagnostics.progressbar import progress\nfrom .client import (\n Client,\n Executor,\n CompatibleExecutor,\n wait,\n as_completed,\n default_client,\n fire_and_forget,\n Future,\n futures_of,\n get_task_stream,\n)\nfrom .lock import Lock\nfrom .nanny import Nanny\nfrom .pubsub import Pub, Sub\nfrom .queues import Queue\nfrom .scheduler import Scheduler\nfrom .threadpoolexecutor import rejoin\nfrom .utils import sync\nfrom .variable import Variable\nfrom .worker import Worker, get_worker, get_client, secede, Reschedule\nfrom .worker_client import local_client, worker_client\n\nfrom tornado.gen import TimeoutError\n\nfrom ._version import get_versions\n\nversions = get_versions()\n__version__ = versions[\"version\"]\n__git_revision__ = versions[\"full-revisionid\"]\ndel get_versions, versions\n", "path": "distributed/__init__.py"}]} | 802 | 103 |
gh_patches_debug_18781 | rasdani/github-patches | git_diff | ivy-llc__ivy-15979 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
selu
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/nn/functional/activation.py`
Content:
```
1 # local
2 from ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh
3 from ivy.functional.frontends.paddle.tensor.math import (
4 log_softmax as paddle_log_softmax,
5 )
6
7
8 tanh = paddle_tanh
9 log_softmax = paddle_log_softmax
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/nn/functional/activation.py b/ivy/functional/frontends/paddle/nn/functional/activation.py
--- a/ivy/functional/frontends/paddle/nn/functional/activation.py
+++ b/ivy/functional/frontends/paddle/nn/functional/activation.py
@@ -1,9 +1,33 @@
# local
+import ivy
+from ivy.func_wrapper import with_supported_dtypes
+from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
from ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh
from ivy.functional.frontends.paddle.tensor.math import (
log_softmax as paddle_log_softmax,
)
+@with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
+@to_ivy_arrays_and_back
+def selu(
+ x,
+ /,
+ *,
+ alpha=1.6732632423543772848170429916717,
+ scale=1.0507009873554804934193349852946,
+ name=None,
+):
+ if scale <= 1.0:
+ raise ValueError(f"The scale must be greater than 1.0. Received: {scale}.")
+
+ if alpha < 0:
+ raise ValueError(f"The alpha must be no less than zero. Received: {alpha}.")
+
+ ret = ivy.where(x > 0, x, alpha * ivy.expm1(x))
+ arr = scale * ret
+ return ivy.astype(arr, x.dtype)
+
+
tanh = paddle_tanh
log_softmax = paddle_log_softmax
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/nn/functional/activation.py b/ivy/functional/frontends/paddle/nn/functional/activation.py\n--- a/ivy/functional/frontends/paddle/nn/functional/activation.py\n+++ b/ivy/functional/frontends/paddle/nn/functional/activation.py\n@@ -1,9 +1,33 @@\n # local\n+import ivy\n+from ivy.func_wrapper import with_supported_dtypes\n+from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n from ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh\n from ivy.functional.frontends.paddle.tensor.math import (\n log_softmax as paddle_log_softmax,\n )\n \n \n+@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def selu(\n+ x,\n+ /,\n+ *,\n+ alpha=1.6732632423543772848170429916717,\n+ scale=1.0507009873554804934193349852946,\n+ name=None,\n+):\n+ if scale <= 1.0:\n+ raise ValueError(f\"The scale must be greater than 1.0. Received: {scale}.\")\n+\n+ if alpha < 0:\n+ raise ValueError(f\"The alpha must be no less than zero. Received: {alpha}.\")\n+\n+ ret = ivy.where(x > 0, x, alpha * ivy.expm1(x))\n+ arr = scale * ret\n+ return ivy.astype(arr, x.dtype)\n+\n+\n tanh = paddle_tanh\n log_softmax = paddle_log_softmax\n", "issue": "selu\n\n", "before_files": [{"content": "# local\nfrom ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh\nfrom ivy.functional.frontends.paddle.tensor.math import (\n log_softmax as paddle_log_softmax,\n)\n\n\ntanh = paddle_tanh\nlog_softmax = paddle_log_softmax\n", "path": "ivy/functional/frontends/paddle/nn/functional/activation.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh\nfrom ivy.functional.frontends.paddle.tensor.math import (\n log_softmax as paddle_log_softmax,\n)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef selu(\n x,\n /,\n *,\n alpha=1.6732632423543772848170429916717,\n scale=1.0507009873554804934193349852946,\n name=None,\n):\n if scale <= 1.0:\n raise ValueError(f\"The scale must be greater than 1.0. Received: {scale}.\")\n\n if alpha < 0:\n raise ValueError(f\"The alpha must be no less than zero. Received: {alpha}.\")\n\n ret = ivy.where(x > 0, x, alpha * ivy.expm1(x))\n arr = scale * ret\n return ivy.astype(arr, x.dtype)\n\n\ntanh = paddle_tanh\nlog_softmax = paddle_log_softmax\n", "path": "ivy/functional/frontends/paddle/nn/functional/activation.py"}]} | 340 | 420 |
gh_patches_debug_22439 | rasdani/github-patches | git_diff | scrapy__scrapy-5722 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add logging functionality to `memusage` extension
## Summary
To add logging functionality to memusage extension.
## Motivation
Scrapy jobs with `MEMUSAGE_ENABLED: True` and a defined `MEMUSAGE_LIMIT_MB` (all jobs on Scrapy Cloud) can be stopped early for exceeding the RAM limit and finish with the `memusage_exceeded` outcome.
The first thing required to debug a RAM memory leak is to identify the pattern of RAM usage.
Is RAM usage increasing continuously at a high rate throughout the run?
Or does RAM usage spike over the limit in the last few minutes, after hours or even days of stable runtime performance?
Each case requires a different approach to debugging the leak.
It would be much easier to debug this if the value of `self.get_virtual_size()` were added to the log in the `_check_limit` method of the `memusage` extension
https://github.com/scrapy/scrapy/blob/6ded3cf4cd134b615239babe28bb28c3ff524b05/scrapy/extensions/memusage.py#L77-L89
## Describe alternatives you've considered
Applying a `MEMUSAGE_WARNING_MB` setting at ~80-90% of `MEMUSAGE_LIMIT_MB` - the current implementation of the `memusage` extension warns only once, so this does not provide enough data.
Manually subclassing the `memusage` extension with similar changes - like any other option, this requires rescheduling the job, which may not be suitable for jobs with a total runtime of several days or more. For that reason it is preferable that this is implemented in Scrapy itself, with the logging enabled by default.
## Additional context
Similar functionality previously requested here https://github.com/scrapy/scrapy/issues/2173
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/extensions/memusage.py`
Content:
```
1 """
2 MemoryUsage extension
3
4 See documentation in docs/topics/extensions.rst
5 """
6 import sys
7 import socket
8 import logging
9 from pprint import pformat
10 from importlib import import_module
11
12 from twisted.internet import task
13
14 from scrapy import signals
15 from scrapy.exceptions import NotConfigured
16 from scrapy.mail import MailSender
17 from scrapy.utils.engine import get_engine_status
18
19 logger = logging.getLogger(__name__)
20
21
22 class MemoryUsage:
23
24 def __init__(self, crawler):
25 if not crawler.settings.getbool('MEMUSAGE_ENABLED'):
26 raise NotConfigured
27 try:
28 # stdlib's resource module is only available on unix platforms.
29 self.resource = import_module('resource')
30 except ImportError:
31 raise NotConfigured
32
33 self.crawler = crawler
34 self.warned = False
35 self.notify_mails = crawler.settings.getlist('MEMUSAGE_NOTIFY_MAIL')
36 self.limit = crawler.settings.getint('MEMUSAGE_LIMIT_MB') * 1024 * 1024
37 self.warning = crawler.settings.getint('MEMUSAGE_WARNING_MB') * 1024 * 1024
38 self.check_interval = crawler.settings.getfloat('MEMUSAGE_CHECK_INTERVAL_SECONDS')
39 self.mail = MailSender.from_settings(crawler.settings)
40 crawler.signals.connect(self.engine_started, signal=signals.engine_started)
41 crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)
42
43 @classmethod
44 def from_crawler(cls, crawler):
45 return cls(crawler)
46
47 def get_virtual_size(self):
48 size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss
49 if sys.platform != 'darwin':
50 # on macOS ru_maxrss is in bytes, on Linux it is in KB
51 size *= 1024
52 return size
53
54 def engine_started(self):
55 self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())
56 self.tasks = []
57 tsk = task.LoopingCall(self.update)
58 self.tasks.append(tsk)
59 tsk.start(self.check_interval, now=True)
60 if self.limit:
61 tsk = task.LoopingCall(self._check_limit)
62 self.tasks.append(tsk)
63 tsk.start(self.check_interval, now=True)
64 if self.warning:
65 tsk = task.LoopingCall(self._check_warning)
66 self.tasks.append(tsk)
67 tsk.start(self.check_interval, now=True)
68
69 def engine_stopped(self):
70 for tsk in self.tasks:
71 if tsk.running:
72 tsk.stop()
73
74 def update(self):
75 self.crawler.stats.max_value('memusage/max', self.get_virtual_size())
76
77 def _check_limit(self):
78 if self.get_virtual_size() > self.limit:
79 self.crawler.stats.set_value('memusage/limit_reached', 1)
80 mem = self.limit / 1024 / 1024
81 logger.error("Memory usage exceeded %(memusage)dM. Shutting down Scrapy...",
82 {'memusage': mem}, extra={'crawler': self.crawler})
83 if self.notify_mails:
84 subj = (
85 f"{self.crawler.settings['BOT_NAME']} terminated: "
86 f"memory usage exceeded {mem}M at {socket.gethostname()}"
87 )
88 self._send_report(self.notify_mails, subj)
89 self.crawler.stats.set_value('memusage/limit_notified', 1)
90
91 if self.crawler.engine.spider is not None:
92 self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')
93 else:
94 self.crawler.stop()
95
96 def _check_warning(self):
97 if self.warned: # warn only once
98 return
99 if self.get_virtual_size() > self.warning:
100 self.crawler.stats.set_value('memusage/warning_reached', 1)
101 mem = self.warning / 1024 / 1024
102 logger.warning("Memory usage reached %(memusage)dM",
103 {'memusage': mem}, extra={'crawler': self.crawler})
104 if self.notify_mails:
105 subj = (
106 f"{self.crawler.settings['BOT_NAME']} warning: "
107 f"memory usage reached {mem}M at {socket.gethostname()}"
108 )
109 self._send_report(self.notify_mails, subj)
110 self.crawler.stats.set_value('memusage/warning_notified', 1)
111 self.warned = True
112
113 def _send_report(self, rcpts, subject):
114 """send notification mail with some additional useful info"""
115 stats = self.crawler.stats
116 s = f"Memory usage at engine startup : {stats.get_value('memusage/startup')/1024/1024}M\r\n"
117 s += f"Maximum memory usage : {stats.get_value('memusage/max')/1024/1024}M\r\n"
118 s += f"Current memory usage : {self.get_virtual_size()/1024/1024}M\r\n"
119
120 s += "ENGINE STATUS ------------------------------------------------------- \r\n"
121 s += "\r\n"
122 s += pformat(get_engine_status(self.crawler.engine))
123 s += "\r\n"
124 self.mail.send(rcpts, subject, s)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/extensions/memusage.py b/scrapy/extensions/memusage.py
--- a/scrapy/extensions/memusage.py
+++ b/scrapy/extensions/memusage.py
@@ -75,7 +75,8 @@
self.crawler.stats.max_value('memusage/max', self.get_virtual_size())
def _check_limit(self):
- if self.get_virtual_size() > self.limit:
+ current_mem_usage = self.get_virtual_size()
+ if current_mem_usage > self.limit:
self.crawler.stats.set_value('memusage/limit_reached', 1)
mem = self.limit / 1024 / 1024
logger.error("Memory usage exceeded %(memusage)dM. Shutting down Scrapy...",
@@ -92,6 +93,8 @@
self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')
else:
self.crawler.stop()
+ else:
+ logger.info("Current memory usage is %(virtualsize)dM", {'virtualsize': current_mem_usage / 1024 / 1024})
def _check_warning(self):
if self.warned: # warn only once
| {"golden_diff": "diff --git a/scrapy/extensions/memusage.py b/scrapy/extensions/memusage.py\n--- a/scrapy/extensions/memusage.py\n+++ b/scrapy/extensions/memusage.py\n@@ -75,7 +75,8 @@\n self.crawler.stats.max_value('memusage/max', self.get_virtual_size())\n \n def _check_limit(self):\n- if self.get_virtual_size() > self.limit:\n+ current_mem_usage = self.get_virtual_size()\n+ if current_mem_usage > self.limit:\n self.crawler.stats.set_value('memusage/limit_reached', 1)\n mem = self.limit / 1024 / 1024\n logger.error(\"Memory usage exceeded %(memusage)dM. Shutting down Scrapy...\",\n@@ -92,6 +93,8 @@\n self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')\n else:\n self.crawler.stop()\n+ else:\n+ logger.info(\"Current memory usage is %(virtualsize)dM\", {'virtualsize': current_mem_usage / 1024 / 1024})\n \n def _check_warning(self):\n if self.warned: # warn only once\n", "issue": "Add logging functionality to `memusage` extension\n\r\n\r\n## Summary\r\n\r\nTo add logging functionality to memusage extension.\r\n\r\n## Motivation\r\n\r\nScrapy jobs with `MEMUSAGE_ENABLED : True` and defined `MEMUSAGE_LIMIT_MB` (all jobs on scrapy cloud) can be stopped early due to overuse of RAM memory and receive `memusage_exceeded` outcome.\r\n\r\nFirst thing required to debug RAM memory leaks - is to identify.. pattern of RAM memory usage.\r\nIs RAM usage continuously increased at higher rates during runtime?\r\nor Is RAM usage rapidly increased over limit in last several minutes after hours or even days of stable runtime performance?\r\nEach reason require different approaches to debug RAM memory leaks.\r\n\r\nIt will be much easier to debug this if value of `self.get_virtual_size()` will be added to log in `_check_limit` method of `memusage` extension\r\nhttps://github.com/scrapy/scrapy/blob/6ded3cf4cd134b615239babe28bb28c3ff524b05/scrapy/extensions/memusage.py#L77-L89\r\n\r\n## Describe alternatives you've considered\r\n\r\nApplying `MEMUSAGE_WARNING_MB` setting to ~80-90% of `MEMUSAGE_LIMIT_MB` - current implementation of `memusage` extension warns only 1 time so it is not enough data for this.\r\n\r\nManually subclass `memusage` extension with similar changes - as well as any other option it will require to reschedule job. It may be not siutable for jobs with several days(and more) total runtime. 
So from this side it is preferable that it will be applied in scrapy itself and with enabled this loggin by default.\r\n \r\n## Additional context\r\n\r\nSimilar functionality previously requested here https://github.com/scrapy/scrapy/issues/2173\r\n\n", "before_files": [{"content": "\"\"\"\nMemoryUsage extension\n\nSee documentation in docs/topics/extensions.rst\n\"\"\"\nimport sys\nimport socket\nimport logging\nfrom pprint import pformat\nfrom importlib import import_module\n\nfrom twisted.internet import task\n\nfrom scrapy import signals\nfrom scrapy.exceptions import NotConfigured\nfrom scrapy.mail import MailSender\nfrom scrapy.utils.engine import get_engine_status\n\nlogger = logging.getLogger(__name__)\n\n\nclass MemoryUsage:\n\n def __init__(self, crawler):\n if not crawler.settings.getbool('MEMUSAGE_ENABLED'):\n raise NotConfigured\n try:\n # stdlib's resource module is only available on unix platforms.\n self.resource = import_module('resource')\n except ImportError:\n raise NotConfigured\n\n self.crawler = crawler\n self.warned = False\n self.notify_mails = crawler.settings.getlist('MEMUSAGE_NOTIFY_MAIL')\n self.limit = crawler.settings.getint('MEMUSAGE_LIMIT_MB') * 1024 * 1024\n self.warning = crawler.settings.getint('MEMUSAGE_WARNING_MB') * 1024 * 1024\n self.check_interval = crawler.settings.getfloat('MEMUSAGE_CHECK_INTERVAL_SECONDS')\n self.mail = MailSender.from_settings(crawler.settings)\n crawler.signals.connect(self.engine_started, signal=signals.engine_started)\n crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler)\n\n def get_virtual_size(self):\n size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss\n if sys.platform != 'darwin':\n # on macOS ru_maxrss is in bytes, on Linux it is in KB\n size *= 1024\n return size\n\n def engine_started(self):\n self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())\n self.tasks = []\n tsk = task.LoopingCall(self.update)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n if self.limit:\n tsk = task.LoopingCall(self._check_limit)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n if self.warning:\n tsk = task.LoopingCall(self._check_warning)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n\n def engine_stopped(self):\n for tsk in self.tasks:\n if tsk.running:\n tsk.stop()\n\n def update(self):\n self.crawler.stats.max_value('memusage/max', self.get_virtual_size())\n\n def _check_limit(self):\n if self.get_virtual_size() > self.limit:\n self.crawler.stats.set_value('memusage/limit_reached', 1)\n mem = self.limit / 1024 / 1024\n logger.error(\"Memory usage exceeded %(memusage)dM. 
Shutting down Scrapy...\",\n {'memusage': mem}, extra={'crawler': self.crawler})\n if self.notify_mails:\n subj = (\n f\"{self.crawler.settings['BOT_NAME']} terminated: \"\n f\"memory usage exceeded {mem}M at {socket.gethostname()}\"\n )\n self._send_report(self.notify_mails, subj)\n self.crawler.stats.set_value('memusage/limit_notified', 1)\n\n if self.crawler.engine.spider is not None:\n self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')\n else:\n self.crawler.stop()\n\n def _check_warning(self):\n if self.warned: # warn only once\n return\n if self.get_virtual_size() > self.warning:\n self.crawler.stats.set_value('memusage/warning_reached', 1)\n mem = self.warning / 1024 / 1024\n logger.warning(\"Memory usage reached %(memusage)dM\",\n {'memusage': mem}, extra={'crawler': self.crawler})\n if self.notify_mails:\n subj = (\n f\"{self.crawler.settings['BOT_NAME']} warning: \"\n f\"memory usage reached {mem}M at {socket.gethostname()}\"\n )\n self._send_report(self.notify_mails, subj)\n self.crawler.stats.set_value('memusage/warning_notified', 1)\n self.warned = True\n\n def _send_report(self, rcpts, subject):\n \"\"\"send notification mail with some additional useful info\"\"\"\n stats = self.crawler.stats\n s = f\"Memory usage at engine startup : {stats.get_value('memusage/startup')/1024/1024}M\\r\\n\"\n s += f\"Maximum memory usage : {stats.get_value('memusage/max')/1024/1024}M\\r\\n\"\n s += f\"Current memory usage : {self.get_virtual_size()/1024/1024}M\\r\\n\"\n\n s += \"ENGINE STATUS ------------------------------------------------------- \\r\\n\"\n s += \"\\r\\n\"\n s += pformat(get_engine_status(self.crawler.engine))\n s += \"\\r\\n\"\n self.mail.send(rcpts, subject, s)\n", "path": "scrapy/extensions/memusage.py"}], "after_files": [{"content": "\"\"\"\nMemoryUsage extension\n\nSee documentation in docs/topics/extensions.rst\n\"\"\"\nimport sys\nimport socket\nimport logging\nfrom pprint import pformat\nfrom importlib import import_module\n\nfrom twisted.internet import task\n\nfrom scrapy import signals\nfrom scrapy.exceptions import NotConfigured\nfrom scrapy.mail import MailSender\nfrom scrapy.utils.engine import get_engine_status\n\nlogger = logging.getLogger(__name__)\n\n\nclass MemoryUsage:\n\n def __init__(self, crawler):\n if not crawler.settings.getbool('MEMUSAGE_ENABLED'):\n raise NotConfigured\n try:\n # stdlib's resource module is only available on unix platforms.\n self.resource = import_module('resource')\n except ImportError:\n raise NotConfigured\n\n self.crawler = crawler\n self.warned = False\n self.notify_mails = crawler.settings.getlist('MEMUSAGE_NOTIFY_MAIL')\n self.limit = crawler.settings.getint('MEMUSAGE_LIMIT_MB') * 1024 * 1024\n self.warning = crawler.settings.getint('MEMUSAGE_WARNING_MB') * 1024 * 1024\n self.check_interval = crawler.settings.getfloat('MEMUSAGE_CHECK_INTERVAL_SECONDS')\n self.mail = MailSender.from_settings(crawler.settings)\n crawler.signals.connect(self.engine_started, signal=signals.engine_started)\n crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler)\n\n def get_virtual_size(self):\n size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss\n if sys.platform != 'darwin':\n # on macOS ru_maxrss is in bytes, on Linux it is in KB\n size *= 1024\n return size\n\n def engine_started(self):\n self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())\n self.tasks = []\n tsk = 
task.LoopingCall(self.update)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n if self.limit:\n tsk = task.LoopingCall(self._check_limit)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n if self.warning:\n tsk = task.LoopingCall(self._check_warning)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n\n def engine_stopped(self):\n for tsk in self.tasks:\n if tsk.running:\n tsk.stop()\n\n def update(self):\n self.crawler.stats.max_value('memusage/max', self.get_virtual_size())\n\n def _check_limit(self):\n current_mem_usage = self.get_virtual_size()\n if current_mem_usage > self.limit:\n self.crawler.stats.set_value('memusage/limit_reached', 1)\n mem = self.limit / 1024 / 1024\n logger.error(\"Memory usage exceeded %(memusage)dM. Shutting down Scrapy...\",\n {'memusage': mem}, extra={'crawler': self.crawler})\n if self.notify_mails:\n subj = (\n f\"{self.crawler.settings['BOT_NAME']} terminated: \"\n f\"memory usage exceeded {mem}M at {socket.gethostname()}\"\n )\n self._send_report(self.notify_mails, subj)\n self.crawler.stats.set_value('memusage/limit_notified', 1)\n\n if self.crawler.engine.spider is not None:\n self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')\n else:\n self.crawler.stop()\n else:\n logger.info(\"Current memory usage is %(virtualsize)dM\", {'virtualsize': current_mem_usage / 1024 / 1024})\n\n def _check_warning(self):\n if self.warned: # warn only once\n return\n if self.get_virtual_size() > self.warning:\n self.crawler.stats.set_value('memusage/warning_reached', 1)\n mem = self.warning / 1024 / 1024\n logger.warning(\"Memory usage reached %(memusage)dM\",\n {'memusage': mem}, extra={'crawler': self.crawler})\n if self.notify_mails:\n subj = (\n f\"{self.crawler.settings['BOT_NAME']} warning: \"\n f\"memory usage reached {mem}M at {socket.gethostname()}\"\n )\n self._send_report(self.notify_mails, subj)\n self.crawler.stats.set_value('memusage/warning_notified', 1)\n self.warned = True\n\n def _send_report(self, rcpts, subject):\n \"\"\"send notification mail with some additional useful info\"\"\"\n stats = self.crawler.stats\n s = f\"Memory usage at engine startup : {stats.get_value('memusage/startup')/1024/1024}M\\r\\n\"\n s += f\"Maximum memory usage : {stats.get_value('memusage/max')/1024/1024}M\\r\\n\"\n s += f\"Current memory usage : {self.get_virtual_size()/1024/1024}M\\r\\n\"\n\n s += \"ENGINE STATUS ------------------------------------------------------- \\r\\n\"\n s += \"\\r\\n\"\n s += pformat(get_engine_status(self.crawler.engine))\n s += \"\\r\\n\"\n self.mail.send(rcpts, subject, s)\n", "path": "scrapy/extensions/memusage.py"}]} | 2,038 | 266 |
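Editorial aside (not part of the record above): a minimal settings fragment that exercises the patched extension. The setting names come from the file shown in the row; the numeric values are illustrative only.

```python
# Hypothetical Scrapy settings.py fragment for trying out the patched MemoryUsage extension.
MEMUSAGE_ENABLED = True
MEMUSAGE_LIMIT_MB = 2048               # hard limit; crossing it closes the spider
MEMUSAGE_WARNING_MB = 1536             # optional one-time warning threshold
MEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0

# With the patch applied, every check below the limit now also emits an INFO line such as
#   "Current memory usage is 512M"
# so the RAM-usage pattern over the whole run can be reconstructed from the job log.
```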
gh_patches_debug_41505 | rasdani/github-patches | git_diff | great-expectations__great_expectations-3279 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py`
Content:
```
1 from dateutil.parser import parse
2
3 from great_expectations.execution_engine import (
4 PandasExecutionEngine,
5 SqlAlchemyExecutionEngine,
6 )
7 from great_expectations.expectations.metrics.import_manager import sa
8 from great_expectations.expectations.metrics.map_metric_provider import (
9 ColumnPairMapMetricProvider,
10 column_pair_condition_partial,
11 )
12
13
14 class ColumnPairValuesEqual(ColumnPairMapMetricProvider):
15 condition_metric_name = "column_pair_values.equal"
16 condition_domain_keys = (
17 "batch_id",
18 "table",
19 "column_A",
20 "column_B",
21 "row_condition",
22 "condition_parser",
23 "ignore_row_if",
24 )
25 condition_value_keys = ()
26
27 # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>
28 @column_pair_condition_partial(engine=PandasExecutionEngine)
29 def _pandas(cls, column_A, column_B, **kwargs):
30 return column_A == column_B
31
32 @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)
33 def _sqlalchemy(cls, column_A, column_B, **kwargs):
34 return sa.case((column_A == column_B, True), else_=False)
35
```
Path: `great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py`
Content:
```
1 from dateutil.parser import parse
2
3 from great_expectations.execution_engine import (
4 PandasExecutionEngine,
5 SqlAlchemyExecutionEngine,
6 )
7 from great_expectations.expectations.metrics.import_manager import sa
8 from great_expectations.expectations.metrics.map_metric_provider import (
9 ColumnPairMapMetricProvider,
10 column_pair_condition_partial,
11 )
12
13
14 class ColumnPairValuesAGreaterThanB(ColumnPairMapMetricProvider):
15 condition_metric_name = "column_pair_values.a_greater_than_b"
16 condition_domain_keys = (
17 "batch_id",
18 "table",
19 "column_A",
20 "column_B",
21 "row_condition",
22 "condition_parser",
23 "ignore_row_if",
24 )
25 condition_value_keys = (
26 "or_equal",
27 "parse_strings_as_datetimes",
28 "allow_cross_type_comparisons",
29 )
30
31 # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>
32 # noinspection PyPep8Naming
33 @column_pair_condition_partial(engine=PandasExecutionEngine)
34 def _pandas(cls, column_A, column_B, **kwargs):
35 allow_cross_type_comparisons = kwargs.get("allow_cross_type_comparisons")
36 if allow_cross_type_comparisons:
37 raise NotImplementedError
38
39 parse_strings_as_datetimes = kwargs.get("parse_strings_as_datetimes")
40 if parse_strings_as_datetimes:
41 # noinspection PyPep8Naming
42 temp_column_A = column_A.map(parse)
43 # noinspection PyPep8Naming
44 temp_column_B = column_B.map(parse)
45 else:
46 temp_column_A = column_A
47 temp_column_B = column_B
48
49 or_equal = kwargs.get("or_equal")
50 if or_equal:
51 return temp_column_A >= temp_column_B
52 else:
53 return temp_column_A > temp_column_B
54
55 # noinspection PyPep8Naming
56 @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)
57 def _sqlalchemy(cls, column_A, column_B, **kwargs):
58 allow_cross_type_comparisons = kwargs.get("allow_cross_type_comparisons")
59 if allow_cross_type_comparisons:
60 raise NotImplementedError
61
62 parse_strings_as_datetimes = kwargs.get("parse_strings_as_datetimes")
63 if parse_strings_as_datetimes:
64 raise NotImplementedError
65
66 or_equal = kwargs.get("or_equal")
67 if or_equal:
68 return sa.case((column_A >= column_B, True), else_=False)
69 else:
70 return sa.case((column_A > column_B, True), else_=False)
71
```
Path: `great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py`
Content:
```
1 from great_expectations.execution_engine import PandasExecutionEngine
2 from great_expectations.expectations.metrics.map_metric_provider import (
3 MulticolumnMapMetricProvider,
4 multicolumn_condition_partial,
5 )
6
7
8 class SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):
9 condition_metric_name = "select_column_values.unique.within_record"
10 condition_domain_keys = (
11 "batch_id",
12 "table",
13 "column_list",
14 "row_condition",
15 "condition_parser",
16 "ignore_row_if",
17 )
18
19 # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>
20 @multicolumn_condition_partial(engine=PandasExecutionEngine)
21 def _pandas(cls, column_list, **kwargs):
22 num_columns = len(column_list.columns)
23 row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns
24 return row_wise_cond
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py
--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py
+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py
@@ -1,5 +1,3 @@
-from dateutil.parser import parse
-
from great_expectations.execution_engine import (
PandasExecutionEngine,
SqlAlchemyExecutionEngine,
diff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py
--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py
+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py
@@ -38,9 +38,7 @@
parse_strings_as_datetimes = kwargs.get("parse_strings_as_datetimes")
if parse_strings_as_datetimes:
- # noinspection PyPep8Naming
temp_column_A = column_A.map(parse)
- # noinspection PyPep8Naming
temp_column_B = column_B.map(parse)
else:
temp_column_A = column_A
diff --git a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
--- a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
+++ b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
@@ -1,9 +1,17 @@
-from great_expectations.execution_engine import PandasExecutionEngine
+import logging
+
+from great_expectations.execution_engine import (
+ PandasExecutionEngine,
+ SqlAlchemyExecutionEngine,
+)
+from great_expectations.expectations.metrics.import_manager import sa
from great_expectations.expectations.metrics.map_metric_provider import (
MulticolumnMapMetricProvider,
multicolumn_condition_partial,
)
+logger = logging.getLogger(__name__)
+
class SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):
condition_metric_name = "select_column_values.unique.within_record"
@@ -16,9 +24,37 @@
"ignore_row_if",
)
- # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>
+ # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>
@multicolumn_condition_partial(engine=PandasExecutionEngine)
def _pandas(cls, column_list, **kwargs):
num_columns = len(column_list.columns)
row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns
return row_wise_cond
+
+ @multicolumn_condition_partial(engine=SqlAlchemyExecutionEngine)
+ def _sqlalchemy(cls, column_list, **kwargs):
+ """
+ The present approach relies on an inefficient query condition construction implementation, whose computational
+ cost is O(num_columns^2). However, until a more efficient implementation compatible with SQLAlchemy is
+ available, this is the only feasible mechanism under the current architecture, where map metric providers must
+ return a condition. Nevertheless, SQL query length limit is 1GB (sufficient for most practical scenarios).
+ """
+ num_columns = len(column_list)
+
+ # An arbitrary "num_columns" value used for issuing an explanatory message as a warning.
+ if num_columns > 100:
+ logger.warning(
+ f"""Batch data with {num_columns} columns is detected. Computing the "{cls.condition_metric_name}" \
+metric for wide tables using SQLAlchemy leads to long WHERE clauses for the underlying database engine to process.
+"""
+ )
+
+ condition = sa.or_()
+ for idx_src in range(num_columns - 1):
+ for idx_dest in range(idx_src + 1, num_columns):
+ condition = sa.or_(
+ condition, (column_list[idx_src] == column_list[idx_dest])
+ )
+
+ condition = sa.not_(condition)
+ return sa.case((condition, True), else_=False)
| {"golden_diff": "diff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py\n--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py\n+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py\n@@ -1,5 +1,3 @@\n-from dateutil.parser import parse\n-\n from great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\ndiff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py\n--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py\n+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py\n@@ -38,9 +38,7 @@\n \n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n- # noinspection PyPep8Naming\n temp_column_A = column_A.map(parse)\n- # noinspection PyPep8Naming\n temp_column_B = column_B.map(parse)\n else:\n temp_column_A = column_A\ndiff --git a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n--- a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n+++ b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n@@ -1,9 +1,17 @@\n-from great_expectations.execution_engine import PandasExecutionEngine\n+import logging\n+\n+from great_expectations.execution_engine import (\n+ PandasExecutionEngine,\n+ SqlAlchemyExecutionEngine,\n+)\n+from great_expectations.expectations.metrics.import_manager import sa\n from great_expectations.expectations.metrics.map_metric_provider import (\n MulticolumnMapMetricProvider,\n multicolumn_condition_partial,\n )\n \n+logger = logging.getLogger(__name__)\n+\n \n class SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):\n condition_metric_name = \"select_column_values.unique.within_record\"\n@@ -16,9 +24,37 @@\n \"ignore_row_if\",\n )\n \n- # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>\n+ # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n @multicolumn_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_list, **kwargs):\n num_columns = len(column_list.columns)\n row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns\n return row_wise_cond\n+\n+ @multicolumn_condition_partial(engine=SqlAlchemyExecutionEngine)\n+ def _sqlalchemy(cls, column_list, **kwargs):\n+ \"\"\"\n+ The present approach relies on an inefficient query condition construction implementation, whose computational\n+ cost is O(num_columns^2). However, until a more efficient implementation compatible with SQLAlchemy is\n+ available, this is the only feasible mechanism under the current architecture, where map metric providers must\n+ return a condition. 
Nevertheless, SQL query length limit is 1GB (sufficient for most practical scenarios).\n+ \"\"\"\n+ num_columns = len(column_list)\n+\n+ # An arbitrary \"num_columns\" value used for issuing an explanatory message as a warning.\n+ if num_columns > 100:\n+ logger.warning(\n+ f\"\"\"Batch data with {num_columns} columns is detected. Computing the \"{cls.condition_metric_name}\" \\\n+metric for wide tables using SQLAlchemy leads to long WHERE clauses for the underlying database engine to process.\n+\"\"\"\n+ )\n+\n+ condition = sa.or_()\n+ for idx_src in range(num_columns - 1):\n+ for idx_dest in range(idx_src + 1, num_columns):\n+ condition = sa.or_(\n+ condition, (column_list[idx_src] == column_list[idx_dest])\n+ )\n+\n+ condition = sa.not_(condition)\n+ return sa.case((condition, True), else_=False)\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from dateutil.parser import parse\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n ColumnPairMapMetricProvider,\n column_pair_condition_partial,\n)\n\n\nclass ColumnPairValuesEqual(ColumnPairMapMetricProvider):\n condition_metric_name = \"column_pair_values.equal\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_A\",\n \"column_B\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n condition_value_keys = ()\n\n # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n @column_pair_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_A, column_B, **kwargs):\n return column_A == column_B\n\n @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_A, column_B, **kwargs):\n return sa.case((column_A == column_B, True), else_=False)\n", "path": "great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py"}, {"content": "from dateutil.parser import parse\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n ColumnPairMapMetricProvider,\n column_pair_condition_partial,\n)\n\n\nclass ColumnPairValuesAGreaterThanB(ColumnPairMapMetricProvider):\n condition_metric_name = \"column_pair_values.a_greater_than_b\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_A\",\n \"column_B\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n condition_value_keys = (\n \"or_equal\",\n \"parse_strings_as_datetimes\",\n \"allow_cross_type_comparisons\",\n )\n\n # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n # noinspection PyPep8Naming\n @column_pair_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_A, column_B, **kwargs):\n allow_cross_type_comparisons = kwargs.get(\"allow_cross_type_comparisons\")\n if allow_cross_type_comparisons:\n raise NotImplementedError\n\n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n # noinspection PyPep8Naming\n temp_column_A = 
column_A.map(parse)\n # noinspection PyPep8Naming\n temp_column_B = column_B.map(parse)\n else:\n temp_column_A = column_A\n temp_column_B = column_B\n\n or_equal = kwargs.get(\"or_equal\")\n if or_equal:\n return temp_column_A >= temp_column_B\n else:\n return temp_column_A > temp_column_B\n\n # noinspection PyPep8Naming\n @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_A, column_B, **kwargs):\n allow_cross_type_comparisons = kwargs.get(\"allow_cross_type_comparisons\")\n if allow_cross_type_comparisons:\n raise NotImplementedError\n\n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n raise NotImplementedError\n\n or_equal = kwargs.get(\"or_equal\")\n if or_equal:\n return sa.case((column_A >= column_B, True), else_=False)\n else:\n return sa.case((column_A > column_B, True), else_=False)\n", "path": "great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py"}, {"content": "from great_expectations.execution_engine import PandasExecutionEngine\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n MulticolumnMapMetricProvider,\n multicolumn_condition_partial,\n)\n\n\nclass SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):\n condition_metric_name = \"select_column_values.unique.within_record\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_list\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n\n # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>\n @multicolumn_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_list, **kwargs):\n num_columns = len(column_list.columns)\n row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns\n return row_wise_cond\n", "path": "great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py"}], "after_files": [{"content": "from great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n ColumnPairMapMetricProvider,\n column_pair_condition_partial,\n)\n\n\nclass ColumnPairValuesEqual(ColumnPairMapMetricProvider):\n condition_metric_name = \"column_pair_values.equal\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_A\",\n \"column_B\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n condition_value_keys = ()\n\n # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n @column_pair_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_A, column_B, **kwargs):\n return column_A == column_B\n\n @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_A, column_B, **kwargs):\n return sa.case((column_A == column_B, True), else_=False)\n", "path": "great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py"}, {"content": "from dateutil.parser import parse\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n 
ColumnPairMapMetricProvider,\n column_pair_condition_partial,\n)\n\n\nclass ColumnPairValuesAGreaterThanB(ColumnPairMapMetricProvider):\n condition_metric_name = \"column_pair_values.a_greater_than_b\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_A\",\n \"column_B\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n condition_value_keys = (\n \"or_equal\",\n \"parse_strings_as_datetimes\",\n \"allow_cross_type_comparisons\",\n )\n\n # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n # noinspection PyPep8Naming\n @column_pair_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_A, column_B, **kwargs):\n allow_cross_type_comparisons = kwargs.get(\"allow_cross_type_comparisons\")\n if allow_cross_type_comparisons:\n raise NotImplementedError\n\n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n temp_column_A = column_A.map(parse)\n temp_column_B = column_B.map(parse)\n else:\n temp_column_A = column_A\n temp_column_B = column_B\n\n or_equal = kwargs.get(\"or_equal\")\n if or_equal:\n return temp_column_A >= temp_column_B\n else:\n return temp_column_A > temp_column_B\n\n # noinspection PyPep8Naming\n @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_A, column_B, **kwargs):\n allow_cross_type_comparisons = kwargs.get(\"allow_cross_type_comparisons\")\n if allow_cross_type_comparisons:\n raise NotImplementedError\n\n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n raise NotImplementedError\n\n or_equal = kwargs.get(\"or_equal\")\n if or_equal:\n return sa.case((column_A >= column_B, True), else_=False)\n else:\n return sa.case((column_A > column_B, True), else_=False)\n", "path": "great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py"}, {"content": "import logging\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n MulticolumnMapMetricProvider,\n multicolumn_condition_partial,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):\n condition_metric_name = \"select_column_values.unique.within_record\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_list\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n\n # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n @multicolumn_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_list, **kwargs):\n num_columns = len(column_list.columns)\n row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns\n return row_wise_cond\n\n @multicolumn_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_list, **kwargs):\n \"\"\"\n The present approach relies on an inefficient query condition construction implementation, whose computational\n cost is O(num_columns^2). However, until a more efficient implementation compatible with SQLAlchemy is\n available, this is the only feasible mechanism under the current architecture, where map metric providers must\n return a condition. 
Nevertheless, SQL query length limit is 1GB (sufficient for most practical scenarios).\n \"\"\"\n num_columns = len(column_list)\n\n # An arbitrary \"num_columns\" value used for issuing an explanatory message as a warning.\n if num_columns > 100:\n logger.warning(\n f\"\"\"Batch data with {num_columns} columns is detected. Computing the \"{cls.condition_metric_name}\" \\\nmetric for wide tables using SQLAlchemy leads to long WHERE clauses for the underlying database engine to process.\n\"\"\"\n )\n\n condition = sa.or_()\n for idx_src in range(num_columns - 1):\n for idx_dest in range(idx_src + 1, num_columns):\n condition = sa.or_(\n condition, (column_list[idx_src] == column_list[idx_dest])\n )\n\n condition = sa.not_(condition)\n return sa.case((condition, True), else_=False)\n", "path": "great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py"}]} | 1,588 | 974 |
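For reference, the row-wise condition that both engines in the patch above compute can be checked in isolation. The sketch below is a plain-pandas illustration with a made-up three-column frame, not Great Expectations' metric machinery; column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical batch with three columns; the last row repeats one value across columns.
df = pd.DataFrame({
    "a": [1, 2, 3],
    "b": [4, 5, 3],
    "c": [7, 8, 3],
})

num_columns = len(df.columns)
# A row passes when every column holds a distinct value, i.e. the number of unique
# values in the row (NaNs counted) is at least the number of columns.
row_wise_cond = df.nunique(dropna=False, axis=1) >= num_columns
print(row_wise_cond.tolist())  # [True, True, False]
```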
gh_patches_debug_27085 | rasdani/github-patches | git_diff | fossasia__open-event-server-2825 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Event does not show up on "manage events" page when it is a copy
When the user copies an event and edits it, it does not show up on the event management page.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/helpers/wizard/clone.py`
Content:
```
1 from sqlalchemy.orm import make_transient
2
3 from app.helpers.data import save_to_db
4 from app.helpers.data_getter import DataGetter
5 from app.models import db
6
7
8 def clone_row(row, event_id=None):
9 db.session.expunge(row)
10 make_transient(row)
11 row.id = None
12 if event_id:
13 row.event_id = event_id
14 save_to_db(row)
15 db.session.flush()
16 return row
17
18
19 def create_event_copy(event_id):
20 old_event = DataGetter.get_event(event_id)
21 event = clone_row(old_event)
22 event.name = "Copy of " + event.name
23 event.state = "Draft"
24 save_to_db(event)
25
26 sponsors_old = DataGetter.get_sponsors(event_id).all()
27 tracks_old = DataGetter.get_tracks(event_id).all()
28 microlocations_old = DataGetter.get_microlocations(event_id).all()
29 call_for_paper_old = DataGetter.get_call_for_papers(event_id).first()
30 social_links = DataGetter.get_social_links_by_event_id(event_id).all()
31 custom_forms = DataGetter.get_custom_form_elements(event_id)
32
33 for social_link in social_links:
34 clone_row(social_link, event.id)
35
36 for sponsor in sponsors_old:
37 clone_row(sponsor, event.id)
38
39 for track in tracks_old:
40 clone_row(track, event.id)
41
42 for microlocation in microlocations_old:
43 clone_row(microlocation, event.id)
44
45 if call_for_paper_old:
46 clone_row(call_for_paper_old, event.id)
47
48 if custom_forms:
49 clone_row(custom_forms, event.id)
50
51 return event
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/helpers/wizard/clone.py b/app/helpers/wizard/clone.py
--- a/app/helpers/wizard/clone.py
+++ b/app/helpers/wizard/clone.py
@@ -1,8 +1,13 @@
from sqlalchemy.orm import make_transient
+from flask.ext import login
from app.helpers.data import save_to_db
from app.helpers.data_getter import DataGetter
from app.models import db
+from app.models.users_events_roles import UsersEventsRoles
+from app.models.role import Role
+from app.models.email_notifications import EmailNotification
+from app.models.user import ORGANIZER
def clone_row(row, event_id=None):
@@ -23,6 +28,17 @@
event.state = "Draft"
save_to_db(event)
+ role = Role.query.filter_by(name=ORGANIZER).first()
+ uer = UsersEventsRoles(login.current_user, event, role)
+ if save_to_db(uer, "Event saved"):
+ new_email_notification_setting = EmailNotification(next_event=1,
+ new_paper=1,
+ session_schedule=1,
+ session_accept_reject=1,
+ user_id=login.current_user.id,
+ event_id=event.id)
+ save_to_db(new_email_notification_setting, "EmailSetting Saved")
+
sponsors_old = DataGetter.get_sponsors(event_id).all()
tracks_old = DataGetter.get_tracks(event_id).all()
microlocations_old = DataGetter.get_microlocations(event_id).all()
| {"golden_diff": "diff --git a/app/helpers/wizard/clone.py b/app/helpers/wizard/clone.py\n--- a/app/helpers/wizard/clone.py\n+++ b/app/helpers/wizard/clone.py\n@@ -1,8 +1,13 @@\n from sqlalchemy.orm import make_transient\n+from flask.ext import login\n \n from app.helpers.data import save_to_db\n from app.helpers.data_getter import DataGetter\n from app.models import db\n+from app.models.users_events_roles import UsersEventsRoles\n+from app.models.role import Role\n+from app.models.email_notifications import EmailNotification\n+from app.models.user import ORGANIZER\n \n \n def clone_row(row, event_id=None):\n@@ -23,6 +28,17 @@\n event.state = \"Draft\"\n save_to_db(event)\n \n+ role = Role.query.filter_by(name=ORGANIZER).first()\n+ uer = UsersEventsRoles(login.current_user, event, role)\n+ if save_to_db(uer, \"Event saved\"):\n+ new_email_notification_setting = EmailNotification(next_event=1,\n+ new_paper=1,\n+ session_schedule=1,\n+ session_accept_reject=1,\n+ user_id=login.current_user.id,\n+ event_id=event.id)\n+ save_to_db(new_email_notification_setting, \"EmailSetting Saved\")\n+\n sponsors_old = DataGetter.get_sponsors(event_id).all()\n tracks_old = DataGetter.get_tracks(event_id).all()\n microlocations_old = DataGetter.get_microlocations(event_id).all()\n", "issue": "Event does not show up on \"manage events\" page when it is a copy\nWhen the user copies an event and edits it, it does not show up on the event management page.\r\n\r\n\r\n\n", "before_files": [{"content": "from sqlalchemy.orm import make_transient\n\nfrom app.helpers.data import save_to_db\nfrom app.helpers.data_getter import DataGetter\nfrom app.models import db\n\n\ndef clone_row(row, event_id=None):\n db.session.expunge(row)\n make_transient(row)\n row.id = None\n if event_id:\n row.event_id = event_id\n save_to_db(row)\n db.session.flush()\n return row\n\n\ndef create_event_copy(event_id):\n old_event = DataGetter.get_event(event_id)\n event = clone_row(old_event)\n event.name = \"Copy of \" + event.name\n event.state = \"Draft\"\n save_to_db(event)\n\n sponsors_old = DataGetter.get_sponsors(event_id).all()\n tracks_old = DataGetter.get_tracks(event_id).all()\n microlocations_old = DataGetter.get_microlocations(event_id).all()\n call_for_paper_old = DataGetter.get_call_for_papers(event_id).first()\n social_links = DataGetter.get_social_links_by_event_id(event_id).all()\n custom_forms = DataGetter.get_custom_form_elements(event_id)\n\n for social_link in social_links:\n clone_row(social_link, event.id)\n\n for sponsor in sponsors_old:\n clone_row(sponsor, event.id)\n\n for track in tracks_old:\n clone_row(track, event.id)\n\n for microlocation in microlocations_old:\n clone_row(microlocation, event.id)\n\n if call_for_paper_old:\n clone_row(call_for_paper_old, event.id)\n\n if custom_forms:\n clone_row(custom_forms, event.id)\n\n return event\n", "path": "app/helpers/wizard/clone.py"}], "after_files": [{"content": "from sqlalchemy.orm import make_transient\nfrom flask.ext import login\n\nfrom app.helpers.data import save_to_db\nfrom app.helpers.data_getter import DataGetter\nfrom app.models import db\nfrom app.models.users_events_roles import UsersEventsRoles\nfrom app.models.role import Role\nfrom app.models.email_notifications import EmailNotification\nfrom app.models.user import ORGANIZER\n\n\ndef clone_row(row, event_id=None):\n db.session.expunge(row)\n make_transient(row)\n row.id = None\n if event_id:\n row.event_id = event_id\n save_to_db(row)\n db.session.flush()\n return row\n\n\ndef 
create_event_copy(event_id):\n old_event = DataGetter.get_event(event_id)\n event = clone_row(old_event)\n event.name = \"Copy of \" + event.name\n event.state = \"Draft\"\n save_to_db(event)\n\n role = Role.query.filter_by(name=ORGANIZER).first()\n uer = UsersEventsRoles(login.current_user, event, role)\n if save_to_db(uer, \"Event saved\"):\n new_email_notification_setting = EmailNotification(next_event=1,\n new_paper=1,\n session_schedule=1,\n session_accept_reject=1,\n user_id=login.current_user.id,\n event_id=event.id)\n save_to_db(new_email_notification_setting, \"EmailSetting Saved\")\n\n sponsors_old = DataGetter.get_sponsors(event_id).all()\n tracks_old = DataGetter.get_tracks(event_id).all()\n microlocations_old = DataGetter.get_microlocations(event_id).all()\n call_for_paper_old = DataGetter.get_call_for_papers(event_id).first()\n social_links = DataGetter.get_social_links_by_event_id(event_id).all()\n custom_forms = DataGetter.get_custom_form_elements(event_id)\n\n for social_link in social_links:\n clone_row(social_link, event.id)\n\n for sponsor in sponsors_old:\n clone_row(sponsor, event.id)\n\n for track in tracks_old:\n clone_row(track, event.id)\n\n for microlocation in microlocations_old:\n clone_row(microlocation, event.id)\n\n if call_for_paper_old:\n clone_row(call_for_paper_old, event.id)\n\n if custom_forms:\n clone_row(custom_forms, event.id)\n\n return event\n", "path": "app/helpers/wizard/clone.py"}]} | 825 | 324 |
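A simplified sketch of why the missing role link hides the cloned event: a "manage events" listing typically joins events against `UsersEventsRoles` for the current user, so a copy created without such a row never shows up. The dictionaries below are hypothetical stand-ins for the real SQLAlchemy models, not fossasia's actual query.

```python
# Hypothetical, simplified stand-ins for Event and UsersEventsRoles rows.
events = [
    {"id": 1, "name": "PyCon"},
    {"id": 2, "name": "Copy of PyCon"},  # the clone
]
users_events_roles = [
    {"user_id": 7, "event_id": 1, "role": "organizer"},
    # Before the fix there is no row for event_id=2, so the copy is invisible below.
]

def events_managed_by(user_id):
    linked = {uer["event_id"] for uer in users_events_roles if uer["user_id"] == user_id}
    return [e for e in events if e["id"] in linked]

print([e["name"] for e in events_managed_by(7)])  # ['PyCon'] -- the copy is missing
```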
gh_patches_debug_31040 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-4372 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
distutils not included with latest virtualenv (16.4.0)
This was already reported in #4031. The issue was closed without a fix so I'm creating this one.
**With virtualenv 16.4.0, pyinstaller reports:**
```
3583 INFO: distutils: retargeting to non-venv dir '/usr/lib64/python3.6/distutils/__init__.py'
```
and then during "Loading module hook" sequence, the `hook-distutils.py` is missing and distutils modules are not included into the final executable binary.
When executing the binary the error is:
```
ModuleNotFoundError: No module named 'distutils'
[10373] Failed to execute script <name here>
```
**With virtualenv 16.1.0, pyinstaller reports:**
```
3157 INFO: Processing pre-find module path hook distutils
5053 INFO: Loading module hook "hook-distutils.py"...
```
and distutils modules are included into the final executable binary.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/pre_find_module_path/hook-distutils.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2019, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9
10 """
11 `distutils`-specific pre-find module path hook.
12
13 When run from within a venv (virtual environment), this hook changes the
14 `__path__` of the `distutils` package to that of the system-wide rather than
15 venv-specific `distutils` package. While the former is suitable for freezing,
16 the latter is intended for use _only_ from within venvs.
17 """
18
19
20 import distutils
21 import os
22
23 from PyInstaller.utils.hooks import logger
24
25
26 def pre_find_module_path(api):
27 # Absolute path of the system-wide "distutils" package when run from within
28 # a venv or None otherwise.
29 distutils_dir = getattr(distutils, 'distutils_path', None)
30 if distutils_dir is not None:
31 # Find this package in its parent directory.
32 api.search_dirs = [os.path.dirname(distutils_dir)]
33 logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py
--- a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py
+++ b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py
@@ -1,11 +1,11 @@
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
# Copyright (c) 2005-2019, PyInstaller Development Team.
#
# Distributed under the terms of the GNU General Public License with exception
# for distributing bootloader.
#
# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
"""
`distutils`-specific pre-find module path hook.
@@ -26,8 +26,16 @@
def pre_find_module_path(api):
# Absolute path of the system-wide "distutils" package when run from within
# a venv or None otherwise.
- distutils_dir = getattr(distutils, 'distutils_path', None)
- if distutils_dir is not None:
+
+ # opcode is not a virtualenv module, so we can use it to find the stdlib.
+ # Technique taken from virtualenv's "distutils" package detection at
+ # https://github.com/pypa/virtualenv/blob/16.3.0/virtualenv_embedded/distutils-init.py#L5
+ import opcode
+
+ system_module_path = os.path.normpath(os.path.dirname(opcode.__file__))
+ loaded_module_path = os.path.normpath(os.path.dirname(distutils.__file__))
+ if system_module_path != loaded_module_path:
# Find this package in its parent directory.
- api.search_dirs = [os.path.dirname(distutils_dir)]
- logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)
+ api.search_dirs = [system_module_path]
+ logger.info('distutils: retargeting to non-venv dir %r',
+ system_module_path)
| {"golden_diff": "diff --git a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py\n--- a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py\n+++ b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py\n@@ -1,11 +1,11 @@\n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n # Copyright (c) 2005-2019, PyInstaller Development Team.\n #\n # Distributed under the terms of the GNU General Public License with exception\n # for distributing bootloader.\n #\n # The full license is in the file COPYING.txt, distributed with this software.\n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n \n \"\"\"\n `distutils`-specific pre-find module path hook.\n@@ -26,8 +26,16 @@\n def pre_find_module_path(api):\n # Absolute path of the system-wide \"distutils\" package when run from within\n # a venv or None otherwise.\n- distutils_dir = getattr(distutils, 'distutils_path', None)\n- if distutils_dir is not None:\n+\n+ # opcode is not a virtualenv module, so we can use it to find the stdlib.\n+ # Technique taken from virtualenv's \"distutils\" package detection at\n+ # https://github.com/pypa/virtualenv/blob/16.3.0/virtualenv_embedded/distutils-init.py#L5\n+ import opcode\n+\n+ system_module_path = os.path.normpath(os.path.dirname(opcode.__file__))\n+ loaded_module_path = os.path.normpath(os.path.dirname(distutils.__file__))\n+ if system_module_path != loaded_module_path:\n # Find this package in its parent directory.\n- api.search_dirs = [os.path.dirname(distutils_dir)]\n- logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)\n+ api.search_dirs = [system_module_path]\n+ logger.info('distutils: retargeting to non-venv dir %r',\n+ system_module_path)\n", "issue": "distutils not included with latest virtualenv (16.4.0)\nThis was already reported in #4031. 
The issue was closed without a fix so I'm creating this one.\r\n\r\n**With virtualenv 16.4.0, pyinstaller reports :**\r\n\r\n```\r\n3583 INFO: distutils: retargeting to non-venv dir '/usr/lib64/python3.6/distutils/__init__.py'\r\n```\r\nand then during \"Loading module hook\" sequence, the `hook-distutils.py` is missing and distutils modules are not included into the final executable binary.\r\n\r\nWhen executing the binary the error is:\r\n\r\n```\r\nModuleNotFoundError: No module named 'distutils'\r\n[10373] Failed to execute script <name here>\r\n```\r\n\r\n**With virtualenv 16.1.0, pyinstaller reports :**\r\n\r\n```\r\n3157 INFO: Processing pre-find module path hook distutils\r\n5053 INFO: Loading module hook \"hook-distutils.py\"...\r\n```\r\n\r\nand distutils modules are included into the final executable binary.\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n\"\"\"\n`distutils`-specific pre-find module path hook.\n\nWhen run from within a venv (virtual environment), this hook changes the\n`__path__` of the `distutils` package to that of the system-wide rather than\nvenv-specific `distutils` package. While the former is suitable for freezing,\nthe latter is intended for use _only_ from within venvs.\n\"\"\"\n\n\nimport distutils\nimport os\n\nfrom PyInstaller.utils.hooks import logger\n\n\ndef pre_find_module_path(api):\n # Absolute path of the system-wide \"distutils\" package when run from within\n # a venv or None otherwise.\n distutils_dir = getattr(distutils, 'distutils_path', None)\n if distutils_dir is not None:\n # Find this package in its parent directory.\n api.search_dirs = [os.path.dirname(distutils_dir)]\n logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)\n", "path": "PyInstaller/hooks/pre_find_module_path/hook-distutils.py"}], "after_files": [{"content": "# -----------------------------------------------------------------------------\n# Copyright (c) 2005-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n# -----------------------------------------------------------------------------\n\n\"\"\"\n`distutils`-specific pre-find module path hook.\n\nWhen run from within a venv (virtual environment), this hook changes the\n`__path__` of the `distutils` package to that of the system-wide rather than\nvenv-specific `distutils` package. 
While the former is suitable for freezing,\nthe latter is intended for use _only_ from within venvs.\n\"\"\"\n\n\nimport distutils\nimport os\n\nfrom PyInstaller.utils.hooks import logger\n\n\ndef pre_find_module_path(api):\n # Absolute path of the system-wide \"distutils\" package when run from within\n # a venv or None otherwise.\n\n # opcode is not a virtualenv module, so we can use it to find the stdlib.\n # Technique taken from virtualenv's \"distutils\" package detection at\n # https://github.com/pypa/virtualenv/blob/16.3.0/virtualenv_embedded/distutils-init.py#L5\n import opcode\n\n system_module_path = os.path.normpath(os.path.dirname(opcode.__file__))\n loaded_module_path = os.path.normpath(os.path.dirname(distutils.__file__))\n if system_module_path != loaded_module_path:\n # Find this package in its parent directory.\n api.search_dirs = [system_module_path]\n logger.info('distutils: retargeting to non-venv dir %r',\n system_module_path)\n", "path": "PyInstaller/hooks/pre_find_module_path/hook-distutils.py"}]} | 835 | 444 |
gh_patches_debug_14392 | rasdani/github-patches | git_diff | pre-commit__pre-commit-216 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pre-commit potentially uses the wrong `virtualenv` when building environments
It should use `sys.executable, '-m', 'virtualenv'` instead of `'virtualenv'`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/python.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6
7 import virtualenv
8
9 from pre_commit.languages import helpers
10 from pre_commit.util import clean_path_on_failure
11
12
13 ENVIRONMENT_DIR = 'py_env'
14
15
16 class PythonEnv(helpers.Environment):
17 @property
18 def env_prefix(self):
19 return ". '{{prefix}}{0}activate' &&".format(
20 virtualenv.path_locations(
21 ENVIRONMENT_DIR,
22 )[-1].rstrip(os.sep) + os.sep,
23 'activate',
24 )
25
26
27 @contextlib.contextmanager
28 def in_env(repo_cmd_runner):
29 yield PythonEnv(repo_cmd_runner)
30
31
32 def norm_version(version):
33 if os.name == 'nt': # pragma: no cover (windows)
34 if not distutils.spawn.find_executable(version):
35 # The default place for python on windows is:
36 # C:\PythonXX\python.exe
37 version = r'C:\{0}\python.exe'.format(version.replace('.', ''))
38 return version
39
40
41 def install_environment(repo_cmd_runner, version='default'):
42 assert repo_cmd_runner.exists('setup.py')
43
44 # Install a virtualenv
45 with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):
46 venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]
47 if version != 'default':
48 venv_cmd.extend(['-p', norm_version(version)])
49 repo_cmd_runner.run(venv_cmd)
50 with in_env(repo_cmd_runner) as env:
51 env.run("cd '{prefix}' && pip install .")
52
53
54 def run_hook(repo_cmd_runner, hook, file_args):
55 with in_env(repo_cmd_runner) as env:
56 return helpers.run_hook(env, hook, file_args)
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -3,6 +3,7 @@
import contextlib
import distutils.spawn
import os
+import sys
import virtualenv
@@ -43,7 +44,10 @@
# Install a virtualenv
with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):
- venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]
+ venv_cmd = [
+ sys.executable, '-m', 'virtualenv',
+ '{{prefix}}{0}'.format(ENVIRONMENT_DIR)
+ ]
if version != 'default':
venv_cmd.extend(['-p', norm_version(version)])
repo_cmd_runner.run(venv_cmd)
| {"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -3,6 +3,7 @@\n import contextlib\n import distutils.spawn\n import os\n+import sys\n \n import virtualenv\n \n@@ -43,7 +44,10 @@\n \n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):\n- venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]\n+ venv_cmd = [\n+ sys.executable, '-m', 'virtualenv',\n+ '{{prefix}}{0}'.format(ENVIRONMENT_DIR)\n+ ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n", "issue": "pre-commit potentially uses the wrong `virtualenv` when building environments\nIt should use `sys.executable, '-m', 'virtualenv'` instead of `'virtualenv'`\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\n\nimport virtualenv\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". '{{prefix}}{0}activate' &&\".format(\n virtualenv.path_locations(\n ENVIRONMENT_DIR,\n )[-1].rstrip(os.sep) + os.sep,\n 'activate',\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner):\n yield PythonEnv(repo_cmd_runner)\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n if not distutils.spawn.find_executable(version):\n # The default place for python on windows is:\n # C:\\PythonXX\\python.exe\n version = r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n return version\n\n\ndef install_environment(repo_cmd_runner, version='default'):\n assert repo_cmd_runner.exists('setup.py')\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):\n venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner) as env:\n env.run(\"cd '{prefix}' && pip install .\")\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nimport virtualenv\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". 
'{{prefix}}{0}activate' &&\".format(\n virtualenv.path_locations(\n ENVIRONMENT_DIR,\n )[-1].rstrip(os.sep) + os.sep,\n 'activate',\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner):\n yield PythonEnv(repo_cmd_runner)\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n if not distutils.spawn.find_executable(version):\n # The default place for python on windows is:\n # C:\\PythonXX\\python.exe\n version = r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n return version\n\n\ndef install_environment(repo_cmd_runner, version='default'):\n assert repo_cmd_runner.exists('setup.py')\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{0}'.format(ENVIRONMENT_DIR)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner) as env:\n env.run(\"cd '{prefix}' && pip install .\")\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}]} | 780 | 191 |
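A minimal, standalone illustration of the command the patch builds, outside pre-commit's `repo_cmd_runner` abstraction. The target directory name is arbitrary, and `virtualenv` is assumed to be installed for the interpreter running the snippet.

```python
import subprocess
import sys

# Running virtualenv via "python -m" pins it to the interpreter that is executing
# pre-commit, instead of whichever "virtualenv" script happens to be first on PATH.
venv_cmd = [sys.executable, '-m', 'virtualenv', 'py_env']  # 'py_env' is an arbitrary target dir
subprocess.check_call(venv_cmd)
```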
gh_patches_debug_30909 | rasdani/github-patches | git_diff | ephios-dev__ephios-1012 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve explanation for page slugs
Users did not understand what the page slug means. We should provide a help text, either directly or in the docs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/plugins/pages/models.py`
Content:
```
1 from django.db import models
2 from django.utils.translation import gettext_lazy as _
3
4
5 class Page(models.Model):
6 title = models.CharField(verbose_name=_("Title"), max_length=250)
7 content = models.TextField(_("Content"), blank=True)
8 slug = models.SlugField(_("Slug"), max_length=250, unique=True)
9 show_in_footer = models.BooleanField(_("Show in footer"), default=False)
10 publicly_visible = models.BooleanField(_("Publicly visible"), default=False)
11
12 def __str__(self):
13 return str(self.title)
14
15 class Meta:
16 verbose_name = "Page"
17 verbose_name_plural = "Pages"
18
```
Path: `ephios/plugins/pages/views.py`
Content:
```
1 from django.contrib import messages
2 from django.contrib.auth.views import redirect_to_login
3 from django.urls import reverse
4 from django.utils.translation import gettext as _
5 from django.views.generic import CreateView, DeleteView, DetailView, ListView, UpdateView
6
7 from ephios.extra.mixins import CustomPermissionRequiredMixin
8 from ephios.plugins.pages.models import Page
9
10
11 class PageListView(CustomPermissionRequiredMixin, ListView):
12 model = Page
13 permission_required = "pages.add_page"
14
15
16 class PageView(DetailView):
17 model = Page
18
19 def setup(self, request, *args, **kwargs):
20 super().setup(request, *args, **kwargs)
21 self.object = self.get_object()
22
23 def dispatch(self, request, *args, **kwargs):
24 if not request.user.is_authenticated and not self.object.publicly_visible:
25 return redirect_to_login(self.request.get_full_path())
26 return super().dispatch(request, *args, **kwargs)
27
28
29 class PageCreateView(CustomPermissionRequiredMixin, CreateView):
30 model = Page
31 permission_required = "pages.add_page"
32 fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
33
34 def get_success_url(self):
35 messages.success(self.request, _("Page saved successfully."))
36 return reverse("pages:settings_page_list")
37
38
39 class PageUpdateView(CustomPermissionRequiredMixin, UpdateView):
40 model = Page
41 permission_required = "pages.change_page"
42 fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
43
44 def get_success_url(self):
45 messages.success(self.request, _("Page saved successfully."))
46 return reverse("pages:settings_page_list")
47
48
49 class PageDeleteView(CustomPermissionRequiredMixin, DeleteView):
50 model = Page
51 permission_required = "pages.delete_page"
52
53 def get_success_url(self):
54 messages.info(self.request, _("Page deleted successfully."))
55 return reverse("pages:settings_page_list")
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/plugins/pages/models.py b/ephios/plugins/pages/models.py
--- a/ephios/plugins/pages/models.py
+++ b/ephios/plugins/pages/models.py
@@ -5,7 +5,12 @@
class Page(models.Model):
title = models.CharField(verbose_name=_("Title"), max_length=250)
content = models.TextField(_("Content"), blank=True)
- slug = models.SlugField(_("Slug"), max_length=250, unique=True)
+ slug = models.SlugField(
+ _("URL slug"),
+ help_text=_("The slug is used to generate the page's URL."),
+ max_length=250,
+ unique=True,
+ )
show_in_footer = models.BooleanField(_("Show in footer"), default=False)
publicly_visible = models.BooleanField(_("Publicly visible"), default=False)
diff --git a/ephios/plugins/pages/views.py b/ephios/plugins/pages/views.py
--- a/ephios/plugins/pages/views.py
+++ b/ephios/plugins/pages/views.py
@@ -29,7 +29,7 @@
class PageCreateView(CustomPermissionRequiredMixin, CreateView):
model = Page
permission_required = "pages.add_page"
- fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
+ fields = ["title", "slug", "content", "show_in_footer", "publicly_visible"]
def get_success_url(self):
messages.success(self.request, _("Page saved successfully."))
@@ -39,7 +39,7 @@
class PageUpdateView(CustomPermissionRequiredMixin, UpdateView):
model = Page
permission_required = "pages.change_page"
- fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
+ fields = ["title", "slug", "content", "show_in_footer", "publicly_visible"]
def get_success_url(self):
messages.success(self.request, _("Page saved successfully."))
| {"golden_diff": "diff --git a/ephios/plugins/pages/models.py b/ephios/plugins/pages/models.py\n--- a/ephios/plugins/pages/models.py\n+++ b/ephios/plugins/pages/models.py\n@@ -5,7 +5,12 @@\n class Page(models.Model):\n title = models.CharField(verbose_name=_(\"Title\"), max_length=250)\n content = models.TextField(_(\"Content\"), blank=True)\n- slug = models.SlugField(_(\"Slug\"), max_length=250, unique=True)\n+ slug = models.SlugField(\n+ _(\"URL slug\"),\n+ help_text=_(\"The slug is used to generate the page's URL.\"),\n+ max_length=250,\n+ unique=True,\n+ )\n show_in_footer = models.BooleanField(_(\"Show in footer\"), default=False)\n publicly_visible = models.BooleanField(_(\"Publicly visible\"), default=False)\n \ndiff --git a/ephios/plugins/pages/views.py b/ephios/plugins/pages/views.py\n--- a/ephios/plugins/pages/views.py\n+++ b/ephios/plugins/pages/views.py\n@@ -29,7 +29,7 @@\n class PageCreateView(CustomPermissionRequiredMixin, CreateView):\n model = Page\n permission_required = \"pages.add_page\"\n- fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n+ fields = [\"title\", \"slug\", \"content\", \"show_in_footer\", \"publicly_visible\"]\n \n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n@@ -39,7 +39,7 @@\n class PageUpdateView(CustomPermissionRequiredMixin, UpdateView):\n model = Page\n permission_required = \"pages.change_page\"\n- fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n+ fields = [\"title\", \"slug\", \"content\", \"show_in_footer\", \"publicly_visible\"]\n \n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n", "issue": "Improve explanation for page slugs\nUsers did not understand what the page slug means. 
We should provide a help text, either directly or in the docs.\n", "before_files": [{"content": "from django.db import models\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass Page(models.Model):\n title = models.CharField(verbose_name=_(\"Title\"), max_length=250)\n content = models.TextField(_(\"Content\"), blank=True)\n slug = models.SlugField(_(\"Slug\"), max_length=250, unique=True)\n show_in_footer = models.BooleanField(_(\"Show in footer\"), default=False)\n publicly_visible = models.BooleanField(_(\"Publicly visible\"), default=False)\n\n def __str__(self):\n return str(self.title)\n\n class Meta:\n verbose_name = \"Page\"\n verbose_name_plural = \"Pages\"\n", "path": "ephios/plugins/pages/models.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.views import redirect_to_login\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\nfrom django.views.generic import CreateView, DeleteView, DetailView, ListView, UpdateView\n\nfrom ephios.extra.mixins import CustomPermissionRequiredMixin\nfrom ephios.plugins.pages.models import Page\n\n\nclass PageListView(CustomPermissionRequiredMixin, ListView):\n model = Page\n permission_required = \"pages.add_page\"\n\n\nclass PageView(DetailView):\n model = Page\n\n def setup(self, request, *args, **kwargs):\n super().setup(request, *args, **kwargs)\n self.object = self.get_object()\n\n def dispatch(self, request, *args, **kwargs):\n if not request.user.is_authenticated and not self.object.publicly_visible:\n return redirect_to_login(self.request.get_full_path())\n return super().dispatch(request, *args, **kwargs)\n\n\nclass PageCreateView(CustomPermissionRequiredMixin, CreateView):\n model = Page\n permission_required = \"pages.add_page\"\n fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n\n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n return reverse(\"pages:settings_page_list\")\n\n\nclass PageUpdateView(CustomPermissionRequiredMixin, UpdateView):\n model = Page\n permission_required = \"pages.change_page\"\n fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n\n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n return reverse(\"pages:settings_page_list\")\n\n\nclass PageDeleteView(CustomPermissionRequiredMixin, DeleteView):\n model = Page\n permission_required = \"pages.delete_page\"\n\n def get_success_url(self):\n messages.info(self.request, _(\"Page deleted successfully.\"))\n return reverse(\"pages:settings_page_list\")\n", "path": "ephios/plugins/pages/views.py"}], "after_files": [{"content": "from django.db import models\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass Page(models.Model):\n title = models.CharField(verbose_name=_(\"Title\"), max_length=250)\n content = models.TextField(_(\"Content\"), blank=True)\n slug = models.SlugField(\n _(\"URL slug\"),\n help_text=_(\"The slug is used to generate the page's URL.\"),\n max_length=250,\n unique=True,\n )\n show_in_footer = models.BooleanField(_(\"Show in footer\"), default=False)\n publicly_visible = models.BooleanField(_(\"Publicly visible\"), default=False)\n\n def __str__(self):\n return str(self.title)\n\n class Meta:\n verbose_name = \"Page\"\n verbose_name_plural = \"Pages\"\n", "path": "ephios/plugins/pages/models.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.views import 
redirect_to_login\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\nfrom django.views.generic import CreateView, DeleteView, DetailView, ListView, UpdateView\n\nfrom ephios.extra.mixins import CustomPermissionRequiredMixin\nfrom ephios.plugins.pages.models import Page\n\n\nclass PageListView(CustomPermissionRequiredMixin, ListView):\n model = Page\n permission_required = \"pages.add_page\"\n\n\nclass PageView(DetailView):\n model = Page\n\n def setup(self, request, *args, **kwargs):\n super().setup(request, *args, **kwargs)\n self.object = self.get_object()\n\n def dispatch(self, request, *args, **kwargs):\n if not request.user.is_authenticated and not self.object.publicly_visible:\n return redirect_to_login(self.request.get_full_path())\n return super().dispatch(request, *args, **kwargs)\n\n\nclass PageCreateView(CustomPermissionRequiredMixin, CreateView):\n model = Page\n permission_required = \"pages.add_page\"\n fields = [\"title\", \"slug\", \"content\", \"show_in_footer\", \"publicly_visible\"]\n\n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n return reverse(\"pages:settings_page_list\")\n\n\nclass PageUpdateView(CustomPermissionRequiredMixin, UpdateView):\n model = Page\n permission_required = \"pages.change_page\"\n fields = [\"title\", \"slug\", \"content\", \"show_in_footer\", \"publicly_visible\"]\n\n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n return reverse(\"pages:settings_page_list\")\n\n\nclass PageDeleteView(CustomPermissionRequiredMixin, DeleteView):\n model = Page\n permission_required = \"pages.delete_page\"\n\n def get_success_url(self):\n messages.info(self.request, _(\"Page deleted successfully.\"))\n return reverse(\"pages:settings_page_list\")\n", "path": "ephios/plugins/pages/views.py"}]} | 989 | 431 |
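To make the new help text concrete: the slug is the URL-safe identifier stored next to the title, and it ends up in the page's address. The `/pages/<slug>/` route in the comments below is a typical pattern, not necessarily ephios' exact URLconf.

```python
from django.utils.text import slugify

title = "Privacy Policy (2023)"
slug = slugify(title)
print(slug)               # privacy-policy-2023
print(f"/pages/{slug}/")  # the address a route like "pages/<slug:slug>/" would serve
```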
gh_patches_debug_15187 | rasdani/github-patches | git_diff | vispy__vispy-1362 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
examples/tutorial/app/simple_wx.py issue
```
Traceback (most recent call last):
File "simple_wx.py", line 58, in <module>
frame = TestFrame()
File "simple_wx.py", line 49, in __init__
self.canvas = Canvas(app="wx", parent=self, show=True)
File "simple_wx.py", line 20, in __init__
app.Canvas.__init__(self, *args, **kwargs)
File "/home/eldar/src/vispy/vispy/app/canvas.py", line 208, in __init__
self.set_current()
File "/home/eldar/src/vispy/vispy/app/canvas.py", line 406, in set_current
self._backend._vispy_set_current()
File "/home/eldar/src/vispy/vispy/app/backends/_wx.py", line 302, in _vispy_set_current
self.SetCurrent(self._gl_context)
wx._core.wxAssertionError: C++ assertion "xid" failed at /home/eldar/src/wx/wxPython_Phoenix/wxPython-4.0.0b2/ext/wxWidgets/src/unix/glx11.cpp(194) in SetCurrent(): window must be shown
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/tutorial/app/simple_wx.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # vispy: testskip
3 # Copyright (c) 2015, Vispy Development Team.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 """
6 This is a very minimal example that opens a window and makes the background
7 color to change from black to white to black ...
8
9 The wx backend is used to embed the canvas in a simple wx Frame with
10 a menubar.
11 """
12
13 import wx
14 import math
15 from vispy import app, gloo
16
17
18 class Canvas(app.Canvas):
19 def __init__(self, *args, **kwargs):
20 app.Canvas.__init__(self, *args, **kwargs)
21 self._timer = app.Timer('auto', connect=self.on_timer, start=True)
22 self.tick = 0
23
24 def on_draw(self, event):
25 gloo.clear(color=True)
26
27 def on_timer(self, event):
28 self.tick += 1 / 60.0
29 c = abs(math.sin(self.tick))
30 gloo.set_clear_color((c, c, c, 1))
31 self.update()
32
33 def stop_timer(self):
34 self._timer.stop()
35
36
37 class TestFrame(wx.Frame):
38 def __init__(self):
39 wx.Frame.__init__(self, None, -1, "Vispy Test",
40 wx.DefaultPosition, size=(500, 500))
41
42 MenuBar = wx.MenuBar()
43 file_menu = wx.Menu()
44 file_menu.Append(wx.ID_EXIT, "&Quit")
45 self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)
46 MenuBar.Append(file_menu, "&File")
47 self.SetMenuBar(MenuBar)
48
49 self.canvas = Canvas(app="wx", parent=self, show=True)
50
51 def on_quit(self, event):
52 self.canvas.stop_timer()
53 self.Close(True)
54
55
56 if __name__ == '__main__':
57 myapp = wx.App(0)
58 frame = TestFrame()
59 frame.Show(True)
60 myapp.MainLoop()
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/tutorial/app/simple_wx.py b/examples/tutorial/app/simple_wx.py
--- a/examples/tutorial/app/simple_wx.py
+++ b/examples/tutorial/app/simple_wx.py
@@ -43,15 +43,20 @@
file_menu = wx.Menu()
file_menu.Append(wx.ID_EXIT, "&Quit")
self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)
+ self.Bind(wx.EVT_SHOW, self.on_show)
MenuBar.Append(file_menu, "&File")
self.SetMenuBar(MenuBar)
- self.canvas = Canvas(app="wx", parent=self, show=True)
+ self.canvas = Canvas(app="wx", parent=self)
def on_quit(self, event):
self.canvas.stop_timer()
self.Close(True)
+ def on_show(self, event):
+ self.canvas.show()
+ event.Skip()
+
if __name__ == '__main__':
myapp = wx.App(0)
| {"golden_diff": "diff --git a/examples/tutorial/app/simple_wx.py b/examples/tutorial/app/simple_wx.py\n--- a/examples/tutorial/app/simple_wx.py\n+++ b/examples/tutorial/app/simple_wx.py\n@@ -43,15 +43,20 @@\n file_menu = wx.Menu()\n file_menu.Append(wx.ID_EXIT, \"&Quit\")\n self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)\n+ self.Bind(wx.EVT_SHOW, self.on_show)\n MenuBar.Append(file_menu, \"&File\")\n self.SetMenuBar(MenuBar)\n \n- self.canvas = Canvas(app=\"wx\", parent=self, show=True)\n+ self.canvas = Canvas(app=\"wx\", parent=self)\n \n def on_quit(self, event):\n self.canvas.stop_timer()\n self.Close(True)\n \n+ def on_show(self, event):\n+ self.canvas.show()\n+ event.Skip()\n+\n \n if __name__ == '__main__':\n myapp = wx.App(0)\n", "issue": "examples/tutorial/app/simple_wx.py issue\n```\r\nTraceback (most recent call last):\r\n File \"simple_wx.py\", line 58, in <module>\r\n frame = TestFrame()\r\n File \"simple_wx.py\", line 49, in __init__\r\n self.canvas = Canvas(app=\"wx\", parent=self, show=True)\r\n File \"simple_wx.py\", line 20, in __init__\r\n app.Canvas.__init__(self, *args, **kwargs)\r\n File \"/home/eldar/src/vispy/vispy/app/canvas.py\", line 208, in __init__\r\n self.set_current()\r\n File \"/home/eldar/src/vispy/vispy/app/canvas.py\", line 406, in set_current\r\n self._backend._vispy_set_current()\r\n File \"/home/eldar/src/vispy/vispy/app/backends/_wx.py\", line 302, in _vispy_set_current\r\n self.SetCurrent(self._gl_context)\r\nwx._core.wxAssertionError: C++ assertion \"xid\" failed at /home/eldar/src/wx/wxPython_Phoenix/wxPython-4.0.0b2/ext/wxWidgets/src/unix/glx11.cpp(194) in SetCurrent(): window must be shown\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# vispy: testskip\n# Copyright (c) 2015, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\"\"\"\nThis is a very minimal example that opens a window and makes the background\ncolor to change from black to white to black ...\n\nThe wx backend is used to embed the canvas in a simple wx Frame with\na menubar.\n\"\"\"\n\nimport wx\nimport math\nfrom vispy import app, gloo\n\n\nclass Canvas(app.Canvas):\n def __init__(self, *args, **kwargs):\n app.Canvas.__init__(self, *args, **kwargs)\n self._timer = app.Timer('auto', connect=self.on_timer, start=True)\n self.tick = 0\n\n def on_draw(self, event):\n gloo.clear(color=True)\n\n def on_timer(self, event):\n self.tick += 1 / 60.0\n c = abs(math.sin(self.tick))\n gloo.set_clear_color((c, c, c, 1))\n self.update()\n\n def stop_timer(self):\n self._timer.stop()\n\n\nclass TestFrame(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None, -1, \"Vispy Test\",\n wx.DefaultPosition, size=(500, 500))\n\n MenuBar = wx.MenuBar()\n file_menu = wx.Menu()\n file_menu.Append(wx.ID_EXIT, \"&Quit\")\n self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)\n MenuBar.Append(file_menu, \"&File\")\n self.SetMenuBar(MenuBar)\n\n self.canvas = Canvas(app=\"wx\", parent=self, show=True)\n\n def on_quit(self, event):\n self.canvas.stop_timer()\n self.Close(True)\n\n\nif __name__ == '__main__':\n myapp = wx.App(0)\n frame = TestFrame()\n frame.Show(True)\n myapp.MainLoop()\n", "path": "examples/tutorial/app/simple_wx.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# vispy: testskip\n# Copyright (c) 2015, Vispy Development Team.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\"\"\"\nThis is a very minimal example that opens a window and makes the background\ncolor to change from black to white to black ...\n\nThe wx backend is used to embed the canvas in a simple wx Frame with\na menubar.\n\"\"\"\n\nimport wx\nimport math\nfrom vispy import app, gloo\n\n\nclass Canvas(app.Canvas):\n def __init__(self, *args, **kwargs):\n app.Canvas.__init__(self, *args, **kwargs)\n self._timer = app.Timer('auto', connect=self.on_timer, start=True)\n self.tick = 0\n\n def on_draw(self, event):\n gloo.clear(color=True)\n\n def on_timer(self, event):\n self.tick += 1 / 60.0\n c = abs(math.sin(self.tick))\n gloo.set_clear_color((c, c, c, 1))\n self.update()\n\n def stop_timer(self):\n self._timer.stop()\n\n\nclass TestFrame(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None, -1, \"Vispy Test\",\n wx.DefaultPosition, size=(500, 500))\n\n MenuBar = wx.MenuBar()\n file_menu = wx.Menu()\n file_menu.Append(wx.ID_EXIT, \"&Quit\")\n self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)\n self.Bind(wx.EVT_SHOW, self.on_show)\n MenuBar.Append(file_menu, \"&File\")\n self.SetMenuBar(MenuBar)\n\n self.canvas = Canvas(app=\"wx\", parent=self)\n\n def on_quit(self, event):\n self.canvas.stop_timer()\n self.Close(True)\n\n def on_show(self, event):\n self.canvas.show()\n event.Skip()\n\n\nif __name__ == '__main__':\n myapp = wx.App(0)\n frame = TestFrame()\n frame.Show(True)\n myapp.MainLoop()\n", "path": "examples/tutorial/app/simple_wx.py"}]} | 1,097 | 207 |
gh_patches_debug_975 | rasdani/github-patches | git_diff | PennyLaneAI__pennylane-2947 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] `qml.equal` ignore in-place inversion
Currently, we have:
```
>>> qml.equal(qml.RX(1.0, wires=0), qml.RX(1.0, wires=0).inv())
True
```
If two operations are inverses of each other, they should not be equal.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pennylane/ops/functions/equal.py`
Content:
```
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This module contains the qml.equal function.
16 """
17 # pylint: disable=too-many-arguments,too-many-return-statements
18 import pennylane as qml
19 from pennylane.operation import Operator
20
21
22 def equal(
23 op1: Operator,
24 op2: Operator,
25 check_interface=True,
26 check_trainability=True,
27 rtol=1e-5,
28 atol=1e-9,
29 ):
30 r"""Function for determining operator equality.
31
32 Args:
33 op1 (.Operator): First operator to compare
34 op2 (.Operator): Second operator to compare
35 check_interface (bool, optional): Whether to compare interfaces. Default: `True`
36 check_trainability (bool, optional): Whether to compare trainability status. Default: `True`
37 rtol (float, optional): Relative tolerance for parameters
38 atol (float, optional): Absolute tolerance for parameters
39
40 Returns:
41 bool: `True` if the operators are equal, else `False`
42
43 **Example**
44
45 Given two operators, ``qml.equal`` determines their equality:
46
47 >>> op1 = qml.RX(np.array(.12), wires=0)
48 >>> op2 = qml.RY(np.array(1.23), wires=0)
49 >>> qml.equal(op1, op1), qml.equal(op1, op2)
50 True False
51
52 .. details::
53 :title: Usage Details
54
55 You can use the optional arguments to get more specific results.
56
57 Consider the following comparisons:
58
59 >>> op1 = qml.RX(torch.tensor(1.2), wires=0)
60 >>> op2 = qml.RX(jax.numpy.array(1.2), wires=0)
61 >>> qml.equal(op1, op2)
62 False
63
64 >>> qml.equal(op1, op2, check_interface=False, check_trainability=False)
65 True
66
67 >>> op3 = qml.RX(np.array(1.2, requires_grad=True), wires=0)
68 >>> op4 = qml.RX(np.array(1.2, requires_grad=False), wires=0)
69 >>> qml.equal(op3, op4)
70 False
71
72 >>> qml.equal(op3, op4, check_trainability=False)
73 True
74 """
75 if op1.__class__ is not op2.__class__ or op1.arithmetic_depth != op2.arithmetic_depth:
76 return False
77 if op1.arithmetic_depth > 0:
78 raise NotImplementedError(
79 "Comparison of operators with an arithmetic depth larger than 0 is not yet implemented."
80 )
81 if not all(
82 qml.math.allclose(d1, d2, rtol=rtol, atol=atol) for d1, d2 in zip(op1.data, op2.data)
83 ):
84 return False
85 if op1.wires != op2.wires:
86 return False
87 for kwarg in op1.hyperparameters:
88 if op1.hyperparameters[kwarg] != op2.hyperparameters[kwarg]:
89 return False
90
91 if check_trainability:
92 for params_1, params_2 in zip(op1.data, op2.data):
93 if qml.math.requires_grad(params_1) != qml.math.requires_grad(params_2):
94 return False
95
96 if check_interface:
97 for params_1, params_2 in zip(op1.data, op2.data):
98 if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):
99 return False
100
101 return True
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pennylane/ops/functions/equal.py b/pennylane/ops/functions/equal.py
--- a/pennylane/ops/functions/equal.py
+++ b/pennylane/ops/functions/equal.py
@@ -98,4 +98,4 @@
if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):
return False
- return True
+ return getattr(op1, "inverse", False) == getattr(op2, "inverse", False)
| {"golden_diff": "diff --git a/pennylane/ops/functions/equal.py b/pennylane/ops/functions/equal.py\n--- a/pennylane/ops/functions/equal.py\n+++ b/pennylane/ops/functions/equal.py\n@@ -98,4 +98,4 @@\n if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):\n return False\n \n- return True\n+ return getattr(op1, \"inverse\", False) == getattr(op2, \"inverse\", False)\n", "issue": "[BUG] `qml.equal` ignore in-place inversion\nCurrently, we have:\r\n```\r\n>>> qml.equal(qml.RX(1.0, wires=0), qml.RX(1.0, wires=0).inv())\r\nTrue\r\n```\r\n\r\nIf two operations are inverses of each other, they should not be equal.\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis module contains the qml.equal function.\n\"\"\"\n# pylint: disable=too-many-arguments,too-many-return-statements\nimport pennylane as qml\nfrom pennylane.operation import Operator\n\n\ndef equal(\n op1: Operator,\n op2: Operator,\n check_interface=True,\n check_trainability=True,\n rtol=1e-5,\n atol=1e-9,\n):\n r\"\"\"Function for determining operator equality.\n\n Args:\n op1 (.Operator): First operator to compare\n op2 (.Operator): Second operator to compare\n check_interface (bool, optional): Whether to compare interfaces. Default: `True`\n check_trainability (bool, optional): Whether to compare trainability status. Default: `True`\n rtol (float, optional): Relative tolerance for parameters\n atol (float, optional): Absolute tolerance for parameters\n\n Returns:\n bool: `True` if the operators are equal, else `False`\n\n **Example**\n\n Given two operators, ``qml.equal`` determines their equality:\n\n >>> op1 = qml.RX(np.array(.12), wires=0)\n >>> op2 = qml.RY(np.array(1.23), wires=0)\n >>> qml.equal(op1, op1), qml.equal(op1, op2)\n True False\n\n .. 
details::\n :title: Usage Details\n\n You can use the optional arguments to get more specific results.\n\n Consider the following comparisons:\n\n >>> op1 = qml.RX(torch.tensor(1.2), wires=0)\n >>> op2 = qml.RX(jax.numpy.array(1.2), wires=0)\n >>> qml.equal(op1, op2)\n False\n\n >>> qml.equal(op1, op2, check_interface=False, check_trainability=False)\n True\n\n >>> op3 = qml.RX(np.array(1.2, requires_grad=True), wires=0)\n >>> op4 = qml.RX(np.array(1.2, requires_grad=False), wires=0)\n >>> qml.equal(op3, op4)\n False\n\n >>> qml.equal(op3, op4, check_trainability=False)\n True\n \"\"\"\n if op1.__class__ is not op2.__class__ or op1.arithmetic_depth != op2.arithmetic_depth:\n return False\n if op1.arithmetic_depth > 0:\n raise NotImplementedError(\n \"Comparison of operators with an arithmetic depth larger than 0 is not yet implemented.\"\n )\n if not all(\n qml.math.allclose(d1, d2, rtol=rtol, atol=atol) for d1, d2 in zip(op1.data, op2.data)\n ):\n return False\n if op1.wires != op2.wires:\n return False\n for kwarg in op1.hyperparameters:\n if op1.hyperparameters[kwarg] != op2.hyperparameters[kwarg]:\n return False\n\n if check_trainability:\n for params_1, params_2 in zip(op1.data, op2.data):\n if qml.math.requires_grad(params_1) != qml.math.requires_grad(params_2):\n return False\n\n if check_interface:\n for params_1, params_2 in zip(op1.data, op2.data):\n if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):\n return False\n\n return True\n", "path": "pennylane/ops/functions/equal.py"}], "after_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis module contains the qml.equal function.\n\"\"\"\n# pylint: disable=too-many-arguments,too-many-return-statements\nimport pennylane as qml\nfrom pennylane.operation import Operator\n\n\ndef equal(\n op1: Operator,\n op2: Operator,\n check_interface=True,\n check_trainability=True,\n rtol=1e-5,\n atol=1e-9,\n):\n r\"\"\"Function for determining operator equality.\n\n Args:\n op1 (.Operator): First operator to compare\n op2 (.Operator): Second operator to compare\n check_interface (bool, optional): Whether to compare interfaces. Default: `True`\n check_trainability (bool, optional): Whether to compare trainability status. Default: `True`\n rtol (float, optional): Relative tolerance for parameters\n atol (float, optional): Absolute tolerance for parameters\n\n Returns:\n bool: `True` if the operators are equal, else `False`\n\n **Example**\n\n Given two operators, ``qml.equal`` determines their equality:\n\n >>> op1 = qml.RX(np.array(.12), wires=0)\n >>> op2 = qml.RY(np.array(1.23), wires=0)\n >>> qml.equal(op1, op1), qml.equal(op1, op2)\n True False\n\n .. 
details::\n :title: Usage Details\n\n You can use the optional arguments to get more specific results.\n\n Consider the following comparisons:\n\n >>> op1 = qml.RX(torch.tensor(1.2), wires=0)\n >>> op2 = qml.RX(jax.numpy.array(1.2), wires=0)\n >>> qml.equal(op1, op2)\n False\n\n >>> qml.equal(op1, op2, check_interface=False, check_trainability=False)\n True\n\n >>> op3 = qml.RX(np.array(1.2, requires_grad=True), wires=0)\n >>> op4 = qml.RX(np.array(1.2, requires_grad=False), wires=0)\n >>> qml.equal(op3, op4)\n False\n\n >>> qml.equal(op3, op4, check_trainability=False)\n True\n \"\"\"\n if op1.__class__ is not op2.__class__ or op1.arithmetic_depth != op2.arithmetic_depth:\n return False\n if op1.arithmetic_depth > 0:\n raise NotImplementedError(\n \"Comparison of operators with an arithmetic depth larger than 0 is not yet implemented.\"\n )\n if not all(\n qml.math.allclose(d1, d2, rtol=rtol, atol=atol) for d1, d2 in zip(op1.data, op2.data)\n ):\n return False\n if op1.wires != op2.wires:\n return False\n for kwarg in op1.hyperparameters:\n if op1.hyperparameters[kwarg] != op2.hyperparameters[kwarg]:\n return False\n\n if check_trainability:\n for params_1, params_2 in zip(op1.data, op2.data):\n if qml.math.requires_grad(params_1) != qml.math.requires_grad(params_2):\n return False\n\n if check_interface:\n for params_1, params_2 in zip(op1.data, op2.data):\n if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):\n return False\n\n return getattr(op1, \"inverse\", False) == getattr(op2, \"inverse\", False)\n", "path": "pennylane/ops/functions/equal.py"}]} | 1,438 | 116 |
gh_patches_debug_4858 | rasdani/github-patches | git_diff | Gallopsled__pwntools-752 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
3.0.3 Release Broken
It appears that the archive uploaded to PyPI does not include README.md, which is referred to by setup.py.
@Idolf can you update the release to include the README?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python2
2 import glob
3 import os
4 import platform
5 import sys
6 from distutils.command.install import INSTALL_SCHEMES
7 from distutils.sysconfig import get_python_inc
8 from distutils.util import convert_path
9
10 from setuptools import find_packages
11 from setuptools import setup
12
13 # Get all template files
14 templates = []
15 for dirpath, dirnames, filenames in os.walk(convert_path('pwnlib/shellcraft/templates')):
16 for f in filenames:
17 templates.append(os.path.relpath(os.path.join(dirpath, f), 'pwnlib'))
18
19 # This makes pwntools-LICENSE.txt appear with the package folders
20 for scheme in INSTALL_SCHEMES.values():
21 scheme['data'] = scheme['purelib']
22
23 # Find all of the console scripts
24 console_scripts = []
25
26 for filename in glob.glob('pwnlib/commandline/*'):
27 filename = os.path.basename(filename)
28 filename, ext = os.path.splitext(filename)
29
30 if ext != '.py' or '__init__' in filename:
31 continue
32
33 script = '%s=pwnlib.commandline.%s:main' % (filename, filename)
34 console_scripts.append(script)
35
36 install_requires = ['paramiko>=1.15.2',
37 'mako>=1.0.0',
38 'pyelftools>=0.2.4',
39 'capstone',
40 'ropgadget>=5.3',
41 'pyserial>=2.7',
42 'requests>=2.0',
43 'pip>=6.0.8',
44 'tox>=1.8.1',
45 'pygments>=2.0',
46 'pysocks',
47 'python-dateutil',
48 'pypandoc',
49 'packaging']
50
51 # This is a hack until somebody ports psutil to OpenBSD
52 if platform.system() != 'OpenBSD':
53 install_requires.append('psutil>=2.1.3')
54
55 # Check that the user has installed the Python development headers
56 PythonH = os.path.join(get_python_inc(), 'Python.h')
57 if not os.path.exists(PythonH):
58 print >> sys.stderr, "You must install the Python development headers!"
59 print >> sys.stderr, "$ apt-get install python-dev"
60 sys.exit(-1)
61
62 # Convert README.md to reStructuredText for PyPI
63 long_description = ''
64 try:
65 import pypandoc
66 try:
67 pypandoc.get_pandoc_path()
68 except OSError:
69 pypandoc.download_pandoc()
70 long_description = pypandoc.convert_file('README.md', 'rst')
71 except ImportError:
72 pass
73
74
75 setup(
76 name = 'pwntools',
77 packages = find_packages(),
78 version = '3.0.3',
79 data_files = [('',
80 ['LICENSE-pwntools.txt',
81 ]),
82 ],
83 package_data = {
84 'pwnlib': [
85 'data/crcsums.txt',
86 'data/useragents/useragents.txt',
87 'data/binutils/*',
88 'data/includes/*.h',
89 'data/includes/*/*.h',
90 ] + templates,
91 },
92 entry_points = {'console_scripts': console_scripts},
93 scripts = glob.glob("bin/*"),
94 description = "Pwntools CTF framework and exploit development library.",
95 long_description = long_description,
96 author = "Gallopsled et al.",
97 author_email = "#pwntools @ freenode.net",
98 url = 'https://pwntools.com',
99 download_url = "https://github.com/Gallopsled/pwntools/releases",
100 install_requires = install_requires,
101 license = "Mostly MIT, some GPL/BSD, see LICENSE-pwntools.txt",
102 keywords = 'pwntools exploit ctf capture the flag binary wargame overflow stack heap defcon',
103 classifiers = [
104 'Development Status :: 5 - Production/Stable',
105 'Environment :: Console',
106 'Intended Audience :: Developers',
107 'Intended Audience :: Science/Research',
108 'Intended Audience :: System Administrators',
109 'License :: OSI Approved :: MIT License',
110 'Natural Language :: English',
111 'Operating System :: POSIX :: Linux',
112 'Programming Language :: Python :: 2.7',
113 'Topic :: Security',
114 'Topic :: Software Development :: Assemblers',
115 'Topic :: Software Development :: Debuggers',
116 'Topic :: Software Development :: Disassemblers',
117 'Topic :: Software Development :: Embedded Systems',
118 'Topic :: Software Development :: Libraries :: Python Modules',
119 'Topic :: System :: System Shells',
120 'Topic :: Utilities',
121 ]
122 )
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -77,8 +77,7 @@
packages = find_packages(),
version = '3.0.3',
data_files = [('',
- ['LICENSE-pwntools.txt',
- ]),
+ glob.glob('*.md') + glob.glob('*.txt')),
],
package_data = {
'pwnlib': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -77,8 +77,7 @@\n packages = find_packages(),\n version = '3.0.3',\n data_files = [('',\n- ['LICENSE-pwntools.txt',\n- ]),\n+ glob.glob('*.md') + glob.glob('*.txt')),\n ],\n package_data = {\n 'pwnlib': [\n", "issue": "3.0.3 Release Broken\nIt appears that the archive uploaded to PyPI does not include README.md, which is referred to by setup.py.\n\n@Idolf can you update the release to include the README?\n\n", "before_files": [{"content": "#!/usr/bin/env python2\nimport glob\nimport os\nimport platform\nimport sys\nfrom distutils.command.install import INSTALL_SCHEMES\nfrom distutils.sysconfig import get_python_inc\nfrom distutils.util import convert_path\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Get all template files\ntemplates = []\nfor dirpath, dirnames, filenames in os.walk(convert_path('pwnlib/shellcraft/templates')):\n for f in filenames:\n templates.append(os.path.relpath(os.path.join(dirpath, f), 'pwnlib'))\n\n# This makes pwntools-LICENSE.txt appear with the package folders\nfor scheme in INSTALL_SCHEMES.values():\n scheme['data'] = scheme['purelib']\n\n# Find all of the console scripts\nconsole_scripts = []\n\nfor filename in glob.glob('pwnlib/commandline/*'):\n filename = os.path.basename(filename)\n filename, ext = os.path.splitext(filename)\n\n if ext != '.py' or '__init__' in filename:\n continue\n\n script = '%s=pwnlib.commandline.%s:main' % (filename, filename)\n console_scripts.append(script)\n\ninstall_requires = ['paramiko>=1.15.2',\n 'mako>=1.0.0',\n 'pyelftools>=0.2.4',\n 'capstone',\n 'ropgadget>=5.3',\n 'pyserial>=2.7',\n 'requests>=2.0',\n 'pip>=6.0.8',\n 'tox>=1.8.1',\n 'pygments>=2.0',\n 'pysocks',\n 'python-dateutil',\n 'pypandoc',\n 'packaging']\n\n# This is a hack until somebody ports psutil to OpenBSD\nif platform.system() != 'OpenBSD':\n install_requires.append('psutil>=2.1.3')\n\n# Check that the user has installed the Python development headers\nPythonH = os.path.join(get_python_inc(), 'Python.h')\nif not os.path.exists(PythonH):\n print >> sys.stderr, \"You must install the Python development headers!\"\n print >> sys.stderr, \"$ apt-get install python-dev\"\n sys.exit(-1)\n\n# Convert README.md to reStructuredText for PyPI\nlong_description = ''\ntry:\n import pypandoc\n try:\n pypandoc.get_pandoc_path()\n except OSError:\n pypandoc.download_pandoc()\n long_description = pypandoc.convert_file('README.md', 'rst')\nexcept ImportError:\n pass\n\n\nsetup(\n name = 'pwntools',\n packages = find_packages(),\n version = '3.0.3',\n data_files = [('',\n ['LICENSE-pwntools.txt',\n ]),\n ],\n package_data = {\n 'pwnlib': [\n 'data/crcsums.txt',\n 'data/useragents/useragents.txt',\n 'data/binutils/*',\n 'data/includes/*.h',\n 'data/includes/*/*.h',\n ] + templates,\n },\n entry_points = {'console_scripts': console_scripts},\n scripts = glob.glob(\"bin/*\"),\n description = \"Pwntools CTF framework and exploit development library.\",\n long_description = long_description,\n author = \"Gallopsled et al.\",\n author_email = \"#pwntools @ freenode.net\",\n url = 'https://pwntools.com',\n download_url = \"https://github.com/Gallopsled/pwntools/releases\",\n install_requires = install_requires,\n license = \"Mostly MIT, some GPL/BSD, see LICENSE-pwntools.txt\",\n keywords = 'pwntools exploit ctf capture the flag binary wargame overflow stack heap defcon',\n classifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 
'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Security',\n 'Topic :: Software Development :: Assemblers',\n 'Topic :: Software Development :: Debuggers',\n 'Topic :: Software Development :: Disassemblers',\n 'Topic :: Software Development :: Embedded Systems',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: System :: System Shells',\n 'Topic :: Utilities',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python2\nimport glob\nimport os\nimport platform\nimport sys\nfrom distutils.command.install import INSTALL_SCHEMES\nfrom distutils.sysconfig import get_python_inc\nfrom distutils.util import convert_path\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Get all template files\ntemplates = []\nfor dirpath, dirnames, filenames in os.walk(convert_path('pwnlib/shellcraft/templates')):\n for f in filenames:\n templates.append(os.path.relpath(os.path.join(dirpath, f), 'pwnlib'))\n\n# This makes pwntools-LICENSE.txt appear with the package folders\nfor scheme in INSTALL_SCHEMES.values():\n scheme['data'] = scheme['purelib']\n\n# Find all of the console scripts\nconsole_scripts = []\n\nfor filename in glob.glob('pwnlib/commandline/*'):\n filename = os.path.basename(filename)\n filename, ext = os.path.splitext(filename)\n\n if ext != '.py' or '__init__' in filename:\n continue\n\n script = '%s=pwnlib.commandline.%s:main' % (filename, filename)\n console_scripts.append(script)\n\ninstall_requires = ['paramiko>=1.15.2',\n 'mako>=1.0.0',\n 'pyelftools>=0.2.4',\n 'capstone',\n 'ropgadget>=5.3',\n 'pyserial>=2.7',\n 'requests>=2.0',\n 'pip>=6.0.8',\n 'tox>=1.8.1',\n 'pygments>=2.0',\n 'pysocks',\n 'python-dateutil',\n 'pypandoc',\n 'packaging']\n\n# This is a hack until somebody ports psutil to OpenBSD\nif platform.system() != 'OpenBSD':\n install_requires.append('psutil>=2.1.3')\n\n# Check that the user has installed the Python development headers\nPythonH = os.path.join(get_python_inc(), 'Python.h')\nif not os.path.exists(PythonH):\n print >> sys.stderr, \"You must install the Python development headers!\"\n print >> sys.stderr, \"$ apt-get install python-dev\"\n sys.exit(-1)\n\n# Convert README.md to reStructuredText for PyPI\nlong_description = ''\ntry:\n import pypandoc\n try:\n pypandoc.get_pandoc_path()\n except OSError:\n pypandoc.download_pandoc()\n long_description = pypandoc.convert_file('README.md', 'rst')\nexcept ImportError:\n pass\n\n\nsetup(\n name = 'pwntools',\n packages = find_packages(),\n version = '3.0.3',\n data_files = [('',\n glob.glob('*.md') + glob.glob('*.txt')),\n ],\n package_data = {\n 'pwnlib': [\n 'data/crcsums.txt',\n 'data/useragents/useragents.txt',\n 'data/binutils/*',\n 'data/includes/*.h',\n 'data/includes/*/*.h',\n ] + templates,\n },\n entry_points = {'console_scripts': console_scripts},\n scripts = glob.glob(\"bin/*\"),\n description = \"Pwntools CTF framework and exploit development library.\",\n long_description = long_description,\n author = \"Gallopsled et al.\",\n author_email = \"#pwntools @ freenode.net\",\n url = 'https://pwntools.com',\n download_url = \"https://github.com/Gallopsled/pwntools/releases\",\n install_requires = install_requires,\n license = \"Mostly MIT, some GPL/BSD, see LICENSE-pwntools.txt\",\n keywords = 
'pwntools exploit ctf capture the flag binary wargame overflow stack heap defcon',\n classifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Security',\n 'Topic :: Software Development :: Assemblers',\n 'Topic :: Software Development :: Debuggers',\n 'Topic :: Software Development :: Disassemblers',\n 'Topic :: Software Development :: Embedded Systems',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: System :: System Shells',\n 'Topic :: Utilities',\n ]\n)\n", "path": "setup.py"}]} | 1,556 | 100 |
gh_patches_debug_2598 | rasdani/github-patches | git_diff | ivy-llc__ivy-13425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
normal
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/torch/random_sampling.py`
Content:
```
1 import ivy
2 from ivy.func_wrapper import with_supported_dtypes
3 from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back
4
5 try:
6 from torch import Generator
7 except ImportError:
8 from types import SimpleNamespace
9
10 Generator = SimpleNamespace
11
12
13 def seed() -> int:
14 """Returns a 64 bit number used to seed the RNG"""
15 return int(ivy.randint(-(2**63), 2**63 - 1))
16
17
18 @to_ivy_arrays_and_back
19 def manual_seed(seed: int):
20 ivy.seed(seed_value=seed)
21 return Generator().manual_seed(seed)
22
23
24 @with_supported_dtypes(
25 {
26 "1.11.0 and below": (
27 "float32",
28 "float64",
29 )
30 },
31 "torch",
32 )
33 @to_ivy_arrays_and_back
34 def multinomial(input, num_samples, replacement=False, *, generator=None, out=None):
35 return ivy.multinomial(
36 num_samples + 1, # doesn't matter because `probs` is provided, but should be
37 # greater than the number of samples
38 num_samples,
39 probs=input,
40 replace=replacement,
41 out=out,
42 )
43
44
45 @with_supported_dtypes(
46 {
47 "1.11.0 and below": (
48 "float32",
49 "float64",
50 )
51 },
52 "torch",
53 )
54 @to_ivy_arrays_and_back
55 def poisson(input, generator=None):
56 return ivy.poisson(input, shape=None)
57
58
59 @to_ivy_arrays_and_back
60 def rand(
61 size,
62 *,
63 generator=None,
64 out=None,
65 dtype=None,
66 layout=None,
67 device=None,
68 requires_grad=False,
69 pin_memory=False
70 ):
71 return ivy.random_uniform(
72 shape=size,
73 out=out,
74 dtype=dtype,
75 device=device,
76 )
77
78
79 @to_ivy_arrays_and_back
80 def rand_like(
81 input,
82 *,
83 dtype=None,
84 layout=None,
85 device=None,
86 requires_grad=False,
87 memory_format=False
88 ):
89 shape = input.shape
90 if not dtype:
91 dtype = input.dtype
92
93 return ivy.random_uniform(
94 shape=shape,
95 dtype=dtype,
96 device=device,
97 )
98
99
100 @to_ivy_arrays_and_back
101 def randn(
102 size,
103 *,
104 generator=None,
105 out=None,
106 dtype=None,
107 layout=None,
108 device=None,
109 requires_grad=False,
110 pin_memory=False
111 ):
112 return ivy.random_normal(
113 shape=size,
114 out=out,
115 dtype=dtype,
116 device=device,
117 )
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/torch/random_sampling.py b/ivy/functional/frontends/torch/random_sampling.py
--- a/ivy/functional/frontends/torch/random_sampling.py
+++ b/ivy/functional/frontends/torch/random_sampling.py
@@ -76,6 +76,20 @@
)
+@with_supported_dtypes(
+ {
+ "1.11.0 and below": (
+ "float32",
+ "float64",
+ )
+ },
+ "torch",
+)
+@to_ivy_arrays_and_back
+def normal(mean, std, *, generator=None, out=None):
+ return ivy.random_normal(mean=mean, std=std, out=out)
+
+
@to_ivy_arrays_and_back
def rand_like(
input,
| {"golden_diff": "diff --git a/ivy/functional/frontends/torch/random_sampling.py b/ivy/functional/frontends/torch/random_sampling.py\n--- a/ivy/functional/frontends/torch/random_sampling.py\n+++ b/ivy/functional/frontends/torch/random_sampling.py\n@@ -76,6 +76,20 @@\n )\n \n \n+@with_supported_dtypes(\n+ {\n+ \"1.11.0 and below\": (\n+ \"float32\",\n+ \"float64\",\n+ )\n+ },\n+ \"torch\",\n+)\n+@to_ivy_arrays_and_back\n+def normal(mean, std, *, generator=None, out=None):\n+ return ivy.random_normal(mean=mean, std=std, out=out)\n+ \n+\n @to_ivy_arrays_and_back\n def rand_like(\n input,\n", "issue": "normal\n\n", "before_files": [{"content": "import ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\n\ntry:\n from torch import Generator\nexcept ImportError:\n from types import SimpleNamespace\n\n Generator = SimpleNamespace\n\n\ndef seed() -> int:\n \"\"\"Returns a 64 bit number used to seed the RNG\"\"\"\n return int(ivy.randint(-(2**63), 2**63 - 1))\n\n\n@to_ivy_arrays_and_back\ndef manual_seed(seed: int):\n ivy.seed(seed_value=seed)\n return Generator().manual_seed(seed)\n\n\n@with_supported_dtypes(\n {\n \"1.11.0 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"torch\",\n)\n@to_ivy_arrays_and_back\ndef multinomial(input, num_samples, replacement=False, *, generator=None, out=None):\n return ivy.multinomial(\n num_samples + 1, # doesn't matter because `probs` is provided, but should be\n # greater than the number of samples\n num_samples,\n probs=input,\n replace=replacement,\n out=out,\n )\n\n\n@with_supported_dtypes(\n {\n \"1.11.0 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"torch\",\n)\n@to_ivy_arrays_and_back\ndef poisson(input, generator=None):\n return ivy.poisson(input, shape=None)\n\n\n@to_ivy_arrays_and_back\ndef rand(\n size,\n *,\n generator=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False\n):\n return ivy.random_uniform(\n shape=size,\n out=out,\n dtype=dtype,\n device=device,\n )\n\n\n@to_ivy_arrays_and_back\ndef rand_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=False\n):\n shape = input.shape\n if not dtype:\n dtype = input.dtype\n\n return ivy.random_uniform(\n shape=shape,\n dtype=dtype,\n device=device,\n )\n\n\n@to_ivy_arrays_and_back\ndef randn(\n size,\n *,\n generator=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False\n):\n return ivy.random_normal(\n shape=size,\n out=out,\n dtype=dtype,\n device=device,\n )\n", "path": "ivy/functional/frontends/torch/random_sampling.py"}], "after_files": [{"content": "import ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\n\ntry:\n from torch import Generator\nexcept ImportError:\n from types import SimpleNamespace\n\n Generator = SimpleNamespace\n\n\ndef seed() -> int:\n \"\"\"Returns a 64 bit number used to seed the RNG\"\"\"\n return int(ivy.randint(-(2**63), 2**63 - 1))\n\n\n@to_ivy_arrays_and_back\ndef manual_seed(seed: int):\n ivy.seed(seed_value=seed)\n return Generator().manual_seed(seed)\n\n\n@with_supported_dtypes(\n {\n \"1.11.0 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"torch\",\n)\n@to_ivy_arrays_and_back\ndef multinomial(input, num_samples, replacement=False, *, generator=None, out=None):\n return ivy.multinomial(\n num_samples + 1, # doesn't 
matter because `probs` is provided, but should be\n # greater than the number of samples\n num_samples,\n probs=input,\n replace=replacement,\n out=out,\n )\n\n\n@with_supported_dtypes(\n {\n \"1.11.0 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"torch\",\n)\n@to_ivy_arrays_and_back\ndef poisson(input, generator=None):\n return ivy.poisson(input, shape=None)\n\n\n@to_ivy_arrays_and_back\ndef rand(\n size,\n *,\n generator=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False\n):\n return ivy.random_uniform(\n shape=size,\n out=out,\n dtype=dtype,\n device=device,\n )\n\n\n@with_supported_dtypes(\n {\n \"1.11.0 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"torch\",\n)\n@to_ivy_arrays_and_back\ndef normal(mean, std, *, generator=None, out=None):\n return ivy.random_normal(mean=mean, std=std, out=out)\n \n\n@to_ivy_arrays_and_back\ndef rand_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=False\n):\n shape = input.shape\n if not dtype:\n dtype = input.dtype\n\n return ivy.random_uniform(\n shape=shape,\n dtype=dtype,\n device=device,\n )\n\n\n@to_ivy_arrays_and_back\ndef randn(\n size,\n *,\n generator=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False\n):\n return ivy.random_normal(\n shape=size,\n out=out,\n dtype=dtype,\n device=device,\n )\n", "path": "ivy/functional/frontends/torch/random_sampling.py"}]} | 1,083 | 181 |
gh_patches_debug_4790 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3040 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
show total ever given
Suggested [via Twitter](https://twitter.com/tripflex/status/532597015210131456):
> is there no way for me to see the total I have donated? I know I can see it weekly, but what about overall total?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/utils/history.py`
Content:
```
1 def iter_payday_events(db, participant):
2 """Yields payday events for the given participant.
3 """
4 username = participant.username
5 exchanges = db.all("""
6 SELECT *
7 FROM exchanges
8 WHERE participant=%s
9 """, (username,), back_as=dict)
10 transfers = db.all("""
11 SELECT *
12 FROM transfers
13 WHERE tipper=%(username)s OR tippee=%(username)s
14 """, locals(), back_as=dict)
15
16 if not (exchanges or transfers):
17 return
18
19 payday_dates = db.all("""
20 SELECT ts_start::date
21 FROM paydays
22 ORDER BY ts_start ASC
23 """)
24
25 balance = participant.balance
26 prev_date = None
27 get_timestamp = lambda e: e['timestamp']
28 events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)
29 for event in events:
30
31 event['balance'] = balance
32
33 event_date = event['timestamp'].date()
34 if event_date != prev_date:
35 if prev_date:
36 yield dict(kind='day-close', balance=balance)
37 day_open = dict(kind='day-open', date=event_date, balance=balance)
38 if payday_dates:
39 while payday_dates and payday_dates[-1] > event_date:
40 payday_dates.pop()
41 payday_date = payday_dates[-1] if payday_dates else None
42 if event_date == payday_date:
43 day_open['payday_number'] = len(payday_dates) - 1
44 yield day_open
45 prev_date = event_date
46
47 if 'fee' in event:
48 if event['amount'] > 0:
49 kind = 'charge'
50 if event['status'] in (None, 'succeeded'):
51 balance -= event['amount']
52 else:
53 kind = 'credit'
54 if event['status'] != 'failed':
55 balance -= event['amount'] - event['fee']
56 else:
57 kind = 'transfer'
58 if event['tippee'] == username:
59 balance -= event['amount']
60 else:
61 balance += event['amount']
62 event['kind'] = kind
63
64 yield event
65
66 yield dict(kind='day-close', balance='0.00')
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py
--- a/gratipay/utils/history.py
+++ b/gratipay/utils/history.py
@@ -16,6 +16,13 @@
if not (exchanges or transfers):
return
+ if transfers:
+ yield dict(
+ kind='totals',
+ given=sum(t['amount'] for t in transfers if t['tipper'] == username),
+ received=sum(t['amount'] for t in transfers if t['tippee'] == username),
+ )
+
payday_dates = db.all("""
SELECT ts_start::date
FROM paydays
| {"golden_diff": "diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py\n--- a/gratipay/utils/history.py\n+++ b/gratipay/utils/history.py\n@@ -16,6 +16,13 @@\n if not (exchanges or transfers):\n return\n \n+ if transfers:\n+ yield dict(\n+ kind='totals',\n+ given=sum(t['amount'] for t in transfers if t['tipper'] == username),\n+ received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n+ )\n+\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n", "issue": "show total ever given\nSuggested [via Twitter](https://twitter.com/tripflex/status/532597015210131456):\n\n> is there no way for me to see the total I have donated? I know I can see it weekly, but what about overall total?\n\n", "before_files": [{"content": "def iter_payday_events(db, participant):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n WHERE participant=%s\n \"\"\", (username,), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n WHERE tipper=%(username)s OR tippee=%(username)s\n \"\"\", locals(), back_as=dict)\n\n if not (exchanges or transfers):\n return\n\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n ORDER BY ts_start ASC\n \"\"\")\n\n balance = participant.balance\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n for event in events:\n\n event['balance'] = balance\n\n event_date = event['timestamp'].date()\n if event_date != prev_date:\n if prev_date:\n yield dict(kind='day-close', balance=balance)\n day_open = dict(kind='day-open', date=event_date, balance=balance)\n if payday_dates:\n while payday_dates and payday_dates[-1] > event_date:\n payday_dates.pop()\n payday_date = payday_dates[-1] if payday_dates else None\n if event_date == payday_date:\n day_open['payday_number'] = len(payday_dates) - 1\n yield day_open\n prev_date = event_date\n\n if 'fee' in event:\n if event['amount'] > 0:\n kind = 'charge'\n if event['status'] in (None, 'succeeded'):\n balance -= event['amount']\n else:\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n balance -= event['amount']\n else:\n balance += event['amount']\n event['kind'] = kind\n\n yield event\n\n yield dict(kind='day-close', balance='0.00')\n", "path": "gratipay/utils/history.py"}], "after_files": [{"content": "def iter_payday_events(db, participant):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n WHERE participant=%s\n \"\"\", (username,), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n WHERE tipper=%(username)s OR tippee=%(username)s\n \"\"\", locals(), back_as=dict)\n\n if not (exchanges or transfers):\n return\n\n if transfers:\n yield dict(\n kind='totals',\n given=sum(t['amount'] for t in transfers if t['tipper'] == username),\n received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n )\n\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n ORDER BY ts_start ASC\n \"\"\")\n\n balance = participant.balance\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n for event in events:\n\n event['balance'] = balance\n\n event_date = 
event['timestamp'].date()\n if event_date != prev_date:\n if prev_date:\n yield dict(kind='day-close', balance=balance)\n day_open = dict(kind='day-open', date=event_date, balance=balance)\n if payday_dates:\n while payday_dates and payday_dates[-1] > event_date:\n payday_dates.pop()\n payday_date = payday_dates[-1] if payday_dates else None\n if event_date == payday_date:\n day_open['payday_number'] = len(payday_dates) - 1\n yield day_open\n prev_date = event_date\n\n if 'fee' in event:\n if event['amount'] > 0:\n kind = 'charge'\n if event['status'] in (None, 'succeeded'):\n balance -= event['amount']\n else:\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n balance -= event['amount']\n else:\n balance += event['amount']\n event['kind'] = kind\n\n yield event\n\n yield dict(kind='day-close', balance='0.00')\n", "path": "gratipay/utils/history.py"}]} | 922 | 142 |
gh_patches_debug_5928 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-616 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to install with opentracing extras
I was following the [OpenTracing setup instructions](https://docs.datadoghq.com/tracing/advanced_usage/?tab=python#opentracing) but got a warning about missing extras:
```sh
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip list
Package Version
---------- -------
pip 18.0
setuptools 40.4.1
wheel 0.31.1
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> python --version
Python 2.7.14
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip --version
pip 18.0 from /home/sam/.local/share/virtualenvs/blah-YneZd-6L/lib/python2.7/site-packages/pip (python 2.7)
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip install 'ddtrace[opentracing] == 0.14.0'
Collecting ddtrace[opentracing]==0.14.0
ddtrace 0.14.0 does not provide the extra 'opentracing'
Collecting msgpack-python (from ddtrace[opentracing]==0.14.0)
Collecting wrapt (from ddtrace[opentracing]==0.14.0)
Installing collected packages: msgpack-python, wrapt, ddtrace
Successfully installed ddtrace-0.14.0 msgpack-python-0.5.6 wrapt-1.10.11
```
> `ddtrace 0.14.0 does not provide the extra 'opentracing'`
Happens on Python 3.6 as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3 import re
4
5 from setuptools import setup, find_packages
6 from setuptools.command.test import test as TestCommand
7
8
9 def get_version(package):
10 """
11 Return package version as listed in `__version__` in `__init__.py`.
12 This method prevents to import packages at setup-time.
13 """
14 init_py = open(os.path.join(package, '__init__.py')).read()
15 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
16
17
18 class Tox(TestCommand):
19
20 user_options = [('tox-args=', 'a', "Arguments to pass to tox")]
21
22 def initialize_options(self):
23 TestCommand.initialize_options(self)
24 self.tox_args = None
25
26 def finalize_options(self):
27 TestCommand.finalize_options(self)
28 self.test_args = []
29 self.test_suite = True
30
31 def run_tests(self):
32 # import here, cause outside the eggs aren't loaded
33 import tox
34 import shlex
35 args = self.tox_args
36 if args:
37 args = shlex.split(self.tox_args)
38 errno = tox.cmdline(args=args)
39 sys.exit(errno)
40
41
42 version = get_version('ddtrace')
43 # Append a suffix to the version for dev builds
44 if os.environ.get('VERSION_SUFFIX'):
45 version = '{v}+{s}'.format(
46 v=version,
47 s=os.environ.get('VERSION_SUFFIX'),
48 )
49
50 setup(
51 name='ddtrace',
52 version=version,
53 description='Datadog tracing code',
54 url='https://github.com/DataDog/dd-trace-py',
55 author='Datadog, Inc.',
56 author_email='[email protected]',
57 license='BSD',
58 packages=find_packages(exclude=['tests*']),
59 install_requires=[
60 "wrapt",
61 "msgpack-python",
62 ],
63 extra_requires={
64 # users can include opentracing by having:
65 # install_requires=["ddtrace[opentracing]", ...]
66 "opentracing": ["opentracing"],
67 },
68 # plugin tox
69 tests_require=['tox', 'flake8'],
70 cmdclass={'test': Tox},
71 entry_points={
72 'console_scripts': [
73 'ddtrace-run = ddtrace.commands.ddtrace_run:main'
74 ]
75 },
76 classifiers=[
77 'Programming Language :: Python',
78 'Programming Language :: Python :: 2.7',
79 'Programming Language :: Python :: 3.4',
80 'Programming Language :: Python :: 3.5',
81 'Programming Language :: Python :: 3.6',
82 ],
83 )
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,10 +60,10 @@
"wrapt",
"msgpack-python",
],
- extra_requires={
+ extras_require={
# users can include opentracing by having:
# install_requires=["ddtrace[opentracing]", ...]
- "opentracing": ["opentracing"],
+ "opentracing": ["opentracing>=2.0.0"],
},
# plugin tox
tests_require=['tox', 'flake8'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,10 +60,10 @@\n \"wrapt\",\n \"msgpack-python\",\n ],\n- extra_requires={\n+ extras_require={\n # users can include opentracing by having:\n # install_requires=[\"ddtrace[opentracing]\", ...]\n- \"opentracing\": [\"opentracing\"],\n+ \"opentracing\": [\"opentracing>=2.0.0\"],\n },\n # plugin tox\n tests_require=['tox', 'flake8'],\n", "issue": "Unable to install with opentracing extras\nI was following the [OpenTracing setup instructions](https://docs.datadoghq.com/tracing/advanced_usage/?tab=python#opentracing) but got a warning about missing extras:\r\n\r\n```sh\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip list\r\nPackage Version\r\n---------- -------\r\npip 18.0 \r\nsetuptools 40.4.1 \r\nwheel 0.31.1 \r\n\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> python --version\r\nPython 2.7.14\r\n\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip --version\r\npip 18.0 from /home/sam/.local/share/virtualenvs/blah-YneZd-6L/lib/python2.7/site-packages/pip (python 2.7)\r\n\r\n\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip install 'ddtrace[opentracing] == 0.14.0'\r\nCollecting ddtrace[opentracing]==0.14.0\r\n ddtrace 0.14.0 does not provide the extra 'opentracing'\r\nCollecting msgpack-python (from ddtrace[opentracing]==0.14.0)\r\nCollecting wrapt (from ddtrace[opentracing]==0.14.0)\r\nInstalling collected packages: msgpack-python, wrapt, ddtrace\r\nSuccessfully installed ddtrace-0.14.0 msgpack-python-0.5.6 wrapt-1.10.11\r\n```\r\n\r\n> `ddtrace 0.14.0 does not provide the extra 'opentracing'`\r\n\r\nHappens on Python 3.6 as well.\n", "before_files": [{"content": "import os\nimport sys\nimport re\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `__init__.py`.\n This method prevents to import packages at setup-time.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\nclass Tox(TestCommand):\n\n user_options = [('tox-args=', 'a', \"Arguments to pass to tox\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.tox_args = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import tox\n import shlex\n args = self.tox_args\n if args:\n args = shlex.split(self.tox_args)\n errno = tox.cmdline(args=args)\n sys.exit(errno)\n\n\nversion = get_version('ddtrace')\n# Append a suffix to the version for dev builds\nif os.environ.get('VERSION_SUFFIX'):\n version = '{v}+{s}'.format(\n v=version,\n s=os.environ.get('VERSION_SUFFIX'),\n )\n\nsetup(\n name='ddtrace',\n version=version,\n description='Datadog tracing code',\n url='https://github.com/DataDog/dd-trace-py',\n author='Datadog, Inc.',\n author_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=['tests*']),\n install_requires=[\n \"wrapt\",\n \"msgpack-python\",\n ],\n extra_requires={\n # users can include opentracing by having:\n # install_requires=[\"ddtrace[opentracing]\", ...]\n \"opentracing\": [\"opentracing\"],\n },\n # plugin tox\n tests_require=['tox', 'flake8'],\n cmdclass={'test': Tox},\n entry_points={\n 'console_scripts': [\n 'ddtrace-run = ddtrace.commands.ddtrace_run:main'\n ]\n },\n classifiers=[\n 
'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\nimport re\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `__init__.py`.\n This method prevents to import packages at setup-time.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\nclass Tox(TestCommand):\n\n user_options = [('tox-args=', 'a', \"Arguments to pass to tox\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.tox_args = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import tox\n import shlex\n args = self.tox_args\n if args:\n args = shlex.split(self.tox_args)\n errno = tox.cmdline(args=args)\n sys.exit(errno)\n\n\nversion = get_version('ddtrace')\n# Append a suffix to the version for dev builds\nif os.environ.get('VERSION_SUFFIX'):\n version = '{v}+{s}'.format(\n v=version,\n s=os.environ.get('VERSION_SUFFIX'),\n )\n\nsetup(\n name='ddtrace',\n version=version,\n description='Datadog tracing code',\n url='https://github.com/DataDog/dd-trace-py',\n author='Datadog, Inc.',\n author_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=['tests*']),\n install_requires=[\n \"wrapt\",\n \"msgpack-python\",\n ],\n extras_require={\n # users can include opentracing by having:\n # install_requires=[\"ddtrace[opentracing]\", ...]\n \"opentracing\": [\"opentracing>=2.0.0\"],\n },\n # plugin tox\n tests_require=['tox', 'flake8'],\n cmdclass={'test': Tox},\n entry_points={\n 'console_scripts': [\n 'ddtrace-run = ddtrace.commands.ddtrace_run:main'\n ]\n },\n classifiers=[\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}]} | 1,395 | 129 |
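The defect in the entry above is a misspelled setuptools keyword: `extra_requires` is not a recognized argument, so the `opentracing` extra silently never exists. Below is a minimal, self-contained sketch of how optional extras are declared and consumed; the project and package names are illustrative assumptions, not the real ddtrace metadata.

```python
# Illustrative setup.py sketch: extras_require (note the spelling) defines
# optional dependency groups that users opt into at install time.
from setuptools import setup, find_packages

setup(
    name="example-tracer",                      # hypothetical project name
    version="0.1.0",
    packages=find_packages(exclude=["tests*"]),
    install_requires=["wrapt"],                 # always installed
    extras_require={
        # installed only with: pip install "example-tracer[opentracing]"
        "opentracing": ["opentracing>=2.0.0"],
    },
)
```

A misspelled keyword such as `extra_requires` is treated as an unknown distribution option and ignored, which is why `pip install 'ddtrace[opentracing]'` reported that the extra did not exist.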
gh_patches_debug_33594 | rasdani/github-patches | git_diff | saleor__saleor-5530 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ProductCreate weight mutation
[two screenshots from the original issue omitted: the productCreate mutation input and the "value must be a float" error response]
productCreate mutation
I followed the "amount unit" as said in the comment, but returns a "'StringValue(value='10.00 kg')' value must be a float."
Sorry just a beginner
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/core/scalars.py`
Content:
```
1 import decimal
2
3 import graphene
4 from measurement.measures import Weight
5
6 from ...core.weight import convert_weight, get_default_weight_unit
7
8
9 class Decimal(graphene.Float):
10 """Custom Decimal implementation.
11
12 Returns Decimal as a float in the API,
13 parses float to the Decimal on the way back.
14 """
15
16 @staticmethod
17 def parse_literal(node):
18 try:
19 return decimal.Decimal(node.value)
20 except decimal.DecimalException:
21 return None
22
23 @staticmethod
24 def parse_value(value):
25 try:
26 # Converting the float to str before parsing it to Decimal is
27 # necessary to keep the decimal places as typed
28 value = str(value)
29 return decimal.Decimal(value)
30 except decimal.DecimalException:
31 return None
32
33
34 class WeightScalar(graphene.Scalar):
35 @staticmethod
36 def parse_value(value):
37 # Expects value to be a string "amount unit" separated by a single
38 # space.
39 try:
40 value = decimal.Decimal(value)
41 except decimal.DecimalException:
42 return None
43 default_unit = get_default_weight_unit()
44 return Weight(**{default_unit: value})
45
46 @staticmethod
47 def serialize(weight):
48 if isinstance(weight, Weight):
49 default_unit = get_default_weight_unit()
50 if weight.unit != default_unit:
51 weight = convert_weight(weight, default_unit)
52 return str(weight)
53 return None
54
55 @staticmethod
56 def parse_literal(node):
57 return node
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/core/scalars.py b/saleor/graphql/core/scalars.py
--- a/saleor/graphql/core/scalars.py
+++ b/saleor/graphql/core/scalars.py
@@ -1,6 +1,8 @@
import decimal
import graphene
+from graphql.language import ast
+from graphql.error import GraphQLError
from measurement.measures import Weight
from ...core.weight import convert_weight, get_default_weight_unit
@@ -34,14 +36,14 @@
class WeightScalar(graphene.Scalar):
@staticmethod
def parse_value(value):
- # Expects value to be a string "amount unit" separated by a single
- # space.
- try:
- value = decimal.Decimal(value)
- except decimal.DecimalException:
- return None
- default_unit = get_default_weight_unit()
- return Weight(**{default_unit: value})
+ weight = None
+ if isinstance(value, dict):
+ weight = Weight(**{value["unit"]: value["value"]})
+ else:
+ weight = WeightScalar.parse_decimal(value)
+ if not weight:
+ raise GraphQLError(f"Unsupported value: {value}")
+ return weight
@staticmethod
def serialize(weight):
@@ -54,4 +56,35 @@
@staticmethod
def parse_literal(node):
- return node
+ weight = None
+ if isinstance(node, ast.ObjectValue):
+ weight = WeightScalar.parse_literal_object(node)
+ else:
+ weight = WeightScalar.parse_decimal(node.value)
+ if not weight:
+ raise GraphQLError(f"Unsupported value: {node.value}")
+ return weight
+
+ @staticmethod
+ def parse_decimal(value):
+ try:
+ value = decimal.Decimal(value)
+ except decimal.DecimalException:
+ return None
+ default_unit = get_default_weight_unit()
+ return Weight(**{default_unit: value})
+
+ @staticmethod
+ def parse_literal_object(node):
+ value = 0
+ unit = get_default_weight_unit()
+
+ for field in node.fields:
+ if field.name.value == "value":
+ try:
+ value = decimal.Decimal(field.value.value)
+ except decimal.DecimalException:
+ raise GraphQLError(f"Unsupported value: {field.value.value}")
+ if field.name.value == "unit":
+ unit = field.value.value
+ return Weight(**{unit: value})
| {"golden_diff": "diff --git a/saleor/graphql/core/scalars.py b/saleor/graphql/core/scalars.py\n--- a/saleor/graphql/core/scalars.py\n+++ b/saleor/graphql/core/scalars.py\n@@ -1,6 +1,8 @@\n import decimal\n \n import graphene\n+from graphql.language import ast\n+from graphql.error import GraphQLError\n from measurement.measures import Weight\n \n from ...core.weight import convert_weight, get_default_weight_unit\n@@ -34,14 +36,14 @@\n class WeightScalar(graphene.Scalar):\n @staticmethod\n def parse_value(value):\n- # Expects value to be a string \"amount unit\" separated by a single\n- # space.\n- try:\n- value = decimal.Decimal(value)\n- except decimal.DecimalException:\n- return None\n- default_unit = get_default_weight_unit()\n- return Weight(**{default_unit: value})\n+ weight = None\n+ if isinstance(value, dict):\n+ weight = Weight(**{value[\"unit\"]: value[\"value\"]})\n+ else:\n+ weight = WeightScalar.parse_decimal(value)\n+ if not weight:\n+ raise GraphQLError(f\"Unsupported value: {value}\")\n+ return weight\n \n @staticmethod\n def serialize(weight):\n@@ -54,4 +56,35 @@\n \n @staticmethod\n def parse_literal(node):\n- return node\n+ weight = None\n+ if isinstance(node, ast.ObjectValue):\n+ weight = WeightScalar.parse_literal_object(node)\n+ else:\n+ weight = WeightScalar.parse_decimal(node.value)\n+ if not weight:\n+ raise GraphQLError(f\"Unsupported value: {node.value}\")\n+ return weight\n+\n+ @staticmethod\n+ def parse_decimal(value):\n+ try:\n+ value = decimal.Decimal(value)\n+ except decimal.DecimalException:\n+ return None\n+ default_unit = get_default_weight_unit()\n+ return Weight(**{default_unit: value})\n+\n+ @staticmethod\n+ def parse_literal_object(node):\n+ value = 0\n+ unit = get_default_weight_unit()\n+\n+ for field in node.fields:\n+ if field.name.value == \"value\":\n+ try:\n+ value = decimal.Decimal(field.value.value)\n+ except decimal.DecimalException:\n+ raise GraphQLError(f\"Unsupported value: {field.value.value}\")\n+ if field.name.value == \"unit\":\n+ unit = field.value.value\n+ return Weight(**{unit: value})\n", "issue": "ProductCreate weight mutation\n\r\n\r\n\r\n\r\n\r\nproductCreate mutation\r\n\r\nI followed the \"amount unit\" as said in the comment, but returns a \"'StringValue(value='10.00 kg')' value must be a float.\"\r\nSorry just a beginner\n", "before_files": [{"content": "import decimal\n\nimport graphene\nfrom measurement.measures import Weight\n\nfrom ...core.weight import convert_weight, get_default_weight_unit\n\n\nclass Decimal(graphene.Float):\n \"\"\"Custom Decimal implementation.\n\n Returns Decimal as a float in the API,\n parses float to the Decimal on the way back.\n \"\"\"\n\n @staticmethod\n def parse_literal(node):\n try:\n return decimal.Decimal(node.value)\n except decimal.DecimalException:\n return None\n\n @staticmethod\n def parse_value(value):\n try:\n # Converting the float to str before parsing it to Decimal is\n # necessary to keep the decimal places as typed\n value = str(value)\n return decimal.Decimal(value)\n except decimal.DecimalException:\n return None\n\n\nclass WeightScalar(graphene.Scalar):\n @staticmethod\n def parse_value(value):\n # Expects value to be a string \"amount unit\" separated by a single\n # space.\n try:\n value = decimal.Decimal(value)\n except decimal.DecimalException:\n return None\n default_unit = get_default_weight_unit()\n return Weight(**{default_unit: value})\n\n @staticmethod\n def serialize(weight):\n if isinstance(weight, Weight):\n default_unit = get_default_weight_unit()\n if 
weight.unit != default_unit:\n weight = convert_weight(weight, default_unit)\n return str(weight)\n return None\n\n @staticmethod\n def parse_literal(node):\n return node\n", "path": "saleor/graphql/core/scalars.py"}], "after_files": [{"content": "import decimal\n\nimport graphene\nfrom graphql.language import ast\nfrom graphql.error import GraphQLError\nfrom measurement.measures import Weight\n\nfrom ...core.weight import convert_weight, get_default_weight_unit\n\n\nclass Decimal(graphene.Float):\n \"\"\"Custom Decimal implementation.\n\n Returns Decimal as a float in the API,\n parses float to the Decimal on the way back.\n \"\"\"\n\n @staticmethod\n def parse_literal(node):\n try:\n return decimal.Decimal(node.value)\n except decimal.DecimalException:\n return None\n\n @staticmethod\n def parse_value(value):\n try:\n # Converting the float to str before parsing it to Decimal is\n # necessary to keep the decimal places as typed\n value = str(value)\n return decimal.Decimal(value)\n except decimal.DecimalException:\n return None\n\n\nclass WeightScalar(graphene.Scalar):\n @staticmethod\n def parse_value(value):\n weight = None\n if isinstance(value, dict):\n weight = Weight(**{value[\"unit\"]: value[\"value\"]})\n else:\n weight = WeightScalar.parse_decimal(value)\n if not weight:\n raise GraphQLError(f\"Unsupported value: {value}\")\n return weight\n\n @staticmethod\n def serialize(weight):\n if isinstance(weight, Weight):\n default_unit = get_default_weight_unit()\n if weight.unit != default_unit:\n weight = convert_weight(weight, default_unit)\n return str(weight)\n return None\n\n @staticmethod\n def parse_literal(node):\n weight = None\n if isinstance(node, ast.ObjectValue):\n weight = WeightScalar.parse_literal_object(node)\n else:\n weight = WeightScalar.parse_decimal(node.value)\n if not weight:\n raise GraphQLError(f\"Unsupported value: {node.value}\")\n return weight\n\n @staticmethod\n def parse_decimal(value):\n try:\n value = decimal.Decimal(value)\n except decimal.DecimalException:\n return None\n default_unit = get_default_weight_unit()\n return Weight(**{default_unit: value})\n\n @staticmethod\n def parse_literal_object(node):\n value = 0\n unit = get_default_weight_unit()\n\n for field in node.fields:\n if field.name.value == \"value\":\n try:\n value = decimal.Decimal(field.value.value)\n except decimal.DecimalException:\n raise GraphQLError(f\"Unsupported value: {field.value.value}\")\n if field.name.value == \"unit\":\n unit = field.value.value\n return Weight(**{unit: value})\n", "path": "saleor/graphql/core/scalars.py"}]} | 850 | 544 |
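The patch above works because graphene routes variable values through a scalar's `parse_value` but inline literals through `parse_literal`, and `parse_literal` receives an AST node rather than a plain Python value. A minimal sketch of that split follows; the AST class names assume graphql-core 2.x (as used in the diff above) and differ in 3.x (e.g. `StringValueNode`), so treat it as illustrative rather than a drop-in fix.

```python
# Sketch only: a custom scalar that accepts numeric or string literals.
import decimal

import graphene
from graphql.language import ast


class SimpleDecimalScalar(graphene.Scalar):
    @staticmethod
    def serialize(value):
        return str(value)

    @staticmethod
    def parse_value(value):
        # Variables arrive as plain Python values (assumed numeric strings here).
        return decimal.Decimal(str(value))

    @staticmethod
    def parse_literal(node):
        # Inline literals arrive as AST nodes; read node.value, not node itself.
        if isinstance(node, (ast.StringValue, ast.IntValue, ast.FloatValue)):
            return decimal.Decimal(node.value)
        return None
```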
gh_patches_debug_23896 | rasdani/github-patches | git_diff | rdmorganiser__rdmo-524 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sorting causes problems with import
### Description / Beschreibung
When using different `uri_prefix`es for, e.g. a domain import, the sorting by `uri` destroys the order in the file, and parent Attributes are imported *after* their children (with a different `uri_prefix` earlier in the alphabet). This is the problematic line:
https://github.com/rdmorganiser/rdmo/blob/master/rdmo/core/xml.py#L52
### Expected behaviour / Erwartetes Verhalten
I am not sure if we could get rid of the sorting, we could also sort by `path` (which would give the field some meaning in the xml again). Ideas? @triole @MyPyDavid
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rdmo/core/xml.py`
Content:
```
1 import logging
2 import re
3
4 import defusedxml.ElementTree as ET
5
6 log = logging.getLogger(__name__)
7
8
9 def read_xml_file(file_name):
10 try:
11 return ET.parse(file_name).getroot()
12 except Exception as e:
13 log.error('Xml parsing error: ' + str(e))
14
15
16 def parse_xml_string(string):
17 try:
18 return ET.fromstring(string)
19 except Exception as e:
20 log.error('Xml parsing error: ' + str(e))
21
22
23 def flat_xml_to_elements(treenode):
24 elements = []
25 ns_map = get_ns_map(treenode)
26 uri_attrib = get_ns_tag('dc:uri', ns_map)
27
28 for node in treenode:
29
30 element = {
31 'uri': get_uri(node, ns_map),
32 'type': node.tag
33 }
34
35 for subnode in node:
36 tag = strip_ns(subnode.tag, ns_map)
37
38 if uri_attrib in subnode.attrib:
39 # this node has an uri!
40 element[tag] = subnode.attrib[uri_attrib]
41 elif 'lang' in subnode.attrib:
42 # this node has the lang attribute!
43 element['%s_%s' % (tag, subnode.attrib['lang'])] = subnode.text
44 elif list(subnode):
45 # this node is a list!
46 element[tag] = [subsubnode.attrib[uri_attrib] for subsubnode in subnode]
47 else:
48 element[tag] = subnode.text
49
50 elements.append(element)
51
52 elements = sort_elements_by_key(elements, 'uri')
53 return elements
54
55
56 def get_ns_tag(tag, ns_map):
57 tag_split = tag.split(':')
58 try:
59 return '{%s}%s' % (ns_map[tag_split[0]], tag_split[1])
60 except KeyError:
61 return None
62
63
64 def get_ns_map(treenode):
65 ns_map = {}
66 treestring = ET.tostring(treenode, encoding='utf8', method='xml')
67
68 for match in re.finditer(r'(xmlns:)(.*?)(=")(.*?)(")', str(treestring)):
69 if match:
70 ns_map[match.group(2)] = match.group(4)
71
72 return ns_map
73
74
75 def get_uri(treenode, ns_map):
76 if treenode is not None:
77 ns_tag = get_ns_tag('dc:uri', ns_map)
78 if ns_tag is not None:
79 return treenode.attrib.get(ns_tag)
80
81
82 def strip_ns(tag, ns_map):
83 for ns in ns_map.values():
84 if tag.startswith('{%s}' % ns):
85 return tag.replace('{%s}' % ns, '')
86 return tag
87
88
89 def filter_elements_by_type(elements, element_type):
90 for element in elements:
91 if element['type'] == element_type:
92 yield element
93
94
95 def sort_elements_by_key(dictlist, key, reverse=False):
96 return sorted(dictlist, key=lambda k: k[key], reverse=reverse)
97
```
Path: `rdmo/core/constants.py`
Content:
```
1 from django.utils.translation import gettext_lazy as _
2
3 VALUE_TYPE_TEXT = 'text'
4 VALUE_TYPE_URL = 'url'
5 VALUE_TYPE_INTEGER = 'integer'
6 VALUE_TYPE_FLOAT = 'float'
7 VALUE_TYPE_BOOLEAN = 'boolean'
8 VALUE_TYPE_DATETIME = 'datetime'
9 VALUE_TYPE_OPTIONS = 'option'
10 VALUE_TYPE_EMAIL = 'email'
11 VALUE_TYPE_PHONE = 'phone'
12 VALUE_TYPE_FILE = 'file'
13 VALUE_TYPE_CHOICES = (
14 (VALUE_TYPE_TEXT, _('Text')),
15 (VALUE_TYPE_URL, _('URL')),
16 (VALUE_TYPE_INTEGER, _('Integer')),
17 (VALUE_TYPE_FLOAT, _('Float')),
18 (VALUE_TYPE_BOOLEAN, _('Boolean')),
19 (VALUE_TYPE_DATETIME, _('Datetime')),
20 (VALUE_TYPE_EMAIL, _('E-mail')),
21 (VALUE_TYPE_PHONE, _('Phone')),
22 (VALUE_TYPE_OPTIONS, _('Option')),
23 (VALUE_TYPE_FILE, _('File'))
24 )
25
26 PERMISSIONS = {
27 'condition': (
28 'conditions.add_condition', 'conditions.change_condition', 'conditions.delete_condition'
29 ),
30 'attribute': (
31 'domain.add_attribute', 'domain.change_attribute', 'domain.delete_attribute'
32 ),
33 'optionset': (
34 'options.add_optionset', 'options.change_optionset', 'options.delete_optionset'
35 ),
36 'option': (
37 'options.add_option', 'options.change_option', 'options.delete_option'
38 ),
39 'catalog': (
40 'questions.add_catalog', 'questions.change_catalog', 'questions.delete_catalog'
41 ),
42 'section': (
43 'questions.add_section', 'questions.change_section', 'questions.delete_section'
44 ),
45 'questionset': (
46 'questions.add_questionset', 'questions.change_questionset', 'questions.delete_questionset'
47 ),
48 'question': (
49 'questions.add_question', 'questions.change_question', 'questions.delete_question'
50 ),
51 'task': (
52 'tasks.add_task', 'tasks.change_task', 'tasks.delete_task'
53 ),
54 'view': (
55 'views.add_view', 'views.change_view', 'views.delete_view'
56 )
57 }
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rdmo/core/constants.py b/rdmo/core/constants.py
--- a/rdmo/core/constants.py
+++ b/rdmo/core/constants.py
@@ -55,3 +55,12 @@
'views.add_view', 'views.change_view', 'views.delete_view'
)
}
+
+IMPORT_SORT_ORDER = (
+ 'domain',
+ 'conditions',
+ 'options',
+ 'questions',
+ 'tasks',
+ 'views'
+)
diff --git a/rdmo/core/xml.py b/rdmo/core/xml.py
--- a/rdmo/core/xml.py
+++ b/rdmo/core/xml.py
@@ -3,6 +3,9 @@
import defusedxml.ElementTree as ET
+from .constants import IMPORT_SORT_ORDER
+
+
log = logging.getLogger(__name__)
@@ -49,7 +52,7 @@
elements.append(element)
- elements = sort_elements_by_key(elements, 'uri')
+ elements = sorted(elements, key=sort_elements)
return elements
@@ -92,5 +95,13 @@
yield element
-def sort_elements_by_key(dictlist, key, reverse=False):
- return sorted(dictlist, key=lambda k: k[key], reverse=reverse)
+def sort_elements(element):
+ # remove the uri_prefix from the uri to create the key to be sorted by
+ sort_key = element['uri'].replace(element['uri_prefix'], '')
+
+ # remove the app name from the sort_key and replace it by its import order
+ for i, item in enumerate(IMPORT_SORT_ORDER):
+ if sort_key.startswith(item):
+ sort_key = sort_key.replace(item, str(i))
+
+ return sort_key
| {"golden_diff": "diff --git a/rdmo/core/constants.py b/rdmo/core/constants.py\n--- a/rdmo/core/constants.py\n+++ b/rdmo/core/constants.py\n@@ -55,3 +55,12 @@\n 'views.add_view', 'views.change_view', 'views.delete_view'\n )\n }\n+\n+IMPORT_SORT_ORDER = (\n+ 'domain',\n+ 'conditions',\n+ 'options',\n+ 'questions',\n+ 'tasks',\n+ 'views'\n+)\ndiff --git a/rdmo/core/xml.py b/rdmo/core/xml.py\n--- a/rdmo/core/xml.py\n+++ b/rdmo/core/xml.py\n@@ -3,6 +3,9 @@\n \n import defusedxml.ElementTree as ET\n \n+from .constants import IMPORT_SORT_ORDER\n+\n+\n log = logging.getLogger(__name__)\n \n \n@@ -49,7 +52,7 @@\n \n elements.append(element)\n \n- elements = sort_elements_by_key(elements, 'uri')\n+ elements = sorted(elements, key=sort_elements)\n return elements\n \n \n@@ -92,5 +95,13 @@\n yield element\n \n \n-def sort_elements_by_key(dictlist, key, reverse=False):\n- return sorted(dictlist, key=lambda k: k[key], reverse=reverse)\n+def sort_elements(element):\n+ # remove the uri_prefix from the uri to create the key to be sorted by\n+ sort_key = element['uri'].replace(element['uri_prefix'], '')\n+\n+ # remove the app name from the sort_key and replace it by its import order\n+ for i, item in enumerate(IMPORT_SORT_ORDER):\n+ if sort_key.startswith(item):\n+ sort_key = sort_key.replace(item, str(i))\n+\n+ return sort_key\n", "issue": "Sorting causes problems with import\n### Description / Beschreibung\r\n\r\nWhen using different `uri_prefix`es for, e.g. a domain import, the sorting by `uri` destroys the order in the file, and parent Attributes are imported *after* their children (with a different `uri_prefix` earlier in the alphabet). This is the problematic line:\r\n\r\nhttps://github.com/rdmorganiser/rdmo/blob/master/rdmo/core/xml.py#L52\r\n\r\n### Expected behaviour / Erwartetes Verhalten\r\n\r\nI am not sure if we could get rid of the sorting, we could also sort by `path` (which would give the field some meaning in the xml again). Ideas? 
@triole @MyPyDavid \r\n\n", "before_files": [{"content": "import logging\nimport re\n\nimport defusedxml.ElementTree as ET\n\nlog = logging.getLogger(__name__)\n\n\ndef read_xml_file(file_name):\n try:\n return ET.parse(file_name).getroot()\n except Exception as e:\n log.error('Xml parsing error: ' + str(e))\n\n\ndef parse_xml_string(string):\n try:\n return ET.fromstring(string)\n except Exception as e:\n log.error('Xml parsing error: ' + str(e))\n\n\ndef flat_xml_to_elements(treenode):\n elements = []\n ns_map = get_ns_map(treenode)\n uri_attrib = get_ns_tag('dc:uri', ns_map)\n\n for node in treenode:\n\n element = {\n 'uri': get_uri(node, ns_map),\n 'type': node.tag\n }\n\n for subnode in node:\n tag = strip_ns(subnode.tag, ns_map)\n\n if uri_attrib in subnode.attrib:\n # this node has an uri!\n element[tag] = subnode.attrib[uri_attrib]\n elif 'lang' in subnode.attrib:\n # this node has the lang attribute!\n element['%s_%s' % (tag, subnode.attrib['lang'])] = subnode.text\n elif list(subnode):\n # this node is a list!\n element[tag] = [subsubnode.attrib[uri_attrib] for subsubnode in subnode]\n else:\n element[tag] = subnode.text\n\n elements.append(element)\n\n elements = sort_elements_by_key(elements, 'uri')\n return elements\n\n\ndef get_ns_tag(tag, ns_map):\n tag_split = tag.split(':')\n try:\n return '{%s}%s' % (ns_map[tag_split[0]], tag_split[1])\n except KeyError:\n return None\n\n\ndef get_ns_map(treenode):\n ns_map = {}\n treestring = ET.tostring(treenode, encoding='utf8', method='xml')\n\n for match in re.finditer(r'(xmlns:)(.*?)(=\")(.*?)(\")', str(treestring)):\n if match:\n ns_map[match.group(2)] = match.group(4)\n\n return ns_map\n\n\ndef get_uri(treenode, ns_map):\n if treenode is not None:\n ns_tag = get_ns_tag('dc:uri', ns_map)\n if ns_tag is not None:\n return treenode.attrib.get(ns_tag)\n\n\ndef strip_ns(tag, ns_map):\n for ns in ns_map.values():\n if tag.startswith('{%s}' % ns):\n return tag.replace('{%s}' % ns, '')\n return tag\n\n\ndef filter_elements_by_type(elements, element_type):\n for element in elements:\n if element['type'] == element_type:\n yield element\n\n\ndef sort_elements_by_key(dictlist, key, reverse=False):\n return sorted(dictlist, key=lambda k: k[key], reverse=reverse)\n", "path": "rdmo/core/xml.py"}, {"content": "from django.utils.translation import gettext_lazy as _\n\nVALUE_TYPE_TEXT = 'text'\nVALUE_TYPE_URL = 'url'\nVALUE_TYPE_INTEGER = 'integer'\nVALUE_TYPE_FLOAT = 'float'\nVALUE_TYPE_BOOLEAN = 'boolean'\nVALUE_TYPE_DATETIME = 'datetime'\nVALUE_TYPE_OPTIONS = 'option'\nVALUE_TYPE_EMAIL = 'email'\nVALUE_TYPE_PHONE = 'phone'\nVALUE_TYPE_FILE = 'file'\nVALUE_TYPE_CHOICES = (\n (VALUE_TYPE_TEXT, _('Text')),\n (VALUE_TYPE_URL, _('URL')),\n (VALUE_TYPE_INTEGER, _('Integer')),\n (VALUE_TYPE_FLOAT, _('Float')),\n (VALUE_TYPE_BOOLEAN, _('Boolean')),\n (VALUE_TYPE_DATETIME, _('Datetime')),\n (VALUE_TYPE_EMAIL, _('E-mail')),\n (VALUE_TYPE_PHONE, _('Phone')),\n (VALUE_TYPE_OPTIONS, _('Option')),\n (VALUE_TYPE_FILE, _('File'))\n)\n\nPERMISSIONS = {\n 'condition': (\n 'conditions.add_condition', 'conditions.change_condition', 'conditions.delete_condition'\n ),\n 'attribute': (\n 'domain.add_attribute', 'domain.change_attribute', 'domain.delete_attribute'\n ),\n 'optionset': (\n 'options.add_optionset', 'options.change_optionset', 'options.delete_optionset'\n ),\n 'option': (\n 'options.add_option', 'options.change_option', 'options.delete_option'\n ),\n 'catalog': (\n 'questions.add_catalog', 'questions.change_catalog', 'questions.delete_catalog'\n ),\n 
'section': (\n 'questions.add_section', 'questions.change_section', 'questions.delete_section'\n ),\n 'questionset': (\n 'questions.add_questionset', 'questions.change_questionset', 'questions.delete_questionset'\n ),\n 'question': (\n 'questions.add_question', 'questions.change_question', 'questions.delete_question'\n ),\n 'task': (\n 'tasks.add_task', 'tasks.change_task', 'tasks.delete_task'\n ),\n 'view': (\n 'views.add_view', 'views.change_view', 'views.delete_view'\n )\n}\n", "path": "rdmo/core/constants.py"}], "after_files": [{"content": "import logging\nimport re\n\nimport defusedxml.ElementTree as ET\n\nfrom .constants import IMPORT_SORT_ORDER\n\n\nlog = logging.getLogger(__name__)\n\n\ndef read_xml_file(file_name):\n try:\n return ET.parse(file_name).getroot()\n except Exception as e:\n log.error('Xml parsing error: ' + str(e))\n\n\ndef parse_xml_string(string):\n try:\n return ET.fromstring(string)\n except Exception as e:\n log.error('Xml parsing error: ' + str(e))\n\n\ndef flat_xml_to_elements(treenode):\n elements = []\n ns_map = get_ns_map(treenode)\n uri_attrib = get_ns_tag('dc:uri', ns_map)\n\n for node in treenode:\n\n element = {\n 'uri': get_uri(node, ns_map),\n 'type': node.tag\n }\n\n for subnode in node:\n tag = strip_ns(subnode.tag, ns_map)\n\n if uri_attrib in subnode.attrib:\n # this node has an uri!\n element[tag] = subnode.attrib[uri_attrib]\n elif 'lang' in subnode.attrib:\n # this node has the lang attribute!\n element['%s_%s' % (tag, subnode.attrib['lang'])] = subnode.text\n elif list(subnode):\n # this node is a list!\n element[tag] = [subsubnode.attrib[uri_attrib] for subsubnode in subnode]\n else:\n element[tag] = subnode.text\n\n elements.append(element)\n\n elements = sorted(elements, key=sort_elements)\n return elements\n\n\ndef get_ns_tag(tag, ns_map):\n tag_split = tag.split(':')\n try:\n return '{%s}%s' % (ns_map[tag_split[0]], tag_split[1])\n except KeyError:\n return None\n\n\ndef get_ns_map(treenode):\n ns_map = {}\n treestring = ET.tostring(treenode, encoding='utf8', method='xml')\n\n for match in re.finditer(r'(xmlns:)(.*?)(=\")(.*?)(\")', str(treestring)):\n if match:\n ns_map[match.group(2)] = match.group(4)\n\n return ns_map\n\n\ndef get_uri(treenode, ns_map):\n if treenode is not None:\n ns_tag = get_ns_tag('dc:uri', ns_map)\n if ns_tag is not None:\n return treenode.attrib.get(ns_tag)\n\n\ndef strip_ns(tag, ns_map):\n for ns in ns_map.values():\n if tag.startswith('{%s}' % ns):\n return tag.replace('{%s}' % ns, '')\n return tag\n\n\ndef filter_elements_by_type(elements, element_type):\n for element in elements:\n if element['type'] == element_type:\n yield element\n\n\ndef sort_elements(element):\n # remove the uri_prefix from the uri to create the key to be sorted by\n sort_key = element['uri'].replace(element['uri_prefix'], '')\n\n # remove the app name from the sort_key and replace it by its import order\n for i, item in enumerate(IMPORT_SORT_ORDER):\n if sort_key.startswith(item):\n sort_key = sort_key.replace(item, str(i))\n\n return sort_key\n", "path": "rdmo/core/xml.py"}, {"content": "from django.utils.translation import gettext_lazy as _\n\nVALUE_TYPE_TEXT = 'text'\nVALUE_TYPE_URL = 'url'\nVALUE_TYPE_INTEGER = 'integer'\nVALUE_TYPE_FLOAT = 'float'\nVALUE_TYPE_BOOLEAN = 'boolean'\nVALUE_TYPE_DATETIME = 'datetime'\nVALUE_TYPE_OPTIONS = 'option'\nVALUE_TYPE_EMAIL = 'email'\nVALUE_TYPE_PHONE = 'phone'\nVALUE_TYPE_FILE = 'file'\nVALUE_TYPE_CHOICES = (\n (VALUE_TYPE_TEXT, _('Text')),\n (VALUE_TYPE_URL, _('URL')),\n (VALUE_TYPE_INTEGER, 
_('Integer')),\n (VALUE_TYPE_FLOAT, _('Float')),\n (VALUE_TYPE_BOOLEAN, _('Boolean')),\n (VALUE_TYPE_DATETIME, _('Datetime')),\n (VALUE_TYPE_EMAIL, _('E-mail')),\n (VALUE_TYPE_PHONE, _('Phone')),\n (VALUE_TYPE_OPTIONS, _('Option')),\n (VALUE_TYPE_FILE, _('File'))\n)\n\nPERMISSIONS = {\n 'condition': (\n 'conditions.add_condition', 'conditions.change_condition', 'conditions.delete_condition'\n ),\n 'attribute': (\n 'domain.add_attribute', 'domain.change_attribute', 'domain.delete_attribute'\n ),\n 'optionset': (\n 'options.add_optionset', 'options.change_optionset', 'options.delete_optionset'\n ),\n 'option': (\n 'options.add_option', 'options.change_option', 'options.delete_option'\n ),\n 'catalog': (\n 'questions.add_catalog', 'questions.change_catalog', 'questions.delete_catalog'\n ),\n 'section': (\n 'questions.add_section', 'questions.change_section', 'questions.delete_section'\n ),\n 'questionset': (\n 'questions.add_questionset', 'questions.change_questionset', 'questions.delete_questionset'\n ),\n 'question': (\n 'questions.add_question', 'questions.change_question', 'questions.delete_question'\n ),\n 'task': (\n 'tasks.add_task', 'tasks.change_task', 'tasks.delete_task'\n ),\n 'view': (\n 'views.add_view', 'views.change_view', 'views.delete_view'\n )\n}\n\nIMPORT_SORT_ORDER = (\n 'domain',\n 'conditions',\n 'options',\n 'questions',\n 'tasks',\n 'views'\n)\n", "path": "rdmo/core/constants.py"}]} | 1,784 | 382 |
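The core idea of the patch above is to stop sorting imported elements by the full `uri`, where a differing `uri_prefix` can push children ahead of their parents, and instead sort by a key built from the app/model order plus the prefix-free path. A small self-contained sketch of that key function; the element dicts and order tuple are simplified assumptions.

```python
# Sketch: parents sort before children even when their uri_prefix differs.
IMPORT_SORT_ORDER = ('domain', 'conditions', 'options', 'questions', 'tasks', 'views')

def sort_key(element):
    # Drop the uri_prefix so it cannot dominate the ordering ...
    local = element['uri'].replace(element['uri_prefix'], '')
    # ... then rank by app/model type first, local path second.
    for i, app in enumerate(IMPORT_SORT_ORDER):
        if local.startswith(app):
            return str(i) + local[len(app):]
    return local

elements = [
    {'uri': 'https://a.example/terms/domain/parent/child',
     'uri_prefix': 'https://a.example/terms/'},
    {'uri': 'https://b.example/terms/domain/parent',
     'uri_prefix': 'https://b.example/terms/'},
]
# Sorting by raw uri would import the child first (its prefix sorts earlier);
# the derived key restores parent-before-child order.
print([e['uri'] for e in sorted(elements, key=sort_key)])
```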
gh_patches_debug_18183 | rasdani/github-patches | git_diff | keras-team__autokeras-627 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip install autokeras fails on torch ==1.1.0
### Bug Description
When executing `pip install autokeras`, I get the following message:
`Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)
No matching distribution found for torch==1.0.1.post2 (from autokeras)`
### Reproducing Steps
Steps to reproduce the behavior:
* Step 1: set up anaconda environment
* Step 2: install pytorch via their website's recommended command: `conda install pytorch-cpu torchvision-cpu -c pytorch`
* Step 3: try to install autokeras via `pip install autokeras`
* Step 4: get the following output:
```
Collecting autokeras
Downloading https://files.pythonhosted.org/packages/c2/32/de74bf6afd09925980340355a05aa6a19e7378ed91dac09e76a487bd136d/autokeras-0.4.0.tar.gz (67kB)
100% |████████████████████████████████| 71kB 1.3MB/s
Collecting scipy==1.2.0 (from autokeras)
Downloading https://files.pythonhosted.org/packages/c4/0f/2bdeab43db2b4a75863863bf7eddda8920b031b0a70494fd2665c73c9aec/scipy-1.2.0-cp36-cp36m-win_amd64.whl (31.9MB)
100% |████████████████████████████████| 31.9MB 508kB/s
Requirement already satisfied: tensorflow==1.13.1 in c:\[...]\lib\site-packages (from autokeras) (1.13.1)
Collecting torch==1.0.1.post2 (from autokeras)
Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)
No matching distribution found for torch==1.0.1.post2 (from autokeras)
```
### Expected Behavior
Autokeras is installed without error.
### Setup Details
Include the details about the versions of:
- OS type and version: Windows 10 Version 10.0.17763 Build 17763
- Python: 3.6.8 (anaconda)
- autokeras: 0.4.0
- scikit-learn: 0.20.3
- numpy:1.16.2
- keras: 2.2.4
- scipy:1.2.1
- tensorflow:1.13.1
- pytorch:1.1.0
### Additional context
<!---
Add any other context about the problem here.
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from distutils.core import setup
2 from setuptools import find_packages
3
4 setup(
5 name='autokeras',
6 packages=find_packages(exclude=('tests',)),
7 install_requires=['scipy==1.2.0',
8 'tensorflow==1.13.1',
9 'torch==1.0.1.post2',
10 'torchvision==0.2.1',
11 'numpy==1.16.1',
12 'scikit-learn==0.20.2',
13 'scikit-image==0.14.2',
14 'tqdm==4.31.0',
15 'imageio==2.5.0',
16 'requests==2.21.0'
17 ],
18 version='0.4.0',
19 description='AutoML for deep learning',
20 author='DATA Lab at Texas A&M University',
21 author_email='[email protected]',
22 url='http://autokeras.com',
23 download_url='https://github.com/keras-team/autokeras/archive/0.3.7.tar.gz',
24 keywords=['AutoML', 'keras'],
25 classifiers=[]
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,16 +4,16 @@
setup(
name='autokeras',
packages=find_packages(exclude=('tests',)),
- install_requires=['scipy==1.2.0',
- 'tensorflow==1.13.1',
- 'torch==1.0.1.post2',
- 'torchvision==0.2.1',
- 'numpy==1.16.1',
- 'scikit-learn==0.20.2',
- 'scikit-image==0.14.2',
- 'tqdm==4.31.0',
- 'imageio==2.5.0',
- 'requests==2.21.0'
+ install_requires=['scipy>=1.2.0',
+ 'tensorflow>=1.13.1',
+ 'torch>=1.0.1.post2',
+ 'torchvision>=0.2.1',
+ 'numpy>=1.16.1',
+ 'scikit-learn>=0.20.2',
+ 'scikit-image>=0.14.2',
+ 'tqdm>=4.31.0',
+ 'imageio>=2.5.0',
+ 'requests>=2.21.0'
],
version='0.4.0',
description='AutoML for deep learning',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,16 +4,16 @@\n setup(\n name='autokeras',\n packages=find_packages(exclude=('tests',)),\n- install_requires=['scipy==1.2.0',\n- 'tensorflow==1.13.1',\n- 'torch==1.0.1.post2',\n- 'torchvision==0.2.1',\n- 'numpy==1.16.1',\n- 'scikit-learn==0.20.2',\n- 'scikit-image==0.14.2',\n- 'tqdm==4.31.0',\n- 'imageio==2.5.0',\n- 'requests==2.21.0'\n+ install_requires=['scipy>=1.2.0',\n+ 'tensorflow>=1.13.1',\n+ 'torch>=1.0.1.post2',\n+ 'torchvision>=0.2.1',\n+ 'numpy>=1.16.1',\n+ 'scikit-learn>=0.20.2',\n+ 'scikit-image>=0.14.2',\n+ 'tqdm>=4.31.0',\n+ 'imageio>=2.5.0',\n+ 'requests>=2.21.0'\n ],\n version='0.4.0',\n description='AutoML for deep learning',\n", "issue": "pip install autokeras fails on torch ==1.1.0\n### Bug Description\r\nWhen executing `pip install autokeras`, I get the following message:\r\n`Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)\r\nNo matching distribution found for torch==1.0.1.post2 (from autokeras)`\r\n\r\n### Reproducing Steps\r\nSteps to reproduce the behavior:\r\n * Step 1: set up anaconda environment\r\n * Step 2: install pytorch via their website's recommended command: `conda install pytorch-cpu torchvision-cpu -c pytorch`\r\n * Step 3: try to install autokeras via `pip install autokeras`\r\n * Step 4: get the following output:\r\n\r\n```\r\nCollecting autokeras\r\n Downloading https://files.pythonhosted.org/packages/c2/32/de74bf6afd09925980340355a05aa6a19e7378ed91dac09e76a487bd136d/autokeras-0.4.0.tar.gz (67kB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 71kB 1.3MB/s\r\nCollecting scipy==1.2.0 (from autokeras)\r\n Downloading https://files.pythonhosted.org/packages/c4/0f/2bdeab43db2b4a75863863bf7eddda8920b031b0a70494fd2665c73c9aec/scipy-1.2.0-cp36-cp36m-win_amd64.whl (31.9MB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 31.9MB 508kB/s\r\nRequirement already satisfied: tensorflow==1.13.1 in c:\\[...]\\lib\\site-packages (from autokeras) (1.13.1)\r\nCollecting torch==1.0.1.post2 (from autokeras)\r\n Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)\r\nNo matching distribution found for torch==1.0.1.post2 (from autokeras)\r\n```\r\n\r\n### Expected Behavior\r\nAutokeras is installed without error.\r\n\r\n### Setup Details\r\nInclude the details about the versions of:\r\n - OS type and version: Windows 10 Version\t10.0.17763 Build 17763\r\n - Python: 3.6.8 (anaconda)\r\n - autokeras: 0.4.0\r\n - scikit-learn: 0.20.3\r\n - numpy:1.16.2\r\n - keras: 2.2.4\r\n - scipy:1.2.1\r\n - tensorflow:1.13.1\r\n - pytorch:1.1.0\r\n\r\n### Additional context\r\n<!---\r\nAdd any other context about the problem here.\r\n-->\r\n\n", "before_files": [{"content": "from distutils.core import setup\nfrom setuptools import find_packages\n\nsetup(\n name='autokeras',\n packages=find_packages(exclude=('tests',)),\n install_requires=['scipy==1.2.0',\n 'tensorflow==1.13.1',\n 'torch==1.0.1.post2',\n 'torchvision==0.2.1',\n 'numpy==1.16.1',\n 'scikit-learn==0.20.2',\n 'scikit-image==0.14.2',\n 'tqdm==4.31.0',\n 'imageio==2.5.0',\n 
'requests==2.21.0'\n ],\n version='0.4.0',\n description='AutoML for deep learning',\n author='DATA Lab at Texas A&M University',\n author_email='[email protected]',\n url='http://autokeras.com',\n download_url='https://github.com/keras-team/autokeras/archive/0.3.7.tar.gz',\n keywords=['AutoML', 'keras'],\n classifiers=[]\n)\n", "path": "setup.py"}], "after_files": [{"content": "from distutils.core import setup\nfrom setuptools import find_packages\n\nsetup(\n name='autokeras',\n packages=find_packages(exclude=('tests',)),\n install_requires=['scipy>=1.2.0',\n 'tensorflow>=1.13.1',\n 'torch>=1.0.1.post2',\n 'torchvision>=0.2.1',\n 'numpy>=1.16.1',\n 'scikit-learn>=0.20.2',\n 'scikit-image>=0.14.2',\n 'tqdm>=4.31.0',\n 'imageio>=2.5.0',\n 'requests>=2.21.0'\n ],\n version='0.4.0',\n description='AutoML for deep learning',\n author='DATA Lab at Texas A&M University',\n author_email='[email protected]',\n url='http://autokeras.com',\n download_url='https://github.com/keras-team/autokeras/archive/0.3.7.tar.gz',\n keywords=['AutoML', 'keras'],\n classifiers=[]\n)\n", "path": "setup.py"}]} | 1,274 | 331 |
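The failure in this entry comes from exact version pins in `install_requires`: the pip output shows that `torch==1.0.1.post2` simply was not available from PyPI for the reporter's platform, so the pin cannot be satisfied at all, and the patch relaxes every pin to a lower bound. A minimal sketch of the difference; the names mirror the entry, while the exact bounds are illustrative.

```python
# Sketch: library metadata should usually constrain, not pin, its dependencies.
from setuptools import setup, find_packages

setup(
    name="example-automl",                      # hypothetical package
    version="0.4.0",
    packages=find_packages(exclude=("tests",)),
    install_requires=[
        "torch>=1.0.1",    # lower bound: pip may pick any installable newer build
        "scipy>=1.2.0",    # an '==' pin fails outright wherever that exact
        "numpy>=1.16.1",   # release has no wheel or sdist for the platform
    ],
)
```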
gh_patches_debug_7877 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-679 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OverflowError in Point.py
It seems like [this old (2013) bug](https://bugs.launchpad.net/pyqtgraph/+bug/1234528) still hasn't been fixed. I've just bumped into the same error.
I will test solution suggested by author.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyqtgraph/Point.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Point.py - Extension of QPointF which adds a few missing methods.
4 Copyright 2010 Luke Campagnola
5 Distributed under MIT/X11 license. See license.txt for more infomation.
6 """
7
8 from .Qt import QtCore
9 import numpy as np
10
11 def clip(x, mn, mx):
12 if x > mx:
13 return mx
14 if x < mn:
15 return mn
16 return x
17
18 class Point(QtCore.QPointF):
19 """Extension of QPointF which adds a few missing methods."""
20
21 def __init__(self, *args):
22 if len(args) == 1:
23 if isinstance(args[0], QtCore.QSizeF):
24 QtCore.QPointF.__init__(self, float(args[0].width()), float(args[0].height()))
25 return
26 elif isinstance(args[0], float) or isinstance(args[0], int):
27 QtCore.QPointF.__init__(self, float(args[0]), float(args[0]))
28 return
29 elif hasattr(args[0], '__getitem__'):
30 QtCore.QPointF.__init__(self, float(args[0][0]), float(args[0][1]))
31 return
32 elif len(args) == 2:
33 QtCore.QPointF.__init__(self, args[0], args[1])
34 return
35 QtCore.QPointF.__init__(self, *args)
36
37 def __len__(self):
38 return 2
39
40 def __reduce__(self):
41 return (Point, (self.x(), self.y()))
42
43 def __getitem__(self, i):
44 if i == 0:
45 return self.x()
46 elif i == 1:
47 return self.y()
48 else:
49 raise IndexError("Point has no index %s" % str(i))
50
51 def __setitem__(self, i, x):
52 if i == 0:
53 return self.setX(x)
54 elif i == 1:
55 return self.setY(x)
56 else:
57 raise IndexError("Point has no index %s" % str(i))
58
59 def __radd__(self, a):
60 return self._math_('__radd__', a)
61
62 def __add__(self, a):
63 return self._math_('__add__', a)
64
65 def __rsub__(self, a):
66 return self._math_('__rsub__', a)
67
68 def __sub__(self, a):
69 return self._math_('__sub__', a)
70
71 def __rmul__(self, a):
72 return self._math_('__rmul__', a)
73
74 def __mul__(self, a):
75 return self._math_('__mul__', a)
76
77 def __rdiv__(self, a):
78 return self._math_('__rdiv__', a)
79
80 def __div__(self, a):
81 return self._math_('__div__', a)
82
83 def __truediv__(self, a):
84 return self._math_('__truediv__', a)
85
86 def __rtruediv__(self, a):
87 return self._math_('__rtruediv__', a)
88
89 def __rpow__(self, a):
90 return self._math_('__rpow__', a)
91
92 def __pow__(self, a):
93 return self._math_('__pow__', a)
94
95 def _math_(self, op, x):
96 #print "point math:", op
97 #try:
98 #fn = getattr(QtCore.QPointF, op)
99 #pt = fn(self, x)
100 #print fn, pt, self, x
101 #return Point(pt)
102 #except AttributeError:
103 x = Point(x)
104 return Point(getattr(self[0], op)(x[0]), getattr(self[1], op)(x[1]))
105
106 def length(self):
107 """Returns the vector length of this Point."""
108 return (self[0]**2 + self[1]**2) ** 0.5
109
110 def norm(self):
111 """Returns a vector in the same direction with unit length."""
112 return self / self.length()
113
114 def angle(self, a):
115 """Returns the angle in degrees between this vector and the vector a."""
116 n1 = self.length()
117 n2 = a.length()
118 if n1 == 0. or n2 == 0.:
119 return None
120 ## Probably this should be done with arctan2 instead..
121 ang = np.arccos(clip(self.dot(a) / (n1 * n2), -1.0, 1.0)) ### in radians
122 c = self.cross(a)
123 if c > 0:
124 ang *= -1.
125 return ang * 180. / np.pi
126
127 def dot(self, a):
128 """Returns the dot product of a and this Point."""
129 a = Point(a)
130 return self[0]*a[0] + self[1]*a[1]
131
132 def cross(self, a):
133 a = Point(a)
134 return self[0]*a[1] - self[1]*a[0]
135
136 def proj(self, b):
137 """Return the projection of this vector onto the vector b"""
138 b1 = b / b.length()
139 return self.dot(b1) * b1
140
141 def __repr__(self):
142 return "Point(%f, %f)" % (self[0], self[1])
143
144
145 def min(self):
146 return min(self[0], self[1])
147
148 def max(self):
149 return max(self[0], self[1])
150
151 def copy(self):
152 return Point(self)
153
154 def toQPoint(self):
155 return QtCore.QPoint(*self)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyqtgraph/Point.py b/pyqtgraph/Point.py
--- a/pyqtgraph/Point.py
+++ b/pyqtgraph/Point.py
@@ -105,7 +105,13 @@
def length(self):
"""Returns the vector length of this Point."""
- return (self[0]**2 + self[1]**2) ** 0.5
+ try:
+ return (self[0]**2 + self[1]**2) ** 0.5
+ except OverflowError:
+ try:
+ return self[1] / np.sin(np.arctan2(self[1], self[0]))
+ except OverflowError:
+ return np.inf
def norm(self):
"""Returns a vector in the same direction with unit length."""
| {"golden_diff": "diff --git a/pyqtgraph/Point.py b/pyqtgraph/Point.py\n--- a/pyqtgraph/Point.py\n+++ b/pyqtgraph/Point.py\n@@ -105,7 +105,13 @@\n \n def length(self):\n \"\"\"Returns the vector length of this Point.\"\"\"\n- return (self[0]**2 + self[1]**2) ** 0.5\n+ try:\n+ return (self[0]**2 + self[1]**2) ** 0.5\n+ except OverflowError:\n+ try:\n+ return self[1] / np.sin(np.arctan2(self[1], self[0]))\n+ except OverflowError:\n+ return np.inf\n \n def norm(self):\n \"\"\"Returns a vector in the same direction with unit length.\"\"\"\n", "issue": "OverflowError in Point.py\nIt seems like [this old (2013) bug](https://bugs.launchpad.net/pyqtgraph/+bug/1234528) still hasn't been fixed. I've just bumped into the same error.\r\n\r\nI will test solution suggested by author.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nPoint.py - Extension of QPointF which adds a few missing methods.\nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. See license.txt for more infomation.\n\"\"\"\n\nfrom .Qt import QtCore\nimport numpy as np\n\ndef clip(x, mn, mx):\n if x > mx:\n return mx\n if x < mn:\n return mn\n return x\n\nclass Point(QtCore.QPointF):\n \"\"\"Extension of QPointF which adds a few missing methods.\"\"\"\n \n def __init__(self, *args):\n if len(args) == 1:\n if isinstance(args[0], QtCore.QSizeF):\n QtCore.QPointF.__init__(self, float(args[0].width()), float(args[0].height()))\n return\n elif isinstance(args[0], float) or isinstance(args[0], int):\n QtCore.QPointF.__init__(self, float(args[0]), float(args[0]))\n return\n elif hasattr(args[0], '__getitem__'):\n QtCore.QPointF.__init__(self, float(args[0][0]), float(args[0][1]))\n return\n elif len(args) == 2:\n QtCore.QPointF.__init__(self, args[0], args[1])\n return\n QtCore.QPointF.__init__(self, *args)\n \n def __len__(self):\n return 2\n \n def __reduce__(self):\n return (Point, (self.x(), self.y()))\n \n def __getitem__(self, i):\n if i == 0:\n return self.x()\n elif i == 1:\n return self.y()\n else:\n raise IndexError(\"Point has no index %s\" % str(i))\n \n def __setitem__(self, i, x):\n if i == 0:\n return self.setX(x)\n elif i == 1:\n return self.setY(x)\n else:\n raise IndexError(\"Point has no index %s\" % str(i))\n \n def __radd__(self, a):\n return self._math_('__radd__', a)\n \n def __add__(self, a):\n return self._math_('__add__', a)\n \n def __rsub__(self, a):\n return self._math_('__rsub__', a)\n \n def __sub__(self, a):\n return self._math_('__sub__', a)\n \n def __rmul__(self, a):\n return self._math_('__rmul__', a)\n \n def __mul__(self, a):\n return self._math_('__mul__', a)\n \n def __rdiv__(self, a):\n return self._math_('__rdiv__', a)\n \n def __div__(self, a):\n return self._math_('__div__', a)\n \n def __truediv__(self, a):\n return self._math_('__truediv__', a)\n \n def __rtruediv__(self, a):\n return self._math_('__rtruediv__', a)\n \n def __rpow__(self, a):\n return self._math_('__rpow__', a)\n \n def __pow__(self, a):\n return self._math_('__pow__', a)\n \n def _math_(self, op, x):\n #print \"point math:\", op\n #try:\n #fn = getattr(QtCore.QPointF, op)\n #pt = fn(self, x)\n #print fn, pt, self, x\n #return Point(pt)\n #except AttributeError:\n x = Point(x)\n return Point(getattr(self[0], op)(x[0]), getattr(self[1], op)(x[1]))\n \n def length(self):\n \"\"\"Returns the vector length of this Point.\"\"\"\n return (self[0]**2 + self[1]**2) ** 0.5\n \n def norm(self):\n \"\"\"Returns a vector in the same direction with unit length.\"\"\"\n return self / self.length()\n \n def 
angle(self, a):\n \"\"\"Returns the angle in degrees between this vector and the vector a.\"\"\"\n n1 = self.length()\n n2 = a.length()\n if n1 == 0. or n2 == 0.:\n return None\n ## Probably this should be done with arctan2 instead..\n ang = np.arccos(clip(self.dot(a) / (n1 * n2), -1.0, 1.0)) ### in radians\n c = self.cross(a)\n if c > 0:\n ang *= -1.\n return ang * 180. / np.pi\n \n def dot(self, a):\n \"\"\"Returns the dot product of a and this Point.\"\"\"\n a = Point(a)\n return self[0]*a[0] + self[1]*a[1]\n \n def cross(self, a):\n a = Point(a)\n return self[0]*a[1] - self[1]*a[0]\n \n def proj(self, b):\n \"\"\"Return the projection of this vector onto the vector b\"\"\"\n b1 = b / b.length()\n return self.dot(b1) * b1\n \n def __repr__(self):\n return \"Point(%f, %f)\" % (self[0], self[1])\n \n \n def min(self):\n return min(self[0], self[1])\n \n def max(self):\n return max(self[0], self[1])\n \n def copy(self):\n return Point(self)\n \n def toQPoint(self):\n return QtCore.QPoint(*self)\n", "path": "pyqtgraph/Point.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nPoint.py - Extension of QPointF which adds a few missing methods.\nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. See license.txt for more infomation.\n\"\"\"\n\nfrom .Qt import QtCore\nimport numpy as np\n\ndef clip(x, mn, mx):\n if x > mx:\n return mx\n if x < mn:\n return mn\n return x\n\nclass Point(QtCore.QPointF):\n \"\"\"Extension of QPointF which adds a few missing methods.\"\"\"\n \n def __init__(self, *args):\n if len(args) == 1:\n if isinstance(args[0], QtCore.QSizeF):\n QtCore.QPointF.__init__(self, float(args[0].width()), float(args[0].height()))\n return\n elif isinstance(args[0], float) or isinstance(args[0], int):\n QtCore.QPointF.__init__(self, float(args[0]), float(args[0]))\n return\n elif hasattr(args[0], '__getitem__'):\n QtCore.QPointF.__init__(self, float(args[0][0]), float(args[0][1]))\n return\n elif len(args) == 2:\n QtCore.QPointF.__init__(self, args[0], args[1])\n return\n QtCore.QPointF.__init__(self, *args)\n \n def __len__(self):\n return 2\n \n def __reduce__(self):\n return (Point, (self.x(), self.y()))\n \n def __getitem__(self, i):\n if i == 0:\n return self.x()\n elif i == 1:\n return self.y()\n else:\n raise IndexError(\"Point has no index %s\" % str(i))\n \n def __setitem__(self, i, x):\n if i == 0:\n return self.setX(x)\n elif i == 1:\n return self.setY(x)\n else:\n raise IndexError(\"Point has no index %s\" % str(i))\n \n def __radd__(self, a):\n return self._math_('__radd__', a)\n \n def __add__(self, a):\n return self._math_('__add__', a)\n \n def __rsub__(self, a):\n return self._math_('__rsub__', a)\n \n def __sub__(self, a):\n return self._math_('__sub__', a)\n \n def __rmul__(self, a):\n return self._math_('__rmul__', a)\n \n def __mul__(self, a):\n return self._math_('__mul__', a)\n \n def __rdiv__(self, a):\n return self._math_('__rdiv__', a)\n \n def __div__(self, a):\n return self._math_('__div__', a)\n \n def __truediv__(self, a):\n return self._math_('__truediv__', a)\n \n def __rtruediv__(self, a):\n return self._math_('__rtruediv__', a)\n \n def __rpow__(self, a):\n return self._math_('__rpow__', a)\n \n def __pow__(self, a):\n return self._math_('__pow__', a)\n \n def _math_(self, op, x):\n #print \"point math:\", op\n #try:\n #fn = getattr(QtCore.QPointF, op)\n #pt = fn(self, x)\n #print fn, pt, self, x\n #return Point(pt)\n #except AttributeError:\n x = Point(x)\n return Point(getattr(self[0], op)(x[0]), getattr(self[1], 
op)(x[1]))\n \n def length(self):\n \"\"\"Returns the vector length of this Point.\"\"\"\n try:\n return (self[0]**2 + self[1]**2) ** 0.5\n except OverflowError:\n try:\n return self[1] / np.sin(np.arctan2(self[1], self[0]))\n except OverflowError:\n return np.inf\n \n def norm(self):\n \"\"\"Returns a vector in the same direction with unit length.\"\"\"\n return self / self.length()\n \n def angle(self, a):\n \"\"\"Returns the angle in degrees between this vector and the vector a.\"\"\"\n n1 = self.length()\n n2 = a.length()\n if n1 == 0. or n2 == 0.:\n return None\n ## Probably this should be done with arctan2 instead..\n ang = np.arccos(clip(self.dot(a) / (n1 * n2), -1.0, 1.0)) ### in radians\n c = self.cross(a)\n if c > 0:\n ang *= -1.\n return ang * 180. / np.pi\n \n def dot(self, a):\n \"\"\"Returns the dot product of a and this Point.\"\"\"\n a = Point(a)\n return self[0]*a[0] + self[1]*a[1]\n \n def cross(self, a):\n a = Point(a)\n return self[0]*a[1] - self[1]*a[0]\n \n def proj(self, b):\n \"\"\"Return the projection of this vector onto the vector b\"\"\"\n b1 = b / b.length()\n return self.dot(b1) * b1\n \n def __repr__(self):\n return \"Point(%f, %f)\" % (self[0], self[1])\n \n \n def min(self):\n return min(self[0], self[1])\n \n def max(self):\n return max(self[0], self[1])\n \n def copy(self):\n return Point(self)\n \n def toQPoint(self):\n return QtCore.QPoint(*self)\n", "path": "pyqtgraph/Point.py"}]} | 1,900 | 178 |
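The patch above sidesteps the overflow by noting that squaring a large Python float raises `OverflowError` even when the final vector length is representable. A tiny standalone reproduction of the failure and of the trigonometric fallback used in the fix:

```python
# Sketch: why (x**2 + y**2) ** 0.5 can overflow and how arctan2 avoids it.
import numpy as np

x, y = 1e200, 1e200
try:
    length = (x ** 2 + y ** 2) ** 0.5      # 1e200**2 exceeds the double range
except OverflowError:
    # Same hypotenuse, computed without squaring: y / sin(angle of (x, y)).
    length = y / np.sin(np.arctan2(y, x))
print(length)                              # ~1.414e200
```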
gh_patches_debug_39980 | rasdani/github-patches | git_diff | microsoft__ptvsd-1161 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sys.stdin not None and missing encoding attribute when running with pythonw.exe
## Environment data
- PTVSD version: 4.2
- OS and version: windows 10
- Python version (& distribution if applicable, e.g. Anaconda): CPython 3.7 using **pythonw.exe**
- Using VS Code or Visual Studio: VS
## Actual behavior
None has no attribute encoding exception
## Expected behavior
Either sys.stdin.encoding works, or sys.stdin is None (it is None when running without debugging)
## Steps to reproduce:
1. Debug this code using pythonw.exe (no console)
```
import sys
with open('issue4866.txt', 'wt') as f:
f.write('hello\n')
f.write(str(type(sys.stdin)) + '\n')
if sys.stdin is not None:
f.write(str(sys.stdin.encoding) + '\n')
f.write('bye\n')
```
From https://github.com/Microsoft/PTVS/issues/4866
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py`
Content:
```
1 from _pydevd_bundle import pydevd_constants
2
3 IS_PY3K = pydevd_constants.IS_PY3K
4
5 class IORedirector:
6 '''
7 This class works to wrap a stream (stdout/stderr) with an additional redirect.
8 '''
9
10 def __init__(self, original, new_redirect, wrap_buffer=False):
11 '''
12 :param stream original:
13 The stream to be wrapped (usually stdout/stderr).
14
15 :param stream new_redirect:
16 Usually IOBuf (below).
17
18 :param bool wrap_buffer:
19 Whether to create a buffer attribute (needed to mimick python 3 s
20 tdout/stderr which has a buffer to write binary data).
21 '''
22 self._redirect_to = (original, new_redirect)
23 if wrap_buffer and hasattr(original, 'buffer'):
24 self.buffer = IORedirector(original.buffer, new_redirect.buffer, False)
25
26 def write(self, s):
27 # Note that writing to the original stream may fail for some reasons
28 # (such as trying to write something that's not a string or having it closed).
29 for r in self._redirect_to:
30 r.write(s)
31
32 def isatty(self):
33 return self._redirect_to[0].isatty()
34
35 def flush(self):
36 for r in self._redirect_to:
37 r.flush()
38
39 def __getattr__(self, name):
40 for r in self._redirect_to:
41 if hasattr(r, name):
42 return getattr(r, name)
43 raise AttributeError(name)
44
45 class IOBuf:
46 '''This class works as a replacement for stdio and stderr.
47 It is a buffer and when its contents are requested, it will erase what
48 it has so far so that the next return will not return the same contents again.
49 '''
50 def __init__(self):
51 self.buflist = []
52 import os
53 self.encoding = os.environ.get('PYTHONIOENCODING', 'utf-8')
54
55 def getvalue(self):
56 b = self.buflist
57 self.buflist = [] # clear it
58 return ''.join(b) # bytes on py2, str on py3.
59
60 def write(self, s):
61 if not IS_PY3K:
62 if isinstance(s, unicode):
63 # can't use 'errors' as kwargs in py 2.6
64 s = s.encode(self.encoding, 'replace')
65 else:
66 if isinstance(s, bytes):
67 s = s.decode(self.encoding, errors='replace')
68 self.buflist.append(s)
69
70 def isatty(self):
71 return False
72
73 def flush(self):
74 pass
75
76 def empty(self):
77 return len(self.buflist) == 0
78
79 class _RedirectionsHolder:
80 _stack_stdout = []
81 _stack_stderr = []
82
83
84 def start_redirect(keep_original_redirection=False, std='stdout'):
85 '''
86 @param std: 'stdout', 'stderr', or 'both'
87 '''
88 import sys
89 buf = IOBuf()
90
91 if std == 'both':
92 config_stds = ['stdout', 'stderr']
93 else:
94 config_stds = [std]
95
96 for std in config_stds:
97 original = getattr(sys, std)
98 stack = getattr(_RedirectionsHolder, '_stack_%s' % std)
99 stack.append(original)
100
101 if keep_original_redirection:
102 setattr(sys, std, IORedirector(getattr(sys, std), buf))
103 else:
104 setattr(sys, std, buf)
105 return buf
106
107
108 def end_redirect(std='stdout'):
109 import sys
110 if std == 'both':
111 config_stds = ['stdout', 'stderr']
112 else:
113 config_stds = [std]
114 for std in config_stds:
115 stack = getattr(_RedirectionsHolder, '_stack_%s' % std)
116 setattr(sys, std, stack.pop())
117
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py
--- a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py
+++ b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py
@@ -2,6 +2,7 @@
IS_PY3K = pydevd_constants.IS_PY3K
+
class IORedirector:
'''
This class works to wrap a stream (stdout/stderr) with an additional redirect.
@@ -10,7 +11,7 @@
def __init__(self, original, new_redirect, wrap_buffer=False):
'''
:param stream original:
- The stream to be wrapped (usually stdout/stderr).
+ The stream to be wrapped (usually stdout/stderr, but could be None).
:param stream new_redirect:
Usually IOBuf (below).
@@ -27,14 +28,19 @@
# Note that writing to the original stream may fail for some reasons
# (such as trying to write something that's not a string or having it closed).
for r in self._redirect_to:
- r.write(s)
+ if hasattr(r, 'write'):
+ r.write(s)
def isatty(self):
- return self._redirect_to[0].isatty()
+ for r in self._redirect_to:
+ if hasattr(r, 'isatty'):
+ return r.isatty()
+ return False
def flush(self):
for r in self._redirect_to:
- r.flush()
+ if hasattr(r, 'flush'):
+ r.flush()
def __getattr__(self, name):
for r in self._redirect_to:
@@ -42,11 +48,13 @@
return getattr(r, name)
raise AttributeError(name)
+
class IOBuf:
'''This class works as a replacement for stdio and stderr.
It is a buffer and when its contents are requested, it will erase what
it has so far so that the next return will not return the same contents again.
'''
+
def __init__(self):
self.buflist = []
import os
@@ -56,7 +64,7 @@
b = self.buflist
self.buflist = [] # clear it
return ''.join(b) # bytes on py2, str on py3.
-
+
def write(self, s):
if not IS_PY3K:
if isinstance(s, unicode):
@@ -76,6 +84,7 @@
def empty(self):
return len(self.buflist) == 0
+
class _RedirectionsHolder:
_stack_stdout = []
_stack_stderr = []
| {"golden_diff": "diff --git a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py\n--- a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py\n+++ b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py\n@@ -2,6 +2,7 @@\n \n IS_PY3K = pydevd_constants.IS_PY3K\n \n+\n class IORedirector:\n '''\n This class works to wrap a stream (stdout/stderr) with an additional redirect.\n@@ -10,7 +11,7 @@\n def __init__(self, original, new_redirect, wrap_buffer=False):\n '''\n :param stream original:\n- The stream to be wrapped (usually stdout/stderr).\n+ The stream to be wrapped (usually stdout/stderr, but could be None).\n \n :param stream new_redirect:\n Usually IOBuf (below).\n@@ -27,14 +28,19 @@\n # Note that writing to the original stream may fail for some reasons\n # (such as trying to write something that's not a string or having it closed).\n for r in self._redirect_to:\n- r.write(s)\n+ if hasattr(r, 'write'):\n+ r.write(s)\n \n def isatty(self):\n- return self._redirect_to[0].isatty()\n+ for r in self._redirect_to:\n+ if hasattr(r, 'isatty'):\n+ return r.isatty()\n+ return False\n \n def flush(self):\n for r in self._redirect_to:\n- r.flush()\n+ if hasattr(r, 'flush'):\n+ r.flush()\n \n def __getattr__(self, name):\n for r in self._redirect_to:\n@@ -42,11 +48,13 @@\n return getattr(r, name)\n raise AttributeError(name)\n \n+\n class IOBuf:\n '''This class works as a replacement for stdio and stderr.\n It is a buffer and when its contents are requested, it will erase what\n it has so far so that the next return will not return the same contents again.\n '''\n+\n def __init__(self):\n self.buflist = []\n import os\n@@ -56,7 +64,7 @@\n b = self.buflist\n self.buflist = [] # clear it\n return ''.join(b) # bytes on py2, str on py3.\n- \n+\n def write(self, s):\n if not IS_PY3K:\n if isinstance(s, unicode):\n@@ -76,6 +84,7 @@\n def empty(self):\n return len(self.buflist) == 0\n \n+\n class _RedirectionsHolder:\n _stack_stdout = []\n _stack_stderr = []\n", "issue": "sys.stdin not None and missing encoding attribute when running with pythonw.exe\n## Environment data\r\n\r\n- PTVSD version: 4.2\r\n- OS and version: windows 10\r\n- Python version (& distribution if applicable, e.g. Anaconda): CPython 3.7 using **pythonw.exe**\r\n- Using VS Code or Visual Studio: VS\r\n\r\n## Actual behavior\r\n\r\nNone has no attribute encoding exception\r\n\r\n## Expected behavior\r\n\r\nEither sys.stdin.encoding works, or sys.stdin is None (it is None when running without debugging)\r\n\r\n\r\n## Steps to reproduce:\r\n1. 
Debug this code using pythonw.exe (no console)\r\n```\r\nimport sys\r\n\r\nwith open('issue4866.txt', 'wt') as f:\r\n f.write('hello\\n')\r\n f.write(str(type(sys.stdin)) + '\\n')\r\n if sys.stdin is not None:\r\n f.write(str(sys.stdin.encoding) + '\\n')\r\n f.write('bye\\n')\r\n```\r\n\r\nFrom https://github.com/Microsoft/PTVS/issues/4866\n", "before_files": [{"content": "from _pydevd_bundle import pydevd_constants\n\nIS_PY3K = pydevd_constants.IS_PY3K\n\nclass IORedirector:\n '''\n This class works to wrap a stream (stdout/stderr) with an additional redirect.\n '''\n\n def __init__(self, original, new_redirect, wrap_buffer=False):\n '''\n :param stream original:\n The stream to be wrapped (usually stdout/stderr).\n\n :param stream new_redirect:\n Usually IOBuf (below).\n\n :param bool wrap_buffer:\n Whether to create a buffer attribute (needed to mimick python 3 s\n tdout/stderr which has a buffer to write binary data).\n '''\n self._redirect_to = (original, new_redirect)\n if wrap_buffer and hasattr(original, 'buffer'):\n self.buffer = IORedirector(original.buffer, new_redirect.buffer, False)\n\n def write(self, s):\n # Note that writing to the original stream may fail for some reasons\n # (such as trying to write something that's not a string or having it closed).\n for r in self._redirect_to:\n r.write(s)\n\n def isatty(self):\n return self._redirect_to[0].isatty()\n\n def flush(self):\n for r in self._redirect_to:\n r.flush()\n\n def __getattr__(self, name):\n for r in self._redirect_to:\n if hasattr(r, name):\n return getattr(r, name)\n raise AttributeError(name)\n\nclass IOBuf:\n '''This class works as a replacement for stdio and stderr.\n It is a buffer and when its contents are requested, it will erase what\n it has so far so that the next return will not return the same contents again.\n '''\n def __init__(self):\n self.buflist = []\n import os\n self.encoding = os.environ.get('PYTHONIOENCODING', 'utf-8')\n\n def getvalue(self):\n b = self.buflist\n self.buflist = [] # clear it\n return ''.join(b) # bytes on py2, str on py3.\n \n def write(self, s):\n if not IS_PY3K:\n if isinstance(s, unicode):\n # can't use 'errors' as kwargs in py 2.6\n s = s.encode(self.encoding, 'replace')\n else:\n if isinstance(s, bytes):\n s = s.decode(self.encoding, errors='replace')\n self.buflist.append(s)\n\n def isatty(self):\n return False\n\n def flush(self):\n pass\n\n def empty(self):\n return len(self.buflist) == 0\n\nclass _RedirectionsHolder:\n _stack_stdout = []\n _stack_stderr = []\n\n\ndef start_redirect(keep_original_redirection=False, std='stdout'):\n '''\n @param std: 'stdout', 'stderr', or 'both'\n '''\n import sys\n buf = IOBuf()\n\n if std == 'both':\n config_stds = ['stdout', 'stderr']\n else:\n config_stds = [std]\n\n for std in config_stds:\n original = getattr(sys, std)\n stack = getattr(_RedirectionsHolder, '_stack_%s' % std)\n stack.append(original)\n\n if keep_original_redirection:\n setattr(sys, std, IORedirector(getattr(sys, std), buf))\n else:\n setattr(sys, std, buf)\n return buf\n\n\ndef end_redirect(std='stdout'):\n import sys\n if std == 'both':\n config_stds = ['stdout', 'stderr']\n else:\n config_stds = [std]\n for std in config_stds:\n stack = getattr(_RedirectionsHolder, '_stack_%s' % std)\n setattr(sys, std, stack.pop())\n\n", "path": "src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py"}], "after_files": [{"content": "from _pydevd_bundle import pydevd_constants\n\nIS_PY3K = pydevd_constants.IS_PY3K\n\n\nclass IORedirector:\n '''\n This class works to wrap a 
stream (stdout/stderr) with an additional redirect.\n '''\n\n def __init__(self, original, new_redirect, wrap_buffer=False):\n '''\n :param stream original:\n The stream to be wrapped (usually stdout/stderr, but could be None).\n\n :param stream new_redirect:\n Usually IOBuf (below).\n\n :param bool wrap_buffer:\n Whether to create a buffer attribute (needed to mimick python 3 s\n tdout/stderr which has a buffer to write binary data).\n '''\n self._redirect_to = (original, new_redirect)\n if wrap_buffer and hasattr(original, 'buffer'):\n self.buffer = IORedirector(original.buffer, new_redirect.buffer, False)\n\n def write(self, s):\n # Note that writing to the original stream may fail for some reasons\n # (such as trying to write something that's not a string or having it closed).\n for r in self._redirect_to:\n if hasattr(r, 'write'):\n r.write(s)\n\n def isatty(self):\n for r in self._redirect_to:\n if hasattr(r, 'isatty'):\n return r.isatty()\n return False\n\n def flush(self):\n for r in self._redirect_to:\n if hasattr(r, 'flush'):\n r.flush()\n\n def __getattr__(self, name):\n for r in self._redirect_to:\n if hasattr(r, name):\n return getattr(r, name)\n raise AttributeError(name)\n\n\nclass IOBuf:\n '''This class works as a replacement for stdio and stderr.\n It is a buffer and when its contents are requested, it will erase what\n it has so far so that the next return will not return the same contents again.\n '''\n\n def __init__(self):\n self.buflist = []\n import os\n self.encoding = os.environ.get('PYTHONIOENCODING', 'utf-8')\n\n def getvalue(self):\n b = self.buflist\n self.buflist = [] # clear it\n return ''.join(b) # bytes on py2, str on py3.\n\n def write(self, s):\n if not IS_PY3K:\n if isinstance(s, unicode):\n # can't use 'errors' as kwargs in py 2.6\n s = s.encode(self.encoding, 'replace')\n else:\n if isinstance(s, bytes):\n s = s.decode(self.encoding, errors='replace')\n self.buflist.append(s)\n\n def isatty(self):\n return False\n\n def flush(self):\n pass\n\n def empty(self):\n return len(self.buflist) == 0\n\n\nclass _RedirectionsHolder:\n _stack_stdout = []\n _stack_stderr = []\n\n\ndef start_redirect(keep_original_redirection=False, std='stdout'):\n '''\n @param std: 'stdout', 'stderr', or 'both'\n '''\n import sys\n buf = IOBuf()\n\n if std == 'both':\n config_stds = ['stdout', 'stderr']\n else:\n config_stds = [std]\n\n for std in config_stds:\n original = getattr(sys, std)\n stack = getattr(_RedirectionsHolder, '_stack_%s' % std)\n stack.append(original)\n\n if keep_original_redirection:\n setattr(sys, std, IORedirector(getattr(sys, std), buf))\n else:\n setattr(sys, std, buf)\n return buf\n\n\ndef end_redirect(std='stdout'):\n import sys\n if std == 'both':\n config_stds = ['stdout', 'stderr']\n else:\n config_stds = [std]\n for std in config_stds:\n stack = getattr(_RedirectionsHolder, '_stack_%s' % std)\n setattr(sys, std, stack.pop())\n\n", "path": "src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py"}]} | 1,575 | 646 |
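
The crux of the fix in this record is that the redirector must tolerate a missing underlying stream, since `sys.stdin`/`sys.stdout`/`sys.stderr` can be `None` under pythonw.exe. A minimal self-contained sketch of that guard pattern (illustrative names only, not the actual pydevd classes):

```python
import io


class GuardedRedirector:
    """Wraps an (optionally missing) stream plus a capture buffer."""

    def __init__(self, original, new_redirect):
        # `original` may be None when the interpreter has no console.
        self._redirect_to = (original, new_redirect)

    def write(self, s):
        for r in self._redirect_to:
            if hasattr(r, 'write'):
                r.write(s)

    def isatty(self):
        for r in self._redirect_to:
            if hasattr(r, 'isatty'):
                return r.isatty()
        return False

    def flush(self):
        for r in self._redirect_to:
            if hasattr(r, 'flush'):
                r.flush()


if __name__ == '__main__':
    buf = io.StringIO()
    redirector = GuardedRedirector(None, buf)  # simulate pythonw.exe: no real stdout
    redirector.write('hello\n')                # no AttributeError, text lands in buf
    redirector.flush()
    assert buf.getvalue() == 'hello\n'
```

In use, the same kind of object is what gets assigned to `sys.stdout`/`sys.stderr`, which is why every attribute access on the wrapped stream has to be guarded rather than assumed.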
gh_patches_debug_4883 | rasdani/github-patches | git_diff | pre-commit__pre-commit-2996 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use of --dev deprecated for npm
I'm seeing this warning sometimes (output seems to be hidden unless the install fails):
```
npm WARN install Usage of the `--dev` option is deprecated. Use `--include=dev` instead.
```
Which seems to be because of this:
https://github.com/pre-commit/pre-commit/blob/fe436f1eb09dfdd67032b4f9f8dfa543fb99cf06/pre_commit/languages/node.py#L104
The problem with this command was that it installed dependencies recursively, rendering them useless (AFAICT, not a node expert). The developers decided it was only a footgun in https://github.com/npm/npm/issues/5554#issuecomment-56121953 and deprecated in https://github.com/npm/npm/issues/6200.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/node.py`
Content:
```
1 from __future__ import annotations
2
3 import contextlib
4 import functools
5 import os
6 import sys
7 from typing import Generator
8 from typing import Sequence
9
10 import pre_commit.constants as C
11 from pre_commit import lang_base
12 from pre_commit.envcontext import envcontext
13 from pre_commit.envcontext import PatchesT
14 from pre_commit.envcontext import UNSET
15 from pre_commit.envcontext import Var
16 from pre_commit.languages.python import bin_dir
17 from pre_commit.prefix import Prefix
18 from pre_commit.util import cmd_output
19 from pre_commit.util import cmd_output_b
20 from pre_commit.util import rmtree
21
22 ENVIRONMENT_DIR = 'node_env'
23 run_hook = lang_base.basic_run_hook
24
25
26 @functools.lru_cache(maxsize=1)
27 def get_default_version() -> str:
28 # nodeenv does not yet support `-n system` on windows
29 if sys.platform == 'win32':
30 return C.DEFAULT
31 # if node is already installed, we can save a bunch of setup time by
32 # using the installed version
33 elif all(lang_base.exe_exists(exe) for exe in ('node', 'npm')):
34 return 'system'
35 else:
36 return C.DEFAULT
37
38
39 def get_env_patch(venv: str) -> PatchesT:
40 if sys.platform == 'cygwin': # pragma: no cover
41 _, win_venv, _ = cmd_output('cygpath', '-w', venv)
42 install_prefix = fr'{win_venv.strip()}\bin'
43 lib_dir = 'lib'
44 elif sys.platform == 'win32': # pragma: no cover
45 install_prefix = bin_dir(venv)
46 lib_dir = 'Scripts'
47 else: # pragma: win32 no cover
48 install_prefix = venv
49 lib_dir = 'lib'
50 return (
51 ('NODE_VIRTUAL_ENV', venv),
52 ('NPM_CONFIG_PREFIX', install_prefix),
53 ('npm_config_prefix', install_prefix),
54 ('NPM_CONFIG_USERCONFIG', UNSET),
55 ('npm_config_userconfig', UNSET),
56 ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),
57 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
58 )
59
60
61 @contextlib.contextmanager
62 def in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:
63 envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)
64 with envcontext(get_env_patch(envdir)):
65 yield
66
67
68 def health_check(prefix: Prefix, version: str) -> str | None:
69 with in_env(prefix, version):
70 retcode, _, _ = cmd_output_b('node', '--version', check=False)
71 if retcode != 0: # pragma: win32 no cover
72 return f'`node --version` returned {retcode}'
73 else:
74 return None
75
76
77 def install_environment(
78 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
79 ) -> None:
80 assert prefix.exists('package.json')
81 envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)
82
83 # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath
84 if sys.platform == 'win32': # pragma: no cover
85 envdir = fr'\\?\{os.path.normpath(envdir)}'
86 cmd = [sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir]
87 if version != C.DEFAULT:
88 cmd.extend(['-n', version])
89 cmd_output_b(*cmd)
90
91 with in_env(prefix, version):
92 # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449
93 # install as if we installed from git
94
95 local_install_cmd = (
96 'npm', 'install', '--dev', '--prod',
97 '--ignore-prepublish', '--no-progress', '--no-save',
98 )
99 lang_base.setup_cmd(prefix, local_install_cmd)
100
101 _, pkg, _ = cmd_output('npm', 'pack', cwd=prefix.prefix_dir)
102 pkg = prefix.path(pkg.strip())
103
104 install = ('npm', 'install', '-g', pkg, *additional_dependencies)
105 lang_base.setup_cmd(prefix, install)
106
107 # clean these up after installation
108 if prefix.exists('node_modules'): # pragma: win32 no cover
109 rmtree(prefix.path('node_modules'))
110 os.remove(pkg)
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -93,7 +93,7 @@
# install as if we installed from git
local_install_cmd = (
- 'npm', 'install', '--dev', '--prod',
+ 'npm', 'install', '--include=dev', '--include=prod',
'--ignore-prepublish', '--no-progress', '--no-save',
)
lang_base.setup_cmd(prefix, local_install_cmd)
| {"golden_diff": "diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py\n--- a/pre_commit/languages/node.py\n+++ b/pre_commit/languages/node.py\n@@ -93,7 +93,7 @@\n # install as if we installed from git\n \n local_install_cmd = (\n- 'npm', 'install', '--dev', '--prod',\n+ 'npm', 'install', '--include=dev', '--include=prod',\n '--ignore-prepublish', '--no-progress', '--no-save',\n )\n lang_base.setup_cmd(prefix, local_install_cmd)\n", "issue": "Use of --dev deprecated for npm\nI'm seeing this warning sometimes (output seems to be hidden unless the install fails):\r\n\r\n```\r\nnpm WARN install Usage of the `--dev` option is deprecated. Use `--include=dev` instead.\r\n```\r\n\r\nWhich seems to be because of this:\r\n\r\nhttps://github.com/pre-commit/pre-commit/blob/fe436f1eb09dfdd67032b4f9f8dfa543fb99cf06/pre_commit/languages/node.py#L104\r\n\r\nThe problem with this command was that it installed dependencies recursively, rendering them useless (AFAICT, not a node expert). The developers decided it was only a footgun in https://github.com/npm/npm/issues/5554#issuecomment-56121953 and deprecated in https://github.com/npm/npm/issues/6200.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Generator\nfrom typing import Sequence\n\nimport pre_commit.constants as C\nfrom pre_commit import lang_base\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import rmtree\n\nENVIRONMENT_DIR = 'node_env'\nrun_hook = lang_base.basic_run_hook\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n # nodeenv does not yet support `-n system` on windows\n if sys.platform == 'win32':\n return C.DEFAULT\n # if node is already installed, we can save a bunch of setup time by\n # using the installed version\n elif all(lang_base.exe_exists(exe) for exe in ('node', 'npm')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = fr'{win_venv.strip()}\\bin'\n lib_dir = 'lib'\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n lib_dir = 'Scripts'\n else: # pragma: win32 no cover\n install_prefix = venv\n lib_dir = 'lib'\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NPM_CONFIG_USERCONFIG', UNSET),\n ('npm_config_userconfig', UNSET),\n ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef health_check(prefix: Prefix, version: str) -> str | None:\n with in_env(prefix, version):\n retcode, _, _ = cmd_output_b('node', '--version', check=False)\n if retcode != 0: # pragma: win32 no cover\n return f'`node --version` returned {retcode}'\n else:\n return None\n\n\ndef install_environment(\n prefix: Prefix, version: str, 
additional_dependencies: Sequence[str],\n) -> None:\n assert prefix.exists('package.json')\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = fr'\\\\?\\{os.path.normpath(envdir)}'\n cmd = [sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir]\n if version != C.DEFAULT:\n cmd.extend(['-n', version])\n cmd_output_b(*cmd)\n\n with in_env(prefix, version):\n # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449\n # install as if we installed from git\n\n local_install_cmd = (\n 'npm', 'install', '--dev', '--prod',\n '--ignore-prepublish', '--no-progress', '--no-save',\n )\n lang_base.setup_cmd(prefix, local_install_cmd)\n\n _, pkg, _ = cmd_output('npm', 'pack', cwd=prefix.prefix_dir)\n pkg = prefix.path(pkg.strip())\n\n install = ('npm', 'install', '-g', pkg, *additional_dependencies)\n lang_base.setup_cmd(prefix, install)\n\n # clean these up after installation\n if prefix.exists('node_modules'): # pragma: win32 no cover\n rmtree(prefix.path('node_modules'))\n os.remove(pkg)\n", "path": "pre_commit/languages/node.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Generator\nfrom typing import Sequence\n\nimport pre_commit.constants as C\nfrom pre_commit import lang_base\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import rmtree\n\nENVIRONMENT_DIR = 'node_env'\nrun_hook = lang_base.basic_run_hook\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n # nodeenv does not yet support `-n system` on windows\n if sys.platform == 'win32':\n return C.DEFAULT\n # if node is already installed, we can save a bunch of setup time by\n # using the installed version\n elif all(lang_base.exe_exists(exe) for exe in ('node', 'npm')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = fr'{win_venv.strip()}\\bin'\n lib_dir = 'lib'\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n lib_dir = 'Scripts'\n else: # pragma: win32 no cover\n install_prefix = venv\n lib_dir = 'lib'\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NPM_CONFIG_USERCONFIG', UNSET),\n ('npm_config_userconfig', UNSET),\n ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef health_check(prefix: Prefix, version: str) -> str | None:\n with in_env(prefix, version):\n retcode, _, _ = cmd_output_b('node', '--version', check=False)\n if retcode != 0: # pragma: win32 no cover\n return f'`node --version` returned {retcode}'\n 
else:\n return None\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None:\n assert prefix.exists('package.json')\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = fr'\\\\?\\{os.path.normpath(envdir)}'\n cmd = [sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir]\n if version != C.DEFAULT:\n cmd.extend(['-n', version])\n cmd_output_b(*cmd)\n\n with in_env(prefix, version):\n # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449\n # install as if we installed from git\n\n local_install_cmd = (\n 'npm', 'install', '--include=dev', '--include=prod',\n '--ignore-prepublish', '--no-progress', '--no-save',\n )\n lang_base.setup_cmd(prefix, local_install_cmd)\n\n _, pkg, _ = cmd_output('npm', 'pack', cwd=prefix.prefix_dir)\n pkg = prefix.path(pkg.strip())\n\n install = ('npm', 'install', '-g', pkg, *additional_dependencies)\n lang_base.setup_cmd(prefix, install)\n\n # clean these up after installation\n if prefix.exists('node_modules'): # pragma: win32 no cover\n rmtree(prefix.path('node_modules'))\n os.remove(pkg)\n", "path": "pre_commit/languages/node.py"}]} | 1,673 | 123 |
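
For reference, the replacement flags named by npm's own deprecation warning can be exercised directly. A small sketch of the invocation the fix switches to (it assumes a `package.json` in the current directory; pre-commit itself runs this through its own setup helpers):

```python
import subprocess

# Ask npm for both dev and prod dependencies of the local package using the
# non-deprecated --include flags instead of --dev.
local_install_cmd = (
    'npm', 'install', '--include=dev', '--include=prod',
    '--ignore-prepublish', '--no-progress', '--no-save',
)

if __name__ == '__main__':
    subprocess.run(local_install_cmd, check=True)
```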
gh_patches_debug_42163 | rasdani/github-patches | git_diff | cupy__cupy-2290 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`TestNpz.test_dump` test failure
https://jenkins.preferred.jp/job/chainer/job/cupy_pr/161/TEST=cupy-py3,label=mn1-p100/console
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/io/npz.py`
Content:
```
1 import numpy
2
3 import cupy
4
5
6 class NpzFile(object):
7
8 def __init__(self, npz_file):
9 self.npz_file = npz_file
10
11 def __enter__(self):
12 self.npz_file.__enter__()
13 return self
14
15 def __exit__(self, typ, val, traceback):
16 self.npz_file.__exit__(typ, val, traceback)
17
18 def __getitem__(self, key):
19 arr = self.npz_file[key]
20 return cupy.array(arr)
21
22 def close(self):
23 self.npz_file.close()
24
25
26 def load(file, mmap_mode=None):
27 """Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.
28
29 This function just calls ``numpy.load`` and then sends the arrays to the
30 current device. NPZ file is converted to NpzFile object, which defers the
31 transfer to the time of accessing the items.
32
33 Args:
34 file (file-like object or string): The file to read.
35 mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the
36 file to construct an intermediate :class:`numpy.ndarray` object and
37 transfer it to the current device.
38
39 Returns:
40 CuPy array or NpzFile object depending on the type of the file. NpzFile
41 object is a dictionary-like object with the context manager protocol
42 (which enables us to use *with* statement on it).
43
44 .. seealso:: :func:`numpy.load`
45
46 """
47 obj = numpy.load(file, mmap_mode)
48 if isinstance(obj, numpy.ndarray):
49 return cupy.array(obj)
50 elif isinstance(obj, numpy.lib.npyio.NpzFile):
51 return NpzFile(obj)
52 else:
53 return obj
54
55
56 def save(file, arr):
57 """Saves an array to a binary file in ``.npy`` format.
58
59 Args:
60 file (file or str): File or filename to save.
61 arr (array_like): Array to save. It should be able to feed to
62 :func:`cupy.asnumpy`.
63
64 .. seealso:: :func:`numpy.save`
65
66 """
67 numpy.save(file, cupy.asnumpy(arr))
68
69
70 def savez(file, *args, **kwds):
71 """Saves one or more arrays into a file in uncompressed ``.npz`` format.
72
73 Arguments without keys are treated as arguments with automatic keys named
74 ``arr_0``, ``arr_1``, etc. corresponding to the positions in the argument
75 list. The keys of arguments are used as keys in the ``.npz`` file, which
76 are used for accessing NpzFile object when the file is read by
77 :func:`cupy.load` function.
78
79 Args:
80 file (file or str): File or filename to save.
81 *args: Arrays with implicit keys.
82 **kwds: Arrays with explicit keys.
83
84 .. seealso:: :func:`numpy.savez`
85
86 """
87 args = map(cupy.asnumpy, args)
88 for key in kwds:
89 kwds[key] = cupy.asnumpy(kwds[key])
90 numpy.savez(file, *args, **kwds)
91
92
93 def savez_compressed(file, *args, **kwds):
94 """Saves one or more arrays into a file in compressed ``.npz`` format.
95
96 It is equivalent to :func:`cupy.savez` function except the output file is
97 compressed.
98
99 .. seealso::
100 :func:`cupy.savez` for more detail,
101 :func:`numpy.savez_compressed`
102
103 """
104 args = map(cupy.asnumpy, args)
105 for key in kwds:
106 kwds[key] = cupy.asnumpy(kwds[key])
107 numpy.savez_compressed(file, *args, **kwds)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/io/npz.py b/cupy/io/npz.py
--- a/cupy/io/npz.py
+++ b/cupy/io/npz.py
@@ -1,8 +1,13 @@
+import warnings
+
import numpy
import cupy
+_support_allow_pickle = (numpy.lib.NumpyVersion(numpy.__version__) >= '1.10.0')
+
+
class NpzFile(object):
def __init__(self, npz_file):
@@ -23,7 +28,7 @@
self.npz_file.close()
-def load(file, mmap_mode=None):
+def load(file, mmap_mode=None, allow_pickle=None):
"""Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.
This function just calls ``numpy.load`` and then sends the arrays to the
@@ -35,6 +40,16 @@
mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the
file to construct an intermediate :class:`numpy.ndarray` object and
transfer it to the current device.
+ allow_pickle (bool): Allow loading pickled object arrays stored in npy
+ files. Reasons for disallowing pickles include security, as
+ loading pickled data can execute arbitrary code. If pickles are
+ disallowed, loading object arrays will fail.
+ Please be aware that CuPy does not support arrays with dtype of
+ `object`.
+ The default is False.
+ This option is available only for NumPy 1.10 or later.
+ In NumPy 1.9, this option cannot be specified (loading pickled
+ objects is always allowed).
Returns:
CuPy array or NpzFile object depending on the type of the file. NpzFile
@@ -44,7 +59,14 @@
.. seealso:: :func:`numpy.load`
"""
- obj = numpy.load(file, mmap_mode)
+ if _support_allow_pickle:
+ allow_pickle = False if allow_pickle is None else allow_pickle
+ obj = numpy.load(file, mmap_mode, allow_pickle)
+ else:
+ if allow_pickle is not None:
+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')
+ obj = numpy.load(file, mmap_mode)
+
if isinstance(obj, numpy.ndarray):
return cupy.array(obj)
elif isinstance(obj, numpy.lib.npyio.NpzFile):
@@ -53,18 +75,35 @@
return obj
-def save(file, arr):
+def save(file, arr, allow_pickle=None):
"""Saves an array to a binary file in ``.npy`` format.
Args:
file (file or str): File or filename to save.
arr (array_like): Array to save. It should be able to feed to
:func:`cupy.asnumpy`.
+ allow_pickle (bool): Allow saving object arrays using Python pickles.
+ Reasons for disallowing pickles include security (loading pickled
+ data can execute arbitrary code) and portability (pickled objects
+ may not be loadable on different Python installations, for example
+ if the stored objects require libraries that are not available,
+ and not all pickled data is compatible between Python 2 and Python
+ 3).
+ The default is True.
+ This option is available only for NumPy 1.10 or later.
+ In NumPy 1.9, this option cannot be specified (saving objects
+ using pickles is always allowed).
.. seealso:: :func:`numpy.save`
"""
- numpy.save(file, cupy.asnumpy(arr))
+ if _support_allow_pickle:
+ allow_pickle = True if allow_pickle is None else allow_pickle
+ numpy.save(file, cupy.asnumpy(arr), allow_pickle)
+ else:
+ if allow_pickle is not None:
+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')
+ numpy.save(file, cupy.asnumpy(arr))
def savez(file, *args, **kwds):
| {"golden_diff": "diff --git a/cupy/io/npz.py b/cupy/io/npz.py\n--- a/cupy/io/npz.py\n+++ b/cupy/io/npz.py\n@@ -1,8 +1,13 @@\n+import warnings\n+\n import numpy\n \n import cupy\n \n \n+_support_allow_pickle = (numpy.lib.NumpyVersion(numpy.__version__) >= '1.10.0')\n+\n+\n class NpzFile(object):\n \n def __init__(self, npz_file):\n@@ -23,7 +28,7 @@\n self.npz_file.close()\n \n \n-def load(file, mmap_mode=None):\n+def load(file, mmap_mode=None, allow_pickle=None):\n \"\"\"Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.\n \n This function just calls ``numpy.load`` and then sends the arrays to the\n@@ -35,6 +40,16 @@\n mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the\n file to construct an intermediate :class:`numpy.ndarray` object and\n transfer it to the current device.\n+ allow_pickle (bool): Allow loading pickled object arrays stored in npy\n+ files. Reasons for disallowing pickles include security, as\n+ loading pickled data can execute arbitrary code. If pickles are\n+ disallowed, loading object arrays will fail.\n+ Please be aware that CuPy does not support arrays with dtype of\n+ `object`.\n+ The default is False.\n+ This option is available only for NumPy 1.10 or later.\n+ In NumPy 1.9, this option cannot be specified (loading pickled\n+ objects is always allowed).\n \n Returns:\n CuPy array or NpzFile object depending on the type of the file. NpzFile\n@@ -44,7 +59,14 @@\n .. seealso:: :func:`numpy.load`\n \n \"\"\"\n- obj = numpy.load(file, mmap_mode)\n+ if _support_allow_pickle:\n+ allow_pickle = False if allow_pickle is None else allow_pickle\n+ obj = numpy.load(file, mmap_mode, allow_pickle)\n+ else:\n+ if allow_pickle is not None:\n+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')\n+ obj = numpy.load(file, mmap_mode)\n+\n if isinstance(obj, numpy.ndarray):\n return cupy.array(obj)\n elif isinstance(obj, numpy.lib.npyio.NpzFile):\n@@ -53,18 +75,35 @@\n return obj\n \n \n-def save(file, arr):\n+def save(file, arr, allow_pickle=None):\n \"\"\"Saves an array to a binary file in ``.npy`` format.\n \n Args:\n file (file or str): File or filename to save.\n arr (array_like): Array to save. It should be able to feed to\n :func:`cupy.asnumpy`.\n+ allow_pickle (bool): Allow saving object arrays using Python pickles.\n+ Reasons for disallowing pickles include security (loading pickled\n+ data can execute arbitrary code) and portability (pickled objects\n+ may not be loadable on different Python installations, for example\n+ if the stored objects require libraries that are not available,\n+ and not all pickled data is compatible between Python 2 and Python\n+ 3).\n+ The default is True.\n+ This option is available only for NumPy 1.10 or later.\n+ In NumPy 1.9, this option cannot be specified (saving objects\n+ using pickles is always allowed).\n \n .. 
seealso:: :func:`numpy.save`\n \n \"\"\"\n- numpy.save(file, cupy.asnumpy(arr))\n+ if _support_allow_pickle:\n+ allow_pickle = True if allow_pickle is None else allow_pickle\n+ numpy.save(file, cupy.asnumpy(arr), allow_pickle)\n+ else:\n+ if allow_pickle is not None:\n+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')\n+ numpy.save(file, cupy.asnumpy(arr))\n \n \n def savez(file, *args, **kwds):\n", "issue": "`TestNpz.test_dump` test failure\nhttps://jenkins.preferred.jp/job/chainer/job/cupy_pr/161/TEST=cupy-py3,label=mn1-p100/console\r\n\n", "before_files": [{"content": "import numpy\n\nimport cupy\n\n\nclass NpzFile(object):\n\n def __init__(self, npz_file):\n self.npz_file = npz_file\n\n def __enter__(self):\n self.npz_file.__enter__()\n return self\n\n def __exit__(self, typ, val, traceback):\n self.npz_file.__exit__(typ, val, traceback)\n\n def __getitem__(self, key):\n arr = self.npz_file[key]\n return cupy.array(arr)\n\n def close(self):\n self.npz_file.close()\n\n\ndef load(file, mmap_mode=None):\n \"\"\"Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.\n\n This function just calls ``numpy.load`` and then sends the arrays to the\n current device. NPZ file is converted to NpzFile object, which defers the\n transfer to the time of accessing the items.\n\n Args:\n file (file-like object or string): The file to read.\n mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the\n file to construct an intermediate :class:`numpy.ndarray` object and\n transfer it to the current device.\n\n Returns:\n CuPy array or NpzFile object depending on the type of the file. NpzFile\n object is a dictionary-like object with the context manager protocol\n (which enables us to use *with* statement on it).\n\n .. seealso:: :func:`numpy.load`\n\n \"\"\"\n obj = numpy.load(file, mmap_mode)\n if isinstance(obj, numpy.ndarray):\n return cupy.array(obj)\n elif isinstance(obj, numpy.lib.npyio.NpzFile):\n return NpzFile(obj)\n else:\n return obj\n\n\ndef save(file, arr):\n \"\"\"Saves an array to a binary file in ``.npy`` format.\n\n Args:\n file (file or str): File or filename to save.\n arr (array_like): Array to save. It should be able to feed to\n :func:`cupy.asnumpy`.\n\n .. seealso:: :func:`numpy.save`\n\n \"\"\"\n numpy.save(file, cupy.asnumpy(arr))\n\n\ndef savez(file, *args, **kwds):\n \"\"\"Saves one or more arrays into a file in uncompressed ``.npz`` format.\n\n Arguments without keys are treated as arguments with automatic keys named\n ``arr_0``, ``arr_1``, etc. corresponding to the positions in the argument\n list. The keys of arguments are used as keys in the ``.npz`` file, which\n are used for accessing NpzFile object when the file is read by\n :func:`cupy.load` function.\n\n Args:\n file (file or str): File or filename to save.\n *args: Arrays with implicit keys.\n **kwds: Arrays with explicit keys.\n\n .. seealso:: :func:`numpy.savez`\n\n \"\"\"\n args = map(cupy.asnumpy, args)\n for key in kwds:\n kwds[key] = cupy.asnumpy(kwds[key])\n numpy.savez(file, *args, **kwds)\n\n\ndef savez_compressed(file, *args, **kwds):\n \"\"\"Saves one or more arrays into a file in compressed ``.npz`` format.\n\n It is equivalent to :func:`cupy.savez` function except the output file is\n compressed.\n\n .. 
seealso::\n :func:`cupy.savez` for more detail,\n :func:`numpy.savez_compressed`\n\n \"\"\"\n args = map(cupy.asnumpy, args)\n for key in kwds:\n kwds[key] = cupy.asnumpy(kwds[key])\n numpy.savez_compressed(file, *args, **kwds)\n", "path": "cupy/io/npz.py"}], "after_files": [{"content": "import warnings\n\nimport numpy\n\nimport cupy\n\n\n_support_allow_pickle = (numpy.lib.NumpyVersion(numpy.__version__) >= '1.10.0')\n\n\nclass NpzFile(object):\n\n def __init__(self, npz_file):\n self.npz_file = npz_file\n\n def __enter__(self):\n self.npz_file.__enter__()\n return self\n\n def __exit__(self, typ, val, traceback):\n self.npz_file.__exit__(typ, val, traceback)\n\n def __getitem__(self, key):\n arr = self.npz_file[key]\n return cupy.array(arr)\n\n def close(self):\n self.npz_file.close()\n\n\ndef load(file, mmap_mode=None, allow_pickle=None):\n \"\"\"Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.\n\n This function just calls ``numpy.load`` and then sends the arrays to the\n current device. NPZ file is converted to NpzFile object, which defers the\n transfer to the time of accessing the items.\n\n Args:\n file (file-like object or string): The file to read.\n mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the\n file to construct an intermediate :class:`numpy.ndarray` object and\n transfer it to the current device.\n allow_pickle (bool): Allow loading pickled object arrays stored in npy\n files. Reasons for disallowing pickles include security, as\n loading pickled data can execute arbitrary code. If pickles are\n disallowed, loading object arrays will fail.\n Please be aware that CuPy does not support arrays with dtype of\n `object`.\n The default is False.\n This option is available only for NumPy 1.10 or later.\n In NumPy 1.9, this option cannot be specified (loading pickled\n objects is always allowed).\n\n Returns:\n CuPy array or NpzFile object depending on the type of the file. NpzFile\n object is a dictionary-like object with the context manager protocol\n (which enables us to use *with* statement on it).\n\n .. seealso:: :func:`numpy.load`\n\n \"\"\"\n if _support_allow_pickle:\n allow_pickle = False if allow_pickle is None else allow_pickle\n obj = numpy.load(file, mmap_mode, allow_pickle)\n else:\n if allow_pickle is not None:\n warnings.warn('allow_pickle option is not supported in NumPy 1.9')\n obj = numpy.load(file, mmap_mode)\n\n if isinstance(obj, numpy.ndarray):\n return cupy.array(obj)\n elif isinstance(obj, numpy.lib.npyio.NpzFile):\n return NpzFile(obj)\n else:\n return obj\n\n\ndef save(file, arr, allow_pickle=None):\n \"\"\"Saves an array to a binary file in ``.npy`` format.\n\n Args:\n file (file or str): File or filename to save.\n arr (array_like): Array to save. It should be able to feed to\n :func:`cupy.asnumpy`.\n allow_pickle (bool): Allow saving object arrays using Python pickles.\n Reasons for disallowing pickles include security (loading pickled\n data can execute arbitrary code) and portability (pickled objects\n may not be loadable on different Python installations, for example\n if the stored objects require libraries that are not available,\n and not all pickled data is compatible between Python 2 and Python\n 3).\n The default is True.\n This option is available only for NumPy 1.10 or later.\n In NumPy 1.9, this option cannot be specified (saving objects\n using pickles is always allowed).\n\n .. 
seealso:: :func:`numpy.save`\n\n \"\"\"\n if _support_allow_pickle:\n allow_pickle = True if allow_pickle is None else allow_pickle\n numpy.save(file, cupy.asnumpy(arr), allow_pickle)\n else:\n if allow_pickle is not None:\n warnings.warn('allow_pickle option is not supported in NumPy 1.9')\n numpy.save(file, cupy.asnumpy(arr))\n\n\ndef savez(file, *args, **kwds):\n \"\"\"Saves one or more arrays into a file in uncompressed ``.npz`` format.\n\n Arguments without keys are treated as arguments with automatic keys named\n ``arr_0``, ``arr_1``, etc. corresponding to the positions in the argument\n list. The keys of arguments are used as keys in the ``.npz`` file, which\n are used for accessing NpzFile object when the file is read by\n :func:`cupy.load` function.\n\n Args:\n file (file or str): File or filename to save.\n *args: Arrays with implicit keys.\n **kwds: Arrays with explicit keys.\n\n .. seealso:: :func:`numpy.savez`\n\n \"\"\"\n args = map(cupy.asnumpy, args)\n for key in kwds:\n kwds[key] = cupy.asnumpy(kwds[key])\n numpy.savez(file, *args, **kwds)\n\n\ndef savez_compressed(file, *args, **kwds):\n \"\"\"Saves one or more arrays into a file in compressed ``.npz`` format.\n\n It is equivalent to :func:`cupy.savez` function except the output file is\n compressed.\n\n .. seealso::\n :func:`cupy.savez` for more detail,\n :func:`numpy.savez_compressed`\n\n \"\"\"\n args = map(cupy.asnumpy, args)\n for key in kwds:\n kwds[key] = cupy.asnumpy(kwds[key])\n numpy.savez_compressed(file, *args, **kwds)\n", "path": "cupy/io/npz.py"}]} | 1,368 | 931 |
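
The pattern in this record's fix is to gate the new keyword on the installed NumPy version rather than passing it unconditionally. A trimmed sketch of just that gating logic (the real cupy wrappers also cover `save`, `savez`, and NPZ handling):

```python
import warnings

import numpy


_SUPPORTS_ALLOW_PICKLE = numpy.lib.NumpyVersion(numpy.__version__) >= '1.10.0'


def load(file, mmap_mode=None, allow_pickle=None):
    """Forward allow_pickle only when the installed NumPy understands it."""
    if _SUPPORTS_ALLOW_PICKLE:
        allow_pickle = False if allow_pickle is None else allow_pickle
        return numpy.load(file, mmap_mode, allow_pickle)
    if allow_pickle is not None:
        warnings.warn('allow_pickle option is not supported in NumPy 1.9')
    return numpy.load(file, mmap_mode)
```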
gh_patches_debug_15544 | rasdani/github-patches | git_diff | qtile__qtile-4610 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`ImapWidget` may call `keyring.get_password()` with `username=None`, violating API and potentially crashing it
### Issue description
The problematic code is:
https://github.com/qtile/qtile/blob/9ccaf6f1c01a9ffbd7beacdd8f405884bd81e1c0/libqtile/widget/imapwidget.py#L78
At this point, `self.user` may be `None`. However, according to the API definition at:
https://github.com/jaraco/keyring/blob/0cebfebbf516a47e4e45911ba6b4d4dd2699845c/keyring/core.py#L54
`keyring.get_password()` expects two `str` argument, i.e. `None` is not acceptable. If `keyrings-alt` backend is installed, then it explicitly crashes on `None` username:
```pytb
libqtile/widget/imapwidget.py:78: in __init__
password = keyring.get_password("imapwidget", self.user)
.tox/py310-x11/lib/python3.10/site-packages/keyring/core.py:56: in get_password
return get_keyring().get_password(service_name, username)
.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:92: in get_password
assoc = self._generate_assoc(service, username)
.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:133: in _generate_assoc
return (escape_for_ini(service) + r'\0' + escape_for_ini(username)).encode()
.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/escape.py:29: in escape
return "".join(_escape_char(c) for c in value.encode('utf-8'))
E AttributeError: 'NoneType' object has no attribute 'encode'
```
To reproduce:
```
tox -e py310-x11 # you can cancel the initial test run, after dependencies are installed
. .tox/py310-x11/bin/activate
pip install imapclient keyring keyrings-alt
pytest --backend=x11
```
### Version
0.23.1.dev83+g9ccaf6f1
### Backend
X11 (default)
### Config
_No response_
### Logs
_No response_
### Required
- [X] I have searched past issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/imapwidget.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2015 David R. Andersen
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a copy
5 # of this software and associated documentation files (the "Software"), to deal
6 # in the Software without restriction, including without limitation the rights
7 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 # copies of the Software, and to permit persons to whom the Software is
9 # furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
20 # SOFTWARE.
21
22 import imaplib
23 import re
24
25 import keyring
26
27 from libqtile.log_utils import logger
28 from libqtile.widget import base
29
30
31 class ImapWidget(base.ThreadPoolText):
32 """Email IMAP widget
33
34 This widget will scan one of your imap email boxes and report the number of
35 unseen messages present. I've configured it to only work with imap with
36 ssl. Your password is obtained from the Gnome Keyring.
37
38 Writing your password to the keyring initially is as simple as (changing
39 out <userid> and <password> for your userid and password):
40
41 1) create the file ~/.local/share/python_keyring/keyringrc.cfg with the
42 following contents::
43
44 [backend]
45 default-keyring=keyring.backends.Gnome.Keyring
46 keyring-path=/home/<userid>/.local/share/keyring/
47
48
49 2) Execute the following python shell script once::
50
51 #!/usr/bin/env python3
52 import keyring
53 user = <userid>
54 password = <password>
55 keyring.set_password('imapwidget', user, password)
56
57 mbox names must include the path to the mbox (except for the default
58 INBOX). So, for example if your mailroot is ``~/Maildir``, and you want to
59 look at the mailbox at HomeMail/fred, the mbox setting would be:
60 ``mbox="~/Maildir/HomeMail/fred"``. Note the nested sets of quotes! Labels
61 can be whatever you choose, of course.
62
63 Widget requirements: keyring_.
64
65 .. _keyring: https://pypi.org/project/keyring/
66 """
67
68 defaults = [
69 ("mbox", '"INBOX"', "mailbox to fetch"),
70 ("label", "INBOX", "label for display"),
71 ("user", None, "email username"),
72 ("server", None, "email server name"),
73 ]
74
75 def __init__(self, **config):
76 base.ThreadPoolText.__init__(self, "", **config)
77 self.add_defaults(ImapWidget.defaults)
78 password = keyring.get_password("imapwidget", self.user)
79 if password is not None:
80 self.password = password
81 else:
82 logger.critical("Gnome Keyring Error")
83
84 def poll(self):
85 im = imaplib.IMAP4_SSL(self.server, 993)
86 if self.password == "Gnome Keyring Error":
87 text = "Gnome Keyring Error"
88 else:
89 im.login(self.user, self.password)
90 status, response = im.status(self.mbox, "(UNSEEN)")
91 text = response[0].decode()
92 text = self.label + ": " + re.sub(r"\).*$", "", re.sub(r"^.*N\s", "", text))
93 im.logout()
94 return text
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/widget/imapwidget.py b/libqtile/widget/imapwidget.py
--- a/libqtile/widget/imapwidget.py
+++ b/libqtile/widget/imapwidget.py
@@ -24,6 +24,7 @@
import keyring
+from libqtile.confreader import ConfigError
from libqtile.log_utils import logger
from libqtile.widget import base
@@ -75,6 +76,8 @@
def __init__(self, **config):
base.ThreadPoolText.__init__(self, "", **config)
self.add_defaults(ImapWidget.defaults)
+ if self.user is None:
+ raise ConfigError("You must set the 'user' parameter for the IMAP widget.")
password = keyring.get_password("imapwidget", self.user)
if password is not None:
self.password = password
| {"golden_diff": "diff --git a/libqtile/widget/imapwidget.py b/libqtile/widget/imapwidget.py\n--- a/libqtile/widget/imapwidget.py\n+++ b/libqtile/widget/imapwidget.py\n@@ -24,6 +24,7 @@\n \n import keyring\n \n+from libqtile.confreader import ConfigError\n from libqtile.log_utils import logger\n from libqtile.widget import base\n \n@@ -75,6 +76,8 @@\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(ImapWidget.defaults)\n+ if self.user is None:\n+ raise ConfigError(\"You must set the 'user' parameter for the IMAP widget.\")\n password = keyring.get_password(\"imapwidget\", self.user)\n if password is not None:\n self.password = password\n", "issue": "`ImapWidget` may call `keyring.get_password()` with `username=None`, violating API and potentially crashing it\n### Issue description\n\nThe problematic code is:\r\n\r\nhttps://github.com/qtile/qtile/blob/9ccaf6f1c01a9ffbd7beacdd8f405884bd81e1c0/libqtile/widget/imapwidget.py#L78\r\n\r\nAt this point, `self.user` may be `None`. However, according to the API definition at:\r\n\r\nhttps://github.com/jaraco/keyring/blob/0cebfebbf516a47e4e45911ba6b4d4dd2699845c/keyring/core.py#L54\r\n\r\n`keyring.get_password()` expects two `str` argument, i.e. `None` is not acceptable. If `keyrings-alt` backend is installed, then it explicitly crashes on `None` username:\r\n\r\n```pytb\r\nlibqtile/widget/imapwidget.py:78: in __init__\r\n password = keyring.get_password(\"imapwidget\", self.user)\r\n.tox/py310-x11/lib/python3.10/site-packages/keyring/core.py:56: in get_password\r\n return get_keyring().get_password(service_name, username)\r\n.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:92: in get_password\r\n assoc = self._generate_assoc(service, username)\r\n.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:133: in _generate_assoc\r\n return (escape_for_ini(service) + r'\\0' + escape_for_ini(username)).encode()\r\n.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/escape.py:29: in escape\r\n return \"\".join(_escape_char(c) for c in value.encode('utf-8'))\r\nE AttributeError: 'NoneType' object has no attribute 'encode'\r\n```\r\n\r\nTo reproduce:\r\n\r\n```\r\ntox -e py310-x11 # you can cancel the initial test run, after dependencies are installed\r\n. .tox/py310-x11/bin/activate\r\npip install imapclient keyring keyrings-alt\r\npytest --backend=x11\r\n```\n\n### Version\n\n0.23.1.dev83+g9ccaf6f1\n\n### Backend\n\nX11 (default)\n\n### Config\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Required\n\n- [X] I have searched past issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015 David R. 
Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport imaplib\nimport re\n\nimport keyring\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass ImapWidget(base.ThreadPoolText):\n \"\"\"Email IMAP widget\n\n This widget will scan one of your imap email boxes and report the number of\n unseen messages present. I've configured it to only work with imap with\n ssl. Your password is obtained from the Gnome Keyring.\n\n Writing your password to the keyring initially is as simple as (changing\n out <userid> and <password> for your userid and password):\n\n 1) create the file ~/.local/share/python_keyring/keyringrc.cfg with the\n following contents::\n\n [backend]\n default-keyring=keyring.backends.Gnome.Keyring\n keyring-path=/home/<userid>/.local/share/keyring/\n\n\n 2) Execute the following python shell script once::\n\n #!/usr/bin/env python3\n import keyring\n user = <userid>\n password = <password>\n keyring.set_password('imapwidget', user, password)\n\n mbox names must include the path to the mbox (except for the default\n INBOX). So, for example if your mailroot is ``~/Maildir``, and you want to\n look at the mailbox at HomeMail/fred, the mbox setting would be:\n ``mbox=\"~/Maildir/HomeMail/fred\"``. Note the nested sets of quotes! Labels\n can be whatever you choose, of course.\n\n Widget requirements: keyring_.\n\n .. _keyring: https://pypi.org/project/keyring/\n \"\"\"\n\n defaults = [\n (\"mbox\", '\"INBOX\"', \"mailbox to fetch\"),\n (\"label\", \"INBOX\", \"label for display\"),\n (\"user\", None, \"email username\"),\n (\"server\", None, \"email server name\"),\n ]\n\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(ImapWidget.defaults)\n password = keyring.get_password(\"imapwidget\", self.user)\n if password is not None:\n self.password = password\n else:\n logger.critical(\"Gnome Keyring Error\")\n\n def poll(self):\n im = imaplib.IMAP4_SSL(self.server, 993)\n if self.password == \"Gnome Keyring Error\":\n text = \"Gnome Keyring Error\"\n else:\n im.login(self.user, self.password)\n status, response = im.status(self.mbox, \"(UNSEEN)\")\n text = response[0].decode()\n text = self.label + \": \" + re.sub(r\"\\).*$\", \"\", re.sub(r\"^.*N\\s\", \"\", text))\n im.logout()\n return text\n", "path": "libqtile/widget/imapwidget.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015 David R. 
Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport imaplib\nimport re\n\nimport keyring\n\nfrom libqtile.confreader import ConfigError\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass ImapWidget(base.ThreadPoolText):\n \"\"\"Email IMAP widget\n\n This widget will scan one of your imap email boxes and report the number of\n unseen messages present. I've configured it to only work with imap with\n ssl. Your password is obtained from the Gnome Keyring.\n\n Writing your password to the keyring initially is as simple as (changing\n out <userid> and <password> for your userid and password):\n\n 1) create the file ~/.local/share/python_keyring/keyringrc.cfg with the\n following contents::\n\n [backend]\n default-keyring=keyring.backends.Gnome.Keyring\n keyring-path=/home/<userid>/.local/share/keyring/\n\n\n 2) Execute the following python shell script once::\n\n #!/usr/bin/env python3\n import keyring\n user = <userid>\n password = <password>\n keyring.set_password('imapwidget', user, password)\n\n mbox names must include the path to the mbox (except for the default\n INBOX). So, for example if your mailroot is ``~/Maildir``, and you want to\n look at the mailbox at HomeMail/fred, the mbox setting would be:\n ``mbox=\"~/Maildir/HomeMail/fred\"``. Note the nested sets of quotes! Labels\n can be whatever you choose, of course.\n\n Widget requirements: keyring_.\n\n .. 
_keyring: https://pypi.org/project/keyring/\n \"\"\"\n\n defaults = [\n (\"mbox\", '\"INBOX\"', \"mailbox to fetch\"),\n (\"label\", \"INBOX\", \"label for display\"),\n (\"user\", None, \"email username\"),\n (\"server\", None, \"email server name\"),\n ]\n\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(ImapWidget.defaults)\n if self.user is None:\n raise ConfigError(\"You must set the 'user' parameter for the IMAP widget.\")\n password = keyring.get_password(\"imapwidget\", self.user)\n if password is not None:\n self.password = password\n else:\n logger.critical(\"Gnome Keyring Error\")\n\n def poll(self):\n im = imaplib.IMAP4_SSL(self.server, 993)\n if self.password == \"Gnome Keyring Error\":\n text = \"Gnome Keyring Error\"\n else:\n im.login(self.user, self.password)\n status, response = im.status(self.mbox, \"(UNSEEN)\")\n text = response[0].decode()\n text = self.label + \": \" + re.sub(r\"\\).*$\", \"\", re.sub(r\"^.*N\\s\", \"\", text))\n im.logout()\n return text\n", "path": "libqtile/widget/imapwidget.py"}]} | 1,919 | 190 |
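The golden diff in the record above guards against a missing `user` before calling `keyring.get_password()`, which expects string arguments for both service and username. A small, self-contained sketch of that contract; the helper name is illustrative and not part of qtile:

```python
import keyring


def imap_password(user):
    """Fetch the stored IMAP password; `user` must be a real username, not None."""
    if not isinstance(user, str):
        # keyring.get_password(service_name, username) expects two str arguments.
        raise TypeError("the IMAP widget needs a 'user' string before querying keyring")
    password = keyring.get_password("imapwidget", user)
    if password is None:
        raise LookupError("no password stored for %r under service 'imapwidget'" % user)
    return password
```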
gh_patches_debug_14753 | rasdani/github-patches | git_diff | ansible__ansible-39634 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws_s3 is automatically decrypting ansible-vault encrypted files before put
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and devel branch are affected too.
Always add information AFTER these HTML comments. -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
aws_s3
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
2.5.1
```
##### SUMMARY
- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3.
- aws_s3 in 2.4 didn't decrypt the src: parameter.
- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.
- The aws_s3 module doesn't accept the decrypt: argument.
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: upload vault to s3
aws_s3:
bucket: "the bucket"
object: "file.txt"
src: "file.txt"
mode: put
```
1. The file.txt is encrypted with ansible-vault.
2. The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.
##### EXPECTED RESULTS
Either don't auto-decrypt the src: file, or provide a way to specify decrypt: no.
##### ACTUAL RESULTS
The src: argument to aws_s3 is automagically decrypted, with no documentation of this behaviour and no way to disable it like other modules offer (e.g. copy).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/plugins/action/aws_s3.py`
Content:
```
1 # (c) 2012, Michael DeHaan <[email protected]>
2 # (c) 2018, Will Thames <[email protected]>
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18 from __future__ import (absolute_import, division, print_function)
19 __metaclass__ = type
20
21 import os
22
23 from ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound
24 from ansible.module_utils._text import to_text
25 from ansible.plugins.action import ActionBase
26
27
28 class ActionModule(ActionBase):
29
30 TRANSFERS_FILES = True
31
32 def run(self, tmp=None, task_vars=None):
33 ''' handler for aws_s3 operations '''
34 if task_vars is None:
35 task_vars = dict()
36
37 result = super(ActionModule, self).run(tmp, task_vars)
38 del tmp # tmp no longer has any effect
39
40 source = self._task.args.get('src', None)
41
42 try:
43 new_module_args = self._task.args.copy()
44 if source:
45 source = os.path.expanduser(source)
46
47 # For backward compatibility check if the file exists on the remote; it should take precedence
48 if not self._remote_file_exists(source):
49 try:
50 source = self._loader.get_real_file(self._find_needle('files', source))
51 new_module_args['src'] = source
52 except AnsibleFileNotFound as e:
53 # module handles error message for nonexistent files
54 new_module_args['src'] = source
55 except AnsibleError as e:
56 raise AnsibleActionFail(to_text(e))
57
58 # execute the aws_s3 module now, with the updated args
59 result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))
60 except AnsibleAction as e:
61 result.update(e.result)
62 return result
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/plugins/action/aws_s3.py b/lib/ansible/plugins/action/aws_s3.py
--- a/lib/ansible/plugins/action/aws_s3.py
+++ b/lib/ansible/plugins/action/aws_s3.py
@@ -47,7 +47,7 @@
# For backward compatibility check if the file exists on the remote; it should take precedence
if not self._remote_file_exists(source):
try:
- source = self._loader.get_real_file(self._find_needle('files', source))
+ source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)
new_module_args['src'] = source
except AnsibleFileNotFound as e:
# module handles error message for nonexistent files
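The patch above stages the vaulted source with `decrypt=False`, so the bytes handed to S3 keep the Vault envelope. A quick, self-contained way to confirm the behaviour the reporter expects (the staged file is uploaded still encrypted) is to check for the standard Vault header before the upload; the helper below is illustrative and not part of the aws_s3 plugin:

```python
# Ansible Vault files start with a plain-text header such as "$ANSIBLE_VAULT;1.1;AES256".
VAULT_HEADER = b"$ANSIBLE_VAULT;"


def is_still_encrypted(path):
    """Return True if the staged file still carries the Vault header (i.e. was not decrypted)."""
    with open(path, "rb") as fh:
        return fh.read(len(VAULT_HEADER)) == VAULT_HEADER
```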
| {"golden_diff": "diff --git a/lib/ansible/plugins/action/aws_s3.py b/lib/ansible/plugins/action/aws_s3.py\n--- a/lib/ansible/plugins/action/aws_s3.py\n+++ b/lib/ansible/plugins/action/aws_s3.py\n@@ -47,7 +47,7 @@\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n- source = self._loader.get_real_file(self._find_needle('files', source))\n+ source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n", "issue": "aws_s3 is automaticly decrypting ansible-vault encrypted files before put\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and devel branch are affected too.\r\nAlways add information AFTER of these html comments. -->\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\naws_s3\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\n2.5.1\r\n```\r\n\r\n##### SUMMARY\r\n- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3. \r\n- aws_s3 in 2.4 didn't decrypt the src: parameter.\r\n- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.\r\n- The aws_s3 module doesn't accept the decrypt: argument.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: upload vault to s3\r\n aws_s3:\r\n bucket: \"the bucket\"\r\n object: \"file.txt\"\r\n src: \"file.txt\"\r\n mode: put\r\n```\r\n1. The file.txt is encrypted with ansible-vault. \r\n2. The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.\r\n\r\n##### EXPECTED RESULTS\r\nDon't autodecrypt the src: argument or be able to specify decrypt: no.\r\n\r\n##### ACTUAL RESULTS\r\nThe src: argument to aws_s3 is automagicly decrypted without documentation or a way to disable the feature like other modules (ex. copy).\r\n\naws_s3 is automaticly decrypting ansible-vault encrypted files before put\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and devel branch are affected too.\r\nAlways add information AFTER of these html comments. -->\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\naws_s3\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\n2.5.1\r\n```\r\n\r\n##### SUMMARY\r\n- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3. \r\n- aws_s3 in 2.4 didn't decrypt the src: parameter.\r\n- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.\r\n- The aws_s3 module doesn't accept the decrypt: argument.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: upload vault to s3\r\n aws_s3:\r\n bucket: \"the bucket\"\r\n object: \"file.txt\"\r\n src: \"file.txt\"\r\n mode: put\r\n```\r\n1. The file.txt is encrypted with ansible-vault. \r\n2. 
The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.\r\n\r\n##### EXPECTED RESULTS\r\nDon't autodecrypt the src: argument or be able to specify decrypt: no.\r\n\r\n##### ACTUAL RESULTS\r\nThe src: argument to aws_s3 is automagicly decrypted without documentation or a way to disable the feature like other modules (ex. copy).\r\n\n", "before_files": [{"content": "# (c) 2012, Michael DeHaan <[email protected]>\n# (c) 2018, Will Thames <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\n\nfrom ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound\nfrom ansible.module_utils._text import to_text\nfrom ansible.plugins.action import ActionBase\n\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = True\n\n def run(self, tmp=None, task_vars=None):\n ''' handler for aws_s3 operations '''\n if task_vars is None:\n task_vars = dict()\n\n result = super(ActionModule, self).run(tmp, task_vars)\n del tmp # tmp no longer has any effect\n\n source = self._task.args.get('src', None)\n\n try:\n new_module_args = self._task.args.copy()\n if source:\n source = os.path.expanduser(source)\n\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n source = self._loader.get_real_file(self._find_needle('files', source))\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n new_module_args['src'] = source\n except AnsibleError as e:\n raise AnsibleActionFail(to_text(e))\n\n # execute the aws_s3 module now, with the updated args\n result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))\n except AnsibleAction as e:\n result.update(e.result)\n return result\n", "path": "lib/ansible/plugins/action/aws_s3.py"}], "after_files": [{"content": "# (c) 2012, Michael DeHaan <[email protected]>\n# (c) 2018, Will Thames <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\n\nfrom ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound\nfrom ansible.module_utils._text import to_text\nfrom ansible.plugins.action import ActionBase\n\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = True\n\n def run(self, tmp=None, task_vars=None):\n ''' handler for aws_s3 operations '''\n if task_vars is None:\n task_vars = dict()\n\n result = super(ActionModule, self).run(tmp, task_vars)\n del tmp # tmp no longer has any effect\n\n source = self._task.args.get('src', None)\n\n try:\n new_module_args = self._task.args.copy()\n if source:\n source = os.path.expanduser(source)\n\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n new_module_args['src'] = source\n except AnsibleError as e:\n raise AnsibleActionFail(to_text(e))\n\n # execute the aws_s3 module now, with the updated args\n result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))\n except AnsibleAction as e:\n result.update(e.result)\n return result\n", "path": "lib/ansible/plugins/action/aws_s3.py"}]} | 1,655 | 164 |
gh_patches_debug_7144 | rasdani/github-patches | git_diff | adap__flower-1735 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
deprecated eval_fn still used in examples
### Describe the bug
While running the `embedded_devices` example, the server fails because it still uses the `eval_fn` keyword argument, which was deprecated after Flower 1.0.0 and has since been renamed to `evaluate_fn`.
### Steps/Code to Reproduce
Set up the server as described in the `examples/embedded_devices` README.
### Expected Results
The server should start without any error
### Actual Results
The following error is encountered:
```
File "/embedded_devices/server.py", line 109, in main
strategy = fl.server.strategy.FedAvg(
TypeError: FedAvg.__init__() got an unexpected keyword argument 'eval_fn'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/embedded_devices/server.py`
Content:
```
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Minimal example on how to start a simple Flower server."""
16
17
18 import argparse
19 from collections import OrderedDict
20 from typing import Callable, Dict, Optional, Tuple
21
22 import flwr as fl
23 import numpy as np
24 import torch
25 import torchvision
26
27 import utils
28
29 # pylint: disable=no-member
30 DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
31 # pylint: enable=no-member
32
33 parser = argparse.ArgumentParser(description="Flower")
34 parser.add_argument(
35 "--server_address",
36 type=str,
37 required=True,
38 help=f"gRPC server address",
39 )
40 parser.add_argument(
41 "--rounds",
42 type=int,
43 default=1,
44 help="Number of rounds of federated learning (default: 1)",
45 )
46 parser.add_argument(
47 "--sample_fraction",
48 type=float,
49 default=1.0,
50 help="Fraction of available clients used for fit/evaluate (default: 1.0)",
51 )
52 parser.add_argument(
53 "--min_sample_size",
54 type=int,
55 default=2,
56 help="Minimum number of clients used for fit/evaluate (default: 2)",
57 )
58 parser.add_argument(
59 "--min_num_clients",
60 type=int,
61 default=2,
62 help="Minimum number of available clients required for sampling (default: 2)",
63 )
64 parser.add_argument(
65 "--log_host",
66 type=str,
67 help="Logserver address (no default)",
68 )
69 parser.add_argument(
70 "--model",
71 type=str,
72 default="ResNet18",
73 choices=["Net", "ResNet18"],
74 help="model to train",
75 )
76 parser.add_argument(
77 "--batch_size",
78 type=int,
79 default=32,
80 help="training batch size",
81 )
82 parser.add_argument(
83 "--num_workers",
84 type=int,
85 default=4,
86 help="number of workers for dataset reading",
87 )
88 parser.add_argument("--pin_memory", action="store_true")
89 args = parser.parse_args()
90
91
92 def main() -> None:
93 """Start server and train five rounds."""
94
95 print(args)
96
97 assert (
98 args.min_sample_size <= args.min_num_clients
99 ), f"Num_clients shouldn't be lower than min_sample_size"
100
101 # Configure logger
102 fl.common.logger.configure("server", host=args.log_host)
103
104 # Load evaluation data
105 _, testset = utils.load_cifar(download=True)
106
107 # Create client_manager, strategy, and server
108 client_manager = fl.server.SimpleClientManager()
109 strategy = fl.server.strategy.FedAvg(
110 fraction_fit=args.sample_fraction,
111 min_fit_clients=args.min_sample_size,
112 min_available_clients=args.min_num_clients,
113 eval_fn=get_eval_fn(testset),
114 on_fit_config_fn=fit_config,
115 )
116 server = fl.server.Server(client_manager=client_manager, strategy=strategy)
117
118 # Run server
119 fl.server.start_server(
120 server_address=args.server_address,
121 server=server,
122 config=fl.server.ServerConfig(num_rounds=args.rounds),
123 )
124
125
126 def fit_config(server_round: int) -> Dict[str, fl.common.Scalar]:
127 """Return a configuration with static batch size and (local) epochs."""
128 config = {
129 "epoch_global": str(server_round),
130 "epochs": str(1),
131 "batch_size": str(args.batch_size),
132 "num_workers": str(args.num_workers),
133 "pin_memory": str(args.pin_memory),
134 }
135 return config
136
137
138 def set_weights(model: torch.nn.ModuleList, weights: fl.common.NDArrays) -> None:
139 """Set model weights from a list of NumPy ndarrays."""
140 state_dict = OrderedDict(
141 {
142 k: torch.tensor(np.atleast_1d(v))
143 for k, v in zip(model.state_dict().keys(), weights)
144 }
145 )
146 model.load_state_dict(state_dict, strict=True)
147
148
149 def get_eval_fn(
150 testset: torchvision.datasets.CIFAR10,
151 ) -> Callable[[fl.common.NDArrays], Optional[Tuple[float, float]]]:
152 """Return an evaluation function for centralized evaluation."""
153
154 def evaluate(weights: fl.common.NDArrays) -> Optional[Tuple[float, float]]:
155 """Use the entire CIFAR-10 test set for evaluation."""
156
157 model = utils.load_model(args.model)
158 set_weights(model, weights)
159 model.to(DEVICE)
160
161 testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)
162 loss, accuracy = utils.test(model, testloader, device=DEVICE)
163 return loss, {"accuracy": accuracy}
164
165 return evaluate
166
167
168 if __name__ == "__main__":
169 main()
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/embedded_devices/server.py b/examples/embedded_devices/server.py
--- a/examples/embedded_devices/server.py
+++ b/examples/embedded_devices/server.py
@@ -110,7 +110,7 @@
fraction_fit=args.sample_fraction,
min_fit_clients=args.min_sample_size,
min_available_clients=args.min_num_clients,
- eval_fn=get_eval_fn(testset),
+ evaluate_fn=get_eval_fn(testset),
on_fit_config_fn=fit_config,
)
server = fl.server.Server(client_manager=client_manager, strategy=strategy)
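For comparison, a minimal sketch of how the strategy is built after the rename, assuming Flower 1.0 or later. The config and evaluation helpers below are placeholders rather than the example's real ones; in Flower 1.x the centralized evaluation hook receives the server round, the parameters, and a config dict:

```python
import flwr as fl


def fit_config(server_round):
    # Placeholder for the per-round client configuration built in server.py.
    return {"epochs": "1", "batch_size": "32"}


def centralized_evaluate(server_round, parameters, config):
    # Placeholder centralized evaluation returning (loss, metrics).
    return 0.0, {"accuracy": 0.0}


strategy = fl.server.strategy.FedAvg(
    fraction_fit=1.0,
    min_fit_clients=2,
    min_available_clients=2,
    evaluate_fn=centralized_evaluate,  # renamed from eval_fn after Flower 1.0.0
    on_fit_config_fn=fit_config,
)
```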
| {"golden_diff": "diff --git a/examples/embedded_devices/server.py b/examples/embedded_devices/server.py\n--- a/examples/embedded_devices/server.py\n+++ b/examples/embedded_devices/server.py\n@@ -110,7 +110,7 @@\n fraction_fit=args.sample_fraction,\n min_fit_clients=args.min_sample_size,\n min_available_clients=args.min_num_clients,\n- eval_fn=get_eval_fn(testset),\n+ evaluate_fn=get_eval_fn(testset),\n on_fit_config_fn=fit_config,\n )\n server = fl.server.Server(client_manager=client_manager, strategy=strategy)\n", "issue": "deprecated eval_fn still used in examples\n### Describe the bug\n\nWhile running the `embedded_devices` example, an issue is faced due to the use of `eval_fn` keyword which was deprecated after Flower 1.0.0 and has now been changed to `evaluate_fn`\n\n### Steps/Code to Reproduce\n\nSetup the server as mentioned in the `examples/embedded_devices` readme\n\n### Expected Results\n\nThe server should start without any error\n\n### Actual Results\n\nThe following error is encountered:\r\n```\r\nFile \"/embedded_devices/server.py\", line 109, in main\r\n strategy = fl.server.strategy.FedAvg(\r\nTypeError: FedAvg.__init__() got an unexpected keyword argument 'eval_fn'\r\n```\n", "before_files": [{"content": "# Copyright 2020 Adap GmbH. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Minimal example on how to start a simple Flower server.\"\"\"\n\n\nimport argparse\nfrom collections import OrderedDict\nfrom typing import Callable, Dict, Optional, Tuple\n\nimport flwr as fl\nimport numpy as np\nimport torch\nimport torchvision\n\nimport utils\n\n# pylint: disable=no-member\nDEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n# pylint: enable=no-member\n\nparser = argparse.ArgumentParser(description=\"Flower\")\nparser.add_argument(\n \"--server_address\",\n type=str,\n required=True,\n help=f\"gRPC server address\",\n)\nparser.add_argument(\n \"--rounds\",\n type=int,\n default=1,\n help=\"Number of rounds of federated learning (default: 1)\",\n)\nparser.add_argument(\n \"--sample_fraction\",\n type=float,\n default=1.0,\n help=\"Fraction of available clients used for fit/evaluate (default: 1.0)\",\n)\nparser.add_argument(\n \"--min_sample_size\",\n type=int,\n default=2,\n help=\"Minimum number of clients used for fit/evaluate (default: 2)\",\n)\nparser.add_argument(\n \"--min_num_clients\",\n type=int,\n default=2,\n help=\"Minimum number of available clients required for sampling (default: 2)\",\n)\nparser.add_argument(\n \"--log_host\",\n type=str,\n help=\"Logserver address (no default)\",\n)\nparser.add_argument(\n \"--model\",\n type=str,\n default=\"ResNet18\",\n choices=[\"Net\", \"ResNet18\"],\n help=\"model to train\",\n)\nparser.add_argument(\n \"--batch_size\",\n type=int,\n default=32,\n help=\"training batch size\",\n)\nparser.add_argument(\n \"--num_workers\",\n type=int,\n default=4,\n help=\"number of workers for dataset 
reading\",\n)\nparser.add_argument(\"--pin_memory\", action=\"store_true\")\nargs = parser.parse_args()\n\n\ndef main() -> None:\n \"\"\"Start server and train five rounds.\"\"\"\n\n print(args)\n\n assert (\n args.min_sample_size <= args.min_num_clients\n ), f\"Num_clients shouldn't be lower than min_sample_size\"\n\n # Configure logger\n fl.common.logger.configure(\"server\", host=args.log_host)\n\n # Load evaluation data\n _, testset = utils.load_cifar(download=True)\n\n # Create client_manager, strategy, and server\n client_manager = fl.server.SimpleClientManager()\n strategy = fl.server.strategy.FedAvg(\n fraction_fit=args.sample_fraction,\n min_fit_clients=args.min_sample_size,\n min_available_clients=args.min_num_clients,\n eval_fn=get_eval_fn(testset),\n on_fit_config_fn=fit_config,\n )\n server = fl.server.Server(client_manager=client_manager, strategy=strategy)\n\n # Run server\n fl.server.start_server(\n server_address=args.server_address,\n server=server,\n config=fl.server.ServerConfig(num_rounds=args.rounds),\n )\n\n\ndef fit_config(server_round: int) -> Dict[str, fl.common.Scalar]:\n \"\"\"Return a configuration with static batch size and (local) epochs.\"\"\"\n config = {\n \"epoch_global\": str(server_round),\n \"epochs\": str(1),\n \"batch_size\": str(args.batch_size),\n \"num_workers\": str(args.num_workers),\n \"pin_memory\": str(args.pin_memory),\n }\n return config\n\n\ndef set_weights(model: torch.nn.ModuleList, weights: fl.common.NDArrays) -> None:\n \"\"\"Set model weights from a list of NumPy ndarrays.\"\"\"\n state_dict = OrderedDict(\n {\n k: torch.tensor(np.atleast_1d(v))\n for k, v in zip(model.state_dict().keys(), weights)\n }\n )\n model.load_state_dict(state_dict, strict=True)\n\n\ndef get_eval_fn(\n testset: torchvision.datasets.CIFAR10,\n) -> Callable[[fl.common.NDArrays], Optional[Tuple[float, float]]]:\n \"\"\"Return an evaluation function for centralized evaluation.\"\"\"\n\n def evaluate(weights: fl.common.NDArrays) -> Optional[Tuple[float, float]]:\n \"\"\"Use the entire CIFAR-10 test set for evaluation.\"\"\"\n\n model = utils.load_model(args.model)\n set_weights(model, weights)\n model.to(DEVICE)\n\n testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)\n loss, accuracy = utils.test(model, testloader, device=DEVICE)\n return loss, {\"accuracy\": accuracy}\n\n return evaluate\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/embedded_devices/server.py"}], "after_files": [{"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Minimal example on how to start a simple Flower server.\"\"\"\n\n\nimport argparse\nfrom collections import OrderedDict\nfrom typing import Callable, Dict, Optional, Tuple\n\nimport flwr as fl\nimport numpy as np\nimport torch\nimport torchvision\n\nimport utils\n\n# pylint: disable=no-member\nDEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n# pylint: enable=no-member\n\nparser = argparse.ArgumentParser(description=\"Flower\")\nparser.add_argument(\n \"--server_address\",\n type=str,\n required=True,\n help=f\"gRPC server address\",\n)\nparser.add_argument(\n \"--rounds\",\n type=int,\n default=1,\n help=\"Number of rounds of federated learning (default: 1)\",\n)\nparser.add_argument(\n \"--sample_fraction\",\n type=float,\n default=1.0,\n help=\"Fraction of available clients used for fit/evaluate (default: 1.0)\",\n)\nparser.add_argument(\n \"--min_sample_size\",\n type=int,\n default=2,\n help=\"Minimum number of clients used for fit/evaluate (default: 2)\",\n)\nparser.add_argument(\n \"--min_num_clients\",\n type=int,\n default=2,\n help=\"Minimum number of available clients required for sampling (default: 2)\",\n)\nparser.add_argument(\n \"--log_host\",\n type=str,\n help=\"Logserver address (no default)\",\n)\nparser.add_argument(\n \"--model\",\n type=str,\n default=\"ResNet18\",\n choices=[\"Net\", \"ResNet18\"],\n help=\"model to train\",\n)\nparser.add_argument(\n \"--batch_size\",\n type=int,\n default=32,\n help=\"training batch size\",\n)\nparser.add_argument(\n \"--num_workers\",\n type=int,\n default=4,\n help=\"number of workers for dataset reading\",\n)\nparser.add_argument(\"--pin_memory\", action=\"store_true\")\nargs = parser.parse_args()\n\n\ndef main() -> None:\n \"\"\"Start server and train five rounds.\"\"\"\n\n print(args)\n\n assert (\n args.min_sample_size <= args.min_num_clients\n ), f\"Num_clients shouldn't be lower than min_sample_size\"\n\n # Configure logger\n fl.common.logger.configure(\"server\", host=args.log_host)\n\n # Load evaluation data\n _, testset = utils.load_cifar(download=True)\n\n # Create client_manager, strategy, and server\n client_manager = fl.server.SimpleClientManager()\n strategy = fl.server.strategy.FedAvg(\n fraction_fit=args.sample_fraction,\n min_fit_clients=args.min_sample_size,\n min_available_clients=args.min_num_clients,\n evaluate_fn=get_eval_fn(testset),\n on_fit_config_fn=fit_config,\n )\n server = fl.server.Server(client_manager=client_manager, strategy=strategy)\n\n # Run server\n fl.server.start_server(\n server_address=args.server_address,\n server=server,\n config=fl.server.ServerConfig(num_rounds=args.rounds),\n )\n\n\ndef fit_config(server_round: int) -> Dict[str, fl.common.Scalar]:\n \"\"\"Return a configuration with static batch size and (local) epochs.\"\"\"\n config = {\n \"epoch_global\": str(server_round),\n \"epochs\": str(1),\n 
\"batch_size\": str(args.batch_size),\n \"num_workers\": str(args.num_workers),\n \"pin_memory\": str(args.pin_memory),\n }\n return config\n\n\ndef set_weights(model: torch.nn.ModuleList, weights: fl.common.NDArrays) -> None:\n \"\"\"Set model weights from a list of NumPy ndarrays.\"\"\"\n state_dict = OrderedDict(\n {\n k: torch.tensor(np.atleast_1d(v))\n for k, v in zip(model.state_dict().keys(), weights)\n }\n )\n model.load_state_dict(state_dict, strict=True)\n\n\ndef get_eval_fn(\n testset: torchvision.datasets.CIFAR10,\n) -> Callable[[fl.common.NDArrays], Optional[Tuple[float, float]]]:\n \"\"\"Return an evaluation function for centralized evaluation.\"\"\"\n\n def evaluate(weights: fl.common.NDArrays) -> Optional[Tuple[float, float]]:\n \"\"\"Use the entire CIFAR-10 test set for evaluation.\"\"\"\n\n model = utils.load_model(args.model)\n set_weights(model, weights)\n model.to(DEVICE)\n\n testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)\n loss, accuracy = utils.test(model, testloader, device=DEVICE)\n return loss, {\"accuracy\": accuracy}\n\n return evaluate\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/embedded_devices/server.py"}]} | 1,933 | 122 |
gh_patches_debug_34994 | rasdani/github-patches | git_diff | getredash__redash-725 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User should be redirected to his original destination after login with Google OAuth
If the user tried to open a page before being logged in, they should be redirected to that page after a successful login.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/google_oauth.py`
Content:
```
1 import logging
2 from flask.ext.login import login_user
3 import requests
4 from flask import redirect, url_for, Blueprint, flash
5 from flask_oauth import OAuth
6 from redash import models, settings
7
8 logger = logging.getLogger('google_oauth')
9 oauth = OAuth()
10
11
12 if not settings.GOOGLE_APPS_DOMAIN:
13 logger.warning("No Google Apps domain defined, all Google accounts allowed.")
14
15 google = oauth.remote_app('google',
16 base_url='https://www.google.com/accounts/',
17 authorize_url='https://accounts.google.com/o/oauth2/auth',
18 request_token_url=None,
19 request_token_params={
20 'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',
21 'response_type': 'code'
22 },
23 access_token_url='https://accounts.google.com/o/oauth2/token',
24 access_token_method='POST',
25 access_token_params={'grant_type': 'authorization_code'},
26 consumer_key=settings.GOOGLE_CLIENT_ID,
27 consumer_secret=settings.GOOGLE_CLIENT_SECRET)
28
29
30 blueprint = Blueprint('google_oauth', __name__)
31
32
33 def get_user_profile(access_token):
34 headers = {'Authorization': 'OAuth {}'.format(access_token)}
35 response = requests.get('https://www.googleapis.com/oauth2/v1/userinfo', headers=headers)
36
37 if response.status_code == 401:
38 logger.warning("Failed getting user profile (response code 401).")
39 return None
40
41 return response.json()
42
43
44 def verify_profile(profile):
45 if not settings.GOOGLE_APPS_DOMAIN:
46 return True
47
48 domain = profile['email'].split('@')[-1]
49 return domain in settings.GOOGLE_APPS_DOMAIN
50
51
52 def create_and_login_user(name, email):
53 try:
54 user_object = models.User.get_by_email(email)
55 if user_object.name != name:
56 logger.debug("Updating user name (%r -> %r)", user_object.name, name)
57 user_object.name = name
58 user_object.save()
59 except models.User.DoesNotExist:
60 logger.debug("Creating user object (%r)", name)
61 user_object = models.User.create(name=name, email=email, groups=models.User.DEFAULT_GROUPS)
62
63 login_user(user_object, remember=True)
64
65
66 @blueprint.route('/oauth/google', endpoint="authorize")
67 def login():
68 # TODO, suport next
69 callback=url_for('.callback', _external=True)
70 logger.debug("Callback url: %s", callback)
71 return google.authorize(callback=callback)
72
73
74 @blueprint.route('/oauth/google_callback', endpoint="callback")
75 @google.authorized_handler
76 def authorized(resp):
77 access_token = resp['access_token']
78
79 if access_token is None:
80 logger.warning("Access token missing in call back request.")
81 flash("Validation error. Please retry.")
82 return redirect(url_for('login'))
83
84 profile = get_user_profile(access_token)
85 if profile is None:
86 flash("Validation error. Please retry.")
87 return redirect(url_for('login'))
88
89 if not verify_profile(profile):
90 logger.warning("User tried to login with unauthorized domain name: %s", profile['email'])
91 flash("Your Google Apps domain name isn't allowed.")
92 return redirect(url_for('login'))
93
94 create_and_login_user(profile['name'], profile['email'])
95
96 return redirect(url_for('index'))
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/google_oauth.py b/redash/google_oauth.py
--- a/redash/google_oauth.py
+++ b/redash/google_oauth.py
@@ -1,8 +1,8 @@
import logging
from flask.ext.login import login_user
import requests
-from flask import redirect, url_for, Blueprint, flash
-from flask_oauth import OAuth
+from flask import redirect, url_for, Blueprint, flash, request
+from flask_oauthlib.client import OAuth
from redash import models, settings
logger = logging.getLogger('google_oauth')
@@ -18,11 +18,9 @@
request_token_url=None,
request_token_params={
'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',
- 'response_type': 'code'
},
access_token_url='https://accounts.google.com/o/oauth2/token',
access_token_method='POST',
- access_token_params={'grant_type': 'authorization_code'},
consumer_key=settings.GOOGLE_CLIENT_ID,
consumer_secret=settings.GOOGLE_CLIENT_SECRET)
@@ -65,10 +63,10 @@
@blueprint.route('/oauth/google', endpoint="authorize")
def login():
- # TODO, suport next
+ next = request.args.get('next','/')
callback=url_for('.callback', _external=True)
logger.debug("Callback url: %s", callback)
- return google.authorize(callback=callback)
+ return google.authorize(callback=callback, state=next)
@blueprint.route('/oauth/google_callback', endpoint="callback")
@@ -93,4 +91,6 @@
create_and_login_user(profile['name'], profile['email'])
- return redirect(url_for('index'))
\ No newline at end of file
+ next = request.args.get('state','/')
+
+ return redirect(next)
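The essence of the patch is to carry the caller's original destination through the OAuth round-trip in the `state` parameter and honour it after the callback. A stripped-down sketch of that flow; the `google` client and the blueprint wiring are assumed to exist as in the module above, and these functions need a Flask request context to run:

```python
from flask import redirect, request, url_for


def login(google):
    # Remember where the user originally wanted to go; fall back to the index page.
    next_url = request.args.get("next", "/")
    callback = url_for(".callback", _external=True)
    # OAuth providers echo `state` back unchanged, so it can carry the destination.
    return google.authorize(callback=callback, state=next_url)


def authorized(resp):
    # ... token validation and user login happen here, as in google_oauth.py ...
    next_url = request.args.get("state", "/")
    return redirect(next_url)
```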
| {"golden_diff": "diff --git a/redash/google_oauth.py b/redash/google_oauth.py\n--- a/redash/google_oauth.py\n+++ b/redash/google_oauth.py\n@@ -1,8 +1,8 @@\n import logging\n from flask.ext.login import login_user\n import requests\n-from flask import redirect, url_for, Blueprint, flash\n-from flask_oauth import OAuth\n+from flask import redirect, url_for, Blueprint, flash, request\n+from flask_oauthlib.client import OAuth\n from redash import models, settings\n \n logger = logging.getLogger('google_oauth')\n@@ -18,11 +18,9 @@\n request_token_url=None,\n request_token_params={\n 'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',\n- 'response_type': 'code'\n },\n access_token_url='https://accounts.google.com/o/oauth2/token',\n access_token_method='POST',\n- access_token_params={'grant_type': 'authorization_code'},\n consumer_key=settings.GOOGLE_CLIENT_ID,\n consumer_secret=settings.GOOGLE_CLIENT_SECRET)\n \n@@ -65,10 +63,10 @@\n \n @blueprint.route('/oauth/google', endpoint=\"authorize\")\n def login():\n- # TODO, suport next\n+ next = request.args.get('next','/')\n callback=url_for('.callback', _external=True)\n logger.debug(\"Callback url: %s\", callback)\n- return google.authorize(callback=callback)\n+ return google.authorize(callback=callback, state=next)\n \n \n @blueprint.route('/oauth/google_callback', endpoint=\"callback\")\n@@ -93,4 +91,6 @@\n \n create_and_login_user(profile['name'], profile['email'])\n \n- return redirect(url_for('index'))\n\\ No newline at end of file\n+ next = request.args.get('state','/')\n+\n+ return redirect(next)\n", "issue": "User should be redirected to his original destination after login with Google OAuth\nIf the user tried to open a page before being logged in, he should be redirected to this page after successful login.\n\n", "before_files": [{"content": "import logging\nfrom flask.ext.login import login_user\nimport requests\nfrom flask import redirect, url_for, Blueprint, flash\nfrom flask_oauth import OAuth\nfrom redash import models, settings\n\nlogger = logging.getLogger('google_oauth')\noauth = OAuth()\n\n\nif not settings.GOOGLE_APPS_DOMAIN:\n logger.warning(\"No Google Apps domain defined, all Google accounts allowed.\")\n\ngoogle = oauth.remote_app('google',\n base_url='https://www.google.com/accounts/',\n authorize_url='https://accounts.google.com/o/oauth2/auth',\n request_token_url=None,\n request_token_params={\n 'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',\n 'response_type': 'code'\n },\n access_token_url='https://accounts.google.com/o/oauth2/token',\n access_token_method='POST',\n access_token_params={'grant_type': 'authorization_code'},\n consumer_key=settings.GOOGLE_CLIENT_ID,\n consumer_secret=settings.GOOGLE_CLIENT_SECRET)\n\n\nblueprint = Blueprint('google_oauth', __name__)\n\n\ndef get_user_profile(access_token):\n headers = {'Authorization': 'OAuth {}'.format(access_token)}\n response = requests.get('https://www.googleapis.com/oauth2/v1/userinfo', headers=headers)\n\n if response.status_code == 401:\n logger.warning(\"Failed getting user profile (response code 401).\")\n return None\n\n return response.json()\n\n\ndef verify_profile(profile):\n if not settings.GOOGLE_APPS_DOMAIN:\n return True\n\n domain = profile['email'].split('@')[-1]\n return domain in settings.GOOGLE_APPS_DOMAIN\n\n\ndef create_and_login_user(name, email):\n try:\n user_object = models.User.get_by_email(email)\n if user_object.name != 
name:\n logger.debug(\"Updating user name (%r -> %r)\", user_object.name, name)\n user_object.name = name\n user_object.save()\n except models.User.DoesNotExist:\n logger.debug(\"Creating user object (%r)\", name)\n user_object = models.User.create(name=name, email=email, groups=models.User.DEFAULT_GROUPS)\n\n login_user(user_object, remember=True)\n\n\[email protected]('/oauth/google', endpoint=\"authorize\")\ndef login():\n # TODO, suport next\n callback=url_for('.callback', _external=True)\n logger.debug(\"Callback url: %s\", callback)\n return google.authorize(callback=callback)\n\n\[email protected]('/oauth/google_callback', endpoint=\"callback\")\[email protected]_handler\ndef authorized(resp):\n access_token = resp['access_token']\n\n if access_token is None:\n logger.warning(\"Access token missing in call back request.\")\n flash(\"Validation error. Please retry.\")\n return redirect(url_for('login'))\n\n profile = get_user_profile(access_token)\n if profile is None:\n flash(\"Validation error. Please retry.\")\n return redirect(url_for('login'))\n\n if not verify_profile(profile):\n logger.warning(\"User tried to login with unauthorized domain name: %s\", profile['email'])\n flash(\"Your Google Apps domain name isn't allowed.\")\n return redirect(url_for('login'))\n\n create_and_login_user(profile['name'], profile['email'])\n\n return redirect(url_for('index'))", "path": "redash/google_oauth.py"}], "after_files": [{"content": "import logging\nfrom flask.ext.login import login_user\nimport requests\nfrom flask import redirect, url_for, Blueprint, flash, request\nfrom flask_oauthlib.client import OAuth\nfrom redash import models, settings\n\nlogger = logging.getLogger('google_oauth')\noauth = OAuth()\n\n\nif not settings.GOOGLE_APPS_DOMAIN:\n logger.warning(\"No Google Apps domain defined, all Google accounts allowed.\")\n\ngoogle = oauth.remote_app('google',\n base_url='https://www.google.com/accounts/',\n authorize_url='https://accounts.google.com/o/oauth2/auth',\n request_token_url=None,\n request_token_params={\n 'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',\n },\n access_token_url='https://accounts.google.com/o/oauth2/token',\n access_token_method='POST',\n consumer_key=settings.GOOGLE_CLIENT_ID,\n consumer_secret=settings.GOOGLE_CLIENT_SECRET)\n\n\nblueprint = Blueprint('google_oauth', __name__)\n\n\ndef get_user_profile(access_token):\n headers = {'Authorization': 'OAuth {}'.format(access_token)}\n response = requests.get('https://www.googleapis.com/oauth2/v1/userinfo', headers=headers)\n\n if response.status_code == 401:\n logger.warning(\"Failed getting user profile (response code 401).\")\n return None\n\n return response.json()\n\n\ndef verify_profile(profile):\n if not settings.GOOGLE_APPS_DOMAIN:\n return True\n\n domain = profile['email'].split('@')[-1]\n return domain in settings.GOOGLE_APPS_DOMAIN\n\n\ndef create_and_login_user(name, email):\n try:\n user_object = models.User.get_by_email(email)\n if user_object.name != name:\n logger.debug(\"Updating user name (%r -> %r)\", user_object.name, name)\n user_object.name = name\n user_object.save()\n except models.User.DoesNotExist:\n logger.debug(\"Creating user object (%r)\", name)\n user_object = models.User.create(name=name, email=email, groups=models.User.DEFAULT_GROUPS)\n\n login_user(user_object, remember=True)\n\n\[email protected]('/oauth/google', endpoint=\"authorize\")\ndef login():\n next = request.args.get('next','/')\n 
callback=url_for('.callback', _external=True)\n logger.debug(\"Callback url: %s\", callback)\n return google.authorize(callback=callback, state=next)\n\n\[email protected]('/oauth/google_callback', endpoint=\"callback\")\[email protected]_handler\ndef authorized(resp):\n access_token = resp['access_token']\n\n if access_token is None:\n logger.warning(\"Access token missing in call back request.\")\n flash(\"Validation error. Please retry.\")\n return redirect(url_for('login'))\n\n profile = get_user_profile(access_token)\n if profile is None:\n flash(\"Validation error. Please retry.\")\n return redirect(url_for('login'))\n\n if not verify_profile(profile):\n logger.warning(\"User tried to login with unauthorized domain name: %s\", profile['email'])\n flash(\"Your Google Apps domain name isn't allowed.\")\n return redirect(url_for('login'))\n\n create_and_login_user(profile['name'], profile['email'])\n\n next = request.args.get('state','/')\n\n return redirect(next)\n", "path": "redash/google_oauth.py"}]} | 1,169 | 400 |
gh_patches_debug_551 | rasdani/github-patches | git_diff | pypi__warehouse-5814 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sorting searches by 'Date last updated' results in 503
**Describe the bug**
When trying to search for anything on pypi.org, sorting by relevance or trending works fine, but sorting by date last updated returns a 503 error.
**Expected behavior**
Search results, sorted by date.
**To Reproduce**
Example URL: https://pypi.org/search/?q=test&o=-created
Result:
> Sorry, something went wrong
>
> PyPI is down for maintenance or is having an outage.
>
> This is affecting several of our services, including our web interface.
> If you are trying to install a package, you should be able to pip install packages without problem.
>
> Check our status page, or
> View Python Status on Twitter
The status page, though, shows all green.
**My Platform**
- Win 10, Firefox 66.0.3
- Ubuntu 18.04, Chrome 74.0.3729.108
---
Edit: I know this functionality was working at least as recently as last Thursday, 2 May 2019.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/packaging/search.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import packaging.version
14
15 from elasticsearch_dsl import Date, Document, Float, Keyword, Text, analyzer
16
17 from warehouse.search.utils import doc_type
18
19 EmailAnalyzer = analyzer(
20 "email",
21 tokenizer="uax_url_email",
22 filter=["standard", "lowercase", "stop", "snowball"],
23 )
24
25 NameAnalyzer = analyzer(
26 "normalized_name",
27 tokenizer="lowercase",
28 filter=["standard", "lowercase", "word_delimiter"],
29 )
30
31
32 @doc_type
33 class Project(Document):
34
35 name = Text()
36 normalized_name = Text(analyzer=NameAnalyzer)
37 version = Keyword(multi=True)
38 latest_version = Keyword()
39 summary = Text(analyzer="snowball")
40 description = Text(analyzer="snowball")
41 author = Text()
42 author_email = Text(analyzer=EmailAnalyzer)
43 maintainer = Text()
44 maintainer_email = Text(analyzer=EmailAnalyzer)
45 license = Text()
46 home_page = Keyword()
47 download_url = Keyword()
48 keywords = Text(analyzer="snowball")
49 platform = Keyword()
50 created = Date()
51 classifiers = Keyword(multi=True)
52 zscore = Float()
53
54 @classmethod
55 def from_db(cls, release):
56 obj = cls(meta={"id": release.normalized_name})
57 obj["name"] = release.name
58 obj["normalized_name"] = release.normalized_name
59 obj["version"] = sorted(
60 release.all_versions, key=lambda r: packaging.version.parse(r), reverse=True
61 )
62 obj["latest_version"] = release.latest_version
63 obj["summary"] = release.summary
64 obj["description"] = release.description
65 obj["author"] = release.author
66 obj["author_email"] = release.author_email
67 obj["maintainer"] = release.maintainer
68 obj["maintainer_email"] = release.maintainer_email
69 obj["home_page"] = release.home_page
70 obj["download_url"] = release.download_url
71 obj["keywords"] = release.keywords
72 obj["platform"] = release.platform
73 obj["created"] = release.created
74 obj["classifiers"] = release.classifiers
75 obj["zscore"] = release.zscore
76
77 return obj
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py
--- a/warehouse/packaging/search.py
+++ b/warehouse/packaging/search.py
@@ -75,3 +75,8 @@
obj["zscore"] = release.zscore
return obj
+
+ class Index:
+ # make sure this class can match any index so it will always be used to
+ # deserialize data coming from elasticsearch.
+ name = "*"
| {"golden_diff": "diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py\n--- a/warehouse/packaging/search.py\n+++ b/warehouse/packaging/search.py\n@@ -75,3 +75,8 @@\n obj[\"zscore\"] = release.zscore\n \n return obj\n+\n+ class Index:\n+ # make sure this class can match any index so it will always be used to\n+ # deserialize data coming from elasticsearch.\n+ name = \"*\"\n", "issue": "Sorting searches by 'Date last updated' results in 503\n**Describe the bug**\r\n\r\nWhen trying to search for anything on pypi.org, sorting by relevance or trending works fine, but sorting by date last updated returns a 503 error.\r\n\r\n**Expected behavior**\r\n\r\nSearch results, sorted by date.\r\n\r\n**To Reproduce**\r\n\r\nExample URL: https://pypi.org/search/?q=test&o=-created\r\n\r\nResult:\r\n\r\n> Sorry, something went wrong\r\n> \r\n> PyPI is down for maintenance or is having an outage.\r\n> \r\n> This is affecting several of our services, including our web interface.\r\n> If you are trying to install a package, you should be able to pip install packages without problem.\r\n> \r\n> Check our status page, or\r\n> View Python Status on Twitter\r\n\r\nThe status page, though, shows all green.\r\n\r\n\r\n**My Platform**\r\n\r\n- Win 10, Firefox 66.0.3\r\n- Ubuntu 18.04, Chrome 74.0.3729.108\r\n\r\n---\r\n\r\nEdit: I know this functionality was working at least as recently as last Thursday, 2 May 2019.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport packaging.version\n\nfrom elasticsearch_dsl import Date, Document, Float, Keyword, Text, analyzer\n\nfrom warehouse.search.utils import doc_type\n\nEmailAnalyzer = analyzer(\n \"email\",\n tokenizer=\"uax_url_email\",\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n)\n\nNameAnalyzer = analyzer(\n \"normalized_name\",\n tokenizer=\"lowercase\",\n filter=[\"standard\", \"lowercase\", \"word_delimiter\"],\n)\n\n\n@doc_type\nclass Project(Document):\n\n name = Text()\n normalized_name = Text(analyzer=NameAnalyzer)\n version = Keyword(multi=True)\n latest_version = Keyword()\n summary = Text(analyzer=\"snowball\")\n description = Text(analyzer=\"snowball\")\n author = Text()\n author_email = Text(analyzer=EmailAnalyzer)\n maintainer = Text()\n maintainer_email = Text(analyzer=EmailAnalyzer)\n license = Text()\n home_page = Keyword()\n download_url = Keyword()\n keywords = Text(analyzer=\"snowball\")\n platform = Keyword()\n created = Date()\n classifiers = Keyword(multi=True)\n zscore = Float()\n\n @classmethod\n def from_db(cls, release):\n obj = cls(meta={\"id\": release.normalized_name})\n obj[\"name\"] = release.name\n obj[\"normalized_name\"] = release.normalized_name\n obj[\"version\"] = sorted(\n release.all_versions, key=lambda r: packaging.version.parse(r), reverse=True\n )\n obj[\"latest_version\"] = release.latest_version\n obj[\"summary\"] = release.summary\n obj[\"description\"] = release.description\n obj[\"author\"] = release.author\n obj[\"author_email\"] = 
release.author_email\n obj[\"maintainer\"] = release.maintainer\n obj[\"maintainer_email\"] = release.maintainer_email\n obj[\"home_page\"] = release.home_page\n obj[\"download_url\"] = release.download_url\n obj[\"keywords\"] = release.keywords\n obj[\"platform\"] = release.platform\n obj[\"created\"] = release.created\n obj[\"classifiers\"] = release.classifiers\n obj[\"zscore\"] = release.zscore\n\n return obj\n", "path": "warehouse/packaging/search.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport packaging.version\n\nfrom elasticsearch_dsl import Date, Document, Float, Keyword, Text, analyzer\n\nfrom warehouse.search.utils import doc_type\n\nEmailAnalyzer = analyzer(\n \"email\",\n tokenizer=\"uax_url_email\",\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n)\n\nNameAnalyzer = analyzer(\n \"normalized_name\",\n tokenizer=\"lowercase\",\n filter=[\"standard\", \"lowercase\", \"word_delimiter\"],\n)\n\n\n@doc_type\nclass Project(Document):\n\n name = Text()\n normalized_name = Text(analyzer=NameAnalyzer)\n version = Keyword(multi=True)\n latest_version = Keyword()\n summary = Text(analyzer=\"snowball\")\n description = Text(analyzer=\"snowball\")\n author = Text()\n author_email = Text(analyzer=EmailAnalyzer)\n maintainer = Text()\n maintainer_email = Text(analyzer=EmailAnalyzer)\n license = Text()\n home_page = Keyword()\n download_url = Keyword()\n keywords = Text(analyzer=\"snowball\")\n platform = Keyword()\n created = Date()\n classifiers = Keyword(multi=True)\n zscore = Float()\n\n @classmethod\n def from_db(cls, release):\n obj = cls(meta={\"id\": release.normalized_name})\n obj[\"name\"] = release.name\n obj[\"normalized_name\"] = release.normalized_name\n obj[\"version\"] = sorted(\n release.all_versions, key=lambda r: packaging.version.parse(r), reverse=True\n )\n obj[\"latest_version\"] = release.latest_version\n obj[\"summary\"] = release.summary\n obj[\"description\"] = release.description\n obj[\"author\"] = release.author\n obj[\"author_email\"] = release.author_email\n obj[\"maintainer\"] = release.maintainer\n obj[\"maintainer_email\"] = release.maintainer_email\n obj[\"home_page\"] = release.home_page\n obj[\"download_url\"] = release.download_url\n obj[\"keywords\"] = release.keywords\n obj[\"platform\"] = release.platform\n obj[\"created\"] = release.created\n obj[\"classifiers\"] = release.classifiers\n obj[\"zscore\"] = release.zscore\n\n return obj\n\n class Index:\n # make sure this class can match any index so it will always be used to\n # deserialize data coming from elasticsearch.\n name = \"*\"\n", "path": "warehouse/packaging/search.py"}]} | 1,245 | 108 |
gh_patches_debug_22279 | rasdani/github-patches | git_diff | chainer__chainer-243 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add type check to NonparameterizedConvolution2D function
Related to #123
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/nonparameterized_convolution_2d.py`
Content:
```
1 from chainer import cuda
2 from chainer import function
3 from chainer.functions import convolution_2d as conv2d_module
4
5
6 class NonparameterizedConvolution2D(function.Function):
7
8 """Two-dimensional nonparameterized convolution class.
9
10 Args:
11 stride (int or (int, int)): Stride of filter applications.
12 ``stride=s`` and ``stride=(s, s)`` are equivalent.
13 pad (int or (int, int)): Spatial padding width for input arrays.
14 ``pad=p`` and ``pad=(p, p)`` are equivalent.
15 use_cudnn (bool): If True, then this function uses CuDNN if available.
16
17 .. seealso:: :class:`Convolution2D`
18
19 """
20 def __init__(self, stride=1, pad=0, use_cudnn=True):
21 self.stride = stride
22 self.pad = pad
23
24 self.use_cudnn = use_cudnn
25
26 def forward(self, x):
27 W = x[1]
28 b = None
29 if len(x) == 3:
30 b = x[2]
31 func = conv2d_module.Convolution2D(
32 W.shape[1], W.shape[0], W.shape[2:],
33 stride=self.stride, pad=self.pad, use_cudnn=self.use_cudnn,
34 initialW=W, initial_bias=b)
35 self.func = func
36 if any(isinstance(i, cuda.GPUArray) for i in x):
37 func.to_gpu()
38 return func.forward(x[:1])
39
40 def backward(self, x, gy):
41 func = self.func
42 func.zero_grads()
43 gx = func.backward(x[:1], gy)
44 if func.gb is None:
45 return (gx[0], func.gW)
46 return (gx[0], func.gW, func.gb)
47
48
49 def convolution_2d(x, W, b=None, stride=1, pad=0, use_cudnn=True):
50 """Two-dimensional convolution function.
51
52 Args:
53 x (~chainer.Variable): Input variable.
54 W (~chainer.Variable): Weight variable.
55 b (~chainer.Variable): Bias variable.
56 stride (int or (int, int)): Stride of filter applications.
57 ``stride=s`` and ``stride=(s, s)`` are equivalent.
58 pad (int or (int, int)): Spatial padding width for input arrays.
59 ``pad=p`` and ``pad=(p, p)`` are equivalent.
60 use_cudnn (bool): If True, then this function uses CuDNN if available.
61
62 Returns:
63 ~chainer.Variable: Output variable.
64
65 .. seealso:: :class:`Convolution2D`
66
67 """
68 return NonparameterizedConvolution2D(
69 stride=stride, pad=pad, use_cudnn=use_cudnn)(x, W, b)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/functions/nonparameterized_convolution_2d.py b/chainer/functions/nonparameterized_convolution_2d.py
--- a/chainer/functions/nonparameterized_convolution_2d.py
+++ b/chainer/functions/nonparameterized_convolution_2d.py
@@ -1,6 +1,9 @@
+import numpy
+
from chainer import cuda
from chainer import function
from chainer.functions import convolution_2d as conv2d_module
+from chainer.utils import type_check
class NonparameterizedConvolution2D(function.Function):
@@ -23,6 +26,30 @@
self.use_cudnn = use_cudnn
+ def check_type_forward(self, in_types):
+ type_check.expect(
+ 2 <= in_types.size(),
+ in_types.size() <= 3,
+ )
+
+ x_type = in_types[0]
+ w_type = in_types[1]
+ type_check.expect(
+ x_type.dtype == numpy.float32,
+ w_type.dtype == numpy.float32,
+ x_type.ndim == 4,
+ w_type.ndim == 4,
+ x_type.shape[1] == w_type.shape[1],
+ )
+
+ if in_types.size().eval() == 3:
+ b_type = in_types[2]
+ type_check.expect(
+ b_type.dtype == numpy.float32,
+ b_type.ndim == 1,
+ b_type.shape[0] == w_type.shape[0],
+ )
+
def forward(self, x):
W = x[1]
b = None
| {"golden_diff": "diff --git a/chainer/functions/nonparameterized_convolution_2d.py b/chainer/functions/nonparameterized_convolution_2d.py\n--- a/chainer/functions/nonparameterized_convolution_2d.py\n+++ b/chainer/functions/nonparameterized_convolution_2d.py\n@@ -1,6 +1,9 @@\n+import numpy\n+\n from chainer import cuda\n from chainer import function\n from chainer.functions import convolution_2d as conv2d_module\n+from chainer.utils import type_check\n \n \n class NonparameterizedConvolution2D(function.Function):\n@@ -23,6 +26,30 @@\n \n self.use_cudnn = use_cudnn\n \n+ def check_type_forward(self, in_types):\n+ type_check.expect(\n+ 2 <= in_types.size(),\n+ in_types.size() <= 3,\n+ )\n+\n+ x_type = in_types[0]\n+ w_type = in_types[1]\n+ type_check.expect(\n+ x_type.dtype == numpy.float32,\n+ w_type.dtype == numpy.float32,\n+ x_type.ndim == 4,\n+ w_type.ndim == 4,\n+ x_type.shape[1] == w_type.shape[1],\n+ )\n+\n+ if in_types.size().eval() == 3:\n+ b_type = in_types[2]\n+ type_check.expect(\n+ b_type.dtype == numpy.float32,\n+ b_type.ndim == 1,\n+ b_type.shape[0] == w_type.shape[0],\n+ )\n+\n def forward(self, x):\n W = x[1]\n b = None\n", "issue": "Add type check to NonparameterizedConvolution2D function\nRelated to #123 \n\n", "before_files": [{"content": "from chainer import cuda\nfrom chainer import function\nfrom chainer.functions import convolution_2d as conv2d_module\n\n\nclass NonparameterizedConvolution2D(function.Function):\n\n \"\"\"Two-dimensional nonparameterized convolution class.\n\n Args:\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n use_cudnn (bool): If True, then this function uses CuDNN if available.\n\n .. seealso:: :class:`Convolution2D`\n\n \"\"\"\n def __init__(self, stride=1, pad=0, use_cudnn=True):\n self.stride = stride\n self.pad = pad\n\n self.use_cudnn = use_cudnn\n\n def forward(self, x):\n W = x[1]\n b = None\n if len(x) == 3:\n b = x[2]\n func = conv2d_module.Convolution2D(\n W.shape[1], W.shape[0], W.shape[2:],\n stride=self.stride, pad=self.pad, use_cudnn=self.use_cudnn,\n initialW=W, initial_bias=b)\n self.func = func\n if any(isinstance(i, cuda.GPUArray) for i in x):\n func.to_gpu()\n return func.forward(x[:1])\n\n def backward(self, x, gy):\n func = self.func\n func.zero_grads()\n gx = func.backward(x[:1], gy)\n if func.gb is None:\n return (gx[0], func.gW)\n return (gx[0], func.gW, func.gb)\n\n\ndef convolution_2d(x, W, b=None, stride=1, pad=0, use_cudnn=True):\n \"\"\"Two-dimensional convolution function.\n\n Args:\n x (~chainer.Variable): Input variable.\n W (~chainer.Variable): Weight variable.\n b (~chainer.Variable): Bias variable.\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n use_cudnn (bool): If True, then this function uses CuDNN if available.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n .. 
seealso:: :class:`Convolution2D`\n\n \"\"\"\n return NonparameterizedConvolution2D(\n stride=stride, pad=pad, use_cudnn=use_cudnn)(x, W, b)\n", "path": "chainer/functions/nonparameterized_convolution_2d.py"}], "after_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.functions import convolution_2d as conv2d_module\nfrom chainer.utils import type_check\n\n\nclass NonparameterizedConvolution2D(function.Function):\n\n \"\"\"Two-dimensional nonparameterized convolution class.\n\n Args:\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n use_cudnn (bool): If True, then this function uses CuDNN if available.\n\n .. seealso:: :class:`Convolution2D`\n\n \"\"\"\n def __init__(self, stride=1, pad=0, use_cudnn=True):\n self.stride = stride\n self.pad = pad\n\n self.use_cudnn = use_cudnn\n\n def check_type_forward(self, in_types):\n type_check.expect(\n 2 <= in_types.size(),\n in_types.size() <= 3,\n )\n\n x_type = in_types[0]\n w_type = in_types[1]\n type_check.expect(\n x_type.dtype == numpy.float32,\n w_type.dtype == numpy.float32,\n x_type.ndim == 4,\n w_type.ndim == 4,\n x_type.shape[1] == w_type.shape[1],\n )\n\n if in_types.size().eval() == 3:\n b_type = in_types[2]\n type_check.expect(\n b_type.dtype == numpy.float32,\n b_type.ndim == 1,\n b_type.shape[0] == w_type.shape[0],\n )\n\n def forward(self, x):\n W = x[1]\n b = None\n if len(x) == 3:\n b = x[2]\n func = conv2d_module.Convolution2D(\n W.shape[1], W.shape[0], W.shape[2:],\n stride=self.stride, pad=self.pad, use_cudnn=self.use_cudnn,\n initialW=W, initial_bias=b)\n self.func = func\n if any(isinstance(i, cuda.GPUArray) for i in x):\n func.to_gpu()\n return func.forward(x[:1])\n\n def backward(self, x, gy):\n func = self.func\n func.zero_grads()\n gx = func.backward(x[:1], gy)\n if func.gb is None:\n return (gx[0], func.gW)\n return (gx[0], func.gW, func.gb)\n\n\ndef convolution_2d(x, W, b=None, stride=1, pad=0, use_cudnn=True):\n \"\"\"Two-dimensional convolution function.\n\n Args:\n x (~chainer.Variable): Input variable.\n W (~chainer.Variable): Weight variable.\n b (~chainer.Variable): Bias variable.\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n use_cudnn (bool): If True, then this function uses CuDNN if available.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n .. seealso:: :class:`Convolution2D`\n\n \"\"\"\n return NonparameterizedConvolution2D(\n stride=stride, pad=pad, use_cudnn=use_cudnn)(x, W, b)\n", "path": "chainer/functions/nonparameterized_convolution_2d.py"}]} | 1,039 | 361 |
gh_patches_debug_23091 | rasdani/github-patches | git_diff | pytorch__ignite-984 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate ignite.contrib.handlers.custom_events.CustomPeriodicEvent
## 🚀 Feature
Custom events `CustomPeriodicEvent` from contrib seem heavy and unusable.
Idea is to
- [ ] raise a warning about deprecation since v0.4.0 and removing since v0.5.0 and suggest to use filtered events.
- [ ] remove all docs about them
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/handlers/custom_events.py`
Content:
```
1 from ignite.engine import Events, State, EventEnum
2
3
4 class CustomPeriodicEvent:
5 """Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.
6
7 When custom periodic event is created and attached to an engine, the following events are fired:
8 1) K iterations is specified:
9 - `Events.ITERATIONS_<K>_STARTED`
10 - `Events.ITERATIONS_<K>_COMPLETED`
11
12 1) K epochs is specified:
13 - `Events.EPOCHS_<K>_STARTED`
14 - `Events.EPOCHS_<K>_COMPLETED`
15
16
17 Examples:
18
19 .. code-block:: python
20
21 from ignite.engine import Engine, Events
22 from ignite.contrib.handlers import CustomPeriodicEvent
23
24 # Let's define an event every 1000 iterations
25 cpe1 = CustomPeriodicEvent(n_iterations=1000)
26 cpe1.attach(trainer)
27
28 # Let's define an event every 10 epochs
29 cpe2 = CustomPeriodicEvent(n_epochs=10)
30 cpe2.attach(trainer)
31
32 @trainer.on(cpe1.Events.ITERATIONS_1000_COMPLETED)
33 def on_every_1000_iterations(engine):
34 # run a computation after 1000 iterations
35 # ...
36 print(engine.state.iterations_1000)
37
38 @trainer.on(cpe2.Events.EPOCHS_10_STARTED)
39 def on_every_10_epochs(engine):
40 # run a computation every 10 epochs
41 # ...
42 print(engine.state.epochs_10)
43
44
45 Args:
46 n_iterations (int, optional): number iterations of the custom periodic event
47 n_epochs (int, optional): number iterations of the custom periodic event. Argument is optional, but only one,
48 either n_iterations or n_epochs should defined.
49
50 """
51
52 def __init__(self, n_iterations=None, n_epochs=None):
53
54 if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):
55 raise ValueError("Argument n_iterations should be positive integer number")
56
57 if n_epochs is not None and (not isinstance(n_epochs, int) or n_epochs < 1):
58 raise ValueError("Argument n_epochs should be positive integer number")
59
60 if (n_iterations is None and n_epochs is None) or (n_iterations and n_epochs):
61 raise ValueError("Either n_iterations or n_epochs should defined")
62
63 if n_iterations:
64 prefix = "iterations"
65 self.state_attr = "iteration"
66 self.period = n_iterations
67
68 if n_epochs:
69 prefix = "epochs"
70 self.state_attr = "epoch"
71 self.period = n_epochs
72
73 self.custom_state_attr = "{}_{}".format(prefix, self.period)
74 event_name = "{}_{}".format(prefix.upper(), self.period)
75 setattr(
76 self,
77 "Events",
78 EventEnum("Events", " ".join(["{}_STARTED".format(event_name), "{}_COMPLETED".format(event_name)])),
79 )
80
81 # Update State.event_to_attr
82 for e in self.Events:
83 State.event_to_attr[e] = self.custom_state_attr
84
85 # Create aliases
86 self._periodic_event_started = getattr(self.Events, "{}_STARTED".format(event_name))
87 self._periodic_event_completed = getattr(self.Events, "{}_COMPLETED".format(event_name))
88
89 def _on_started(self, engine):
90 setattr(engine.state, self.custom_state_attr, 0)
91
92 def _on_periodic_event_started(self, engine):
93 if getattr(engine.state, self.state_attr) % self.period == 1:
94 setattr(engine.state, self.custom_state_attr, getattr(engine.state, self.custom_state_attr) + 1)
95 engine.fire_event(self._periodic_event_started)
96
97 def _on_periodic_event_completed(self, engine):
98 if getattr(engine.state, self.state_attr) % self.period == 0:
99 engine.fire_event(self._periodic_event_completed)
100
101 def attach(self, engine):
102 engine.register_events(*self.Events)
103
104 engine.add_event_handler(Events.STARTED, self._on_started)
105 engine.add_event_handler(
106 getattr(Events, "{}_STARTED".format(self.state_attr.upper())), self._on_periodic_event_started
107 )
108 engine.add_event_handler(
109 getattr(Events, "{}_COMPLETED".format(self.state_attr.upper())), self._on_periodic_event_completed
110 )
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/contrib/handlers/custom_events.py b/ignite/contrib/handlers/custom_events.py
--- a/ignite/contrib/handlers/custom_events.py
+++ b/ignite/contrib/handlers/custom_events.py
@@ -1,8 +1,11 @@
from ignite.engine import Events, State, EventEnum
+import warnings
class CustomPeriodicEvent:
- """Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.
+ """DEPRECATED. Use filtered events instead.
+ Handler to define a custom periodic events as a number of elapsed iterations/epochs
+ for an engine.
When custom periodic event is created and attached to an engine, the following events are fired:
1) K iterations is specified:
@@ -51,6 +54,11 @@
def __init__(self, n_iterations=None, n_epochs=None):
+ warnings.warn(
+ "CustomPeriodicEvent is deprecated since 0.4.0 and will be removed in 0.5.0. Use filtered events instead.",
+ DeprecationWarning,
+ )
+
if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):
raise ValueError("Argument n_iterations should be positive integer number")
| {"golden_diff": "diff --git a/ignite/contrib/handlers/custom_events.py b/ignite/contrib/handlers/custom_events.py\n--- a/ignite/contrib/handlers/custom_events.py\n+++ b/ignite/contrib/handlers/custom_events.py\n@@ -1,8 +1,11 @@\n from ignite.engine import Events, State, EventEnum\n+import warnings\n \n \n class CustomPeriodicEvent:\n- \"\"\"Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.\n+ \"\"\"DEPRECATED. Use filtered events instead.\n+ Handler to define a custom periodic events as a number of elapsed iterations/epochs\n+ for an engine.\n \n When custom periodic event is created and attached to an engine, the following events are fired:\n 1) K iterations is specified:\n@@ -51,6 +54,11 @@\n \n def __init__(self, n_iterations=None, n_epochs=None):\n \n+ warnings.warn(\n+ \"CustomPeriodicEvent is deprecated since 0.4.0 and will be removed in 0.5.0. Use filtered events instead.\",\n+ DeprecationWarning,\n+ )\n+\n if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):\n raise ValueError(\"Argument n_iterations should be positive integer number\")\n", "issue": "Deprecate ignite.contrib.handlers.custom_events.CustomPeriodicEvent\n## \ud83d\ude80 Feature\r\n\r\nCustom events `CustomPeriodicEvent` from contrib seem heavy and unusable. \r\n\r\nIdea is to \r\n\r\n- [ ] raise a warning about deprecation since v0.4.0 and removing since v0.5.0 and suggest to use filtered events.\r\n- [ ] remove all docs about them \r\n\n", "before_files": [{"content": "from ignite.engine import Events, State, EventEnum\n\n\nclass CustomPeriodicEvent:\n \"\"\"Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.\n\n When custom periodic event is created and attached to an engine, the following events are fired:\n 1) K iterations is specified:\n - `Events.ITERATIONS_<K>_STARTED`\n - `Events.ITERATIONS_<K>_COMPLETED`\n\n 1) K epochs is specified:\n - `Events.EPOCHS_<K>_STARTED`\n - `Events.EPOCHS_<K>_COMPLETED`\n\n\n Examples:\n\n .. code-block:: python\n\n from ignite.engine import Engine, Events\n from ignite.contrib.handlers import CustomPeriodicEvent\n\n # Let's define an event every 1000 iterations\n cpe1 = CustomPeriodicEvent(n_iterations=1000)\n cpe1.attach(trainer)\n\n # Let's define an event every 10 epochs\n cpe2 = CustomPeriodicEvent(n_epochs=10)\n cpe2.attach(trainer)\n\n @trainer.on(cpe1.Events.ITERATIONS_1000_COMPLETED)\n def on_every_1000_iterations(engine):\n # run a computation after 1000 iterations\n # ...\n print(engine.state.iterations_1000)\n\n @trainer.on(cpe2.Events.EPOCHS_10_STARTED)\n def on_every_10_epochs(engine):\n # run a computation every 10 epochs\n # ...\n print(engine.state.epochs_10)\n\n\n Args:\n n_iterations (int, optional): number iterations of the custom periodic event\n n_epochs (int, optional): number iterations of the custom periodic event. 
Argument is optional, but only one,\n either n_iterations or n_epochs should defined.\n\n \"\"\"\n\n def __init__(self, n_iterations=None, n_epochs=None):\n\n if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):\n raise ValueError(\"Argument n_iterations should be positive integer number\")\n\n if n_epochs is not None and (not isinstance(n_epochs, int) or n_epochs < 1):\n raise ValueError(\"Argument n_epochs should be positive integer number\")\n\n if (n_iterations is None and n_epochs is None) or (n_iterations and n_epochs):\n raise ValueError(\"Either n_iterations or n_epochs should defined\")\n\n if n_iterations:\n prefix = \"iterations\"\n self.state_attr = \"iteration\"\n self.period = n_iterations\n\n if n_epochs:\n prefix = \"epochs\"\n self.state_attr = \"epoch\"\n self.period = n_epochs\n\n self.custom_state_attr = \"{}_{}\".format(prefix, self.period)\n event_name = \"{}_{}\".format(prefix.upper(), self.period)\n setattr(\n self,\n \"Events\",\n EventEnum(\"Events\", \" \".join([\"{}_STARTED\".format(event_name), \"{}_COMPLETED\".format(event_name)])),\n )\n\n # Update State.event_to_attr\n for e in self.Events:\n State.event_to_attr[e] = self.custom_state_attr\n\n # Create aliases\n self._periodic_event_started = getattr(self.Events, \"{}_STARTED\".format(event_name))\n self._periodic_event_completed = getattr(self.Events, \"{}_COMPLETED\".format(event_name))\n\n def _on_started(self, engine):\n setattr(engine.state, self.custom_state_attr, 0)\n\n def _on_periodic_event_started(self, engine):\n if getattr(engine.state, self.state_attr) % self.period == 1:\n setattr(engine.state, self.custom_state_attr, getattr(engine.state, self.custom_state_attr) + 1)\n engine.fire_event(self._periodic_event_started)\n\n def _on_periodic_event_completed(self, engine):\n if getattr(engine.state, self.state_attr) % self.period == 0:\n engine.fire_event(self._periodic_event_completed)\n\n def attach(self, engine):\n engine.register_events(*self.Events)\n\n engine.add_event_handler(Events.STARTED, self._on_started)\n engine.add_event_handler(\n getattr(Events, \"{}_STARTED\".format(self.state_attr.upper())), self._on_periodic_event_started\n )\n engine.add_event_handler(\n getattr(Events, \"{}_COMPLETED\".format(self.state_attr.upper())), self._on_periodic_event_completed\n )\n", "path": "ignite/contrib/handlers/custom_events.py"}], "after_files": [{"content": "from ignite.engine import Events, State, EventEnum\nimport warnings\n\n\nclass CustomPeriodicEvent:\n \"\"\"DEPRECATED. Use filtered events instead.\n Handler to define a custom periodic events as a number of elapsed iterations/epochs\n for an engine.\n\n When custom periodic event is created and attached to an engine, the following events are fired:\n 1) K iterations is specified:\n - `Events.ITERATIONS_<K>_STARTED`\n - `Events.ITERATIONS_<K>_COMPLETED`\n\n 1) K epochs is specified:\n - `Events.EPOCHS_<K>_STARTED`\n - `Events.EPOCHS_<K>_COMPLETED`\n\n\n Examples:\n\n .. 
code-block:: python\n\n from ignite.engine import Engine, Events\n from ignite.contrib.handlers import CustomPeriodicEvent\n\n # Let's define an event every 1000 iterations\n cpe1 = CustomPeriodicEvent(n_iterations=1000)\n cpe1.attach(trainer)\n\n # Let's define an event every 10 epochs\n cpe2 = CustomPeriodicEvent(n_epochs=10)\n cpe2.attach(trainer)\n\n @trainer.on(cpe1.Events.ITERATIONS_1000_COMPLETED)\n def on_every_1000_iterations(engine):\n # run a computation after 1000 iterations\n # ...\n print(engine.state.iterations_1000)\n\n @trainer.on(cpe2.Events.EPOCHS_10_STARTED)\n def on_every_10_epochs(engine):\n # run a computation every 10 epochs\n # ...\n print(engine.state.epochs_10)\n\n\n Args:\n n_iterations (int, optional): number iterations of the custom periodic event\n n_epochs (int, optional): number iterations of the custom periodic event. Argument is optional, but only one,\n either n_iterations or n_epochs should defined.\n\n \"\"\"\n\n def __init__(self, n_iterations=None, n_epochs=None):\n\n warnings.warn(\n \"CustomPeriodicEvent is deprecated since 0.4.0 and will be removed in 0.5.0. Use filtered events instead.\",\n DeprecationWarning,\n )\n\n if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):\n raise ValueError(\"Argument n_iterations should be positive integer number\")\n\n if n_epochs is not None and (not isinstance(n_epochs, int) or n_epochs < 1):\n raise ValueError(\"Argument n_epochs should be positive integer number\")\n\n if (n_iterations is None and n_epochs is None) or (n_iterations and n_epochs):\n raise ValueError(\"Either n_iterations or n_epochs should defined\")\n\n if n_iterations:\n prefix = \"iterations\"\n self.state_attr = \"iteration\"\n self.period = n_iterations\n\n if n_epochs:\n prefix = \"epochs\"\n self.state_attr = \"epoch\"\n self.period = n_epochs\n\n self.custom_state_attr = \"{}_{}\".format(prefix, self.period)\n event_name = \"{}_{}\".format(prefix.upper(), self.period)\n setattr(\n self,\n \"Events\",\n EventEnum(\"Events\", \" \".join([\"{}_STARTED\".format(event_name), \"{}_COMPLETED\".format(event_name)])),\n )\n\n # Update State.event_to_attr\n for e in self.Events:\n State.event_to_attr[e] = self.custom_state_attr\n\n # Create aliases\n self._periodic_event_started = getattr(self.Events, \"{}_STARTED\".format(event_name))\n self._periodic_event_completed = getattr(self.Events, \"{}_COMPLETED\".format(event_name))\n\n def _on_started(self, engine):\n setattr(engine.state, self.custom_state_attr, 0)\n\n def _on_periodic_event_started(self, engine):\n if getattr(engine.state, self.state_attr) % self.period == 1:\n setattr(engine.state, self.custom_state_attr, getattr(engine.state, self.custom_state_attr) + 1)\n engine.fire_event(self._periodic_event_started)\n\n def _on_periodic_event_completed(self, engine):\n if getattr(engine.state, self.state_attr) % self.period == 0:\n engine.fire_event(self._periodic_event_completed)\n\n def attach(self, engine):\n engine.register_events(*self.Events)\n\n engine.add_event_handler(Events.STARTED, self._on_started)\n engine.add_event_handler(\n getattr(Events, \"{}_STARTED\".format(self.state_attr.upper())), self._on_periodic_event_started\n )\n engine.add_event_handler(\n getattr(Events, \"{}_COMPLETED\".format(self.state_attr.upper())), self._on_periodic_event_completed\n )\n", "path": "ignite/contrib/handlers/custom_events.py"}]} | 1,533 | 286 |
gh_patches_debug_720 | rasdani/github-patches | git_diff | liqd__a4-opin-766 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cannot delete user in django admin if user has not uploaded avatar
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/users/signals.py`
Content:
```
1 from django.db.models import signals
2 from django.dispatch import receiver
3
4 from adhocracy4.images import services
5
6 from . import models
7
8
9 @receiver(signals.post_init, sender=models.User)
10 def backup_image_path(sender, instance, **kwargs):
11 instance._current_image_file = instance.avatar
12
13
14 @receiver(signals.post_save, sender=models.User)
15 def delete_old_image(sender, instance, **kwargs):
16 if hasattr(instance, '_current_image_file'):
17 if instance._current_image_file != instance.avatar:
18 services.delete_images([instance._current_image_file])
19
20
21 @receiver(signals.post_delete, sender=models.User)
22 def delete_images_for_User(sender, instance, **kwargs):
23 services.delete_images([instance.avatar])
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/users/signals.py b/euth/users/signals.py
--- a/euth/users/signals.py
+++ b/euth/users/signals.py
@@ -20,4 +20,4 @@
@receiver(signals.post_delete, sender=models.User)
def delete_images_for_User(sender, instance, **kwargs):
- services.delete_images([instance.avatar])
+ services.delete_images([instance._avatar])
| {"golden_diff": "diff --git a/euth/users/signals.py b/euth/users/signals.py\n--- a/euth/users/signals.py\n+++ b/euth/users/signals.py\n@@ -20,4 +20,4 @@\n \n @receiver(signals.post_delete, sender=models.User)\n def delete_images_for_User(sender, instance, **kwargs):\n- services.delete_images([instance.avatar])\n+ services.delete_images([instance._avatar])\n", "issue": "cannot delete user in django admin if user has not uploaded avatar \n\n", "before_files": [{"content": "from django.db.models import signals\nfrom django.dispatch import receiver\n\nfrom adhocracy4.images import services\n\nfrom . import models\n\n\n@receiver(signals.post_init, sender=models.User)\ndef backup_image_path(sender, instance, **kwargs):\n instance._current_image_file = instance.avatar\n\n\n@receiver(signals.post_save, sender=models.User)\ndef delete_old_image(sender, instance, **kwargs):\n if hasattr(instance, '_current_image_file'):\n if instance._current_image_file != instance.avatar:\n services.delete_images([instance._current_image_file])\n\n\n@receiver(signals.post_delete, sender=models.User)\ndef delete_images_for_User(sender, instance, **kwargs):\n services.delete_images([instance.avatar])\n", "path": "euth/users/signals.py"}], "after_files": [{"content": "from django.db.models import signals\nfrom django.dispatch import receiver\n\nfrom adhocracy4.images import services\n\nfrom . import models\n\n\n@receiver(signals.post_init, sender=models.User)\ndef backup_image_path(sender, instance, **kwargs):\n instance._current_image_file = instance.avatar\n\n\n@receiver(signals.post_save, sender=models.User)\ndef delete_old_image(sender, instance, **kwargs):\n if hasattr(instance, '_current_image_file'):\n if instance._current_image_file != instance.avatar:\n services.delete_images([instance._current_image_file])\n\n\n@receiver(signals.post_delete, sender=models.User)\ndef delete_images_for_User(sender, instance, **kwargs):\n services.delete_images([instance._avatar])\n", "path": "euth/users/signals.py"}]} | 464 | 91 |
gh_patches_debug_951 | rasdani/github-patches | git_diff | pytorch__ignite-844 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typehint of ignite._utils._to_hours_mins_secs not satisfied with float
## 🐛 Bug description
That is a so tiny bug. The `typehint` of the following function of `ignite._utils` is not satisfied with a `float` argument
``` python
def _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:
"""Convert seconds to hours, mins, and seconds."""
mins, secs = divmod(time_taken, 60)
hours, mins = divmod(mins, 60)
return hours, mins, secs
```
We have
```python
>>> divmod(10.0,2)
(5.0, 0.0)
```
## Environment
- PyTorch Version (e.g., 1.4): 1.4
- Ignite Version (e.g., 0.3.0): 0.3.0
- OS (e.g., Linux): Linux
- How you installed Ignite (`conda`, `pip`, source): conda
- Python version: 3.7
- Any other relevant information:
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/_utils.py`
Content:
```
1 from typing import Union, Tuple
2
3 # For compatibilty
4 from ignite.utils import convert_tensor, apply_to_tensor, apply_to_type, to_onehot
5
6
7 def _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:
8 """Convert seconds to hours, mins, and seconds."""
9 mins, secs = divmod(time_taken, 60)
10 hours, mins = divmod(mins, 60)
11 return hours, mins, secs
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/_utils.py b/ignite/_utils.py
--- a/ignite/_utils.py
+++ b/ignite/_utils.py
@@ -8,4 +8,4 @@
"""Convert seconds to hours, mins, and seconds."""
mins, secs = divmod(time_taken, 60)
hours, mins = divmod(mins, 60)
- return hours, mins, secs
+ return round(hours), round(mins), round(secs)
| {"golden_diff": "diff --git a/ignite/_utils.py b/ignite/_utils.py\n--- a/ignite/_utils.py\n+++ b/ignite/_utils.py\n@@ -8,4 +8,4 @@\n \"\"\"Convert seconds to hours, mins, and seconds.\"\"\"\n mins, secs = divmod(time_taken, 60)\n hours, mins = divmod(mins, 60)\n- return hours, mins, secs\n+ return round(hours), round(mins), round(secs)\n", "issue": "Typehint of ignite._utils._to_hours_mins_secs not satisfied with float\n## \ud83d\udc1b Bug description\r\n\r\nThat is a so tiny bug. The `typehint` of the following function of `ignite._utils` is not satisfied with a `float` argument\r\n``` python\r\ndef _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:\r\n \"\"\"Convert seconds to hours, mins, and seconds.\"\"\"\r\n mins, secs = divmod(time_taken, 60)\r\n hours, mins = divmod(mins, 60)\r\n return hours, mins, secs\r\n```\r\nWe have\r\n```python\r\n>>> divmod(10.0,2)\r\n(5.0, 0.0)\r\n```\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.4\r\n - Ignite Version (e.g., 0.3.0): 0.3.0\r\n - OS (e.g., Linux): Linux\r\n - How you installed Ignite (`conda`, `pip`, source): conda\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\n", "before_files": [{"content": "from typing import Union, Tuple\n\n# For compatibilty\nfrom ignite.utils import convert_tensor, apply_to_tensor, apply_to_type, to_onehot\n\n\ndef _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:\n \"\"\"Convert seconds to hours, mins, and seconds.\"\"\"\n mins, secs = divmod(time_taken, 60)\n hours, mins = divmod(mins, 60)\n return hours, mins, secs\n", "path": "ignite/_utils.py"}], "after_files": [{"content": "from typing import Union, Tuple\n\n# For compatibilty\nfrom ignite.utils import convert_tensor, apply_to_tensor, apply_to_type, to_onehot\n\n\ndef _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:\n \"\"\"Convert seconds to hours, mins, and seconds.\"\"\"\n mins, secs = divmod(time_taken, 60)\n hours, mins = divmod(mins, 60)\n return round(hours), round(mins), round(secs)\n", "path": "ignite/_utils.py"}]} | 630 | 105 |
gh_patches_debug_16924 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-732 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Backtrace when run without gtk installed
If you try to bring up the graphical interface without gtk installed, you get a backtrace:
```
Traceback (most recent call last):
File "/usr/bin/solaar", line 57, in <module>
solaar.gtk.main()
File "/usr/lib/python3.7/site-packages/solaar/gtk.py", line 90, in main
gi.require_version('Gtk', '3.0')
File "/usr/lib64/python3.7/site-packages/gi/__init__.py", line 127, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gtk not available
```
It does appear that there is code to handle missing modules, but gtk doesn't get imported directly so it wouldn't trigger. Maybe something like this?
```
diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py
index f728420..20683af 100644
--- a/lib/solaar/gtk.py
+++ b/lib/solaar/gtk.py
@@ -87,8 +87,11 @@ def main():
if not args: return
if args.action: return _cli.run(args.action, args.hidraw_path)
- gi = _require('gi', 'python-gi')
- gi.require_version('Gtk', '3.0')
+ try:
+ gi.require_version('Gtk', '3.0')
+ except ValueError:
+ import sys
+ sys.exit("%s: Gtk (version 3) must be installed in order to run the graphical interface." % (NAME))
_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')
try:
```
Can send a PR if desired, but I don't know if that's the right approach.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/solaar/gtk.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- python-mode -*-
3 # -*- coding: UTF-8 -*-
4
5 ## Copyright (C) 2012-2013 Daniel Pavel
6 ##
7 ## This program is free software; you can redistribute it and/or modify
8 ## it under the terms of the GNU General Public License as published by
9 ## the Free Software Foundation; either version 2 of the License, or
10 ## (at your option) any later version.
11 ##
12 ## This program is distributed in the hope that it will be useful,
13 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
14 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 ## GNU General Public License for more details.
16 ##
17 ## You should have received a copy of the GNU General Public License along
18 ## with this program; if not, write to the Free Software Foundation, Inc.,
19 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
20
21 from __future__ import absolute_import, division, print_function, unicode_literals
22
23 import importlib
24
25
26 from solaar import __version__, NAME
27 import solaar.i18n as _i18n
28 import solaar.cli as _cli
29
30 #
31 #
32 #
33
34 def _require(module, os_package):
35 try:
36 return importlib.import_module(module)
37 except ImportError:
38 import sys
39 sys.exit("%s: missing required system package %s" % (NAME, os_package))
40
41
42 def _parse_arguments():
43 import argparse
44 arg_parser = argparse.ArgumentParser(prog=NAME.lower())
45 arg_parser.add_argument('-d', '--debug', action='count', default=0,
46 help='print logging messages, for debugging purposes (may be repeated for extra verbosity)')
47 arg_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',
48 help='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')
49 arg_parser.add_argument('--restart-on-wake-up', action='store_true',
50 help='restart Solaar on sleep wake-up (experimental)')
51 arg_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')
52 arg_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)
53 arg_parser.add_argument('--help-actions', action='store_true',
54 help='print help for the optional actions')
55 arg_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,
56 help='optional actions to perform')
57
58 args = arg_parser.parse_args()
59
60 if args.help_actions:
61 _cli.print_help()
62 return
63
64 if args.window is None:
65 args.window = 'show' # default behaviour is to show main window
66
67 import logging
68 if args.debug > 0:
69 log_level = logging.WARNING - 10 * args.debug
70 log_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'
71 logging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')
72 else:
73 logging.root.addHandler(logging.NullHandler())
74 logging.root.setLevel(logging.ERROR)
75
76 if not args.action:
77 if logging.root.isEnabledFor(logging.INFO):
78 logging.info("language %s (%s), translations path %s", _i18n.language, _i18n.encoding, _i18n.path)
79
80 return args
81
82
83 def main():
84 _require('pyudev', 'python3-pyudev')
85
86 # handle ^C in console
87 import signal
88 signal.signal(signal.SIGINT, signal.SIG_DFL)
89
90 args = _parse_arguments()
91 if not args: return
92 if args.action:
93 # if any argument, run comandline and exit
94 return _cli.run(args.action, args.hidraw_path)
95
96 gi = _require('gi', 'python3-gi or python3-gobject')
97 gi.require_version('Gtk', '3.0')
98 _require('gi.repository.Gtk', 'gir1.2-gtk-3.0')
99
100 try:
101 import solaar.ui as ui
102 import solaar.listener as listener
103 listener.setup_scanner(ui.status_changed, ui.error_dialog)
104
105 import solaar.upower as _upower
106 if args.restart_on_wake_up:
107 _upower.watch(listener.start_all, listener.stop_all)
108 else:
109 _upower.watch(lambda: listener.ping_all(True))
110
111 # main UI event loop
112 ui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide')
113 except Exception as e:
114 import sys
115 sys.exit('%s: error: %s' % (NAME.lower(), e))
116
117
118 if __name__ == '__main__':
119 main()
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py
--- a/lib/solaar/gtk.py
+++ b/lib/solaar/gtk.py
@@ -31,10 +31,12 @@
#
#
-def _require(module, os_package):
+def _require(module, os_package, gi=None, gi_package=None, gi_version=None):
try:
+ if gi is not None:
+ gi.require_version(gi_package,gi_version)
return importlib.import_module(module)
- except ImportError:
+ except (ImportError, ValueError):
import sys
sys.exit("%s: missing required system package %s" % (NAME, os_package))
@@ -94,8 +96,7 @@
return _cli.run(args.action, args.hidraw_path)
gi = _require('gi', 'python3-gi or python3-gobject')
- gi.require_version('Gtk', '3.0')
- _require('gi.repository.Gtk', 'gir1.2-gtk-3.0')
+ _require('gi.repository.Gtk', 'gir1.2-gtk-3.0', gi, 'Gtk', '3.0')
try:
import solaar.ui as ui
| {"golden_diff": "diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py\n--- a/lib/solaar/gtk.py\n+++ b/lib/solaar/gtk.py\n@@ -31,10 +31,12 @@\n #\n #\n \n-def _require(module, os_package):\n+def _require(module, os_package, gi=None, gi_package=None, gi_version=None):\n \ttry:\n+\t\tif gi is not None:\n+\t\t\tgi.require_version(gi_package,gi_version)\n \t\treturn importlib.import_module(module)\n-\texcept ImportError:\n+\texcept (ImportError, ValueError):\n \t\timport sys\n \t\tsys.exit(\"%s: missing required system package %s\" % (NAME, os_package))\n \n@@ -94,8 +96,7 @@\n \t\treturn _cli.run(args.action, args.hidraw_path)\n \n \tgi = _require('gi', 'python3-gi or python3-gobject')\n-\tgi.require_version('Gtk', '3.0')\n-\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\n+\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0', gi, 'Gtk', '3.0')\n \n \ttry:\n \t\timport solaar.ui as ui\n", "issue": "Backtrace when run without gtk installed\nIf you try to bring up the graphical interface without gtk installed, you get a backtrace:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/bin/solaar\", line 57, in <module>\r\n solaar.gtk.main()\r\n File \"/usr/lib/python3.7/site-packages/solaar/gtk.py\", line 90, in main\r\n gi.require_version('Gtk', '3.0')\r\n File \"/usr/lib64/python3.7/site-packages/gi/__init__.py\", line 127, in require_version\r\n raise ValueError('Namespace %s not available' % namespace)\r\nValueError: Namespace Gtk not available\r\n```\r\nIt does appear that there is code to handle missing modules, but gtk doesn't get imported directly so it wouldn't trigger. Maybe something like this?\r\n```\r\ndiff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py\r\nindex f728420..20683af 100644\r\n--- a/lib/solaar/gtk.py\r\n+++ b/lib/solaar/gtk.py\r\n@@ -87,8 +87,11 @@ def main():\r\n if not args: return\r\n if args.action: return _cli.run(args.action, args.hidraw_path)\r\n\r\n- gi = _require('gi', 'python-gi')\r\n- gi.require_version('Gtk', '3.0')\r\n+ try:\r\n+ gi.require_version('Gtk', '3.0')\r\n+ except ValueError:\r\n+ import sys\r\n+ sys.exit(\"%s: Gtk (version 3) must be installed in order to run the graphical interface.\" % (NAME))\r\n _require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\r\n\r\n try:\r\n```\r\nCan send a PR if desired, but I don't know if that's the right approach.\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\n\n\nfrom solaar import __version__, NAME\nimport solaar.i18n as _i18n\nimport solaar.cli as _cli\n\n#\n#\n#\n\ndef _require(module, os_package):\n\ttry:\n\t\treturn importlib.import_module(module)\n\texcept ImportError:\n\t\timport sys\n\t\tsys.exit(\"%s: missing required system package %s\" % (NAME, os_package))\n\n\ndef _parse_arguments():\n\timport argparse\n\targ_parser = argparse.ArgumentParser(prog=NAME.lower())\n\targ_parser.add_argument('-d', '--debug', action='count', default=0,\n\t\t\t\t\t\t\thelp='print logging messages, for debugging purposes (may be repeated for extra verbosity)')\n\targ_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',\n\t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')\n\targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n\t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n\targ_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')\n\targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n\targ_parser.add_argument('--help-actions', action='store_true',\n\t\t\t\t\t\t\thelp='print help for the optional actions')\n\targ_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,\n\t\t\t\t\t\t\thelp='optional actions to perform')\n\n\targs = arg_parser.parse_args()\n\n\tif args.help_actions:\n\t\t_cli.print_help()\n\t\treturn\n\n\tif args.window is None:\n\t\targs.window = 'show' # default behaviour is to show main window\n\n\timport logging\n\tif args.debug > 0:\n\t\tlog_level = logging.WARNING - 10 * args.debug\n\t\tlog_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'\n\t\tlogging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')\n\telse:\n\t\tlogging.root.addHandler(logging.NullHandler())\n\t\tlogging.root.setLevel(logging.ERROR)\n\n\tif not args.action:\n\t\tif logging.root.isEnabledFor(logging.INFO):\n\t\t\tlogging.info(\"language %s (%s), translations path %s\", _i18n.language, _i18n.encoding, _i18n.path)\n\n\treturn args\n\n\ndef main():\n\t_require('pyudev', 'python3-pyudev')\n\n\t# handle ^C in console\n\timport signal\n\tsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\n\targs = _parse_arguments()\n\tif not args: return\n\tif args.action:\n\t\t# if any argument, run comandline and exit\n\t\treturn _cli.run(args.action, args.hidraw_path)\n\n\tgi = _require('gi', 'python3-gi or python3-gobject')\n\tgi.require_version('Gtk', '3.0')\n\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\n\n\ttry:\n\t\timport solaar.ui as ui\n\t\timport solaar.listener as listener\n\t\tlistener.setup_scanner(ui.status_changed, ui.error_dialog)\n\n\t\timport solaar.upower as _upower\n\t\tif args.restart_on_wake_up:\n\t\t\t_upower.watch(listener.start_all, listener.stop_all)\n\t\telse:\n\t\t\t_upower.watch(lambda: listener.ping_all(True))\n\n\t\t# main UI event loop\n\t\tui.run_loop(listener.start_all, listener.stop_all, args.window!='only', 
args.window!='hide')\n\texcept Exception as e:\n\t\timport sys\n\t\tsys.exit('%s: error: %s' % (NAME.lower(), e))\n\n\nif __name__ == '__main__':\n\tmain()\n", "path": "lib/solaar/gtk.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\n\n\nfrom solaar import __version__, NAME\nimport solaar.i18n as _i18n\nimport solaar.cli as _cli\n\n#\n#\n#\n\ndef _require(module, os_package, gi=None, gi_package=None, gi_version=None):\n\ttry:\n\t\tif gi is not None:\n\t\t\tgi.require_version(gi_package,gi_version)\n\t\treturn importlib.import_module(module)\n\texcept (ImportError, ValueError):\n\t\timport sys\n\t\tsys.exit(\"%s: missing required system package %s\" % (NAME, os_package))\n\n\ndef _parse_arguments():\n\timport argparse\n\targ_parser = argparse.ArgumentParser(prog=NAME.lower())\n\targ_parser.add_argument('-d', '--debug', action='count', default=0,\n\t\t\t\t\t\t\thelp='print logging messages, for debugging purposes (may be repeated for extra verbosity)')\n\targ_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',\n\t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. 
Example: /dev/hidraw2')\n\targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n\t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n\targ_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')\n\targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n\targ_parser.add_argument('--help-actions', action='store_true',\n\t\t\t\t\t\t\thelp='print help for the optional actions')\n\targ_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,\n\t\t\t\t\t\t\thelp='optional actions to perform')\n\n\targs = arg_parser.parse_args()\n\n\tif args.help_actions:\n\t\t_cli.print_help()\n\t\treturn\n\n\tif args.window is None:\n\t\targs.window = 'show' # default behaviour is to show main window\n\n\timport logging\n\tif args.debug > 0:\n\t\tlog_level = logging.WARNING - 10 * args.debug\n\t\tlog_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'\n\t\tlogging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')\n\telse:\n\t\tlogging.root.addHandler(logging.NullHandler())\n\t\tlogging.root.setLevel(logging.ERROR)\n\n\tif not args.action:\n\t\tif logging.root.isEnabledFor(logging.INFO):\n\t\t\tlogging.info(\"language %s (%s), translations path %s\", _i18n.language, _i18n.encoding, _i18n.path)\n\n\treturn args\n\n\ndef main():\n\t_require('pyudev', 'python3-pyudev')\n\n\t# handle ^C in console\n\timport signal\n\tsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\n\targs = _parse_arguments()\n\tif not args: return\n\tif args.action:\n\t\t# if any argument, run comandline and exit\n\t\treturn _cli.run(args.action, args.hidraw_path)\n\n\tgi = _require('gi', 'python3-gi or python3-gobject')\n\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0', gi, 'Gtk', '3.0')\n\n\ttry:\n\t\timport solaar.ui as ui\n\t\timport solaar.listener as listener\n\t\tlistener.setup_scanner(ui.status_changed, ui.error_dialog)\n\n\t\timport solaar.upower as _upower\n\t\tif args.restart_on_wake_up:\n\t\t\t_upower.watch(listener.start_all, listener.stop_all)\n\t\telse:\n\t\t\t_upower.watch(lambda: listener.ping_all(True))\n\n\t\t# main UI event loop\n\t\tui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide')\n\texcept Exception as e:\n\t\timport sys\n\t\tsys.exit('%s: error: %s' % (NAME.lower(), e))\n\n\nif __name__ == '__main__':\n\tmain()\n", "path": "lib/solaar/gtk.py"}]} | 2,011 | 280 |
gh_patches_debug_64715 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1243 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs page describing Beta meaning
## 📚 Documentation
Add a page in our docs describing that beta means that one or all of the following are true:
- the feature has unstable dependencies
- the feature may change without notice in future versions
- the feature is not compatible with other flash / pl features
- the performance of the feature has not been verified
Anything else?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/extensions/stability.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from docutils import nodes
15 from docutils.parsers.rst import Directive
16 from docutils.statemachine import StringList
17
18 ADMONITION_TEMPLATE = """
19 .. raw:: html
20
21 <div class="admonition warning {type}">
22 <p class="admonition-title">{title}</p>
23 <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future
24 releases.</p>
25 </div>
26 """
27
28
29 class Beta(Directive):
30 has_content = True
31 required_arguments = 1
32 optional_arguments = 0
33
34 def run(self):
35
36 scope = self.arguments[0]
37
38 admonition_rst = ADMONITION_TEMPLATE.format(type="beta", title="Beta", scope=scope)
39 admonition_list = StringList(admonition_rst.split("\n"))
40 admonition = nodes.paragraph()
41 self.state.nested_parse(admonition_list, self.content_offset, admonition)
42 return [admonition]
43
44
45 def setup(app):
46 app.add_directive("beta", Beta)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/extensions/stability.py b/docs/extensions/stability.py
--- a/docs/extensions/stability.py
+++ b/docs/extensions/stability.py
@@ -20,8 +20,14 @@
<div class="admonition warning {type}">
<p class="admonition-title">{title}</p>
- <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future
- releases.</p>
+ <p>
+
+This {scope} is currently in Beta. The API and functionality may change without warning in future
+releases. :ref:`More details <stability>`.
+
+.. raw:: html
+
+ </p>
</div>
"""
| {"golden_diff": "diff --git a/docs/extensions/stability.py b/docs/extensions/stability.py\n--- a/docs/extensions/stability.py\n+++ b/docs/extensions/stability.py\n@@ -20,8 +20,14 @@\n \n <div class=\"admonition warning {type}\">\n <p class=\"admonition-title\">{title}</p>\n- <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future\n- releases.</p>\n+ <p>\n+\n+This {scope} is currently in Beta. The API and functionality may change without warning in future\n+releases. :ref:`More details <stability>`.\n+\n+.. raw:: html\n+\n+ </p>\n </div>\n \"\"\"\n", "issue": "Docs page describing Beta meaning\n## \ud83d\udcda Documentation\r\n\r\nAdd a page in our docs describing that beta means that one or all of the following are true:\r\n- the feature has unstable dependencies\r\n- the feature may change without notice in future versions\r\n- the feature is not compatible with other flash / pl features\r\n- the performance of the feature has not been verified\r\n\r\nAnything else?\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive\nfrom docutils.statemachine import StringList\n\nADMONITION_TEMPLATE = \"\"\"\n.. raw:: html\n\n <div class=\"admonition warning {type}\">\n <p class=\"admonition-title\">{title}</p>\n <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future\n releases.</p>\n </div>\n\"\"\"\n\n\nclass Beta(Directive):\n has_content = True\n required_arguments = 1\n optional_arguments = 0\n\n def run(self):\n\n scope = self.arguments[0]\n\n admonition_rst = ADMONITION_TEMPLATE.format(type=\"beta\", title=\"Beta\", scope=scope)\n admonition_list = StringList(admonition_rst.split(\"\\n\"))\n admonition = nodes.paragraph()\n self.state.nested_parse(admonition_list, self.content_offset, admonition)\n return [admonition]\n\n\ndef setup(app):\n app.add_directive(\"beta\", Beta)\n", "path": "docs/extensions/stability.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive\nfrom docutils.statemachine import StringList\n\nADMONITION_TEMPLATE = \"\"\"\n.. raw:: html\n\n <div class=\"admonition warning {type}\">\n <p class=\"admonition-title\">{title}</p>\n <p>\n\nThis {scope} is currently in Beta. 
The API and functionality may change without warning in future\nreleases. :ref:`More details <stability>`.\n\n.. raw:: html\n\n </p>\n </div>\n\"\"\"\n\n\nclass Beta(Directive):\n has_content = True\n required_arguments = 1\n optional_arguments = 0\n\n def run(self):\n\n scope = self.arguments[0]\n\n admonition_rst = ADMONITION_TEMPLATE.format(type=\"beta\", title=\"Beta\", scope=scope)\n admonition_list = StringList(admonition_rst.split(\"\\n\"))\n admonition = nodes.paragraph()\n self.state.nested_parse(admonition_list, self.content_offset, admonition)\n return [admonition]\n\n\ndef setup(app):\n app.add_directive(\"beta\", Beta)\n", "path": "docs/extensions/stability.py"}]} | 780 | 161 |
gh_patches_debug_2342 | rasdani/github-patches | git_diff | mozilla__bugbug-411 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use codespell in precommit hook
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `run.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import argparse
7 import csv
8 import os
9 from datetime import datetime, timedelta
10
11 import numpy as np
12
13 from bugbug import repository # noqa
14 from bugbug import bugzilla, db
15 from bugbug.models import get_model_class
16
17 if __name__ == "__main__":
18 parser = argparse.ArgumentParser()
19 parser.add_argument(
20 "--lemmatization",
21 help="Perform lemmatization (using spaCy)",
22 action="store_true",
23 )
24 parser.add_argument("--train", help="Perform training", action="store_true")
25 parser.add_argument(
26 "--goal",
27 help="Goal of the classifier",
28 choices=[
29 # bug classifiers
30 "defect",
31 "regression",
32 "tracking",
33 "qaneeded",
34 "uplift",
35 "component",
36 "devdocneeded",
37 "defectenhancementtask",
38 "assignee",
39 "bugtype",
40 "stepstoreproduce",
41 # commit classifiers
42 "backout",
43 ],
44 default="defect",
45 )
46 parser.add_argument(
47 "--classifier",
48 help="Type of the classifier",
49 choices=["default", "nn"],
50 default="default",
51 )
52 parser.add_argument("--classify", help="Perform evaluation", action="store_true")
53 parser.add_argument(
54 "--generate-sheet",
55 help="Perform evaluation on bugs from last week and generate a csv file",
56 action="store_true",
57 )
58 parser.add_argument("--token", help="Bugzilla token", action="store")
59 parser.add_argument(
60 "--historical", help="Analyze historical bugs", action="store_true"
61 )
62 args = parser.parse_args()
63
64 model_file_name = "{}{}model".format(
65 args.goal, "" if args.classifier == "default" else args.classifier
66 )
67
68 model_class_name = args.goal
69
70 if args.goal == "component":
71 if args.classifier == "default":
72 model_class_name = "component"
73 elif args.classifier == "nn":
74 model_class_name = "component_nn"
75 else:
76 raise ValueError(f"Unkown value {args.classifier}")
77
78 model_class = get_model_class(model_class_name)
79
80 if args.train:
81 db.download()
82
83 if args.historical:
84 model = model_class(args.lemmatization, args.historical)
85 else:
86 model = model_class(args.lemmatization)
87 model.train()
88 else:
89 model = model_class.load(model_file_name)
90
91 if args.classify:
92 for bug in bugzilla.get_bugs():
93 print(
94 f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug["id"] } - { bug["summary"]} '
95 )
96
97 if model.calculate_importance:
98 probas, importances = model.classify(
99 bug, probabilities=True, importances=True
100 )
101
102 feature_names = model.get_feature_names()
103 for i, (importance, index, is_positive) in enumerate(importances):
104 print(
105 f'{i + 1}. \'{feature_names[int(index)]}\' ({"+" if (is_positive) else "-"}{importance})'
106 )
107 else:
108 probas = model.classify(bug, probabilities=True, importances=False)
109
110 if np.argmax(probas) == 1:
111 print(f"Positive! {probas}")
112 else:
113 print(f"Negative! {probas}")
114 input()
115
116 if args.generate_sheet:
117 assert (
118 args.token is not None
119 ), "A Bugzilla token should be set in order to download bugs"
120 today = datetime.utcnow()
121 a_week_ago = today - timedelta(7)
122 bugzilla.set_token(args.token)
123 bugs = bugzilla.download_bugs_between(a_week_ago, today)
124
125 print(f"Classifying {len(bugs)} bugs...")
126
127 rows = [["Bug", f"{args.goal}(model)", args.goal, "Title"]]
128
129 for bug in bugs:
130 p = model.classify(bug, probabilities=True)
131 rows.append(
132 [
133 f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug["id"]}',
134 "y" if p[0][1] >= 0.7 else "n",
135 "",
136 bug["summary"],
137 ]
138 )
139
140 os.makedirs("sheets", exist_ok=True)
141 with open(
142 os.path.join(
143 "sheets",
144 f'{args.goal}-{datetime.utcnow().strftime("%Y-%m-%d")}-labels.csv',
145 ),
146 "w",
147 ) as f:
148 writer = csv.writer(f)
149 writer.writerows(rows)
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/run.py b/run.py
--- a/run.py
+++ b/run.py
@@ -73,7 +73,7 @@
elif args.classifier == "nn":
model_class_name = "component_nn"
else:
- raise ValueError(f"Unkown value {args.classifier}")
+ raise ValueError(f"Unknown value {args.classifier}")
model_class = get_model_class(model_class_name)
| {"golden_diff": "diff --git a/run.py b/run.py\n--- a/run.py\n+++ b/run.py\n@@ -73,7 +73,7 @@\n elif args.classifier == \"nn\":\n model_class_name = \"component_nn\"\n else:\n- raise ValueError(f\"Unkown value {args.classifier}\")\n+ raise ValueError(f\"Unknown value {args.classifier}\")\n \n model_class = get_model_class(model_class_name)\n", "issue": "Use codespell in precommit hook\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport os\nfrom datetime import datetime, timedelta\n\nimport numpy as np\n\nfrom bugbug import repository # noqa\nfrom bugbug import bugzilla, db\nfrom bugbug.models import get_model_class\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--lemmatization\",\n help=\"Perform lemmatization (using spaCy)\",\n action=\"store_true\",\n )\n parser.add_argument(\"--train\", help=\"Perform training\", action=\"store_true\")\n parser.add_argument(\n \"--goal\",\n help=\"Goal of the classifier\",\n choices=[\n # bug classifiers\n \"defect\",\n \"regression\",\n \"tracking\",\n \"qaneeded\",\n \"uplift\",\n \"component\",\n \"devdocneeded\",\n \"defectenhancementtask\",\n \"assignee\",\n \"bugtype\",\n \"stepstoreproduce\",\n # commit classifiers\n \"backout\",\n ],\n default=\"defect\",\n )\n parser.add_argument(\n \"--classifier\",\n help=\"Type of the classifier\",\n choices=[\"default\", \"nn\"],\n default=\"default\",\n )\n parser.add_argument(\"--classify\", help=\"Perform evaluation\", action=\"store_true\")\n parser.add_argument(\n \"--generate-sheet\",\n help=\"Perform evaluation on bugs from last week and generate a csv file\",\n action=\"store_true\",\n )\n parser.add_argument(\"--token\", help=\"Bugzilla token\", action=\"store\")\n parser.add_argument(\n \"--historical\", help=\"Analyze historical bugs\", action=\"store_true\"\n )\n args = parser.parse_args()\n\n model_file_name = \"{}{}model\".format(\n args.goal, \"\" if args.classifier == \"default\" else args.classifier\n )\n\n model_class_name = args.goal\n\n if args.goal == \"component\":\n if args.classifier == \"default\":\n model_class_name = \"component\"\n elif args.classifier == \"nn\":\n model_class_name = \"component_nn\"\n else:\n raise ValueError(f\"Unkown value {args.classifier}\")\n\n model_class = get_model_class(model_class_name)\n\n if args.train:\n db.download()\n\n if args.historical:\n model = model_class(args.lemmatization, args.historical)\n else:\n model = model_class(args.lemmatization)\n model.train()\n else:\n model = model_class.load(model_file_name)\n\n if args.classify:\n for bug in bugzilla.get_bugs():\n print(\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug[\"id\"] } - { bug[\"summary\"]} '\n )\n\n if model.calculate_importance:\n probas, importances = model.classify(\n bug, probabilities=True, importances=True\n )\n\n feature_names = model.get_feature_names()\n for i, (importance, index, is_positive) in enumerate(importances):\n print(\n f'{i + 1}. \\'{feature_names[int(index)]}\\' ({\"+\" if (is_positive) else \"-\"}{importance})'\n )\n else:\n probas = model.classify(bug, probabilities=True, importances=False)\n\n if np.argmax(probas) == 1:\n print(f\"Positive! {probas}\")\n else:\n print(f\"Negative! 
{probas}\")\n input()\n\n if args.generate_sheet:\n assert (\n args.token is not None\n ), \"A Bugzilla token should be set in order to download bugs\"\n today = datetime.utcnow()\n a_week_ago = today - timedelta(7)\n bugzilla.set_token(args.token)\n bugs = bugzilla.download_bugs_between(a_week_ago, today)\n\n print(f\"Classifying {len(bugs)} bugs...\")\n\n rows = [[\"Bug\", f\"{args.goal}(model)\", args.goal, \"Title\"]]\n\n for bug in bugs:\n p = model.classify(bug, probabilities=True)\n rows.append(\n [\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug[\"id\"]}',\n \"y\" if p[0][1] >= 0.7 else \"n\",\n \"\",\n bug[\"summary\"],\n ]\n )\n\n os.makedirs(\"sheets\", exist_ok=True)\n with open(\n os.path.join(\n \"sheets\",\n f'{args.goal}-{datetime.utcnow().strftime(\"%Y-%m-%d\")}-labels.csv',\n ),\n \"w\",\n ) as f:\n writer = csv.writer(f)\n writer.writerows(rows)\n", "path": "run.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport os\nfrom datetime import datetime, timedelta\n\nimport numpy as np\n\nfrom bugbug import repository # noqa\nfrom bugbug import bugzilla, db\nfrom bugbug.models import get_model_class\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--lemmatization\",\n help=\"Perform lemmatization (using spaCy)\",\n action=\"store_true\",\n )\n parser.add_argument(\"--train\", help=\"Perform training\", action=\"store_true\")\n parser.add_argument(\n \"--goal\",\n help=\"Goal of the classifier\",\n choices=[\n # bug classifiers\n \"defect\",\n \"regression\",\n \"tracking\",\n \"qaneeded\",\n \"uplift\",\n \"component\",\n \"devdocneeded\",\n \"defectenhancementtask\",\n \"assignee\",\n \"bugtype\",\n \"stepstoreproduce\",\n # commit classifiers\n \"backout\",\n ],\n default=\"defect\",\n )\n parser.add_argument(\n \"--classifier\",\n help=\"Type of the classifier\",\n choices=[\"default\", \"nn\"],\n default=\"default\",\n )\n parser.add_argument(\"--classify\", help=\"Perform evaluation\", action=\"store_true\")\n parser.add_argument(\n \"--generate-sheet\",\n help=\"Perform evaluation on bugs from last week and generate a csv file\",\n action=\"store_true\",\n )\n parser.add_argument(\"--token\", help=\"Bugzilla token\", action=\"store\")\n parser.add_argument(\n \"--historical\", help=\"Analyze historical bugs\", action=\"store_true\"\n )\n args = parser.parse_args()\n\n model_file_name = \"{}{}model\".format(\n args.goal, \"\" if args.classifier == \"default\" else args.classifier\n )\n\n model_class_name = args.goal\n\n if args.goal == \"component\":\n if args.classifier == \"default\":\n model_class_name = \"component\"\n elif args.classifier == \"nn\":\n model_class_name = \"component_nn\"\n else:\n raise ValueError(f\"Unknown value {args.classifier}\")\n\n model_class = get_model_class(model_class_name)\n\n if args.train:\n db.download()\n\n if args.historical:\n model = model_class(args.lemmatization, args.historical)\n else:\n model = model_class(args.lemmatization)\n model.train()\n else:\n model = model_class.load(model_file_name)\n\n if args.classify:\n for bug in bugzilla.get_bugs():\n print(\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug[\"id\"] } - { bug[\"summary\"]} '\n )\n\n if model.calculate_importance:\n probas, importances = model.classify(\n bug, 
probabilities=True, importances=True\n )\n\n feature_names = model.get_feature_names()\n for i, (importance, index, is_positive) in enumerate(importances):\n print(\n f'{i + 1}. \\'{feature_names[int(index)]}\\' ({\"+\" if (is_positive) else \"-\"}{importance})'\n )\n else:\n probas = model.classify(bug, probabilities=True, importances=False)\n\n if np.argmax(probas) == 1:\n print(f\"Positive! {probas}\")\n else:\n print(f\"Negative! {probas}\")\n input()\n\n if args.generate_sheet:\n assert (\n args.token is not None\n ), \"A Bugzilla token should be set in order to download bugs\"\n today = datetime.utcnow()\n a_week_ago = today - timedelta(7)\n bugzilla.set_token(args.token)\n bugs = bugzilla.download_bugs_between(a_week_ago, today)\n\n print(f\"Classifying {len(bugs)} bugs...\")\n\n rows = [[\"Bug\", f\"{args.goal}(model)\", args.goal, \"Title\"]]\n\n for bug in bugs:\n p = model.classify(bug, probabilities=True)\n rows.append(\n [\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug[\"id\"]}',\n \"y\" if p[0][1] >= 0.7 else \"n\",\n \"\",\n bug[\"summary\"],\n ]\n )\n\n os.makedirs(\"sheets\", exist_ok=True)\n with open(\n os.path.join(\n \"sheets\",\n f'{args.goal}-{datetime.utcnow().strftime(\"%Y-%m-%d\")}-labels.csv',\n ),\n \"w\",\n ) as f:\n writer = csv.writer(f)\n writer.writerows(rows)\n", "path": "run.py"}]} | 1,649 | 93 |
gh_patches_debug_22273 | rasdani/github-patches | git_diff | kserve__kserve-545 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log format is not properly setup for KFServer
/kind bug
**What steps did you take and what happened:**
Log format is not properly setup
```
INFO:root:Copying contents of /mnt/models to local
INFO:root:Registering model:cifar10
INFO:root:Listening on port 8080
INFO:tornado.general:Starting 40 processes
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 11488.05ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 22800.67ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 24200.31ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 8301.00ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38398.63ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38799.67ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 7599.63ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 39800.00ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 32200.33ms
```
**What did you expect to happen:**
The log format should include timestamp.
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Istio Version:
- Knative Version:
- KFServing Version: 0.2.0
- Kubeflow version:
- Minikube version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kfserving/kfserving/kfserver.py`
Content:
```
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import tornado.ioloop
16 import tornado.web
17 import tornado.httpserver
18 import argparse
19 import logging
20 import json
21 from typing import List, Dict
22 from kfserving.handlers.http import PredictHandler, ExplainHandler
23 from kfserving import KFModel
24 from kfserving.constants import constants
25
26 DEFAULT_HTTP_PORT = 8080
27 DEFAULT_GRPC_PORT = 8081
28
29 parser = argparse.ArgumentParser(add_help=False)
30 parser.add_argument('--http_port', default=DEFAULT_HTTP_PORT, type=int,
31 help='The HTTP Port listened to by the model server.')
32 parser.add_argument('--grpc_port', default=DEFAULT_GRPC_PORT, type=int,
33 help='The GRPC Port listened to by the model server.')
34 parser.add_argument('--workers', default=0, type=int,
35 help='The number of works to fork')
36 args, _ = parser.parse_known_args()
37
38 logging.basicConfig(level=constants.KFSERVING_LOGLEVEL)
39
40
41 class KFServer():
42 def __init__(self, http_port: int = args.http_port,
43 grpc_port: int = args.grpc_port,
44 workers: int = args.workers):
45 self.registered_models = {}
46 self.http_port = http_port
47 self.grpc_port = grpc_port
48 self.workers = workers
49 self._http_server = None
50
51 def create_application(self):
52 return tornado.web.Application([
53 # Server Liveness API returns 200 if server is alive.
54 (r"/", LivenessHandler),
55 (r"/v1/models",
56 ListHandler, dict(models=self.registered_models)),
57 # Model Health API returns 200 if model is ready to serve.
58 (r"/v1/models/([a-zA-Z0-9_-]+)",
59 HealthHandler, dict(models=self.registered_models)),
60 (r"/v1/models/([a-zA-Z0-9_-]+):predict",
61 PredictHandler, dict(models=self.registered_models)),
62 (r"/v1/models/([a-zA-Z0-9_-]+):explain",
63 ExplainHandler, dict(models=self.registered_models)),
64 ])
65
66 def start(self, models: List[KFModel]):
67 for model in models:
68 self.register_model(model)
69
70 self._http_server = tornado.httpserver.HTTPServer(
71 self.create_application())
72
73 logging.info("Listening on port %s", self.http_port)
74 self._http_server.bind(self.http_port)
75 logging.info("Will fork %d workers", self.workers)
76 self._http_server.start(self.workers)
77 tornado.ioloop.IOLoop.current().start()
78
79 def register_model(self, model: KFModel):
80 if not model.name:
81 raise Exception(
82 "Failed to register model, model.name must be provided.")
83 self.registered_models[model.name] = model
84 logging.info("Registering model: %s", model.name)
85
86
87 class LivenessHandler(tornado.web.RequestHandler): # pylint:disable=too-few-public-methods
88 def get(self):
89 self.write("Alive")
90
91
92 class HealthHandler(tornado.web.RequestHandler):
93 def initialize(self, models: Dict[str, KFModel]):
94 self.models = models # pylint:disable=attribute-defined-outside-init
95
96 def get(self, name: str):
97 if name not in self.models:
98 raise tornado.web.HTTPError(
99 status_code=404,
100 reason="Model with name %s does not exist." % name
101 )
102
103 model = self.models[name]
104 self.write(json.dumps({
105 "name": model.name,
106 "ready": model.ready
107 }))
108
109
110 class ListHandler(tornado.web.RequestHandler):
111 def initialize(self, models: Dict[str, KFModel]):
112 self.models = models # pylint:disable=attribute-defined-outside-init
113
114 def get(self):
115 self.write(json.dumps(list(self.models.values())))
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/kfserving/kfserving/kfserver.py b/python/kfserving/kfserving/kfserver.py
--- a/python/kfserving/kfserving/kfserver.py
+++ b/python/kfserving/kfserving/kfserver.py
@@ -12,16 +12,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import tornado.ioloop
-import tornado.web
-import tornado.httpserver
import argparse
import logging
import json
from typing import List, Dict
+import tornado.ioloop
+import tornado.web
+import tornado.httpserver
+import tornado.log
from kfserving.handlers.http import PredictHandler, ExplainHandler
from kfserving import KFModel
-from kfserving.constants import constants
DEFAULT_HTTP_PORT = 8080
DEFAULT_GRPC_PORT = 8081
@@ -35,8 +35,7 @@
help='The number of works to fork')
args, _ = parser.parse_known_args()
-logging.basicConfig(level=constants.KFSERVING_LOGLEVEL)
-
+tornado.log.enable_pretty_logging()
class KFServer():
def __init__(self, http_port: int = args.http_port,
| {"golden_diff": "diff --git a/python/kfserving/kfserving/kfserver.py b/python/kfserving/kfserving/kfserver.py\n--- a/python/kfserving/kfserving/kfserver.py\n+++ b/python/kfserving/kfserving/kfserver.py\n@@ -12,16 +12,16 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import tornado.ioloop\n-import tornado.web\n-import tornado.httpserver\n import argparse\n import logging\n import json\n from typing import List, Dict\n+import tornado.ioloop\n+import tornado.web\n+import tornado.httpserver\n+import tornado.log\n from kfserving.handlers.http import PredictHandler, ExplainHandler\n from kfserving import KFModel\n-from kfserving.constants import constants\n \n DEFAULT_HTTP_PORT = 8080\n DEFAULT_GRPC_PORT = 8081\n@@ -35,8 +35,7 @@\n help='The number of works to fork')\n args, _ = parser.parse_known_args()\n \n-logging.basicConfig(level=constants.KFSERVING_LOGLEVEL)\n-\n+tornado.log.enable_pretty_logging()\n \n class KFServer():\n def __init__(self, http_port: int = args.http_port,\n", "issue": "Log format is not properly setup for KFServer\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nLog format is not properly setup\r\n\r\n```\r\nINFO:root:Copying contents of /mnt/models to local\r\nINFO:root:Registering model:cifar10\r\nINFO:root:Listening on port 8080\r\nINFO:tornado.general:Starting 40 processes\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 11488.05ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 22800.67ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 24200.31ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 8301.00ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38398.63ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38799.67ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 7599.63ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 39800.00ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 32200.33ms\r\n```\r\n\r\n**What did you expect to happen:**\r\nThe log format should include timestamp.\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version: 0.2.0\r\n- Kubeflow version:\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. 
from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tornado.ioloop\nimport tornado.web\nimport tornado.httpserver\nimport argparse\nimport logging\nimport json\nfrom typing import List, Dict\nfrom kfserving.handlers.http import PredictHandler, ExplainHandler\nfrom kfserving import KFModel\nfrom kfserving.constants import constants\n\nDEFAULT_HTTP_PORT = 8080\nDEFAULT_GRPC_PORT = 8081\n\nparser = argparse.ArgumentParser(add_help=False)\nparser.add_argument('--http_port', default=DEFAULT_HTTP_PORT, type=int,\n help='The HTTP Port listened to by the model server.')\nparser.add_argument('--grpc_port', default=DEFAULT_GRPC_PORT, type=int,\n help='The GRPC Port listened to by the model server.')\nparser.add_argument('--workers', default=0, type=int,\n help='The number of works to fork')\nargs, _ = parser.parse_known_args()\n\nlogging.basicConfig(level=constants.KFSERVING_LOGLEVEL)\n\n\nclass KFServer():\n def __init__(self, http_port: int = args.http_port,\n grpc_port: int = args.grpc_port,\n workers: int = args.workers):\n self.registered_models = {}\n self.http_port = http_port\n self.grpc_port = grpc_port\n self.workers = workers\n self._http_server = None\n\n def create_application(self):\n return tornado.web.Application([\n # Server Liveness API returns 200 if server is alive.\n (r\"/\", LivenessHandler),\n (r\"/v1/models\",\n ListHandler, dict(models=self.registered_models)),\n # Model Health API returns 200 if model is ready to serve.\n (r\"/v1/models/([a-zA-Z0-9_-]+)\",\n HealthHandler, dict(models=self.registered_models)),\n (r\"/v1/models/([a-zA-Z0-9_-]+):predict\",\n PredictHandler, dict(models=self.registered_models)),\n (r\"/v1/models/([a-zA-Z0-9_-]+):explain\",\n ExplainHandler, dict(models=self.registered_models)),\n ])\n\n def start(self, models: List[KFModel]):\n for model in models:\n self.register_model(model)\n\n self._http_server = tornado.httpserver.HTTPServer(\n self.create_application())\n\n logging.info(\"Listening on port %s\", self.http_port)\n self._http_server.bind(self.http_port)\n logging.info(\"Will fork %d workers\", self.workers)\n self._http_server.start(self.workers)\n tornado.ioloop.IOLoop.current().start()\n\n def register_model(self, model: KFModel):\n if not model.name:\n raise Exception(\n \"Failed to register model, model.name must be provided.\")\n self.registered_models[model.name] = model\n logging.info(\"Registering model: %s\", model.name)\n\n\nclass LivenessHandler(tornado.web.RequestHandler): # pylint:disable=too-few-public-methods\n def get(self):\n self.write(\"Alive\")\n\n\nclass HealthHandler(tornado.web.RequestHandler):\n def initialize(self, models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get(self, name: str):\n if name not in self.models:\n raise tornado.web.HTTPError(\n status_code=404,\n reason=\"Model with name %s does not exist.\" % name\n )\n\n model = self.models[name]\n 
self.write(json.dumps({\n \"name\": model.name,\n \"ready\": model.ready\n }))\n\n\nclass ListHandler(tornado.web.RequestHandler):\n def initialize(self, models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get(self):\n self.write(json.dumps(list(self.models.values())))\n", "path": "python/kfserving/kfserving/kfserver.py"}], "after_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport logging\nimport json\nfrom typing import List, Dict\nimport tornado.ioloop\nimport tornado.web\nimport tornado.httpserver\nimport tornado.log\nfrom kfserving.handlers.http import PredictHandler, ExplainHandler\nfrom kfserving import KFModel\n\nDEFAULT_HTTP_PORT = 8080\nDEFAULT_GRPC_PORT = 8081\n\nparser = argparse.ArgumentParser(add_help=False)\nparser.add_argument('--http_port', default=DEFAULT_HTTP_PORT, type=int,\n help='The HTTP Port listened to by the model server.')\nparser.add_argument('--grpc_port', default=DEFAULT_GRPC_PORT, type=int,\n help='The GRPC Port listened to by the model server.')\nparser.add_argument('--workers', default=0, type=int,\n help='The number of works to fork')\nargs, _ = parser.parse_known_args()\n\ntornado.log.enable_pretty_logging()\n\nclass KFServer():\n def __init__(self, http_port: int = args.http_port,\n grpc_port: int = args.grpc_port,\n workers: int = args.workers):\n self.registered_models = {}\n self.http_port = http_port\n self.grpc_port = grpc_port\n self.workers = workers\n self._http_server = None\n\n def create_application(self):\n return tornado.web.Application([\n # Server Liveness API returns 200 if server is alive.\n (r\"/\", LivenessHandler),\n (r\"/v1/models\",\n ListHandler, dict(models=self.registered_models)),\n # Model Health API returns 200 if model is ready to serve.\n (r\"/v1/models/([a-zA-Z0-9_-]+)\",\n HealthHandler, dict(models=self.registered_models)),\n (r\"/v1/models/([a-zA-Z0-9_-]+):predict\",\n PredictHandler, dict(models=self.registered_models)),\n (r\"/v1/models/([a-zA-Z0-9_-]+):explain\",\n ExplainHandler, dict(models=self.registered_models)),\n ])\n\n def start(self, models: List[KFModel]):\n for model in models:\n self.register_model(model)\n\n self._http_server = tornado.httpserver.HTTPServer(\n self.create_application())\n\n logging.info(\"Listening on port %s\", self.http_port)\n self._http_server.bind(self.http_port)\n logging.info(\"Will fork %d workers\", self.workers)\n self._http_server.start(self.workers)\n tornado.ioloop.IOLoop.current().start()\n\n def register_model(self, model: KFModel):\n if not model.name:\n raise Exception(\n \"Failed to register model, model.name must be provided.\")\n self.registered_models[model.name] = model\n logging.info(\"Registering model: %s\", model.name)\n\n\nclass LivenessHandler(tornado.web.RequestHandler): # pylint:disable=too-few-public-methods\n def get(self):\n self.write(\"Alive\")\n\n\nclass HealthHandler(tornado.web.RequestHandler):\n def initialize(self, 
models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get(self, name: str):\n if name not in self.models:\n raise tornado.web.HTTPError(\n status_code=404,\n reason=\"Model with name %s does not exist.\" % name\n )\n\n model = self.models[name]\n self.write(json.dumps({\n \"name\": model.name,\n \"ready\": model.ready\n }))\n\n\nclass ListHandler(tornado.web.RequestHandler):\n def initialize(self, models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get(self):\n self.write(json.dumps(list(self.models.values())))\n", "path": "python/kfserving/kfserving/kfserver.py"}]} | 2,017 | 270 |
gh_patches_debug_35906 | rasdani/github-patches | git_diff | streamlink__streamlink-5754 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.bigo: Unable to parse JSON
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest release
### Description
Hello,
the bigo.py is not working at the moment.
It is giving a parse JSON error.
Debug log is following...
### Debug log
```text
error: Unable to parse JSON: Expecting value: line 1 column 1 (char 0) ('<!DOCTYPE html>\n<html lang="en" s ...)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/bigo.py`
Content:
```
1 """
2 $description Global live streaming platform for live video game broadcasts and individual live streams.
3 $url live.bigo.tv
4 $url bigoweb.co
5 $type live
6 """
7
8 import re
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.plugin.api import useragents, validate
12 from streamlink.stream.hls import HLSStream
13
14
15 @pluginmatcher(re.compile(
16 r"https?://(?:www\.)?bigo\.tv/([^/]+)$",
17 ))
18 class Bigo(Plugin):
19 _api_url = "https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}"
20
21 _video_info_schema = validate.Schema({
22 "code": 0,
23 "msg": "success",
24 "data": {
25 "videoSrc": validate.any(None, "", validate.url()),
26 },
27 })
28
29 def _get_streams(self):
30 res = self.session.http.get(
31 self._api_url.format(self.match.group(1)),
32 allow_redirects=True,
33 headers={"User-Agent": useragents.IPHONE_6},
34 )
35 data = self.session.http.json(res, schema=self._video_info_schema)
36 videourl = data["data"]["videoSrc"]
37 if videourl:
38 yield "live", HLSStream(self.session, videourl)
39
40
41 __plugin__ = Bigo
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/bigo.py b/src/streamlink/plugins/bigo.py
--- a/src/streamlink/plugins/bigo.py
+++ b/src/streamlink/plugins/bigo.py
@@ -1,41 +1,68 @@
"""
-$description Global live streaming platform for live video game broadcasts and individual live streams.
-$url live.bigo.tv
-$url bigoweb.co
+$description Global live-streaming platform for live video game broadcasts and individual live streams.
+$url bigo.tv
$type live
+$metadata id
+$metadata author
+$metadata category
+$metadata title
"""
+import logging
import re
from streamlink.plugin import Plugin, pluginmatcher
-from streamlink.plugin.api import useragents, validate
+from streamlink.plugin.api import validate
from streamlink.stream.hls import HLSStream
+log = logging.getLogger(__name__)
+
+
@pluginmatcher(re.compile(
- r"https?://(?:www\.)?bigo\.tv/([^/]+)$",
+ r"https?://(?:www\.)?bigo\.tv/(?P<site_id>[^/]+)$",
))
class Bigo(Plugin):
- _api_url = "https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}"
-
- _video_info_schema = validate.Schema({
- "code": 0,
- "msg": "success",
- "data": {
- "videoSrc": validate.any(None, "", validate.url()),
- },
- })
+ _URL_API = "https://ta.bigo.tv/official_website/studio/getInternalStudioInfo"
def _get_streams(self):
- res = self.session.http.get(
- self._api_url.format(self.match.group(1)),
- allow_redirects=True,
- headers={"User-Agent": useragents.IPHONE_6},
+ self.id, self.author, self.category, self.title, hls_url = self.session.http.post(
+ self._URL_API,
+ params={
+ "siteId": self.match["site_id"],
+ "verify": "",
+ },
+ schema=validate.Schema(
+ validate.parse_json(),
+ {
+ "code": 0,
+ "msg": "success",
+ "data": {
+ "roomId": validate.any(None, str),
+ "clientBigoId": validate.any(None, str),
+ "gameTitle": str,
+ "roomTopic": str,
+ "hls_src": validate.any(None, "", validate.url()),
+ },
+ },
+ validate.union_get(
+ ("data", "roomId"),
+ ("data", "clientBigoId"),
+ ("data", "gameTitle"),
+ ("data", "roomTopic"),
+ ("data", "hls_src"),
+ ),
+ ),
)
- data = self.session.http.json(res, schema=self._video_info_schema)
- videourl = data["data"]["videoSrc"]
- if videourl:
- yield "live", HLSStream(self.session, videourl)
+
+ if not self.id:
+ return
+
+ if not hls_url:
+ log.info("Channel is offline")
+ return
+
+ yield "live", HLSStream(self.session, hls_url)
__plugin__ = Bigo
| {"golden_diff": "diff --git a/src/streamlink/plugins/bigo.py b/src/streamlink/plugins/bigo.py\n--- a/src/streamlink/plugins/bigo.py\n+++ b/src/streamlink/plugins/bigo.py\n@@ -1,41 +1,68 @@\n \"\"\"\n-$description Global live streaming platform for live video game broadcasts and individual live streams.\n-$url live.bigo.tv\n-$url bigoweb.co\n+$description Global live-streaming platform for live video game broadcasts and individual live streams.\n+$url bigo.tv\n $type live\n+$metadata id\n+$metadata author\n+$metadata category\n+$metadata title\n \"\"\"\n \n+import logging\n import re\n \n from streamlink.plugin import Plugin, pluginmatcher\n-from streamlink.plugin.api import useragents, validate\n+from streamlink.plugin.api import validate\n from streamlink.stream.hls import HLSStream\n \n \n+log = logging.getLogger(__name__)\n+\n+\n @pluginmatcher(re.compile(\n- r\"https?://(?:www\\.)?bigo\\.tv/([^/]+)$\",\n+ r\"https?://(?:www\\.)?bigo\\.tv/(?P<site_id>[^/]+)$\",\n ))\n class Bigo(Plugin):\n- _api_url = \"https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}\"\n-\n- _video_info_schema = validate.Schema({\n- \"code\": 0,\n- \"msg\": \"success\",\n- \"data\": {\n- \"videoSrc\": validate.any(None, \"\", validate.url()),\n- },\n- })\n+ _URL_API = \"https://ta.bigo.tv/official_website/studio/getInternalStudioInfo\"\n \n def _get_streams(self):\n- res = self.session.http.get(\n- self._api_url.format(self.match.group(1)),\n- allow_redirects=True,\n- headers={\"User-Agent\": useragents.IPHONE_6},\n+ self.id, self.author, self.category, self.title, hls_url = self.session.http.post(\n+ self._URL_API,\n+ params={\n+ \"siteId\": self.match[\"site_id\"],\n+ \"verify\": \"\",\n+ },\n+ schema=validate.Schema(\n+ validate.parse_json(),\n+ {\n+ \"code\": 0,\n+ \"msg\": \"success\",\n+ \"data\": {\n+ \"roomId\": validate.any(None, str),\n+ \"clientBigoId\": validate.any(None, str),\n+ \"gameTitle\": str,\n+ \"roomTopic\": str,\n+ \"hls_src\": validate.any(None, \"\", validate.url()),\n+ },\n+ },\n+ validate.union_get(\n+ (\"data\", \"roomId\"),\n+ (\"data\", \"clientBigoId\"),\n+ (\"data\", \"gameTitle\"),\n+ (\"data\", \"roomTopic\"),\n+ (\"data\", \"hls_src\"),\n+ ),\n+ ),\n )\n- data = self.session.http.json(res, schema=self._video_info_schema)\n- videourl = data[\"data\"][\"videoSrc\"]\n- if videourl:\n- yield \"live\", HLSStream(self.session, videourl)\n+\n+ if not self.id:\n+ return\n+\n+ if not hls_url:\n+ log.info(\"Channel is offline\")\n+ return\n+\n+ yield \"live\", HLSStream(self.session, hls_url)\n \n \n __plugin__ = Bigo\n", "issue": "plugins.bigo: Unable to parse JSON\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest release\n\n### Description\n\nHello,\r\n\r\nthe bigo.py is not working at the moment.\r\n\r\nIt is giving a parse JSON error.\r\n\r\nDebug log is following...\n\n### Debug log\n\n```text\nerror: Unable to parse JSON: Expecting value: line 1 column 1 (char 0) ('<!DOCTYPE html>\\n<html 
lang=\"en\" s ...)\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Global live streaming platform for live video game broadcasts and individual live streams.\n$url live.bigo.tv\n$url bigoweb.co\n$type live\n\"\"\"\n\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import useragents, validate\nfrom streamlink.stream.hls import HLSStream\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?bigo\\.tv/([^/]+)$\",\n))\nclass Bigo(Plugin):\n _api_url = \"https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}\"\n\n _video_info_schema = validate.Schema({\n \"code\": 0,\n \"msg\": \"success\",\n \"data\": {\n \"videoSrc\": validate.any(None, \"\", validate.url()),\n },\n })\n\n def _get_streams(self):\n res = self.session.http.get(\n self._api_url.format(self.match.group(1)),\n allow_redirects=True,\n headers={\"User-Agent\": useragents.IPHONE_6},\n )\n data = self.session.http.json(res, schema=self._video_info_schema)\n videourl = data[\"data\"][\"videoSrc\"]\n if videourl:\n yield \"live\", HLSStream(self.session, videourl)\n\n\n__plugin__ = Bigo\n", "path": "src/streamlink/plugins/bigo.py"}], "after_files": [{"content": "\"\"\"\n$description Global live-streaming platform for live video game broadcasts and individual live streams.\n$url bigo.tv\n$type live\n$metadata id\n$metadata author\n$metadata category\n$metadata title\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?bigo\\.tv/(?P<site_id>[^/]+)$\",\n))\nclass Bigo(Plugin):\n _URL_API = \"https://ta.bigo.tv/official_website/studio/getInternalStudioInfo\"\n\n def _get_streams(self):\n self.id, self.author, self.category, self.title, hls_url = self.session.http.post(\n self._URL_API,\n params={\n \"siteId\": self.match[\"site_id\"],\n \"verify\": \"\",\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"code\": 0,\n \"msg\": \"success\",\n \"data\": {\n \"roomId\": validate.any(None, str),\n \"clientBigoId\": validate.any(None, str),\n \"gameTitle\": str,\n \"roomTopic\": str,\n \"hls_src\": validate.any(None, \"\", validate.url()),\n },\n },\n validate.union_get(\n (\"data\", \"roomId\"),\n (\"data\", \"clientBigoId\"),\n (\"data\", \"gameTitle\"),\n (\"data\", \"roomTopic\"),\n (\"data\", \"hls_src\"),\n ),\n ),\n )\n\n if not self.id:\n return\n\n if not hls_url:\n log.info(\"Channel is offline\")\n return\n\n yield \"live\", HLSStream(self.session, hls_url)\n\n\n__plugin__ = Bigo\n", "path": "src/streamlink/plugins/bigo.py"}]} | 866 | 724 |
gh_patches_debug_12103 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-4326 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
soundfile hook has osx/windows specific behaviour
hook-soundfile.py was added in 3.5 but it blows up on linux as follows
9727 INFO: Loading module hook "hook-soundfile.py"...
Unable to find "/home/matt/.virtualenvs/beqdesigner-entpycF3/lib/python3.7/site-packages/_soundfile_data" when adding binary and data files.
on OSX, it also fails but with a different error
ValueError: Unknown Mach-O header: 0x20202020 in <_io.BufferedReader
name='/Users/travis/build/3ll3d00d/beqdesigner/.venv/lib/python3.7/site-packages/_soundfile_data/COPYING'>
It completes successfully on Windows
The problem is that pysoundfile packages libsndfile on Windows and OSX (as per https://pysoundfile.readthedocs.io/en/0.9.0/#installation) but relies on a system package on Linux so the mentioned directory (`_soundfile_data`) will not exist on Linux. On OSX only a certain file is required (`_soundfile_data/libsndfile.dylib`)
Minimal test case can be found at https://github.com/3ll3d00d/pyinstaller-pysoundfile-bug
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-soundfile.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2016-2019, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9
10 """
11 pysoundfile:
12 https://github.com/bastibe/SoundFile
13 """
14
15 import os
16 from PyInstaller.utils.hooks import get_package_paths
17
18 # get path of soundfile
19 sfp = get_package_paths('soundfile')
20
21 # add the binaries
22 bins = os.path.join(sfp[0], "_soundfile_data")
23 binaries = [(bins, "_soundfile_data")]
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/hook-soundfile.py b/PyInstaller/hooks/hook-soundfile.py
--- a/PyInstaller/hooks/hook-soundfile.py
+++ b/PyInstaller/hooks/hook-soundfile.py
@@ -13,11 +13,20 @@
"""
import os
+
+from PyInstaller.compat import is_win, is_darwin
from PyInstaller.utils.hooks import get_package_paths
# get path of soundfile
sfp = get_package_paths('soundfile')
-# add the binaries
-bins = os.path.join(sfp[0], "_soundfile_data")
-binaries = [(bins, "_soundfile_data")]
+# add binaries packaged by soundfile on OSX and Windows
+# an external dependency (libsndfile) is used on GNU/Linux
+path = None
+if is_win:
+ path = os.path.join(sfp[0], '_soundfile_data')
+elif is_darwin:
+ path = os.path.join(sfp[0], '_soundfile_data', 'libsndfile.dylib')
+
+if path is not None and os.path.exists(path):
+ binaries = [(path, "_soundfile_data")]
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-soundfile.py b/PyInstaller/hooks/hook-soundfile.py\n--- a/PyInstaller/hooks/hook-soundfile.py\n+++ b/PyInstaller/hooks/hook-soundfile.py\n@@ -13,11 +13,20 @@\n \"\"\"\n \n import os\n+\n+from PyInstaller.compat import is_win, is_darwin\n from PyInstaller.utils.hooks import get_package_paths\n \n # get path of soundfile\n sfp = get_package_paths('soundfile')\n \n-# add the binaries\n-bins = os.path.join(sfp[0], \"_soundfile_data\")\n-binaries = [(bins, \"_soundfile_data\")]\n+# add binaries packaged by soundfile on OSX and Windows\n+# an external dependency (libsndfile) is used on GNU/Linux\n+path = None\n+if is_win:\n+ path = os.path.join(sfp[0], '_soundfile_data')\n+elif is_darwin:\n+ path = os.path.join(sfp[0], '_soundfile_data', 'libsndfile.dylib')\n+\n+if path is not None and os.path.exists(path):\n+ binaries = [(path, \"_soundfile_data\")]\n", "issue": "soundfile hook has osx/windows specific behaviour\nhook-soundfile.py was added in 3.5 but it blows up on linux as follows\r\n\r\n 9727 INFO: Loading module hook \"hook-soundfile.py\"...\r\n Unable to find \"/home/matt/.virtualenvs/beqdesigner-entpycF3/lib/python3.7/site-packages/_soundfile_data\" when adding binary and data files.\r\n\r\non OSX, it also fails but with a different error\r\n\r\n ValueError: Unknown Mach-O header: 0x20202020 in <_io.BufferedReader \r\n name='/Users/travis/build/3ll3d00d/beqdesigner/.venv/lib/python3.7/site-packages/_soundfile_data/COPYING'>\r\n\r\nIt completes successfully on Windows\r\n\r\nThe problem is that pysoundfile packages libsndfile on Windows and OSX (as per https://pysoundfile.readthedocs.io/en/0.9.0/#installation) but relies on a system package on Linux so the mentioned directory (`_soundfile_data`) will not exist on Linux. 
On OSX only a certain file is required (`_soundfile_data/libsndfile.dylib`)\r\n\r\nMinimal test case can be found at https://github.com/3ll3d00d/pyinstaller-pysoundfile-bug\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2016-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n\"\"\"\npysoundfile:\nhttps://github.com/bastibe/SoundFile\n\"\"\"\n\nimport os\nfrom PyInstaller.utils.hooks import get_package_paths\n\n# get path of soundfile\nsfp = get_package_paths('soundfile')\n\n# add the binaries\nbins = os.path.join(sfp[0], \"_soundfile_data\")\nbinaries = [(bins, \"_soundfile_data\")]\n", "path": "PyInstaller/hooks/hook-soundfile.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2016-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n\"\"\"\npysoundfile:\nhttps://github.com/bastibe/SoundFile\n\"\"\"\n\nimport os\n\nfrom PyInstaller.compat import is_win, is_darwin\nfrom PyInstaller.utils.hooks import get_package_paths\n\n# get path of soundfile\nsfp = get_package_paths('soundfile')\n\n# add binaries packaged by soundfile on OSX and Windows\n# an external dependency (libsndfile) is used on GNU/Linux\npath = None\nif is_win:\n path = os.path.join(sfp[0], '_soundfile_data')\nelif is_darwin:\n path = os.path.join(sfp[0], '_soundfile_data', 'libsndfile.dylib')\n\nif path is not None and os.path.exists(path):\n binaries = [(path, \"_soundfile_data\")]\n", "path": "PyInstaller/hooks/hook-soundfile.py"}]} | 730 | 257 |
gh_patches_debug_35619 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-616 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Uncaught 404s in record viewsets and pagination classes
## Description
The record viewset, column viewset and pagination classes regularly call `Table.objects.get(id=table_pk)`, which throws a `mathesar.models.Table.DoesNotExist: Table matching query does not exist.` when an invalid table id is passed.
To recreate, run `client.get(f'/api/v0/tables/3000/records/')`.
## Expected behavior
We should ensure that the table exists before querying, or catch the `DoesNotExist` error after querying. We should also include tests for table 404s.
This is probably best done after #488 is merged, as it includes a function to do exactly this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/api/pagination.py`
Content:
```
1 from collections import OrderedDict
2
3 from rest_framework.pagination import LimitOffsetPagination
4 from rest_framework.response import Response
5
6
7 class DefaultLimitOffsetPagination(LimitOffsetPagination):
8 default_limit = 50
9 max_limit = 500
10
11 def get_paginated_response(self, data):
12 return Response(OrderedDict([
13 ('count', self.count),
14 ('results', data)
15 ]))
16
17
18 class ColumnLimitOffsetPagination(DefaultLimitOffsetPagination):
19
20 def paginate_queryset(self, queryset, request, table_id):
21 self.limit = self.get_limit(request)
22 if self.limit is None:
23 self.limit = self.default_limit
24 self.offset = self.get_offset(request)
25 table = queryset.get(id=table_id)
26 self.count = len(table.sa_columns)
27 self.request = request
28 return list(table.sa_columns)[self.offset:self.offset + self.limit]
29
30
31 class TableLimitOffsetPagination(DefaultLimitOffsetPagination):
32
33 def paginate_queryset(self, queryset, request, table_id,
34 filters=[], order_by=[]):
35 self.limit = self.get_limit(request)
36 if self.limit is None:
37 self.limit = self.default_limit
38 self.offset = self.get_offset(request)
39 # TODO: Cache count value somewhere, since calculating it is expensive.
40 table = queryset.get(id=table_id)
41 self.count = table.sa_num_records(filters=filters)
42 self.request = request
43
44 return table.get_records(
45 self.limit, self.offset, filters=filters, order_by=order_by,
46 )
47
48
49 class TableLimitOffsetGroupPagination(TableLimitOffsetPagination):
50 def get_paginated_response(self, data):
51 return Response(OrderedDict([
52 ('count', self.count),
53 ('group_count', self.group_count),
54 ('results', data)
55 ]))
56
57 def paginate_queryset(self, queryset, request, table_id,
58 filters=[], order_by=[], group_count_by=[]):
59 records = super().paginate_queryset(
60 queryset, request, table_id, filters=filters, order_by=order_by
61 )
62
63 table = queryset.get(id=table_id)
64 if group_count_by:
65 group_count = table.get_group_counts(
66 group_count_by, self.limit, self.offset,
67 filters=filters, order_by=order_by
68 )
69 # Convert the tuple keys into strings so it can be converted to JSON
70 group_count = [{"values": list(cols), "count": count}
71 for cols, count in group_count.items()]
72 self.group_count = {
73 'group_count_by': group_count_by,
74 'results': group_count,
75 }
76 else:
77 self.group_count = {
78 'group_count_by': None,
79 'results': None,
80 }
81
82 return records
83
```
Path: `mathesar/api/utils.py`
Content:
```
1 from rest_framework.exceptions import NotFound
2
3 from mathesar.models import Table
4
5
6 def get_table_or_404(pk):
7 try:
8 table = Table.objects.get(id=pk)
9 except Table.DoesNotExist:
10 raise NotFound
11 return table
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/api/pagination.py b/mathesar/api/pagination.py
--- a/mathesar/api/pagination.py
+++ b/mathesar/api/pagination.py
@@ -3,6 +3,8 @@
from rest_framework.pagination import LimitOffsetPagination
from rest_framework.response import Response
+from mathesar.api.utils import get_table_or_404
+
class DefaultLimitOffsetPagination(LimitOffsetPagination):
default_limit = 50
@@ -22,7 +24,7 @@
if self.limit is None:
self.limit = self.default_limit
self.offset = self.get_offset(request)
- table = queryset.get(id=table_id)
+ table = get_table_or_404(pk=table_id)
self.count = len(table.sa_columns)
self.request = request
return list(table.sa_columns)[self.offset:self.offset + self.limit]
@@ -37,7 +39,7 @@
self.limit = self.default_limit
self.offset = self.get_offset(request)
# TODO: Cache count value somewhere, since calculating it is expensive.
- table = queryset.get(id=table_id)
+ table = get_table_or_404(pk=table_id)
self.count = table.sa_num_records(filters=filters)
self.request = request
@@ -60,7 +62,7 @@
queryset, request, table_id, filters=filters, order_by=order_by
)
- table = queryset.get(id=table_id)
+ table = get_table_or_404(pk=table_id)
if group_count_by:
group_count = table.get_group_counts(
group_count_by, self.limit, self.offset,
diff --git a/mathesar/api/utils.py b/mathesar/api/utils.py
--- a/mathesar/api/utils.py
+++ b/mathesar/api/utils.py
@@ -4,6 +4,13 @@
def get_table_or_404(pk):
+ """
+ Get table if it exists, otherwise throws a DRF NotFound error.
+ Args:
+ pk: id of table
+ Returns:
+ table: return the table based on a specific id
+ """
try:
table = Table.objects.get(id=pk)
except Table.DoesNotExist:
| {"golden_diff": "diff --git a/mathesar/api/pagination.py b/mathesar/api/pagination.py\n--- a/mathesar/api/pagination.py\n+++ b/mathesar/api/pagination.py\n@@ -3,6 +3,8 @@\n from rest_framework.pagination import LimitOffsetPagination\n from rest_framework.response import Response\n \n+from mathesar.api.utils import get_table_or_404\n+\n \n class DefaultLimitOffsetPagination(LimitOffsetPagination):\n default_limit = 50\n@@ -22,7 +24,7 @@\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n- table = queryset.get(id=table_id)\n+ table = get_table_or_404(pk=table_id)\n self.count = len(table.sa_columns)\n self.request = request\n return list(table.sa_columns)[self.offset:self.offset + self.limit]\n@@ -37,7 +39,7 @@\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n # TODO: Cache count value somewhere, since calculating it is expensive.\n- table = queryset.get(id=table_id)\n+ table = get_table_or_404(pk=table_id)\n self.count = table.sa_num_records(filters=filters)\n self.request = request\n \n@@ -60,7 +62,7 @@\n queryset, request, table_id, filters=filters, order_by=order_by\n )\n \n- table = queryset.get(id=table_id)\n+ table = get_table_or_404(pk=table_id)\n if group_count_by:\n group_count = table.get_group_counts(\n group_count_by, self.limit, self.offset,\ndiff --git a/mathesar/api/utils.py b/mathesar/api/utils.py\n--- a/mathesar/api/utils.py\n+++ b/mathesar/api/utils.py\n@@ -4,6 +4,13 @@\n \n \n def get_table_or_404(pk):\n+ \"\"\"\n+ Get table if it exists, otherwise throws a DRF NotFound error.\n+ Args:\n+ pk: id of table\n+ Returns:\n+ table: return the table based on a specific id\n+ \"\"\"\n try:\n table = Table.objects.get(id=pk)\n except Table.DoesNotExist:\n", "issue": "Uncaught 404s in record viewsets and pagination classes\n## Description\r\nThe record viewset, column viewset and paignation classes regularly call `Table.objects.get(id=table_pk)`, which throws a `mathesar.models.Table.DoesNotExist: Table matching query does not exist.` when an invalid table id is passed.\r\n\r\nTo recreate, run `client.get(f'/api/v0/tables/3000/records/')`.\r\n\r\n\r\n## Expected behavior\r\nWe should ensure that the table exists before querying, or catch the `DoesNotExist` error after querying. We should also include tests for table 404s. 
\r\n\r\nThis is probably best done after #488 is merged, as it includes a function to do exactly this.\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom rest_framework.pagination import LimitOffsetPagination\nfrom rest_framework.response import Response\n\n\nclass DefaultLimitOffsetPagination(LimitOffsetPagination):\n default_limit = 50\n max_limit = 500\n\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('results', data)\n ]))\n\n\nclass ColumnLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n table = queryset.get(id=table_id)\n self.count = len(table.sa_columns)\n self.request = request\n return list(table.sa_columns)[self.offset:self.offset + self.limit]\n\n\nclass TableLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[]):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n # TODO: Cache count value somewhere, since calculating it is expensive.\n table = queryset.get(id=table_id)\n self.count = table.sa_num_records(filters=filters)\n self.request = request\n\n return table.get_records(\n self.limit, self.offset, filters=filters, order_by=order_by,\n )\n\n\nclass TableLimitOffsetGroupPagination(TableLimitOffsetPagination):\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('group_count', self.group_count),\n ('results', data)\n ]))\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[], group_count_by=[]):\n records = super().paginate_queryset(\n queryset, request, table_id, filters=filters, order_by=order_by\n )\n\n table = queryset.get(id=table_id)\n if group_count_by:\n group_count = table.get_group_counts(\n group_count_by, self.limit, self.offset,\n filters=filters, order_by=order_by\n )\n # Convert the tuple keys into strings so it can be converted to JSON\n group_count = [{\"values\": list(cols), \"count\": count}\n for cols, count in group_count.items()]\n self.group_count = {\n 'group_count_by': group_count_by,\n 'results': group_count,\n }\n else:\n self.group_count = {\n 'group_count_by': None,\n 'results': None,\n }\n\n return records\n", "path": "mathesar/api/pagination.py"}, {"content": "from rest_framework.exceptions import NotFound\n\nfrom mathesar.models import Table\n\n\ndef get_table_or_404(pk):\n try:\n table = Table.objects.get(id=pk)\n except Table.DoesNotExist:\n raise NotFound\n return table\n", "path": "mathesar/api/utils.py"}], "after_files": [{"content": "from collections import OrderedDict\n\nfrom rest_framework.pagination import LimitOffsetPagination\nfrom rest_framework.response import Response\n\nfrom mathesar.api.utils import get_table_or_404\n\n\nclass DefaultLimitOffsetPagination(LimitOffsetPagination):\n default_limit = 50\n max_limit = 500\n\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('results', data)\n ]))\n\n\nclass ColumnLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = 
self.get_offset(request)\n table = get_table_or_404(pk=table_id)\n self.count = len(table.sa_columns)\n self.request = request\n return list(table.sa_columns)[self.offset:self.offset + self.limit]\n\n\nclass TableLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[]):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n # TODO: Cache count value somewhere, since calculating it is expensive.\n table = get_table_or_404(pk=table_id)\n self.count = table.sa_num_records(filters=filters)\n self.request = request\n\n return table.get_records(\n self.limit, self.offset, filters=filters, order_by=order_by,\n )\n\n\nclass TableLimitOffsetGroupPagination(TableLimitOffsetPagination):\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('group_count', self.group_count),\n ('results', data)\n ]))\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[], group_count_by=[]):\n records = super().paginate_queryset(\n queryset, request, table_id, filters=filters, order_by=order_by\n )\n\n table = get_table_or_404(pk=table_id)\n if group_count_by:\n group_count = table.get_group_counts(\n group_count_by, self.limit, self.offset,\n filters=filters, order_by=order_by\n )\n # Convert the tuple keys into strings so it can be converted to JSON\n group_count = [{\"values\": list(cols), \"count\": count}\n for cols, count in group_count.items()]\n self.group_count = {\n 'group_count_by': group_count_by,\n 'results': group_count,\n }\n else:\n self.group_count = {\n 'group_count_by': None,\n 'results': None,\n }\n\n return records\n", "path": "mathesar/api/pagination.py"}, {"content": "from rest_framework.exceptions import NotFound\n\nfrom mathesar.models import Table\n\n\ndef get_table_or_404(pk):\n \"\"\"\n Get table if it exists, otherwise throws a DRF NotFound error.\n Args:\n pk: id of table\n Returns:\n table: return the table based on a specific id\n \"\"\"\n try:\n table = Table.objects.get(id=pk)\n except Table.DoesNotExist:\n raise NotFound\n return table\n", "path": "mathesar/api/utils.py"}]} | 1,228 | 487 |
gh_patches_debug_3160 | rasdani/github-patches | git_diff | ipython__ipython-7560 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Displaying a widget using displayhook produces misaligned Out[N] prompt

This doesn't look right. @jdfreder, can you investigate?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/kernel/zmq/displayhook.py`
Content:
```
1 """Replacements for sys.displayhook that publish over ZMQ."""
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 import sys
7
8 from IPython.core.displayhook import DisplayHook
9 from IPython.kernel.inprocess.socket import SocketABC
10 from IPython.utils.jsonutil import encode_images
11 from IPython.utils.py3compat import builtin_mod
12 from IPython.utils.traitlets import Instance, Dict
13 from .session import extract_header, Session
14
15 class ZMQDisplayHook(object):
16 """A simple displayhook that publishes the object's repr over a ZeroMQ
17 socket."""
18 topic=b'execute_result'
19
20 def __init__(self, session, pub_socket):
21 self.session = session
22 self.pub_socket = pub_socket
23 self.parent_header = {}
24
25 def __call__(self, obj):
26 if obj is None:
27 return
28
29 builtin_mod._ = obj
30 sys.stdout.flush()
31 sys.stderr.flush()
32 msg = self.session.send(self.pub_socket, u'execute_result', {u'data':repr(obj)},
33 parent=self.parent_header, ident=self.topic)
34
35 def set_parent(self, parent):
36 self.parent_header = extract_header(parent)
37
38
39 class ZMQShellDisplayHook(DisplayHook):
40 """A displayhook subclass that publishes data using ZeroMQ. This is intended
41 to work with an InteractiveShell instance. It sends a dict of different
42 representations of the object."""
43 topic=None
44
45 session = Instance(Session)
46 pub_socket = Instance(SocketABC)
47 parent_header = Dict({})
48
49 def set_parent(self, parent):
50 """Set the parent for outbound messages."""
51 self.parent_header = extract_header(parent)
52
53 def start_displayhook(self):
54 self.msg = self.session.msg(u'execute_result', {
55 'data': {},
56 'metadata': {},
57 }, parent=self.parent_header)
58
59 def write_output_prompt(self):
60 """Write the output prompt."""
61 self.msg['content']['execution_count'] = self.prompt_count
62
63 def write_format_data(self, format_dict, md_dict=None):
64 self.msg['content']['data'] = encode_images(format_dict)
65 self.msg['content']['metadata'] = md_dict
66
67 def finish_displayhook(self):
68 """Finish up all displayhook activities."""
69 sys.stdout.flush()
70 sys.stderr.flush()
71 self.session.send(self.pub_socket, self.msg, ident=self.topic)
72 self.msg = None
73
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/IPython/kernel/zmq/displayhook.py b/IPython/kernel/zmq/displayhook.py
--- a/IPython/kernel/zmq/displayhook.py
+++ b/IPython/kernel/zmq/displayhook.py
@@ -68,6 +68,7 @@
"""Finish up all displayhook activities."""
sys.stdout.flush()
sys.stderr.flush()
- self.session.send(self.pub_socket, self.msg, ident=self.topic)
+ if self.msg['content']['data']:
+ self.session.send(self.pub_socket, self.msg, ident=self.topic)
self.msg = None
| {"golden_diff": "diff --git a/IPython/kernel/zmq/displayhook.py b/IPython/kernel/zmq/displayhook.py\n--- a/IPython/kernel/zmq/displayhook.py\n+++ b/IPython/kernel/zmq/displayhook.py\n@@ -68,6 +68,7 @@\n \"\"\"Finish up all displayhook activities.\"\"\"\n sys.stdout.flush()\n sys.stderr.flush()\n- self.session.send(self.pub_socket, self.msg, ident=self.topic)\n+ if self.msg['content']['data']:\n+ self.session.send(self.pub_socket, self.msg, ident=self.topic)\n self.msg = None\n", "issue": "Displaying a widget using displayhook produces misaligned Out[N] prompt\n\n\nThis doesn't look right. @jdfreder, can you investigate?\n\n", "before_files": [{"content": "\"\"\"Replacements for sys.displayhook that publish over ZMQ.\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport sys\n\nfrom IPython.core.displayhook import DisplayHook\nfrom IPython.kernel.inprocess.socket import SocketABC\nfrom IPython.utils.jsonutil import encode_images\nfrom IPython.utils.py3compat import builtin_mod\nfrom IPython.utils.traitlets import Instance, Dict\nfrom .session import extract_header, Session\n\nclass ZMQDisplayHook(object):\n \"\"\"A simple displayhook that publishes the object's repr over a ZeroMQ\n socket.\"\"\"\n topic=b'execute_result'\n\n def __init__(self, session, pub_socket):\n self.session = session\n self.pub_socket = pub_socket\n self.parent_header = {}\n\n def __call__(self, obj):\n if obj is None:\n return\n\n builtin_mod._ = obj\n sys.stdout.flush()\n sys.stderr.flush()\n msg = self.session.send(self.pub_socket, u'execute_result', {u'data':repr(obj)},\n parent=self.parent_header, ident=self.topic)\n\n def set_parent(self, parent):\n self.parent_header = extract_header(parent)\n\n\nclass ZMQShellDisplayHook(DisplayHook):\n \"\"\"A displayhook subclass that publishes data using ZeroMQ. This is intended\n to work with an InteractiveShell instance. 
It sends a dict of different\n representations of the object.\"\"\"\n topic=None\n\n session = Instance(Session)\n pub_socket = Instance(SocketABC)\n parent_header = Dict({})\n\n def set_parent(self, parent):\n \"\"\"Set the parent for outbound messages.\"\"\"\n self.parent_header = extract_header(parent)\n\n def start_displayhook(self):\n self.msg = self.session.msg(u'execute_result', {\n 'data': {},\n 'metadata': {},\n }, parent=self.parent_header)\n\n def write_output_prompt(self):\n \"\"\"Write the output prompt.\"\"\"\n self.msg['content']['execution_count'] = self.prompt_count\n\n def write_format_data(self, format_dict, md_dict=None):\n self.msg['content']['data'] = encode_images(format_dict)\n self.msg['content']['metadata'] = md_dict\n\n def finish_displayhook(self):\n \"\"\"Finish up all displayhook activities.\"\"\"\n sys.stdout.flush()\n sys.stderr.flush()\n self.session.send(self.pub_socket, self.msg, ident=self.topic)\n self.msg = None\n\n", "path": "IPython/kernel/zmq/displayhook.py"}], "after_files": [{"content": "\"\"\"Replacements for sys.displayhook that publish over ZMQ.\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport sys\n\nfrom IPython.core.displayhook import DisplayHook\nfrom IPython.kernel.inprocess.socket import SocketABC\nfrom IPython.utils.jsonutil import encode_images\nfrom IPython.utils.py3compat import builtin_mod\nfrom IPython.utils.traitlets import Instance, Dict\nfrom .session import extract_header, Session\n\nclass ZMQDisplayHook(object):\n \"\"\"A simple displayhook that publishes the object's repr over a ZeroMQ\n socket.\"\"\"\n topic=b'execute_result'\n\n def __init__(self, session, pub_socket):\n self.session = session\n self.pub_socket = pub_socket\n self.parent_header = {}\n\n def __call__(self, obj):\n if obj is None:\n return\n\n builtin_mod._ = obj\n sys.stdout.flush()\n sys.stderr.flush()\n msg = self.session.send(self.pub_socket, u'execute_result', {u'data':repr(obj)},\n parent=self.parent_header, ident=self.topic)\n\n def set_parent(self, parent):\n self.parent_header = extract_header(parent)\n\n\nclass ZMQShellDisplayHook(DisplayHook):\n \"\"\"A displayhook subclass that publishes data using ZeroMQ. This is intended\n to work with an InteractiveShell instance. It sends a dict of different\n representations of the object.\"\"\"\n topic=None\n\n session = Instance(Session)\n pub_socket = Instance(SocketABC)\n parent_header = Dict({})\n\n def set_parent(self, parent):\n \"\"\"Set the parent for outbound messages.\"\"\"\n self.parent_header = extract_header(parent)\n\n def start_displayhook(self):\n self.msg = self.session.msg(u'execute_result', {\n 'data': {},\n 'metadata': {},\n }, parent=self.parent_header)\n\n def write_output_prompt(self):\n \"\"\"Write the output prompt.\"\"\"\n self.msg['content']['execution_count'] = self.prompt_count\n\n def write_format_data(self, format_dict, md_dict=None):\n self.msg['content']['data'] = encode_images(format_dict)\n self.msg['content']['metadata'] = md_dict\n\n def finish_displayhook(self):\n \"\"\"Finish up all displayhook activities.\"\"\"\n sys.stdout.flush()\n sys.stderr.flush()\n if self.msg['content']['data']:\n self.session.send(self.pub_socket, self.msg, ident=self.topic)\n self.msg = None\n\n", "path": "IPython/kernel/zmq/displayhook.py"}]} | 1,033 | 123 |
gh_patches_debug_61971 | rasdani/github-patches | git_diff | crytic__slither-1110 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug-Candidate]: Phi-node print missing 'f' in f-string
### Describe the issue:
When printing a Phi-node the string is not formatted.
There seems to be a 'f' missing ahead of the str in https://github.com/crytic/slither/blob/dev/slither/slithir/operations/phi.py#L36
### Code example to reproduce the issue:
slither tests/complex_func.sol --print slithir-ssa
### Version:
dev-branch dd91f770f61eaadc286e2af3c72fb5798e376c16
### Relevant log output:
```
Contract Increment
Function Increment.increaseBy1()
IRs:
{self.lvalue}({self.lvalue.type}) := ϕ({[str(v) for v in self._rvalues]})
Expression: i += 1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/slithir/operations/phi.py`
Content:
```
1 from slither.slithir.operations.lvalue import OperationWithLValue
2 from slither.slithir.utils.utils import is_valid_lvalue
3
4
5 class Phi(OperationWithLValue):
6 def __init__(self, left_variable, nodes):
7 # When Phi operations are created the
8 # correct indexes of the variables are not yet computed
9 # We store the nodes where the variables are written
10 # so we can update the rvalues of the Phi operation
11 # after its instantiation
12 assert is_valid_lvalue(left_variable)
13 assert isinstance(nodes, set)
14 super().__init__()
15 self._lvalue = left_variable
16 self._rvalues = []
17 self._nodes = nodes
18
19 @property
20 def read(self):
21 return self.rvalues
22
23 @property
24 def rvalues(self):
25 return self._rvalues
26
27 @rvalues.setter
28 def rvalues(self, vals):
29 self._rvalues = vals
30
31 @property
32 def nodes(self):
33 return self._nodes
34
35 def __str__(self):
36 return "{self.lvalue}({self.lvalue.type}) := \u03D5({[str(v) for v in self._rvalues]})"
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/slithir/operations/phi.py b/slither/slithir/operations/phi.py
--- a/slither/slithir/operations/phi.py
+++ b/slither/slithir/operations/phi.py
@@ -33,4 +33,4 @@
return self._nodes
def __str__(self):
- return "{self.lvalue}({self.lvalue.type}) := \u03D5({[str(v) for v in self._rvalues]})"
+ return f"{self.lvalue}({self.lvalue.type}) := \u03D5({[str(v) for v in self._rvalues]})"
| {"golden_diff": "diff --git a/slither/slithir/operations/phi.py b/slither/slithir/operations/phi.py\n--- a/slither/slithir/operations/phi.py\n+++ b/slither/slithir/operations/phi.py\n@@ -33,4 +33,4 @@\n return self._nodes\n \n def __str__(self):\n- return \"{self.lvalue}({self.lvalue.type}) := \\u03D5({[str(v) for v in self._rvalues]})\"\n+ return f\"{self.lvalue}({self.lvalue.type}) := \\u03D5({[str(v) for v in self._rvalues]})\"\n", "issue": "[Bug-Candidate]: Phi-node print missing 'f' in f-string\n### Describe the issue:\n\nWhen printing a Phi-node the string is not formatted.\r\nThere seems to be a 'f' missing ahead of the str in https://github.com/crytic/slither/blob/dev/slither/slithir/operations/phi.py#L36\n\n### Code example to reproduce the issue:\n\nslither tests/complex_func.sol --print slithir-ssa\n\n### Version:\n\ndev-branch dd91f770f61eaadc286e2af3c72fb5798e376c16\n\n### Relevant log output:\n\n```\r\nContract Increment\r\n Function Increment.increaseBy1()\r\n IRs:\r\n {self.lvalue}({self.lvalue.type}) := \u03d5({[str(v) for v in self._rvalues]})\r\n Expression: i += 1\r\n```\n", "before_files": [{"content": "from slither.slithir.operations.lvalue import OperationWithLValue\nfrom slither.slithir.utils.utils import is_valid_lvalue\n\n\nclass Phi(OperationWithLValue):\n def __init__(self, left_variable, nodes):\n # When Phi operations are created the\n # correct indexes of the variables are not yet computed\n # We store the nodes where the variables are written\n # so we can update the rvalues of the Phi operation\n # after its instantiation\n assert is_valid_lvalue(left_variable)\n assert isinstance(nodes, set)\n super().__init__()\n self._lvalue = left_variable\n self._rvalues = []\n self._nodes = nodes\n\n @property\n def read(self):\n return self.rvalues\n\n @property\n def rvalues(self):\n return self._rvalues\n\n @rvalues.setter\n def rvalues(self, vals):\n self._rvalues = vals\n\n @property\n def nodes(self):\n return self._nodes\n\n def __str__(self):\n return \"{self.lvalue}({self.lvalue.type}) := \\u03D5({[str(v) for v in self._rvalues]})\"\n", "path": "slither/slithir/operations/phi.py"}], "after_files": [{"content": "from slither.slithir.operations.lvalue import OperationWithLValue\nfrom slither.slithir.utils.utils import is_valid_lvalue\n\n\nclass Phi(OperationWithLValue):\n def __init__(self, left_variable, nodes):\n # When Phi operations are created the\n # correct indexes of the variables are not yet computed\n # We store the nodes where the variables are written\n # so we can update the rvalues of the Phi operation\n # after its instantiation\n assert is_valid_lvalue(left_variable)\n assert isinstance(nodes, set)\n super().__init__()\n self._lvalue = left_variable\n self._rvalues = []\n self._nodes = nodes\n\n @property\n def read(self):\n return self.rvalues\n\n @property\n def rvalues(self):\n return self._rvalues\n\n @rvalues.setter\n def rvalues(self, vals):\n self._rvalues = vals\n\n @property\n def nodes(self):\n return self._nodes\n\n def __str__(self):\n return f\"{self.lvalue}({self.lvalue.type}) := \\u03D5({[str(v) for v in self._rvalues]})\"\n", "path": "slither/slithir/operations/phi.py"}]} | 788 | 148 |
gh_patches_debug_31305 | rasdani/github-patches | git_diff | sosreport__sos-2660 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[ssh] Can ssh plugin check the permissions set for /home/*/.ssh files?
Hello!
When users set wrong permissions to files in their ~/.ssh/ folder, i.e. they set write permission for `~/.ssh/authenticated_keys` for `other`, the SSH server will refuse to accept connections for this user.
I think it would be nice for the [ssh] plugin to check, if the permissions set for files in the `.ssh` folders of system users are correct, or if they are corrupted in some way.
A very simple solution for that would be to just run `ls -l .ssh/` in every home directory. Would it be OK to extend the ssh plugin to do this? Would it be better to have a separate plugin do this?
Let me know what you think, and I'll give it a try if that's ok :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sos/report/plugins/ssh.py`
Content:
```
1 # Copyright (C) 2007 Red Hat, Inc., Eugene Teo <[email protected]>
2
3 # This file is part of the sos project: https://github.com/sosreport/sos
4 #
5 # This copyrighted material is made available to anyone wishing to use,
6 # modify, copy, or redistribute it subject to the terms and conditions of
7 # version 2 of the GNU General Public License.
8 #
9 # See the LICENSE file in the source distribution for further information.
10
11 from sos.report.plugins import Plugin, IndependentPlugin
12
13
14 class Ssh(Plugin, IndependentPlugin):
15
16 short_desc = 'Secure shell service'
17
18 plugin_name = 'ssh'
19 profiles = ('services', 'security', 'system', 'identity')
20
21 def setup(self):
22
23 self.add_file_tags({
24 '/etc/ssh/sshd_config': 'sshd_config',
25 '/etc/ssh/ssh_config': 'ssh_config'
26 })
27
28 sshcfgs = [
29 "/etc/ssh/ssh_config",
30 "/etc/ssh/sshd_config"
31 ]
32
33 # Include main config files
34 self.add_copy_spec(sshcfgs)
35
36 # Read configs for any includes and copy those
37 try:
38 for sshcfg in sshcfgs:
39 tag = sshcfg.split('/')[-1]
40 with open(sshcfg, 'r') as cfgfile:
41 for line in cfgfile:
42 # skip empty lines and comments
43 if len(line.split()) == 0 or line.startswith('#'):
44 continue
45 # ssh_config keywords are allowed as case-insensitive
46 if line.lower().startswith('include'):
47 confarg = line.split()
48 self.add_copy_spec(confarg[1], tags=tag)
49 except Exception:
50 pass
51
52
53 # vim: set et ts=4 sw=4 :
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sos/report/plugins/ssh.py b/sos/report/plugins/ssh.py
--- a/sos/report/plugins/ssh.py
+++ b/sos/report/plugins/ssh.py
@@ -9,6 +9,7 @@
# See the LICENSE file in the source distribution for further information.
from sos.report.plugins import Plugin, IndependentPlugin
+import os.path
class Ssh(Plugin, IndependentPlugin):
@@ -33,6 +34,10 @@
# Include main config files
self.add_copy_spec(sshcfgs)
+ self.included_configs(sshcfgs)
+ self.user_ssh_files_permissions()
+
+ def included_configs(self, sshcfgs):
# Read configs for any includes and copy those
try:
for sshcfg in sshcfgs:
@@ -49,5 +54,33 @@
except Exception:
pass
+ def user_ssh_files_permissions(self):
+ """
+ Iterate over .ssh folders in user homes to see their permissions.
+
+ Bad permissions can prevent SSH from allowing access to given user.
+ """
+ users_data = self.exec_cmd('getent passwd')
+
+ if users_data['status']:
+ # If getent fails, fallback to just reading /etc/passwd
+ try:
+ with open('/etc/passwd') as passwd_file:
+ users_data_lines = passwd_file.readlines()
+ except Exception:
+ # If we can't read /etc/passwd, then there's something wrong.
+ self._log_error("Couldn't read /etc/passwd")
+ return
+ else:
+ users_data_lines = users_data['output'].splitlines()
+
+ # Read the home paths of users in the system and check the ~/.ssh dirs
+ for usr_line in users_data_lines:
+ try:
+ home_dir = os.path.join(usr_line.split(':')[5], '.ssh')
+ if self.path_isdir(home_dir):
+ self.add_cmd_output('ls -laZ {}'.format(home_dir))
+ except IndexError:
+ pass
# vim: set et ts=4 sw=4 :
| {"golden_diff": "diff --git a/sos/report/plugins/ssh.py b/sos/report/plugins/ssh.py\n--- a/sos/report/plugins/ssh.py\n+++ b/sos/report/plugins/ssh.py\n@@ -9,6 +9,7 @@\n # See the LICENSE file in the source distribution for further information.\n \n from sos.report.plugins import Plugin, IndependentPlugin\n+import os.path\n \n \n class Ssh(Plugin, IndependentPlugin):\n@@ -33,6 +34,10 @@\n # Include main config files\n self.add_copy_spec(sshcfgs)\n \n+ self.included_configs(sshcfgs)\n+ self.user_ssh_files_permissions()\n+\n+ def included_configs(self, sshcfgs):\n # Read configs for any includes and copy those\n try:\n for sshcfg in sshcfgs:\n@@ -49,5 +54,33 @@\n except Exception:\n pass\n \n+ def user_ssh_files_permissions(self):\n+ \"\"\"\n+ Iterate over .ssh folders in user homes to see their permissions.\n+\n+ Bad permissions can prevent SSH from allowing access to given user.\n+ \"\"\"\n+ users_data = self.exec_cmd('getent passwd')\n+\n+ if users_data['status']:\n+ # If getent fails, fallback to just reading /etc/passwd\n+ try:\n+ with open('/etc/passwd') as passwd_file:\n+ users_data_lines = passwd_file.readlines()\n+ except Exception:\n+ # If we can't read /etc/passwd, then there's something wrong.\n+ self._log_error(\"Couldn't read /etc/passwd\")\n+ return\n+ else:\n+ users_data_lines = users_data['output'].splitlines()\n+\n+ # Read the home paths of users in the system and check the ~/.ssh dirs\n+ for usr_line in users_data_lines:\n+ try:\n+ home_dir = os.path.join(usr_line.split(':')[5], '.ssh')\n+ if self.path_isdir(home_dir):\n+ self.add_cmd_output('ls -laZ {}'.format(home_dir))\n+ except IndexError:\n+ pass\n \n # vim: set et ts=4 sw=4 :\n", "issue": "[ssh] Can ssh plugin check the permissions set for /home/*/.ssh files?\nHello!\r\n\r\nWhen users set wrong permissions to files in their ~/.ssh/ folder, i.e. they set write permission for `~/.ssh/authenticated_keys` for `other`, the SSH server will refuse to accept connections for this user.\r\n\r\nI think it would be nice for the [ssh] plugin to check, if the permissions set for files in the `.ssh` folders of system users are correct, or if they are corrupted in some way. \r\n\r\nA very simple solution for that would be to just run `ls -l .ssh/` in every home directory. Would it be OK to extend the ssh plugin to do this? 
Would it be better to have a separate plugin do this?\r\n\r\nLet me know what you think, and I'll give it a try if that's ok :)\n", "before_files": [{"content": "# Copyright (C) 2007 Red Hat, Inc., Eugene Teo <[email protected]>\n\n# This file is part of the sos project: https://github.com/sosreport/sos\n#\n# This copyrighted material is made available to anyone wishing to use,\n# modify, copy, or redistribute it subject to the terms and conditions of\n# version 2 of the GNU General Public License.\n#\n# See the LICENSE file in the source distribution for further information.\n\nfrom sos.report.plugins import Plugin, IndependentPlugin\n\n\nclass Ssh(Plugin, IndependentPlugin):\n\n short_desc = 'Secure shell service'\n\n plugin_name = 'ssh'\n profiles = ('services', 'security', 'system', 'identity')\n\n def setup(self):\n\n self.add_file_tags({\n '/etc/ssh/sshd_config': 'sshd_config',\n '/etc/ssh/ssh_config': 'ssh_config'\n })\n\n sshcfgs = [\n \"/etc/ssh/ssh_config\",\n \"/etc/ssh/sshd_config\"\n ]\n\n # Include main config files\n self.add_copy_spec(sshcfgs)\n\n # Read configs for any includes and copy those\n try:\n for sshcfg in sshcfgs:\n tag = sshcfg.split('/')[-1]\n with open(sshcfg, 'r') as cfgfile:\n for line in cfgfile:\n # skip empty lines and comments\n if len(line.split()) == 0 or line.startswith('#'):\n continue\n # ssh_config keywords are allowed as case-insensitive\n if line.lower().startswith('include'):\n confarg = line.split()\n self.add_copy_spec(confarg[1], tags=tag)\n except Exception:\n pass\n\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/report/plugins/ssh.py"}], "after_files": [{"content": "# Copyright (C) 2007 Red Hat, Inc., Eugene Teo <[email protected]>\n\n# This file is part of the sos project: https://github.com/sosreport/sos\n#\n# This copyrighted material is made available to anyone wishing to use,\n# modify, copy, or redistribute it subject to the terms and conditions of\n# version 2 of the GNU General Public License.\n#\n# See the LICENSE file in the source distribution for further information.\n\nfrom sos.report.plugins import Plugin, IndependentPlugin\nimport os.path\n\n\nclass Ssh(Plugin, IndependentPlugin):\n\n short_desc = 'Secure shell service'\n\n plugin_name = 'ssh'\n profiles = ('services', 'security', 'system', 'identity')\n\n def setup(self):\n\n self.add_file_tags({\n '/etc/ssh/sshd_config': 'sshd_config',\n '/etc/ssh/ssh_config': 'ssh_config'\n })\n\n sshcfgs = [\n \"/etc/ssh/ssh_config\",\n \"/etc/ssh/sshd_config\"\n ]\n\n # Include main config files\n self.add_copy_spec(sshcfgs)\n\n self.included_configs(sshcfgs)\n self.user_ssh_files_permissions()\n\n def included_configs(self, sshcfgs):\n # Read configs for any includes and copy those\n try:\n for sshcfg in sshcfgs:\n tag = sshcfg.split('/')[-1]\n with open(sshcfg, 'r') as cfgfile:\n for line in cfgfile:\n # skip empty lines and comments\n if len(line.split()) == 0 or line.startswith('#'):\n continue\n # ssh_config keywords are allowed as case-insensitive\n if line.lower().startswith('include'):\n confarg = line.split()\n self.add_copy_spec(confarg[1], tags=tag)\n except Exception:\n pass\n\n def user_ssh_files_permissions(self):\n \"\"\"\n Iterate over .ssh folders in user homes to see their permissions.\n\n Bad permissions can prevent SSH from allowing access to given user.\n \"\"\"\n users_data = self.exec_cmd('getent passwd')\n\n if users_data['status']:\n # If getent fails, fallback to just reading /etc/passwd\n try:\n with open('/etc/passwd') as passwd_file:\n users_data_lines = 
passwd_file.readlines()\n except Exception:\n # If we can't read /etc/passwd, then there's something wrong.\n self._log_error(\"Couldn't read /etc/passwd\")\n return\n else:\n users_data_lines = users_data['output'].splitlines()\n\n # Read the home paths of users in the system and check the ~/.ssh dirs\n for usr_line in users_data_lines:\n try:\n home_dir = os.path.join(usr_line.split(':')[5], '.ssh')\n if self.path_isdir(home_dir):\n self.add_cmd_output('ls -laZ {}'.format(home_dir))\n except IndexError:\n pass\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/report/plugins/ssh.py"}]} | 925 | 465 |
gh_patches_debug_54607 | rasdani/github-patches | git_diff | zulip__zulip-13067 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.
It looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines.
See `compile_requirements` in that file for details.
My guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`.
@hackerkid this might be a good project for you.
Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.
It looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines.
See `compile_requirements` in that file for details.
My guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`.
@hackerkid this might be a good project for you.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version.py`
Content:
```
1 import os
2
3 ZULIP_VERSION = "2.0.4+git"
4 # Add information on number of commits and commit hash to version, if available
5 zulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')
6 if os.path.exists(zulip_git_version_file):
7 with open(zulip_git_version_file) as f:
8 version = f.read().strip()
9 if version:
10 ZULIP_VERSION = version
11
12 LATEST_MAJOR_VERSION = "2.0"
13 LATEST_RELEASE_VERSION = "2.0.4"
14 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.org/2019/03/01/zulip-2-0-released/"
15
16 # Bump the minor PROVISION_VERSION to indicate that folks should provision
17 # only when going from an old version of the code to a newer version. Bump
18 # the major version to indicate that folks should provision in both
19 # directions.
20
21 # Typically,
22 # * adding a dependency only requires a minor version bump;
23 # * removing a dependency requires a major version bump;
24 # * upgrading a dependency requires a major version bump, unless the
25 # upgraded dependency is backwards compatible with all of our
26 # historical commits sharing the same major version, in which case a
27 # minor version bump suffices.
28
29 PROVISION_VERSION = '49.1'
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -26,4 +26,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '49.1'
+PROVISION_VERSION = '49.2'
| {"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -26,4 +26,4 @@\n # historical commits sharing the same major version, in which case a\n # minor version bump suffices.\n \n-PROVISION_VERSION = '49.1'\n+PROVISION_VERSION = '49.2'\n", "issue": "Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.\nIt looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines. \r\n See `compile_requirements` in that file for details. \r\n\r\nMy guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`. \r\n\r\n@hackerkid this might be a good project for you.\nClean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.\nIt looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines. \r\n See `compile_requirements` in that file for details. \r\n\r\nMy guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`. \r\n\r\n@hackerkid this might be a good project for you.\n", "before_files": [{"content": "import os\n\nZULIP_VERSION = \"2.0.4+git\"\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n version = f.read().strip()\n if version:\n ZULIP_VERSION = version\n\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.4\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. 
Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = '49.1'\n", "path": "version.py"}], "after_files": [{"content": "import os\n\nZULIP_VERSION = \"2.0.4+git\"\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n version = f.read().strip()\n if version:\n ZULIP_VERSION = version\n\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.4\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = '49.2'\n", "path": "version.py"}]} | 986 | 78 |
gh_patches_debug_9751 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-481 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
project list does not show text if there are no matching projects
It should show something like "No projects could be found". Note that the text should work for two cases: "there are no projects" and "there are no projects matching the filters".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/contrib/templatetags/contrib_tags.py`
Content:
```
1 from django import template
2 from django.template.loader import render_to_string
3
4 register = template.Library()
5
6
7 @register.assignment_tag
8 def include_template_string(template, **kwargs):
9 rendered_template = render_to_string(template, kwargs)
10 return str(rendered_template)
11
12
13 @register.assignment_tag
14 def combined_url_parameter(request_query_dict, **kwargs):
15 combined_query_dict = request_query_dict.copy()
16 for key in kwargs:
17 combined_query_dict.setlist(key, [kwargs[key]])
18 encoded_parameter = '?' + combined_query_dict.urlencode()
19 return encoded_parameter
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/contrib/templatetags/contrib_tags.py b/apps/contrib/templatetags/contrib_tags.py
--- a/apps/contrib/templatetags/contrib_tags.py
+++ b/apps/contrib/templatetags/contrib_tags.py
@@ -17,3 +17,14 @@
combined_query_dict.setlist(key, [kwargs[key]])
encoded_parameter = '?' + combined_query_dict.urlencode()
return encoded_parameter
+
+
[email protected]_tag
+def filter_has_perm(perm, user, objects):
+ """Filter a list of objects based on user permissions."""
+ if not hasattr(user, 'has_perm'):
+ # If the swapped user model does not support permissions, all objects
+ # will be returned. This is taken from rules.templatetags.has_perm.
+ return objects
+ else:
+ return (obj for obj in objects if user.has_perm(perm, obj))
| {"golden_diff": "diff --git a/apps/contrib/templatetags/contrib_tags.py b/apps/contrib/templatetags/contrib_tags.py\n--- a/apps/contrib/templatetags/contrib_tags.py\n+++ b/apps/contrib/templatetags/contrib_tags.py\n@@ -17,3 +17,14 @@\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n+\n+\[email protected]_tag\n+def filter_has_perm(perm, user, objects):\n+ \"\"\"Filter a list of objects based on user permissions.\"\"\"\n+ if not hasattr(user, 'has_perm'):\n+ # If the swapped user model does not support permissions, all objects\n+ # will be returned. This is taken from rules.templatetags.has_perm.\n+ return objects\n+ else:\n+ return (obj for obj in objects if user.has_perm(perm, obj))\n", "issue": "project list does not show text if there are no matching projects\nIt should show something like \"No projects could be found\". Note that the text should work for two cases: \"there are no projects\" and \"there are no projects matching the filters\".\n", "before_files": [{"content": "from django import template\nfrom django.template.loader import render_to_string\n\nregister = template.Library()\n\n\[email protected]_tag\ndef include_template_string(template, **kwargs):\n rendered_template = render_to_string(template, kwargs)\n return str(rendered_template)\n\n\[email protected]_tag\ndef combined_url_parameter(request_query_dict, **kwargs):\n combined_query_dict = request_query_dict.copy()\n for key in kwargs:\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n", "path": "apps/contrib/templatetags/contrib_tags.py"}], "after_files": [{"content": "from django import template\nfrom django.template.loader import render_to_string\n\nregister = template.Library()\n\n\[email protected]_tag\ndef include_template_string(template, **kwargs):\n rendered_template = render_to_string(template, kwargs)\n return str(rendered_template)\n\n\[email protected]_tag\ndef combined_url_parameter(request_query_dict, **kwargs):\n combined_query_dict = request_query_dict.copy()\n for key in kwargs:\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n\n\[email protected]_tag\ndef filter_has_perm(perm, user, objects):\n \"\"\"Filter a list of objects based on user permissions.\"\"\"\n if not hasattr(user, 'has_perm'):\n # If the swapped user model does not support permissions, all objects\n # will be returned. This is taken from rules.templatetags.has_perm.\n return objects\n else:\n return (obj for obj in objects if user.has_perm(perm, obj))\n", "path": "apps/contrib/templatetags/contrib_tags.py"}]} | 465 | 209 |
gh_patches_debug_8368 | rasdani/github-patches | git_diff | wagtail__wagtail-2488 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keyerror when sending password reset email
When sending a password reset email, I'm getting an internal error
I'll just share the raven error - hopefully that doesn't reveal all of the site secrets (probably does)
https://app.getsentry.com/share/issue/37343334302e313233323439393235/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailadmin/templatetags/wagtailuserbar.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 from django import template
4 from django.template.loader import render_to_string
5
6 from wagtail.wagtailadmin.userbar import (
7 AddPageItem, AdminItem, ApproveModerationEditPageItem, EditPageItem, ExplorePageItem,
8 RejectModerationEditPageItem)
9 from wagtail.wagtailcore import hooks
10 from wagtail.wagtailcore.models import PAGE_TEMPLATE_VAR, Page, PageRevision
11
12 # from django.contrib.auth.decorators import permission_required
13
14
15 register = template.Library()
16
17
18 def get_page_instance(context):
19 """
20 Given a template context, try and find a Page variable in the common
21 places. Returns None if a page can not be found.
22 """
23 possible_names = [PAGE_TEMPLATE_VAR, 'self']
24 for name in possible_names:
25 if name in context:
26 page = context[name]
27 if isinstance(page, Page):
28 return page
29
30
31 @register.simple_tag(takes_context=True)
32 def wagtailuserbar(context, position='bottom-right'):
33 # Find request object
34 request = context['request']
35
36
37 # Don't render if user doesn't have permission to access the admin area
38 if not request.user.has_perm('wagtailadmin.access_admin'):
39 return ''
40
41 # Only render if the context contains a variable referencing a saved page
42 page = get_page_instance(context)
43 if page is None:
44 return ''
45
46 # Dont render anything if the page has not been saved - i.e. a preview
47 if page.pk is None:
48 return ''
49
50 try:
51 revision_id = request.revision_id
52 except AttributeError:
53 revision_id = None
54
55 if revision_id is None:
56 items = [
57 AdminItem(),
58 ExplorePageItem(Page.objects.get(id=page.id)),
59 EditPageItem(Page.objects.get(id=page.id)),
60 AddPageItem(Page.objects.get(id=page.id)),
61 ]
62 else:
63 items = [
64 AdminItem(),
65 ExplorePageItem(PageRevision.objects.get(id=revision_id).page),
66 EditPageItem(PageRevision.objects.get(id=revision_id).page),
67 AddPageItem(PageRevision.objects.get(id=revision_id).page),
68 ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
69 RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
70 ]
71
72 for fn in hooks.get_hooks('construct_wagtail_userbar'):
73 fn(request, items)
74
75 # Render the items
76 rendered_items = [item.render(request) for item in items]
77
78 # Remove any unrendered items
79 rendered_items = [item for item in rendered_items if item]
80
81 # Render the userbar items
82 return render_to_string('wagtailadmin/userbar/base.html', {
83 'request': request,
84 'items': rendered_items,
85 'position': position,
86 'page': page,
87 'revision_id': revision_id
88 })
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py
--- a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py
+++ b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py
@@ -31,8 +31,10 @@
@register.simple_tag(takes_context=True)
def wagtailuserbar(context, position='bottom-right'):
# Find request object
- request = context['request']
-
+ try:
+ request = context['request']
+ except KeyError:
+ return ''
# Don't render if user doesn't have permission to access the admin area
if not request.user.has_perm('wagtailadmin.access_admin'):
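The patched tag simply short-circuits when no `request` key is present in the template context, which is the situation when password-reset emails are rendered outside a request/response cycle. A minimal sketch of the expected behaviour, assuming a configured Wagtail project; the direct function call is illustrative only and relies on Django's `simple_tag` decorator returning the original callable:

```python
# Illustrative only: call the function behind {% wagtailuserbar %} directly.
from wagtail.wagtailadmin.templatetags.wagtailuserbar import wagtailuserbar

# A context without a 'request' key (e.g. an email template) now yields an
# empty string instead of raising KeyError.
assert wagtailuserbar({}) == ''
```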
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py\n--- a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py\n+++ b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py\n@@ -31,8 +31,10 @@\n @register.simple_tag(takes_context=True)\n def wagtailuserbar(context, position='bottom-right'):\n # Find request object\n- request = context['request']\n-\n+ try:\n+ request = context['request']\n+ except KeyError:\n+ return ''\n \n # Don't render if user doesn't have permission to access the admin area\n if not request.user.has_perm('wagtailadmin.access_admin'):\n", "issue": "Keyerror when sending password reset email\nWhen sending a password reset email, I'm getting an internal error\n\nI'll just share the raven error - hopefully that doesn't review all of the site secrets (probably does)\n\nhttps://app.getsentry.com/share/issue/37343334302e313233323439393235/\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django import template\nfrom django.template.loader import render_to_string\n\nfrom wagtail.wagtailadmin.userbar import (\n AddPageItem, AdminItem, ApproveModerationEditPageItem, EditPageItem, ExplorePageItem,\n RejectModerationEditPageItem)\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import PAGE_TEMPLATE_VAR, Page, PageRevision\n\n# from django.contrib.auth.decorators import permission_required\n\n\nregister = template.Library()\n\n\ndef get_page_instance(context):\n \"\"\"\n Given a template context, try and find a Page variable in the common\n places. Returns None if a page can not be found.\n \"\"\"\n possible_names = [PAGE_TEMPLATE_VAR, 'self']\n for name in possible_names:\n if name in context:\n page = context[name]\n if isinstance(page, Page):\n return page\n\n\[email protected]_tag(takes_context=True)\ndef wagtailuserbar(context, position='bottom-right'):\n # Find request object\n request = context['request']\n\n\n # Don't render if user doesn't have permission to access the admin area\n if not request.user.has_perm('wagtailadmin.access_admin'):\n return ''\n\n # Only render if the context contains a variable referencing a saved page\n page = get_page_instance(context)\n if page is None:\n return ''\n\n # Dont render anything if the page has not been saved - i.e. 
a preview\n if page.pk is None:\n return ''\n\n try:\n revision_id = request.revision_id\n except AttributeError:\n revision_id = None\n\n if revision_id is None:\n items = [\n AdminItem(),\n ExplorePageItem(Page.objects.get(id=page.id)),\n EditPageItem(Page.objects.get(id=page.id)),\n AddPageItem(Page.objects.get(id=page.id)),\n ]\n else:\n items = [\n AdminItem(),\n ExplorePageItem(PageRevision.objects.get(id=revision_id).page),\n EditPageItem(PageRevision.objects.get(id=revision_id).page),\n AddPageItem(PageRevision.objects.get(id=revision_id).page),\n ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n\n for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the userbar items\n return render_to_string('wagtailadmin/userbar/base.html', {\n 'request': request,\n 'items': rendered_items,\n 'position': position,\n 'page': page,\n 'revision_id': revision_id\n })\n", "path": "wagtail/wagtailadmin/templatetags/wagtailuserbar.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django import template\nfrom django.template.loader import render_to_string\n\nfrom wagtail.wagtailadmin.userbar import (\n AddPageItem, AdminItem, ApproveModerationEditPageItem, EditPageItem, ExplorePageItem,\n RejectModerationEditPageItem)\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import PAGE_TEMPLATE_VAR, Page, PageRevision\n\n# from django.contrib.auth.decorators import permission_required\n\n\nregister = template.Library()\n\n\ndef get_page_instance(context):\n \"\"\"\n Given a template context, try and find a Page variable in the common\n places. Returns None if a page can not be found.\n \"\"\"\n possible_names = [PAGE_TEMPLATE_VAR, 'self']\n for name in possible_names:\n if name in context:\n page = context[name]\n if isinstance(page, Page):\n return page\n\n\[email protected]_tag(takes_context=True)\ndef wagtailuserbar(context, position='bottom-right'):\n # Find request object\n try:\n request = context['request']\n except KeyError:\n return ''\n\n # Don't render if user doesn't have permission to access the admin area\n if not request.user.has_perm('wagtailadmin.access_admin'):\n return ''\n\n # Only render if the context contains a variable referencing a saved page\n page = get_page_instance(context)\n if page is None:\n return ''\n\n # Dont render anything if the page has not been saved - i.e. 
a preview\n if page.pk is None:\n return ''\n\n try:\n revision_id = request.revision_id\n except AttributeError:\n revision_id = None\n\n if revision_id is None:\n items = [\n AdminItem(),\n ExplorePageItem(Page.objects.get(id=page.id)),\n EditPageItem(Page.objects.get(id=page.id)),\n AddPageItem(Page.objects.get(id=page.id)),\n ]\n else:\n items = [\n AdminItem(),\n ExplorePageItem(PageRevision.objects.get(id=revision_id).page),\n EditPageItem(PageRevision.objects.get(id=revision_id).page),\n AddPageItem(PageRevision.objects.get(id=revision_id).page),\n ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n\n for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the userbar items\n return render_to_string('wagtailadmin/userbar/base.html', {\n 'request': request,\n 'items': rendered_items,\n 'position': position,\n 'page': page,\n 'revision_id': revision_id\n })\n", "path": "wagtail/wagtailadmin/templatetags/wagtailuserbar.py"}]} | 1,158 | 187 |
gh_patches_debug_27250 | rasdani/github-patches | git_diff | nilearn__nilearn-3710 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation builder failure on main
https://github.com/nilearn/nilearn/actions/workflows/build-docs.yml
started occurring after merging #3698 (doubt it is related given the content of the PR)
https://github.com/nilearn/nilearn/actions/runs/4741116007
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nilearn/datasets/__init__.py`
Content:
```
1 """Helper functions to download NeuroImaging datasets."""
2
3 from .atlas import (
4 fetch_atlas_aal,
5 fetch_atlas_allen_2011,
6 fetch_atlas_basc_multiscale_2015,
7 fetch_atlas_craddock_2012,
8 fetch_atlas_destrieux_2009,
9 fetch_atlas_difumo,
10 fetch_atlas_harvard_oxford,
11 fetch_atlas_juelich,
12 fetch_atlas_msdl,
13 fetch_atlas_schaefer_2018,
14 fetch_atlas_smith_2009,
15 fetch_atlas_surf_destrieux,
16 fetch_atlas_talairach,
17 fetch_atlas_yeo_2011,
18 fetch_coords_dosenbach_2010,
19 fetch_coords_power_2011,
20 fetch_coords_seitzman_2018,
21 )
22 from .func import (
23 fetch_abide_pcp,
24 fetch_adhd,
25 fetch_bids_langloc_dataset,
26 fetch_development_fmri,
27 fetch_fiac_first_level,
28 fetch_haxby,
29 fetch_language_localizer_demo_dataset,
30 fetch_localizer_button_task,
31 fetch_localizer_calculation_task,
32 fetch_localizer_contrasts,
33 fetch_localizer_first_level,
34 fetch_megatrawls_netmats,
35 fetch_mixed_gambles,
36 fetch_miyawaki2008,
37 fetch_openneuro_dataset,
38 fetch_openneuro_dataset_index,
39 fetch_spm_auditory,
40 fetch_spm_multimodal_fmri,
41 fetch_surf_nki_enhanced,
42 patch_openneuro_dataset,
43 select_from_index,
44 )
45 from .neurovault import (
46 fetch_neurovault,
47 fetch_neurovault_auditory_computation_task,
48 fetch_neurovault_ids,
49 fetch_neurovault_motor_task,
50 )
51 from .struct import (
52 GM_MNI152_FILE_PATH,
53 MNI152_FILE_PATH,
54 WM_MNI152_FILE_PATH,
55 fetch_icbm152_2009,
56 fetch_icbm152_brain_gm_mask,
57 fetch_oasis_vbm,
58 fetch_surf_fsaverage,
59 load_mni152_brain_mask,
60 load_mni152_gm_mask,
61 load_mni152_gm_template,
62 load_mni152_template,
63 load_mni152_wm_mask,
64 load_mni152_wm_template,
65 )
66 from .utils import get_data_dirs, load_sample_motor_activation_image
67
68 __all__ = [
69 "MNI152_FILE_PATH",
70 "GM_MNI152_FILE_PATH",
71 "WM_MNI152_FILE_PATH",
72 "fetch_icbm152_2009",
73 "load_mni152_template",
74 "load_mni152_gm_template",
75 "load_mni152_wm_template",
76 "fetch_oasis_vbm",
77 "fetch_haxby",
78 "fetch_adhd",
79 "fetch_miyawaki2008",
80 "fetch_localizer_contrasts",
81 "fetch_localizer_button_task",
82 "fetch_abide_pcp",
83 "fetch_localizer_calculation_task",
84 "fetch_atlas_craddock_2012",
85 "fetch_atlas_destrieux_2009",
86 "fetch_atlas_juelich",
87 "fetch_atlas_harvard_oxford",
88 "fetch_atlas_msdl",
89 "fetch_atlas_schaefer_2018",
90 "fetch_coords_power_2011",
91 "fetch_coords_seitzman_2018",
92 "fetch_atlas_smith_2009",
93 "fetch_atlas_allen_2011",
94 "fetch_atlas_yeo_2011",
95 "fetch_mixed_gambles",
96 "fetch_atlas_aal",
97 "fetch_atlas_difumo",
98 "fetch_megatrawls_netmats",
99 "fetch_surf_nki_enhanced",
100 "fetch_development_fmri",
101 "fetch_surf_fsaverage",
102 "fetch_atlas_basc_multiscale_2015",
103 "fetch_coords_dosenbach_2010",
104 "fetch_neurovault",
105 "fetch_neurovault_ids",
106 "fetch_neurovault_motor_task",
107 "fetch_neurovault_auditory_computation_task",
108 "load_mni152_brain_mask",
109 "load_mni152_gm_mask",
110 "load_mni152_wm_mask",
111 "fetch_icbm152_brain_gm_mask",
112 "fetch_atlas_surf_destrieux",
113 "fetch_atlas_talairach",
114 "get_data_dirs",
115 "load_sample_motor_activation_image",
116 "fetch_language_localizer_demo_dataset",
117 "fetch_bids_langloc_dataset",
118 "fetch_openneuro_dataset_index",
119 "select_from_index",
120 "patch_openneuro_dataset",
121 "fetch_openneuro_dataset",
122 "fetch_localizer_first_level",
123 "fetch_spm_auditory",
124 "fetch_spm_multimodal_fmri",
125 "fetch_fiac_first_level",
126 ]
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nilearn/datasets/__init__.py b/nilearn/datasets/__init__.py
--- a/nilearn/datasets/__init__.py
+++ b/nilearn/datasets/__init__.py
@@ -10,6 +10,7 @@
fetch_atlas_harvard_oxford,
fetch_atlas_juelich,
fetch_atlas_msdl,
+ fetch_atlas_pauli_2017,
fetch_atlas_schaefer_2018,
fetch_atlas_smith_2009,
fetch_atlas_surf_destrieux,
@@ -24,6 +25,7 @@
fetch_adhd,
fetch_bids_langloc_dataset,
fetch_development_fmri,
+ fetch_ds000030_urls,
fetch_fiac_first_level,
fetch_haxby,
fetch_language_localizer_demo_dataset,
@@ -86,6 +88,7 @@
"fetch_atlas_juelich",
"fetch_atlas_harvard_oxford",
"fetch_atlas_msdl",
+ "fetch_atlas_pauli_2017",
"fetch_atlas_schaefer_2018",
"fetch_coords_power_2011",
"fetch_coords_seitzman_2018",
@@ -98,6 +101,7 @@
"fetch_megatrawls_netmats",
"fetch_surf_nki_enhanced",
"fetch_development_fmri",
+ "fetch_ds000030_urls",
"fetch_surf_fsaverage",
"fetch_atlas_basc_multiscale_2015",
"fetch_coords_dosenbach_2010",
| {"golden_diff": "diff --git a/nilearn/datasets/__init__.py b/nilearn/datasets/__init__.py\n--- a/nilearn/datasets/__init__.py\n+++ b/nilearn/datasets/__init__.py\n@@ -10,6 +10,7 @@\n fetch_atlas_harvard_oxford,\n fetch_atlas_juelich,\n fetch_atlas_msdl,\n+ fetch_atlas_pauli_2017,\n fetch_atlas_schaefer_2018,\n fetch_atlas_smith_2009,\n fetch_atlas_surf_destrieux,\n@@ -24,6 +25,7 @@\n fetch_adhd,\n fetch_bids_langloc_dataset,\n fetch_development_fmri,\n+ fetch_ds000030_urls,\n fetch_fiac_first_level,\n fetch_haxby,\n fetch_language_localizer_demo_dataset,\n@@ -86,6 +88,7 @@\n \"fetch_atlas_juelich\",\n \"fetch_atlas_harvard_oxford\",\n \"fetch_atlas_msdl\",\n+ \"fetch_atlas_pauli_2017\",\n \"fetch_atlas_schaefer_2018\",\n \"fetch_coords_power_2011\",\n \"fetch_coords_seitzman_2018\",\n@@ -98,6 +101,7 @@\n \"fetch_megatrawls_netmats\",\n \"fetch_surf_nki_enhanced\",\n \"fetch_development_fmri\",\n+ \"fetch_ds000030_urls\",\n \"fetch_surf_fsaverage\",\n \"fetch_atlas_basc_multiscale_2015\",\n \"fetch_coords_dosenbach_2010\",\n", "issue": "Documentation builder failure on main\nhttps://github.com/nilearn/nilearn/actions/workflows/build-docs.yml\r\n\r\nstarted occurring after merging #3698 (doubt it is related given the content of the PR)\r\nhttps://github.com/nilearn/nilearn/actions/runs/4741116007\r\n\r\n\n", "before_files": [{"content": "\"\"\"Helper functions to download NeuroImaging datasets.\"\"\"\n\nfrom .atlas import (\n fetch_atlas_aal,\n fetch_atlas_allen_2011,\n fetch_atlas_basc_multiscale_2015,\n fetch_atlas_craddock_2012,\n fetch_atlas_destrieux_2009,\n fetch_atlas_difumo,\n fetch_atlas_harvard_oxford,\n fetch_atlas_juelich,\n fetch_atlas_msdl,\n fetch_atlas_schaefer_2018,\n fetch_atlas_smith_2009,\n fetch_atlas_surf_destrieux,\n fetch_atlas_talairach,\n fetch_atlas_yeo_2011,\n fetch_coords_dosenbach_2010,\n fetch_coords_power_2011,\n fetch_coords_seitzman_2018,\n)\nfrom .func import (\n fetch_abide_pcp,\n fetch_adhd,\n fetch_bids_langloc_dataset,\n fetch_development_fmri,\n fetch_fiac_first_level,\n fetch_haxby,\n fetch_language_localizer_demo_dataset,\n fetch_localizer_button_task,\n fetch_localizer_calculation_task,\n fetch_localizer_contrasts,\n fetch_localizer_first_level,\n fetch_megatrawls_netmats,\n fetch_mixed_gambles,\n fetch_miyawaki2008,\n fetch_openneuro_dataset,\n fetch_openneuro_dataset_index,\n fetch_spm_auditory,\n fetch_spm_multimodal_fmri,\n fetch_surf_nki_enhanced,\n patch_openneuro_dataset,\n select_from_index,\n)\nfrom .neurovault import (\n fetch_neurovault,\n fetch_neurovault_auditory_computation_task,\n fetch_neurovault_ids,\n fetch_neurovault_motor_task,\n)\nfrom .struct import (\n GM_MNI152_FILE_PATH,\n MNI152_FILE_PATH,\n WM_MNI152_FILE_PATH,\n fetch_icbm152_2009,\n fetch_icbm152_brain_gm_mask,\n fetch_oasis_vbm,\n fetch_surf_fsaverage,\n load_mni152_brain_mask,\n load_mni152_gm_mask,\n load_mni152_gm_template,\n load_mni152_template,\n load_mni152_wm_mask,\n load_mni152_wm_template,\n)\nfrom .utils import get_data_dirs, load_sample_motor_activation_image\n\n__all__ = [\n \"MNI152_FILE_PATH\",\n \"GM_MNI152_FILE_PATH\",\n \"WM_MNI152_FILE_PATH\",\n \"fetch_icbm152_2009\",\n \"load_mni152_template\",\n \"load_mni152_gm_template\",\n \"load_mni152_wm_template\",\n \"fetch_oasis_vbm\",\n \"fetch_haxby\",\n \"fetch_adhd\",\n \"fetch_miyawaki2008\",\n \"fetch_localizer_contrasts\",\n \"fetch_localizer_button_task\",\n \"fetch_abide_pcp\",\n \"fetch_localizer_calculation_task\",\n \"fetch_atlas_craddock_2012\",\n \"fetch_atlas_destrieux_2009\",\n 
\"fetch_atlas_juelich\",\n \"fetch_atlas_harvard_oxford\",\n \"fetch_atlas_msdl\",\n \"fetch_atlas_schaefer_2018\",\n \"fetch_coords_power_2011\",\n \"fetch_coords_seitzman_2018\",\n \"fetch_atlas_smith_2009\",\n \"fetch_atlas_allen_2011\",\n \"fetch_atlas_yeo_2011\",\n \"fetch_mixed_gambles\",\n \"fetch_atlas_aal\",\n \"fetch_atlas_difumo\",\n \"fetch_megatrawls_netmats\",\n \"fetch_surf_nki_enhanced\",\n \"fetch_development_fmri\",\n \"fetch_surf_fsaverage\",\n \"fetch_atlas_basc_multiscale_2015\",\n \"fetch_coords_dosenbach_2010\",\n \"fetch_neurovault\",\n \"fetch_neurovault_ids\",\n \"fetch_neurovault_motor_task\",\n \"fetch_neurovault_auditory_computation_task\",\n \"load_mni152_brain_mask\",\n \"load_mni152_gm_mask\",\n \"load_mni152_wm_mask\",\n \"fetch_icbm152_brain_gm_mask\",\n \"fetch_atlas_surf_destrieux\",\n \"fetch_atlas_talairach\",\n \"get_data_dirs\",\n \"load_sample_motor_activation_image\",\n \"fetch_language_localizer_demo_dataset\",\n \"fetch_bids_langloc_dataset\",\n \"fetch_openneuro_dataset_index\",\n \"select_from_index\",\n \"patch_openneuro_dataset\",\n \"fetch_openneuro_dataset\",\n \"fetch_localizer_first_level\",\n \"fetch_spm_auditory\",\n \"fetch_spm_multimodal_fmri\",\n \"fetch_fiac_first_level\",\n]\n", "path": "nilearn/datasets/__init__.py"}], "after_files": [{"content": "\"\"\"Helper functions to download NeuroImaging datasets.\"\"\"\n\nfrom .atlas import (\n fetch_atlas_aal,\n fetch_atlas_allen_2011,\n fetch_atlas_basc_multiscale_2015,\n fetch_atlas_craddock_2012,\n fetch_atlas_destrieux_2009,\n fetch_atlas_difumo,\n fetch_atlas_harvard_oxford,\n fetch_atlas_juelich,\n fetch_atlas_msdl,\n fetch_atlas_pauli_2017,\n fetch_atlas_schaefer_2018,\n fetch_atlas_smith_2009,\n fetch_atlas_surf_destrieux,\n fetch_atlas_talairach,\n fetch_atlas_yeo_2011,\n fetch_coords_dosenbach_2010,\n fetch_coords_power_2011,\n fetch_coords_seitzman_2018,\n)\nfrom .func import (\n fetch_abide_pcp,\n fetch_adhd,\n fetch_bids_langloc_dataset,\n fetch_development_fmri,\n fetch_ds000030_urls,\n fetch_fiac_first_level,\n fetch_haxby,\n fetch_language_localizer_demo_dataset,\n fetch_localizer_button_task,\n fetch_localizer_calculation_task,\n fetch_localizer_contrasts,\n fetch_localizer_first_level,\n fetch_megatrawls_netmats,\n fetch_mixed_gambles,\n fetch_miyawaki2008,\n fetch_openneuro_dataset,\n fetch_openneuro_dataset_index,\n fetch_spm_auditory,\n fetch_spm_multimodal_fmri,\n fetch_surf_nki_enhanced,\n patch_openneuro_dataset,\n select_from_index,\n)\nfrom .neurovault import (\n fetch_neurovault,\n fetch_neurovault_auditory_computation_task,\n fetch_neurovault_ids,\n fetch_neurovault_motor_task,\n)\nfrom .struct import (\n GM_MNI152_FILE_PATH,\n MNI152_FILE_PATH,\n WM_MNI152_FILE_PATH,\n fetch_icbm152_2009,\n fetch_icbm152_brain_gm_mask,\n fetch_oasis_vbm,\n fetch_surf_fsaverage,\n load_mni152_brain_mask,\n load_mni152_gm_mask,\n load_mni152_gm_template,\n load_mni152_template,\n load_mni152_wm_mask,\n load_mni152_wm_template,\n)\nfrom .utils import get_data_dirs, load_sample_motor_activation_image\n\n__all__ = [\n \"MNI152_FILE_PATH\",\n \"GM_MNI152_FILE_PATH\",\n \"WM_MNI152_FILE_PATH\",\n \"fetch_icbm152_2009\",\n \"load_mni152_template\",\n \"load_mni152_gm_template\",\n \"load_mni152_wm_template\",\n \"fetch_oasis_vbm\",\n \"fetch_haxby\",\n \"fetch_adhd\",\n \"fetch_miyawaki2008\",\n \"fetch_localizer_contrasts\",\n \"fetch_localizer_button_task\",\n \"fetch_abide_pcp\",\n \"fetch_localizer_calculation_task\",\n \"fetch_atlas_craddock_2012\",\n 
\"fetch_atlas_destrieux_2009\",\n \"fetch_atlas_juelich\",\n \"fetch_atlas_harvard_oxford\",\n \"fetch_atlas_msdl\",\n \"fetch_atlas_pauli_2017\",\n \"fetch_atlas_schaefer_2018\",\n \"fetch_coords_power_2011\",\n \"fetch_coords_seitzman_2018\",\n \"fetch_atlas_smith_2009\",\n \"fetch_atlas_allen_2011\",\n \"fetch_atlas_yeo_2011\",\n \"fetch_mixed_gambles\",\n \"fetch_atlas_aal\",\n \"fetch_atlas_difumo\",\n \"fetch_megatrawls_netmats\",\n \"fetch_surf_nki_enhanced\",\n \"fetch_development_fmri\",\n \"fetch_ds000030_urls\",\n \"fetch_surf_fsaverage\",\n \"fetch_atlas_basc_multiscale_2015\",\n \"fetch_coords_dosenbach_2010\",\n \"fetch_neurovault\",\n \"fetch_neurovault_ids\",\n \"fetch_neurovault_motor_task\",\n \"fetch_neurovault_auditory_computation_task\",\n \"load_mni152_brain_mask\",\n \"load_mni152_gm_mask\",\n \"load_mni152_wm_mask\",\n \"fetch_icbm152_brain_gm_mask\",\n \"fetch_atlas_surf_destrieux\",\n \"fetch_atlas_talairach\",\n \"get_data_dirs\",\n \"load_sample_motor_activation_image\",\n \"fetch_language_localizer_demo_dataset\",\n \"fetch_bids_langloc_dataset\",\n \"fetch_openneuro_dataset_index\",\n \"select_from_index\",\n \"patch_openneuro_dataset\",\n \"fetch_openneuro_dataset\",\n \"fetch_localizer_first_level\",\n \"fetch_spm_auditory\",\n \"fetch_spm_multimodal_fmri\",\n \"fetch_fiac_first_level\",\n]\n", "path": "nilearn/datasets/__init__.py"}]} | 1,756 | 387 |
gh_patches_debug_27595 | rasdani/github-patches | git_diff | netbox-community__netbox-14870 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Simple condition (without and/or) does not work in event rule
### Deployment Type
Self-hosted
### NetBox Version
v3.7.0
### Python Version
3.11
### Steps to Reproduce
1. Create webhook: Name = Test, URL = http://127.0.0.1:9000 (doesn't matter in this case, it won't be triggered but is required to configure event rule)
2. Go to **Event rules - Add**:
- Name = Test
- Content types = Circuit
- select Updates
- set Conditions:
```
{
"attr": "status.value",
"value": "active"
}
```
- Action type = Webhook
- Webhook = Test
- **Create**
### Expected Behavior
Event rule is created
### Observed Behavior
Error is shown about the condition:
**Ruleset must have exactly one logical operator (found 2)**
The examples in https://docs.netbox.dev/en/stable/reference/conditions/ look the same: simple JSON object with attributes `attr` and `value`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/conditions.py`
Content:
```
1 import functools
2 import re
3 from django.utils.translation import gettext as _
4
5 __all__ = (
6 'Condition',
7 'ConditionSet',
8 )
9
10
11 AND = 'and'
12 OR = 'or'
13
14
15 def is_ruleset(data):
16 """
17 Determine whether the given dictionary looks like a rule set.
18 """
19 return type(data) is dict and len(data) == 1 and list(data.keys())[0] in (AND, OR)
20
21
22 class Condition:
23 """
24 An individual conditional rule that evaluates a single attribute and its value.
25
26 :param attr: The name of the attribute being evaluated
27 :param value: The value being compared
28 :param op: The logical operation to use when evaluating the value (default: 'eq')
29 """
30 EQ = 'eq'
31 GT = 'gt'
32 GTE = 'gte'
33 LT = 'lt'
34 LTE = 'lte'
35 IN = 'in'
36 CONTAINS = 'contains'
37 REGEX = 'regex'
38
39 OPERATORS = (
40 EQ, GT, GTE, LT, LTE, IN, CONTAINS, REGEX
41 )
42
43 TYPES = {
44 str: (EQ, CONTAINS, REGEX),
45 bool: (EQ, CONTAINS),
46 int: (EQ, GT, GTE, LT, LTE, CONTAINS),
47 float: (EQ, GT, GTE, LT, LTE, CONTAINS),
48 list: (EQ, IN, CONTAINS),
49 type(None): (EQ,)
50 }
51
52 def __init__(self, attr, value, op=EQ, negate=False):
53 if op not in self.OPERATORS:
54 raise ValueError(_("Unknown operator: {op}. Must be one of: {operators}").format(
55 op=op, operators=', '.join(self.OPERATORS)
56 ))
57 if type(value) not in self.TYPES:
58 raise ValueError(_("Unsupported value type: {value}").format(value=type(value)))
59 if op not in self.TYPES[type(value)]:
60 raise ValueError(_("Invalid type for {op} operation: {value}").format(op=op, value=type(value)))
61
62 self.attr = attr
63 self.value = value
64 self.eval_func = getattr(self, f'eval_{op}')
65 self.negate = negate
66
67 def eval(self, data):
68 """
69 Evaluate the provided data to determine whether it matches the condition.
70 """
71 def _get(obj, key):
72 if isinstance(obj, list):
73 return [dict.get(i, key) for i in obj]
74
75 return dict.get(obj, key)
76
77 try:
78 value = functools.reduce(_get, self.attr.split('.'), data)
79 except TypeError:
80 # Invalid key path
81 value = None
82 result = self.eval_func(value)
83
84 if self.negate:
85 return not result
86 return result
87
88 # Equivalency
89
90 def eval_eq(self, value):
91 return value == self.value
92
93 def eval_neq(self, value):
94 return value != self.value
95
96 # Numeric comparisons
97
98 def eval_gt(self, value):
99 return value > self.value
100
101 def eval_gte(self, value):
102 return value >= self.value
103
104 def eval_lt(self, value):
105 return value < self.value
106
107 def eval_lte(self, value):
108 return value <= self.value
109
110 # Membership
111
112 def eval_in(self, value):
113 return value in self.value
114
115 def eval_contains(self, value):
116 return self.value in value
117
118 # Regular expressions
119
120 def eval_regex(self, value):
121 return re.match(self.value, value) is not None
122
123
124 class ConditionSet:
125 """
126 A set of one or more Condition to be evaluated per the prescribed logic (AND or OR). Example:
127
128 {"and": [
129 {"attr": "foo", "op": "eq", "value": 1},
130 {"attr": "bar", "op": "eq", "value": 2, "negate": true}
131 ]}
132
133 :param ruleset: A dictionary mapping a logical operator to a list of conditional rules
134 """
135 def __init__(self, ruleset):
136 if type(ruleset) is not dict:
137 raise ValueError(_("Ruleset must be a dictionary, not {ruleset}.").format(ruleset=type(ruleset)))
138 if len(ruleset) != 1:
139 raise ValueError(_("Ruleset must have exactly one logical operator (found {ruleset})").format(
140 ruleset=len(ruleset)))
141
142 # Determine the logic type
143 logic = list(ruleset.keys())[0]
144 if type(logic) is not str or logic.lower() not in (AND, OR):
145 raise ValueError(_("Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')").format(
146 logic=logic, op_and=AND, op_or=OR
147 ))
148 self.logic = logic.lower()
149
150 # Compile the set of Conditions
151 self.conditions = [
152 ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)
153 for rule in ruleset[self.logic]
154 ]
155
156 def eval(self, data):
157 """
158 Evaluate the provided data to determine whether it matches this set of conditions.
159 """
160 func = any if self.logic == 'or' else all
161 return func(d.eval(data) for d in self.conditions)
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/extras/conditions.py b/netbox/extras/conditions.py
--- a/netbox/extras/conditions.py
+++ b/netbox/extras/conditions.py
@@ -135,23 +135,23 @@
def __init__(self, ruleset):
if type(ruleset) is not dict:
raise ValueError(_("Ruleset must be a dictionary, not {ruleset}.").format(ruleset=type(ruleset)))
- if len(ruleset) != 1:
- raise ValueError(_("Ruleset must have exactly one logical operator (found {ruleset})").format(
- ruleset=len(ruleset)))
-
- # Determine the logic type
- logic = list(ruleset.keys())[0]
- if type(logic) is not str or logic.lower() not in (AND, OR):
- raise ValueError(_("Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')").format(
- logic=logic, op_and=AND, op_or=OR
- ))
- self.logic = logic.lower()
- # Compile the set of Conditions
- self.conditions = [
- ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)
- for rule in ruleset[self.logic]
- ]
+ if len(ruleset) == 1:
+ self.logic = (list(ruleset.keys())[0]).lower()
+ if self.logic not in (AND, OR):
+ raise ValueError(_("Invalid logic type: must be 'AND' or 'OR'. Please check documentation."))
+
+ # Compile the set of Conditions
+ self.conditions = [
+ ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)
+ for rule in ruleset[self.logic]
+ ]
+ else:
+ try:
+ self.logic = None
+ self.conditions = [Condition(**ruleset)]
+ except TypeError:
+ raise ValueError(_("Incorrect key(s) informed. Please check documentation."))
def eval(self, data):
"""
| {"golden_diff": "diff --git a/netbox/extras/conditions.py b/netbox/extras/conditions.py\n--- a/netbox/extras/conditions.py\n+++ b/netbox/extras/conditions.py\n@@ -135,23 +135,23 @@\n def __init__(self, ruleset):\n if type(ruleset) is not dict:\n raise ValueError(_(\"Ruleset must be a dictionary, not {ruleset}.\").format(ruleset=type(ruleset)))\n- if len(ruleset) != 1:\n- raise ValueError(_(\"Ruleset must have exactly one logical operator (found {ruleset})\").format(\n- ruleset=len(ruleset)))\n-\n- # Determine the logic type\n- logic = list(ruleset.keys())[0]\n- if type(logic) is not str or logic.lower() not in (AND, OR):\n- raise ValueError(_(\"Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')\").format(\n- logic=logic, op_and=AND, op_or=OR\n- ))\n- self.logic = logic.lower()\n \n- # Compile the set of Conditions\n- self.conditions = [\n- ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)\n- for rule in ruleset[self.logic]\n- ]\n+ if len(ruleset) == 1:\n+ self.logic = (list(ruleset.keys())[0]).lower()\n+ if self.logic not in (AND, OR):\n+ raise ValueError(_(\"Invalid logic type: must be 'AND' or 'OR'. Please check documentation.\"))\n+\n+ # Compile the set of Conditions\n+ self.conditions = [\n+ ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)\n+ for rule in ruleset[self.logic]\n+ ]\n+ else:\n+ try:\n+ self.logic = None\n+ self.conditions = [Condition(**ruleset)]\n+ except TypeError:\n+ raise ValueError(_(\"Incorrect key(s) informed. Please check documentation.\"))\n \n def eval(self, data):\n \"\"\"\n", "issue": "Simple condition (without and/or) does not work in event rule\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.0\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\n1. Create webhook: Name = Test, URL = http://127.0.0.1:9000 (doesn't matter in this case, it won't be triggered but is required to configure event rule)\r\n2. 
Go to **Event rules - Add**:\r\n- Name = Test\r\n- Content types = Circuit\r\n- select Updates\r\n- set Conditions:\r\n```\r\n{\r\n \"attr\": \"status.value\",\r\n \"value\": \"active\"\r\n}\r\n```\r\n\r\n- Action type = Webhook\r\n- Webhook = Test\r\n- **Create**\r\n\n\n### Expected Behavior\n\nEvent rule is created\n\n### Observed Behavior\n\nError is shown about the condition:\r\n\r\n**Ruleset must have exactly one logical operator (found 2)** \r\n\r\nThe examples in https://docs.netbox.dev/en/stable/reference/conditions/ look the same: simple JSON object with attributes `attr` and `value`.\n", "before_files": [{"content": "import functools\nimport re\nfrom django.utils.translation import gettext as _\n\n__all__ = (\n 'Condition',\n 'ConditionSet',\n)\n\n\nAND = 'and'\nOR = 'or'\n\n\ndef is_ruleset(data):\n \"\"\"\n Determine whether the given dictionary looks like a rule set.\n \"\"\"\n return type(data) is dict and len(data) == 1 and list(data.keys())[0] in (AND, OR)\n\n\nclass Condition:\n \"\"\"\n An individual conditional rule that evaluates a single attribute and its value.\n\n :param attr: The name of the attribute being evaluated\n :param value: The value being compared\n :param op: The logical operation to use when evaluating the value (default: 'eq')\n \"\"\"\n EQ = 'eq'\n GT = 'gt'\n GTE = 'gte'\n LT = 'lt'\n LTE = 'lte'\n IN = 'in'\n CONTAINS = 'contains'\n REGEX = 'regex'\n\n OPERATORS = (\n EQ, GT, GTE, LT, LTE, IN, CONTAINS, REGEX\n )\n\n TYPES = {\n str: (EQ, CONTAINS, REGEX),\n bool: (EQ, CONTAINS),\n int: (EQ, GT, GTE, LT, LTE, CONTAINS),\n float: (EQ, GT, GTE, LT, LTE, CONTAINS),\n list: (EQ, IN, CONTAINS),\n type(None): (EQ,)\n }\n\n def __init__(self, attr, value, op=EQ, negate=False):\n if op not in self.OPERATORS:\n raise ValueError(_(\"Unknown operator: {op}. Must be one of: {operators}\").format(\n op=op, operators=', '.join(self.OPERATORS)\n ))\n if type(value) not in self.TYPES:\n raise ValueError(_(\"Unsupported value type: {value}\").format(value=type(value)))\n if op not in self.TYPES[type(value)]:\n raise ValueError(_(\"Invalid type for {op} operation: {value}\").format(op=op, value=type(value)))\n\n self.attr = attr\n self.value = value\n self.eval_func = getattr(self, f'eval_{op}')\n self.negate = negate\n\n def eval(self, data):\n \"\"\"\n Evaluate the provided data to determine whether it matches the condition.\n \"\"\"\n def _get(obj, key):\n if isinstance(obj, list):\n return [dict.get(i, key) for i in obj]\n\n return dict.get(obj, key)\n\n try:\n value = functools.reduce(_get, self.attr.split('.'), data)\n except TypeError:\n # Invalid key path\n value = None\n result = self.eval_func(value)\n\n if self.negate:\n return not result\n return result\n\n # Equivalency\n\n def eval_eq(self, value):\n return value == self.value\n\n def eval_neq(self, value):\n return value != self.value\n\n # Numeric comparisons\n\n def eval_gt(self, value):\n return value > self.value\n\n def eval_gte(self, value):\n return value >= self.value\n\n def eval_lt(self, value):\n return value < self.value\n\n def eval_lte(self, value):\n return value <= self.value\n\n # Membership\n\n def eval_in(self, value):\n return value in self.value\n\n def eval_contains(self, value):\n return self.value in value\n\n # Regular expressions\n\n def eval_regex(self, value):\n return re.match(self.value, value) is not None\n\n\nclass ConditionSet:\n \"\"\"\n A set of one or more Condition to be evaluated per the prescribed logic (AND or OR). 
Example:\n\n {\"and\": [\n {\"attr\": \"foo\", \"op\": \"eq\", \"value\": 1},\n {\"attr\": \"bar\", \"op\": \"eq\", \"value\": 2, \"negate\": true}\n ]}\n\n :param ruleset: A dictionary mapping a logical operator to a list of conditional rules\n \"\"\"\n def __init__(self, ruleset):\n if type(ruleset) is not dict:\n raise ValueError(_(\"Ruleset must be a dictionary, not {ruleset}.\").format(ruleset=type(ruleset)))\n if len(ruleset) != 1:\n raise ValueError(_(\"Ruleset must have exactly one logical operator (found {ruleset})\").format(\n ruleset=len(ruleset)))\n\n # Determine the logic type\n logic = list(ruleset.keys())[0]\n if type(logic) is not str or logic.lower() not in (AND, OR):\n raise ValueError(_(\"Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')\").format(\n logic=logic, op_and=AND, op_or=OR\n ))\n self.logic = logic.lower()\n\n # Compile the set of Conditions\n self.conditions = [\n ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)\n for rule in ruleset[self.logic]\n ]\n\n def eval(self, data):\n \"\"\"\n Evaluate the provided data to determine whether it matches this set of conditions.\n \"\"\"\n func = any if self.logic == 'or' else all\n return func(d.eval(data) for d in self.conditions)\n", "path": "netbox/extras/conditions.py"}], "after_files": [{"content": "import functools\nimport re\nfrom django.utils.translation import gettext as _\n\n__all__ = (\n 'Condition',\n 'ConditionSet',\n)\n\n\nAND = 'and'\nOR = 'or'\n\n\ndef is_ruleset(data):\n \"\"\"\n Determine whether the given dictionary looks like a rule set.\n \"\"\"\n return type(data) is dict and len(data) == 1 and list(data.keys())[0] in (AND, OR)\n\n\nclass Condition:\n \"\"\"\n An individual conditional rule that evaluates a single attribute and its value.\n\n :param attr: The name of the attribute being evaluated\n :param value: The value being compared\n :param op: The logical operation to use when evaluating the value (default: 'eq')\n \"\"\"\n EQ = 'eq'\n GT = 'gt'\n GTE = 'gte'\n LT = 'lt'\n LTE = 'lte'\n IN = 'in'\n CONTAINS = 'contains'\n REGEX = 'regex'\n\n OPERATORS = (\n EQ, GT, GTE, LT, LTE, IN, CONTAINS, REGEX\n )\n\n TYPES = {\n str: (EQ, CONTAINS, REGEX),\n bool: (EQ, CONTAINS),\n int: (EQ, GT, GTE, LT, LTE, CONTAINS),\n float: (EQ, GT, GTE, LT, LTE, CONTAINS),\n list: (EQ, IN, CONTAINS),\n type(None): (EQ,)\n }\n\n def __init__(self, attr, value, op=EQ, negate=False):\n if op not in self.OPERATORS:\n raise ValueError(_(\"Unknown operator: {op}. 
Must be one of: {operators}\").format(\n op=op, operators=', '.join(self.OPERATORS)\n ))\n if type(value) not in self.TYPES:\n raise ValueError(_(\"Unsupported value type: {value}\").format(value=type(value)))\n if op not in self.TYPES[type(value)]:\n raise ValueError(_(\"Invalid type for {op} operation: {value}\").format(op=op, value=type(value)))\n\n self.attr = attr\n self.value = value\n self.eval_func = getattr(self, f'eval_{op}')\n self.negate = negate\n\n def eval(self, data):\n \"\"\"\n Evaluate the provided data to determine whether it matches the condition.\n \"\"\"\n def _get(obj, key):\n if isinstance(obj, list):\n return [dict.get(i, key) for i in obj]\n\n return dict.get(obj, key)\n\n try:\n value = functools.reduce(_get, self.attr.split('.'), data)\n except TypeError:\n # Invalid key path\n value = None\n result = self.eval_func(value)\n\n if self.negate:\n return not result\n return result\n\n # Equivalency\n\n def eval_eq(self, value):\n return value == self.value\n\n def eval_neq(self, value):\n return value != self.value\n\n # Numeric comparisons\n\n def eval_gt(self, value):\n return value > self.value\n\n def eval_gte(self, value):\n return value >= self.value\n\n def eval_lt(self, value):\n return value < self.value\n\n def eval_lte(self, value):\n return value <= self.value\n\n # Membership\n\n def eval_in(self, value):\n return value in self.value\n\n def eval_contains(self, value):\n return self.value in value\n\n # Regular expressions\n\n def eval_regex(self, value):\n return re.match(self.value, value) is not None\n\n\nclass ConditionSet:\n \"\"\"\n A set of one or more Condition to be evaluated per the prescribed logic (AND or OR). Example:\n\n {\"and\": [\n {\"attr\": \"foo\", \"op\": \"eq\", \"value\": 1},\n {\"attr\": \"bar\", \"op\": \"eq\", \"value\": 2, \"negate\": true}\n ]}\n\n :param ruleset: A dictionary mapping a logical operator to a list of conditional rules\n \"\"\"\n def __init__(self, ruleset):\n if type(ruleset) is not dict:\n raise ValueError(_(\"Ruleset must be a dictionary, not {ruleset}.\").format(ruleset=type(ruleset)))\n\n if len(ruleset) == 1:\n self.logic = (list(ruleset.keys())[0]).lower()\n if self.logic not in (AND, OR):\n raise ValueError(_(\"Invalid logic type: must be 'AND' or 'OR'. Please check documentation.\"))\n\n # Compile the set of Conditions\n self.conditions = [\n ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)\n for rule in ruleset[self.logic]\n ]\n else:\n try:\n self.logic = None\n self.conditions = [Condition(**ruleset)]\n except TypeError:\n raise ValueError(_(\"Incorrect key(s) informed. Please check documentation.\"))\n\n def eval(self, data):\n \"\"\"\n Evaluate the provided data to determine whether it matches this set of conditions.\n \"\"\"\n func = any if self.logic == 'or' else all\n return func(d.eval(data) for d in self.conditions)\n", "path": "netbox/extras/conditions.py"}]} | 2,030 | 447 |
gh_patches_debug_51300 | rasdani/github-patches | git_diff | translate__pootle-5619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Priority column is missing
Since the column reordering we've lost the priority column in the vfolders table
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/virtualfolder/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django import forms
10 from django.http import Http404
11 from django.shortcuts import get_object_or_404
12 from django.urls import reverse
13 from django.utils.functional import cached_property
14
15 from pootle.core.browser import get_table_headings
16 from pootle.core.delegate import search_backend
17 from pootle.core.exceptions import Http400
18 from pootle.core.http import JsonResponse
19 from pootle.core.url_helpers import get_path_parts, split_pootle_path
20 from pootle.i18n.gettext import ugettext as _
21 from pootle_misc.util import ajax_required
22 from pootle_store.forms import UnitSearchForm
23 from pootle_store.unit.results import GroupedResults
24 from pootle_translationproject.views import TPTranslateView
25
26 from .delegate import vfolders_data_tool
27 from .models import VirtualFolder
28
29
30 def make_vfolder_dict(context, vf, stats):
31 lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]
32 base_url = reverse(
33 "pootle-vfolder-tp-translate",
34 kwargs=dict(
35 vfolder_name=vf,
36 language_code=lang_code,
37 project_code=proj_code))
38 return {
39 'href_translate': base_url,
40 'title': stats["title"],
41 'code': vf,
42 'priority': stats.get("priority"),
43 'is_grayed': not stats["isVisible"],
44 'stats': stats,
45 'icon': 'vfolder'}
46
47
48 class VFolderTPTranslateView(TPTranslateView):
49 display_vfolder_priority = False
50
51 @cached_property
52 def check_data(self):
53 return self.vfolders_data_view.vfolder_data_tool.get_checks(
54 user=self.request.user).get(self.vfolder_pk, {})
55
56 @cached_property
57 def vfolder(self):
58 return VirtualFolder.objects.get(name=self.kwargs["vfolder_name"])
59
60 @property
61 def vfolder_pk(self):
62 return self.vfolder.pk
63
64 def get_context_data(self, *args, **kwargs):
65 ctx = super(
66 VFolderTPTranslateView,
67 self).get_context_data(*args, **kwargs)
68 ctx["unit_api_root"] = reverse(
69 "vfolder-pootle-xhr-units",
70 kwargs=dict(vfolder_name=self.vfolder.name))
71 ctx["resource_path"] = (
72 "/".join(
73 ["++vfolder",
74 self.vfolder.name,
75 self.object.pootle_path.replace(self.ctx_path, "")]))
76 ctx["resource_path_parts"] = get_path_parts(ctx["resource_path"])
77 return ctx
78
79
80 @ajax_required
81 def get_vfolder_units(request, **kwargs):
82 """Gets source and target texts and its metadata.
83
84 :return: A JSON-encoded string containing the source and target texts
85 grouped by the store they belong to.
86
87 The optional `count` GET parameter defines the chunk size to
88 consider. The user's preference will be used by default.
89
90 When the `initial` GET parameter is present, a sorted list of
91 the result set ids will be returned too.
92 """
93 search_form = UnitSearchForm(request.GET, user=request.user)
94
95 vfolder = get_object_or_404(
96 VirtualFolder,
97 name=kwargs.get("vfolder_name"))
98
99 if not search_form.is_valid():
100 errors = search_form.errors.as_data()
101 if "path" in errors:
102 for error in errors["path"]:
103 if error.code == "max_length":
104 raise Http400(_('Path too long.'))
105 elif error.code == "required":
106 raise Http400(_('Arguments missing.'))
107 raise Http404(forms.ValidationError(search_form.errors).messages)
108
109 search_form.cleaned_data["vfolder"] = vfolder
110 backend = search_backend.get(VirtualFolder)(
111 request.user, **search_form.cleaned_data)
112 total, start, end, units_qs = backend.search()
113 return JsonResponse(
114 {'start': start,
115 'end': end,
116 'total': total,
117 'unitGroups': GroupedResults(units_qs).data})
118
119
120 class VFoldersDataView(object):
121
122 _table_fields = (
123 'name', 'progress', 'activity',
124 'total', 'need-translation',
125 'suggestions', 'critical')
126
127 def __init__(self, context, user, has_admin_access=False):
128 self.context = context
129 self.user = user
130 self.has_admin_access = has_admin_access
131
132 @property
133 def vfolder_data_tool(self):
134 return vfolders_data_tool.get(self.context.__class__)(self.context)
135
136 @property
137 def table_fields(self):
138 fields = self._table_fields
139 if self.has_admin_access:
140 fields += ('last-updated', )
141 return fields
142
143 @cached_property
144 def table_data(self):
145 ctx = {}
146 if len(self.all_stats) > 0:
147 ctx.update({
148 'children': {
149 'id': 'vfolders',
150 'fields': self.table_fields,
151 'headings': get_table_headings(self.table_fields),
152 'rows': self.table_items}})
153 return ctx
154
155 @cached_property
156 def all_stats(self):
157 return self.vfolder_data_tool.get_stats(user=self.user)
158
159 @cached_property
160 def stats(self):
161 return dict(children=self.all_stats)
162
163 @property
164 def table_items(self):
165 return [
166 make_vfolder_dict(self.context, *vf)
167 for vf
168 in self.all_stats.items()]
169
170 @cached_property
171 def has_data(self):
172 return (
173 self.vfolder_data_tool.all_stat_data.exists()
174 if self.vfolder_data_tool.show_all_to(self.user)
175 else self.vfolder_data_tool.stat_data.exists())
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/virtualfolder/views.py b/pootle/apps/virtualfolder/views.py
--- a/pootle/apps/virtualfolder/views.py
+++ b/pootle/apps/virtualfolder/views.py
@@ -122,7 +122,7 @@
_table_fields = (
'name', 'progress', 'activity',
'total', 'need-translation',
- 'suggestions', 'critical')
+ 'suggestions', 'critical', 'priority')
def __init__(self, context, user, has_admin_access=False):
self.context = context
| {"golden_diff": "diff --git a/pootle/apps/virtualfolder/views.py b/pootle/apps/virtualfolder/views.py\n--- a/pootle/apps/virtualfolder/views.py\n+++ b/pootle/apps/virtualfolder/views.py\n@@ -122,7 +122,7 @@\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n- 'suggestions', 'critical')\n+ 'suggestions', 'critical', 'priority')\n \n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n", "issue": "Priority column is missing\nSince the column reordering we've lost the priority column in the vfolders table\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django import forms\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.browser import get_table_headings\nfrom pootle.core.delegate import search_backend\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.http import JsonResponse\nfrom pootle.core.url_helpers import get_path_parts, split_pootle_path\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle_misc.util import ajax_required\nfrom pootle_store.forms import UnitSearchForm\nfrom pootle_store.unit.results import GroupedResults\nfrom pootle_translationproject.views import TPTranslateView\n\nfrom .delegate import vfolders_data_tool\nfrom .models import VirtualFolder\n\n\ndef make_vfolder_dict(context, vf, stats):\n lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]\n base_url = reverse(\n \"pootle-vfolder-tp-translate\",\n kwargs=dict(\n vfolder_name=vf,\n language_code=lang_code,\n project_code=proj_code))\n return {\n 'href_translate': base_url,\n 'title': stats[\"title\"],\n 'code': vf,\n 'priority': stats.get(\"priority\"),\n 'is_grayed': not stats[\"isVisible\"],\n 'stats': stats,\n 'icon': 'vfolder'}\n\n\nclass VFolderTPTranslateView(TPTranslateView):\n display_vfolder_priority = False\n\n @cached_property\n def check_data(self):\n return self.vfolders_data_view.vfolder_data_tool.get_checks(\n user=self.request.user).get(self.vfolder_pk, {})\n\n @cached_property\n def vfolder(self):\n return VirtualFolder.objects.get(name=self.kwargs[\"vfolder_name\"])\n\n @property\n def vfolder_pk(self):\n return self.vfolder.pk\n\n def get_context_data(self, *args, **kwargs):\n ctx = super(\n VFolderTPTranslateView,\n self).get_context_data(*args, **kwargs)\n ctx[\"unit_api_root\"] = reverse(\n \"vfolder-pootle-xhr-units\",\n kwargs=dict(vfolder_name=self.vfolder.name))\n ctx[\"resource_path\"] = (\n \"/\".join(\n [\"++vfolder\",\n self.vfolder.name,\n self.object.pootle_path.replace(self.ctx_path, \"\")]))\n ctx[\"resource_path_parts\"] = get_path_parts(ctx[\"resource_path\"])\n return ctx\n\n\n@ajax_required\ndef get_vfolder_units(request, **kwargs):\n \"\"\"Gets source and target texts and its metadata.\n\n :return: A JSON-encoded string containing the source and target texts\n grouped by the store they belong to.\n\n The optional `count` GET parameter defines the chunk size to\n consider. 
The user's preference will be used by default.\n\n When the `initial` GET parameter is present, a sorted list of\n the result set ids will be returned too.\n \"\"\"\n search_form = UnitSearchForm(request.GET, user=request.user)\n\n vfolder = get_object_or_404(\n VirtualFolder,\n name=kwargs.get(\"vfolder_name\"))\n\n if not search_form.is_valid():\n errors = search_form.errors.as_data()\n if \"path\" in errors:\n for error in errors[\"path\"]:\n if error.code == \"max_length\":\n raise Http400(_('Path too long.'))\n elif error.code == \"required\":\n raise Http400(_('Arguments missing.'))\n raise Http404(forms.ValidationError(search_form.errors).messages)\n\n search_form.cleaned_data[\"vfolder\"] = vfolder\n backend = search_backend.get(VirtualFolder)(\n request.user, **search_form.cleaned_data)\n total, start, end, units_qs = backend.search()\n return JsonResponse(\n {'start': start,\n 'end': end,\n 'total': total,\n 'unitGroups': GroupedResults(units_qs).data})\n\n\nclass VFoldersDataView(object):\n\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n 'suggestions', 'critical')\n\n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n self.user = user\n self.has_admin_access = has_admin_access\n\n @property\n def vfolder_data_tool(self):\n return vfolders_data_tool.get(self.context.__class__)(self.context)\n\n @property\n def table_fields(self):\n fields = self._table_fields\n if self.has_admin_access:\n fields += ('last-updated', )\n return fields\n\n @cached_property\n def table_data(self):\n ctx = {}\n if len(self.all_stats) > 0:\n ctx.update({\n 'children': {\n 'id': 'vfolders',\n 'fields': self.table_fields,\n 'headings': get_table_headings(self.table_fields),\n 'rows': self.table_items}})\n return ctx\n\n @cached_property\n def all_stats(self):\n return self.vfolder_data_tool.get_stats(user=self.user)\n\n @cached_property\n def stats(self):\n return dict(children=self.all_stats)\n\n @property\n def table_items(self):\n return [\n make_vfolder_dict(self.context, *vf)\n for vf\n in self.all_stats.items()]\n\n @cached_property\n def has_data(self):\n return (\n self.vfolder_data_tool.all_stat_data.exists()\n if self.vfolder_data_tool.show_all_to(self.user)\n else self.vfolder_data_tool.stat_data.exists())\n", "path": "pootle/apps/virtualfolder/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django import forms\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.browser import get_table_headings\nfrom pootle.core.delegate import search_backend\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.http import JsonResponse\nfrom pootle.core.url_helpers import get_path_parts, split_pootle_path\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle_misc.util import ajax_required\nfrom pootle_store.forms import UnitSearchForm\nfrom pootle_store.unit.results import GroupedResults\nfrom pootle_translationproject.views import TPTranslateView\n\nfrom .delegate import vfolders_data_tool\nfrom .models import VirtualFolder\n\n\ndef make_vfolder_dict(context, vf, stats):\n lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]\n base_url = reverse(\n \"pootle-vfolder-tp-translate\",\n kwargs=dict(\n vfolder_name=vf,\n language_code=lang_code,\n project_code=proj_code))\n return {\n 'href_translate': base_url,\n 'title': stats[\"title\"],\n 'code': vf,\n 'priority': stats.get(\"priority\"),\n 'is_grayed': not stats[\"isVisible\"],\n 'stats': stats,\n 'icon': 'vfolder'}\n\n\nclass VFolderTPTranslateView(TPTranslateView):\n display_vfolder_priority = False\n\n @cached_property\n def check_data(self):\n return self.vfolders_data_view.vfolder_data_tool.get_checks(\n user=self.request.user).get(self.vfolder_pk, {})\n\n @cached_property\n def vfolder(self):\n return VirtualFolder.objects.get(name=self.kwargs[\"vfolder_name\"])\n\n @property\n def vfolder_pk(self):\n return self.vfolder.pk\n\n def get_context_data(self, *args, **kwargs):\n ctx = super(\n VFolderTPTranslateView,\n self).get_context_data(*args, **kwargs)\n ctx[\"unit_api_root\"] = reverse(\n \"vfolder-pootle-xhr-units\",\n kwargs=dict(vfolder_name=self.vfolder.name))\n ctx[\"resource_path\"] = (\n \"/\".join(\n [\"++vfolder\",\n self.vfolder.name,\n self.object.pootle_path.replace(self.ctx_path, \"\")]))\n ctx[\"resource_path_parts\"] = get_path_parts(ctx[\"resource_path\"])\n return ctx\n\n\n@ajax_required\ndef get_vfolder_units(request, **kwargs):\n \"\"\"Gets source and target texts and its metadata.\n\n :return: A JSON-encoded string containing the source and target texts\n grouped by the store they belong to.\n\n The optional `count` GET parameter defines the chunk size to\n consider. 
The user's preference will be used by default.\n\n When the `initial` GET parameter is present, a sorted list of\n the result set ids will be returned too.\n \"\"\"\n search_form = UnitSearchForm(request.GET, user=request.user)\n\n vfolder = get_object_or_404(\n VirtualFolder,\n name=kwargs.get(\"vfolder_name\"))\n\n if not search_form.is_valid():\n errors = search_form.errors.as_data()\n if \"path\" in errors:\n for error in errors[\"path\"]:\n if error.code == \"max_length\":\n raise Http400(_('Path too long.'))\n elif error.code == \"required\":\n raise Http400(_('Arguments missing.'))\n raise Http404(forms.ValidationError(search_form.errors).messages)\n\n search_form.cleaned_data[\"vfolder\"] = vfolder\n backend = search_backend.get(VirtualFolder)(\n request.user, **search_form.cleaned_data)\n total, start, end, units_qs = backend.search()\n return JsonResponse(\n {'start': start,\n 'end': end,\n 'total': total,\n 'unitGroups': GroupedResults(units_qs).data})\n\n\nclass VFoldersDataView(object):\n\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n 'suggestions', 'critical', 'priority')\n\n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n self.user = user\n self.has_admin_access = has_admin_access\n\n @property\n def vfolder_data_tool(self):\n return vfolders_data_tool.get(self.context.__class__)(self.context)\n\n @property\n def table_fields(self):\n fields = self._table_fields\n if self.has_admin_access:\n fields += ('last-updated', )\n return fields\n\n @cached_property\n def table_data(self):\n ctx = {}\n if len(self.all_stats) > 0:\n ctx.update({\n 'children': {\n 'id': 'vfolders',\n 'fields': self.table_fields,\n 'headings': get_table_headings(self.table_fields),\n 'rows': self.table_items}})\n return ctx\n\n @cached_property\n def all_stats(self):\n return self.vfolder_data_tool.get_stats(user=self.user)\n\n @cached_property\n def stats(self):\n return dict(children=self.all_stats)\n\n @property\n def table_items(self):\n return [\n make_vfolder_dict(self.context, *vf)\n for vf\n in self.all_stats.items()]\n\n @cached_property\n def has_data(self):\n return (\n self.vfolder_data_tool.all_stat_data.exists()\n if self.vfolder_data_tool.show_all_to(self.user)\n else self.vfolder_data_tool.stat_data.exists())\n", "path": "pootle/apps/virtualfolder/views.py"}]} | 2,000 | 131 |
gh_patches_debug_1413 | rasdani/github-patches | git_diff | gratipay__gratipay.com-1314 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
reset.css doesn't load sometimes
@clone1018 saw this when we first started caching static assets. It's why I turned off static caching initially. Now static caching is back with #1245 and indeed we're seeing this again. :(

--- END ISSUE ---
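Before turning to the code segments below, it can help to probe the conditional-GET path that serves these assets. The sketch below is not from the repository and the asset URL is an assumption; it only shows how to observe whether a `304 Not Modified` response comes back carrying a `Content-Type` header, which is the header the accepted fix further down removes.

```python
# Hypothetical probe of the /assets/ conditional-GET path; the URL is an
# assumption and any asset served through cache_static.py would do.
import requests

ASSET_URL = "https://www.gittip.com/assets/reset.css"

first = requests.get(ASSET_URL)
last_modified = first.headers.get("Last-Modified")

if last_modified:
    # Replay the request the way a browser with a warm cache would.
    second = requests.get(ASSET_URL, headers={"If-Modified-Since": last_modified})
    print(second.status_code)                  # expected: 304
    print(second.headers.get("Content-Type"))  # header carried on the 304, if any
```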
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gittip/cache_static.py`
Content:
```
1 """
2 Handles caching of static resources.
3 """
4 import os
5 from calendar import timegm
6 from email.utils import parsedate
7 from wsgiref.handlers import format_date_time
8
9 from aspen import Response
10
11
12 def version_is_available(request):
13 """Return a boolean, whether we have the version they asked for.
14 """
15 path = request.line.uri.path
16 version = request.website.version
17 return path['version'] == version if 'version' in path else True
18
19
20 def version_is_dash(request):
21 """Return a boolean, whether the version they asked for is -.
22 """
23 return request.line.uri.path.get('version') == '-'
24
25
26 def get_last_modified(fs_path):
27 """Get the last modified time, as int, of the file pointed to by fs_path.
28 """
29 return int(os.path.getctime(fs_path))
30
31
32 def inbound(request):
33 """Try to serve a 304 for resources under assets/.
34 """
35 uri = request.line.uri
36
37 if not uri.startswith('/assets/'):
38
39 # Only apply to the assets/ directory.
40
41 return request
42
43 if version_is_dash(request):
44
45 # Special-case a version of '-' to never 304/404 here.
46
47 return request
48
49 if not version_is_available(request):
50
51 # Don't serve one version of a file as if it were another.
52
53 raise Response(404)
54
55 ims = request.headers.get('If-Modified-Since')
56 if not ims:
57
58 # This client doesn't care about when the file was modified.
59
60 return request
61
62 if request.fs.endswith('.spt'):
63
64 # This is a requests for a dynamic resource. Perhaps in the future
65 # we'll delegate to such resources to compute a sensible Last-Modified
66 # or E-Tag, but for now we punt. This is okay, because we expect to
67 # put our dynamic assets behind a CDN in production.
68
69 return request
70
71
72 try:
73 ims = timegm(parsedate(ims))
74 except:
75
76 # Malformed If-Modified-Since header. Proceed with the request.
77
78 return request
79
80 last_modified = get_last_modified(request.fs)
81 if ims < last_modified:
82
83 # The file has been modified since. Serve the whole thing.
84
85 return request
86
87
88 # Huzzah!
89 # =======
90 # We can serve a 304! :D
91
92 response = Response(304)
93 response.headers['Last-Modified'] = format_date_time(last_modified)
94 response.headers['Cache-Control'] = 'no-cache'
95 raise response
96
97
98 def outbound(response):
99 """Set caching headers for resources under assets/.
100 """
101 request = response.request
102 website = request.website
103 uri = request.line.uri
104
105 version = website.version
106 response.headers['X-Gittip-Version'] = version
107
108 if not uri.startswith('/assets/'):
109 return response
110
111 response.headers.cookie.clear()
112
113 if response.code == 304:
114 return response
115
116 if website.cache_static:
117
118 # https://developers.google.com/speed/docs/best-practices/caching
119 response.headers['Cache-Control'] = 'public'
120 response.headers['Vary'] = 'accept-encoding'
121
122 if 'version' in uri.path:
123 # This specific asset is versioned, so it's fine to cache it.
124 response.headers['Expires'] = 'Sun, 17 Jan 2038 19:14:07 GMT'
125 else:
126 # Asset is not versioned. Don't cache it, but set Last-Modified.
127 last_modified = get_last_modified(request.fs)
128 response.headers['Last-Modified'] = format_date_time(last_modified)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gittip/cache_static.py b/gittip/cache_static.py
--- a/gittip/cache_static.py
+++ b/gittip/cache_static.py
@@ -111,6 +111,10 @@
response.headers.cookie.clear()
if response.code == 304:
+
+ # https://github.com/gittip/www.gittip.com/issues/1308
+ del response.headers['Content-Type']
+
return response
if website.cache_static:
| {"golden_diff": "diff --git a/gittip/cache_static.py b/gittip/cache_static.py\n--- a/gittip/cache_static.py\n+++ b/gittip/cache_static.py\n@@ -111,6 +111,10 @@\n response.headers.cookie.clear()\n \n if response.code == 304:\n+\n+ # https://github.com/gittip/www.gittip.com/issues/1308\n+ del response.headers['Content-Type']\n+\n return response\n \n if website.cache_static:\n", "issue": "reset.css doesn't load sometimes\n@clone1018 saw this when we first started caching static assets. It's why I turned off static caching initially. Now static caching is back with #1245 and indeed we're seeing this again. :(\n\n\n\n", "before_files": [{"content": "\"\"\"\nHandles caching of static resources.\n\"\"\"\nimport os\nfrom calendar import timegm\nfrom email.utils import parsedate\nfrom wsgiref.handlers import format_date_time\n\nfrom aspen import Response\n\n\ndef version_is_available(request):\n \"\"\"Return a boolean, whether we have the version they asked for.\n \"\"\"\n path = request.line.uri.path\n version = request.website.version\n return path['version'] == version if 'version' in path else True\n\n\ndef version_is_dash(request):\n \"\"\"Return a boolean, whether the version they asked for is -.\n \"\"\"\n return request.line.uri.path.get('version') == '-'\n\n\ndef get_last_modified(fs_path):\n \"\"\"Get the last modified time, as int, of the file pointed to by fs_path.\n \"\"\"\n return int(os.path.getctime(fs_path))\n\n\ndef inbound(request):\n \"\"\"Try to serve a 304 for resources under assets/.\n \"\"\"\n uri = request.line.uri\n\n if not uri.startswith('/assets/'):\n\n # Only apply to the assets/ directory.\n\n return request\n\n if version_is_dash(request):\n\n # Special-case a version of '-' to never 304/404 here.\n\n return request\n\n if not version_is_available(request):\n\n # Don't serve one version of a file as if it were another.\n\n raise Response(404)\n\n ims = request.headers.get('If-Modified-Since')\n if not ims:\n\n # This client doesn't care about when the file was modified.\n\n return request\n\n if request.fs.endswith('.spt'):\n\n # This is a requests for a dynamic resource. Perhaps in the future\n # we'll delegate to such resources to compute a sensible Last-Modified\n # or E-Tag, but for now we punt. This is okay, because we expect to\n # put our dynamic assets behind a CDN in production.\n\n return request\n\n\n try:\n ims = timegm(parsedate(ims))\n except:\n\n # Malformed If-Modified-Since header. Proceed with the request.\n\n return request\n\n last_modified = get_last_modified(request.fs)\n if ims < last_modified:\n\n # The file has been modified since. Serve the whole thing.\n\n return request\n\n\n # Huzzah!\n # =======\n # We can serve a 304! 
:D\n\n response = Response(304)\n response.headers['Last-Modified'] = format_date_time(last_modified)\n response.headers['Cache-Control'] = 'no-cache'\n raise response\n\n\ndef outbound(response):\n \"\"\"Set caching headers for resources under assets/.\n \"\"\"\n request = response.request\n website = request.website\n uri = request.line.uri\n\n version = website.version\n response.headers['X-Gittip-Version'] = version\n\n if not uri.startswith('/assets/'):\n return response\n\n response.headers.cookie.clear()\n\n if response.code == 304:\n return response\n\n if website.cache_static:\n\n # https://developers.google.com/speed/docs/best-practices/caching\n response.headers['Cache-Control'] = 'public'\n response.headers['Vary'] = 'accept-encoding'\n\n if 'version' in uri.path:\n # This specific asset is versioned, so it's fine to cache it.\n response.headers['Expires'] = 'Sun, 17 Jan 2038 19:14:07 GMT'\n else:\n # Asset is not versioned. Don't cache it, but set Last-Modified.\n last_modified = get_last_modified(request.fs)\n response.headers['Last-Modified'] = format_date_time(last_modified)\n", "path": "gittip/cache_static.py"}], "after_files": [{"content": "\"\"\"\nHandles caching of static resources.\n\"\"\"\nimport os\nfrom calendar import timegm\nfrom email.utils import parsedate\nfrom wsgiref.handlers import format_date_time\n\nfrom aspen import Response\n\n\ndef version_is_available(request):\n \"\"\"Return a boolean, whether we have the version they asked for.\n \"\"\"\n path = request.line.uri.path\n version = request.website.version\n return path['version'] == version if 'version' in path else True\n\n\ndef version_is_dash(request):\n \"\"\"Return a boolean, whether the version they asked for is -.\n \"\"\"\n return request.line.uri.path.get('version') == '-'\n\n\ndef get_last_modified(fs_path):\n \"\"\"Get the last modified time, as int, of the file pointed to by fs_path.\n \"\"\"\n return int(os.path.getctime(fs_path))\n\n\ndef inbound(request):\n \"\"\"Try to serve a 304 for resources under assets/.\n \"\"\"\n uri = request.line.uri\n\n if not uri.startswith('/assets/'):\n\n # Only apply to the assets/ directory.\n\n return request\n\n if version_is_dash(request):\n\n # Special-case a version of '-' to never 304/404 here.\n\n return request\n\n if not version_is_available(request):\n\n # Don't serve one version of a file as if it were another.\n\n raise Response(404)\n\n ims = request.headers.get('If-Modified-Since')\n if not ims:\n\n # This client doesn't care about when the file was modified.\n\n return request\n\n if request.fs.endswith('.spt'):\n\n # This is a requests for a dynamic resource. Perhaps in the future\n # we'll delegate to such resources to compute a sensible Last-Modified\n # or E-Tag, but for now we punt. This is okay, because we expect to\n # put our dynamic assets behind a CDN in production.\n\n return request\n\n\n try:\n ims = timegm(parsedate(ims))\n except:\n\n # Malformed If-Modified-Since header. Proceed with the request.\n\n return request\n\n last_modified = get_last_modified(request.fs)\n if ims < last_modified:\n\n # The file has been modified since. Serve the whole thing.\n\n return request\n\n\n # Huzzah!\n # =======\n # We can serve a 304! 
:D\n\n response = Response(304)\n response.headers['Last-Modified'] = format_date_time(last_modified)\n response.headers['Cache-Control'] = 'no-cache'\n raise response\n\n\ndef outbound(response):\n \"\"\"Set caching headers for resources under assets/.\n \"\"\"\n request = response.request\n website = request.website\n uri = request.line.uri\n\n version = website.version\n response.headers['X-Gittip-Version'] = version\n\n if not uri.startswith('/assets/'):\n return response\n\n response.headers.cookie.clear()\n\n if response.code == 304:\n\n # https://github.com/gittip/www.gittip.com/issues/1308\n del response.headers['Content-Type']\n\n return response\n\n if website.cache_static:\n\n # https://developers.google.com/speed/docs/best-practices/caching\n response.headers['Cache-Control'] = 'public'\n response.headers['Vary'] = 'accept-encoding'\n\n if 'version' in uri.path:\n # This specific asset is versioned, so it's fine to cache it.\n response.headers['Expires'] = 'Sun, 17 Jan 2038 19:14:07 GMT'\n else:\n # Asset is not versioned. Don't cache it, but set Last-Modified.\n last_modified = get_last_modified(request.fs)\n response.headers['Last-Modified'] = format_date_time(last_modified)\n", "path": "gittip/cache_static.py"}]} | 1,486 | 111 |
gh_patches_debug_28903 | rasdani/github-patches | git_diff | crytic__slither-252 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect source mappings because of certain (Unicode?) characters in comments
Certain characters (or scripts) in Solidity comments appear to cause incorrect source mappings.
For example, in `0x06012c8cf97bead5deae237070f9587f8e7a266d_KittyCore.sol`, the symbol that looks like an underscore in "email_protected":
```
/// @author Dieter Shirley <<a href="/cdn-cgi/l/email-protection" class="__cf_email__" data-cfemail="6004051405200118090f0d1a050e4e030f">[email_protected]</a>> (https://github.com/dete)
```
Similarly, the Asian characters in the comments below from `0x5d0d76787d9d564061dd23f8209f804a3b8ad2f2_FoMo3Dlong.sol` also cause source-mapping problems:
```
struct Round {
uint256 plyr; // pID of player in lead, lead领导吗?
uint256 team; // tID of team in lead
uint256 end; // time ends/ended
bool ended; // has round end function been ran 这个开关值得研究下
uint256 strt; // time round started
uint256 keys; // keys
uint256 eth; // total eth in
uint256 pot; // eth to pot (during round) / final amount paid to winner (after round ends)
uint256 mask; // global mask
uint256 ico; // total eth sent in during ICO phase
uint256 icoGen; // total eth for gen during ICO phase
uint256 icoAvg; // average key price for ICO phase
}
```
--- END ISSUE ---
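Before the code segments below, note that this symptom is what byte/character offset mismatches look like: the compiler's source mappings refer to UTF-8 byte offsets, while slicing a Python `str` counts code points, so every multi-byte character in a comment shifts all later offsets (this matches the UTF-8 encoding step in the accepted fix further down). A minimal, Slither-independent illustration using one of the comment lines quoted above:

```python
# Each CJK character below is one code point but three UTF-8 bytes, so
# character-based and byte-based offsets diverge after this comment.
comment = "// has round end function been ran 这个开关值得研究下"

print(len(comment))                   # code points, what str slicing uses
print(len(comment.encode("utf-8")))   # bytes, what the source mapping refers to
assert len(comment.encode("utf-8")) > len(comment)
```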
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/core/source_mapping/source_mapping.py`
Content:
```
1 import re
2 import os
3 from slither.core.context.context import Context
4
5 class SourceMapping(Context):
6
7 def __init__(self):
8 super(SourceMapping, self).__init__()
9 self._source_mapping = None
10
11 @property
12 def source_mapping(self):
13 return self._source_mapping
14
15 @staticmethod
16 def _compute_line(source_code, start, length):
17 """
18 Compute line(s) numbers and starting/ending columns
19 from a start/end offset. All numbers start from 1.
20
21 Not done in an efficient way
22 """
23 total_length = len(source_code)
24 source_code = source_code.splitlines(True)
25 counter = 0
26 i = 0
27 lines = []
28 starting_column = None
29 ending_column = None
30 while counter < total_length:
31 # Determine the length of the line, and advance the line number
32 lineLength = len(source_code[i])
33 i = i + 1
34
35 # Determine our column numbers.
36 if starting_column is None and counter + lineLength > start:
37 starting_column = (start - counter) + 1
38 if starting_column is not None and ending_column is None and counter + lineLength > start + length:
39 ending_column = ((start + length) - counter) + 1
40
41 # Advance the current position counter, and determine line numbers.
42 counter += lineLength
43 if counter > start:
44 lines.append(i)
45
46 # If our advanced position for the next line is out of range, stop.
47 if counter > start + length:
48 break
49
50 return (lines, starting_column, ending_column)
51
52 @staticmethod
53 def _convert_source_mapping(offset, slither):
54 '''
55 Convert a text offset to a real offset
56 see https://solidity.readthedocs.io/en/develop/miscellaneous.html#source-mappings
57 Returns:
58 (dict): {'start':0, 'length':0, 'filename': 'file.sol'}
59 '''
60 sourceUnits = slither.source_units
61
62 position = re.findall('([0-9]*):([0-9]*):([-]?[0-9]*)', offset)
63 if len(position) != 1:
64 return {}
65
66 s, l, f = position[0]
67 s = int(s)
68 l = int(l)
69 f = int(f)
70
71 if f not in sourceUnits:
72 return {'start':s, 'length':l}
73 filename_used = sourceUnits[f]
74 filename_absolute = None
75 filename_relative = None
76 filename_short = None
77
78 lines = []
79
80 # If possible, convert the filename to its absolute/relative version
81 if slither.crytic_compile:
82 filenames = slither.crytic_compile.filename_lookup(filename_used)
83 filename_absolute = filenames.absolute
84 filename_relative = filenames.relative
85 filename_short = filenames.short
86
87 if filename_absolute in slither.source_code:
88 filename = filename_absolute
89 elif filename_relative in slither.source_code:
90 filename = filename_relative
91 elif filename_short in slither.source_code:
92 filename = filename_short
93 else:#
94 filename = filename_used.used
95 else:
96 filename = filename_used
97
98 if filename in slither.source_code:
99 source_code = slither.source_code[filename]
100 (lines, starting_column, ending_column) = SourceMapping._compute_line(source_code,
101 s,
102 l)
103 else:
104 (lines, starting_column, ending_column) = ([], None, None)
105
106
107 return {'start':s,
108 'length':l,
109 'filename_used': filename_used,
110 'filename_relative': filename_relative,
111 'filename_absolute': filename_absolute,
112 'filename_short': filename_short,
113 'lines' : lines,
114 'starting_column': starting_column,
115 'ending_column': ending_column
116 }
117
118 def set_offset(self, offset, slither):
119 if isinstance(offset, dict):
120 self._source_mapping = offset
121 else:
122 self._source_mapping = self._convert_source_mapping(offset, slither)
123
124
125 @property
126 def source_mapping_str(self):
127
128 lines = self.source_mapping.get('lines', None)
129 if not lines:
130 lines = ''
131 elif len(lines) == 1:
132 lines = '#{}'.format(lines[0])
133 else:
134 lines = '#{}-{}'.format(lines[0], lines[-1])
135 return '{}{}'.format(self.source_mapping['filename_short'], lines)
136
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/core/source_mapping/source_mapping.py b/slither/core/source_mapping/source_mapping.py
--- a/slither/core/source_mapping/source_mapping.py
+++ b/slither/core/source_mapping/source_mapping.py
@@ -20,6 +20,7 @@
Not done in an efficient way
"""
+ source_code = source_code.encode('utf-8')
total_length = len(source_code)
source_code = source_code.splitlines(True)
counter = 0
@@ -29,17 +30,18 @@
ending_column = None
while counter < total_length:
# Determine the length of the line, and advance the line number
- lineLength = len(source_code[i])
+ line_content = source_code[i]
+ line_length = len(line_content)
i = i + 1
# Determine our column numbers.
- if starting_column is None and counter + lineLength > start:
+ if starting_column is None and counter + line_length > start:
starting_column = (start - counter) + 1
- if starting_column is not None and ending_column is None and counter + lineLength > start + length:
+ if starting_column is not None and ending_column is None and counter + line_length > start + length:
ending_column = ((start + length) - counter) + 1
# Advance the current position counter, and determine line numbers.
- counter += lineLength
+ counter += line_length
if counter > start:
lines.append(i)
| {"golden_diff": "diff --git a/slither/core/source_mapping/source_mapping.py b/slither/core/source_mapping/source_mapping.py\n--- a/slither/core/source_mapping/source_mapping.py\n+++ b/slither/core/source_mapping/source_mapping.py\n@@ -20,6 +20,7 @@\n \n Not done in an efficient way\n \"\"\"\n+ source_code = source_code.encode('utf-8')\n total_length = len(source_code)\n source_code = source_code.splitlines(True)\n counter = 0\n@@ -29,17 +30,18 @@\n ending_column = None\n while counter < total_length:\n # Determine the length of the line, and advance the line number\n- lineLength = len(source_code[i])\n+ line_content = source_code[i]\n+ line_length = len(line_content)\n i = i + 1\n \n # Determine our column numbers.\n- if starting_column is None and counter + lineLength > start:\n+ if starting_column is None and counter + line_length > start:\n starting_column = (start - counter) + 1\n- if starting_column is not None and ending_column is None and counter + lineLength > start + length:\n+ if starting_column is not None and ending_column is None and counter + line_length > start + length:\n ending_column = ((start + length) - counter) + 1\n \n # Advance the current position counter, and determine line numbers.\n- counter += lineLength\n+ counter += line_length\n if counter > start:\n lines.append(i)\n", "issue": "Incorrect source mappings because of certain (Unicode?) characters in comments\nCertain characters (or scripts) in Solidity comments appear to cause incorrect source mappings.\r\n\r\nFor example, in `0x06012c8cf97bead5deae237070f9587f8e7a266d_KittyCore.sol`, the symbol that looks like underscore in \"email_protected\":\r\n```\r\n/// @author Dieter Shirley <<a href=\"/cdn-cgi/l/email-protection\" class=\"__cf_email__\" data-cfemail=\"6004051405200118090f0d1a050e4e030f\">[email_protected]</a>> (https://github.com/dete) \r\n```\r\nSimilarly, the Asian characters in below comments from `0x5d0d76787d9d564061dd23f8209f804a3b8ad2f2_FoMo3Dlong.sol` also cause source mapping problems:\r\n\r\n```\r\nstruct Round {\r\n uint256 plyr; // pID of player in lead\uff0c lead\u9886\u5bfc\u5417\uff1f \r\n uint256 team; // tID of team in lead \r\n uint256 end; // time ends/ended \r\n bool ended; // has round end function been ran \u8fd9\u4e2a\u5f00\u5173\u503c\u5f97\u7814\u7a76\u4e0b \r\n\tuint256 strt; // time round started \r\n\tuint256 keys; // keys \r\n\tuint256 eth; // total eth in \r\n\tuint256 pot; // eth to pot (during round) / final amount paid to winner (after round ends) \r\n uint256 mask; // global mask \r\n uint256 ico; // total eth sent in during ICO phase \r\n uint256 icoGen; // total eth for gen during ICO phase \r\n uint256 icoAvg; // average key price for ICO phase \r\n }\r\n```\n", "before_files": [{"content": "import re\nimport os\nfrom slither.core.context.context import Context\n\nclass SourceMapping(Context):\n\n def __init__(self):\n super(SourceMapping, self).__init__()\n self._source_mapping = None\n\n @property\n def source_mapping(self):\n return self._source_mapping\n\n @staticmethod\n def _compute_line(source_code, start, length):\n \"\"\"\n Compute line(s) numbers and starting/ending columns\n from a start/end offset. 
All numbers start from 1.\n\n Not done in an efficient way\n \"\"\"\n total_length = len(source_code)\n source_code = source_code.splitlines(True)\n counter = 0\n i = 0\n lines = []\n starting_column = None\n ending_column = None\n while counter < total_length:\n # Determine the length of the line, and advance the line number\n lineLength = len(source_code[i])\n i = i + 1\n\n # Determine our column numbers.\n if starting_column is None and counter + lineLength > start:\n starting_column = (start - counter) + 1\n if starting_column is not None and ending_column is None and counter + lineLength > start + length:\n ending_column = ((start + length) - counter) + 1\n\n # Advance the current position counter, and determine line numbers.\n counter += lineLength\n if counter > start:\n lines.append(i)\n\n # If our advanced position for the next line is out of range, stop.\n if counter > start + length:\n break\n\n return (lines, starting_column, ending_column)\n\n @staticmethod\n def _convert_source_mapping(offset, slither):\n '''\n Convert a text offset to a real offset\n see https://solidity.readthedocs.io/en/develop/miscellaneous.html#source-mappings\n Returns:\n (dict): {'start':0, 'length':0, 'filename': 'file.sol'}\n '''\n sourceUnits = slither.source_units\n\n position = re.findall('([0-9]*):([0-9]*):([-]?[0-9]*)', offset)\n if len(position) != 1:\n return {}\n\n s, l, f = position[0]\n s = int(s)\n l = int(l)\n f = int(f)\n\n if f not in sourceUnits:\n return {'start':s, 'length':l}\n filename_used = sourceUnits[f]\n filename_absolute = None\n filename_relative = None\n filename_short = None\n\n lines = []\n\n # If possible, convert the filename to its absolute/relative version\n if slither.crytic_compile:\n filenames = slither.crytic_compile.filename_lookup(filename_used)\n filename_absolute = filenames.absolute\n filename_relative = filenames.relative\n filename_short = filenames.short\n\n if filename_absolute in slither.source_code:\n filename = filename_absolute\n elif filename_relative in slither.source_code:\n filename = filename_relative\n elif filename_short in slither.source_code:\n filename = filename_short\n else:#\n filename = filename_used.used\n else:\n filename = filename_used\n\n if filename in slither.source_code:\n source_code = slither.source_code[filename]\n (lines, starting_column, ending_column) = SourceMapping._compute_line(source_code,\n s,\n l)\n else:\n (lines, starting_column, ending_column) = ([], None, None)\n\n\n return {'start':s,\n 'length':l,\n 'filename_used': filename_used,\n 'filename_relative': filename_relative,\n 'filename_absolute': filename_absolute,\n 'filename_short': filename_short,\n 'lines' : lines,\n 'starting_column': starting_column,\n 'ending_column': ending_column\n }\n\n def set_offset(self, offset, slither):\n if isinstance(offset, dict):\n self._source_mapping = offset\n else:\n self._source_mapping = self._convert_source_mapping(offset, slither)\n\n\n @property\n def source_mapping_str(self):\n\n lines = self.source_mapping.get('lines', None)\n if not lines:\n lines = ''\n elif len(lines) == 1:\n lines = '#{}'.format(lines[0])\n else:\n lines = '#{}-{}'.format(lines[0], lines[-1])\n return '{}{}'.format(self.source_mapping['filename_short'], lines)\n\n", "path": "slither/core/source_mapping/source_mapping.py"}], "after_files": [{"content": "import re\nimport os\nfrom slither.core.context.context import Context\n\nclass SourceMapping(Context):\n\n def __init__(self):\n super(SourceMapping, self).__init__()\n self._source_mapping = 
None\n\n @property\n def source_mapping(self):\n return self._source_mapping\n\n @staticmethod\n def _compute_line(source_code, start, length):\n \"\"\"\n Compute line(s) numbers and starting/ending columns\n from a start/end offset. All numbers start from 1.\n\n Not done in an efficient way\n \"\"\"\n source_code = source_code.encode('utf-8')\n total_length = len(source_code)\n source_code = source_code.splitlines(True)\n counter = 0\n i = 0\n lines = []\n starting_column = None\n ending_column = None\n while counter < total_length:\n # Determine the length of the line, and advance the line number\n line_content = source_code[i]\n line_length = len(line_content)\n i = i + 1\n\n # Determine our column numbers.\n if starting_column is None and counter + line_length > start:\n starting_column = (start - counter) + 1\n if starting_column is not None and ending_column is None and counter + line_length > start + length:\n ending_column = ((start + length) - counter) + 1\n\n # Advance the current position counter, and determine line numbers.\n counter += line_length\n if counter > start:\n lines.append(i)\n\n # If our advanced position for the next line is out of range, stop.\n if counter > start + length:\n break\n\n return (lines, starting_column, ending_column)\n\n @staticmethod\n def _convert_source_mapping(offset, slither):\n '''\n Convert a text offset to a real offset\n see https://solidity.readthedocs.io/en/develop/miscellaneous.html#source-mappings\n Returns:\n (dict): {'start':0, 'length':0, 'filename': 'file.sol'}\n '''\n sourceUnits = slither.source_units\n\n position = re.findall('([0-9]*):([0-9]*):([-]?[0-9]*)', offset)\n if len(position) != 1:\n return {}\n\n s, l, f = position[0]\n s = int(s)\n l = int(l)\n f = int(f)\n\n if f not in sourceUnits:\n return {'start':s, 'length':l}\n filename_used = sourceUnits[f]\n filename_absolute = None\n filename_relative = None\n filename_short = None\n\n lines = []\n\n # If possible, convert the filename to its absolute/relative version\n if slither.crytic_compile:\n filenames = slither.crytic_compile.filename_lookup(filename_used)\n filename_absolute = filenames.absolute\n filename_relative = filenames.relative\n filename_short = filenames.short\n\n if filename_absolute in slither.source_code:\n filename = filename_absolute\n elif filename_relative in slither.source_code:\n filename = filename_relative\n elif filename_short in slither.source_code:\n filename = filename_short\n else:#\n filename = filename_used.used\n else:\n filename = filename_used\n\n if filename in slither.source_code:\n source_code = slither.source_code[filename]\n (lines, starting_column, ending_column) = SourceMapping._compute_line(source_code,\n s,\n l)\n else:\n (lines, starting_column, ending_column) = ([], None, None)\n\n\n return {'start':s,\n 'length':l,\n 'filename_used': filename_used,\n 'filename_relative': filename_relative,\n 'filename_absolute': filename_absolute,\n 'filename_short': filename_short,\n 'lines' : lines,\n 'starting_column': starting_column,\n 'ending_column': ending_column\n }\n\n def set_offset(self, offset, slither):\n if isinstance(offset, dict):\n self._source_mapping = offset\n else:\n self._source_mapping = self._convert_source_mapping(offset, slither)\n\n\n @property\n def source_mapping_str(self):\n\n lines = self.source_mapping.get('lines', None)\n if not lines:\n lines = ''\n elif len(lines) == 1:\n lines = '#{}'.format(lines[0])\n else:\n lines = '#{}-{}'.format(lines[0], lines[-1])\n return 
'{}{}'.format(self.source_mapping['filename_short'], lines)\n\n", "path": "slither/core/source_mapping/source_mapping.py"}]} | 1,986 | 332 |
gh_patches_debug_30152 | rasdani/github-patches | git_diff | wagtail__wagtail-9973 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setting WAGTAILIMAGES_RENDITION_STORAGE generates a migration in wagtailimages
### Issue Summary
Running `./manage.py makemigrations` while WAGTAILIMAGES_RENDITION_STORAGE is set to something other than the default storage causes a migration to be generated within the wagtailimages app
### Steps to Reproduce
1. (for example) Start a new project with `wagtail start myproject`
2. Run `./manage.py migrate` and `./manage.py makemigrations`; this outputs "No changes detected"
3. `pip install django-storages`
4. Add the line `WAGTAILIMAGES_RENDITION_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"` to myproject/settings/base.py
5. Run `./manage.py makemigrations`; this generates a migration `wagtail/images/migrations/0026_alter_rendition_file.py` that adds a `storage` argument to the Rendition.file field.
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.8.0
- Django version: 4.1.3
- Wagtail version: main (4.2a0, 4b770784ca68f22d5ea58ecbd01e5c8c13882a3d)
--- END ISSUE ---
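Before the code segments below, a short sketch of how `makemigrations` "sees" the change: it deconstructs each field and compares the resulting keyword arguments against the historical migration state, so the question is whether a `storage` kwarg shows up in the deconstruction of `Rendition.file`. This assumes a Django shell in a project with `wagtail.images` installed and the setting from step 4 applied.

```python
# Run inside `./manage.py shell` with WAGTAILIMAGES_RENDITION_STORAGE set.
from wagtail.images.models import Rendition

name, path, args, kwargs = Rendition._meta.get_field("file").deconstruct()
print(kwargs.get("storage"))  # a non-empty value here, absent from migration 0025,
                              # is what triggers the spurious 0026 migration
```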
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py`
Content:
```
1 # Generated by Django 4.0.7 on 2022-08-10 16:26
2
3 from django.db import migrations
4 import wagtail.images.models
5
6
7 class Migration(migrations.Migration):
8
9 dependencies = [
10 ("wagtailimages", "0024_index_image_file_hash"),
11 ]
12
13 operations = [
14 migrations.AlterField(
15 model_name="image",
16 name="file",
17 field=wagtail.images.models.WagtailImageField(
18 height_field="height",
19 upload_to=wagtail.images.models.get_upload_to,
20 verbose_name="file",
21 width_field="width",
22 ),
23 ),
24 migrations.AlterField(
25 model_name="rendition",
26 name="file",
27 field=wagtail.images.models.WagtailImageField(
28 height_field="height",
29 upload_to=wagtail.images.models.get_rendition_upload_to,
30 width_field="width",
31 ),
32 ),
33 ]
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py
--- a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py
+++ b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py
@@ -1,5 +1,6 @@
# Generated by Django 4.0.7 on 2022-08-10 16:26
+from django import VERSION as DJANGO_VERSION
from django.db import migrations
import wagtail.images.models
@@ -10,6 +11,19 @@
("wagtailimages", "0024_index_image_file_hash"),
]
+ rendition_file_options = {
+ "height_field": "height",
+ "upload_to": wagtail.images.models.get_rendition_upload_to,
+ "width_field": "width",
+ }
+ # See https://code.djangoproject.com/ticket/34192 - prior to Django 4.2, a callable storage
+ # argument that returns default_storage would be incorrectly omitted from the deconstructed
+ # field. We need to match that behaviour and include/omit it accordingly to prevent
+ # makemigrations from seeing a difference and generating a spurious migration in
+ # wagtail.images.
+ if DJANGO_VERSION >= (4, 2):
+ rendition_file_options["storage"] = wagtail.images.models.get_rendition_storage
+
operations = [
migrations.AlterField(
model_name="image",
@@ -24,10 +38,6 @@
migrations.AlterField(
model_name="rendition",
name="file",
- field=wagtail.images.models.WagtailImageField(
- height_field="height",
- upload_to=wagtail.images.models.get_rendition_upload_to,
- width_field="width",
- ),
+ field=wagtail.images.models.WagtailImageField(**rendition_file_options),
),
]
| {"golden_diff": "diff --git a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py\n--- a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py\n+++ b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py\n@@ -1,5 +1,6 @@\n # Generated by Django 4.0.7 on 2022-08-10 16:26\r\n \r\n+from django import VERSION as DJANGO_VERSION\r\n from django.db import migrations\r\n import wagtail.images.models\r\n \r\n@@ -10,6 +11,19 @@\n (\"wagtailimages\", \"0024_index_image_file_hash\"),\r\n ]\r\n \r\n+ rendition_file_options = {\r\n+ \"height_field\": \"height\",\r\n+ \"upload_to\": wagtail.images.models.get_rendition_upload_to,\r\n+ \"width_field\": \"width\",\r\n+ }\r\n+ # See https://code.djangoproject.com/ticket/34192 - prior to Django 4.2, a callable storage\r\n+ # argument that returns default_storage would be incorrectly omitted from the deconstructed\r\n+ # field. We need to match that behaviour and include/omit it accordingly to prevent\r\n+ # makemigrations from seeing a difference and generating a spurious migration in\r\n+ # wagtail.images.\r\n+ if DJANGO_VERSION >= (4, 2):\r\n+ rendition_file_options[\"storage\"] = wagtail.images.models.get_rendition_storage\r\n+\r\n operations = [\r\n migrations.AlterField(\r\n model_name=\"image\",\r\n@@ -24,10 +38,6 @@\n migrations.AlterField(\r\n model_name=\"rendition\",\r\n name=\"file\",\r\n- field=wagtail.images.models.WagtailImageField(\r\n- height_field=\"height\",\r\n- upload_to=wagtail.images.models.get_rendition_upload_to,\r\n- width_field=\"width\",\r\n- ),\r\n+ field=wagtail.images.models.WagtailImageField(**rendition_file_options),\r\n ),\r\n ]\n", "issue": "Setting WAGTAILIMAGES_RENDITION_STORAGE generates a migration in wagtailimages\n### Issue Summary\r\n\r\nRunning `./manage.py makemigrations` while WAGTAILIMAGES_RENDITION_STORAGE is set to something other than the default storage causes a migration to be generated within the wagtailimages app\r\n\r\n### Steps to Reproduce\r\n\r\n1. (for example) Start a new project with `wagtail start myproject`\r\n2. Run `./manage.py migrate` and `./manage.py makemigrations`; this outputs \"No changes detected\"\r\n3. `pip install django-storages`\r\n4. Add the line `WAGTAILIMAGES_RENDITION_STORAGE = \"storages.backends.s3boto3.S3Boto3Storage\"` to myproject/settings/base.py\r\n5. 
Run `./manage.py makemigrations`; this generates a migration `wagtail/images/migrations/0026_alter_rendition_file.py` that adds a `storage` argument to the Rendition.file field.\r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n\r\n### Technical details\r\n\r\n- Python version: 3.8.0\r\n- Django version: 4.1.3\r\n- Wagtail version: main (4.2a0, 4b770784ca68f22d5ea58ecbd01e5c8c13882a3d)\r\n\n", "before_files": [{"content": "# Generated by Django 4.0.7 on 2022-08-10 16:26\r\n\r\nfrom django.db import migrations\r\nimport wagtail.images.models\r\n\r\n\r\nclass Migration(migrations.Migration):\r\n\r\n dependencies = [\r\n (\"wagtailimages\", \"0024_index_image_file_hash\"),\r\n ]\r\n\r\n operations = [\r\n migrations.AlterField(\r\n model_name=\"image\",\r\n name=\"file\",\r\n field=wagtail.images.models.WagtailImageField(\r\n height_field=\"height\",\r\n upload_to=wagtail.images.models.get_upload_to,\r\n verbose_name=\"file\",\r\n width_field=\"width\",\r\n ),\r\n ),\r\n migrations.AlterField(\r\n model_name=\"rendition\",\r\n name=\"file\",\r\n field=wagtail.images.models.WagtailImageField(\r\n height_field=\"height\",\r\n upload_to=wagtail.images.models.get_rendition_upload_to,\r\n width_field=\"width\",\r\n ),\r\n ),\r\n ]\r\n", "path": "wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py"}], "after_files": [{"content": "# Generated by Django 4.0.7 on 2022-08-10 16:26\r\n\r\nfrom django import VERSION as DJANGO_VERSION\r\nfrom django.db import migrations\r\nimport wagtail.images.models\r\n\r\n\r\nclass Migration(migrations.Migration):\r\n\r\n dependencies = [\r\n (\"wagtailimages\", \"0024_index_image_file_hash\"),\r\n ]\r\n\r\n rendition_file_options = {\r\n \"height_field\": \"height\",\r\n \"upload_to\": wagtail.images.models.get_rendition_upload_to,\r\n \"width_field\": \"width\",\r\n }\r\n # See https://code.djangoproject.com/ticket/34192 - prior to Django 4.2, a callable storage\r\n # argument that returns default_storage would be incorrectly omitted from the deconstructed\r\n # field. We need to match that behaviour and include/omit it accordingly to prevent\r\n # makemigrations from seeing a difference and generating a spurious migration in\r\n # wagtail.images.\r\n if DJANGO_VERSION >= (4, 2):\r\n rendition_file_options[\"storage\"] = wagtail.images.models.get_rendition_storage\r\n\r\n operations = [\r\n migrations.AlterField(\r\n model_name=\"image\",\r\n name=\"file\",\r\n field=wagtail.images.models.WagtailImageField(\r\n height_field=\"height\",\r\n upload_to=wagtail.images.models.get_upload_to,\r\n verbose_name=\"file\",\r\n width_field=\"width\",\r\n ),\r\n ),\r\n migrations.AlterField(\r\n model_name=\"rendition\",\r\n name=\"file\",\r\n field=wagtail.images.models.WagtailImageField(**rendition_file_options),\r\n ),\r\n ]\r\n", "path": "wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py"}]} | 857 | 482 |
gh_patches_debug_30208 | rasdani/github-patches | git_diff | microsoft__DeepSpeed-3348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Size of saved model checkpoint becomes much larger after deepspeed.initialize when using ZeRO-2
**Describe the bug**
Originally reported [here](https://github.com/huggingface/transformers/issues/22822). @stas00 @tjruwase
For some models, the size of model checkpoints saved by `model.save_pretrained()` becomes much larger after calling `deepspeed.initialize`. See the examples below.
**To Reproduce**
```python
from transformers import AutoModelForCausalLM
import deepspeed
ds_config = {
"optimizer": {
"type": "AdamW",
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": True
},
"allgather_partitions": True,
"allgather_bucket_size": 2e8,
"overlap_comm": True,
"reduce_scatter": True,
"reduce_bucket_size": 2e8,
"contiguous_gradients": True
},
"offload_optimizer": {
"device": "cpu",
"pin_memory": True
},
"train_batch_size": 1,
"train_micro_batch_size_per_gpu": 1
}
model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
model.save_pretrained("before")
deepspeed_engine, _, _, _ = deepspeed.initialize(model=model, config_params=ds_config)
deepspeed_engine.module.save_pretrained("after")
```
File sizes:
```bash
du -a -h --max-depth=1 before/
512 before/config.json
32K before/pytorch_model.bin.index.json
9.2G before/pytorch_model-00001-of-00003.bin
9.3G before/pytorch_model-00002-of-00003.bin
6.7G before/pytorch_model-00003-of-00003.bin
512 before/generation_config.json
26G before/
du -a -h --max-depth=1 after/
512 after/config.json
32K after/pytorch_model.bin.index.json
26G after/pytorch_model-00001-of-00003.bin
26G after/pytorch_model-00002-of-00003.bin
26G after/pytorch_model-00003-of-00003.bin
512 after/generation_config.json
76G after/
```
This issue does not always occur; for example, `gpt2` does not have this problem, but `decapoda-research/llama-7b-hf` and `decapoda-research/llama-13b-hf`, which I also tested, do have this issue.
This can be fixed by re-cloning the state tensors before saving:
```python
state_dict = deepspeed_engine.module.state_dict()
state_dict = type(state_dict)(
{k: v.clone()
for k,
v in state_dict.items()})
deepspeed_engine.module.save_pretrained("after_fixed", state_dict=state_dict)
```
**Expected behavior**
The saved model size should be unchanged after `deepspeed.initialize`
**System info (please complete the following information):**
- deepspeed: 0.8.3
- transformers version: 4.28.0.dev0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
--- END ISSUE ---
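Before the code segments below, a standalone demonstration of the storage-sharing behaviour that the cloning workaround above sidesteps: `torch.save` serializes a tensor's entire underlying storage, so a small view of a large flattened buffer (as ZeRO-2 allocates) produces a file the size of the whole buffer unless the tensor is cloned first. Only PyTorch is assumed.

```python
import os
import torch

flat_buffer = torch.zeros(10_000_000)   # stand-in for a ZeRO-2 flattened buffer
small_view = flat_buffer[:1000]         # a parameter-sized slice sharing that storage

torch.save(small_view, "view.pt")
torch.save(small_view.clone(), "clone.pt")

print(os.path.getsize("view.pt"))   # ~40 MB: the whole storage is written
print(os.path.getsize("clone.pt"))  # ~4 KB: only the cloned slice is written
```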
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepspeed/checkpoint/utils.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # SPDX-License-Identifier: Apache-2.0
3
4 # DeepSpeed Team
5
6 import os
7 from .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)
8
9
10 def get_model_ckpt_name_for_rank(base_folder, mp_rank_str):
11 ckpt_name = os.path.join(
12 base_folder,
13 MODEL_FILE_PREFIX + mp_rank_str + MODEL_FILE_SUFFIX,
14 )
15 return ckpt_name
16
17
18 def get_zero_ckpt_name_for_rank(base_folder, dp_rank, mp_rank):
19 zero_prefix = f'{ZERO_FILE_PREFIX}{dp_rank}'
20 mp_rank_string = f'_{MODEL_FILE_PREFIX}{mp_rank:02d}'
21 zero_ckpt_name = os.path.join(
22 base_folder,
23 zero_prefix + mp_rank_string + OPTIM_FILE_SUFFIX,
24 )
25 return zero_ckpt_name
26
27
28 def get_layer_ckpt_name_for_rank(base_folder, layer_id, tp_rank):
29 ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'
30 ckpt_path = os.path.join(base_folder, ckpt_file)
31 return ckpt_path
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deepspeed/checkpoint/utils.py b/deepspeed/checkpoint/utils.py
--- a/deepspeed/checkpoint/utils.py
+++ b/deepspeed/checkpoint/utils.py
@@ -4,6 +4,7 @@
# DeepSpeed Team
import os
+import torch
from .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)
@@ -29,3 +30,33 @@
ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'
ckpt_path = os.path.join(base_folder, ckpt_file)
return ckpt_path
+
+
+# We pass cloned tensors to torch.save() to avoid checkpoint bloat that occurs when torch.save()
+# saves the underlying storage rather than the slice of the storage corresponding to individual tensors.
+# This is a problem in DeepSpeed because we often allocate tensors using slices of large flattened buffers.
+# Tensor cloning helps to avoid this problem because the storage of cloned tensors are closer to the true size.
+# It is expected that the garbage collector will reclaim the cloned tensor storage to avoid memory bloat.
+# See https://pytorch.org/docs/stable/notes/serialization.html#preserve-storage-sharing
+def clone_tensors_for_torch_save(item, device=torch.device('cpu')):
+ """
+ Returns a copy of ``item`` with all enclosed tensors replaced by clones on a specified device.
+ Works on individual tensors, and tensors contained/nested in lists, tuples, and dicts.
+
+ Parameters:
+ - ``item``: tensor to clone or (possibly nested) container of tensors to clone.
+ - ``device``: target device (defaults to 'cpu')
+
+ Returns:
+ - copy of ``item`` with cloned tensors on target device
+ """
+ if torch.is_tensor(item):
+ return item.detach().clone().to(device)
+ elif isinstance(item, list):
+ return [clone_tensors_for_torch_save(v, device) for v in item]
+ elif isinstance(item, tuple):
+ return tuple([clone_tensors_for_torch_save(v, device) for v in item])
+ elif isinstance(item, dict):
+ return type(item)({k: clone_tensors_for_torch_save(v, device) for k, v in item.items()})
+ else:
+ return item
| {"golden_diff": "diff --git a/deepspeed/checkpoint/utils.py b/deepspeed/checkpoint/utils.py\n--- a/deepspeed/checkpoint/utils.py\n+++ b/deepspeed/checkpoint/utils.py\n@@ -4,6 +4,7 @@\n # DeepSpeed Team\n \n import os\n+import torch\n from .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)\n \n \n@@ -29,3 +30,33 @@\n ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'\n ckpt_path = os.path.join(base_folder, ckpt_file)\n return ckpt_path\n+\n+\n+# We pass cloned tensors to torch.save() to avoid checkpoint bloat that occurs when torch.save()\n+# saves the underlying storage rather than the slice of the storage corresponding to individual tensors.\n+# This is a problem in DeepSpeed because we often allocate tensors using slices of large flattened buffers.\n+# Tensor cloning helps to avoid this problem because the storage of cloned tensors are closer to the true size.\n+# It is expected that the garbage collector will reclaim the cloned tensor storage to avoid memory bloat.\n+# See https://pytorch.org/docs/stable/notes/serialization.html#preserve-storage-sharing\n+def clone_tensors_for_torch_save(item, device=torch.device('cpu')):\n+ \"\"\"\n+ Returns a copy of ``item`` with all enclosed tensors replaced by clones on a specified device.\n+ Works on individual tensors, and tensors contained/nested in lists, tuples, and dicts.\n+\n+ Parameters:\n+ - ``item``: tensor to clone or (possibly nested) container of tensors to clone.\n+ - ``device``: target device (defaults to 'cpu')\n+\n+ Returns:\n+ - copy of ``item`` with cloned tensors on target device\n+ \"\"\"\n+ if torch.is_tensor(item):\n+ return item.detach().clone().to(device)\n+ elif isinstance(item, list):\n+ return [clone_tensors_for_torch_save(v, device) for v in item]\n+ elif isinstance(item, tuple):\n+ return tuple([clone_tensors_for_torch_save(v, device) for v in item])\n+ elif isinstance(item, dict):\n+ return type(item)({k: clone_tensors_for_torch_save(v, device) for k, v in item.items()})\n+ else:\n+ return item\n", "issue": "[BUG] Size of saved model checkpoint becomes much larger after deepspeed.initialize when using ZeRO-2\n**Describe the bug**\r\nOriginally reported [here](https://github.com/huggingface/transformers/issues/22822). @stas00 @tjruwase\r\n\r\nFor some models, the size of model checkpoints saved by `model.save_prtrained()` becomes much larger after calling `deepspeed.initialize`. 
See examples below.\r\n\r\n\r\n**To Reproduce**\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nimport deepspeed\r\n\r\nds_config = {\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": True\r\n },\r\n \"allgather_partitions\": True,\r\n \"allgather_bucket_size\": 2e8,\r\n \"overlap_comm\": True,\r\n \"reduce_scatter\": True,\r\n \"reduce_bucket_size\": 2e8,\r\n \"contiguous_gradients\": True\r\n },\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": True\r\n },\r\n \"train_batch_size\": 1,\r\n \"train_micro_batch_size_per_gpu\": 1\r\n}\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"decapoda-research/llama-7b-hf\")\r\nmodel.save_pretrained(\"before\")\r\ndeepspeed_engine, _, _, _ = deepspeed.initialize(model=model, config_params=ds_config)\r\ndeepspeed_engine.module.save_pretrained(\"after\")\r\n```\r\n\r\nFile sizes:\r\n\r\n```bash\r\ndu -a -h --max-depth=1 before/\r\n512 before/config.json\r\n32K before/pytorch_model.bin.index.json\r\n9.2G before/pytorch_model-00001-of-00003.bin\r\n9.3G before/pytorch_model-00002-of-00003.bin\r\n6.7G before/pytorch_model-00003-of-00003.bin\r\n512 before/generation_config.json\r\n26G before/\r\n\r\ndu -a -h --max-depth=1 after/\r\n512 after/config.json\r\n32K after/pytorch_model.bin.index.json\r\n26G after/pytorch_model-00001-of-00003.bin\r\n26G after/pytorch_model-00002-of-00003.bin\r\n26G after/pytorch_model-00003-of-00003.bin\r\n512 after/generation_config.json\r\n76G after/\r\n```\r\n\r\nThis issue is not always occurred, for example, `gpt2` does not have this problem. But I tested `decapoda-research/llama-7b-hf`, and `decapoda-research/llama-13b-hf` have this issue.\r\n\r\nThis can be fixed by re-clone states before the saving:\r\n```python\r\nstate_dict = deepspeed_engine.module.state_dict()\r\nstate_dict = type(state_dict)(\r\n {k: v.clone()\r\n for k,\r\n v in state_dict.items()})\r\ndeepspeed_engine.module.save_pretrained(\"after_fixed\", state_dict=state_dict)\r\n```\r\n\r\n**Expected behavior**\r\nThe saved model size should be unchanged after `deepspeed.initialize`\r\n\r\n**System info (please complete the following information):**\r\n- deepspeed: 0.8.3\r\n- transformers version: 4.28.0.dev0\r\n- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.13.3\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 1.12.1+cu116 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?: yes\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# SPDX-License-Identifier: Apache-2.0\n\n# DeepSpeed Team\n\nimport os\nfrom .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)\n\n\ndef get_model_ckpt_name_for_rank(base_folder, mp_rank_str):\n ckpt_name = os.path.join(\n base_folder,\n MODEL_FILE_PREFIX + mp_rank_str + MODEL_FILE_SUFFIX,\n )\n return ckpt_name\n\n\ndef get_zero_ckpt_name_for_rank(base_folder, dp_rank, mp_rank):\n zero_prefix = f'{ZERO_FILE_PREFIX}{dp_rank}'\n mp_rank_string = f'_{MODEL_FILE_PREFIX}{mp_rank:02d}'\n zero_ckpt_name = os.path.join(\n base_folder,\n zero_prefix + mp_rank_string + OPTIM_FILE_SUFFIX,\n )\n 
return zero_ckpt_name\n\n\ndef get_layer_ckpt_name_for_rank(base_folder, layer_id, tp_rank):\n ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'\n ckpt_path = os.path.join(base_folder, ckpt_file)\n return ckpt_path\n", "path": "deepspeed/checkpoint/utils.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# SPDX-License-Identifier: Apache-2.0\n\n# DeepSpeed Team\n\nimport os\nimport torch\nfrom .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)\n\n\ndef get_model_ckpt_name_for_rank(base_folder, mp_rank_str):\n ckpt_name = os.path.join(\n base_folder,\n MODEL_FILE_PREFIX + mp_rank_str + MODEL_FILE_SUFFIX,\n )\n return ckpt_name\n\n\ndef get_zero_ckpt_name_for_rank(base_folder, dp_rank, mp_rank):\n zero_prefix = f'{ZERO_FILE_PREFIX}{dp_rank}'\n mp_rank_string = f'_{MODEL_FILE_PREFIX}{mp_rank:02d}'\n zero_ckpt_name = os.path.join(\n base_folder,\n zero_prefix + mp_rank_string + OPTIM_FILE_SUFFIX,\n )\n return zero_ckpt_name\n\n\ndef get_layer_ckpt_name_for_rank(base_folder, layer_id, tp_rank):\n ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'\n ckpt_path = os.path.join(base_folder, ckpt_file)\n return ckpt_path\n\n\n# We pass cloned tensors to torch.save() to avoid checkpoint bloat that occurs when torch.save()\n# saves the underlying storage rather than the slice of the storage corresponding to individual tensors.\n# This is a problem in DeepSpeed because we often allocate tensors using slices of large flattened buffers.\n# Tensor cloning helps to avoid this problem because the storage of cloned tensors are closer to the true size.\n# It is expected that the garbage collector will reclaim the cloned tensor storage to avoid memory bloat.\n# See https://pytorch.org/docs/stable/notes/serialization.html#preserve-storage-sharing\ndef clone_tensors_for_torch_save(item, device=torch.device('cpu')):\n \"\"\"\n Returns a copy of ``item`` with all enclosed tensors replaced by clones on a specified device.\n Works on individual tensors, and tensors contained/nested in lists, tuples, and dicts.\n\n Parameters:\n - ``item``: tensor to clone or (possibly nested) container of tensors to clone.\n - ``device``: target device (defaults to 'cpu')\n\n Returns:\n - copy of ``item`` with cloned tensors on target device\n \"\"\"\n if torch.is_tensor(item):\n return item.detach().clone().to(device)\n elif isinstance(item, list):\n return [clone_tensors_for_torch_save(v, device) for v in item]\n elif isinstance(item, tuple):\n return tuple([clone_tensors_for_torch_save(v, device) for v in item])\n elif isinstance(item, dict):\n return type(item)({k: clone_tensors_for_torch_save(v, device) for k, v in item.items()})\n else:\n return item\n", "path": "deepspeed/checkpoint/utils.py"}]} | 1,516 | 504 |
gh_patches_debug_7862 | rasdani/github-patches | git_diff | coala__coala-bears-2136 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set setup.py url = http://coala.io/
difficulty/newcomer
Opened by @jayvdb at [Gitter](https://gitter.im/coala/coala?at=5a1181aff257ad9109b396a0)
--- END ISSUE ---
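For context, the requested change amounts to editing the `url` keyword passed to `setuptools.setup()` in the file shown below. A minimal, trimmed sketch of that call (only the relevant fields are kept; every other argument in the real `setup.py` stays as is):

```python
from setuptools import setup

setup(
    name='coala-bears',
    url='http://coala.io/',  # previously 'http://coala.rtfd.org/'
    # ...remaining metadata unchanged...
)
```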
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import locale
4 import sys
5 from subprocess import call
6
7 import setuptools.command.build_py
8 from bears import Constants
9 from setuptools import find_packages, setup
10 from setuptools.command.test import test as TestCommand
11
12 try:
13 locale.getlocale()
14 except (ValueError, UnicodeError):
15 locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
16
17
18 class PyTestCommand(TestCommand):
19
20 def run_tests(self):
21 # import here, cause outside the eggs aren't loaded
22 import pytest
23 errno = pytest.main([])
24 sys.exit(errno)
25
26
27 class BuildDocsCommand(setuptools.command.build_py.build_py):
28 apidoc_command = ('sphinx-apidoc', '-f', '-o', 'docs/API',
29 'bears')
30 make_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')
31
32 def run(self):
33 err_no = call(self.apidoc_command)
34 if not err_no:
35 err_no = call(self.make_command)
36 sys.exit(err_no)
37
38
39 with open('requirements.txt') as requirements:
40 required = requirements.read().splitlines()
41 required.remove('-r bear-requirements.txt')
42
43 with open('bear-requirements.txt') as requirements:
44 bear_required = requirements.read().splitlines()
45
46 with open('test-requirements.txt') as requirements:
47 test_required = requirements.read().splitlines()
48
49 with open('ignore.txt') as ignore:
50 ignore_requirements = ignore.read().splitlines()
51
52 with open('README.rst') as readme:
53 long_description = readme.read()
54
55 extras_require = {
56 'alldeps': bear_required,
57 }
58
59 # For the average user we leave out some of the more complicated requirements,
60 # e.g. language-check (needs java).
61 required += [req for req in bear_required
62 if not any(req.startswith(ignore)
63 for ignore in ignore_requirements)]
64
65
66 if __name__ == '__main__':
67 setup(name='coala-bears',
68 version=Constants.VERSION,
69 description='Bears for coala (Code Analysis Application)',
70 author='The coala developers',
71 maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\xfcger',
72 maintainer_email=('[email protected], '
73 '[email protected], '
74 '[email protected]'),
75 url='http://coala.rtfd.org/',
76 platforms='any',
77 packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),
78 install_requires=required,
79 extras_require=extras_require,
80 tests_require=test_required,
81 package_data={'bears': ['VERSION'],
82 'bears.java': ['checkstyle.jar', 'google_checks.xml'],
83 'bears.scala': ['scalastyle.jar',
84 'scalastyle_config.xml']},
85 license='AGPL-3.0',
86 long_description=long_description,
87 entry_points={'coalabears': ['coala_official_bears = bears']},
88 # from http://pypi.python.org/pypi?%3Aaction=list_classifiers
89 classifiers=[
90 'Development Status :: 4 - Beta',
91
92 'Environment :: Plugins',
93 'Environment :: MacOS X',
94 'Environment :: Win32 (MS Windows)',
95 'Environment :: X11 Applications :: Gnome',
96
97 'Intended Audience :: Science/Research',
98 'Intended Audience :: Developers',
99
100 'License :: OSI Approved :: GNU Affero General Public License '
101 'v3 or later (AGPLv3+)',
102
103 'Operating System :: OS Independent',
104
105 'Programming Language :: Python :: Implementation :: CPython',
106 'Programming Language :: Python :: 3.4',
107 'Programming Language :: Python :: 3.5',
108 'Programming Language :: Python :: 3 :: Only',
109
110 'Topic :: Scientific/Engineering :: Information Analysis',
111 'Topic :: Software Development :: Quality Assurance',
112 'Topic :: Text Processing :: Linguistic'],
113 cmdclass={'docs': BuildDocsCommand,
114 'test': PyTestCommand})
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,7 @@
maintainer_email=('[email protected], '
'[email protected], '
'[email protected]'),
- url='http://coala.rtfd.org/',
+ url='http://coala.io/',
platforms='any',
packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),
install_requires=required,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,7 +72,7 @@\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n- url='http://coala.rtfd.org/',\n+ url='http://coala.io/',\n platforms='any',\n packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),\n install_requires=required,\n", "issue": "Set setup.py url = http://coala.io/\ndifficulty/newcomer\nOpened by @jayvdb at [Gitter](https://gitter.im/coala/coala?at=5a1181aff257ad9109b396a0)\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport locale\nimport sys\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom bears import Constants\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\ntry:\n locale.getlocale()\nexcept (ValueError, UnicodeError):\n locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\n\nclass PyTestCommand(TestCommand):\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main([])\n sys.exit(errno)\n\n\nclass BuildDocsCommand(setuptools.command.build_py.build_py):\n apidoc_command = ('sphinx-apidoc', '-f', '-o', 'docs/API',\n 'bears')\n make_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')\n\n def run(self):\n err_no = call(self.apidoc_command)\n if not err_no:\n err_no = call(self.make_command)\n sys.exit(err_no)\n\n\nwith open('requirements.txt') as requirements:\n required = requirements.read().splitlines()\n required.remove('-r bear-requirements.txt')\n\nwith open('bear-requirements.txt') as requirements:\n bear_required = requirements.read().splitlines()\n\nwith open('test-requirements.txt') as requirements:\n test_required = requirements.read().splitlines()\n\nwith open('ignore.txt') as ignore:\n ignore_requirements = ignore.read().splitlines()\n\nwith open('README.rst') as readme:\n long_description = readme.read()\n\nextras_require = {\n 'alldeps': bear_required,\n}\n\n# For the average user we leave out some of the more complicated requirements,\n# e.g. 
language-check (needs java).\nrequired += [req for req in bear_required\n if not any(req.startswith(ignore)\n for ignore in ignore_requirements)]\n\n\nif __name__ == '__main__':\n setup(name='coala-bears',\n version=Constants.VERSION,\n description='Bears for coala (Code Analysis Application)',\n author='The coala developers',\n maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\\xfcger',\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n url='http://coala.rtfd.org/',\n platforms='any',\n packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),\n install_requires=required,\n extras_require=extras_require,\n tests_require=test_required,\n package_data={'bears': ['VERSION'],\n 'bears.java': ['checkstyle.jar', 'google_checks.xml'],\n 'bears.scala': ['scalastyle.jar',\n 'scalastyle_config.xml']},\n license='AGPL-3.0',\n long_description=long_description,\n entry_points={'coalabears': ['coala_official_bears = bears']},\n # from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n 'Development Status :: 4 - Beta',\n\n 'Environment :: Plugins',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications :: Gnome',\n\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n\n 'License :: OSI Approved :: GNU Affero General Public License '\n 'v3 or later (AGPLv3+)',\n\n 'Operating System :: OS Independent',\n\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3 :: Only',\n\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Text Processing :: Linguistic'],\n cmdclass={'docs': BuildDocsCommand,\n 'test': PyTestCommand})\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport locale\nimport sys\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom bears import Constants\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\ntry:\n locale.getlocale()\nexcept (ValueError, UnicodeError):\n locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\n\nclass PyTestCommand(TestCommand):\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main([])\n sys.exit(errno)\n\n\nclass BuildDocsCommand(setuptools.command.build_py.build_py):\n apidoc_command = ('sphinx-apidoc', '-f', '-o', 'docs/API',\n 'bears')\n make_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')\n\n def run(self):\n err_no = call(self.apidoc_command)\n if not err_no:\n err_no = call(self.make_command)\n sys.exit(err_no)\n\n\nwith open('requirements.txt') as requirements:\n required = requirements.read().splitlines()\n required.remove('-r bear-requirements.txt')\n\nwith open('bear-requirements.txt') as requirements:\n bear_required = requirements.read().splitlines()\n\nwith open('test-requirements.txt') as requirements:\n test_required = requirements.read().splitlines()\n\nwith open('ignore.txt') as ignore:\n ignore_requirements = ignore.read().splitlines()\n\nwith open('README.rst') as readme:\n long_description = readme.read()\n\nextras_require = {\n 'alldeps': bear_required,\n}\n\n# For the average user we leave out some of the more complicated requirements,\n# e.g. 
language-check (needs java).\nrequired += [req for req in bear_required\n if not any(req.startswith(ignore)\n for ignore in ignore_requirements)]\n\n\nif __name__ == '__main__':\n setup(name='coala-bears',\n version=Constants.VERSION,\n description='Bears for coala (Code Analysis Application)',\n author='The coala developers',\n maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\\xfcger',\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n url='http://coala.io/',\n platforms='any',\n packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),\n install_requires=required,\n extras_require=extras_require,\n tests_require=test_required,\n package_data={'bears': ['VERSION'],\n 'bears.java': ['checkstyle.jar', 'google_checks.xml'],\n 'bears.scala': ['scalastyle.jar',\n 'scalastyle_config.xml']},\n license='AGPL-3.0',\n long_description=long_description,\n entry_points={'coalabears': ['coala_official_bears = bears']},\n # from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n 'Development Status :: 4 - Beta',\n\n 'Environment :: Plugins',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications :: Gnome',\n\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n\n 'License :: OSI Approved :: GNU Affero General Public License '\n 'v3 or later (AGPLv3+)',\n\n 'Operating System :: OS Independent',\n\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3 :: Only',\n\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Text Processing :: Linguistic'],\n cmdclass={'docs': BuildDocsCommand,\n 'test': PyTestCommand})\n", "path": "setup.py"}]} | 1,424 | 116 |
gh_patches_debug_15199 | rasdani/github-patches | git_diff | qtile__qtile-3205 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CheckUpdates widget swallows crashes and shows as no updates
As per the title, it's not clear whether the check-update command is working, because any error in the command results in the widget treating it as no updates.
This makes debugging impossible.
--- END ISSUE ---
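To make the reported behaviour concrete: the widget runs its update command inside a try/except and maps any failure to an empty result. The sketch below reproduces that shape with plain `subprocess` calls; it is an illustration of the pattern, not the widget's actual code, and `count_updates` is a made-up name.

```python
from subprocess import CalledProcessError, check_output

def count_updates(cmd):
    """Return the number of output lines, treating any failure as zero."""
    try:
        output = check_output(cmd, shell=True, text=True)
    except CalledProcessError:
        output = ""  # a crashed command and "no updates" look identical here
    return len(output.splitlines())
```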
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/check_updates.py`
Content:
```
1 # Copyright (c) 2015 Ali Mousavi
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 # SOFTWARE.
20
21 import os
22 from subprocess import CalledProcessError, Popen
23
24 from libqtile.log_utils import logger
25 from libqtile.widget import base
26
27
28 class CheckUpdates(base.ThreadPoolText):
29 """Shows number of pending updates in different unix systems"""
30
31 defaults = [
32 ("distro", "Arch", "Name of your distribution"),
33 (
34 "custom_command",
35 None,
36 "Custom shell command for checking updates (counts the lines of the output)",
37 ),
38 (
39 "custom_command_modify",
40 (lambda x: x),
41 "Lambda function to modify line count from custom_command",
42 ),
43 ("update_interval", 60, "Update interval in seconds."),
44 ("execute", None, "Command to execute on click"),
45 ("display_format", "Updates: {updates}", "Display format if updates available"),
46 ("colour_no_updates", "ffffff", "Colour when there's no updates."),
47 ("colour_have_updates", "ffffff", "Colour when there are updates."),
48 ("restart_indicator", "", "Indicator to represent reboot is required. (Ubuntu only)"),
49 ("no_update_string", "", "String to display if no updates available"),
50 ]
51
52 def __init__(self, **config):
53 base.ThreadPoolText.__init__(self, "", **config)
54 self.add_defaults(CheckUpdates.defaults)
55
56 # Helpful to have this as a variable as we can shorten it for testing
57 self.execute_polling_interval = 1
58
59 # format: "Distro": ("cmd", "number of lines to subtract from output")
60 self.cmd_dict = {
61 "Arch": ("pacman -Qu", 0),
62 "Arch_checkupdates": ("checkupdates", 0),
63 "Arch_Sup": ("pacman -Sup", 0),
64 "Arch_paru": ("paru -Qu", 0),
65 "Arch_paru_Sup": ("paru -Sup", 0),
66 "Arch_yay": ("yay -Qu", 0),
67 "Debian": ("apt-show-versions -u -b", 0),
68 "Gentoo_eix": ("EIX_LIMIT=0 eix -u# --world", 0),
69 "Ubuntu": ("aptitude search ~U", 0),
70 "Fedora": ("dnf list updates -q", 1),
71 "FreeBSD": ("pkg_version -I -l '<'", 0),
72 "Mandriva": ("urpmq --auto-select", 0),
73 }
74
75 if self.custom_command:
76 # Use custom_command
77 self.cmd = self.custom_command
78
79 else:
80 # Check if distro name is valid.
81 try:
82 self.cmd = self.cmd_dict[self.distro][0]
83 self.custom_command_modify = lambda x: x - self.cmd_dict[self.distro][1]
84 except KeyError:
85 distros = sorted(self.cmd_dict.keys())
86 logger.error(
87 self.distro
88 + " is not a valid distro name. "
89 + "Use one of the list: "
90 + str(distros)
91 + "."
92 )
93 self.cmd = None
94
95 if self.execute:
96 self.add_callbacks({"Button1": self.do_execute})
97
98 def _check_updates(self):
99 # type: () -> str
100 try:
101 updates = self.call_process(self.cmd, shell=True)
102 except CalledProcessError:
103 updates = ""
104 num_updates = self.custom_command_modify(len(updates.splitlines()))
105
106 if num_updates < 0:
107 num_updates = 0
108 if num_updates == 0:
109 self.layout.colour = self.colour_no_updates
110 return self.no_update_string
111 num_updates = str(num_updates)
112
113 if self.restart_indicator and os.path.exists("/var/run/reboot-required"):
114 num_updates += self.restart_indicator
115
116 self.layout.colour = self.colour_have_updates
117 return self.display_format.format(**{"updates": num_updates})
118
119 def poll(self):
120 # type: () -> str
121 if not self.cmd:
122 return "N/A"
123 return self._check_updates()
124
125 def do_execute(self):
126 self._process = Popen(self.execute, shell=True)
127 self.timeout_add(self.execute_polling_interval, self._refresh_count)
128
129 def _refresh_count(self):
130 if self._process.poll() is None:
131 self.timeout_add(self.execute_polling_interval, self._refresh_count)
132
133 else:
134 self.timer_setup()
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/widget/check_updates.py b/libqtile/widget/check_updates.py
--- a/libqtile/widget/check_updates.py
+++ b/libqtile/widget/check_updates.py
@@ -26,7 +26,21 @@
class CheckUpdates(base.ThreadPoolText):
- """Shows number of pending updates in different unix systems"""
+ """
+ Shows number of pending updates in different unix systems.
+
+ .. note::
+
+ It is common for package managers to return a non-zero code when there are no
+ updates. As a result, the widget will treat *any* error as if there are no updates.
+ If you are using a custom commmand/script, you should therefore ensure that it
+ returns zero when it completes if you wish to see the output of your command.
+
+ In addition, as no errors are recorded to the log, if the widget is showing no
+ updates and you believe that to be incorrect, you should run the appropriate
+ command in a terminal to view any error messages.
+
+ """
defaults = [
("distro", "Arch", "Name of your distribution"),
| {"golden_diff": "diff --git a/libqtile/widget/check_updates.py b/libqtile/widget/check_updates.py\n--- a/libqtile/widget/check_updates.py\n+++ b/libqtile/widget/check_updates.py\n@@ -26,7 +26,21 @@\n \n \n class CheckUpdates(base.ThreadPoolText):\n- \"\"\"Shows number of pending updates in different unix systems\"\"\"\n+ \"\"\"\n+ Shows number of pending updates in different unix systems.\n+\n+ .. note::\n+\n+ It is common for package managers to return a non-zero code when there are no\n+ updates. As a result, the widget will treat *any* error as if there are no updates.\n+ If you are using a custom commmand/script, you should therefore ensure that it\n+ returns zero when it completes if you wish to see the output of your command.\n+\n+ In addition, as no errors are recorded to the log, if the widget is showing no\n+ updates and you believe that to be incorrect, you should run the appropriate\n+ command in a terminal to view any error messages.\n+\n+ \"\"\"\n \n defaults = [\n (\"distro\", \"Arch\", \"Name of your distribution\"),\n", "issue": "CheckUpdates widget swallows crashes and shows as no updates\nAs per title, it's not clear if the check update command is working as any error in the command results in the widget treating it as no updates. \r\n\r\nThis makes debugging impossible.\n", "before_files": [{"content": "# Copyright (c) 2015 Ali Mousavi\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport os\nfrom subprocess import CalledProcessError, Popen\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass CheckUpdates(base.ThreadPoolText):\n \"\"\"Shows number of pending updates in different unix systems\"\"\"\n\n defaults = [\n (\"distro\", \"Arch\", \"Name of your distribution\"),\n (\n \"custom_command\",\n None,\n \"Custom shell command for checking updates (counts the lines of the output)\",\n ),\n (\n \"custom_command_modify\",\n (lambda x: x),\n \"Lambda function to modify line count from custom_command\",\n ),\n (\"update_interval\", 60, \"Update interval in seconds.\"),\n (\"execute\", None, \"Command to execute on click\"),\n (\"display_format\", \"Updates: {updates}\", \"Display format if updates available\"),\n (\"colour_no_updates\", \"ffffff\", \"Colour when there's no updates.\"),\n (\"colour_have_updates\", \"ffffff\", \"Colour when there are updates.\"),\n (\"restart_indicator\", \"\", \"Indicator to represent reboot is required. 
(Ubuntu only)\"),\n (\"no_update_string\", \"\", \"String to display if no updates available\"),\n ]\n\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(CheckUpdates.defaults)\n\n # Helpful to have this as a variable as we can shorten it for testing\n self.execute_polling_interval = 1\n\n # format: \"Distro\": (\"cmd\", \"number of lines to subtract from output\")\n self.cmd_dict = {\n \"Arch\": (\"pacman -Qu\", 0),\n \"Arch_checkupdates\": (\"checkupdates\", 0),\n \"Arch_Sup\": (\"pacman -Sup\", 0),\n \"Arch_paru\": (\"paru -Qu\", 0),\n \"Arch_paru_Sup\": (\"paru -Sup\", 0),\n \"Arch_yay\": (\"yay -Qu\", 0),\n \"Debian\": (\"apt-show-versions -u -b\", 0),\n \"Gentoo_eix\": (\"EIX_LIMIT=0 eix -u# --world\", 0),\n \"Ubuntu\": (\"aptitude search ~U\", 0),\n \"Fedora\": (\"dnf list updates -q\", 1),\n \"FreeBSD\": (\"pkg_version -I -l '<'\", 0),\n \"Mandriva\": (\"urpmq --auto-select\", 0),\n }\n\n if self.custom_command:\n # Use custom_command\n self.cmd = self.custom_command\n\n else:\n # Check if distro name is valid.\n try:\n self.cmd = self.cmd_dict[self.distro][0]\n self.custom_command_modify = lambda x: x - self.cmd_dict[self.distro][1]\n except KeyError:\n distros = sorted(self.cmd_dict.keys())\n logger.error(\n self.distro\n + \" is not a valid distro name. \"\n + \"Use one of the list: \"\n + str(distros)\n + \".\"\n )\n self.cmd = None\n\n if self.execute:\n self.add_callbacks({\"Button1\": self.do_execute})\n\n def _check_updates(self):\n # type: () -> str\n try:\n updates = self.call_process(self.cmd, shell=True)\n except CalledProcessError:\n updates = \"\"\n num_updates = self.custom_command_modify(len(updates.splitlines()))\n\n if num_updates < 0:\n num_updates = 0\n if num_updates == 0:\n self.layout.colour = self.colour_no_updates\n return self.no_update_string\n num_updates = str(num_updates)\n\n if self.restart_indicator and os.path.exists(\"/var/run/reboot-required\"):\n num_updates += self.restart_indicator\n\n self.layout.colour = self.colour_have_updates\n return self.display_format.format(**{\"updates\": num_updates})\n\n def poll(self):\n # type: () -> str\n if not self.cmd:\n return \"N/A\"\n return self._check_updates()\n\n def do_execute(self):\n self._process = Popen(self.execute, shell=True)\n self.timeout_add(self.execute_polling_interval, self._refresh_count)\n\n def _refresh_count(self):\n if self._process.poll() is None:\n self.timeout_add(self.execute_polling_interval, self._refresh_count)\n\n else:\n self.timer_setup()\n", "path": "libqtile/widget/check_updates.py"}], "after_files": [{"content": "# Copyright (c) 2015 Ali Mousavi\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport os\nfrom subprocess import CalledProcessError, Popen\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass CheckUpdates(base.ThreadPoolText):\n \"\"\"\n Shows number of pending updates in different unix systems.\n\n .. note::\n\n It is common for package managers to return a non-zero code when there are no\n updates. As a result, the widget will treat *any* error as if there are no updates.\n If you are using a custom commmand/script, you should therefore ensure that it\n returns zero when it completes if you wish to see the output of your command.\n\n In addition, as no errors are recorded to the log, if the widget is showing no\n updates and you believe that to be incorrect, you should run the appropriate\n command in a terminal to view any error messages.\n\n \"\"\"\n\n defaults = [\n (\"distro\", \"Arch\", \"Name of your distribution\"),\n (\n \"custom_command\",\n None,\n \"Custom shell command for checking updates (counts the lines of the output)\",\n ),\n (\n \"custom_command_modify\",\n (lambda x: x),\n \"Lambda function to modify line count from custom_command\",\n ),\n (\"update_interval\", 60, \"Update interval in seconds.\"),\n (\"execute\", None, \"Command to execute on click\"),\n (\"display_format\", \"Updates: {updates}\", \"Display format if updates available\"),\n (\"colour_no_updates\", \"ffffff\", \"Colour when there's no updates.\"),\n (\"colour_have_updates\", \"ffffff\", \"Colour when there are updates.\"),\n (\"restart_indicator\", \"\", \"Indicator to represent reboot is required. (Ubuntu only)\"),\n (\"no_update_string\", \"\", \"String to display if no updates available\"),\n ]\n\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(CheckUpdates.defaults)\n\n # Helpful to have this as a variable as we can shorten it for testing\n self.execute_polling_interval = 1\n\n # format: \"Distro\": (\"cmd\", \"number of lines to subtract from output\")\n self.cmd_dict = {\n \"Arch\": (\"pacman -Qu\", 0),\n \"Arch_checkupdates\": (\"checkupdates\", 0),\n \"Arch_Sup\": (\"pacman -Sup\", 0),\n \"Arch_paru\": (\"paru -Qu\", 0),\n \"Arch_paru_Sup\": (\"paru -Sup\", 0),\n \"Arch_yay\": (\"yay -Qu\", 0),\n \"Debian\": (\"apt-show-versions -u -b\", 0),\n \"Gentoo_eix\": (\"EIX_LIMIT=0 eix -u# --world\", 0),\n \"Ubuntu\": (\"aptitude search ~U\", 0),\n \"Fedora\": (\"dnf list updates -q\", 1),\n \"FreeBSD\": (\"pkg_version -I -l '<'\", 0),\n \"Mandriva\": (\"urpmq --auto-select\", 0),\n }\n\n if self.custom_command:\n # Use custom_command\n self.cmd = self.custom_command\n\n else:\n # Check if distro name is valid.\n try:\n self.cmd = self.cmd_dict[self.distro][0]\n self.custom_command_modify = lambda x: x - self.cmd_dict[self.distro][1]\n except KeyError:\n distros = sorted(self.cmd_dict.keys())\n logger.error(\n self.distro\n + \" is not a valid distro name. 
\"\n + \"Use one of the list: \"\n + str(distros)\n + \".\"\n )\n self.cmd = None\n\n if self.execute:\n self.add_callbacks({\"Button1\": self.do_execute})\n\n def _check_updates(self):\n # type: () -> str\n try:\n updates = self.call_process(self.cmd, shell=True)\n except CalledProcessError:\n updates = \"\"\n num_updates = self.custom_command_modify(len(updates.splitlines()))\n\n if num_updates < 0:\n num_updates = 0\n if num_updates == 0:\n self.layout.colour = self.colour_no_updates\n return self.no_update_string\n num_updates = str(num_updates)\n\n if self.restart_indicator and os.path.exists(\"/var/run/reboot-required\"):\n num_updates += self.restart_indicator\n\n self.layout.colour = self.colour_have_updates\n return self.display_format.format(**{\"updates\": num_updates})\n\n def poll(self):\n # type: () -> str\n if not self.cmd:\n return \"N/A\"\n return self._check_updates()\n\n def do_execute(self):\n self._process = Popen(self.execute, shell=True)\n self.timeout_add(self.execute_polling_interval, self._refresh_count)\n\n def _refresh_count(self):\n if self._process.poll() is None:\n self.timeout_add(self.execute_polling_interval, self._refresh_count)\n\n else:\n self.timer_setup()\n", "path": "libqtile/widget/check_updates.py"}]} | 1,793 | 250 |
gh_patches_debug_15946 | rasdani/github-patches | git_diff | microsoft__Qcodes-485 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keithley 2600 "resolution"
@MerlinSmiles right now we are limiting the set to 8 digits (https://github.com/QCoDeS/Qcodes/blob/master/qcodes/instrument_drivers/tektronix/Keithley_2600.py#L23)
Afaik it can go up to 12 digits. Do you confirm?
--- END ISSUE ---
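The "8 digits" under discussion is simply the precision field of the Python format string used to build the instrument's set command (see `set_cmd='source.levelv={:.8f}'` in the driver source below). A hypothetical comparison of 8 versus 12 fractional digits:

```python
level = 1.234567890123e-3  # an arbitrary example value

print('source.levelv={:.8f}'.format(level))   # source.levelv=0.00123457
print('source.levelv={:.12f}'.format(level))  # source.levelv=0.001234567890
```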
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qcodes/instrument_drivers/tektronix/Keithley_2600.py`
Content:
```
1 from qcodes import VisaInstrument
2
3
4 class Keithley_2600(VisaInstrument):
5 """
6 channel: use channel 'a' or 'b'
7
8 This is the qcodes driver for the Keithley_2600 Source-Meter series,
9 tested with Keithley_2614B
10
11 Status: beta-version.
12 TODO:
13 - Add all parameters that are in the manual
14 - range and limit should be set according to mode
15 - add ramping and such stuff
16
17 """
18 def __init__(self, name, address, channel, **kwargs):
19 super().__init__(name, address, terminator='\n', **kwargs)
20 self._channel = channel
21
22 self.add_parameter('volt', get_cmd='measure.v()',
23 get_parser=float, set_cmd='source.levelv={:.8f}',
24 label='Voltage',
25 unit='V')
26 self.add_parameter('curr', get_cmd='measure.i()',
27 get_parser=float, set_cmd='source.leveli={:.8f}',
28 label='Current',
29 unit='A')
30 self.add_parameter('mode',
31 get_cmd='source.func',
32 set_cmd='source.func={:d}',
33 val_mapping={'current': 0, 'voltage': 1})
34 self.add_parameter('output',
35 get_cmd='source.output',
36 set_cmd='source.output={:d}',
37 val_mapping={'on': 1, 'off': 0})
38 # Source range
39 # needs get after set
40 self.add_parameter('rangev',
41 get_cmd='source.rangev',
42 get_parser=float,
43 set_cmd='source.rangev={:.4f}',
44 unit='V')
45 # Measure range
46 # needs get after set
47 self.add_parameter('rangei',
48 get_cmd='source.rangei',
49 get_parser=float,
50 set_cmd='source.rangei={:.4f}',
51 unit='A')
52 # Compliance limit
53 self.add_parameter('limitv',
54 get_cmd='source.limitv',
55 get_parser=float,
56 set_cmd='source.limitv={:.4f}',
57 unit='V')
58 # Compliance limit
59 self.add_parameter('limiti',
60 get_cmd='source.limiti',
61 get_parser=float,
62 set_cmd='source.limiti={:.4f}',
63 unit='A')
64
65 self.connect_message()
66
67 def get_idn(self):
68 IDN = self.ask_raw('*IDN?')
69 vendor, model, serial, firmware = map(str.strip, IDN.split(','))
70 model = model[6:]
71
72 IDN = {'vendor': vendor, 'model': model,
73 'serial': serial, 'firmware': firmware}
74 return IDN
75
76 def reset(self):
77 self.write('reset()')
78
79 def ask(self, cmd):
80 return super().ask('print(smu{:s}.{:s})'.format(self._channel, cmd))
81
82 def write(self, cmd):
83 super().write('smu{:s}.{:s}'.format(self._channel, cmd))
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qcodes/instrument_drivers/tektronix/Keithley_2600.py b/qcodes/instrument_drivers/tektronix/Keithley_2600.py
--- a/qcodes/instrument_drivers/tektronix/Keithley_2600.py
+++ b/qcodes/instrument_drivers/tektronix/Keithley_2600.py
@@ -20,11 +20,11 @@
self._channel = channel
self.add_parameter('volt', get_cmd='measure.v()',
- get_parser=float, set_cmd='source.levelv={:.8f}',
+ get_parser=float, set_cmd='source.levelv={:.12f}',
label='Voltage',
unit='V')
self.add_parameter('curr', get_cmd='measure.i()',
- get_parser=float, set_cmd='source.leveli={:.8f}',
+ get_parser=float, set_cmd='source.leveli={:.12f}',
label='Current',
unit='A')
self.add_parameter('mode',
| {"golden_diff": "diff --git a/qcodes/instrument_drivers/tektronix/Keithley_2600.py b/qcodes/instrument_drivers/tektronix/Keithley_2600.py\n--- a/qcodes/instrument_drivers/tektronix/Keithley_2600.py\n+++ b/qcodes/instrument_drivers/tektronix/Keithley_2600.py\n@@ -20,11 +20,11 @@\n self._channel = channel\n \n self.add_parameter('volt', get_cmd='measure.v()',\n- get_parser=float, set_cmd='source.levelv={:.8f}',\n+ get_parser=float, set_cmd='source.levelv={:.12f}',\n label='Voltage',\n unit='V')\n self.add_parameter('curr', get_cmd='measure.i()',\n- get_parser=float, set_cmd='source.leveli={:.8f}',\n+ get_parser=float, set_cmd='source.leveli={:.12f}',\n label='Current',\n unit='A')\n self.add_parameter('mode',\n", "issue": "Keithely 2600 \"resolution\"\n@MerlinSmiles right now we are limiting the set to 8 digits (https://github.com/QCoDeS/Qcodes/blob/master/qcodes/instrument_drivers/tektronix/Keithley_2600.py#L23)\r\nAfaik it can go to to 12 digits. Do you confirm ? \r\n\n", "before_files": [{"content": "from qcodes import VisaInstrument\n\n\nclass Keithley_2600(VisaInstrument):\n \"\"\"\n channel: use channel 'a' or 'b'\n\n This is the qcodes driver for the Keithley_2600 Source-Meter series,\n tested with Keithley_2614B\n\n Status: beta-version.\n TODO:\n - Add all parameters that are in the manual\n - range and limit should be set according to mode\n - add ramping and such stuff\n\n \"\"\"\n def __init__(self, name, address, channel, **kwargs):\n super().__init__(name, address, terminator='\\n', **kwargs)\n self._channel = channel\n\n self.add_parameter('volt', get_cmd='measure.v()',\n get_parser=float, set_cmd='source.levelv={:.8f}',\n label='Voltage',\n unit='V')\n self.add_parameter('curr', get_cmd='measure.i()',\n get_parser=float, set_cmd='source.leveli={:.8f}',\n label='Current',\n unit='A')\n self.add_parameter('mode',\n get_cmd='source.func',\n set_cmd='source.func={:d}',\n val_mapping={'current': 0, 'voltage': 1})\n self.add_parameter('output',\n get_cmd='source.output',\n set_cmd='source.output={:d}',\n val_mapping={'on': 1, 'off': 0})\n # Source range\n # needs get after set\n self.add_parameter('rangev',\n get_cmd='source.rangev',\n get_parser=float,\n set_cmd='source.rangev={:.4f}',\n unit='V')\n # Measure range\n # needs get after set\n self.add_parameter('rangei',\n get_cmd='source.rangei',\n get_parser=float,\n set_cmd='source.rangei={:.4f}',\n unit='A')\n # Compliance limit\n self.add_parameter('limitv',\n get_cmd='source.limitv',\n get_parser=float,\n set_cmd='source.limitv={:.4f}',\n unit='V')\n # Compliance limit\n self.add_parameter('limiti',\n get_cmd='source.limiti',\n get_parser=float,\n set_cmd='source.limiti={:.4f}',\n unit='A')\n\n self.connect_message()\n\n def get_idn(self):\n IDN = self.ask_raw('*IDN?')\n vendor, model, serial, firmware = map(str.strip, IDN.split(','))\n model = model[6:]\n\n IDN = {'vendor': vendor, 'model': model,\n 'serial': serial, 'firmware': firmware}\n return IDN\n\n def reset(self):\n self.write('reset()')\n\n def ask(self, cmd):\n return super().ask('print(smu{:s}.{:s})'.format(self._channel, cmd))\n\n def write(self, cmd):\n super().write('smu{:s}.{:s}'.format(self._channel, cmd))\n", "path": "qcodes/instrument_drivers/tektronix/Keithley_2600.py"}], "after_files": [{"content": "from qcodes import VisaInstrument\n\n\nclass Keithley_2600(VisaInstrument):\n \"\"\"\n channel: use channel 'a' or 'b'\n\n This is the qcodes driver for the Keithley_2600 Source-Meter series,\n tested with Keithley_2614B\n\n Status: beta-version.\n TODO:\n - 
Add all parameters that are in the manual\n - range and limit should be set according to mode\n - add ramping and such stuff\n\n \"\"\"\n def __init__(self, name, address, channel, **kwargs):\n super().__init__(name, address, terminator='\\n', **kwargs)\n self._channel = channel\n\n self.add_parameter('volt', get_cmd='measure.v()',\n get_parser=float, set_cmd='source.levelv={:.12f}',\n label='Voltage',\n unit='V')\n self.add_parameter('curr', get_cmd='measure.i()',\n get_parser=float, set_cmd='source.leveli={:.12f}',\n label='Current',\n unit='A')\n self.add_parameter('mode',\n get_cmd='source.func',\n set_cmd='source.func={:d}',\n val_mapping={'current': 0, 'voltage': 1})\n self.add_parameter('output',\n get_cmd='source.output',\n set_cmd='source.output={:d}',\n val_mapping={'on': 1, 'off': 0})\n # Source range\n # needs get after set\n self.add_parameter('rangev',\n get_cmd='source.rangev',\n get_parser=float,\n set_cmd='source.rangev={:.4f}',\n unit='V')\n # Measure range\n # needs get after set\n self.add_parameter('rangei',\n get_cmd='source.rangei',\n get_parser=float,\n set_cmd='source.rangei={:.4f}',\n unit='A')\n # Compliance limit\n self.add_parameter('limitv',\n get_cmd='source.limitv',\n get_parser=float,\n set_cmd='source.limitv={:.4f}',\n unit='V')\n # Compliance limit\n self.add_parameter('limiti',\n get_cmd='source.limiti',\n get_parser=float,\n set_cmd='source.limiti={:.4f}',\n unit='A')\n\n self.connect_message()\n\n def get_idn(self):\n IDN = self.ask_raw('*IDN?')\n vendor, model, serial, firmware = map(str.strip, IDN.split(','))\n model = model[6:]\n\n IDN = {'vendor': vendor, 'model': model,\n 'serial': serial, 'firmware': firmware}\n return IDN\n\n def reset(self):\n self.write('reset()')\n\n def ask(self, cmd):\n return super().ask('print(smu{:s}.{:s})'.format(self._channel, cmd))\n\n def write(self, cmd):\n super().write('smu{:s}.{:s}'.format(self._channel, cmd))\n", "path": "qcodes/instrument_drivers/tektronix/Keithley_2600.py"}]} | 1,173 | 234 |
gh_patches_debug_4901 | rasdani/github-patches | git_diff | certbot__certbot-6349 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError handle_modules with 0.27.0 on openSUSE
## My operating system is (include version):
openSUSE Leap 42.1
## I installed Certbot with (certbot-auto, OS package manager, pip, etc):
certbot-auto
## I ran this command and it produced this output:
````
kevdev36:~ # certbot-auto --version
Upgrading certbot-auto 0.26.1 to 0.27.0...
Replacing certbot-auto...
Creating virtual environment...
Installing Python packages...
Installation succeeded.
An unexpected error occurred:
KeyError: 'handle_modules'
Please see the logfile '/tmp/tmpMAZJox' for more details.
````
## Certbot's behavior differed from what I expected because:
It did not print the version.
## Here is a Certbot log showing the issue (if available):
/tmp/tmpMAZJox
````
2018-09-06 09:59:58,652:DEBUG:certbot.main:certbot version: 0.27.0
2018-09-06 09:59:58,652:DEBUG:certbot.main:Arguments: ['--version']
2018-09-06 09:59:58,653:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#apache,PluginEntryPoint#manual,PluginEntryPoint#nginx,PluginEntryPoint#null,PluginEntryPoint#standalone,PluginEntryPoint#webroot)
2018-09-06 09:59:58,660:DEBUG:certbot.log:Exiting abnormally:
Traceback (most recent call last):
File "/opt/eff.org/certbot/venv/bin/letsencrypt", line 11, in <module>
sys.exit(main())
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/main.py", line 1345, in main
args = cli.prepare_and_parse_args(plugins, cli_args)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py", line 1243, in prepare_and_parse_args
_plugins_parsing(helpful, plugins)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py", line 1458, in _plugins_parsing
helpful.add_plugin_args(plugins)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py", line 840, in add_plugin_args
plugin_ep.plugin_cls.inject_parser_options(parser_or_group, name)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/plugins/common.py", line 81, in inject_parser_options
return cls.add_parser_arguments(add)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot_apache/configurator.py", line 159, in add_parser_arguments
add("handle-modules", default=cls.OS_DEFAULTS["handle_modules"],
KeyError: 'handle_modules'
2018-09-06 09:59:58,660:ERROR:certbot.log:An unexpected error occurred:
````
## Workaround
Downgrade to 0.26.1 and use `certbot-auto` with `--no-self-upgrade`.
````
kevdev36:~ # wget https://raw.githubusercontent.com/certbot/certbot/v0.26.1/certbot-auto
kevdev36:~ # chmod +x certbot-auto
kevdev36:~ # /opt/eff.org/certbot/venv/bin/pip install certbot==0.26.1 certbot-apache==0.26.1 certbot-nginx==0.26.1
kevdev36:~ # ./certbot-auto --no-self-upgrade --version
certbot 0.26.1
````
--- END ISSUE ---
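The traceback boils down to a dictionary-key mismatch: the Apache configurator looks up `OS_DEFAULTS["handle_modules"]`, while the openSUSE override (shown below) still defines the old `handle_mods` key. A minimal reproduction of that failure mode, using nothing beyond the two key names from the report:

```python
# Stand-in for the openSUSE override's defaults dict (heavily trimmed).
OS_DEFAULTS = dict(handle_mods=False)

# Stand-in for the lookup performed by the configurator's argument parsing:
OS_DEFAULTS["handle_modules"]  # raises KeyError: 'handle_modules'
```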
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot-apache/certbot_apache/override_suse.py`
Content:
```
1 """ Distribution specific override class for OpenSUSE """
2 import pkg_resources
3
4 import zope.interface
5
6 from certbot import interfaces
7
8 from certbot_apache import configurator
9
10 @zope.interface.provider(interfaces.IPluginFactory)
11 class OpenSUSEConfigurator(configurator.ApacheConfigurator):
12 """OpenSUSE specific ApacheConfigurator override class"""
13
14 OS_DEFAULTS = dict(
15 server_root="/etc/apache2",
16 vhost_root="/etc/apache2/vhosts.d",
17 vhost_files="*.conf",
18 logs_root="/var/log/apache2",
19 ctl="apache2ctl",
20 version_cmd=['apache2ctl', '-v'],
21 restart_cmd=['apache2ctl', 'graceful'],
22 conftest_cmd=['apache2ctl', 'configtest'],
23 enmod="a2enmod",
24 dismod="a2dismod",
25 le_vhost_ext="-le-ssl.conf",
26 handle_mods=False,
27 handle_sites=False,
28 challenge_location="/etc/apache2/vhosts.d",
29 MOD_SSL_CONF_SRC=pkg_resources.resource_filename(
30 "certbot_apache", "options-ssl-apache.conf")
31 )
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/certbot-apache/certbot_apache/override_suse.py b/certbot-apache/certbot_apache/override_suse.py
--- a/certbot-apache/certbot_apache/override_suse.py
+++ b/certbot-apache/certbot_apache/override_suse.py
@@ -23,7 +23,7 @@
enmod="a2enmod",
dismod="a2dismod",
le_vhost_ext="-le-ssl.conf",
- handle_mods=False,
+ handle_modules=False,
handle_sites=False,
challenge_location="/etc/apache2/vhosts.d",
MOD_SSL_CONF_SRC=pkg_resources.resource_filename(
| {"golden_diff": "diff --git a/certbot-apache/certbot_apache/override_suse.py b/certbot-apache/certbot_apache/override_suse.py\n--- a/certbot-apache/certbot_apache/override_suse.py\n+++ b/certbot-apache/certbot_apache/override_suse.py\n@@ -23,7 +23,7 @@\n enmod=\"a2enmod\",\n dismod=\"a2dismod\",\n le_vhost_ext=\"-le-ssl.conf\",\n- handle_mods=False,\n+ handle_modules=False,\n handle_sites=False,\n challenge_location=\"/etc/apache2/vhosts.d\",\n MOD_SSL_CONF_SRC=pkg_resources.resource_filename(\n", "issue": "KeyError handle_modules with 0.27.0 on openSUSE\n## My operating system is (include version):\r\n\r\nopenSUSE Leap 42.1\r\n\r\n## I installed Certbot with (certbot-auto, OS package manager, pip, etc):\r\n\r\ncertbot-auto\r\n\r\n## I ran this command and it produced this output:\r\n\r\n````\r\nkevdev36:~ # certbot-auto --version\r\nUpgrading certbot-auto 0.26.1 to 0.27.0...\r\nReplacing certbot-auto...\r\nCreating virtual environment...\r\nInstalling Python packages...\r\nInstallation succeeded.\r\nAn unexpected error occurred:\r\nKeyError: 'handle_modules'\r\nPlease see the logfile '/tmp/tmpMAZJox' for more details.\r\n````\r\n\r\n## Certbot's behavior differed from what I expected because:\r\n\r\nIt did not print the version.\r\n\r\n## Here is a Certbot log showing the issue (if available):\r\n\r\n/tmp/tmpMAZJox\r\n\r\n````\r\n2018-09-06 09:59:58,652:DEBUG:certbot.main:certbot version: 0.27.0\r\n2018-09-06 09:59:58,652:DEBUG:certbot.main:Arguments: ['--version']\r\n2018-09-06 09:59:58,653:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#apache,PluginEntryPoint#manual,PluginEntryPoint#nginx,PluginEntryPoint#null,PluginEntryPoint#standalone,PluginEntryPoint#webroot)\r\n2018-09-06 09:59:58,660:DEBUG:certbot.log:Exiting abnormally:\r\nTraceback (most recent call last):\r\n File \"/opt/eff.org/certbot/venv/bin/letsencrypt\", line 11, in <module>\r\n sys.exit(main())\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/main.py\", line 1345, in main\r\n args = cli.prepare_and_parse_args(plugins, cli_args)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py\", line 1243, in prepare_and_parse_args\r\n _plugins_parsing(helpful, plugins)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py\", line 1458, in _plugins_parsing\r\n helpful.add_plugin_args(plugins)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py\", line 840, in add_plugin_args\r\n plugin_ep.plugin_cls.inject_parser_options(parser_or_group, name)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/plugins/common.py\", line 81, in inject_parser_options\r\n return cls.add_parser_arguments(add)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot_apache/configurator.py\", line 159, in add_parser_arguments\r\n add(\"handle-modules\", default=cls.OS_DEFAULTS[\"handle_modules\"],\r\nKeyError: 'handle_modules'\r\n2018-09-06 09:59:58,660:ERROR:certbot.log:An unexpected error occurred:\r\n````\r\n\r\n## Workaround\r\n\r\nDowngrade to 0.26.1 and use `certbot-auto` with `--no-self-upgrade`.\r\n\r\n````\r\nkevdev36:~ # wget https://raw.githubusercontent.com/certbot/certbot/v0.26.1/certbot-auto\r\nkevdev36:~ # chmod +x certbot-auto\r\nkevdev36:~ # /opt/eff.org/certbot/venv/bin/pip install certbot==0.26.1 certbot-apache==0.26.1 certbot-nginx==0.26.1\r\nkevdev36:~ # ./certbot-auto --no-self-upgrade --version\r\ncertbot 0.26.1\r\n````\n", "before_files": [{"content": "\"\"\" 
Distribution specific override class for OpenSUSE \"\"\"\nimport pkg_resources\n\nimport zope.interface\n\nfrom certbot import interfaces\n\nfrom certbot_apache import configurator\n\[email protected](interfaces.IPluginFactory)\nclass OpenSUSEConfigurator(configurator.ApacheConfigurator):\n \"\"\"OpenSUSE specific ApacheConfigurator override class\"\"\"\n\n OS_DEFAULTS = dict(\n server_root=\"/etc/apache2\",\n vhost_root=\"/etc/apache2/vhosts.d\",\n vhost_files=\"*.conf\",\n logs_root=\"/var/log/apache2\",\n ctl=\"apache2ctl\",\n version_cmd=['apache2ctl', '-v'],\n restart_cmd=['apache2ctl', 'graceful'],\n conftest_cmd=['apache2ctl', 'configtest'],\n enmod=\"a2enmod\",\n dismod=\"a2dismod\",\n le_vhost_ext=\"-le-ssl.conf\",\n handle_mods=False,\n handle_sites=False,\n challenge_location=\"/etc/apache2/vhosts.d\",\n MOD_SSL_CONF_SRC=pkg_resources.resource_filename(\n \"certbot_apache\", \"options-ssl-apache.conf\")\n )\n", "path": "certbot-apache/certbot_apache/override_suse.py"}], "after_files": [{"content": "\"\"\" Distribution specific override class for OpenSUSE \"\"\"\nimport pkg_resources\n\nimport zope.interface\n\nfrom certbot import interfaces\n\nfrom certbot_apache import configurator\n\[email protected](interfaces.IPluginFactory)\nclass OpenSUSEConfigurator(configurator.ApacheConfigurator):\n \"\"\"OpenSUSE specific ApacheConfigurator override class\"\"\"\n\n OS_DEFAULTS = dict(\n server_root=\"/etc/apache2\",\n vhost_root=\"/etc/apache2/vhosts.d\",\n vhost_files=\"*.conf\",\n logs_root=\"/var/log/apache2\",\n ctl=\"apache2ctl\",\n version_cmd=['apache2ctl', '-v'],\n restart_cmd=['apache2ctl', 'graceful'],\n conftest_cmd=['apache2ctl', 'configtest'],\n enmod=\"a2enmod\",\n dismod=\"a2dismod\",\n le_vhost_ext=\"-le-ssl.conf\",\n handle_modules=False,\n handle_sites=False,\n challenge_location=\"/etc/apache2/vhosts.d\",\n MOD_SSL_CONF_SRC=pkg_resources.resource_filename(\n \"certbot_apache\", \"options-ssl-apache.conf\")\n )\n", "path": "certbot-apache/certbot_apache/override_suse.py"}]} | 1,497 | 154 |
gh_patches_debug_21031 | rasdani/github-patches | git_diff | spack__spack-15252 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
py-pyfftw import issue with scipy.fftpack
Hi,
Sorry to bother you all.
After loading the spack modules via:
```console
spack load -r [email protected]
spack load py-h5py
spack load py-scipy
spack load py-pyfftw
spack load py-mpi4py
```
When I try to do `import scipy_fftpack` in the Python code I am using, I have been getting an error message that ends with:
### Error Message
```python
from scipy.fftpack import (dct, idct, dst, idst, diff, tilbert, itilbert,
ImportError: cannot import name '_fftpack' from 'scipy.fftpack'
```
The full error output is in [error.txt](https://github.com/spack/spack/files/4252499/error.txt).
I think that error is solved in the recent version of pyFFTW (https://github.com/pyFFTW/pyFFTW/pull/265 and https://github.com/pyFFTW/pyFFTW/issues/279).
But on my machine I still get that error.
I am not sure if I am installing py-pyfftw or py-scipy incorrectly, or making another mistake.
Or if I would just need to add an equivalent line to:
```vim
version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')
```
but for version `0.12.0`, to the package.py of py-pyfftw in Spack.
Do you have any suggestion on how I can fix this issue and correctly import the library?
Thank you,
Diana
### System
1. macOS Catalina - %[email protected] (but with [email protected] fortran compilers - see compilers.yaml below)
2. spack installed python (@3.7.6)
3. spack installed py-scipy (@1.4.1)
 4. spack installed py-pyfftw (@0.11.1)
-----
**compilers.yaml**
```vim
compilers:
- compiler:
spec: [email protected]
paths:
cc: /usr/bin/clang
cxx: /usr/bin/clang++
f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
flags: {}
operating_system: catalina
target: x86_64
modules: []
environment: {}
extra_rpaths: []
- compiler:
spec: [email protected]
paths:
cc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gcc
cxx: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/g++
f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
flags: {}
operating_system: catalina
target: x86_64
modules: []
environment: {}
extra_rpaths: []
```
-----
**packages.yaml**
```vim
packages:
all:
providers:
mpi: [mpich, openmpi]
```
--- END ISSUE ---
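If, as the reporter suspects, the missing piece is just a recipe entry for the newer pyFFTW release, the change would be one more `version()` directive inside the `PyPyfftw` class shown below. The line here is only a sketch: the checksum is a placeholder, not the real digest of the 0.12.0 tarball, and dependency constraints may also need updating for the 0.12.x series.

```python
# Hypothetical addition inside class PyPyfftw(PythonPackage) in
# var/spack/repos/builtin/packages/py-pyfftw/package.py:
version('0.12.0', sha256='<sha256 of the pyFFTW-0.12.0 tarball>')
```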
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/py-pyfftw/package.py`
Content:
```
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class PyPyfftw(PythonPackage):
10 """A pythonic wrapper around FFTW, the FFT library,
11 presenting a unified interface for all the supported transforms."""
12
13 homepage = "http://hgomersall.github.com/pyFFTW"
14 url = "https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz"
15
16 version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')
17 version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')
18
19 depends_on('fftw')
20 depends_on('py-setuptools', type='build')
21 depends_on('py-cython', type='build')
22 depends_on('[email protected]:', type=('build', 'run'))
23 depends_on('[email protected]:', type=('build', 'run'))
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/var/spack/repos/builtin/packages/py-pyfftw/package.py b/var/spack/repos/builtin/packages/py-pyfftw/package.py
--- a/var/spack/repos/builtin/packages/py-pyfftw/package.py
+++ b/var/spack/repos/builtin/packages/py-pyfftw/package.py
@@ -13,11 +13,12 @@
homepage = "http://hgomersall.github.com/pyFFTW"
url = "https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz"
+ version('0.12.0', sha256='60988e823ca75808a26fd79d88dbae1de3699e72a293f812aa4534f8a0a58cb0')
version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')
version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')
depends_on('fftw')
- depends_on('py-setuptools', type='build')
- depends_on('py-cython', type='build')
- depends_on('[email protected]:', type=('build', 'run'))
- depends_on('[email protected]:', type=('build', 'run'))
+ depends_on('py-setuptools', type='build')
+ depends_on('[email protected]:0.999', type='build')
+ depends_on('[email protected]:', type=('build', 'run'), when='@:0.10.4')
+ depends_on('[email protected]:1.999', type=('build', 'run'), when='@0.11.0:')
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/py-pyfftw/package.py b/var/spack/repos/builtin/packages/py-pyfftw/package.py\n--- a/var/spack/repos/builtin/packages/py-pyfftw/package.py\n+++ b/var/spack/repos/builtin/packages/py-pyfftw/package.py\n@@ -13,11 +13,12 @@\n homepage = \"http://hgomersall.github.com/pyFFTW\"\n url = \"https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz\"\n \n+ version('0.12.0', sha256='60988e823ca75808a26fd79d88dbae1de3699e72a293f812aa4534f8a0a58cb0')\n version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')\n version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')\n \n depends_on('fftw')\n- depends_on('py-setuptools', type='build')\n- depends_on('py-cython', type='build')\n- depends_on('[email protected]:', type=('build', 'run'))\n- depends_on('[email protected]:', type=('build', 'run'))\n+ depends_on('py-setuptools', type='build')\n+ depends_on('[email protected]:0.999', type='build')\n+ depends_on('[email protected]:', type=('build', 'run'), when='@:0.10.4')\n+ depends_on('[email protected]:1.999', type=('build', 'run'), when='@0.11.0:')\n", "issue": "py-pyfftw import issue with scipy.fftpack\nHi,\r\nSorry to bother you all.\r\nAfter loading the spack modules via:\r\n```console\r\n spack load -r [email protected]\r\n spack load py-h5py\r\n spack load py-scipy\r\n spack load py-pyfftw\r\n spack load py-mpi4py\r\n```\r\nWhen in the python code I am using I try to do `import spicy_fftpack`, I have been getting an error message that ends with:\r\n\r\n### Error Message\r\n```python\r\nfrom scipy.fftpack import (dct, idct, dst, idst, diff, tilbert, itilbert,\r\nImportError: cannot import name '_fftpack' from 'scipy.fftpack'\r\n```\r\nThe full error output is in [error.txt](https://github.com/spack/spack/files/4252499/error.txt).\r\n\r\nI think that that error is solved in the recent version of pfftw (https://github.com/pyFFTW/pyFFTW/pull/265 and https://github.com/pyFFTW/pyFFTW/issues/279).\r\n\r\nBut in my machine I still get that error.\r\nI am not sure if I am installing py-pyfftw or py-scipy incorrectly, or making another mistake.\r\nOr if I would just need to add an equivalent line to:\r\n```vim\r\nversion('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')\r\n```\r\nbut for version `0.12.0`, to the package.py of py-pyfftw of spack.\r\n\r\nDo you have any suggestion on how I can fix this issue and correctly import the library?\r\n\r\nThank you,\r\nDiana\r\n\r\n### System\r\n\r\n 1. macOS Catalina - %[email protected] (but with [email protected] fortran compilers - see compilers.yaml below)\r\n 2. spack installed python (@3.7.6)\r\n 3. spack installed py-scipy (@1.4.1)\r\n 4. 
spack installed py-pfftw (@0.11.1)\r\n\r\n-----\r\n\r\n**compilers.yaml**\r\n```vim\r\ncompilers:\r\n- compiler:\r\n spec: [email protected]\r\n paths:\r\n cc: /usr/bin/clang\r\n cxx: /usr/bin/clang++\r\n f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n flags: {}\r\n operating_system: catalina\r\n target: x86_64\r\n modules: []\r\n environment: {}\r\n extra_rpaths: []\r\n- compiler:\r\n spec: [email protected]\r\n paths:\r\n cc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gcc\r\n cxx: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/g++\r\n f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n flags: {}\r\n operating_system: catalina\r\n target: x86_64\r\n modules: []\r\n environment: {}\r\n extra_rpaths: []\r\n```\r\n-----\r\n\r\n**packages.yaml**\r\n```vim\r\npackages:\r\n all:\r\n providers:\r\n mpi: [mpich, openmpi]\r\n```\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass PyPyfftw(PythonPackage):\n \"\"\"A pythonic wrapper around FFTW, the FFT library,\n presenting a unified interface for all the supported transforms.\"\"\"\n\n homepage = \"http://hgomersall.github.com/pyFFTW\"\n url = \"https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz\"\n\n version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')\n version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')\n\n depends_on('fftw')\n depends_on('py-setuptools', type='build')\n depends_on('py-cython', type='build')\n depends_on('[email protected]:', type=('build', 'run'))\n depends_on('[email protected]:', type=('build', 'run'))\n", "path": "var/spack/repos/builtin/packages/py-pyfftw/package.py"}], "after_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass PyPyfftw(PythonPackage):\n \"\"\"A pythonic wrapper around FFTW, the FFT library,\n presenting a unified interface for all the supported transforms.\"\"\"\n\n homepage = \"http://hgomersall.github.com/pyFFTW\"\n url = \"https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz\"\n\n version('0.12.0', sha256='60988e823ca75808a26fd79d88dbae1de3699e72a293f812aa4534f8a0a58cb0')\n version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')\n version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')\n\n depends_on('fftw')\n depends_on('py-setuptools', type='build')\n depends_on('[email protected]:0.999', type='build')\n depends_on('[email protected]:', type=('build', 'run'), when='@:0.10.4')\n depends_on('[email protected]:1.999', type=('build', 'run'), when='@0.11.0:')\n", "path": "var/spack/repos/builtin/packages/py-pyfftw/package.py"}]} | 1,790 | 530 |
gh_patches_debug_11159 | rasdani/github-patches | git_diff | mozilla__kitsune-3192 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve performance of _get_creator_counts util function
`kitsune.community.utils._get_creator_counts` until function is DB heavy and takes a lot of time to execute. Evaluate its usefulness and provide a way to optimize the query and/or cache the results.
This issue is related to the degraded performance SUMO experienced on Fri March 30th ([NR Error](https://rpm.newrelic.com/accounts/1299394/applications/45097089/downtime/34422892))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kitsune/community/utils.py`
Content:
```
1 import hashlib
2
3 from datetime import datetime, date, timedelta
4 from django.conf import settings
5 from django.core.cache import cache
6 from django.db.models import Count, F
7
8 from kitsune.products.models import Product
9 from kitsune.questions.models import Answer
10 from kitsune.users.models import User, UserMappingType
11 from kitsune.wiki.models import Revision
12
13
14 def top_contributors_questions(start=None, end=None, locale=None, product=None,
15 count=10, page=1, use_cache=True):
16 """Get the top Support Forum contributors."""
17 if use_cache:
18 cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)
19 cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()
20 cache_key = 'top_contributors_questions_{}'.format(cache_key)
21 cached = cache.get(cache_key, None)
22 if cached:
23 return cached
24
25 answers = (Answer.objects
26 .exclude(is_spam=True)
27 .exclude(question__is_spam=True)
28 # Adding answer to your own question, isn't a contribution.
29 .exclude(creator_id=F('question__creator_id')))
30
31 if start is None:
32 # By default we go back 90 days.
33 start = date.today() - timedelta(days=90)
34 answers = answers.filter(created__gte=start)
35 if end:
36 # If no end is specified, we don't need to filter by it.
37 answers = answers.filter(created__lt=end)
38 if locale:
39 answers = answers.filter(question__locale=locale)
40 if product:
41 if isinstance(product, Product):
42 product = product.slug
43 answers = answers.filter(question__product__slug=product)
44
45 users = (User.objects
46 .filter(answers__in=answers)
47 .annotate(query_count=Count('answers'))
48 .order_by('-query_count'))
49 counts = _get_creator_counts(users, count, page)
50
51 if use_cache:
52 cache.set(cache_key, counts, 60*15) # 15 minutes
53 return counts
54
55
56 def top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):
57 """Get the top KB editors (locale='en-US')."""
58 return top_contributors_l10n(
59 start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)
60
61
62 def top_contributors_l10n(start=None, end=None, locale=None, product=None,
63 count=10, page=1, use_cache=True):
64 """Get the top l10n contributors for the KB."""
65 if use_cache:
66 cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)
67 cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()
68 cache_key = u'top_contributors_l10n_{}'.format(cache_key)
69 cached = cache.get(cache_key, None)
70 if cached:
71 return cached
72
73 # Get the user ids and contribution count of the top contributors.
74 revisions = Revision.objects.all()
75 if locale is None:
76 # If there is no locale specified, exclude en-US only. The rest are
77 # l10n.
78 revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)
79 if start is None:
80 # By default we go back 90 days.
81 start = date.today() - timedelta(days=90)
82 revisions = revisions.filter(created__gte=start)
83 if end:
84 # If no end is specified, we don't need to filter by it.
85 revisions = revisions.filter(created__lt=end)
86 if locale:
87 revisions = revisions.filter(document__locale=locale)
88 if product:
89 if isinstance(product, Product):
90 product = product.slug
91 revisions = revisions.filter(document__products__slug=product)
92
93 users = (User.objects
94 .filter(created_revisions__in=revisions)
95 .annotate(query_count=Count('created_revisions'))
96 .order_by('-query_count'))
97 counts = _get_creator_counts(users, count, page)
98
99 if use_cache:
100 cache.set(cache_key, counts, 60*15) # 15 minutes
101 return counts
102
103
104 def top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):
105 """Get the top Army of Awesome contributors."""
106 # AoA is deprecated, return 0 until we remove all related code.
107 return ([], 0)
108
109
110 def _get_creator_counts(query, count, page):
111 total = query.count()
112
113 start = (page - 1) * count
114 end = page * count
115 query_data = query.values('id', 'query_count')[start:end]
116
117 query_data = {obj['id']: obj['query_count'] for obj in query_data}
118
119 users_data = (UserMappingType.search().filter(id__in=query_data.keys())
120 .values_dict('id', 'username', 'display_name',
121 'avatar', 'twitter_usernames',
122 'last_contribution_date')[:count])
123
124 users_data = UserMappingType.reshape(users_data)
125
126 results = []
127 now = datetime.now()
128
129 for u_data in users_data:
130 user_id = u_data.get('id')
131 last_contribution_date = u_data.get('last_contribution_date', None)
132
133 u_data['days_since_last_activity'] = ((now - last_contribution_date).days
134 if last_contribution_date else None)
135
136 data = {
137 'count': query_data.get(user_id),
138 'term': user_id,
139 'user': u_data
140 }
141
142 results.append(data)
143
144 return results, total
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kitsune/community/utils.py b/kitsune/community/utils.py
--- a/kitsune/community/utils.py
+++ b/kitsune/community/utils.py
@@ -1,6 +1,8 @@
import hashlib
from datetime import datetime, date, timedelta
+from operator import itemgetter
+
from django.conf import settings
from django.core.cache import cache
from django.db.models import Count, F
@@ -141,4 +143,8 @@
results.append(data)
+ # Descending Order the list according to count.
+ # As the top number of contributor should be at first
+ results = sorted(results, key=itemgetter('count'), reverse=True)
+
return results, total
| {"golden_diff": "diff --git a/kitsune/community/utils.py b/kitsune/community/utils.py\n--- a/kitsune/community/utils.py\n+++ b/kitsune/community/utils.py\n@@ -1,6 +1,8 @@\n import hashlib\n \n from datetime import datetime, date, timedelta\n+from operator import itemgetter\n+\n from django.conf import settings\n from django.core.cache import cache\n from django.db.models import Count, F\n@@ -141,4 +143,8 @@\n \n results.append(data)\n \n+ # Descending Order the list according to count.\n+ # As the top number of contributor should be at first\n+ results = sorted(results, key=itemgetter('count'), reverse=True)\n+\n return results, total\n", "issue": "Improve performance of _get_creator_counts util function\n`kitsune.community.utils._get_creator_counts` until function is DB heavy and takes a lot of time to execute. Evaluate its usefulness and provide a way to optimize the query and/or cache the results. \r\n\r\nThis issue is related to the degraded performance SUMO experienced on Fri March 30th ([NR Error](https://rpm.newrelic.com/accounts/1299394/applications/45097089/downtime/34422892))\n", "before_files": [{"content": "import hashlib\n\nfrom datetime import datetime, date, timedelta\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db.models import Count, F\n\nfrom kitsune.products.models import Product\nfrom kitsune.questions.models import Answer\nfrom kitsune.users.models import User, UserMappingType\nfrom kitsune.wiki.models import Revision\n\n\ndef top_contributors_questions(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top Support Forum contributors.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = 'top_contributors_questions_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n answers = (Answer.objects\n .exclude(is_spam=True)\n .exclude(question__is_spam=True)\n # Adding answer to your own question, isn't a contribution.\n .exclude(creator_id=F('question__creator_id')))\n\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n answers = answers.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n answers = answers.filter(created__lt=end)\n if locale:\n answers = answers.filter(question__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n answers = answers.filter(question__product__slug=product)\n\n users = (User.objects\n .filter(answers__in=answers)\n .annotate(query_count=Count('answers'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top KB editors (locale='en-US').\"\"\"\n return top_contributors_l10n(\n start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)\n\n\ndef top_contributors_l10n(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top l10n contributors for the KB.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = 
u'top_contributors_l10n_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n # Get the user ids and contribution count of the top contributors.\n revisions = Revision.objects.all()\n if locale is None:\n # If there is no locale specified, exclude en-US only. The rest are\n # l10n.\n revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n revisions = revisions.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n revisions = revisions.filter(created__lt=end)\n if locale:\n revisions = revisions.filter(document__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n revisions = revisions.filter(document__products__slug=product)\n\n users = (User.objects\n .filter(created_revisions__in=revisions)\n .annotate(query_count=Count('created_revisions'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top Army of Awesome contributors.\"\"\"\n # AoA is deprecated, return 0 until we remove all related code.\n return ([], 0)\n\n\ndef _get_creator_counts(query, count, page):\n total = query.count()\n\n start = (page - 1) * count\n end = page * count\n query_data = query.values('id', 'query_count')[start:end]\n\n query_data = {obj['id']: obj['query_count'] for obj in query_data}\n\n users_data = (UserMappingType.search().filter(id__in=query_data.keys())\n .values_dict('id', 'username', 'display_name',\n 'avatar', 'twitter_usernames',\n 'last_contribution_date')[:count])\n\n users_data = UserMappingType.reshape(users_data)\n\n results = []\n now = datetime.now()\n\n for u_data in users_data:\n user_id = u_data.get('id')\n last_contribution_date = u_data.get('last_contribution_date', None)\n\n u_data['days_since_last_activity'] = ((now - last_contribution_date).days\n if last_contribution_date else None)\n\n data = {\n 'count': query_data.get(user_id),\n 'term': user_id,\n 'user': u_data\n }\n\n results.append(data)\n\n return results, total\n", "path": "kitsune/community/utils.py"}], "after_files": [{"content": "import hashlib\n\nfrom datetime import datetime, date, timedelta\nfrom operator import itemgetter\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db.models import Count, F\n\nfrom kitsune.products.models import Product\nfrom kitsune.questions.models import Answer\nfrom kitsune.users.models import User, UserMappingType\nfrom kitsune.wiki.models import Revision\n\n\ndef top_contributors_questions(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top Support Forum contributors.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = 'top_contributors_questions_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n answers = (Answer.objects\n .exclude(is_spam=True)\n .exclude(question__is_spam=True)\n # Adding answer to your own question, isn't a contribution.\n .exclude(creator_id=F('question__creator_id')))\n\n if start is None:\n # By default we go back 90 days.\n start = 
date.today() - timedelta(days=90)\n answers = answers.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n answers = answers.filter(created__lt=end)\n if locale:\n answers = answers.filter(question__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n answers = answers.filter(question__product__slug=product)\n\n users = (User.objects\n .filter(answers__in=answers)\n .annotate(query_count=Count('answers'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top KB editors (locale='en-US').\"\"\"\n return top_contributors_l10n(\n start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)\n\n\ndef top_contributors_l10n(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top l10n contributors for the KB.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = u'top_contributors_l10n_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n # Get the user ids and contribution count of the top contributors.\n revisions = Revision.objects.all()\n if locale is None:\n # If there is no locale specified, exclude en-US only. The rest are\n # l10n.\n revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n revisions = revisions.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n revisions = revisions.filter(created__lt=end)\n if locale:\n revisions = revisions.filter(document__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n revisions = revisions.filter(document__products__slug=product)\n\n users = (User.objects\n .filter(created_revisions__in=revisions)\n .annotate(query_count=Count('created_revisions'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top Army of Awesome contributors.\"\"\"\n # AoA is deprecated, return 0 until we remove all related code.\n return ([], 0)\n\n\ndef _get_creator_counts(query, count, page):\n total = query.count()\n\n start = (page - 1) * count\n end = page * count\n query_data = query.values('id', 'query_count')[start:end]\n\n query_data = {obj['id']: obj['query_count'] for obj in query_data}\n\n users_data = (UserMappingType.search().filter(id__in=query_data.keys())\n .values_dict('id', 'username', 'display_name',\n 'avatar', 'twitter_usernames',\n 'last_contribution_date')[:count])\n\n users_data = UserMappingType.reshape(users_data)\n\n results = []\n now = datetime.now()\n\n for u_data in users_data:\n user_id = u_data.get('id')\n last_contribution_date = u_data.get('last_contribution_date', None)\n\n u_data['days_since_last_activity'] = ((now - last_contribution_date).days\n if last_contribution_date else None)\n\n data = {\n 'count': query_data.get(user_id),\n 'term': user_id,\n 'user': u_data\n 
}\n\n results.append(data)\n\n # Descending Order the list according to count.\n # As the top number of contributor should be at first\n results = sorted(results, key=itemgetter('count'), reverse=True)\n\n return results, total\n", "path": "kitsune/community/utils.py"}]} | 1,949 | 158 |
gh_patches_debug_25001 | rasdani/github-patches | git_diff | awslabs__gluonts-1652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_tsf_reader.convert_base doesn't handle "10 minutes" frequency correctly
## Description
For Monash datasets with the "10 minutes" frequency, the frequency converter will convert it to a frequency 10 MonthEnd (10M), instead of the expect 10 Minutes (10T) frequency.
There is already code to properly handle the "minutely" frequency, but it checks for that string explicitly, so it doesn't catch the "10 minutes" case.
## To Reproduce
One dataset which has this frequency is the 10 minutes observation Solar dataset: https://zenodo.org/record/4656144
filename: `"solar_10_minutes_dataset.zip"`
record: `"4656132"`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/gluonts/dataset/repository/_tsf_reader.py`
Content:
```
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 from datetime import datetime
15 from distutils.util import strtobool
16 from multiprocessing import cpu_count
17 from types import SimpleNamespace
18
19 import numpy as np
20 from toolz import compose_left
21
22 from gluonts import json
23 from gluonts.nursery import glide
24
25 parse_bool = compose_left(strtobool, bool)
26
27
28 def parse_attribute(ty, value: str):
29 if ty == "numeric":
30 return int(value)
31
32 if ty == "string":
33 return value
34
35 if ty == "date":
36 return datetime.strptime(value, "%Y-%m-%d %H-%M-%S")
37
38 raise AttributeError(ty)
39
40
41 def frequency_converter(freq: str):
42 parts = freq.split("_")
43 if len(parts) == 1:
44 return convert_base(parts[0])
45 if len(parts) == 2:
46 return convert_multiple(parts[0]) + convert_base(parts[1])
47 raise ValueError(f"Invalid frequency string {freq}.")
48
49
50 def convert_base(text: str) -> str:
51 if text.lower() == "minutely":
52 return "T"
53 return text[0].upper()
54
55
56 def convert_multiple(text: str) -> str:
57 if text.isnumeric():
58 return text
59 if text == "half":
60 return "0.5"
61 raise ValueError(f"Unknown frequency multiple {text}.")
62
63
64 class TSFReader:
65 def __init__(
66 self,
67 path,
68 target_name="target",
69 ):
70 self.path = path
71 self.target_name = target_name
72
73 self.meta = SimpleNamespace(columns={})
74
75 def read(self):
76 with open(self.path, encoding="latin1") as in_file:
77 # strip whitespace
78 lines = map(str.strip, in_file)
79
80 # ignore all lines starting with #
81 lines = filter(lambda line: not line.startswith("#"), lines)
82
83 data_tag_found = self._read_header(lines)
84 assert data_tag_found, "Missing @data tag."
85 assert (
86 self.meta.columns
87 ), "Missing attribute section. Attribute section must come before data."
88
89 assert self.target_name not in self.meta.columns
90 self.meta.columns[self.target_name] = None
91
92 data = self._read_data_section(lines)
93
94 return self.meta, data
95
96 def _read_header(self, lines):
97 for line in lines:
98 assert line.startswith("@")
99 stop = self._tag(line[1:])
100
101 if stop:
102 return True
103
104 return False
105
106 def _read_data_section(self, lines):
107 lines = list(lines)
108
109 lines = glide.imap_unordered(
110 self._read_data, lines, num_workers=cpu_count(), batch_size=8092
111 )
112
113 return list(lines)
114
115 def _read_data(self, line):
116 parts = line.split(":")
117
118 assert len(parts) == len(
119 self.meta.columns
120 ), "Missing attributes/values in series."
121
122 *attributes, target = parts
123
124 record = {}
125
126 record[self.target_name] = self._data_target(target)
127
128 for (column, ty), attr in zip(self.meta.columns.items(), attributes):
129 record[column] = parse_attribute(ty, attr)
130
131 return record
132
133 def _data_target(self, s):
134 s = s.replace("?", '"nan"')
135
136 values = json.loads(f"[{s}]")
137 assert (
138 values
139 ), "A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol"
140
141 return np.array(values, dtype=float)
142
143 def _tag(self, line):
144 fn_by_tag = {
145 "attribute": self._tag_attribute,
146 "frequency": self._tag_frequency,
147 "horizon": self._tag_horizon,
148 "missing": self._tag_missing,
149 "equallength": self._tag_equallength,
150 "data": self._tag_data,
151 }
152 tag, *args = line.split(" ")
153
154 if tag not in fn_by_tag:
155 return
156
157 return fn_by_tag[tag](*args)
158
159 def _tag_attribute(self, name, ty):
160 self.meta.columns[name] = ty
161
162 def _tag_frequency(self, frequency):
163 self.meta.frequency = frequency
164
165 def _tag_horizon(self, horizon):
166 self.meta.forecast_horizon = horizon
167
168 def _tag_missing(self, missing):
169 self.meta.has_missing_values = parse_bool(missing)
170
171 def _tag_equallength(self, equallength):
172 self.meta.has_equal_length = parse_bool(equallength)
173
174 def _tag_data(self):
175 return True
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/gluonts/dataset/repository/_tsf_reader.py b/src/gluonts/dataset/repository/_tsf_reader.py
--- a/src/gluonts/dataset/repository/_tsf_reader.py
+++ b/src/gluonts/dataset/repository/_tsf_reader.py
@@ -15,11 +15,13 @@
from distutils.util import strtobool
from multiprocessing import cpu_count
from types import SimpleNamespace
+from typing import Dict
import numpy as np
from toolz import compose_left
from gluonts import json
+from gluonts.exceptions import GluonTSDataError
from gluonts.nursery import glide
parse_bool = compose_left(strtobool, bool)
@@ -47,10 +49,32 @@
raise ValueError(f"Invalid frequency string {freq}.")
+BASE_FREQ_TO_PANDAS_OFFSET: Dict[str, str] = {
+ "seconds": "S",
+ "minutely": "T",
+ "minutes": "T",
+ "hourly": "H",
+ "hours": "H",
+ "daily": "D",
+ "days": "D",
+ "weekly": "W",
+ "weeks": "W",
+ "monthly": "M",
+ "months": "M",
+ "quarterly": "Q",
+ "quarters": "Q",
+ "yearly": "Y",
+ "years": "Y",
+}
+
+
def convert_base(text: str) -> str:
- if text.lower() == "minutely":
- return "T"
- return text[0].upper()
+ try:
+ return BASE_FREQ_TO_PANDAS_OFFSET[text]
+ except KeyError:
+ raise GluonTSDataError(
+ f'"{text}" is not recognized as a frequency string'
+ )
def convert_multiple(text: str) -> str:
| {"golden_diff": "diff --git a/src/gluonts/dataset/repository/_tsf_reader.py b/src/gluonts/dataset/repository/_tsf_reader.py\n--- a/src/gluonts/dataset/repository/_tsf_reader.py\n+++ b/src/gluonts/dataset/repository/_tsf_reader.py\n@@ -15,11 +15,13 @@\n from distutils.util import strtobool\n from multiprocessing import cpu_count\n from types import SimpleNamespace\n+from typing import Dict\n \n import numpy as np\n from toolz import compose_left\n \n from gluonts import json\n+from gluonts.exceptions import GluonTSDataError\n from gluonts.nursery import glide\n \n parse_bool = compose_left(strtobool, bool)\n@@ -47,10 +49,32 @@\n raise ValueError(f\"Invalid frequency string {freq}.\")\n \n \n+BASE_FREQ_TO_PANDAS_OFFSET: Dict[str, str] = {\n+ \"seconds\": \"S\",\n+ \"minutely\": \"T\",\n+ \"minutes\": \"T\",\n+ \"hourly\": \"H\",\n+ \"hours\": \"H\",\n+ \"daily\": \"D\",\n+ \"days\": \"D\",\n+ \"weekly\": \"W\",\n+ \"weeks\": \"W\",\n+ \"monthly\": \"M\",\n+ \"months\": \"M\",\n+ \"quarterly\": \"Q\",\n+ \"quarters\": \"Q\",\n+ \"yearly\": \"Y\",\n+ \"years\": \"Y\",\n+}\n+\n+\n def convert_base(text: str) -> str:\n- if text.lower() == \"minutely\":\n- return \"T\"\n- return text[0].upper()\n+ try:\n+ return BASE_FREQ_TO_PANDAS_OFFSET[text]\n+ except KeyError:\n+ raise GluonTSDataError(\n+ f'\"{text}\" is not recognized as a frequency string'\n+ )\n \n \n def convert_multiple(text: str) -> str:\n", "issue": "_tsf_reader.convert_base doesn't handle \"10 minutes\" frequency correctly\n## Description\r\nFor Monash datasets with the \"10 minutes\" frequency, the frequency converter will convert it to a frequency 10 MonthEnd (10M), instead of the expect 10 Minutes (10T) frequency.\r\n\r\nThere is already code to properly handle the \"minutely\" frequency, but it checks for that string explicitly, so it doesn't catch the \"10 minutes\" case.\r\n\r\n## To Reproduce\r\nOne dataset which has this frequency is the 10 minutes observation Solar dataset: https://zenodo.org/record/4656144\r\nfilename: `\"solar_10_minutes_dataset.zip\"`\r\nrecord: `\"4656132\"`\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom datetime import datetime\nfrom distutils.util import strtobool\nfrom multiprocessing import cpu_count\nfrom types import SimpleNamespace\n\nimport numpy as np\nfrom toolz import compose_left\n\nfrom gluonts import json\nfrom gluonts.nursery import glide\n\nparse_bool = compose_left(strtobool, bool)\n\n\ndef parse_attribute(ty, value: str):\n if ty == \"numeric\":\n return int(value)\n\n if ty == \"string\":\n return value\n\n if ty == \"date\":\n return datetime.strptime(value, \"%Y-%m-%d %H-%M-%S\")\n\n raise AttributeError(ty)\n\n\ndef frequency_converter(freq: str):\n parts = freq.split(\"_\")\n if len(parts) == 1:\n return convert_base(parts[0])\n if len(parts) == 2:\n return convert_multiple(parts[0]) + convert_base(parts[1])\n raise ValueError(f\"Invalid frequency string {freq}.\")\n\n\ndef convert_base(text: str) -> str:\n if text.lower() == \"minutely\":\n return \"T\"\n return text[0].upper()\n\n\ndef convert_multiple(text: str) -> str:\n if text.isnumeric():\n return text\n if text == \"half\":\n return \"0.5\"\n raise ValueError(f\"Unknown frequency multiple {text}.\")\n\n\nclass TSFReader:\n def __init__(\n self,\n path,\n target_name=\"target\",\n ):\n self.path = path\n self.target_name = target_name\n\n self.meta = SimpleNamespace(columns={})\n\n def read(self):\n with open(self.path, encoding=\"latin1\") as in_file:\n # strip whitespace\n lines = map(str.strip, in_file)\n\n # ignore all lines starting with #\n lines = filter(lambda line: not line.startswith(\"#\"), lines)\n\n data_tag_found = self._read_header(lines)\n assert data_tag_found, \"Missing @data tag.\"\n assert (\n self.meta.columns\n ), \"Missing attribute section. Attribute section must come before data.\"\n\n assert self.target_name not in self.meta.columns\n self.meta.columns[self.target_name] = None\n\n data = self._read_data_section(lines)\n\n return self.meta, data\n\n def _read_header(self, lines):\n for line in lines:\n assert line.startswith(\"@\")\n stop = self._tag(line[1:])\n\n if stop:\n return True\n\n return False\n\n def _read_data_section(self, lines):\n lines = list(lines)\n\n lines = glide.imap_unordered(\n self._read_data, lines, num_workers=cpu_count(), batch_size=8092\n )\n\n return list(lines)\n\n def _read_data(self, line):\n parts = line.split(\":\")\n\n assert len(parts) == len(\n self.meta.columns\n ), \"Missing attributes/values in series.\"\n\n *attributes, target = parts\n\n record = {}\n\n record[self.target_name] = self._data_target(target)\n\n for (column, ty), attr in zip(self.meta.columns.items(), attributes):\n record[column] = parse_attribute(ty, attr)\n\n return record\n\n def _data_target(self, s):\n s = s.replace(\"?\", '\"nan\"')\n\n values = json.loads(f\"[{s}]\")\n assert (\n values\n ), \"A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? 
symbol\"\n\n return np.array(values, dtype=float)\n\n def _tag(self, line):\n fn_by_tag = {\n \"attribute\": self._tag_attribute,\n \"frequency\": self._tag_frequency,\n \"horizon\": self._tag_horizon,\n \"missing\": self._tag_missing,\n \"equallength\": self._tag_equallength,\n \"data\": self._tag_data,\n }\n tag, *args = line.split(\" \")\n\n if tag not in fn_by_tag:\n return\n\n return fn_by_tag[tag](*args)\n\n def _tag_attribute(self, name, ty):\n self.meta.columns[name] = ty\n\n def _tag_frequency(self, frequency):\n self.meta.frequency = frequency\n\n def _tag_horizon(self, horizon):\n self.meta.forecast_horizon = horizon\n\n def _tag_missing(self, missing):\n self.meta.has_missing_values = parse_bool(missing)\n\n def _tag_equallength(self, equallength):\n self.meta.has_equal_length = parse_bool(equallength)\n\n def _tag_data(self):\n return True\n", "path": "src/gluonts/dataset/repository/_tsf_reader.py"}], "after_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom datetime import datetime\nfrom distutils.util import strtobool\nfrom multiprocessing import cpu_count\nfrom types import SimpleNamespace\nfrom typing import Dict\n\nimport numpy as np\nfrom toolz import compose_left\n\nfrom gluonts import json\nfrom gluonts.exceptions import GluonTSDataError\nfrom gluonts.nursery import glide\n\nparse_bool = compose_left(strtobool, bool)\n\n\ndef parse_attribute(ty, value: str):\n if ty == \"numeric\":\n return int(value)\n\n if ty == \"string\":\n return value\n\n if ty == \"date\":\n return datetime.strptime(value, \"%Y-%m-%d %H-%M-%S\")\n\n raise AttributeError(ty)\n\n\ndef frequency_converter(freq: str):\n parts = freq.split(\"_\")\n if len(parts) == 1:\n return convert_base(parts[0])\n if len(parts) == 2:\n return convert_multiple(parts[0]) + convert_base(parts[1])\n raise ValueError(f\"Invalid frequency string {freq}.\")\n\n\nBASE_FREQ_TO_PANDAS_OFFSET: Dict[str, str] = {\n \"seconds\": \"S\",\n \"minutely\": \"T\",\n \"minutes\": \"T\",\n \"hourly\": \"H\",\n \"hours\": \"H\",\n \"daily\": \"D\",\n \"days\": \"D\",\n \"weekly\": \"W\",\n \"weeks\": \"W\",\n \"monthly\": \"M\",\n \"months\": \"M\",\n \"quarterly\": \"Q\",\n \"quarters\": \"Q\",\n \"yearly\": \"Y\",\n \"years\": \"Y\",\n}\n\n\ndef convert_base(text: str) -> str:\n try:\n return BASE_FREQ_TO_PANDAS_OFFSET[text]\n except KeyError:\n raise GluonTSDataError(\n f'\"{text}\" is not recognized as a frequency string'\n )\n\n\ndef convert_multiple(text: str) -> str:\n if text.isnumeric():\n return text\n if text == \"half\":\n return \"0.5\"\n raise ValueError(f\"Unknown frequency multiple {text}.\")\n\n\nclass TSFReader:\n def __init__(\n self,\n path,\n target_name=\"target\",\n ):\n self.path = path\n self.target_name = target_name\n\n self.meta = SimpleNamespace(columns={})\n\n def read(self):\n with open(self.path, encoding=\"latin1\") as in_file:\n # strip whitespace\n lines = map(str.strip, in_file)\n\n # ignore all lines starting with #\n lines = filter(lambda 
line: not line.startswith(\"#\"), lines)\n\n data_tag_found = self._read_header(lines)\n assert data_tag_found, \"Missing @data tag.\"\n assert (\n self.meta.columns\n ), \"Missing attribute section. Attribute section must come before data.\"\n\n assert self.target_name not in self.meta.columns\n self.meta.columns[self.target_name] = None\n\n data = self._read_data_section(lines)\n\n return self.meta, data\n\n def _read_header(self, lines):\n for line in lines:\n assert line.startswith(\"@\")\n stop = self._tag(line[1:])\n\n if stop:\n return True\n\n return False\n\n def _read_data_section(self, lines):\n lines = list(lines)\n\n lines = glide.imap_unordered(\n self._read_data, lines, num_workers=cpu_count(), batch_size=8092\n )\n\n return list(lines)\n\n def _read_data(self, line):\n parts = line.split(\":\")\n\n assert len(parts) == len(\n self.meta.columns\n ), \"Missing attributes/values in series.\"\n\n *attributes, target = parts\n\n record = {}\n\n record[self.target_name] = self._data_target(target)\n\n for (column, ty), attr in zip(self.meta.columns.items(), attributes):\n record[column] = parse_attribute(ty, attr)\n\n return record\n\n def _data_target(self, s):\n s = s.replace(\"?\", '\"nan\"')\n\n values = json.loads(f\"[{s}]\")\n assert (\n values\n ), \"A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol\"\n\n return np.array(values, dtype=float)\n\n def _tag(self, line):\n fn_by_tag = {\n \"attribute\": self._tag_attribute,\n \"frequency\": self._tag_frequency,\n \"horizon\": self._tag_horizon,\n \"missing\": self._tag_missing,\n \"equallength\": self._tag_equallength,\n \"data\": self._tag_data,\n }\n tag, *args = line.split(\" \")\n\n if tag not in fn_by_tag:\n return\n\n return fn_by_tag[tag](*args)\n\n def _tag_attribute(self, name, ty):\n self.meta.columns[name] = ty\n\n def _tag_frequency(self, frequency):\n self.meta.frequency = frequency\n\n def _tag_horizon(self, horizon):\n self.meta.forecast_horizon = horizon\n\n def _tag_missing(self, missing):\n self.meta.has_missing_values = parse_bool(missing)\n\n def _tag_equallength(self, equallength):\n self.meta.has_equal_length = parse_bool(equallength)\n\n def _tag_data(self):\n return True\n", "path": "src/gluonts/dataset/repository/_tsf_reader.py"}]} | 1,999 | 427 |