problem_id (stringlengths 18–22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13–58) | prompt (stringlengths 1.1k–25.4k) | golden_diff (stringlengths 145–5.13k) | verification_info (stringlengths 582–39.1k) | num_tokens (int64 271–4.1k) | num_tokens_diff (int64 47–1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_26206 | rasdani/github-patches | git_diff | conan-io__conan-568 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CMake always rerun with Ninja generator
Hi,
I am trying to use conan in our enterprise project and I am having problems when I use it in a CMake project with the Ninja generator. The project has many external dependencies, and it had been working normally with every generator we use (Make, Ninja, MSVC) before we tried to introduce conan.
The first step in our migration was to handle the Boost dependency. With the Make generator everything works like a charm. However, with the Ninja generator, CMake re-runs the configuration every time I run the ninja command.
I ran Ninja with "-d explain" to determine the origin of the problem. For some reason many of the cmake files are considered _dirty_.
```
# Configure project
mkdir build
conan install .. && cmake -GNinja ..
# Run ninja with debugging capabilities
luis@p4dDesktop:~/projects/pix4d/master-conan/pix4dmapper/build$ ninja -d explain
ninja explain: output /home/luis/.conan/data/zlib/1.2.8/lasote/stable/package/21ace02f4960dd0c1d50bd3abe1537054de08157/FindZLIB.cmake of phony edge with no inputs doesn't exist
ninja explain: /home/luis/.conan/data/Boost/1.60.0/piponazo/testing/package/ed2b408ce34ce36caef16f74181d3bc588210ba6/FindBoost.cmake is dirty
ninja explain: /home/luis/.conan/data/zlib/1.2.8/lasote/stable/package/21ace02f4960dd0c1d50bd3abe1537054de08157/FindZLIB.cmake is dirty
...
ninja explain: /home/luis/projects/Pix4DMapper-Master/master-conan/pix4dmapper/src/apps/CMakeLists.txt is dirty
# A bunch of other project cmake files
...
ninja explain: /usr/local/share/cmake-3.6/Modules/AutogenInfo.cmake.in is dirty
ninja explain: /usr/local/share/cmake-3.6/Modules/CMakeCCompiler.cmake.in is dirty
ninja explain: /usr/local/share/cmake-3.6/Modules/CMakeCCompilerABI.c is dirty
# Many other cmake files
...
ninja explain: CMakeCache.txt is dirty
ninja explain: CMakeFiles/3.6.2/CMakeCCompiler.cmake is dirty
ninja explain: CMakeFiles/3.6.2/CMakeCXXCompiler.cmake is dirty
ninja explain: CMakeFiles/3.6.2/CMakeSystem.cmake is dirty
ninja explain: CMakeFiles/feature_tests.c is dirty
ninja explain: CMakeFiles/feature_tests.cxx is dirty
ninja explain: conanbuildinfo.cmake is dirty
[0/1] Re-running CMake...
```
Note that when I remove the following lines from my CMakeLists.txt file, the ninja generator starts to work again:
```
#include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
#conan_basic_setup()
```
Have any of you experienced similar issues?
--- END ISSUE ---
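A note on the likely mechanism before the code below: Ninja decides whether an edge must re-run by comparing file modification times, so files restored into the local Conan cache with stale timestamps can keep being reported as dirty. A rough sketch of the usual remedy, assuming a plain `os.utime`-based `touch` helper standing in for `conans.util.files.touch`:

```python
import os


def touch(path, times=None):
    # Bump the file's access/modification time to "now", like the Unix touch command.
    os.utime(path, times)


def touch_tree(dest_folder):
    # Refresh the mtime of every file under dest_folder (e.g. a freshly extracted
    # package) so that mtime-based tools such as Ninja stop treating them as dirty.
    for dirname, _, filenames in os.walk(dest_folder):
        for fname in filenames:
            touch(os.path.join(dirname, fname))
```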
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/remote_manager.py`
Content:
```
1 import os
2 import shutil
3 import tarfile
4 import time
5 import traceback
6
7 from requests.exceptions import ConnectionError
8
9 from conans.errors import ConanException, ConanConnectionError
10 from conans.util.files import tar_extract, rmdir, relative_dirs
11 from conans.util.log import logger
12 from conans.paths import PACKAGE_TGZ_NAME, CONANINFO, CONAN_MANIFEST, CONANFILE, EXPORT_TGZ_NAME
13 from conans.util.files import gzopen_without_timestamps
14 from conans.util.files import touch
15
16
17 class RemoteManager(object):
18 """ Will handle the remotes to get conans, packages etc """
19
20 def __init__(self, client_cache, remote_client, output):
21 self._client_cache = client_cache
22 self._output = output
23 self._remote_client = remote_client
24
25 def upload_conan(self, conan_reference, remote):
26 """Will upload the conans to the first remote"""
27 basedir = self._client_cache.export(conan_reference)
28 rel_files = self._client_cache.export_paths(conan_reference)
29 the_files = {filename: os.path.join(basedir, filename) for filename in rel_files}
30
31 if CONANFILE not in rel_files or CONAN_MANIFEST not in rel_files:
32 raise ConanException("Cannot upload corrupted recipe '%s'" % str(conan_reference))
33
34 # FIXME: Check modified exports by hand?
35 the_files = compress_export_files(the_files, basedir, self._output)
36
37 return self._call_remote(remote, "upload_conan", conan_reference, the_files)
38
39 def upload_package(self, package_reference, remote):
40 """Will upload the package to the first remote"""
41 t1 = time.time()
42 # existing package, will use short paths if defined
43 basedir = self._client_cache.package(package_reference, short_paths=None)
44 rel_files = self._client_cache.package_paths(package_reference)
45
46 self._output.rewrite_line("Checking package integrity...")
47 if CONANINFO not in rel_files or CONAN_MANIFEST not in rel_files:
48 raise ConanException("Cannot upload corrupted package '%s'" % str(package_reference))
49
50 the_files = {filename: os.path.join(basedir, filename) for filename in rel_files}
51 logger.debug("====> Time remote_manager build_files_set : %f" % (time.time() - t1))
52
53 # If package has been modified remove tgz to regenerate it
54 read_manifest, expected_manifest = self._client_cache.package_manifests(package_reference)
55 if read_manifest is None or read_manifest.file_sums != expected_manifest.file_sums:
56 if PACKAGE_TGZ_NAME in the_files:
57 try:
58 tgz_path = os.path.join(basedir, PACKAGE_TGZ_NAME)
59 os.unlink(tgz_path)
60 except Exception:
61 pass
62 raise ConanException("Cannot upload corrupted package '%s'" % str(package_reference))
63 else:
64 self._output.rewrite_line("Package integrity OK!")
65 self._output.writeln("")
66 logger.debug("====> Time remote_manager check package integrity : %f" % (time.time() - t1))
67
68 the_files = compress_package_files(the_files, basedir, self._output)
69
70 tmp = self._call_remote(remote, "upload_package", package_reference, the_files)
71 logger.debug("====> Time remote_manager upload_package: %f" % (time.time() - t1))
72 return tmp
73
74 def get_conan_digest(self, conan_reference, remote):
75 """
76 Read ConanDigest from remotes
77 Will iterate the remotes to find the conans unless remote was specified
78
79 returns (ConanDigest, remote_name)"""
80 return self._call_remote(remote, "get_conan_digest", conan_reference)
81
82 def get_package_digest(self, package_reference, remote):
83 """
84 Read ConanDigest from remotes
85 Will iterate the remotes to find the conans unless remote was specified
86
87 returns (ConanDigest, remote_name)"""
88 return self._call_remote(remote, "get_package_digest", package_reference)
89
90 def get_recipe(self, conan_reference, dest_folder, remote):
91 """
92 Read the conans from remotes
93 Will iterate the remotes to find the conans unless remote was specified
94
95 returns (dict relative_filepath:abs_path , remote_name)"""
96 zipped_files = self._call_remote(remote, "get_recipe", conan_reference, dest_folder)
97 files = unzip_and_get_files(zipped_files, dest_folder, EXPORT_TGZ_NAME)
98 # Make sure that the source dir is deleted
99 rmdir(self._client_cache.source(conan_reference), True)
100 # TODO: Download only the CONANFILE file and only download the rest of files
101 # in install if needed (not found remote package)
102 return files
103
104 def get_package(self, package_reference, dest_folder, remote):
105 """
106 Read the conans package from remotes
107 Will iterate the remotes to find the conans unless remote was specified
108
109 returns (dict relative_filepath:abs_path , remote_name)"""
110 zipped_files = self._call_remote(remote, "get_package", package_reference, dest_folder)
111 files = unzip_and_get_files(zipped_files, dest_folder, PACKAGE_TGZ_NAME)
112 # Issue #214 https://github.com/conan-io/conan/issues/214
113 for dirname, _, files in os.walk(dest_folder):
114 for fname in files:
115 touch(os.path.join(dirname, fname))
116
117 return files
118
119 def search(self, remote, pattern=None, ignorecase=True):
120 """
121 Search exported conans information from remotes
122
123 returns (dict str(conan_ref): {packages_info}"""
124 return self._call_remote(remote, "search", pattern, ignorecase)
125
126 def search_packages(self, remote, reference, query):
127 return self._call_remote(remote, "search_packages", reference, query)
128
129 def remove(self, conan_ref, remote):
130 """
131 Removed conans or packages from remote
132 """
133 return self._call_remote(remote, "remove", conan_ref)
134
135 def remove_packages(self, conan_ref, remove_ids, remote):
136 """
137 Removed conans or packages from remote
138 """
139 return self._call_remote(remote, "remove_packages", conan_ref, remove_ids)
140
141 def authenticate(self, remote, name, password):
142 return self._call_remote(remote, 'authenticate', name, password)
143
144 def _call_remote(self, remote, method, *argc, **argv):
145 self._remote_client.remote = remote
146 try:
147 return getattr(self._remote_client, method)(*argc, **argv)
148 except ConnectionError as exc:
149 raise ConanConnectionError("Unable to connect to %s=%s" % (remote.name, remote.url))
150 except ConanException:
151 raise
152 except Exception as exc:
153 logger.error(traceback.format_exc())
154 raise ConanException(exc)
155
156
157 def compress_package_files(files, pkg_base_path, output):
158 # Check if conan_package.tgz is present
159 if PACKAGE_TGZ_NAME not in files:
160 output.rewrite_line("Compressing package...")
161 return compress_files(files, PACKAGE_TGZ_NAME,
162 excluded=(CONANINFO, CONAN_MANIFEST), dest_dir=pkg_base_path)
163 else:
164 the_files = {PACKAGE_TGZ_NAME: files[PACKAGE_TGZ_NAME],
165 CONANINFO: files[CONANINFO],
166 CONAN_MANIFEST: files[CONAN_MANIFEST]}
167
168 return the_files
169
170
171 def compress_export_files(files, export_base_path, output):
172 if EXPORT_TGZ_NAME not in files:
173 output.rewrite_line("Compressing exported files...")
174 return compress_files(files, EXPORT_TGZ_NAME,
175 excluded=(CONANFILE, CONAN_MANIFEST), dest_dir=export_base_path)
176 else:
177 the_files = {EXPORT_TGZ_NAME: files[EXPORT_TGZ_NAME],
178 CONANFILE: files[CONANFILE],
179 CONAN_MANIFEST: files[CONAN_MANIFEST]}
180 return the_files
181 return
182
183
184 def compress_files(files, name, excluded, dest_dir):
185 """Compress the package and returns the new dict (name => content) of files,
186 only with the conanXX files and the compressed file"""
187
188 # FIXME, better write to disk sequentially and not keep tgz contents in memory
189 tgz_path = os.path.join(dest_dir, name)
190 with open(tgz_path, "wb") as tgz_handle:
191 # tgz_contents = BytesIO()
192 tgz = gzopen_without_timestamps(name, mode="w", fileobj=tgz_handle)
193
194 def addfile(name, abs_path, tar):
195 info = tarfile.TarInfo(name=name)
196 info.size = os.stat(abs_path).st_size
197 info.mode = os.stat(abs_path).st_mode
198 with open(abs_path, 'rb') as file_handler:
199 tar.addfile(tarinfo=info, fileobj=file_handler)
200
201 for filename, abs_path in files.items():
202 if filename not in excluded:
203 addfile(filename, abs_path, tgz)
204
205 tgz.close()
206 ret = {}
207 for e in excluded:
208 if e in files:
209 ret[e] = files[e]
210
211 ret[name] = tgz_path
212
213 return ret
214
215
216 def unzip_and_get_files(files, destination_dir, tgz_name):
217 '''Moves all files from package_files, {relative_name: tmp_abs_path}
218 to destination_dir, unzipping the "tgz_name" if found'''
219
220 tgz_file = files.pop(tgz_name, None)
221 if tgz_file:
222 uncompress_file(tgz_file, destination_dir)
223
224 return relative_dirs(destination_dir)
225
226
227 def uncompress_file(src_path, dest_folder):
228 try:
229 with open(src_path, 'rb') as file_handler:
230 tar_extract(file_handler, dest_folder)
231 except Exception as e:
232 error_msg = "Error while downloading/extracting files to %s\n%s\n" % (dest_folder, str(e))
233 # try to remove the files
234 try:
235 if os.path.exists(dest_folder):
236 shutil.rmtree(dest_folder)
237 error_msg += "Folder removed"
238 except Exception as e:
239 error_msg += "Folder not removed, files/package might be damaged, remove manually"
240 raise ConanException(error_msg)
241
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/remote_manager.py b/conans/client/remote_manager.py
--- a/conans/client/remote_manager.py
+++ b/conans/client/remote_manager.py
@@ -97,6 +97,9 @@
files = unzip_and_get_files(zipped_files, dest_folder, EXPORT_TGZ_NAME)
# Make sure that the source dir is deleted
rmdir(self._client_cache.source(conan_reference), True)
+ for dirname, _, filenames in os.walk(dest_folder):
+ for fname in filenames:
+ touch(os.path.join(dirname, fname))
# TODO: Download only the CONANFILE file and only download the rest of files
# in install if needed (not found remote package)
return files
@@ -110,8 +113,8 @@
zipped_files = self._call_remote(remote, "get_package", package_reference, dest_folder)
files = unzip_and_get_files(zipped_files, dest_folder, PACKAGE_TGZ_NAME)
# Issue #214 https://github.com/conan-io/conan/issues/214
- for dirname, _, files in os.walk(dest_folder):
- for fname in files:
+ for dirname, _, filenames in os.walk(dest_folder):
+ for fname in filenames:
touch(os.path.join(dirname, fname))
return files
|
{"golden_diff": "diff --git a/conans/client/remote_manager.py b/conans/client/remote_manager.py\n--- a/conans/client/remote_manager.py\n+++ b/conans/client/remote_manager.py\n@@ -97,6 +97,9 @@\n files = unzip_and_get_files(zipped_files, dest_folder, EXPORT_TGZ_NAME)\n # Make sure that the source dir is deleted\n rmdir(self._client_cache.source(conan_reference), True)\n+ for dirname, _, filenames in os.walk(dest_folder):\n+ for fname in filenames:\n+ touch(os.path.join(dirname, fname))\n # TODO: Download only the CONANFILE file and only download the rest of files\n # in install if needed (not found remote package)\n return files\n@@ -110,8 +113,8 @@\n zipped_files = self._call_remote(remote, \"get_package\", package_reference, dest_folder)\n files = unzip_and_get_files(zipped_files, dest_folder, PACKAGE_TGZ_NAME)\n # Issue #214 https://github.com/conan-io/conan/issues/214\n- for dirname, _, files in os.walk(dest_folder):\n- for fname in files:\n+ for dirname, _, filenames in os.walk(dest_folder):\n+ for fname in filenames:\n touch(os.path.join(dirname, fname))\n \n return files\n", "issue": "CMake always rerun with Ninja generator\nHi,\n\nI am trying to use conan in our enterprise project and I am having problems when I use it in a CMake project in which I want to use the Ninja generator. This project has many external dependencies, and it has been working normally with any kind of generator (Make, Ninja, MSVC) before trying to use conan.\n\nThe first step we tried in our migration is to handle the boost dependency. When we use the Make generator everything works like a charm. However when I try to use the Ninja generator, every time I run the ninja command, CMake is rerunning the configuration many times.\n\nI tried to run Ninja with \"-d explain\" to determine the origin of the problem. For some reason many of the cmake files are considered as _dirty_. \n\n```\n# Configure project\nmkdir build\nconan install .. 
&& cmake -GNinja ..\n\n# Run ninja with debugging capabilities\nluis@p4dDesktop:~/projects/pix4d/master-conan/pix4dmapper/build$ ninja -d explain\nninja explain: output /home/luis/.conan/data/zlib/1.2.8/lasote/stable/package/21ace02f4960dd0c1d50bd3abe1537054de08157/FindZLIB.cmake of phony edge with no inputs doesn't exist\nninja explain: /home/luis/.conan/data/Boost/1.60.0/piponazo/testing/package/ed2b408ce34ce36caef16f74181d3bc588210ba6/FindBoost.cmake is dirty\nninja explain: /home/luis/.conan/data/zlib/1.2.8/lasote/stable/package/21ace02f4960dd0c1d50bd3abe1537054de08157/FindZLIB.cmake is dirty\n...\nninja explain: /home/luis/projects/Pix4DMapper-Master/master-conan/pix4dmapper/src/apps/CMakeLists.txt is dirty\n# A bunch of other project cmake files\n...\nninja explain: /usr/local/share/cmake-3.6/Modules/AutogenInfo.cmake.in is dirty\nninja explain: /usr/local/share/cmake-3.6/Modules/CMakeCCompiler.cmake.in is dirty\nninja explain: /usr/local/share/cmake-3.6/Modules/CMakeCCompilerABI.c is dirty\n# Many other cmake files\n...\nninja explain: CMakeCache.txt is dirty\nninja explain: CMakeFiles/3.6.2/CMakeCCompiler.cmake is dirty\nninja explain: CMakeFiles/3.6.2/CMakeCXXCompiler.cmake is dirty\nninja explain: CMakeFiles/3.6.2/CMakeSystem.cmake is dirty\nninja explain: CMakeFiles/feature_tests.c is dirty\nninja explain: CMakeFiles/feature_tests.cxx is dirty\nninja explain: conanbuildinfo.cmake is dirty\n[0/1] Re-running CMake...\n\n```\n\nNote that when I remove the following lines from my CMakeLists.txt file, the ninja generator starts to work again:\n\n```\n#include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)\n#conan_basic_setup()\n```\n\nAny of you has experienced similar issues?\n\n", "before_files": [{"content": "import os\nimport shutil\nimport tarfile\nimport time\nimport traceback\n\nfrom requests.exceptions import ConnectionError\n\nfrom conans.errors import ConanException, ConanConnectionError\nfrom conans.util.files import tar_extract, rmdir, relative_dirs\nfrom conans.util.log import logger\nfrom conans.paths import PACKAGE_TGZ_NAME, CONANINFO, CONAN_MANIFEST, CONANFILE, EXPORT_TGZ_NAME\nfrom conans.util.files import gzopen_without_timestamps\nfrom conans.util.files import touch\n\n\nclass RemoteManager(object):\n \"\"\" Will handle the remotes to get conans, packages etc \"\"\"\n\n def __init__(self, client_cache, remote_client, output):\n self._client_cache = client_cache\n self._output = output\n self._remote_client = remote_client\n\n def upload_conan(self, conan_reference, remote):\n \"\"\"Will upload the conans to the first remote\"\"\"\n basedir = self._client_cache.export(conan_reference)\n rel_files = self._client_cache.export_paths(conan_reference)\n the_files = {filename: os.path.join(basedir, filename) for filename in rel_files}\n\n if CONANFILE not in rel_files or CONAN_MANIFEST not in rel_files:\n raise ConanException(\"Cannot upload corrupted recipe '%s'\" % str(conan_reference))\n\n # FIXME: Check modified exports by hand?\n the_files = compress_export_files(the_files, basedir, self._output)\n\n return self._call_remote(remote, \"upload_conan\", conan_reference, the_files)\n\n def upload_package(self, package_reference, remote):\n \"\"\"Will upload the package to the first remote\"\"\"\n t1 = time.time()\n # existing package, will use short paths if defined\n basedir = self._client_cache.package(package_reference, short_paths=None)\n rel_files = self._client_cache.package_paths(package_reference)\n\n self._output.rewrite_line(\"Checking package integrity...\")\n if 
CONANINFO not in rel_files or CONAN_MANIFEST not in rel_files:\n raise ConanException(\"Cannot upload corrupted package '%s'\" % str(package_reference))\n\n the_files = {filename: os.path.join(basedir, filename) for filename in rel_files}\n logger.debug(\"====> Time remote_manager build_files_set : %f\" % (time.time() - t1))\n\n # If package has been modified remove tgz to regenerate it\n read_manifest, expected_manifest = self._client_cache.package_manifests(package_reference)\n if read_manifest is None or read_manifest.file_sums != expected_manifest.file_sums:\n if PACKAGE_TGZ_NAME in the_files:\n try:\n tgz_path = os.path.join(basedir, PACKAGE_TGZ_NAME)\n os.unlink(tgz_path)\n except Exception:\n pass\n raise ConanException(\"Cannot upload corrupted package '%s'\" % str(package_reference))\n else:\n self._output.rewrite_line(\"Package integrity OK!\")\n self._output.writeln(\"\")\n logger.debug(\"====> Time remote_manager check package integrity : %f\" % (time.time() - t1))\n\n the_files = compress_package_files(the_files, basedir, self._output)\n\n tmp = self._call_remote(remote, \"upload_package\", package_reference, the_files)\n logger.debug(\"====> Time remote_manager upload_package: %f\" % (time.time() - t1))\n return tmp\n\n def get_conan_digest(self, conan_reference, remote):\n \"\"\"\n Read ConanDigest from remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (ConanDigest, remote_name)\"\"\"\n return self._call_remote(remote, \"get_conan_digest\", conan_reference)\n\n def get_package_digest(self, package_reference, remote):\n \"\"\"\n Read ConanDigest from remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (ConanDigest, remote_name)\"\"\"\n return self._call_remote(remote, \"get_package_digest\", package_reference)\n\n def get_recipe(self, conan_reference, dest_folder, remote):\n \"\"\"\n Read the conans from remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (dict relative_filepath:abs_path , remote_name)\"\"\"\n zipped_files = self._call_remote(remote, \"get_recipe\", conan_reference, dest_folder)\n files = unzip_and_get_files(zipped_files, dest_folder, EXPORT_TGZ_NAME)\n # Make sure that the source dir is deleted\n rmdir(self._client_cache.source(conan_reference), True)\n# TODO: Download only the CONANFILE file and only download the rest of files\n# in install if needed (not found remote package)\n return files\n\n def get_package(self, package_reference, dest_folder, remote):\n \"\"\"\n Read the conans package from remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (dict relative_filepath:abs_path , remote_name)\"\"\"\n zipped_files = self._call_remote(remote, \"get_package\", package_reference, dest_folder)\n files = unzip_and_get_files(zipped_files, dest_folder, PACKAGE_TGZ_NAME)\n # Issue #214 https://github.com/conan-io/conan/issues/214\n for dirname, _, files in os.walk(dest_folder):\n for fname in files:\n touch(os.path.join(dirname, fname))\n\n return files\n\n def search(self, remote, pattern=None, ignorecase=True):\n \"\"\"\n Search exported conans information from remotes\n\n returns (dict str(conan_ref): {packages_info}\"\"\"\n return self._call_remote(remote, \"search\", pattern, ignorecase)\n\n def search_packages(self, remote, reference, query):\n return self._call_remote(remote, \"search_packages\", reference, query)\n\n def remove(self, conan_ref, remote):\n \"\"\"\n Removed conans or 
packages from remote\n \"\"\"\n return self._call_remote(remote, \"remove\", conan_ref)\n\n def remove_packages(self, conan_ref, remove_ids, remote):\n \"\"\"\n Removed conans or packages from remote\n \"\"\"\n return self._call_remote(remote, \"remove_packages\", conan_ref, remove_ids)\n\n def authenticate(self, remote, name, password):\n return self._call_remote(remote, 'authenticate', name, password)\n\n def _call_remote(self, remote, method, *argc, **argv):\n self._remote_client.remote = remote\n try:\n return getattr(self._remote_client, method)(*argc, **argv)\n except ConnectionError as exc:\n raise ConanConnectionError(\"Unable to connect to %s=%s\" % (remote.name, remote.url))\n except ConanException:\n raise\n except Exception as exc:\n logger.error(traceback.format_exc())\n raise ConanException(exc)\n\n\ndef compress_package_files(files, pkg_base_path, output):\n # Check if conan_package.tgz is present\n if PACKAGE_TGZ_NAME not in files:\n output.rewrite_line(\"Compressing package...\")\n return compress_files(files, PACKAGE_TGZ_NAME,\n excluded=(CONANINFO, CONAN_MANIFEST), dest_dir=pkg_base_path)\n else:\n the_files = {PACKAGE_TGZ_NAME: files[PACKAGE_TGZ_NAME],\n CONANINFO: files[CONANINFO],\n CONAN_MANIFEST: files[CONAN_MANIFEST]}\n\n return the_files\n\n\ndef compress_export_files(files, export_base_path, output):\n if EXPORT_TGZ_NAME not in files:\n output.rewrite_line(\"Compressing exported files...\")\n return compress_files(files, EXPORT_TGZ_NAME,\n excluded=(CONANFILE, CONAN_MANIFEST), dest_dir=export_base_path)\n else:\n the_files = {EXPORT_TGZ_NAME: files[EXPORT_TGZ_NAME],\n CONANFILE: files[CONANFILE],\n CONAN_MANIFEST: files[CONAN_MANIFEST]}\n return the_files\n return\n\n\ndef compress_files(files, name, excluded, dest_dir):\n \"\"\"Compress the package and returns the new dict (name => content) of files,\n only with the conanXX files and the compressed file\"\"\"\n\n # FIXME, better write to disk sequentially and not keep tgz contents in memory\n tgz_path = os.path.join(dest_dir, name)\n with open(tgz_path, \"wb\") as tgz_handle:\n # tgz_contents = BytesIO()\n tgz = gzopen_without_timestamps(name, mode=\"w\", fileobj=tgz_handle)\n\n def addfile(name, abs_path, tar):\n info = tarfile.TarInfo(name=name)\n info.size = os.stat(abs_path).st_size\n info.mode = os.stat(abs_path).st_mode\n with open(abs_path, 'rb') as file_handler:\n tar.addfile(tarinfo=info, fileobj=file_handler)\n\n for filename, abs_path in files.items():\n if filename not in excluded:\n addfile(filename, abs_path, tgz)\n\n tgz.close()\n ret = {}\n for e in excluded:\n if e in files:\n ret[e] = files[e]\n\n ret[name] = tgz_path\n\n return ret\n\n\ndef unzip_and_get_files(files, destination_dir, tgz_name):\n '''Moves all files from package_files, {relative_name: tmp_abs_path}\n to destination_dir, unzipping the \"tgz_name\" if found'''\n\n tgz_file = files.pop(tgz_name, None)\n if tgz_file:\n uncompress_file(tgz_file, destination_dir)\n\n return relative_dirs(destination_dir)\n\n\ndef uncompress_file(src_path, dest_folder):\n try:\n with open(src_path, 'rb') as file_handler:\n tar_extract(file_handler, dest_folder)\n except Exception as e:\n error_msg = \"Error while downloading/extracting files to %s\\n%s\\n\" % (dest_folder, str(e))\n # try to remove the files\n try:\n if os.path.exists(dest_folder):\n shutil.rmtree(dest_folder)\n error_msg += \"Folder removed\"\n except Exception as e:\n error_msg += \"Folder not removed, files/package might be damaged, remove manually\"\n raise 
ConanException(error_msg)\n", "path": "conans/client/remote_manager.py"}], "after_files": [{"content": "import os\nimport shutil\nimport tarfile\nimport time\nimport traceback\n\nfrom requests.exceptions import ConnectionError\n\nfrom conans.errors import ConanException, ConanConnectionError\nfrom conans.util.files import tar_extract, rmdir, relative_dirs\nfrom conans.util.log import logger\nfrom conans.paths import PACKAGE_TGZ_NAME, CONANINFO, CONAN_MANIFEST, CONANFILE, EXPORT_TGZ_NAME\nfrom conans.util.files import gzopen_without_timestamps\nfrom conans.util.files import touch\n\n\nclass RemoteManager(object):\n \"\"\" Will handle the remotes to get conans, packages etc \"\"\"\n\n def __init__(self, client_cache, remote_client, output):\n self._client_cache = client_cache\n self._output = output\n self._remote_client = remote_client\n\n def upload_conan(self, conan_reference, remote):\n \"\"\"Will upload the conans to the first remote\"\"\"\n basedir = self._client_cache.export(conan_reference)\n rel_files = self._client_cache.export_paths(conan_reference)\n the_files = {filename: os.path.join(basedir, filename) for filename in rel_files}\n\n if CONANFILE not in rel_files or CONAN_MANIFEST not in rel_files:\n raise ConanException(\"Cannot upload corrupted recipe '%s'\" % str(conan_reference))\n\n # FIXME: Check modified exports by hand?\n the_files = compress_export_files(the_files, basedir, self._output)\n\n return self._call_remote(remote, \"upload_conan\", conan_reference, the_files)\n\n def upload_package(self, package_reference, remote):\n \"\"\"Will upload the package to the first remote\"\"\"\n t1 = time.time()\n # existing package, will use short paths if defined\n basedir = self._client_cache.package(package_reference, short_paths=None)\n rel_files = self._client_cache.package_paths(package_reference)\n\n self._output.rewrite_line(\"Checking package integrity...\")\n if CONANINFO not in rel_files or CONAN_MANIFEST not in rel_files:\n raise ConanException(\"Cannot upload corrupted package '%s'\" % str(package_reference))\n\n the_files = {filename: os.path.join(basedir, filename) for filename in rel_files}\n logger.debug(\"====> Time remote_manager build_files_set : %f\" % (time.time() - t1))\n\n # If package has been modified remove tgz to regenerate it\n read_manifest, expected_manifest = self._client_cache.package_manifests(package_reference)\n if read_manifest is None or read_manifest.file_sums != expected_manifest.file_sums:\n if PACKAGE_TGZ_NAME in the_files:\n try:\n tgz_path = os.path.join(basedir, PACKAGE_TGZ_NAME)\n os.unlink(tgz_path)\n except Exception:\n pass\n raise ConanException(\"Cannot upload corrupted package '%s'\" % str(package_reference))\n else:\n self._output.rewrite_line(\"Package integrity OK!\")\n self._output.writeln(\"\")\n logger.debug(\"====> Time remote_manager check package integrity : %f\" % (time.time() - t1))\n\n the_files = compress_package_files(the_files, basedir, self._output)\n\n tmp = self._call_remote(remote, \"upload_package\", package_reference, the_files)\n logger.debug(\"====> Time remote_manager upload_package: %f\" % (time.time() - t1))\n return tmp\n\n def get_conan_digest(self, conan_reference, remote):\n \"\"\"\n Read ConanDigest from remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (ConanDigest, remote_name)\"\"\"\n return self._call_remote(remote, \"get_conan_digest\", conan_reference)\n\n def get_package_digest(self, package_reference, remote):\n \"\"\"\n Read ConanDigest from 
remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (ConanDigest, remote_name)\"\"\"\n return self._call_remote(remote, \"get_package_digest\", package_reference)\n\n def get_recipe(self, conan_reference, dest_folder, remote):\n \"\"\"\n Read the conans from remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (dict relative_filepath:abs_path , remote_name)\"\"\"\n zipped_files = self._call_remote(remote, \"get_recipe\", conan_reference, dest_folder)\n files = unzip_and_get_files(zipped_files, dest_folder, EXPORT_TGZ_NAME)\n # Make sure that the source dir is deleted\n rmdir(self._client_cache.source(conan_reference), True)\n for dirname, _, filenames in os.walk(dest_folder):\n for fname in filenames:\n touch(os.path.join(dirname, fname))\n# TODO: Download only the CONANFILE file and only download the rest of files\n# in install if needed (not found remote package)\n return files\n\n def get_package(self, package_reference, dest_folder, remote):\n \"\"\"\n Read the conans package from remotes\n Will iterate the remotes to find the conans unless remote was specified\n\n returns (dict relative_filepath:abs_path , remote_name)\"\"\"\n zipped_files = self._call_remote(remote, \"get_package\", package_reference, dest_folder)\n files = unzip_and_get_files(zipped_files, dest_folder, PACKAGE_TGZ_NAME)\n # Issue #214 https://github.com/conan-io/conan/issues/214\n for dirname, _, filenames in os.walk(dest_folder):\n for fname in filenames:\n touch(os.path.join(dirname, fname))\n\n return files\n\n def search(self, remote, pattern=None, ignorecase=True):\n \"\"\"\n Search exported conans information from remotes\n\n returns (dict str(conan_ref): {packages_info}\"\"\"\n return self._call_remote(remote, \"search\", pattern, ignorecase)\n\n def search_packages(self, remote, reference, query):\n return self._call_remote(remote, \"search_packages\", reference, query)\n\n def remove(self, conan_ref, remote):\n \"\"\"\n Removed conans or packages from remote\n \"\"\"\n return self._call_remote(remote, \"remove\", conan_ref)\n\n def remove_packages(self, conan_ref, remove_ids, remote):\n \"\"\"\n Removed conans or packages from remote\n \"\"\"\n return self._call_remote(remote, \"remove_packages\", conan_ref, remove_ids)\n\n def authenticate(self, remote, name, password):\n return self._call_remote(remote, 'authenticate', name, password)\n\n def _call_remote(self, remote, method, *argc, **argv):\n self._remote_client.remote = remote\n try:\n return getattr(self._remote_client, method)(*argc, **argv)\n except ConnectionError as exc:\n raise ConanConnectionError(\"Unable to connect to %s=%s\" % (remote.name, remote.url))\n except ConanException:\n raise\n except Exception as exc:\n logger.error(traceback.format_exc())\n raise ConanException(exc)\n\n\ndef compress_package_files(files, pkg_base_path, output):\n # Check if conan_package.tgz is present\n if PACKAGE_TGZ_NAME not in files:\n output.rewrite_line(\"Compressing package...\")\n return compress_files(files, PACKAGE_TGZ_NAME,\n excluded=(CONANINFO, CONAN_MANIFEST), dest_dir=pkg_base_path)\n else:\n the_files = {PACKAGE_TGZ_NAME: files[PACKAGE_TGZ_NAME],\n CONANINFO: files[CONANINFO],\n CONAN_MANIFEST: files[CONAN_MANIFEST]}\n\n return the_files\n\n\ndef compress_export_files(files, export_base_path, output):\n if EXPORT_TGZ_NAME not in files:\n output.rewrite_line(\"Compressing exported files...\")\n return compress_files(files, EXPORT_TGZ_NAME,\n excluded=(CONANFILE, 
CONAN_MANIFEST), dest_dir=export_base_path)\n else:\n the_files = {EXPORT_TGZ_NAME: files[EXPORT_TGZ_NAME],\n CONANFILE: files[CONANFILE],\n CONAN_MANIFEST: files[CONAN_MANIFEST]}\n return the_files\n return\n\n\ndef compress_files(files, name, excluded, dest_dir):\n \"\"\"Compress the package and returns the new dict (name => content) of files,\n only with the conanXX files and the compressed file\"\"\"\n\n # FIXME, better write to disk sequentially and not keep tgz contents in memory\n tgz_path = os.path.join(dest_dir, name)\n with open(tgz_path, \"wb\") as tgz_handle:\n # tgz_contents = BytesIO()\n tgz = gzopen_without_timestamps(name, mode=\"w\", fileobj=tgz_handle)\n\n def addfile(name, abs_path, tar):\n info = tarfile.TarInfo(name=name)\n info.size = os.stat(abs_path).st_size\n info.mode = os.stat(abs_path).st_mode\n with open(abs_path, 'rb') as file_handler:\n tar.addfile(tarinfo=info, fileobj=file_handler)\n\n for filename, abs_path in files.items():\n if filename not in excluded:\n addfile(filename, abs_path, tgz)\n\n tgz.close()\n ret = {}\n for e in excluded:\n if e in files:\n ret[e] = files[e]\n\n ret[name] = tgz_path\n\n return ret\n\n\ndef unzip_and_get_files(files, destination_dir, tgz_name):\n '''Moves all files from package_files, {relative_name: tmp_abs_path}\n to destination_dir, unzipping the \"tgz_name\" if found'''\n\n tgz_file = files.pop(tgz_name, None)\n if tgz_file:\n uncompress_file(tgz_file, destination_dir)\n\n return relative_dirs(destination_dir)\n\n\ndef uncompress_file(src_path, dest_folder):\n try:\n with open(src_path, 'rb') as file_handler:\n tar_extract(file_handler, dest_folder)\n except Exception as e:\n error_msg = \"Error while downloading/extracting files to %s\\n%s\\n\" % (dest_folder, str(e))\n # try to remove the files\n try:\n if os.path.exists(dest_folder):\n shutil.rmtree(dest_folder)\n error_msg += \"Folder removed\"\n except Exception as e:\n error_msg += \"Folder not removed, files/package might be damaged, remove manually\"\n raise ConanException(error_msg)\n", "path": "conans/client/remote_manager.py"}]}
| 3,862 | 293 |
gh_patches_debug_39554 | rasdani/github-patches | git_diff | python-discord__bot-443 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log full traceback with `log.exception` in exception handlers
While trying to reload the `reddit` cog, I noticed that the `cogs` cog doesn't log the full traceback in the `except` blocks, but just the exception message using `log.error`. It would be better to use `log.exception` here to make sure that the full traceback is included instead of just a message like ` 'NoneType' object has no attribute 'startswith'`. (`log.exception` automatically includes the full traceback when used in an exception handler, no additional arguments required.)
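For illustration, a minimal sketch of the difference using only the standard `logging` module (the message text here is made up):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

try:
    None.startswith("x")  # raises AttributeError, like the example message above
except Exception as e:
    # Logs only the message -- the traceback is lost:
    log.error(f"Failed to reload cog: {e}")
    # Logs the message plus the full traceback; equivalent to log.error(..., exc_info=True):
    log.exception("Failed to reload cog")
```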
I suspect that more cogs do this, so I think it's a good idea to check all the cogs after the Django migration is completed to change the log methods to `log.exception` inside exception handlers where appropriate.
Example from the `cogs` cog (it's both in `master` and `django`):
https://github.com/python-discord/bot/blob/5e16f4a52d59c73a04323e070e7b4a320e8c1e49/bot/cogs/cogs.py#L85
https://github.com/python-discord/bot/blob/25640adec9d042ccf249a91540fb09d354b04dfd/bot/cogs/cogs.py#L85
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/cogs.py`
Content:
```
1 import logging
2 import os
3
4 from discord import Colour, Embed
5 from discord.ext.commands import Bot, Cog, Context, group
6
7 from bot.constants import (
8 Emojis, MODERATION_ROLES, Roles, URLs
9 )
10 from bot.decorators import with_role
11 from bot.pagination import LinePaginator
12
13 log = logging.getLogger(__name__)
14
15 KEEP_LOADED = ["bot.cogs.cogs", "bot.cogs.modlog"]
16
17
18 class Cogs(Cog):
19 """Cog management commands."""
20
21 def __init__(self, bot: Bot):
22 self.bot = bot
23 self.cogs = {}
24
25 # Load up the cog names
26 log.info("Initializing cog names...")
27 for filename in os.listdir("bot/cogs"):
28 if filename.endswith(".py") and "_" not in filename:
29 if os.path.isfile(f"bot/cogs/{filename}"):
30 cog = filename[:-3]
31
32 self.cogs[cog] = f"bot.cogs.{cog}"
33
34 # Allow reverse lookups by reversing the pairs
35 self.cogs.update({v: k for k, v in self.cogs.items()})
36
37 @group(name='cogs', aliases=('c',), invoke_without_command=True)
38 @with_role(*MODERATION_ROLES, Roles.core_developer)
39 async def cogs_group(self, ctx: Context) -> None:
40 """Load, unload, reload, and list active cogs."""
41 await ctx.invoke(self.bot.get_command("help"), "cogs")
42
43 @cogs_group.command(name='load', aliases=('l',))
44 @with_role(*MODERATION_ROLES, Roles.core_developer)
45 async def load_command(self, ctx: Context, cog: str) -> None:
46 """
47 Load up an unloaded cog, given the module containing it.
48
49 You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the
50 entire module directly.
51 """
52 cog = cog.lower()
53
54 embed = Embed()
55 embed.colour = Colour.red()
56
57 embed.set_author(
58 name="Python Bot (Cogs)",
59 url=URLs.github_bot_repo,
60 icon_url=URLs.bot_avatar
61 )
62
63 if cog in self.cogs:
64 full_cog = self.cogs[cog]
65 elif "." in cog:
66 full_cog = cog
67 else:
68 full_cog = None
69 log.warning(f"{ctx.author} requested we load the '{cog}' cog, but that cog doesn't exist.")
70 embed.description = f"Unknown cog: {cog}"
71
72 if full_cog:
73 if full_cog not in self.bot.extensions:
74 try:
75 self.bot.load_extension(full_cog)
76 except ImportError:
77 log.error(f"{ctx.author} requested we load the '{cog}' cog, "
78 f"but the cog module {full_cog} could not be found!")
79 embed.description = f"Invalid cog: {cog}\n\nCould not find cog module {full_cog}"
80 except Exception as e:
81 log.error(f"{ctx.author} requested we load the '{cog}' cog, "
82 "but the loading failed with the following error: \n"
83 f"**{e.__class__.__name__}: {e}**")
84 embed.description = f"Failed to load cog: {cog}\n\n{e.__class__.__name__}: {e}"
85 else:
86 log.debug(f"{ctx.author} requested we load the '{cog}' cog. Cog loaded!")
87 embed.description = f"Cog loaded: {cog}"
88 embed.colour = Colour.green()
89 else:
90 log.warning(f"{ctx.author} requested we load the '{cog}' cog, but the cog was already loaded!")
91 embed.description = f"Cog {cog} is already loaded"
92
93 await ctx.send(embed=embed)
94
95 @cogs_group.command(name='unload', aliases=('ul',))
96 @with_role(*MODERATION_ROLES, Roles.core_developer)
97 async def unload_command(self, ctx: Context, cog: str) -> None:
98 """
99 Unload an already-loaded cog, given the module containing it.
100
101 You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the
102 entire module directly.
103 """
104 cog = cog.lower()
105
106 embed = Embed()
107 embed.colour = Colour.red()
108
109 embed.set_author(
110 name="Python Bot (Cogs)",
111 url=URLs.github_bot_repo,
112 icon_url=URLs.bot_avatar
113 )
114
115 if cog in self.cogs:
116 full_cog = self.cogs[cog]
117 elif "." in cog:
118 full_cog = cog
119 else:
120 full_cog = None
121 log.warning(f"{ctx.author} requested we unload the '{cog}' cog, but that cog doesn't exist.")
122 embed.description = f"Unknown cog: {cog}"
123
124 if full_cog:
125 if full_cog in KEEP_LOADED:
126 log.warning(f"{ctx.author} requested we unload `{full_cog}`, that sneaky pete. We said no.")
127 embed.description = f"You may not unload `{full_cog}`!"
128 elif full_cog in self.bot.extensions:
129 try:
130 self.bot.unload_extension(full_cog)
131 except Exception as e:
132 log.error(f"{ctx.author} requested we unload the '{cog}' cog, "
133 "but the unloading failed with the following error: \n"
134 f"{e}")
135 embed.description = f"Failed to unload cog: {cog}\n\n```{e}```"
136 else:
137 log.debug(f"{ctx.author} requested we unload the '{cog}' cog. Cog unloaded!")
138 embed.description = f"Cog unloaded: {cog}"
139 embed.colour = Colour.green()
140 else:
141 log.warning(f"{ctx.author} requested we unload the '{cog}' cog, but the cog wasn't loaded!")
142 embed.description = f"Cog {cog} is not loaded"
143
144 await ctx.send(embed=embed)
145
146 @cogs_group.command(name='reload', aliases=('r',))
147 @with_role(*MODERATION_ROLES, Roles.core_developer)
148 async def reload_command(self, ctx: Context, cog: str) -> None:
149 """
150 Reload an unloaded cog, given the module containing it.
151
152 You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the
153 entire module directly.
154
155 If you specify "*" as the cog, every cog currently loaded will be unloaded, and then every cog present in the
156 bot/cogs directory will be loaded.
157 """
158 cog = cog.lower()
159
160 embed = Embed()
161 embed.colour = Colour.red()
162
163 embed.set_author(
164 name="Python Bot (Cogs)",
165 url=URLs.github_bot_repo,
166 icon_url=URLs.bot_avatar
167 )
168
169 if cog == "*":
170 full_cog = cog
171 elif cog in self.cogs:
172 full_cog = self.cogs[cog]
173 elif "." in cog:
174 full_cog = cog
175 else:
176 full_cog = None
177 log.warning(f"{ctx.author} requested we reload the '{cog}' cog, but that cog doesn't exist.")
178 embed.description = f"Unknown cog: {cog}"
179
180 if full_cog:
181 if full_cog == "*":
182 all_cogs = [
183 f"bot.cogs.{fn[:-3]}" for fn in os.listdir("bot/cogs")
184 if os.path.isfile(f"bot/cogs/{fn}") and fn.endswith(".py") and "_" not in fn
185 ]
186
187 failed_unloads = {}
188 failed_loads = {}
189
190 unloaded = 0
191 loaded = 0
192
193 for loaded_cog in self.bot.extensions.copy().keys():
194 try:
195 self.bot.unload_extension(loaded_cog)
196 except Exception as e:
197 failed_unloads[loaded_cog] = f"{e.__class__.__name__}: {e}"
198 else:
199 unloaded += 1
200
201 for unloaded_cog in all_cogs:
202 try:
203 self.bot.load_extension(unloaded_cog)
204 except Exception as e:
205 failed_loads[unloaded_cog] = f"{e.__class__.__name__}: {e}"
206 else:
207 loaded += 1
208
209 lines = [
210 "**All cogs reloaded**",
211 f"**Unloaded**: {unloaded} / **Loaded**: {loaded}"
212 ]
213
214 if failed_unloads:
215 lines.append("\n**Unload failures**")
216
217 for cog, error in failed_unloads:
218 lines.append(f"{Emojis.status_dnd} **{cog}:** `{error}`")
219
220 if failed_loads:
221 lines.append("\n**Load failures**")
222
223 for cog, error in failed_loads.items():
224 lines.append(f"{Emojis.status_dnd} **{cog}:** `{error}`")
225
226 log.debug(f"{ctx.author} requested we reload all cogs. Here are the results: \n"
227 f"{lines}")
228
229 await LinePaginator.paginate(lines, ctx, embed, empty=False)
230 return
231
232 elif full_cog in self.bot.extensions:
233 try:
234 self.bot.unload_extension(full_cog)
235 self.bot.load_extension(full_cog)
236 except Exception as e:
237 log.error(f"{ctx.author} requested we reload the '{cog}' cog, "
238 "but the unloading failed with the following error: \n"
239 f"{e}")
240 embed.description = f"Failed to reload cog: {cog}\n\n```{e}```"
241 else:
242 log.debug(f"{ctx.author} requested we reload the '{cog}' cog. Cog reloaded!")
243 embed.description = f"Cog reload: {cog}"
244 embed.colour = Colour.green()
245 else:
246 log.warning(f"{ctx.author} requested we reload the '{cog}' cog, but the cog wasn't loaded!")
247 embed.description = f"Cog {cog} is not loaded"
248
249 await ctx.send(embed=embed)
250
251 @cogs_group.command(name='list', aliases=('all',))
252 @with_role(*MODERATION_ROLES, Roles.core_developer)
253 async def list_command(self, ctx: Context) -> None:
254 """
255 Get a list of all cogs, including their loaded status.
256
257 Gray indicates that the cog is unloaded. Green indicates that the cog is currently loaded.
258 """
259 embed = Embed()
260 lines = []
261 cogs = {}
262
263 embed.colour = Colour.blurple()
264 embed.set_author(
265 name="Python Bot (Cogs)",
266 url=URLs.github_bot_repo,
267 icon_url=URLs.bot_avatar
268 )
269
270 for key, _value in self.cogs.items():
271 if "." not in key:
272 continue
273
274 if key in self.bot.extensions:
275 cogs[key] = True
276 else:
277 cogs[key] = False
278
279 for key in self.bot.extensions.keys():
280 if key not in self.cogs:
281 cogs[key] = True
282
283 for cog, loaded in sorted(cogs.items(), key=lambda x: x[0]):
284 if cog in self.cogs:
285 cog = self.cogs[cog]
286
287 if loaded:
288 status = Emojis.status_online
289 else:
290 status = Emojis.status_offline
291
292 lines.append(f"{status} {cog}")
293
294 log.debug(f"{ctx.author} requested a list of all cogs. Returning a paginated list.")
295 await LinePaginator.paginate(lines, ctx, embed, max_size=300, empty=False)
296
297
298 def setup(bot: Bot) -> None:
299 """Cogs cog load."""
300 bot.add_cog(Cogs(bot))
301 log.info("Cog loaded: Cogs")
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bot/cogs/cogs.py b/bot/cogs/cogs.py
--- a/bot/cogs/cogs.py
+++ b/bot/cogs/cogs.py
@@ -74,13 +74,12 @@
try:
self.bot.load_extension(full_cog)
except ImportError:
- log.error(f"{ctx.author} requested we load the '{cog}' cog, "
- f"but the cog module {full_cog} could not be found!")
+ log.exception(f"{ctx.author} requested we load the '{cog}' cog, "
+ f"but the cog module {full_cog} could not be found!")
embed.description = f"Invalid cog: {cog}\n\nCould not find cog module {full_cog}"
except Exception as e:
- log.error(f"{ctx.author} requested we load the '{cog}' cog, "
- "but the loading failed with the following error: \n"
- f"**{e.__class__.__name__}: {e}**")
+ log.exception(f"{ctx.author} requested we load the '{cog}' cog, "
+ "but the loading failed")
embed.description = f"Failed to load cog: {cog}\n\n{e.__class__.__name__}: {e}"
else:
log.debug(f"{ctx.author} requested we load the '{cog}' cog. Cog loaded!")
@@ -129,9 +128,8 @@
try:
self.bot.unload_extension(full_cog)
except Exception as e:
- log.error(f"{ctx.author} requested we unload the '{cog}' cog, "
- "but the unloading failed with the following error: \n"
- f"{e}")
+ log.exception(f"{ctx.author} requested we unload the '{cog}' cog, "
+ "but the unloading failed")
embed.description = f"Failed to unload cog: {cog}\n\n```{e}```"
else:
log.debug(f"{ctx.author} requested we unload the '{cog}' cog. Cog unloaded!")
@@ -234,9 +232,8 @@
self.bot.unload_extension(full_cog)
self.bot.load_extension(full_cog)
except Exception as e:
- log.error(f"{ctx.author} requested we reload the '{cog}' cog, "
- "but the unloading failed with the following error: \n"
- f"{e}")
+ log.exception(f"{ctx.author} requested we reload the '{cog}' cog, "
+ "but the unloading failed")
embed.description = f"Failed to reload cog: {cog}\n\n```{e}```"
else:
log.debug(f"{ctx.author} requested we reload the '{cog}' cog. Cog reloaded!")
|
{"golden_diff": "diff --git a/bot/cogs/cogs.py b/bot/cogs/cogs.py\n--- a/bot/cogs/cogs.py\n+++ b/bot/cogs/cogs.py\n@@ -74,13 +74,12 @@\n try:\n self.bot.load_extension(full_cog)\n except ImportError:\n- log.error(f\"{ctx.author} requested we load the '{cog}' cog, \"\n- f\"but the cog module {full_cog} could not be found!\")\n+ log.exception(f\"{ctx.author} requested we load the '{cog}' cog, \"\n+ f\"but the cog module {full_cog} could not be found!\")\n embed.description = f\"Invalid cog: {cog}\\n\\nCould not find cog module {full_cog}\"\n except Exception as e:\n- log.error(f\"{ctx.author} requested we load the '{cog}' cog, \"\n- \"but the loading failed with the following error: \\n\"\n- f\"**{e.__class__.__name__}: {e}**\")\n+ log.exception(f\"{ctx.author} requested we load the '{cog}' cog, \"\n+ \"but the loading failed\")\n embed.description = f\"Failed to load cog: {cog}\\n\\n{e.__class__.__name__}: {e}\"\n else:\n log.debug(f\"{ctx.author} requested we load the '{cog}' cog. Cog loaded!\")\n@@ -129,9 +128,8 @@\n try:\n self.bot.unload_extension(full_cog)\n except Exception as e:\n- log.error(f\"{ctx.author} requested we unload the '{cog}' cog, \"\n- \"but the unloading failed with the following error: \\n\"\n- f\"{e}\")\n+ log.exception(f\"{ctx.author} requested we unload the '{cog}' cog, \"\n+ \"but the unloading failed\")\n embed.description = f\"Failed to unload cog: {cog}\\n\\n```{e}```\"\n else:\n log.debug(f\"{ctx.author} requested we unload the '{cog}' cog. Cog unloaded!\")\n@@ -234,9 +232,8 @@\n self.bot.unload_extension(full_cog)\n self.bot.load_extension(full_cog)\n except Exception as e:\n- log.error(f\"{ctx.author} requested we reload the '{cog}' cog, \"\n- \"but the unloading failed with the following error: \\n\"\n- f\"{e}\")\n+ log.exception(f\"{ctx.author} requested we reload the '{cog}' cog, \"\n+ \"but the unloading failed\")\n embed.description = f\"Failed to reload cog: {cog}\\n\\n```{e}```\"\n else:\n log.debug(f\"{ctx.author} requested we reload the '{cog}' cog. Cog reloaded!\")\n", "issue": "Log full traceback with `log.exception` in exception handlers\nWhile trying to reload the `reddit` cog, I noticed that the `cogs` cog doesn't log the full traceback in the `except` blocks, but just the exception message using `log.error`. It would be better to use `log.exception` here to make sure that the full traceback is included instead of just a message like ` 'NoneType' object has no attribute 'startswith'`. 
(`log.exception` automatically includes the full traceback when used in an exception handler, no additional arguments required.)\r\n\r\nI suspect that more cogs do this, so I think it's a good idea to check all the cogs after the Django migration is completed to change the log methods to `log.exception` inside exception handlers where appropriate.\r\n\r\nExample from the `cogs` cog (it's both in `master` and `django`):\r\nhttps://github.com/python-discord/bot/blob/5e16f4a52d59c73a04323e070e7b4a320e8c1e49/bot/cogs/cogs.py#L85\r\nhttps://github.com/python-discord/bot/blob/25640adec9d042ccf249a91540fb09d354b04dfd/bot/cogs/cogs.py#L85\n", "before_files": [{"content": "import logging\nimport os\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import Bot, Cog, Context, group\n\nfrom bot.constants import (\n Emojis, MODERATION_ROLES, Roles, URLs\n)\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\nlog = logging.getLogger(__name__)\n\nKEEP_LOADED = [\"bot.cogs.cogs\", \"bot.cogs.modlog\"]\n\n\nclass Cogs(Cog):\n \"\"\"Cog management commands.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.cogs = {}\n\n # Load up the cog names\n log.info(\"Initializing cog names...\")\n for filename in os.listdir(\"bot/cogs\"):\n if filename.endswith(\".py\") and \"_\" not in filename:\n if os.path.isfile(f\"bot/cogs/{filename}\"):\n cog = filename[:-3]\n\n self.cogs[cog] = f\"bot.cogs.{cog}\"\n\n # Allow reverse lookups by reversing the pairs\n self.cogs.update({v: k for k, v in self.cogs.items()})\n\n @group(name='cogs', aliases=('c',), invoke_without_command=True)\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def cogs_group(self, ctx: Context) -> None:\n \"\"\"Load, unload, reload, and list active cogs.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"cogs\")\n\n @cogs_group.command(name='load', aliases=('l',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def load_command(self, ctx: Context, cog: str) -> None:\n \"\"\"\n Load up an unloaded cog, given the module containing it.\n\n You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the\n entire module directly.\n \"\"\"\n cog = cog.lower()\n\n embed = Embed()\n embed.colour = Colour.red()\n\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n if cog in self.cogs:\n full_cog = self.cogs[cog]\n elif \".\" in cog:\n full_cog = cog\n else:\n full_cog = None\n log.warning(f\"{ctx.author} requested we load the '{cog}' cog, but that cog doesn't exist.\")\n embed.description = f\"Unknown cog: {cog}\"\n\n if full_cog:\n if full_cog not in self.bot.extensions:\n try:\n self.bot.load_extension(full_cog)\n except ImportError:\n log.error(f\"{ctx.author} requested we load the '{cog}' cog, \"\n f\"but the cog module {full_cog} could not be found!\")\n embed.description = f\"Invalid cog: {cog}\\n\\nCould not find cog module {full_cog}\"\n except Exception as e:\n log.error(f\"{ctx.author} requested we load the '{cog}' cog, \"\n \"but the loading failed with the following error: \\n\"\n f\"**{e.__class__.__name__}: {e}**\")\n embed.description = f\"Failed to load cog: {cog}\\n\\n{e.__class__.__name__}: {e}\"\n else:\n log.debug(f\"{ctx.author} requested we load the '{cog}' cog. 
Cog loaded!\")\n embed.description = f\"Cog loaded: {cog}\"\n embed.colour = Colour.green()\n else:\n log.warning(f\"{ctx.author} requested we load the '{cog}' cog, but the cog was already loaded!\")\n embed.description = f\"Cog {cog} is already loaded\"\n\n await ctx.send(embed=embed)\n\n @cogs_group.command(name='unload', aliases=('ul',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def unload_command(self, ctx: Context, cog: str) -> None:\n \"\"\"\n Unload an already-loaded cog, given the module containing it.\n\n You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the\n entire module directly.\n \"\"\"\n cog = cog.lower()\n\n embed = Embed()\n embed.colour = Colour.red()\n\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n if cog in self.cogs:\n full_cog = self.cogs[cog]\n elif \".\" in cog:\n full_cog = cog\n else:\n full_cog = None\n log.warning(f\"{ctx.author} requested we unload the '{cog}' cog, but that cog doesn't exist.\")\n embed.description = f\"Unknown cog: {cog}\"\n\n if full_cog:\n if full_cog in KEEP_LOADED:\n log.warning(f\"{ctx.author} requested we unload `{full_cog}`, that sneaky pete. We said no.\")\n embed.description = f\"You may not unload `{full_cog}`!\"\n elif full_cog in self.bot.extensions:\n try:\n self.bot.unload_extension(full_cog)\n except Exception as e:\n log.error(f\"{ctx.author} requested we unload the '{cog}' cog, \"\n \"but the unloading failed with the following error: \\n\"\n f\"{e}\")\n embed.description = f\"Failed to unload cog: {cog}\\n\\n```{e}```\"\n else:\n log.debug(f\"{ctx.author} requested we unload the '{cog}' cog. Cog unloaded!\")\n embed.description = f\"Cog unloaded: {cog}\"\n embed.colour = Colour.green()\n else:\n log.warning(f\"{ctx.author} requested we unload the '{cog}' cog, but the cog wasn't loaded!\")\n embed.description = f\"Cog {cog} is not loaded\"\n\n await ctx.send(embed=embed)\n\n @cogs_group.command(name='reload', aliases=('r',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def reload_command(self, ctx: Context, cog: str) -> None:\n \"\"\"\n Reload an unloaded cog, given the module containing it.\n\n You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the\n entire module directly.\n\n If you specify \"*\" as the cog, every cog currently loaded will be unloaded, and then every cog present in the\n bot/cogs directory will be loaded.\n \"\"\"\n cog = cog.lower()\n\n embed = Embed()\n embed.colour = Colour.red()\n\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n if cog == \"*\":\n full_cog = cog\n elif cog in self.cogs:\n full_cog = self.cogs[cog]\n elif \".\" in cog:\n full_cog = cog\n else:\n full_cog = None\n log.warning(f\"{ctx.author} requested we reload the '{cog}' cog, but that cog doesn't exist.\")\n embed.description = f\"Unknown cog: {cog}\"\n\n if full_cog:\n if full_cog == \"*\":\n all_cogs = [\n f\"bot.cogs.{fn[:-3]}\" for fn in os.listdir(\"bot/cogs\")\n if os.path.isfile(f\"bot/cogs/{fn}\") and fn.endswith(\".py\") and \"_\" not in fn\n ]\n\n failed_unloads = {}\n failed_loads = {}\n\n unloaded = 0\n loaded = 0\n\n for loaded_cog in self.bot.extensions.copy().keys():\n try:\n self.bot.unload_extension(loaded_cog)\n except Exception as e:\n failed_unloads[loaded_cog] = f\"{e.__class__.__name__}: {e}\"\n else:\n unloaded += 1\n\n for unloaded_cog in all_cogs:\n try:\n 
self.bot.load_extension(unloaded_cog)\n except Exception as e:\n failed_loads[unloaded_cog] = f\"{e.__class__.__name__}: {e}\"\n else:\n loaded += 1\n\n lines = [\n \"**All cogs reloaded**\",\n f\"**Unloaded**: {unloaded} / **Loaded**: {loaded}\"\n ]\n\n if failed_unloads:\n lines.append(\"\\n**Unload failures**\")\n\n for cog, error in failed_unloads:\n lines.append(f\"{Emojis.status_dnd} **{cog}:** `{error}`\")\n\n if failed_loads:\n lines.append(\"\\n**Load failures**\")\n\n for cog, error in failed_loads.items():\n lines.append(f\"{Emojis.status_dnd} **{cog}:** `{error}`\")\n\n log.debug(f\"{ctx.author} requested we reload all cogs. Here are the results: \\n\"\n f\"{lines}\")\n\n await LinePaginator.paginate(lines, ctx, embed, empty=False)\n return\n\n elif full_cog in self.bot.extensions:\n try:\n self.bot.unload_extension(full_cog)\n self.bot.load_extension(full_cog)\n except Exception as e:\n log.error(f\"{ctx.author} requested we reload the '{cog}' cog, \"\n \"but the unloading failed with the following error: \\n\"\n f\"{e}\")\n embed.description = f\"Failed to reload cog: {cog}\\n\\n```{e}```\"\n else:\n log.debug(f\"{ctx.author} requested we reload the '{cog}' cog. Cog reloaded!\")\n embed.description = f\"Cog reload: {cog}\"\n embed.colour = Colour.green()\n else:\n log.warning(f\"{ctx.author} requested we reload the '{cog}' cog, but the cog wasn't loaded!\")\n embed.description = f\"Cog {cog} is not loaded\"\n\n await ctx.send(embed=embed)\n\n @cogs_group.command(name='list', aliases=('all',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def list_command(self, ctx: Context) -> None:\n \"\"\"\n Get a list of all cogs, including their loaded status.\n\n Gray indicates that the cog is unloaded. Green indicates that the cog is currently loaded.\n \"\"\"\n embed = Embed()\n lines = []\n cogs = {}\n\n embed.colour = Colour.blurple()\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n for key, _value in self.cogs.items():\n if \".\" not in key:\n continue\n\n if key in self.bot.extensions:\n cogs[key] = True\n else:\n cogs[key] = False\n\n for key in self.bot.extensions.keys():\n if key not in self.cogs:\n cogs[key] = True\n\n for cog, loaded in sorted(cogs.items(), key=lambda x: x[0]):\n if cog in self.cogs:\n cog = self.cogs[cog]\n\n if loaded:\n status = Emojis.status_online\n else:\n status = Emojis.status_offline\n\n lines.append(f\"{status} {cog}\")\n\n log.debug(f\"{ctx.author} requested a list of all cogs. 
Returning a paginated list.\")\n await LinePaginator.paginate(lines, ctx, embed, max_size=300, empty=False)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Cogs cog load.\"\"\"\n bot.add_cog(Cogs(bot))\n log.info(\"Cog loaded: Cogs\")\n", "path": "bot/cogs/cogs.py"}], "after_files": [{"content": "import logging\nimport os\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import Bot, Cog, Context, group\n\nfrom bot.constants import (\n Emojis, MODERATION_ROLES, Roles, URLs\n)\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\nlog = logging.getLogger(__name__)\n\nKEEP_LOADED = [\"bot.cogs.cogs\", \"bot.cogs.modlog\"]\n\n\nclass Cogs(Cog):\n \"\"\"Cog management commands.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.cogs = {}\n\n # Load up the cog names\n log.info(\"Initializing cog names...\")\n for filename in os.listdir(\"bot/cogs\"):\n if filename.endswith(\".py\") and \"_\" not in filename:\n if os.path.isfile(f\"bot/cogs/{filename}\"):\n cog = filename[:-3]\n\n self.cogs[cog] = f\"bot.cogs.{cog}\"\n\n # Allow reverse lookups by reversing the pairs\n self.cogs.update({v: k for k, v in self.cogs.items()})\n\n @group(name='cogs', aliases=('c',), invoke_without_command=True)\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def cogs_group(self, ctx: Context) -> None:\n \"\"\"Load, unload, reload, and list active cogs.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"cogs\")\n\n @cogs_group.command(name='load', aliases=('l',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def load_command(self, ctx: Context, cog: str) -> None:\n \"\"\"\n Load up an unloaded cog, given the module containing it.\n\n You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the\n entire module directly.\n \"\"\"\n cog = cog.lower()\n\n embed = Embed()\n embed.colour = Colour.red()\n\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n if cog in self.cogs:\n full_cog = self.cogs[cog]\n elif \".\" in cog:\n full_cog = cog\n else:\n full_cog = None\n log.warning(f\"{ctx.author} requested we load the '{cog}' cog, but that cog doesn't exist.\")\n embed.description = f\"Unknown cog: {cog}\"\n\n if full_cog:\n if full_cog not in self.bot.extensions:\n try:\n self.bot.load_extension(full_cog)\n except ImportError:\n log.exception(f\"{ctx.author} requested we load the '{cog}' cog, \"\n f\"but the cog module {full_cog} could not be found!\")\n embed.description = f\"Invalid cog: {cog}\\n\\nCould not find cog module {full_cog}\"\n except Exception as e:\n log.exception(f\"{ctx.author} requested we load the '{cog}' cog, \"\n \"but the loading failed\")\n embed.description = f\"Failed to load cog: {cog}\\n\\n{e.__class__.__name__}: {e}\"\n else:\n log.debug(f\"{ctx.author} requested we load the '{cog}' cog. 
Cog loaded!\")\n embed.description = f\"Cog loaded: {cog}\"\n embed.colour = Colour.green()\n else:\n log.warning(f\"{ctx.author} requested we load the '{cog}' cog, but the cog was already loaded!\")\n embed.description = f\"Cog {cog} is already loaded\"\n\n await ctx.send(embed=embed)\n\n @cogs_group.command(name='unload', aliases=('ul',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def unload_command(self, ctx: Context, cog: str) -> None:\n \"\"\"\n Unload an already-loaded cog, given the module containing it.\n\n You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the\n entire module directly.\n \"\"\"\n cog = cog.lower()\n\n embed = Embed()\n embed.colour = Colour.red()\n\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n if cog in self.cogs:\n full_cog = self.cogs[cog]\n elif \".\" in cog:\n full_cog = cog\n else:\n full_cog = None\n log.warning(f\"{ctx.author} requested we unload the '{cog}' cog, but that cog doesn't exist.\")\n embed.description = f\"Unknown cog: {cog}\"\n\n if full_cog:\n if full_cog in KEEP_LOADED:\n log.warning(f\"{ctx.author} requested we unload `{full_cog}`, that sneaky pete. We said no.\")\n embed.description = f\"You may not unload `{full_cog}`!\"\n elif full_cog in self.bot.extensions:\n try:\n self.bot.unload_extension(full_cog)\n except Exception as e:\n log.exception(f\"{ctx.author} requested we unload the '{cog}' cog, \"\n \"but the unloading failed\")\n embed.description = f\"Failed to unload cog: {cog}\\n\\n```{e}```\"\n else:\n log.debug(f\"{ctx.author} requested we unload the '{cog}' cog. Cog unloaded!\")\n embed.description = f\"Cog unloaded: {cog}\"\n embed.colour = Colour.green()\n else:\n log.warning(f\"{ctx.author} requested we unload the '{cog}' cog, but the cog wasn't loaded!\")\n embed.description = f\"Cog {cog} is not loaded\"\n\n await ctx.send(embed=embed)\n\n @cogs_group.command(name='reload', aliases=('r',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def reload_command(self, ctx: Context, cog: str) -> None:\n \"\"\"\n Reload an unloaded cog, given the module containing it.\n\n You can specify the cog name for any cogs that are placed directly within `!cogs`, or specify the\n entire module directly.\n\n If you specify \"*\" as the cog, every cog currently loaded will be unloaded, and then every cog present in the\n bot/cogs directory will be loaded.\n \"\"\"\n cog = cog.lower()\n\n embed = Embed()\n embed.colour = Colour.red()\n\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n if cog == \"*\":\n full_cog = cog\n elif cog in self.cogs:\n full_cog = self.cogs[cog]\n elif \".\" in cog:\n full_cog = cog\n else:\n full_cog = None\n log.warning(f\"{ctx.author} requested we reload the '{cog}' cog, but that cog doesn't exist.\")\n embed.description = f\"Unknown cog: {cog}\"\n\n if full_cog:\n if full_cog == \"*\":\n all_cogs = [\n f\"bot.cogs.{fn[:-3]}\" for fn in os.listdir(\"bot/cogs\")\n if os.path.isfile(f\"bot/cogs/{fn}\") and fn.endswith(\".py\") and \"_\" not in fn\n ]\n\n failed_unloads = {}\n failed_loads = {}\n\n unloaded = 0\n loaded = 0\n\n for loaded_cog in self.bot.extensions.copy().keys():\n try:\n self.bot.unload_extension(loaded_cog)\n except Exception as e:\n failed_unloads[loaded_cog] = f\"{e.__class__.__name__}: {e}\"\n else:\n unloaded += 1\n\n for unloaded_cog in all_cogs:\n try:\n self.bot.load_extension(unloaded_cog)\n 
except Exception as e:\n failed_loads[unloaded_cog] = f\"{e.__class__.__name__}: {e}\"\n else:\n loaded += 1\n\n lines = [\n \"**All cogs reloaded**\",\n f\"**Unloaded**: {unloaded} / **Loaded**: {loaded}\"\n ]\n\n if failed_unloads:\n lines.append(\"\\n**Unload failures**\")\n\n for cog, error in failed_unloads:\n lines.append(f\"{Emojis.status_dnd} **{cog}:** `{error}`\")\n\n if failed_loads:\n lines.append(\"\\n**Load failures**\")\n\n for cog, error in failed_loads.items():\n lines.append(f\"{Emojis.status_dnd} **{cog}:** `{error}`\")\n\n log.debug(f\"{ctx.author} requested we reload all cogs. Here are the results: \\n\"\n f\"{lines}\")\n\n await LinePaginator.paginate(lines, ctx, embed, empty=False)\n return\n\n elif full_cog in self.bot.extensions:\n try:\n self.bot.unload_extension(full_cog)\n self.bot.load_extension(full_cog)\n except Exception as e:\n log.exception(f\"{ctx.author} requested we reload the '{cog}' cog, \"\n \"but the unloading failed\")\n embed.description = f\"Failed to reload cog: {cog}\\n\\n```{e}```\"\n else:\n log.debug(f\"{ctx.author} requested we reload the '{cog}' cog. Cog reloaded!\")\n embed.description = f\"Cog reload: {cog}\"\n embed.colour = Colour.green()\n else:\n log.warning(f\"{ctx.author} requested we reload the '{cog}' cog, but the cog wasn't loaded!\")\n embed.description = f\"Cog {cog} is not loaded\"\n\n await ctx.send(embed=embed)\n\n @cogs_group.command(name='list', aliases=('all',))\n @with_role(*MODERATION_ROLES, Roles.core_developer)\n async def list_command(self, ctx: Context) -> None:\n \"\"\"\n Get a list of all cogs, including their loaded status.\n\n Gray indicates that the cog is unloaded. Green indicates that the cog is currently loaded.\n \"\"\"\n embed = Embed()\n lines = []\n cogs = {}\n\n embed.colour = Colour.blurple()\n embed.set_author(\n name=\"Python Bot (Cogs)\",\n url=URLs.github_bot_repo,\n icon_url=URLs.bot_avatar\n )\n\n for key, _value in self.cogs.items():\n if \".\" not in key:\n continue\n\n if key in self.bot.extensions:\n cogs[key] = True\n else:\n cogs[key] = False\n\n for key in self.bot.extensions.keys():\n if key not in self.cogs:\n cogs[key] = True\n\n for cog, loaded in sorted(cogs.items(), key=lambda x: x[0]):\n if cog in self.cogs:\n cog = self.cogs[cog]\n\n if loaded:\n status = Emojis.status_online\n else:\n status = Emojis.status_offline\n\n lines.append(f\"{status} {cog}\")\n\n log.debug(f\"{ctx.author} requested a list of all cogs. Returning a paginated list.\")\n await LinePaginator.paginate(lines, ctx, embed, max_size=300, empty=False)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Cogs cog load.\"\"\"\n bot.add_cog(Cogs(bot))\n log.info(\"Cog loaded: Cogs\")\n", "path": "bot/cogs/cogs.py"}]}
| 3,928 | 622 |
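The issue quoted in the entry above hinges on how Python's `logging` behaves inside exception handlers: `log.exception(...)` records the active traceback automatically, while `log.error(...)` keeps only the message unless `exc_info=True` is passed. A minimal, self-contained sketch of the difference (the module and the failing call are made up for illustration, not taken from the bot):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)


def load_extension(name: str) -> None:
    # Stand-in for any operation that can raise inside a cog command.
    raise ImportError(f"could not find cog module {name!r}")


try:
    load_extension("bot.cogs.example")
except ImportError:
    # log.error("...") would emit only this message.
    # log.exception("...") emits the same ERROR-level record plus the traceback.
    log.exception("Loading the extension failed")
```

Because `log.exception` logs at ERROR level, it is a drop-in replacement for `log.error` inside `except` blocks, which is the substitution shown in the `after_files` above.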
gh_patches_debug_28690
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5345
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/Chat/coati/dataset/sft_dataset.py`
Content:
```
1 # Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 from typing import Dict, Optional, Sequence, Tuple
17
18 import torch
19 from coati.models.chatglm.chatglm_tokenizer import ChatGLMTokenizer
20 from torch.utils.data import Dataset
21 from tqdm import tqdm
22 from transformers import PreTrainedTokenizer
23
24 from colossalai.logging import get_dist_logger
25
26 from .utils import is_rank_0, jload
27
28 logger = get_dist_logger()
29
30 IGNORE_INDEX = -100
31 PROMPT_DICT = {
32 "prompt_input": (
33 "Below is an instruction that describes a task, paired with an input that provides further context. "
34 "Write a response that appropriately completes the request.\n\n"
35 "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
36 ),
37 "prompt_no_input": (
38 "Below is an instruction that describes a task. "
39 "Write a response that appropriately completes the request.\n\n"
40 "### Instruction:\n{instruction}\n\n### Response:"
41 ),
42 }
43
44
45 def _preprocess(
46 sources: Sequence[str],
47 targets: Sequence[str],
48 tokenizer: PreTrainedTokenizer,
49 max_length: int,
50 ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
51 """Preprocess the data by tokenizing."""
52 sequences = [s + t for s, t in zip(sources, targets)]
53 sequences_token = tokenizer(
54 sequences, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt"
55 )
56 sources_token = tokenizer(
57 sources, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt"
58 )
59
60 assert sequences_token["attention_mask"].dim() == 2, "seq2seq model should be preprocessed differently"
61 labels = copy.deepcopy(sequences_token["input_ids"])
62 for i in range(labels.shape[0]):
63 source_len = sources_token["attention_mask"][i].sum().item()
64 pad_len = max_length - sequences_token["attention_mask"][i].sum().item()
65 if tokenizer.padding_side == "right":
66 # |prompt|completion|eos|pad|
67 labels[i][:source_len] = IGNORE_INDEX
68 labels[i][-pad_len:] = IGNORE_INDEX
69 elif tokenizer.padding_side == "left":
70 # |pad|prompt|completion|eos|
71 labels[i][: pad_len + source_len] = IGNORE_INDEX
72 else:
73 raise RuntimeError()
74
75 return sequences_token["input_ids"], labels, sequences_token["attention_mask"]
76
77
78 def _preprocess_chatglm(
79 sources: Sequence[str],
80 targets: Sequence[str],
81 tokenizer: PreTrainedTokenizer,
82 max_length: int,
83 ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
84 """
85 Preprocess the data by tokenizing.
86 None for attention mask, ChatGLM will calculate attention mask according to input ids
87 """
88
89 labels = []
90 input_ids = []
91 for source, target in zip(sources, targets):
92 source_id = tokenizer.encode(text=source, add_special_tokens=False)
93 target_id = tokenizer.encode(text=target, add_special_tokens=False)
94 input_id = tokenizer.build_inputs_with_special_tokens(source_id, target_id)
95 # truncate
96 sp_token_list = [tokenizer.gmask_token_id, tokenizer.bos_token_id]
97 truncate_length = max(0, len(input_id) - max_length)
98 input_id = input_id[truncate_length:]
99 if truncate_length == len(source_id) + 1:
100 input_id = sp_token_list + input_id[1:]
101 elif truncate_length > len(source_id) + 1:
102 input_id = sp_token_list + input_id[2:]
103
104 context_length = input_id.index(tokenizer.bos_token_id)
105 mask_position = context_length - 1
106 label = [IGNORE_INDEX] * context_length + input_id[mask_position + 1 :]
107
108 pad_len = max_length - len(input_id)
109 input_id = input_id + [tokenizer.pad_token_id] * pad_len
110 input_ids.append(input_id)
111 labels.append(label + [IGNORE_INDEX] * pad_len)
112 return torch.tensor(input_ids), torch.tensor(labels), None
113
114
115 class SFTDataset(Dataset):
116 """
117 Dataset for sft model
118
119 Args:
120 dataset: dataset for supervised model
121 tokenizer: tokenizer for supervised model
122 max_length: max length of input
123 """
124
125 def __init__(self, dataset: Dict, tokenizer: PreTrainedTokenizer, max_length: int = 512) -> None:
126 super().__init__()
127 self.input_ids = []
128
129 sources = [data["prompt"] for data in dataset]
130 targets = [data["completion"] + tokenizer.eos_token for data in tqdm(dataset, disable=not is_rank_0())]
131
132 logger.info("Tokenizing inputs... This may take some time...")
133 if isinstance(tokenizer, ChatGLMTokenizer):
134 self.input_ids, self.labels, self.attention_mask = _preprocess_chatglm(
135 sources, targets, tokenizer, max_length
136 )
137 else:
138 self.input_ids, self.labels, self.attention_mask = _preprocess(sources, targets, tokenizer, max_length)
139
140 logger.info("Loaded dataset.")
141
142 def __len__(self):
143 length = self.input_ids.shape[0]
144 return length
145
146 def __getitem__(self, idx):
147 if self.attention_mask is not None:
148 return dict(input_ids=self.input_ids[idx], labels=self.labels[idx], attention_mask=self.attention_mask[idx])
149 else:
150 return dict(input_ids=self.input_ids[idx], labels=self.labels[idx])
151
152
153 class SupervisedDataset(Dataset):
154 """Dataset for supervised fine-tuning."""
155
156 def __init__(
157 self,
158 data_path: str,
159 tokenizer: PreTrainedTokenizer,
160 max_datasets_size: Optional[int] = None,
161 max_length: int = 512,
162 ):
163 super().__init__()
164 logger.info("Loading data...")
165 list_data_dict = jload(data_path)
166 logger.info(f"Loaded {len(list_data_dict)} examples.")
167
168 if max_datasets_size is not None:
169 logger.info(f"Limiting dataset to {max_datasets_size} examples.")
170 list_data_dict = list_data_dict[:max_datasets_size]
171
172 logger.info("Formatting inputs...")
173 prompt_input, prompt_no_input = PROMPT_DICT["prompt_input"], PROMPT_DICT["prompt_no_input"]
174 sources = [
175 prompt_input.format_map(example) if "input" in example else prompt_no_input.format_map(example)
176 for example in list_data_dict
177 ]
178 targets = [example["output"] + tokenizer.eos_token for example in list_data_dict]
179
180 logger.info("Tokenizing inputs... This may take some time...")
181 if isinstance(tokenizer, ChatGLMTokenizer):
182 self.input_ids, self.labels, self.attention_mask = _preprocess_chatglm(
183 sources, targets, tokenizer, max_length
184 )
185 else:
186 self.input_ids, self.labels, self.attention_mask = _preprocess(sources, targets, tokenizer, max_length)
187
188 logger.info("Loaded dataset.")
189
190 def __len__(self):
191 length = self.input_ids.shape[0]
192 return length
193
194 def __getitem__(self, idx):
195 if self.attention_mask is not None:
196 return dict(input_ids=self.input_ids[idx], labels=self.labels[idx], attention_mask=self.attention_mask[idx])
197 else:
198 return dict(input_ids=self.input_ids[idx], labels=self.labels[idx])
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/applications/Chat/coati/dataset/sft_dataset.py b/applications/Chat/coati/dataset/sft_dataset.py
--- a/applications/Chat/coati/dataset/sft_dataset.py
+++ b/applications/Chat/coati/dataset/sft_dataset.py
@@ -49,12 +49,13 @@
max_length: int,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Preprocess the data by tokenizing."""
- sequences = [s + t for s, t in zip(sources, targets)]
+ sequences = [s + t + tokenizer.eos_token for s, t in zip(sources, targets)]
sequences_token = tokenizer(
- sequences, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt"
+ sequences, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt", add_special_tokens=False
)
+
sources_token = tokenizer(
- sources, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt"
+ sources, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt", add_special_tokens=False
)
assert sequences_token["attention_mask"].dim() == 2, "seq2seq model should be preprocessed differently"
@@ -65,7 +66,8 @@
if tokenizer.padding_side == "right":
# |prompt|completion|eos|pad|
labels[i][:source_len] = IGNORE_INDEX
- labels[i][-pad_len:] = IGNORE_INDEX
+ if pad_len>0:
+ labels[i][-pad_len:] = IGNORE_INDEX
elif tokenizer.padding_side == "left":
# |pad|prompt|completion|eos|
labels[i][: pad_len + source_len] = IGNORE_INDEX
|
{"golden_diff": "diff --git a/applications/Chat/coati/dataset/sft_dataset.py b/applications/Chat/coati/dataset/sft_dataset.py\n--- a/applications/Chat/coati/dataset/sft_dataset.py\n+++ b/applications/Chat/coati/dataset/sft_dataset.py\n@@ -49,12 +49,13 @@\n max_length: int,\n ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n \"\"\"Preprocess the data by tokenizing.\"\"\"\n- sequences = [s + t for s, t in zip(sources, targets)]\n+ sequences = [s + t + tokenizer.eos_token for s, t in zip(sources, targets)]\n sequences_token = tokenizer(\n- sequences, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\"\n+ sequences, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\", add_special_tokens=False\n )\n+\n sources_token = tokenizer(\n- sources, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\"\n+ sources, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\", add_special_tokens=False\n )\n \n assert sequences_token[\"attention_mask\"].dim() == 2, \"seq2seq model should be preprocessed differently\"\n@@ -65,7 +66,8 @@\n if tokenizer.padding_side == \"right\":\n # |prompt|completion|eos|pad|\n labels[i][:source_len] = IGNORE_INDEX\n- labels[i][-pad_len:] = IGNORE_INDEX\n+ if pad_len>0:\n+ labels[i][-pad_len:] = IGNORE_INDEX\n elif tokenizer.padding_side == \"left\":\n # |pad|prompt|completion|eos|\n labels[i][: pad_len + source_len] = IGNORE_INDEX\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "# Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Dict, Optional, Sequence, Tuple\n\nimport torch\nfrom coati.models.chatglm.chatglm_tokenizer import ChatGLMTokenizer\nfrom torch.utils.data import Dataset\nfrom tqdm import tqdm\nfrom transformers import PreTrainedTokenizer\n\nfrom colossalai.logging import get_dist_logger\n\nfrom .utils import is_rank_0, jload\n\nlogger = get_dist_logger()\n\nIGNORE_INDEX = -100\nPROMPT_DICT = {\n \"prompt_input\": (\n \"Below is an instruction that describes a task, paired with an input that provides further context. \"\n \"Write a response that appropriately completes the request.\\n\\n\"\n \"### Instruction:\\n{instruction}\\n\\n### Input:\\n{input}\\n\\n### Response:\"\n ),\n \"prompt_no_input\": (\n \"Below is an instruction that describes a task. 
\"\n \"Write a response that appropriately completes the request.\\n\\n\"\n \"### Instruction:\\n{instruction}\\n\\n### Response:\"\n ),\n}\n\n\ndef _preprocess(\n sources: Sequence[str],\n targets: Sequence[str],\n tokenizer: PreTrainedTokenizer,\n max_length: int,\n) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n \"\"\"Preprocess the data by tokenizing.\"\"\"\n sequences = [s + t for s, t in zip(sources, targets)]\n sequences_token = tokenizer(\n sequences, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\"\n )\n sources_token = tokenizer(\n sources, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\"\n )\n\n assert sequences_token[\"attention_mask\"].dim() == 2, \"seq2seq model should be preprocessed differently\"\n labels = copy.deepcopy(sequences_token[\"input_ids\"])\n for i in range(labels.shape[0]):\n source_len = sources_token[\"attention_mask\"][i].sum().item()\n pad_len = max_length - sequences_token[\"attention_mask\"][i].sum().item()\n if tokenizer.padding_side == \"right\":\n # |prompt|completion|eos|pad|\n labels[i][:source_len] = IGNORE_INDEX\n labels[i][-pad_len:] = IGNORE_INDEX\n elif tokenizer.padding_side == \"left\":\n # |pad|prompt|completion|eos|\n labels[i][: pad_len + source_len] = IGNORE_INDEX\n else:\n raise RuntimeError()\n\n return sequences_token[\"input_ids\"], labels, sequences_token[\"attention_mask\"]\n\n\ndef _preprocess_chatglm(\n sources: Sequence[str],\n targets: Sequence[str],\n tokenizer: PreTrainedTokenizer,\n max_length: int,\n) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n \"\"\"\n Preprocess the data by tokenizing.\n None for attention mask, ChatGLM will calculate attention mask according to input ids\n \"\"\"\n\n labels = []\n input_ids = []\n for source, target in zip(sources, targets):\n source_id = tokenizer.encode(text=source, add_special_tokens=False)\n target_id = tokenizer.encode(text=target, add_special_tokens=False)\n input_id = tokenizer.build_inputs_with_special_tokens(source_id, target_id)\n # truncate\n sp_token_list = [tokenizer.gmask_token_id, tokenizer.bos_token_id]\n truncate_length = max(0, len(input_id) - max_length)\n input_id = input_id[truncate_length:]\n if truncate_length == len(source_id) + 1:\n input_id = sp_token_list + input_id[1:]\n elif truncate_length > len(source_id) + 1:\n input_id = sp_token_list + input_id[2:]\n\n context_length = input_id.index(tokenizer.bos_token_id)\n mask_position = context_length - 1\n label = [IGNORE_INDEX] * context_length + input_id[mask_position + 1 :]\n\n pad_len = max_length - len(input_id)\n input_id = input_id + [tokenizer.pad_token_id] * pad_len\n input_ids.append(input_id)\n labels.append(label + [IGNORE_INDEX] * pad_len)\n return torch.tensor(input_ids), torch.tensor(labels), None\n\n\nclass SFTDataset(Dataset):\n \"\"\"\n Dataset for sft model\n\n Args:\n dataset: dataset for supervised model\n tokenizer: tokenizer for supervised model\n max_length: max length of input\n \"\"\"\n\n def __init__(self, dataset: Dict, tokenizer: PreTrainedTokenizer, max_length: int = 512) -> None:\n super().__init__()\n self.input_ids = []\n\n sources = [data[\"prompt\"] for data in dataset]\n targets = [data[\"completion\"] + tokenizer.eos_token for data in tqdm(dataset, disable=not is_rank_0())]\n\n logger.info(\"Tokenizing inputs... 
This may take some time...\")\n if isinstance(tokenizer, ChatGLMTokenizer):\n self.input_ids, self.labels, self.attention_mask = _preprocess_chatglm(\n sources, targets, tokenizer, max_length\n )\n else:\n self.input_ids, self.labels, self.attention_mask = _preprocess(sources, targets, tokenizer, max_length)\n\n logger.info(\"Loaded dataset.\")\n\n def __len__(self):\n length = self.input_ids.shape[0]\n return length\n\n def __getitem__(self, idx):\n if self.attention_mask is not None:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx], attention_mask=self.attention_mask[idx])\n else:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx])\n\n\nclass SupervisedDataset(Dataset):\n \"\"\"Dataset for supervised fine-tuning.\"\"\"\n\n def __init__(\n self,\n data_path: str,\n tokenizer: PreTrainedTokenizer,\n max_datasets_size: Optional[int] = None,\n max_length: int = 512,\n ):\n super().__init__()\n logger.info(\"Loading data...\")\n list_data_dict = jload(data_path)\n logger.info(f\"Loaded {len(list_data_dict)} examples.\")\n\n if max_datasets_size is not None:\n logger.info(f\"Limiting dataset to {max_datasets_size} examples.\")\n list_data_dict = list_data_dict[:max_datasets_size]\n\n logger.info(\"Formatting inputs...\")\n prompt_input, prompt_no_input = PROMPT_DICT[\"prompt_input\"], PROMPT_DICT[\"prompt_no_input\"]\n sources = [\n prompt_input.format_map(example) if \"input\" in example else prompt_no_input.format_map(example)\n for example in list_data_dict\n ]\n targets = [example[\"output\"] + tokenizer.eos_token for example in list_data_dict]\n\n logger.info(\"Tokenizing inputs... This may take some time...\")\n if isinstance(tokenizer, ChatGLMTokenizer):\n self.input_ids, self.labels, self.attention_mask = _preprocess_chatglm(\n sources, targets, tokenizer, max_length\n )\n else:\n self.input_ids, self.labels, self.attention_mask = _preprocess(sources, targets, tokenizer, max_length)\n\n logger.info(\"Loaded dataset.\")\n\n def __len__(self):\n length = self.input_ids.shape[0]\n return length\n\n def __getitem__(self, idx):\n if self.attention_mask is not None:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx], attention_mask=self.attention_mask[idx])\n else:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx])\n", "path": "applications/Chat/coati/dataset/sft_dataset.py"}], "after_files": [{"content": "# Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Dict, Optional, Sequence, Tuple\n\nimport torch\nfrom coati.models.chatglm.chatglm_tokenizer import ChatGLMTokenizer\nfrom torch.utils.data import Dataset\nfrom tqdm import tqdm\nfrom transformers import PreTrainedTokenizer\n\nfrom colossalai.logging import get_dist_logger\n\nfrom .utils import is_rank_0, jload\n\nlogger = get_dist_logger()\n\nIGNORE_INDEX = -100\nPROMPT_DICT = {\n \"prompt_input\": (\n \"Below is an instruction that describes a 
task, paired with an input that provides further context. \"\n \"Write a response that appropriately completes the request.\\n\\n\"\n \"### Instruction:\\n{instruction}\\n\\n### Input:\\n{input}\\n\\n### Response:\"\n ),\n \"prompt_no_input\": (\n \"Below is an instruction that describes a task. \"\n \"Write a response that appropriately completes the request.\\n\\n\"\n \"### Instruction:\\n{instruction}\\n\\n### Response:\"\n ),\n}\n\n\ndef _preprocess(\n sources: Sequence[str],\n targets: Sequence[str],\n tokenizer: PreTrainedTokenizer,\n max_length: int,\n) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n \"\"\"Preprocess the data by tokenizing.\"\"\"\n sequences = [s + t + tokenizer.eos_token for s, t in zip(sources, targets)]\n sequences_token = tokenizer(\n sequences, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\", add_special_tokens=False\n )\n\n sources_token = tokenizer(\n sources, max_length=max_length, padding=\"max_length\", truncation=True, return_tensors=\"pt\", add_special_tokens=False\n )\n\n assert sequences_token[\"attention_mask\"].dim() == 2, \"seq2seq model should be preprocessed differently\"\n labels = copy.deepcopy(sequences_token[\"input_ids\"])\n for i in range(labels.shape[0]):\n source_len = sources_token[\"attention_mask\"][i].sum().item()\n pad_len = max_length - sequences_token[\"attention_mask\"][i].sum().item()\n if tokenizer.padding_side == \"right\":\n # |prompt|completion|eos|pad|\n labels[i][:source_len] = IGNORE_INDEX\n if pad_len>0:\n labels[i][-pad_len:] = IGNORE_INDEX\n elif tokenizer.padding_side == \"left\":\n # |pad|prompt|completion|eos|\n labels[i][: pad_len + source_len] = IGNORE_INDEX\n else:\n raise RuntimeError()\n\n return sequences_token[\"input_ids\"], labels, sequences_token[\"attention_mask\"]\n\n\ndef _preprocess_chatglm(\n sources: Sequence[str],\n targets: Sequence[str],\n tokenizer: PreTrainedTokenizer,\n max_length: int,\n) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n \"\"\"\n Preprocess the data by tokenizing.\n None for attention mask, ChatGLM will calculate attention mask according to input ids\n \"\"\"\n\n labels = []\n input_ids = []\n for source, target in zip(sources, targets):\n source_id = tokenizer.encode(text=source, add_special_tokens=False)\n target_id = tokenizer.encode(text=target, add_special_tokens=False)\n input_id = tokenizer.build_inputs_with_special_tokens(source_id, target_id)\n # truncate\n sp_token_list = [tokenizer.gmask_token_id, tokenizer.bos_token_id]\n truncate_length = max(0, len(input_id) - max_length)\n input_id = input_id[truncate_length:]\n if truncate_length == len(source_id) + 1:\n input_id = sp_token_list + input_id[1:]\n elif truncate_length > len(source_id) + 1:\n input_id = sp_token_list + input_id[2:]\n\n context_length = input_id.index(tokenizer.bos_token_id)\n mask_position = context_length - 1\n label = [IGNORE_INDEX] * context_length + input_id[mask_position + 1 :]\n\n pad_len = max_length - len(input_id)\n input_id = input_id + [tokenizer.pad_token_id] * pad_len\n input_ids.append(input_id)\n labels.append(label + [IGNORE_INDEX] * pad_len)\n return torch.tensor(input_ids), torch.tensor(labels), None\n\n\nclass SFTDataset(Dataset):\n \"\"\"\n Dataset for sft model\n\n Args:\n dataset: dataset for supervised model\n tokenizer: tokenizer for supervised model\n max_length: max length of input\n \"\"\"\n\n def __init__(self, dataset: Dict, tokenizer: PreTrainedTokenizer, max_length: int = 512) -> None:\n super().__init__()\n 
self.input_ids = []\n\n sources = [data[\"prompt\"] for data in dataset]\n targets = [data[\"completion\"] + tokenizer.eos_token for data in tqdm(dataset, disable=not is_rank_0())]\n\n logger.info(\"Tokenizing inputs... This may take some time...\")\n if isinstance(tokenizer, ChatGLMTokenizer):\n self.input_ids, self.labels, self.attention_mask = _preprocess_chatglm(\n sources, targets, tokenizer, max_length\n )\n else:\n self.input_ids, self.labels, self.attention_mask = _preprocess(sources, targets, tokenizer, max_length)\n\n logger.info(\"Loaded dataset.\")\n\n def __len__(self):\n length = self.input_ids.shape[0]\n return length\n\n def __getitem__(self, idx):\n if self.attention_mask is not None:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx], attention_mask=self.attention_mask[idx])\n else:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx])\n\n\nclass SupervisedDataset(Dataset):\n \"\"\"Dataset for supervised fine-tuning.\"\"\"\n\n def __init__(\n self,\n data_path: str,\n tokenizer: PreTrainedTokenizer,\n max_datasets_size: Optional[int] = None,\n max_length: int = 512,\n ):\n super().__init__()\n logger.info(\"Loading data...\")\n list_data_dict = jload(data_path)\n logger.info(f\"Loaded {len(list_data_dict)} examples.\")\n\n if max_datasets_size is not None:\n logger.info(f\"Limiting dataset to {max_datasets_size} examples.\")\n list_data_dict = list_data_dict[:max_datasets_size]\n\n logger.info(\"Formatting inputs...\")\n prompt_input, prompt_no_input = PROMPT_DICT[\"prompt_input\"], PROMPT_DICT[\"prompt_no_input\"]\n sources = [\n prompt_input.format_map(example) if \"input\" in example else prompt_no_input.format_map(example)\n for example in list_data_dict\n ]\n targets = [example[\"output\"] + tokenizer.eos_token for example in list_data_dict]\n\n logger.info(\"Tokenizing inputs... This may take some time...\")\n if isinstance(tokenizer, ChatGLMTokenizer):\n self.input_ids, self.labels, self.attention_mask = _preprocess_chatglm(\n sources, targets, tokenizer, max_length\n )\n else:\n self.input_ids, self.labels, self.attention_mask = _preprocess(sources, targets, tokenizer, max_length)\n\n logger.info(\"Loaded dataset.\")\n\n def __len__(self):\n length = self.input_ids.shape[0]\n return length\n\n def __getitem__(self, idx):\n if self.attention_mask is not None:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx], attention_mask=self.attention_mask[idx])\n else:\n return dict(input_ids=self.input_ids[idx], labels=self.labels[idx])\n", "path": "applications/Chat/coati/dataset/sft_dataset.py"}]}
| 2,538 | 400 |
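The `if pad_len>0` guard added in the diff above addresses a slicing pitfall: when a tokenized sequence already fills `max_length`, `pad_len` is 0, and `labels[i][-0:]` is the same as `labels[i][0:]`, so the unguarded assignment would mask the entire row with `IGNORE_INDEX` instead of masking nothing. A small stand-alone demonstration (the tensor values are arbitrary):

```python
import torch

IGNORE_INDEX = -100

labels = torch.arange(8)           # stand-in for one row of token labels
pad_len = 0                        # sequence exactly fills max_length -> no padding

labels[-pad_len:] = IGNORE_INDEX   # [-0:] == [0:]  -> masks the WHOLE row
print(labels)                      # tensor([-100, -100, -100, -100, -100, -100, -100, -100])

labels = torch.arange(8)
if pad_len > 0:                    # the guard from the golden diff
    labels[-pad_len:] = IGNORE_INDEX
print(labels)                      # tensor([0, 1, 2, 3, 4, 5, 6, 7]) -- row left intact
```

The remaining changes in the diff append `tokenizer.eos_token` explicitly and pass `add_special_tokens=False` to both tokenizer calls, so the source lengths measured from `sources_token` line up with positions in `sequences_token` rather than being shifted by tokenizer-inserted special tokens.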
gh_patches_debug_33551
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-5443
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Counting Comments on map popup and list items (2 issues - similar problem in a+)
**URL:** https://meinberlin-dev.liqd.net/mapideas/2023-01031/ ; https://meinberlin-dev.liqd.net/projekte/testprojekt-newsletter/
**user:** any
**expected behaviour:** the counting of comments should be consistent
**behaviour:**
1. The number of comments in the detail idea view is not the same anymore as the number in the idea overview (list & map). This is because the detail ide view now counts as well child comments while the idea overview doesn't. (see screenshot 1 vs. 2)
2. The counting in the detail view stops at 100 seperate comments. If there are child comments, it adds to counting of 100. The number is then also different to the idea overview. If I scroll down, then new comments are loaded and the counting number on top changes. This can be very confusing. (see screenshot 1, 2 & 3)
**important screensize:** any
**device & browser:** mac ff
**Comment/Question:**
Screenshot?
**1. screenshot of idea overview (map)**
<img width="821" alt="Bildschirmfoto 2023-08-01 um 15 36 52" src="https://github.com/liqd/a4-meinberlin/assets/113608720/ac6d7dd2-9785-49ad-85d4-f380cda6401d">
**2. screenshot of idea detail view with child comments**
<img width="847" alt="Bildschirmfoto 2023-08-01 um 15 37 17" src="https://github.com/liqd/a4-meinberlin/assets/113608720/45951686-f9d2-4acb-8615-8b75182ac943">
**3. screenshot of idea detail view with child comments and scrolled down**
<img width="972" alt="Bildschirmfoto 2023-08-01 um 15 37 40" src="https://github.com/liqd/a4-meinberlin/assets/113608720/3e2c3d16-0578-4a87-8f47-285d61e04be3">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/projects/templatetags/meinberlin_project_tags.py`
Content:
```
1 from django import template
2
3 from adhocracy4.comments.models import Comment
4 from adhocracy4.polls.models import Vote as Vote
5 from meinberlin.apps.budgeting.models import Proposal as budget_proposal
6 from meinberlin.apps.ideas.models import Idea
7 from meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal
8 from meinberlin.apps.likes.models import Like
9 from meinberlin.apps.livequestions.models import LiveQuestion
10 from meinberlin.apps.mapideas.models import MapIdea
11
12 register = template.Library()
13
14
15 @register.filter
16 def project_url(project):
17 if (
18 project.project_type == "meinberlin_bplan.Bplan"
19 or project.project_type == "meinberlin_extprojects.ExternalProject"
20 ):
21 return project.externalproject.url
22 return project.get_absolute_url()
23
24
25 @register.filter
26 def is_external(project):
27 return (
28 project.project_type == "meinberlin_bplan.Bplan"
29 or project.project_type == "meinberlin_extprojects.ExternalProject"
30 )
31
32
33 @register.simple_tag
34 def get_num_entries(module):
35 """Count all user-generated items."""
36 item_count = (
37 Idea.objects.filter(module=module).count()
38 + MapIdea.objects.filter(module=module).count()
39 + budget_proposal.objects.filter(module=module).count()
40 + kiezkasse_proposal.objects.filter(module=module).count()
41 + Comment.objects.filter(idea__module=module).count()
42 + Comment.objects.filter(mapidea__module=module).count()
43 + Comment.objects.filter(budget_proposal__module=module).count()
44 + Comment.objects.filter(kiezkasse_proposal__module=module).count()
45 + Comment.objects.filter(topic__module=module).count()
46 + Comment.objects.filter(maptopic__module=module).count()
47 + Comment.objects.filter(paragraph__chapter__module=module).count()
48 + Comment.objects.filter(chapter__module=module).count()
49 + Comment.objects.filter(poll__module=module).count()
50 + Vote.objects.filter(choice__question__poll__module=module).count()
51 + LiveQuestion.objects.filter(module=module).count()
52 + Like.objects.filter(question__module=module).count()
53 )
54 return item_count
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
@@ -1,4 +1,7 @@
from django import template
+from django.db.models import Count
+from django.db.models import Q
+from django.db.models import Sum
from adhocracy4.comments.models import Comment
from adhocracy4.polls.models import Vote as Vote
@@ -38,17 +41,28 @@
+ MapIdea.objects.filter(module=module).count()
+ budget_proposal.objects.filter(module=module).count()
+ kiezkasse_proposal.objects.filter(module=module).count()
- + Comment.objects.filter(idea__module=module).count()
- + Comment.objects.filter(mapidea__module=module).count()
- + Comment.objects.filter(budget_proposal__module=module).count()
- + Comment.objects.filter(kiezkasse_proposal__module=module).count()
- + Comment.objects.filter(topic__module=module).count()
- + Comment.objects.filter(maptopic__module=module).count()
- + Comment.objects.filter(paragraph__chapter__module=module).count()
- + Comment.objects.filter(chapter__module=module).count()
- + Comment.objects.filter(poll__module=module).count()
+ Vote.objects.filter(choice__question__poll__module=module).count()
+ LiveQuestion.objects.filter(module=module).count()
+ Like.objects.filter(question__module=module).count()
)
- return item_count
+ comment_filter = (
+ Q(idea__module=module)
+ | Q(mapidea__module=module)
+ | Q(budget_proposal__module=module)
+ | Q(kiezkasse_proposal__module=module)
+ | Q(topic__module=module)
+ | Q(maptopic__module=module)
+ | Q(paragraph__chapter__module=module)
+ | Q(chapter__module=module)
+ | Q(poll__module=module)
+ )
+ comment_count = (
+ Comment.objects.filter(comment_filter)
+ .annotate(child_comment_count=Count("child_comments__pk", distinct=True))
+ .aggregate(comment_count=Count("pk") + Sum("child_comment_count"))[
+ "comment_count"
+ ]
+ )
+ if comment_count is None:
+ comment_count = 0
+ return item_count + comment_count
|
{"golden_diff": "diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n@@ -1,4 +1,7 @@\n from django import template\n+from django.db.models import Count\n+from django.db.models import Q\n+from django.db.models import Sum\n \n from adhocracy4.comments.models import Comment\n from adhocracy4.polls.models import Vote as Vote\n@@ -38,17 +41,28 @@\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n- + Comment.objects.filter(idea__module=module).count()\n- + Comment.objects.filter(mapidea__module=module).count()\n- + Comment.objects.filter(budget_proposal__module=module).count()\n- + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n- + Comment.objects.filter(topic__module=module).count()\n- + Comment.objects.filter(maptopic__module=module).count()\n- + Comment.objects.filter(paragraph__chapter__module=module).count()\n- + Comment.objects.filter(chapter__module=module).count()\n- + Comment.objects.filter(poll__module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n- return item_count\n+ comment_filter = (\n+ Q(idea__module=module)\n+ | Q(mapidea__module=module)\n+ | Q(budget_proposal__module=module)\n+ | Q(kiezkasse_proposal__module=module)\n+ | Q(topic__module=module)\n+ | Q(maptopic__module=module)\n+ | Q(paragraph__chapter__module=module)\n+ | Q(chapter__module=module)\n+ | Q(poll__module=module)\n+ )\n+ comment_count = (\n+ Comment.objects.filter(comment_filter)\n+ .annotate(child_comment_count=Count(\"child_comments__pk\", distinct=True))\n+ .aggregate(comment_count=Count(\"pk\") + Sum(\"child_comment_count\"))[\n+ \"comment_count\"\n+ ]\n+ )\n+ if comment_count is None:\n+ comment_count = 0\n+ return item_count + comment_count\n", "issue": "Counting Comments on map popup and list items (2 issues - similar problem in a+)\n**URL:** https://meinberlin-dev.liqd.net/mapideas/2023-01031/ ; https://meinberlin-dev.liqd.net/projekte/testprojekt-newsletter/\r\n**user:** any\r\n**expected behaviour:** the counting of comments should be consistent\r\n**behaviour:** \r\n\r\n1. The number of comments in the detail idea view is not the same anymore as the number in the idea overview (list & map). This is because the detail ide view now counts as well child comments while the idea overview doesn't. (see screenshot 1 vs. 2)\r\n\r\n2. The counting in the detail view stops at 100 seperate comments. If there are child comments, it adds to counting of 100. The number is then also different to the idea overview. If I scroll down, then new comments are loaded and the counting number on top changes. This can be very confusing. (see screenshot 1, 2 & 3)\r\n\r\n**important screensize:** any\r\n**device & browser:** mac ff\r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n**1. screenshot of idea overview (map)**\r\n<img width=\"821\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 36 52\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/ac6d7dd2-9785-49ad-85d4-f380cda6401d\">\r\n\r\n**2. 
screenshot of idea detail view with child comments**\r\n<img width=\"847\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 37 17\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/45951686-f9d2-4acb-8615-8b75182ac943\">\r\n\r\n**3. screenshot of idea detail view with child comments and scrolled down**\r\n<img width=\"972\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 37 40\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/3e2c3d16-0578-4a87-8f47-285d61e04be3\">\r\n\r\n\n", "before_files": [{"content": "from django import template\n\nfrom adhocracy4.comments.models import Comment\nfrom adhocracy4.polls.models import Vote as Vote\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.likes.models import Like\nfrom meinberlin.apps.livequestions.models import LiveQuestion\nfrom meinberlin.apps.mapideas.models import MapIdea\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n ):\n return project.externalproject.url\n return project.get_absolute_url()\n\n\[email protected]\ndef is_external(project):\n return (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n )\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = (\n Idea.objects.filter(module=module).count()\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n + Comment.objects.filter(idea__module=module).count()\n + Comment.objects.filter(mapidea__module=module).count()\n + Comment.objects.filter(budget_proposal__module=module).count()\n + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n + Comment.objects.filter(topic__module=module).count()\n + Comment.objects.filter(maptopic__module=module).count()\n + Comment.objects.filter(paragraph__chapter__module=module).count()\n + Comment.objects.filter(chapter__module=module).count()\n + Comment.objects.filter(poll__module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n return item_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}], "after_files": [{"content": "from django import template\nfrom django.db.models import Count\nfrom django.db.models import Q\nfrom django.db.models import Sum\n\nfrom adhocracy4.comments.models import Comment\nfrom adhocracy4.polls.models import Vote as Vote\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.likes.models import Like\nfrom meinberlin.apps.livequestions.models import LiveQuestion\nfrom meinberlin.apps.mapideas.models import MapIdea\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n ):\n return project.externalproject.url\n 
return project.get_absolute_url()\n\n\[email protected]\ndef is_external(project):\n return (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n )\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = (\n Idea.objects.filter(module=module).count()\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n comment_filter = (\n Q(idea__module=module)\n | Q(mapidea__module=module)\n | Q(budget_proposal__module=module)\n | Q(kiezkasse_proposal__module=module)\n | Q(topic__module=module)\n | Q(maptopic__module=module)\n | Q(paragraph__chapter__module=module)\n | Q(chapter__module=module)\n | Q(poll__module=module)\n )\n comment_count = (\n Comment.objects.filter(comment_filter)\n .annotate(child_comment_count=Count(\"child_comments__pk\", distinct=True))\n .aggregate(comment_count=Count(\"pk\") + Sum(\"child_comment_count\"))[\n \"comment_count\"\n ]\n )\n if comment_count is None:\n comment_count = 0\n return item_count + comment_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}]}
| 1,440 | 598 |
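The golden diff above folds the nine per-relation `Comment` counts into a single OR-ed `Q` filter and then counts each comment together with its replies, which is what makes the overview total match the detail view. The same pattern, extracted into a hypothetical helper for illustration — the `Comment` model and its `child_comments` reverse relation are the ones imported in the file above, and the two `Q` clauses shown are only a subset of the nine relations used in the real filter:

```python
from adhocracy4.comments.models import Comment
from django.db.models import Count, Q, Sum


def count_comments_with_replies(module):
    # Subset of the relations used in the golden diff; extend as needed.
    comment_filter = Q(idea__module=module) | Q(mapidea__module=module)
    total = (
        Comment.objects.filter(comment_filter)
        .annotate(child_comment_count=Count("child_comments__pk", distinct=True))
        .aggregate(total=Count("pk") + Sum("child_comment_count"))["total"]
    )
    # Sum() over an empty queryset yields None, hence the fallback to 0.
    return total or 0
```

Running this as one aggregate query also avoids the nine separate `COUNT` round-trips issued by the original implementation.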
gh_patches_debug_115
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-7045
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ModuleNotFoundError: No module named 'modin.pandas.testing'
This module is public and is used quite often.
It shouldn't be difficult to maintain, as it has a few functions:
```python
__all__ = [
"assert_extension_array_equal",
"assert_frame_equal",
"assert_series_equal",
"assert_index_equal",
]
```
--- END ISSUE ---
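Given that list of four helpers, one plausible minimal fix — not necessarily the one Modin actually shipped — is a thin `modin/pandas/testing.py` that simply re-exports them from `pandas.testing`:

```python
# modin/pandas/testing.py -- hypothetical minimal module; the real
# implementation may wrap or re-implement these helpers instead.
from pandas.testing import (
    assert_extension_array_equal,
    assert_frame_equal,
    assert_index_equal,
    assert_series_equal,
)

__all__ = [
    "assert_extension_array_equal",
    "assert_frame_equal",
    "assert_series_equal",
    "assert_index_equal",
]
```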
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modin/pandas/__init__.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 import warnings
15
16 import pandas
17 from packaging import version
18
19 __pandas_version__ = "2.2"
20
21 if (
22 version.parse(pandas.__version__).release[:2]
23 != version.parse(__pandas_version__).release[:2]
24 ):
25 warnings.warn(
26 f"The pandas version installed ({pandas.__version__}) does not match the supported pandas version in"
27 + f" Modin ({__pandas_version__}.X). This may cause undesired side effects!"
28 )
29
30 # The extensions assigned to this module
31 _PD_EXTENSIONS_ = {}
32
33 # to not pollute namespace
34 del version
35
36 with warnings.catch_warnings():
37 warnings.simplefilter("ignore")
38 from pandas import (
39 eval,
40 factorize,
41 test,
42 date_range,
43 period_range,
44 Index,
45 MultiIndex,
46 CategoricalIndex,
47 bdate_range,
48 DatetimeIndex,
49 Timedelta,
50 Timestamp,
51 set_eng_float_format,
52 options,
53 describe_option,
54 set_option,
55 get_option,
56 reset_option,
57 option_context,
58 NaT,
59 PeriodIndex,
60 Categorical,
61 Interval,
62 UInt8Dtype,
63 UInt16Dtype,
64 UInt32Dtype,
65 UInt64Dtype,
66 SparseDtype,
67 Int8Dtype,
68 Int16Dtype,
69 Int32Dtype,
70 Int64Dtype,
71 StringDtype,
72 BooleanDtype,
73 CategoricalDtype,
74 DatetimeTZDtype,
75 IntervalDtype,
76 PeriodDtype,
77 RangeIndex,
78 TimedeltaIndex,
79 IntervalIndex,
80 IndexSlice,
81 Grouper,
82 array,
83 Period,
84 DateOffset,
85 timedelta_range,
86 infer_freq,
87 interval_range,
88 ExcelWriter,
89 NamedAgg,
90 NA,
91 api,
92 ArrowDtype,
93 Flags,
94 Float32Dtype,
95 Float64Dtype,
96 from_dummies,
97 )
98
99 import os
100
101 from modin.config import Parameter
102
103 _is_first_update = {}
104
105
106 def _update_engine(publisher: Parameter):
107 from modin.config import (
108 CpuCount,
109 Engine,
110 IsExperimental,
111 StorageFormat,
112 ValueSource,
113 )
114
115 # Set this so that Pandas doesn't try to multithread by itself
116 os.environ["OMP_NUM_THREADS"] = "1"
117
118 sfmt = StorageFormat.get()
119
120 if sfmt == "Hdk":
121 is_hdk = True
122 elif sfmt == "Omnisci":
123 is_hdk = True
124 StorageFormat.put("Hdk")
125 warnings.warn(
126 "The OmniSci storage format has been deprecated. Please use "
127 + '`StorageFormat.put("hdk")` or `MODIN_STORAGE_FORMAT="hdk"` instead.'
128 )
129 else:
130 is_hdk = False
131
132 if is_hdk and publisher.get_value_source() == ValueSource.DEFAULT:
133 publisher.put("Native")
134 IsExperimental.put(True)
135 if (
136 publisher.get() == "Native"
137 and StorageFormat.get_value_source() == ValueSource.DEFAULT
138 ):
139 is_hdk = True
140 StorageFormat.put("Hdk")
141 IsExperimental.put(True)
142
143 if publisher.get() == "Ray":
144 if _is_first_update.get("Ray", True):
145 from modin.core.execution.ray.common import initialize_ray
146
147 initialize_ray()
148 elif publisher.get() == "Native":
149 # With HDK storage format there is only a single worker per node
150 # and we allow it to work on all cores.
151 if is_hdk:
152 os.environ["OMP_NUM_THREADS"] = str(CpuCount.get())
153 else:
154 raise ValueError(
155 f"Storage format should be 'Hdk' with 'Native' engine, but provided {sfmt}."
156 )
157 elif publisher.get() == "Dask":
158 if _is_first_update.get("Dask", True):
159 from modin.core.execution.dask.common import initialize_dask
160
161 initialize_dask()
162 elif publisher.get() == "Unidist":
163 if _is_first_update.get("Unidist", True):
164 from modin.core.execution.unidist.common import initialize_unidist
165
166 initialize_unidist()
167 elif publisher.get() not in Engine.NOINIT_ENGINES:
168 raise ImportError("Unrecognized execution engine: {}.".format(publisher.get()))
169
170 _is_first_update[publisher.get()] = False
171
172
173 from modin.pandas import errors
174 from modin.utils import show_versions
175
176 from .. import __version__
177 from .dataframe import DataFrame
178 from .general import (
179 concat,
180 crosstab,
181 cut,
182 get_dummies,
183 isna,
184 isnull,
185 lreshape,
186 melt,
187 merge,
188 merge_asof,
189 merge_ordered,
190 notna,
191 notnull,
192 pivot,
193 pivot_table,
194 qcut,
195 to_datetime,
196 to_numeric,
197 to_timedelta,
198 unique,
199 value_counts,
200 wide_to_long,
201 )
202 from .io import (
203 ExcelFile,
204 HDFStore,
205 json_normalize,
206 read_clipboard,
207 read_csv,
208 read_excel,
209 read_feather,
210 read_fwf,
211 read_gbq,
212 read_hdf,
213 read_html,
214 read_json,
215 read_orc,
216 read_parquet,
217 read_pickle,
218 read_sas,
219 read_spss,
220 read_sql,
221 read_sql_query,
222 read_sql_table,
223 read_stata,
224 read_table,
225 read_xml,
226 to_pickle,
227 )
228 from .plotting import Plotting as plotting
229 from .series import Series
230
231
232 def __getattr__(name: str):
233 """
234 Overrides getattr on the module to enable extensions.
235
236 Parameters
237 ----------
238 name : str
239 The name of the attribute being retrieved.
240
241 Returns
242 -------
243 Attribute
244 Returns the extension attribute, if it exists, otherwise returns the attribute
245 imported in this file.
246 """
247 try:
248 return _PD_EXTENSIONS_.get(name, globals()[name])
249 except KeyError:
250 raise AttributeError(f"module 'modin.pandas' has no attribute '{name}'")
251
252
253 __all__ = [ # noqa: F405
254 "_PD_EXTENSIONS_",
255 "DataFrame",
256 "Series",
257 "read_csv",
258 "read_parquet",
259 "read_json",
260 "read_html",
261 "read_clipboard",
262 "read_excel",
263 "read_hdf",
264 "read_feather",
265 "read_stata",
266 "read_sas",
267 "read_pickle",
268 "read_sql",
269 "read_gbq",
270 "read_table",
271 "read_spss",
272 "read_orc",
273 "json_normalize",
274 "concat",
275 "eval",
276 "cut",
277 "factorize",
278 "test",
279 "qcut",
280 "to_datetime",
281 "get_dummies",
282 "isna",
283 "isnull",
284 "merge",
285 "pivot_table",
286 "date_range",
287 "Index",
288 "MultiIndex",
289 "Series",
290 "bdate_range",
291 "period_range",
292 "DatetimeIndex",
293 "to_timedelta",
294 "set_eng_float_format",
295 "options",
296 "describe_option",
297 "set_option",
298 "get_option",
299 "reset_option",
300 "option_context",
301 "CategoricalIndex",
302 "Timedelta",
303 "Timestamp",
304 "NaT",
305 "PeriodIndex",
306 "Categorical",
307 "__version__",
308 "melt",
309 "crosstab",
310 "plotting",
311 "Interval",
312 "UInt8Dtype",
313 "UInt16Dtype",
314 "UInt32Dtype",
315 "UInt64Dtype",
316 "SparseDtype",
317 "Int8Dtype",
318 "Int16Dtype",
319 "Int32Dtype",
320 "Int64Dtype",
321 "CategoricalDtype",
322 "DatetimeTZDtype",
323 "IntervalDtype",
324 "PeriodDtype",
325 "BooleanDtype",
326 "StringDtype",
327 "NA",
328 "RangeIndex",
329 "TimedeltaIndex",
330 "IntervalIndex",
331 "IndexSlice",
332 "Grouper",
333 "array",
334 "Period",
335 "show_versions",
336 "DateOffset",
337 "timedelta_range",
338 "infer_freq",
339 "interval_range",
340 "ExcelWriter",
341 "read_fwf",
342 "read_sql_table",
343 "read_sql_query",
344 "ExcelFile",
345 "to_pickle",
346 "HDFStore",
347 "lreshape",
348 "wide_to_long",
349 "merge_asof",
350 "merge_ordered",
351 "notnull",
352 "notna",
353 "pivot",
354 "to_numeric",
355 "unique",
356 "value_counts",
357 "NamedAgg",
358 "api",
359 "read_xml",
360 "ArrowDtype",
361 "Flags",
362 "Float32Dtype",
363 "Float64Dtype",
364 "from_dummies",
365 "errors",
366 ]
367
368 del pandas, Parameter
369
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modin/pandas/__init__.py b/modin/pandas/__init__.py
--- a/modin/pandas/__init__.py
+++ b/modin/pandas/__init__.py
@@ -94,6 +94,7 @@
Float32Dtype,
Float64Dtype,
from_dummies,
+ testing,
)
import os
|
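Since `from modin.pandas import testing` is satisfied by a plain attribute lookup on the already-imported package, the single re-exported name in the diff above is enough for that spelling of the import. A rough way to sanity-check it locally, assuming a Modin build that already contains the change:
```python
import importlib
import types

# The re-export should expose pandas' testing module as an attribute of modin.pandas.
pkg = importlib.import_module("modin.pandas")
assert isinstance(getattr(pkg, "testing", None), types.ModuleType)
```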
{"golden_diff": "diff --git a/modin/pandas/__init__.py b/modin/pandas/__init__.py\n--- a/modin/pandas/__init__.py\n+++ b/modin/pandas/__init__.py\n@@ -94,6 +94,7 @@\n Float32Dtype,\n Float64Dtype,\n from_dummies,\n+ testing,\n )\n \n import os\n", "issue": "ModuleNotFoundError: No module named 'modin.pandas.testing'\nThis module is public and is used quite often.\r\nIt shouldn't be difficult to maintain, as it has a few functions:\r\n```python\r\n__all__ = [\r\n \"assert_extension_array_equal\",\r\n \"assert_frame_equal\",\r\n \"assert_series_equal\",\r\n \"assert_index_equal\",\r\n]\r\n```\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport warnings\n\nimport pandas\nfrom packaging import version\n\n__pandas_version__ = \"2.2\"\n\nif (\n version.parse(pandas.__version__).release[:2]\n != version.parse(__pandas_version__).release[:2]\n):\n warnings.warn(\n f\"The pandas version installed ({pandas.__version__}) does not match the supported pandas version in\"\n + f\" Modin ({__pandas_version__}.X). This may cause undesired side effects!\"\n )\n\n# The extensions assigned to this module\n_PD_EXTENSIONS_ = {}\n\n# to not pollute namespace\ndel version\n\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n from pandas import (\n eval,\n factorize,\n test,\n date_range,\n period_range,\n Index,\n MultiIndex,\n CategoricalIndex,\n bdate_range,\n DatetimeIndex,\n Timedelta,\n Timestamp,\n set_eng_float_format,\n options,\n describe_option,\n set_option,\n get_option,\n reset_option,\n option_context,\n NaT,\n PeriodIndex,\n Categorical,\n Interval,\n UInt8Dtype,\n UInt16Dtype,\n UInt32Dtype,\n UInt64Dtype,\n SparseDtype,\n Int8Dtype,\n Int16Dtype,\n Int32Dtype,\n Int64Dtype,\n StringDtype,\n BooleanDtype,\n CategoricalDtype,\n DatetimeTZDtype,\n IntervalDtype,\n PeriodDtype,\n RangeIndex,\n TimedeltaIndex,\n IntervalIndex,\n IndexSlice,\n Grouper,\n array,\n Period,\n DateOffset,\n timedelta_range,\n infer_freq,\n interval_range,\n ExcelWriter,\n NamedAgg,\n NA,\n api,\n ArrowDtype,\n Flags,\n Float32Dtype,\n Float64Dtype,\n from_dummies,\n )\n\nimport os\n\nfrom modin.config import Parameter\n\n_is_first_update = {}\n\n\ndef _update_engine(publisher: Parameter):\n from modin.config import (\n CpuCount,\n Engine,\n IsExperimental,\n StorageFormat,\n ValueSource,\n )\n\n # Set this so that Pandas doesn't try to multithread by itself\n os.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n sfmt = StorageFormat.get()\n\n if sfmt == \"Hdk\":\n is_hdk = True\n elif sfmt == \"Omnisci\":\n is_hdk = True\n StorageFormat.put(\"Hdk\")\n warnings.warn(\n \"The OmniSci storage format has been deprecated. 
Please use \"\n + '`StorageFormat.put(\"hdk\")` or `MODIN_STORAGE_FORMAT=\"hdk\"` instead.'\n )\n else:\n is_hdk = False\n\n if is_hdk and publisher.get_value_source() == ValueSource.DEFAULT:\n publisher.put(\"Native\")\n IsExperimental.put(True)\n if (\n publisher.get() == \"Native\"\n and StorageFormat.get_value_source() == ValueSource.DEFAULT\n ):\n is_hdk = True\n StorageFormat.put(\"Hdk\")\n IsExperimental.put(True)\n\n if publisher.get() == \"Ray\":\n if _is_first_update.get(\"Ray\", True):\n from modin.core.execution.ray.common import initialize_ray\n\n initialize_ray()\n elif publisher.get() == \"Native\":\n # With HDK storage format there is only a single worker per node\n # and we allow it to work on all cores.\n if is_hdk:\n os.environ[\"OMP_NUM_THREADS\"] = str(CpuCount.get())\n else:\n raise ValueError(\n f\"Storage format should be 'Hdk' with 'Native' engine, but provided {sfmt}.\"\n )\n elif publisher.get() == \"Dask\":\n if _is_first_update.get(\"Dask\", True):\n from modin.core.execution.dask.common import initialize_dask\n\n initialize_dask()\n elif publisher.get() == \"Unidist\":\n if _is_first_update.get(\"Unidist\", True):\n from modin.core.execution.unidist.common import initialize_unidist\n\n initialize_unidist()\n elif publisher.get() not in Engine.NOINIT_ENGINES:\n raise ImportError(\"Unrecognized execution engine: {}.\".format(publisher.get()))\n\n _is_first_update[publisher.get()] = False\n\n\nfrom modin.pandas import errors\nfrom modin.utils import show_versions\n\nfrom .. import __version__\nfrom .dataframe import DataFrame\nfrom .general import (\n concat,\n crosstab,\n cut,\n get_dummies,\n isna,\n isnull,\n lreshape,\n melt,\n merge,\n merge_asof,\n merge_ordered,\n notna,\n notnull,\n pivot,\n pivot_table,\n qcut,\n to_datetime,\n to_numeric,\n to_timedelta,\n unique,\n value_counts,\n wide_to_long,\n)\nfrom .io import (\n ExcelFile,\n HDFStore,\n json_normalize,\n read_clipboard,\n read_csv,\n read_excel,\n read_feather,\n read_fwf,\n read_gbq,\n read_hdf,\n read_html,\n read_json,\n read_orc,\n read_parquet,\n read_pickle,\n read_sas,\n read_spss,\n read_sql,\n read_sql_query,\n read_sql_table,\n read_stata,\n read_table,\n read_xml,\n to_pickle,\n)\nfrom .plotting import Plotting as plotting\nfrom .series import Series\n\n\ndef __getattr__(name: str):\n \"\"\"\n Overrides getattr on the module to enable extensions.\n\n Parameters\n ----------\n name : str\n The name of the attribute being retrieved.\n\n Returns\n -------\n Attribute\n Returns the extension attribute, if it exists, otherwise returns the attribute\n imported in this file.\n \"\"\"\n try:\n return _PD_EXTENSIONS_.get(name, globals()[name])\n except KeyError:\n raise AttributeError(f\"module 'modin.pandas' has no attribute '{name}'\")\n\n\n__all__ = [ # noqa: F405\n \"_PD_EXTENSIONS_\",\n \"DataFrame\",\n \"Series\",\n \"read_csv\",\n \"read_parquet\",\n \"read_json\",\n \"read_html\",\n \"read_clipboard\",\n \"read_excel\",\n \"read_hdf\",\n \"read_feather\",\n \"read_stata\",\n \"read_sas\",\n \"read_pickle\",\n \"read_sql\",\n \"read_gbq\",\n \"read_table\",\n \"read_spss\",\n \"read_orc\",\n \"json_normalize\",\n \"concat\",\n \"eval\",\n \"cut\",\n \"factorize\",\n \"test\",\n \"qcut\",\n \"to_datetime\",\n \"get_dummies\",\n \"isna\",\n \"isnull\",\n \"merge\",\n \"pivot_table\",\n \"date_range\",\n \"Index\",\n \"MultiIndex\",\n \"Series\",\n \"bdate_range\",\n \"period_range\",\n \"DatetimeIndex\",\n \"to_timedelta\",\n \"set_eng_float_format\",\n \"options\",\n 
\"describe_option\",\n \"set_option\",\n \"get_option\",\n \"reset_option\",\n \"option_context\",\n \"CategoricalIndex\",\n \"Timedelta\",\n \"Timestamp\",\n \"NaT\",\n \"PeriodIndex\",\n \"Categorical\",\n \"__version__\",\n \"melt\",\n \"crosstab\",\n \"plotting\",\n \"Interval\",\n \"UInt8Dtype\",\n \"UInt16Dtype\",\n \"UInt32Dtype\",\n \"UInt64Dtype\",\n \"SparseDtype\",\n \"Int8Dtype\",\n \"Int16Dtype\",\n \"Int32Dtype\",\n \"Int64Dtype\",\n \"CategoricalDtype\",\n \"DatetimeTZDtype\",\n \"IntervalDtype\",\n \"PeriodDtype\",\n \"BooleanDtype\",\n \"StringDtype\",\n \"NA\",\n \"RangeIndex\",\n \"TimedeltaIndex\",\n \"IntervalIndex\",\n \"IndexSlice\",\n \"Grouper\",\n \"array\",\n \"Period\",\n \"show_versions\",\n \"DateOffset\",\n \"timedelta_range\",\n \"infer_freq\",\n \"interval_range\",\n \"ExcelWriter\",\n \"read_fwf\",\n \"read_sql_table\",\n \"read_sql_query\",\n \"ExcelFile\",\n \"to_pickle\",\n \"HDFStore\",\n \"lreshape\",\n \"wide_to_long\",\n \"merge_asof\",\n \"merge_ordered\",\n \"notnull\",\n \"notna\",\n \"pivot\",\n \"to_numeric\",\n \"unique\",\n \"value_counts\",\n \"NamedAgg\",\n \"api\",\n \"read_xml\",\n \"ArrowDtype\",\n \"Flags\",\n \"Float32Dtype\",\n \"Float64Dtype\",\n \"from_dummies\",\n \"errors\",\n]\n\ndel pandas, Parameter\n", "path": "modin/pandas/__init__.py"}], "after_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport warnings\n\nimport pandas\nfrom packaging import version\n\n__pandas_version__ = \"2.2\"\n\nif (\n version.parse(pandas.__version__).release[:2]\n != version.parse(__pandas_version__).release[:2]\n):\n warnings.warn(\n f\"The pandas version installed ({pandas.__version__}) does not match the supported pandas version in\"\n + f\" Modin ({__pandas_version__}.X). 
This may cause undesired side effects!\"\n )\n\n# The extensions assigned to this module\n_PD_EXTENSIONS_ = {}\n\n# to not pollute namespace\ndel version\n\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n from pandas import (\n eval,\n factorize,\n test,\n date_range,\n period_range,\n Index,\n MultiIndex,\n CategoricalIndex,\n bdate_range,\n DatetimeIndex,\n Timedelta,\n Timestamp,\n set_eng_float_format,\n options,\n describe_option,\n set_option,\n get_option,\n reset_option,\n option_context,\n NaT,\n PeriodIndex,\n Categorical,\n Interval,\n UInt8Dtype,\n UInt16Dtype,\n UInt32Dtype,\n UInt64Dtype,\n SparseDtype,\n Int8Dtype,\n Int16Dtype,\n Int32Dtype,\n Int64Dtype,\n StringDtype,\n BooleanDtype,\n CategoricalDtype,\n DatetimeTZDtype,\n IntervalDtype,\n PeriodDtype,\n RangeIndex,\n TimedeltaIndex,\n IntervalIndex,\n IndexSlice,\n Grouper,\n array,\n Period,\n DateOffset,\n timedelta_range,\n infer_freq,\n interval_range,\n ExcelWriter,\n NamedAgg,\n NA,\n api,\n ArrowDtype,\n Flags,\n Float32Dtype,\n Float64Dtype,\n from_dummies,\n testing,\n )\n\nimport os\n\nfrom modin.config import Parameter\n\n_is_first_update = {}\n\n\ndef _update_engine(publisher: Parameter):\n from modin.config import (\n CpuCount,\n Engine,\n IsExperimental,\n StorageFormat,\n ValueSource,\n )\n\n # Set this so that Pandas doesn't try to multithread by itself\n os.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n sfmt = StorageFormat.get()\n\n if sfmt == \"Hdk\":\n is_hdk = True\n elif sfmt == \"Omnisci\":\n is_hdk = True\n StorageFormat.put(\"Hdk\")\n warnings.warn(\n \"The OmniSci storage format has been deprecated. Please use \"\n + '`StorageFormat.put(\"hdk\")` or `MODIN_STORAGE_FORMAT=\"hdk\"` instead.'\n )\n else:\n is_hdk = False\n\n if is_hdk and publisher.get_value_source() == ValueSource.DEFAULT:\n publisher.put(\"Native\")\n IsExperimental.put(True)\n if (\n publisher.get() == \"Native\"\n and StorageFormat.get_value_source() == ValueSource.DEFAULT\n ):\n is_hdk = True\n StorageFormat.put(\"Hdk\")\n IsExperimental.put(True)\n\n if publisher.get() == \"Ray\":\n if _is_first_update.get(\"Ray\", True):\n from modin.core.execution.ray.common import initialize_ray\n\n initialize_ray()\n elif publisher.get() == \"Native\":\n # With HDK storage format there is only a single worker per node\n # and we allow it to work on all cores.\n if is_hdk:\n os.environ[\"OMP_NUM_THREADS\"] = str(CpuCount.get())\n else:\n raise ValueError(\n f\"Storage format should be 'Hdk' with 'Native' engine, but provided {sfmt}.\"\n )\n elif publisher.get() == \"Dask\":\n if _is_first_update.get(\"Dask\", True):\n from modin.core.execution.dask.common import initialize_dask\n\n initialize_dask()\n elif publisher.get() == \"Unidist\":\n if _is_first_update.get(\"Unidist\", True):\n from modin.core.execution.unidist.common import initialize_unidist\n\n initialize_unidist()\n elif publisher.get() not in Engine.NOINIT_ENGINES:\n raise ImportError(\"Unrecognized execution engine: {}.\".format(publisher.get()))\n\n _is_first_update[publisher.get()] = False\n\n\nfrom modin.pandas import errors\nfrom modin.utils import show_versions\n\nfrom .. 
import __version__\nfrom .dataframe import DataFrame\nfrom .general import (\n concat,\n crosstab,\n cut,\n get_dummies,\n isna,\n isnull,\n lreshape,\n melt,\n merge,\n merge_asof,\n merge_ordered,\n notna,\n notnull,\n pivot,\n pivot_table,\n qcut,\n to_datetime,\n to_numeric,\n to_timedelta,\n unique,\n value_counts,\n wide_to_long,\n)\nfrom .io import (\n ExcelFile,\n HDFStore,\n json_normalize,\n read_clipboard,\n read_csv,\n read_excel,\n read_feather,\n read_fwf,\n read_gbq,\n read_hdf,\n read_html,\n read_json,\n read_orc,\n read_parquet,\n read_pickle,\n read_sas,\n read_spss,\n read_sql,\n read_sql_query,\n read_sql_table,\n read_stata,\n read_table,\n read_xml,\n to_pickle,\n)\nfrom .plotting import Plotting as plotting\nfrom .series import Series\n\n\ndef __getattr__(name: str):\n \"\"\"\n Overrides getattr on the module to enable extensions.\n\n Parameters\n ----------\n name : str\n The name of the attribute being retrieved.\n\n Returns\n -------\n Attribute\n Returns the extension attribute, if it exists, otherwise returns the attribute\n imported in this file.\n \"\"\"\n try:\n return _PD_EXTENSIONS_.get(name, globals()[name])\n except KeyError:\n raise AttributeError(f\"module 'modin.pandas' has no attribute '{name}'\")\n\n\n__all__ = [ # noqa: F405\n \"_PD_EXTENSIONS_\",\n \"DataFrame\",\n \"Series\",\n \"read_csv\",\n \"read_parquet\",\n \"read_json\",\n \"read_html\",\n \"read_clipboard\",\n \"read_excel\",\n \"read_hdf\",\n \"read_feather\",\n \"read_stata\",\n \"read_sas\",\n \"read_pickle\",\n \"read_sql\",\n \"read_gbq\",\n \"read_table\",\n \"read_spss\",\n \"read_orc\",\n \"json_normalize\",\n \"concat\",\n \"eval\",\n \"cut\",\n \"factorize\",\n \"test\",\n \"qcut\",\n \"to_datetime\",\n \"get_dummies\",\n \"isna\",\n \"isnull\",\n \"merge\",\n \"pivot_table\",\n \"date_range\",\n \"Index\",\n \"MultiIndex\",\n \"Series\",\n \"bdate_range\",\n \"period_range\",\n \"DatetimeIndex\",\n \"to_timedelta\",\n \"set_eng_float_format\",\n \"options\",\n \"describe_option\",\n \"set_option\",\n \"get_option\",\n \"reset_option\",\n \"option_context\",\n \"CategoricalIndex\",\n \"Timedelta\",\n \"Timestamp\",\n \"NaT\",\n \"PeriodIndex\",\n \"Categorical\",\n \"__version__\",\n \"melt\",\n \"crosstab\",\n \"plotting\",\n \"Interval\",\n \"UInt8Dtype\",\n \"UInt16Dtype\",\n \"UInt32Dtype\",\n \"UInt64Dtype\",\n \"SparseDtype\",\n \"Int8Dtype\",\n \"Int16Dtype\",\n \"Int32Dtype\",\n \"Int64Dtype\",\n \"CategoricalDtype\",\n \"DatetimeTZDtype\",\n \"IntervalDtype\",\n \"PeriodDtype\",\n \"BooleanDtype\",\n \"StringDtype\",\n \"NA\",\n \"RangeIndex\",\n \"TimedeltaIndex\",\n \"IntervalIndex\",\n \"IndexSlice\",\n \"Grouper\",\n \"array\",\n \"Period\",\n \"show_versions\",\n \"DateOffset\",\n \"timedelta_range\",\n \"infer_freq\",\n \"interval_range\",\n \"ExcelWriter\",\n \"read_fwf\",\n \"read_sql_table\",\n \"read_sql_query\",\n \"ExcelFile\",\n \"to_pickle\",\n \"HDFStore\",\n \"lreshape\",\n \"wide_to_long\",\n \"merge_asof\",\n \"merge_ordered\",\n \"notnull\",\n \"notna\",\n \"pivot\",\n \"to_numeric\",\n \"unique\",\n \"value_counts\",\n \"NamedAgg\",\n \"api\",\n \"read_xml\",\n \"ArrowDtype\",\n \"Flags\",\n \"Float32Dtype\",\n \"Float64Dtype\",\n \"from_dummies\",\n \"errors\",\n]\n\ndel pandas, Parameter\n", "path": "modin/pandas/__init__.py"}]}
| 3,451 | 85 |
gh_patches_debug_16161
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-6692
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internal Server Error (内部服务器错误)
**Summary**
_What is the problem?_
Each time I try to submit the translation, it shows an error on one specific page -- `/docs/Learn/CSS/Building_blocks/Selectors`.
The problem never happens when I submit on other pages.
**Steps To Reproduce (STR)**
_How can we reproduce the problem?_
1. try to translate the page /docs/Learn/CSS/Building_blocks/Selectors into zh-CN
2. which would open page https://wiki.developer.mozilla.org/zh-CN/docs/Learn/CSS/Building_blocks/Selectors$translate?tolocale=zh-CN
3. submit, and it jumps to the "Internal Server Error" (内部服务器错误) page.
**Actual behavior**
_What actually happened?_
Internal Server Error (内部服务器错误)
**Expected behavior**
_What did you expect to happen?_
It should jump to the revised page.
**Additional context**
_Is there anything else we should know?_
None, thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/wiki/views/translate.py`
Content:
```
1 from urllib.parse import urlencode
2
3 from csp.decorators import csp_update
4 from django.conf import settings
5 from django.core.exceptions import ObjectDoesNotExist
6 from django.http import Http404, JsonResponse
7 from django.shortcuts import get_object_or_404, redirect, render
8 from django.utils.safestring import mark_safe
9 from django.utils.translation import gettext_lazy as _
10 from django.views.decorators.cache import never_cache
11
12 import kuma.wiki.content
13 from kuma.attachments.forms import AttachmentRevisionForm
14 from kuma.core.decorators import block_user_agents, ensure_wiki_domain, login_required
15 from kuma.core.i18n import get_language_mapping
16 from kuma.core.urlresolvers import reverse
17 from kuma.core.utils import get_object_or_none, smart_int, urlparams
18
19 from .utils import document_form_initial, split_slug
20 from ..decorators import check_readonly, prevent_indexing, process_document_path
21 from ..forms import DocumentForm, RevisionForm
22 from ..models import Document, Revision
23
24
25 @ensure_wiki_domain
26 @never_cache
27 @block_user_agents
28 @login_required
29 @process_document_path
30 def select_locale(request, document_slug, document_locale):
31 """
32 Select a locale to translate the document to.
33 """
34 doc = get_object_or_404(Document, locale=document_locale, slug=document_slug)
35 return render(request, "wiki/select_locale.html", {"document": doc})
36
37
38 @ensure_wiki_domain
39 @never_cache
40 @block_user_agents
41 @login_required
42 @csp_update(SCRIPT_SRC="'unsafe-eval'") # Required until CKEditor 4.7
43 @process_document_path
44 @check_readonly
45 @prevent_indexing
46 def translate(request, document_slug, document_locale):
47 """
48 Create a new translation of a wiki document.
49
50 * document_slug is for the default locale
51 * translation is to the request locale
52 """
53 # TODO: Refactor this view into two views? (new, edit)
54 # That might help reduce the headache-inducing branchiness.
55
56 # The parent document to translate from
57 try:
58 # Use '.all_objects' because the parent might have been soft deleted.
59 # And if we don't respect that fact, it would become impossible to
60 # edit a the child of it.
61 parent_doc = Document.all_objects.get(
62 locale=settings.WIKI_DEFAULT_LANGUAGE, slug=document_slug
63 )
64 except Document.DoesNotExist:
65 raise Http404("Parent document does not exist")
66
67 # Get the mapping here and now so it can be used for input validation
68 language_mapping = get_language_mapping()
69
70 # HACK: Seems weird, but sticking the translate-to locale in a query
71 # param is the best way to avoid the MindTouch-legacy locale
72 # redirection logic.
73 document_locale = request.GET.get("tolocale", document_locale)
74 if document_locale.lower() not in language_mapping:
75 # The 'tolocale' query string parameters aren't free-text. They're
76 # explicitly listed on the "Select language" page (`...$locales`)
77 # If a locale was entered that wasn't a link it's a user bug.
78 raise Http404
79
80 # Set a "Discard Changes" page
81 discard_href = ""
82
83 if settings.WIKI_DEFAULT_LANGUAGE == document_locale:
84 # Don't translate to the default language.
85 return redirect(
86 reverse(
87 "wiki.edit",
88 locale=settings.WIKI_DEFAULT_LANGUAGE,
89 args=[parent_doc.slug],
90 )
91 )
92
93 if not parent_doc.is_localizable:
94 message = _("You cannot translate this document.")
95 context = {"message": message}
96 return render(request, "handlers/400.html", context, status=400)
97
98 based_on_rev = parent_doc.current_or_latest_revision()
99
100 disclose_description = bool(request.GET.get("opendescription"))
101
102 try:
103 doc = parent_doc.translations.get(locale=document_locale)
104 slug_dict = split_slug(doc.slug)
105 except Document.DoesNotExist:
106 doc = None
107 disclose_description = True
108 slug_dict = split_slug(document_slug)
109
110 # Find the "real" parent topic, which is its translation
111 if parent_doc.parent_topic:
112 try:
113 parent_topic_translated_doc = parent_doc.parent_topic.translations.get(
114 locale=document_locale
115 )
116 slug_dict = split_slug(
117 parent_topic_translated_doc.slug + "/" + slug_dict["specific"]
118 )
119 except ObjectDoesNotExist:
120 pass
121
122 user_has_doc_perm = (not doc) or (doc and doc.allows_editing_by(request.user))
123
124 doc_form = None
125 if user_has_doc_perm:
126 if doc:
127 # If there's an existing doc, populate form from it.
128 discard_href = doc.get_absolute_url()
129 doc.slug = slug_dict["specific"]
130 doc_initial = document_form_initial(doc)
131 else:
132 # If no existing doc, bring over the original title and slug.
133 discard_href = parent_doc.get_absolute_url()
134 doc_initial = {"title": based_on_rev.title, "slug": slug_dict["specific"]}
135 doc_form = DocumentForm(initial=doc_initial, parent_slug=slug_dict["parent"])
136
137 initial = {
138 "based_on": based_on_rev.id,
139 "current_rev": doc.current_or_latest_revision().id if doc else None,
140 "comment": "",
141 "toc_depth": based_on_rev.toc_depth,
142 "localization_tags": ["inprogress"],
143 }
144 content = None
145 if not doc:
146 content = based_on_rev.content
147 if content:
148 # TODO: There will be no need to "filterEditorSafety" when the code
149 # that calls "clean_content" on Revision.save is deployed to
150 # production, AND the current revisions of all docs have had
151 # their content cleaned with "clean_content".
152 initial.update(
153 content=kuma.wiki.content.parse(content).filterEditorSafety().serialize()
154 )
155 instance = doc and doc.current_or_latest_revision()
156 rev_form = RevisionForm(
157 request=request,
158 instance=instance,
159 initial=initial,
160 parent_slug=slug_dict["parent"],
161 )
162
163 if request.method == "POST":
164 which_form = request.POST.get("form-type", "both")
165 doc_form_invalid = False
166
167 # Grab the posted slug value in case it's invalid
168 posted_slug = request.POST.get("slug", slug_dict["specific"])
169
170 if user_has_doc_perm and which_form in ["doc", "both"]:
171 disclose_description = True
172 post_data = request.POST.copy()
173
174 post_data.update({"locale": document_locale})
175
176 doc_form = DocumentForm(
177 post_data, instance=doc, parent_slug=slug_dict["parent"]
178 )
179 doc_form.instance.locale = document_locale
180 doc_form.instance.parent = parent_doc
181
182 if which_form == "both":
183 # Sending a new copy of post so the slug change above
184 # doesn't cause problems during validation
185 rev_form = RevisionForm(
186 request=request, data=post_data, parent_slug=slug_dict["parent"]
187 )
188
189 # If we are submitting the whole form, we need to check that
190 # the Revision is valid before saving the Document.
191 if doc_form.is_valid() and (which_form == "doc" or rev_form.is_valid()):
192 doc = doc_form.save(parent=parent_doc)
193
194 if which_form == "doc":
195 url = urlparams(doc.get_edit_url(), opendescription=1)
196 return redirect(url)
197 else:
198 doc_form.data["slug"] = posted_slug
199 doc_form_invalid = True
200
201 if doc and which_form in ["rev", "both"]:
202 post_data = request.POST.copy()
203 if "slug" not in post_data:
204 post_data["slug"] = posted_slug
205
206 # update the post data with the toc_depth of original
207 post_data["toc_depth"] = based_on_rev.toc_depth
208
209 # Pass in the locale for the akistmet "blog_lang".
210 post_data["locale"] = document_locale
211
212 rev_form = RevisionForm(
213 request=request, data=post_data, parent_slug=slug_dict["parent"]
214 )
215 rev_form.instance.document = doc # for rev_form.clean()
216
217 if rev_form.is_valid() and not doc_form_invalid:
218 parent_id = request.POST.get("parent_id", "")
219
220 # Attempt to set a parent
221 if parent_id:
222 try:
223 try:
224 parent_doc = Document.all_objects.get(id=parent_id)
225 except Document.DoesNotExist:
226 raise Http404("Parent document does not exist")
227 rev_form.instance.document.parent = parent_doc
228 doc.parent = parent_doc
229 rev_form.instance.based_on.document = doc.original
230 except Document.DoesNotExist:
231 pass
232
233 rev_form.save(doc)
234 # If this is an Ajax POST, then return a JsonResponse
235 if request.is_ajax():
236 data = {
237 "error": False,
238 "new_revision_id": rev_form.instance.id,
239 }
240
241 return JsonResponse(data)
242
243 # Construct the redirect URL, adding any needed parameters
244 url = doc.get_absolute_url()
245 params = {}
246 # Parameter for the document saved, so that we can delete the cached draft on load
247 params["rev_saved"] = request.POST.get("current_rev", "")
248 url = "%s?%s" % (url, urlencode(params))
249 return redirect(url)
250 else:
251 # If this is an Ajax POST, then return a JsonResponse with error
252 if request.is_ajax():
253 if "current_rev" in rev_form._errors:
254 # Make the error message safe so the '<' and '>' don't
255 # get turned into '<' and '>', respectively
256 rev_form.errors["current_rev"][0] = mark_safe(
257 rev_form.errors["current_rev"][0]
258 )
259 errors = [rev_form.errors[key][0] for key in rev_form.errors.keys()]
260 data = {
261 "error": True,
262 "error_message": errors,
263 "new_revision_id": rev_form.instance.id,
264 }
265 return JsonResponse(data=data)
266
267 if doc:
268 from_id = smart_int(request.GET.get("from"), None)
269 to_id = smart_int(request.GET.get("to"), None)
270
271 revision_from = get_object_or_none(Revision, pk=from_id, document=doc.parent)
272 revision_to = get_object_or_none(Revision, pk=to_id, document=doc.parent)
273 else:
274 revision_from = revision_to = None
275
276 parent_split = split_slug(parent_doc.slug)
277
278 language = language_mapping[document_locale.lower()]
279 default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]
280
281 context = {
282 "parent": parent_doc,
283 "document": doc,
284 "document_form": doc_form,
285 "revision_form": rev_form,
286 "locale": document_locale,
287 "default_locale": default_locale,
288 "language": language,
289 "based_on": based_on_rev,
290 "disclose_description": disclose_description,
291 "discard_href": discard_href,
292 "attachment_form": AttachmentRevisionForm(),
293 "specific_slug": parent_split["specific"],
294 "parent_slug": parent_split["parent"],
295 "revision_from": revision_from,
296 "revision_to": revision_to,
297 }
298 return render(request, "wiki/translate.html", context)
299
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kuma/wiki/views/translate.py b/kuma/wiki/views/translate.py
--- a/kuma/wiki/views/translate.py
+++ b/kuma/wiki/views/translate.py
@@ -189,6 +189,15 @@
# If we are submitting the whole form, we need to check that
# the Revision is valid before saving the Document.
if doc_form.is_valid() and (which_form == "doc" or rev_form.is_valid()):
+
+ # If the document you're about to save already exists, as a
+ # soft-delete, then really delete it first.
+ for soft_deleted_document in Document.deleted_objects.filter(
+ locale=doc_form.cleaned_data["locale"],
+ slug=doc_form.cleaned_data["slug"],
+ ):
+ soft_deleted_document.delete(purge=True)
+
doc = doc_form.save(parent=parent_doc)
if which_form == "doc":
|
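Read on its own, the fix purges any soft-deleted translation that still occupies the same (locale, slug) pair before the new document is saved. The toy below (plain sqlite3, not Kuma code) illustrates the suspected failure mode behind the reported 500 and what purging first changes; the schema and the uniqueness assumption are illustrative guesses, not taken from Kuma:
```python
import sqlite3

# Toy schema: a soft-deleted row still holds the unique (locale, slug) pair.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE document ("
    " id INTEGER PRIMARY KEY, locale TEXT, slug TEXT,"
    " deleted INTEGER DEFAULT 0, UNIQUE (locale, slug))"
)
con.execute(
    "INSERT INTO document (locale, slug, deleted) VALUES ('zh-CN', 'Selectors', 1)"
)

try:
    # Saving the "new" translation collides with the soft-deleted leftover.
    con.execute("INSERT INTO document (locale, slug) VALUES ('zh-CN', 'Selectors')")
except sqlite3.IntegrityError as exc:
    print("save fails:", exc)

# What the patch does in spirit: really delete the stale row, then save.
con.execute(
    "DELETE FROM document WHERE locale = 'zh-CN' AND slug = 'Selectors' AND deleted = 1"
)
con.execute("INSERT INTO document (locale, slug) VALUES ('zh-CN', 'Selectors')")
```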
{"golden_diff": "diff --git a/kuma/wiki/views/translate.py b/kuma/wiki/views/translate.py\n--- a/kuma/wiki/views/translate.py\n+++ b/kuma/wiki/views/translate.py\n@@ -189,6 +189,15 @@\n # If we are submitting the whole form, we need to check that\n # the Revision is valid before saving the Document.\n if doc_form.is_valid() and (which_form == \"doc\" or rev_form.is_valid()):\n+\n+ # If the document you're about to save already exists, as a\n+ # soft-delete, then really delete it first.\n+ for soft_deleted_document in Document.deleted_objects.filter(\n+ locale=doc_form.cleaned_data[\"locale\"],\n+ slug=doc_form.cleaned_data[\"slug\"],\n+ ):\n+ soft_deleted_document.delete(purge=True)\n+\n doc = doc_form.save(parent=parent_doc)\n \n if which_form == \"doc\":\n", "issue": "\u5185\u90e8\u670d\u52a1\u5668\u9519\u8bef(Internal Server Error)\n**Summary**\r\n_What is the problem?_\r\n\r\nEach time I try to submit the translation,It shows error in specific page--`/docs/Learn/CSS/Building_blocks/Selectors`\r\n\r\nThe problem never happend when I try to submit in other pages.\r\n**Steps To Reproduce (STR)**\r\n_How can we reproduce the problem?_\r\n\r\n1. try to translate the page /docs/Learn/CSS/Building_blocks/Selectors into zh-CN\r\n2. which would open page https://wiki.developer.mozilla.org/zh-CN/docs/Learn/CSS/Building_blocks/Selectors$translate?tolocale=zh-CN\r\n3. submit and it jumped to the \"\u5185\u90e8\u670d\u52a1\u5668\u9519\u8bef\"page.(Internal Server Error)\r\n\r\n\r\n**Actual behavior**\r\n_What actually happened?_\r\n\u5185\u90e8\u670d\u52a1\u5668\u9519\u8befInternal Server Error\r\n\r\n**Expected behavior**\r\n_What did you expect to happen?_\r\nIt should jumped to the page revised.\r\n\r\n**Additional context**\r\n_Is there anything else we should know?_\r\nNone,thanks.\n", "before_files": [{"content": "from urllib.parse import urlencode\n\nfrom csp.decorators import csp_update\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.http import Http404, JsonResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.decorators.cache import never_cache\n\nimport kuma.wiki.content\nfrom kuma.attachments.forms import AttachmentRevisionForm\nfrom kuma.core.decorators import block_user_agents, ensure_wiki_domain, login_required\nfrom kuma.core.i18n import get_language_mapping\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.core.utils import get_object_or_none, smart_int, urlparams\n\nfrom .utils import document_form_initial, split_slug\nfrom ..decorators import check_readonly, prevent_indexing, process_document_path\nfrom ..forms import DocumentForm, RevisionForm\nfrom ..models import Document, Revision\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@process_document_path\ndef select_locale(request, document_slug, document_locale):\n \"\"\"\n Select a locale to translate the document to.\n \"\"\"\n doc = get_object_or_404(Document, locale=document_locale, slug=document_slug)\n return render(request, \"wiki/select_locale.html\", {\"document\": doc})\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@csp_update(SCRIPT_SRC=\"'unsafe-eval'\") # Required until CKEditor 4.7\n@process_document_path\n@check_readonly\n@prevent_indexing\ndef translate(request, document_slug, document_locale):\n \"\"\"\n Create a new translation of a wiki document.\n\n * 
document_slug is for the default locale\n * translation is to the request locale\n \"\"\"\n # TODO: Refactor this view into two views? (new, edit)\n # That might help reduce the headache-inducing branchiness.\n\n # The parent document to translate from\n try:\n # Use '.all_objects' because the parent might have been soft deleted.\n # And if we don't respect that fact, it would become impossible to\n # edit a the child of it.\n parent_doc = Document.all_objects.get(\n locale=settings.WIKI_DEFAULT_LANGUAGE, slug=document_slug\n )\n except Document.DoesNotExist:\n raise Http404(\"Parent document does not exist\")\n\n # Get the mapping here and now so it can be used for input validation\n language_mapping = get_language_mapping()\n\n # HACK: Seems weird, but sticking the translate-to locale in a query\n # param is the best way to avoid the MindTouch-legacy locale\n # redirection logic.\n document_locale = request.GET.get(\"tolocale\", document_locale)\n if document_locale.lower() not in language_mapping:\n # The 'tolocale' query string parameters aren't free-text. They're\n # explicitly listed on the \"Select language\" page (`...$locales`)\n # If a locale was entered that wasn't a link it's a user bug.\n raise Http404\n\n # Set a \"Discard Changes\" page\n discard_href = \"\"\n\n if settings.WIKI_DEFAULT_LANGUAGE == document_locale:\n # Don't translate to the default language.\n return redirect(\n reverse(\n \"wiki.edit\",\n locale=settings.WIKI_DEFAULT_LANGUAGE,\n args=[parent_doc.slug],\n )\n )\n\n if not parent_doc.is_localizable:\n message = _(\"You cannot translate this document.\")\n context = {\"message\": message}\n return render(request, \"handlers/400.html\", context, status=400)\n\n based_on_rev = parent_doc.current_or_latest_revision()\n\n disclose_description = bool(request.GET.get(\"opendescription\"))\n\n try:\n doc = parent_doc.translations.get(locale=document_locale)\n slug_dict = split_slug(doc.slug)\n except Document.DoesNotExist:\n doc = None\n disclose_description = True\n slug_dict = split_slug(document_slug)\n\n # Find the \"real\" parent topic, which is its translation\n if parent_doc.parent_topic:\n try:\n parent_topic_translated_doc = parent_doc.parent_topic.translations.get(\n locale=document_locale\n )\n slug_dict = split_slug(\n parent_topic_translated_doc.slug + \"/\" + slug_dict[\"specific\"]\n )\n except ObjectDoesNotExist:\n pass\n\n user_has_doc_perm = (not doc) or (doc and doc.allows_editing_by(request.user))\n\n doc_form = None\n if user_has_doc_perm:\n if doc:\n # If there's an existing doc, populate form from it.\n discard_href = doc.get_absolute_url()\n doc.slug = slug_dict[\"specific\"]\n doc_initial = document_form_initial(doc)\n else:\n # If no existing doc, bring over the original title and slug.\n discard_href = parent_doc.get_absolute_url()\n doc_initial = {\"title\": based_on_rev.title, \"slug\": slug_dict[\"specific\"]}\n doc_form = DocumentForm(initial=doc_initial, parent_slug=slug_dict[\"parent\"])\n\n initial = {\n \"based_on\": based_on_rev.id,\n \"current_rev\": doc.current_or_latest_revision().id if doc else None,\n \"comment\": \"\",\n \"toc_depth\": based_on_rev.toc_depth,\n \"localization_tags\": [\"inprogress\"],\n }\n content = None\n if not doc:\n content = based_on_rev.content\n if content:\n # TODO: There will be no need to \"filterEditorSafety\" when the code\n # that calls \"clean_content\" on Revision.save is deployed to\n # production, AND the current revisions of all docs have had\n # their content cleaned with 
\"clean_content\".\n initial.update(\n content=kuma.wiki.content.parse(content).filterEditorSafety().serialize()\n )\n instance = doc and doc.current_or_latest_revision()\n rev_form = RevisionForm(\n request=request,\n instance=instance,\n initial=initial,\n parent_slug=slug_dict[\"parent\"],\n )\n\n if request.method == \"POST\":\n which_form = request.POST.get(\"form-type\", \"both\")\n doc_form_invalid = False\n\n # Grab the posted slug value in case it's invalid\n posted_slug = request.POST.get(\"slug\", slug_dict[\"specific\"])\n\n if user_has_doc_perm and which_form in [\"doc\", \"both\"]:\n disclose_description = True\n post_data = request.POST.copy()\n\n post_data.update({\"locale\": document_locale})\n\n doc_form = DocumentForm(\n post_data, instance=doc, parent_slug=slug_dict[\"parent\"]\n )\n doc_form.instance.locale = document_locale\n doc_form.instance.parent = parent_doc\n\n if which_form == \"both\":\n # Sending a new copy of post so the slug change above\n # doesn't cause problems during validation\n rev_form = RevisionForm(\n request=request, data=post_data, parent_slug=slug_dict[\"parent\"]\n )\n\n # If we are submitting the whole form, we need to check that\n # the Revision is valid before saving the Document.\n if doc_form.is_valid() and (which_form == \"doc\" or rev_form.is_valid()):\n doc = doc_form.save(parent=parent_doc)\n\n if which_form == \"doc\":\n url = urlparams(doc.get_edit_url(), opendescription=1)\n return redirect(url)\n else:\n doc_form.data[\"slug\"] = posted_slug\n doc_form_invalid = True\n\n if doc and which_form in [\"rev\", \"both\"]:\n post_data = request.POST.copy()\n if \"slug\" not in post_data:\n post_data[\"slug\"] = posted_slug\n\n # update the post data with the toc_depth of original\n post_data[\"toc_depth\"] = based_on_rev.toc_depth\n\n # Pass in the locale for the akistmet \"blog_lang\".\n post_data[\"locale\"] = document_locale\n\n rev_form = RevisionForm(\n request=request, data=post_data, parent_slug=slug_dict[\"parent\"]\n )\n rev_form.instance.document = doc # for rev_form.clean()\n\n if rev_form.is_valid() and not doc_form_invalid:\n parent_id = request.POST.get(\"parent_id\", \"\")\n\n # Attempt to set a parent\n if parent_id:\n try:\n try:\n parent_doc = Document.all_objects.get(id=parent_id)\n except Document.DoesNotExist:\n raise Http404(\"Parent document does not exist\")\n rev_form.instance.document.parent = parent_doc\n doc.parent = parent_doc\n rev_form.instance.based_on.document = doc.original\n except Document.DoesNotExist:\n pass\n\n rev_form.save(doc)\n # If this is an Ajax POST, then return a JsonResponse\n if request.is_ajax():\n data = {\n \"error\": False,\n \"new_revision_id\": rev_form.instance.id,\n }\n\n return JsonResponse(data)\n\n # Construct the redirect URL, adding any needed parameters\n url = doc.get_absolute_url()\n params = {}\n # Parameter for the document saved, so that we can delete the cached draft on load\n params[\"rev_saved\"] = request.POST.get(\"current_rev\", \"\")\n url = \"%s?%s\" % (url, urlencode(params))\n return redirect(url)\n else:\n # If this is an Ajax POST, then return a JsonResponse with error\n if request.is_ajax():\n if \"current_rev\" in rev_form._errors:\n # Make the error message safe so the '<' and '>' don't\n # get turned into '<' and '>', respectively\n rev_form.errors[\"current_rev\"][0] = mark_safe(\n rev_form.errors[\"current_rev\"][0]\n )\n errors = [rev_form.errors[key][0] for key in rev_form.errors.keys()]\n data = {\n \"error\": True,\n \"error_message\": errors,\n 
\"new_revision_id\": rev_form.instance.id,\n }\n return JsonResponse(data=data)\n\n if doc:\n from_id = smart_int(request.GET.get(\"from\"), None)\n to_id = smart_int(request.GET.get(\"to\"), None)\n\n revision_from = get_object_or_none(Revision, pk=from_id, document=doc.parent)\n revision_to = get_object_or_none(Revision, pk=to_id, document=doc.parent)\n else:\n revision_from = revision_to = None\n\n parent_split = split_slug(parent_doc.slug)\n\n language = language_mapping[document_locale.lower()]\n default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]\n\n context = {\n \"parent\": parent_doc,\n \"document\": doc,\n \"document_form\": doc_form,\n \"revision_form\": rev_form,\n \"locale\": document_locale,\n \"default_locale\": default_locale,\n \"language\": language,\n \"based_on\": based_on_rev,\n \"disclose_description\": disclose_description,\n \"discard_href\": discard_href,\n \"attachment_form\": AttachmentRevisionForm(),\n \"specific_slug\": parent_split[\"specific\"],\n \"parent_slug\": parent_split[\"parent\"],\n \"revision_from\": revision_from,\n \"revision_to\": revision_to,\n }\n return render(request, \"wiki/translate.html\", context)\n", "path": "kuma/wiki/views/translate.py"}], "after_files": [{"content": "from urllib.parse import urlencode\n\nfrom csp.decorators import csp_update\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.http import Http404, JsonResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.decorators.cache import never_cache\n\nimport kuma.wiki.content\nfrom kuma.attachments.forms import AttachmentRevisionForm\nfrom kuma.core.decorators import block_user_agents, ensure_wiki_domain, login_required\nfrom kuma.core.i18n import get_language_mapping\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.core.utils import get_object_or_none, smart_int, urlparams\n\nfrom .utils import document_form_initial, split_slug\nfrom ..decorators import check_readonly, prevent_indexing, process_document_path\nfrom ..forms import DocumentForm, RevisionForm\nfrom ..models import Document, Revision\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@process_document_path\ndef select_locale(request, document_slug, document_locale):\n \"\"\"\n Select a locale to translate the document to.\n \"\"\"\n doc = get_object_or_404(Document, locale=document_locale, slug=document_slug)\n return render(request, \"wiki/select_locale.html\", {\"document\": doc})\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@csp_update(SCRIPT_SRC=\"'unsafe-eval'\") # Required until CKEditor 4.7\n@process_document_path\n@check_readonly\n@prevent_indexing\ndef translate(request, document_slug, document_locale):\n \"\"\"\n Create a new translation of a wiki document.\n\n * document_slug is for the default locale\n * translation is to the request locale\n \"\"\"\n # TODO: Refactor this view into two views? 
(new, edit)\n # That might help reduce the headache-inducing branchiness.\n\n # The parent document to translate from\n try:\n # Use '.all_objects' because the parent might have been soft deleted.\n # And if we don't respect that fact, it would become impossible to\n # edit a the child of it.\n parent_doc = Document.all_objects.get(\n locale=settings.WIKI_DEFAULT_LANGUAGE, slug=document_slug\n )\n except Document.DoesNotExist:\n raise Http404(\"Parent document does not exist\")\n\n # Get the mapping here and now so it can be used for input validation\n language_mapping = get_language_mapping()\n\n # HACK: Seems weird, but sticking the translate-to locale in a query\n # param is the best way to avoid the MindTouch-legacy locale\n # redirection logic.\n document_locale = request.GET.get(\"tolocale\", document_locale)\n if document_locale.lower() not in language_mapping:\n # The 'tolocale' query string parameters aren't free-text. They're\n # explicitly listed on the \"Select language\" page (`...$locales`)\n # If a locale was entered that wasn't a link it's a user bug.\n raise Http404\n\n # Set a \"Discard Changes\" page\n discard_href = \"\"\n\n if settings.WIKI_DEFAULT_LANGUAGE == document_locale:\n # Don't translate to the default language.\n return redirect(\n reverse(\n \"wiki.edit\",\n locale=settings.WIKI_DEFAULT_LANGUAGE,\n args=[parent_doc.slug],\n )\n )\n\n if not parent_doc.is_localizable:\n message = _(\"You cannot translate this document.\")\n context = {\"message\": message}\n return render(request, \"handlers/400.html\", context, status=400)\n\n based_on_rev = parent_doc.current_or_latest_revision()\n\n disclose_description = bool(request.GET.get(\"opendescription\"))\n\n try:\n doc = parent_doc.translations.get(locale=document_locale)\n slug_dict = split_slug(doc.slug)\n except Document.DoesNotExist:\n doc = None\n disclose_description = True\n slug_dict = split_slug(document_slug)\n\n # Find the \"real\" parent topic, which is its translation\n if parent_doc.parent_topic:\n try:\n parent_topic_translated_doc = parent_doc.parent_topic.translations.get(\n locale=document_locale\n )\n slug_dict = split_slug(\n parent_topic_translated_doc.slug + \"/\" + slug_dict[\"specific\"]\n )\n except ObjectDoesNotExist:\n pass\n\n user_has_doc_perm = (not doc) or (doc and doc.allows_editing_by(request.user))\n\n doc_form = None\n if user_has_doc_perm:\n if doc:\n # If there's an existing doc, populate form from it.\n discard_href = doc.get_absolute_url()\n doc.slug = slug_dict[\"specific\"]\n doc_initial = document_form_initial(doc)\n else:\n # If no existing doc, bring over the original title and slug.\n discard_href = parent_doc.get_absolute_url()\n doc_initial = {\"title\": based_on_rev.title, \"slug\": slug_dict[\"specific\"]}\n doc_form = DocumentForm(initial=doc_initial, parent_slug=slug_dict[\"parent\"])\n\n initial = {\n \"based_on\": based_on_rev.id,\n \"current_rev\": doc.current_or_latest_revision().id if doc else None,\n \"comment\": \"\",\n \"toc_depth\": based_on_rev.toc_depth,\n \"localization_tags\": [\"inprogress\"],\n }\n content = None\n if not doc:\n content = based_on_rev.content\n if content:\n # TODO: There will be no need to \"filterEditorSafety\" when the code\n # that calls \"clean_content\" on Revision.save is deployed to\n # production, AND the current revisions of all docs have had\n # their content cleaned with \"clean_content\".\n initial.update(\n content=kuma.wiki.content.parse(content).filterEditorSafety().serialize()\n )\n instance = doc and 
doc.current_or_latest_revision()\n rev_form = RevisionForm(\n request=request,\n instance=instance,\n initial=initial,\n parent_slug=slug_dict[\"parent\"],\n )\n\n if request.method == \"POST\":\n which_form = request.POST.get(\"form-type\", \"both\")\n doc_form_invalid = False\n\n # Grab the posted slug value in case it's invalid\n posted_slug = request.POST.get(\"slug\", slug_dict[\"specific\"])\n\n if user_has_doc_perm and which_form in [\"doc\", \"both\"]:\n disclose_description = True\n post_data = request.POST.copy()\n\n post_data.update({\"locale\": document_locale})\n\n doc_form = DocumentForm(\n post_data, instance=doc, parent_slug=slug_dict[\"parent\"]\n )\n doc_form.instance.locale = document_locale\n doc_form.instance.parent = parent_doc\n\n if which_form == \"both\":\n # Sending a new copy of post so the slug change above\n # doesn't cause problems during validation\n rev_form = RevisionForm(\n request=request, data=post_data, parent_slug=slug_dict[\"parent\"]\n )\n\n # If we are submitting the whole form, we need to check that\n # the Revision is valid before saving the Document.\n if doc_form.is_valid() and (which_form == \"doc\" or rev_form.is_valid()):\n\n # If the document you're about to save already exists, as a\n # soft-delete, then really delete it first.\n for soft_deleted_document in Document.deleted_objects.filter(\n locale=doc_form.cleaned_data[\"locale\"],\n slug=doc_form.cleaned_data[\"slug\"],\n ):\n soft_deleted_document.delete(purge=True)\n\n doc = doc_form.save(parent=parent_doc)\n\n if which_form == \"doc\":\n url = urlparams(doc.get_edit_url(), opendescription=1)\n return redirect(url)\n else:\n doc_form.data[\"slug\"] = posted_slug\n doc_form_invalid = True\n\n if doc and which_form in [\"rev\", \"both\"]:\n post_data = request.POST.copy()\n if \"slug\" not in post_data:\n post_data[\"slug\"] = posted_slug\n\n # update the post data with the toc_depth of original\n post_data[\"toc_depth\"] = based_on_rev.toc_depth\n\n # Pass in the locale for the akistmet \"blog_lang\".\n post_data[\"locale\"] = document_locale\n\n rev_form = RevisionForm(\n request=request, data=post_data, parent_slug=slug_dict[\"parent\"]\n )\n rev_form.instance.document = doc # for rev_form.clean()\n\n if rev_form.is_valid() and not doc_form_invalid:\n parent_id = request.POST.get(\"parent_id\", \"\")\n\n # Attempt to set a parent\n if parent_id:\n try:\n try:\n parent_doc = Document.all_objects.get(id=parent_id)\n except Document.DoesNotExist:\n raise Http404(\"Parent document does not exist\")\n rev_form.instance.document.parent = parent_doc\n doc.parent = parent_doc\n rev_form.instance.based_on.document = doc.original\n except Document.DoesNotExist:\n pass\n\n rev_form.save(doc)\n # If this is an Ajax POST, then return a JsonResponse\n if request.is_ajax():\n data = {\n \"error\": False,\n \"new_revision_id\": rev_form.instance.id,\n }\n\n return JsonResponse(data)\n\n # Construct the redirect URL, adding any needed parameters\n url = doc.get_absolute_url()\n params = {}\n # Parameter for the document saved, so that we can delete the cached draft on load\n params[\"rev_saved\"] = request.POST.get(\"current_rev\", \"\")\n url = \"%s?%s\" % (url, urlencode(params))\n return redirect(url)\n else:\n # If this is an Ajax POST, then return a JsonResponse with error\n if request.is_ajax():\n if \"current_rev\" in rev_form._errors:\n # Make the error message safe so the '<' and '>' don't\n # get turned into '<' and '>', respectively\n rev_form.errors[\"current_rev\"][0] = mark_safe(\n 
rev_form.errors[\"current_rev\"][0]\n )\n errors = [rev_form.errors[key][0] for key in rev_form.errors.keys()]\n data = {\n \"error\": True,\n \"error_message\": errors,\n \"new_revision_id\": rev_form.instance.id,\n }\n return JsonResponse(data=data)\n\n if doc:\n from_id = smart_int(request.GET.get(\"from\"), None)\n to_id = smart_int(request.GET.get(\"to\"), None)\n\n revision_from = get_object_or_none(Revision, pk=from_id, document=doc.parent)\n revision_to = get_object_or_none(Revision, pk=to_id, document=doc.parent)\n else:\n revision_from = revision_to = None\n\n parent_split = split_slug(parent_doc.slug)\n\n language = language_mapping[document_locale.lower()]\n default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]\n\n context = {\n \"parent\": parent_doc,\n \"document\": doc,\n \"document_form\": doc_form,\n \"revision_form\": rev_form,\n \"locale\": document_locale,\n \"default_locale\": default_locale,\n \"language\": language,\n \"based_on\": based_on_rev,\n \"disclose_description\": disclose_description,\n \"discard_href\": discard_href,\n \"attachment_form\": AttachmentRevisionForm(),\n \"specific_slug\": parent_split[\"specific\"],\n \"parent_slug\": parent_split[\"parent\"],\n \"revision_from\": revision_from,\n \"revision_to\": revision_to,\n }\n return render(request, \"wiki/translate.html\", context)\n", "path": "kuma/wiki/views/translate.py"}]}
| 3,689 | 201 |
gh_patches_debug_29718
|
rasdani/github-patches
|
git_diff
|
keras-team__autokeras-860
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
installation issue o
**My environment is a Jetson Xavier**
Jetpack 4.2
Ubuntu 18.04
python version is 3.6.8
relevant pip.list
dask (2.8.0)
dask-glm (0.2.0)
dask-ml (1.1.1)
Keras (2.3.1)
Keras-Applications (1.0.8)
Keras-Preprocessing (1.1.0)
numba (0.46.0)
numpy (1.17.4)
packaging (19.2)
pandas (0.25.3)
pandas-flavor (0.2.0)
pip (19.3.1)
scikit-learn (0.21.3)
scikit-MDR (0.4.4)
scipy (1.3.2)
tensorboard (1.14.0)
tensorflow-estimator (1.14.0)
**tensorflow-gpu (1.14.0+nv19.10)**
**error messages**
pip install git+git://github.com/keras-team/autokeras@master#egg=autokeras
Collecting autokeras
Cloning git://github.com/keras-team/autokeras (to revision master) to /tmp/pip-install-du3y560o/autokeras
Running command git clone -q git://github.com/keras-team/autokeras /tmp/pip-install-du3y560o/autokeras
ERROR: Could not find a version that satisfies the requirement tensorflow (from autokeras) (from versions: none)
ERROR: No matching distribution found for tensorflow (from autokeras)
**Then I tried**
pip install autokeras
Collecting autokeras
Downloading https://files.pythonhosted.org/packages/c2/32/de74bf6afd09925980340355a05aa6a19e7378ed91dac09e76a487bd136d/autokeras-0.4.0.tar.gz (67kB)
|████████████████████████████████| 71kB 3.8MB/s
Collecting scipy==1.2.0
Downloading https://files.pythonhosted.org/packages/ea/c8/c296904f2c852c5c129962e6ca4ba467116b08cd5b54b7180b2e77fe06b2/scipy-1.2.0.tar.gz (23.3MB)
|████████████████████████████████| 23.3MB 12.6MB/s
ERROR: Could not find a version that satisfies the requirement tensorflow==1.13.1 (from autokeras) (from versions: none)
**ERROR: No matching distribution found for tensorflow==1.13.1 (from autokeras)**
I have tried downgrading to tensorflow-gpu==1.13.1, but I get the same error message:
ERROR: No matching distribution found for tensorflow==1.13.1 (from autokeras)
My hunch is that autokeras requires `tensorflow` rather than accepting `tensorflow-gpu`; any thoughts on how to fix this?
thanks in advance for your assistance
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `autokeras/__init__.py`
Content:
```
1 from autokeras.auto_model import AutoModel
2 from autokeras.const import Constant
3 from autokeras.hypermodel.base import Block
4 from autokeras.hypermodel.base import Head
5 from autokeras.hypermodel.base import HyperBlock
6 from autokeras.hypermodel.base import Node
7 from autokeras.hypermodel.base import Preprocessor
8 from autokeras.hypermodel.block import ConvBlock
9 from autokeras.hypermodel.block import DenseBlock
10 from autokeras.hypermodel.block import EmbeddingBlock
11 from autokeras.hypermodel.block import Merge
12 from autokeras.hypermodel.block import ResNetBlock
13 from autokeras.hypermodel.block import RNNBlock
14 from autokeras.hypermodel.block import SpatialReduction
15 from autokeras.hypermodel.block import TemporalReduction
16 from autokeras.hypermodel.block import XceptionBlock
17 from autokeras.hypermodel.head import ClassificationHead
18 from autokeras.hypermodel.head import RegressionHead
19 from autokeras.hypermodel.hyperblock import ImageBlock
20 from autokeras.hypermodel.hyperblock import StructuredDataBlock
21 from autokeras.hypermodel.hyperblock import TextBlock
22 from autokeras.hypermodel.node import ImageInput
23 from autokeras.hypermodel.node import Input
24 from autokeras.hypermodel.node import StructuredDataInput
25 from autokeras.hypermodel.node import TextInput
26 from autokeras.hypermodel.preprocessor import FeatureEngineering
27 from autokeras.hypermodel.preprocessor import ImageAugmentation
28 from autokeras.hypermodel.preprocessor import LightGBM
29 from autokeras.hypermodel.preprocessor import Normalization
30 from autokeras.hypermodel.preprocessor import TextToIntSequence
31 from autokeras.hypermodel.preprocessor import TextToNgramVector
32 from autokeras.task import ImageClassifier
33 from autokeras.task import ImageRegressor
34 from autokeras.task import StructuredDataClassifier
35 from autokeras.task import StructuredDataRegressor
36 from autokeras.task import TextClassifier
37 from autokeras.task import TextRegressor
38
```
Path: `setup.py`
Content:
```
1 from distutils.core import setup
2 from pathlib import Path
3
4 from setuptools import find_packages
5
6 this_file = Path(__file__).resolve()
7 readme = this_file.parent / 'README.md'
8
9 setup(
10 name='autokeras',
11 version='1.0.0a0',
12 description='AutoML for deep learning',
13 package_data={'': ['README.md']},
14 long_description=readme.read_text(encoding='utf-8'),
15 long_description_content_type='text/markdown',
16 author='Data Analytics at Texas A&M (DATA) Lab, Keras Team',
17 author_email='[email protected]',
18 url='http://autokeras.com',
19 download_url='https://github.com/keras-team/autokeras/archive/1.0.0a0.tar.gz',
20 keywords=['AutoML', 'keras'],
21 install_requires=[
22 'tensorflow>=2.0.0',
23 'keras-tuner>=1.0.0',
24 'scikit-learn',
25 'numpy',
26 'pandas',
27 'lightgbm',
28 ],
29 extras_require={
30 'tests': ['pytest>=4.4.0',
31 'flake8',
32 'pytest-xdist',
33 'pytest-cov',
34 # can be removed once coveralls is compatible with
35 # coverage 5.0
36 'coverage==4.5.4'
37 ],
38 },
39 packages=find_packages(exclude=('tests',)),
40 )
41
```
Path: `autokeras/utils.py`
Content:
```
1 import pickle
2 import re
3
4 import numpy as np
5 import tensorflow as tf
6 from tensorflow.python.util import nest
7
8
9 def get_global_average_pooling(shape):
10 return [tf.keras.layers.GlobalAveragePooling1D,
11 tf.keras.layers.GlobalAveragePooling2D,
12 tf.keras.layers.GlobalAveragePooling3D][len(shape) - 3]
13
14
15 def get_global_max_pooling(shape):
16 return [tf.keras.layers.GlobalMaxPool1D,
17 tf.keras.layers.GlobalMaxPool2D,
18 tf.keras.layers.GlobalMaxPool3D][len(shape) - 3]
19
20
21 def get_max_pooling(shape):
22 return [tf.keras.layers.MaxPool1D,
23 tf.keras.layers.MaxPool2D,
24 tf.keras.layers.MaxPool3D][len(shape) - 3]
25
26
27 def get_conv(shape):
28 return [tf.keras.layers.Conv1D,
29 tf.keras.layers.Conv2D,
30 tf.keras.layers.Conv3D][len(shape) - 3]
31
32
33 def get_sep_conv(shape):
34 return [tf.keras.layers.SeparableConv1D,
35 tf.keras.layers.SeparableConv2D,
36 tf.keras.layers.Conv3D][len(shape) - 3]
37
38
39 def get_dropout(shape):
40 return [tf.keras.layers.SpatialDropout1D,
41 tf.keras.layers.SpatialDropout2D,
42 tf.keras.layers.SpatialDropout3D][len(shape) - 3]
43
44
45 def validate_num_inputs(inputs, num):
46 inputs = nest.flatten(inputs)
47 if not len(inputs) == num:
48 raise ValueError('Expected {num} elements in the inputs list '
49 'but received {len} inputs.'.format(num=num,
50 len=len(inputs)))
51
52
53 def split_dataset(dataset, validation_split):
54 """Split dataset into training and validation.
55
56 # Arguments
57 dataset: tf.data.Dataset. The entire dataset to be split.
58 validation_split: Float. The split ratio for the validation set.
59
60 # Raises
61 ValueError: If the dataset provided is too small to be split.
62
63 # Returns
64 A tuple of two tf.data.Dataset. The training set and the validation set.
65 """
66 num_instances = dataset.reduce(np.int64(0), lambda x, _: x + 1).numpy()
67 if num_instances < 2:
68 raise ValueError('The dataset should at least contain 2 '
69 'instances to be split.')
70 validation_set_size = min(
71 max(int(num_instances * validation_split), 1),
72 num_instances - 1)
73 train_set_size = num_instances - validation_set_size
74 train_dataset = dataset.take(train_set_size)
75 validation_dataset = dataset.skip(train_set_size)
76 return train_dataset, validation_dataset
77
78
79 def get_name_scope():
80 with tf.name_scope('a') as scope:
81 name_scope = scope[:-2]
82 return name_scope
83
84
85 def dataset_shape(dataset):
86 return tf.compat.v1.data.get_output_shapes(dataset)
87
88
89 def is_label(y):
90 """Check if the targets are one-hot encoded or plain labels.
91
92 # Arguments
93 y: numpy.ndarray. The targets.
94
95 # Returns
96 Boolean. Whether the targets are plain label, not encoded.
97 """
98 return len(y.flatten()) == len(y)
99
100
101 def pickle_from_file(path):
102 """Load the pickle file from the provided path and returns the object."""
103 return pickle.load(open(path, 'rb'))
104
105
106 def pickle_to_file(obj, path):
107 """Save the pickle file to the specified path."""
108 pickle.dump(obj, open(path, 'wb'))
109
110
111 def to_snake_case(name):
112 intermediate = re.sub('(.)([A-Z][a-z0-9]+)', r'\1_\2', name)
113 insecure = re.sub('([a-z])([A-Z])', r'\1_\2', intermediate).lower()
114 # If the class is private the name starts with "_" which is not secure
115 # for creating scopes. We prefix the name with "private" in this case.
116 if insecure[0] != '_':
117 return insecure
118 return 'private' + insecure
119
120
121 def to_type_key(dictionary, convert_func):
122 """Convert the keys of a dictionary to a different type.
123
124 # Arguments
125 dictionary: Dictionary. The dictionary to be converted.
126 convert_func: Function. The function to convert a key.
127 """
128 return {convert_func(key): value
129 for key, value in dictionary.items()}
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/autokeras/__init__.py b/autokeras/__init__.py
--- a/autokeras/__init__.py
+++ b/autokeras/__init__.py
@@ -35,3 +35,6 @@
from autokeras.task import StructuredDataRegressor
from autokeras.task import TextClassifier
from autokeras.task import TextRegressor
+
+from .utils import check_tf_version
+check_tf_version()
diff --git a/autokeras/utils.py b/autokeras/utils.py
--- a/autokeras/utils.py
+++ b/autokeras/utils.py
@@ -1,5 +1,6 @@
import pickle
import re
+from packaging.version import parse
import numpy as np
import tensorflow as tf
@@ -127,3 +128,16 @@
"""
return {convert_func(key): value
for key, value in dictionary.items()}
+
+
+def check_tf_version():
+ if parse(tf.__version__) < parse('2.0.0'):
+ raise ImportError(
+ f'The Tensorflow package version needs to be at least v2.0.0 \n'
+ f'for AutoKeras to run. Currently, your TensorFlow version is \n'
+ f'v{tf.__version__}. Please upgrade with \n'
+ f'`$ pip install --upgrade tensorflow` -> GPU version \n'
+ f'or \n'
+ f'`$ pip install --upgrade tensorflow-cpu` -> CPU version. \n'
+ f'You can use `pip freeze` to check afterwards that everything is ok.'
+ )
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,8 +18,9 @@
url='http://autokeras.com',
download_url='https://github.com/keras-team/autokeras/archive/1.0.0a0.tar.gz',
keywords=['AutoML', 'keras'],
+ # TODO: Do not install tensorflow if tensorflow-gpu is installed.
install_requires=[
- 'tensorflow>=2.0.0',
+ 'packaging',
'keras-tuner>=1.0.0',
'scikit-learn',
'numpy',
|
{"golden_diff": "diff --git a/autokeras/__init__.py b/autokeras/__init__.py\n--- a/autokeras/__init__.py\n+++ b/autokeras/__init__.py\n@@ -35,3 +35,6 @@\n from autokeras.task import StructuredDataRegressor\n from autokeras.task import TextClassifier\n from autokeras.task import TextRegressor\n+\n+from .utils import check_tf_version\n+check_tf_version()\ndiff --git a/autokeras/utils.py b/autokeras/utils.py\n--- a/autokeras/utils.py\n+++ b/autokeras/utils.py\n@@ -1,5 +1,6 @@\n import pickle\n import re\n+from packaging.version import parse\n \n import numpy as np\n import tensorflow as tf\n@@ -127,3 +128,16 @@\n \"\"\"\n return {convert_func(key): value\n for key, value in dictionary.items()}\n+\n+\n+def check_tf_version():\n+ if parse(tf.__version__) < parse('2.0.0'):\n+ raise ImportError(\n+ f'The Tensorflow package version needs to be at least v2.0.0 \\n'\n+ f'for AutoKeras to run. Currently, your TensorFlow version is \\n'\n+ f'v{tf.__version__}. Please upgrade with \\n'\n+ f'`$ pip install --upgrade tensorflow` -> GPU version \\n'\n+ f'or \\n'\n+ f'`$ pip install --upgrade tensorflow-cpu` -> CPU version. \\n'\n+ f'You can use `pip freeze` to check afterwards that everything is ok.'\n+ )\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,8 +18,9 @@\n url='http://autokeras.com',\n download_url='https://github.com/keras-team/autokeras/archive/1.0.0a0.tar.gz',\n keywords=['AutoML', 'keras'],\n+ # TODO: Do not install tensorflow if tensorflow-gpu is installed.\n install_requires=[\n- 'tensorflow>=2.0.0',\n+ 'packaging',\n 'keras-tuner>=1.0.0',\n 'scikit-learn',\n 'numpy',\n", "issue": "installation issue o\n**My environment is a Jetson Xavier** \r\nJetpack 4.2 \r\nUbuntu 18.04\r\npython version is 3.6.8\r\nrelevant pip.list\r\ndask (2.8.0)\r\ndask-glm (0.2.0)\r\ndask-ml (1.1.1)\r\nKeras (2.3.1)\r\nKeras-Applications (1.0.8)\r\nKeras-Preprocessing (1.1.0)\r\nnumba (0.46.0)\r\nnumpy (1.17.4)\r\npackaging (19.2)\r\npandas (0.25.3)\r\npandas-flavor (0.2.0)\r\npip (19.3.1)\r\nscikit-learn (0.21.3)\r\nscikit-MDR (0.4.4)\r\nscipy (1.3.2)\r\ntensorboard (1.14.0)\r\ntensorflow-estimator (1.14.0)\r\n**tensorflow-gpu (1.14.0+nv19.10)**\r\n\r\n**error messages**\r\npip install git+git://github.com/keras-team/autokeras@master#egg=autokeras\r\nCollecting autokeras\r\nCloning git://github.com/keras-team/autokeras (to revision master) to /tmp/pip-install-du3y560o/autokeras\r\nRunning command git clone -q git://github.com/keras-team/autokeras /tmp/pip-install-du3y560o/autokeras\r\nERROR: Could not find a version that satisfies the requirement tensorflow (from autokeras) (from versions: none)\r\nERROR: No matching distribution found for tensorflow (from autokeras)\r\n**Then I tried**\r\npip install autokeras\r\nCollecting autokeras\r\n Downloading https://files.pythonhosted.org/packages/c2/32/de74bf6afd09925980340355a05aa6a19e7378ed91dac09e76a487bd136d/autokeras-0.4.0.tar.gz (67kB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 71kB 3.8MB/s \r\nCollecting scipy==1.2.0\r\n Downloading https://files.pythonhosted.org/packages/ea/c8/c296904f2c852c5c129962e6ca4ba467116b08cd5b54b7180b2e77fe06b2/scipy-1.2.0.tar.gz (23.3MB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 23.3MB 12.6MB/s \r\nERROR: 
Could not find a version that satisfies the requirement tensorflow==1.13.1 (from autokeras) (from versions: none)\r\n**ERROR: No matching distribution found for tensorflow==1.13.1 (from autokeras)**\r\n\r\nI have tried downgrading to tensorflow-gpu==1.13.1, but get the same error message\r\nERROR: No matching distribution found for tensorflow==1.13.1 (from autokeras)\r\n\r\nMy hunch is autokeras does not include tensorflow-gpu, thoughts on how to fix this?\r\n\r\nthanks in advance for your assistance\n", "before_files": [{"content": "from autokeras.auto_model import AutoModel\nfrom autokeras.const import Constant\nfrom autokeras.hypermodel.base import Block\nfrom autokeras.hypermodel.base import Head\nfrom autokeras.hypermodel.base import HyperBlock\nfrom autokeras.hypermodel.base import Node\nfrom autokeras.hypermodel.base import Preprocessor\nfrom autokeras.hypermodel.block import ConvBlock\nfrom autokeras.hypermodel.block import DenseBlock\nfrom autokeras.hypermodel.block import EmbeddingBlock\nfrom autokeras.hypermodel.block import Merge\nfrom autokeras.hypermodel.block import ResNetBlock\nfrom autokeras.hypermodel.block import RNNBlock\nfrom autokeras.hypermodel.block import SpatialReduction\nfrom autokeras.hypermodel.block import TemporalReduction\nfrom autokeras.hypermodel.block import XceptionBlock\nfrom autokeras.hypermodel.head import ClassificationHead\nfrom autokeras.hypermodel.head import RegressionHead\nfrom autokeras.hypermodel.hyperblock import ImageBlock\nfrom autokeras.hypermodel.hyperblock import StructuredDataBlock\nfrom autokeras.hypermodel.hyperblock import TextBlock\nfrom autokeras.hypermodel.node import ImageInput\nfrom autokeras.hypermodel.node import Input\nfrom autokeras.hypermodel.node import StructuredDataInput\nfrom autokeras.hypermodel.node import TextInput\nfrom autokeras.hypermodel.preprocessor import FeatureEngineering\nfrom autokeras.hypermodel.preprocessor import ImageAugmentation\nfrom autokeras.hypermodel.preprocessor import LightGBM\nfrom autokeras.hypermodel.preprocessor import Normalization\nfrom autokeras.hypermodel.preprocessor import TextToIntSequence\nfrom autokeras.hypermodel.preprocessor import TextToNgramVector\nfrom autokeras.task import ImageClassifier\nfrom autokeras.task import ImageRegressor\nfrom autokeras.task import StructuredDataClassifier\nfrom autokeras.task import StructuredDataRegressor\nfrom autokeras.task import TextClassifier\nfrom autokeras.task import TextRegressor\n", "path": "autokeras/__init__.py"}, {"content": "from distutils.core import setup\nfrom pathlib import Path\n\nfrom setuptools import find_packages\n\nthis_file = Path(__file__).resolve()\nreadme = this_file.parent / 'README.md'\n\nsetup(\n name='autokeras',\n version='1.0.0a0',\n description='AutoML for deep learning',\n package_data={'': ['README.md']},\n long_description=readme.read_text(encoding='utf-8'),\n long_description_content_type='text/markdown',\n author='Data Analytics at Texas A&M (DATA) Lab, Keras Team',\n author_email='[email protected]',\n url='http://autokeras.com',\n download_url='https://github.com/keras-team/autokeras/archive/1.0.0a0.tar.gz',\n keywords=['AutoML', 'keras'],\n install_requires=[\n 'tensorflow>=2.0.0',\n 'keras-tuner>=1.0.0',\n 'scikit-learn',\n 'numpy',\n 'pandas',\n 'lightgbm',\n ],\n extras_require={\n 'tests': ['pytest>=4.4.0',\n 'flake8',\n 'pytest-xdist',\n 'pytest-cov',\n # can be removed once coveralls is compatible with\n # coverage 5.0\n 'coverage==4.5.4'\n ],\n },\n packages=find_packages(exclude=('tests',)),\n)\n", 
"path": "setup.py"}, {"content": "import pickle\nimport re\n\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.python.util import nest\n\n\ndef get_global_average_pooling(shape):\n return [tf.keras.layers.GlobalAveragePooling1D,\n tf.keras.layers.GlobalAveragePooling2D,\n tf.keras.layers.GlobalAveragePooling3D][len(shape) - 3]\n\n\ndef get_global_max_pooling(shape):\n return [tf.keras.layers.GlobalMaxPool1D,\n tf.keras.layers.GlobalMaxPool2D,\n tf.keras.layers.GlobalMaxPool3D][len(shape) - 3]\n\n\ndef get_max_pooling(shape):\n return [tf.keras.layers.MaxPool1D,\n tf.keras.layers.MaxPool2D,\n tf.keras.layers.MaxPool3D][len(shape) - 3]\n\n\ndef get_conv(shape):\n return [tf.keras.layers.Conv1D,\n tf.keras.layers.Conv2D,\n tf.keras.layers.Conv3D][len(shape) - 3]\n\n\ndef get_sep_conv(shape):\n return [tf.keras.layers.SeparableConv1D,\n tf.keras.layers.SeparableConv2D,\n tf.keras.layers.Conv3D][len(shape) - 3]\n\n\ndef get_dropout(shape):\n return [tf.keras.layers.SpatialDropout1D,\n tf.keras.layers.SpatialDropout2D,\n tf.keras.layers.SpatialDropout3D][len(shape) - 3]\n\n\ndef validate_num_inputs(inputs, num):\n inputs = nest.flatten(inputs)\n if not len(inputs) == num:\n raise ValueError('Expected {num} elements in the inputs list '\n 'but received {len} inputs.'.format(num=num,\n len=len(inputs)))\n\n\ndef split_dataset(dataset, validation_split):\n \"\"\"Split dataset into training and validation.\n\n # Arguments\n dataset: tf.data.Dataset. The entire dataset to be split.\n validation_split: Float. The split ratio for the validation set.\n\n # Raises\n ValueError: If the dataset provided is too small to be split.\n\n # Returns\n A tuple of two tf.data.Dataset. The training set and the validation set.\n \"\"\"\n num_instances = dataset.reduce(np.int64(0), lambda x, _: x + 1).numpy()\n if num_instances < 2:\n raise ValueError('The dataset should at least contain 2 '\n 'instances to be split.')\n validation_set_size = min(\n max(int(num_instances * validation_split), 1),\n num_instances - 1)\n train_set_size = num_instances - validation_set_size\n train_dataset = dataset.take(train_set_size)\n validation_dataset = dataset.skip(train_set_size)\n return train_dataset, validation_dataset\n\n\ndef get_name_scope():\n with tf.name_scope('a') as scope:\n name_scope = scope[:-2]\n return name_scope\n\n\ndef dataset_shape(dataset):\n return tf.compat.v1.data.get_output_shapes(dataset)\n\n\ndef is_label(y):\n \"\"\"Check if the targets are one-hot encoded or plain labels.\n\n # Arguments\n y: numpy.ndarray. The targets.\n\n # Returns\n Boolean. Whether the targets are plain label, not encoded.\n \"\"\"\n return len(y.flatten()) == len(y)\n\n\ndef pickle_from_file(path):\n \"\"\"Load the pickle file from the provided path and returns the object.\"\"\"\n return pickle.load(open(path, 'rb'))\n\n\ndef pickle_to_file(obj, path):\n \"\"\"Save the pickle file to the specified path.\"\"\"\n pickle.dump(obj, open(path, 'wb'))\n\n\ndef to_snake_case(name):\n intermediate = re.sub('(.)([A-Z][a-z0-9]+)', r'\\1_\\2', name)\n insecure = re.sub('([a-z])([A-Z])', r'\\1_\\2', intermediate).lower()\n # If the class is private the name starts with \"_\" which is not secure\n # for creating scopes. We prefix the name with \"private\" in this case.\n if insecure[0] != '_':\n return insecure\n return 'private' + insecure\n\n\ndef to_type_key(dictionary, convert_func):\n \"\"\"Convert the keys of a dictionary to a different type.\n\n # Arguments\n dictionary: Dictionary. 
The dictionary to be converted.\n convert_func: Function. The function to convert a key.\n \"\"\"\n return {convert_func(key): value\n for key, value in dictionary.items()}\n", "path": "autokeras/utils.py"}], "after_files": [{"content": "from autokeras.auto_model import AutoModel\nfrom autokeras.const import Constant\nfrom autokeras.hypermodel.base import Block\nfrom autokeras.hypermodel.base import Head\nfrom autokeras.hypermodel.base import HyperBlock\nfrom autokeras.hypermodel.base import Node\nfrom autokeras.hypermodel.base import Preprocessor\nfrom autokeras.hypermodel.block import ConvBlock\nfrom autokeras.hypermodel.block import DenseBlock\nfrom autokeras.hypermodel.block import EmbeddingBlock\nfrom autokeras.hypermodel.block import Merge\nfrom autokeras.hypermodel.block import ResNetBlock\nfrom autokeras.hypermodel.block import RNNBlock\nfrom autokeras.hypermodel.block import SpatialReduction\nfrom autokeras.hypermodel.block import TemporalReduction\nfrom autokeras.hypermodel.block import XceptionBlock\nfrom autokeras.hypermodel.head import ClassificationHead\nfrom autokeras.hypermodel.head import RegressionHead\nfrom autokeras.hypermodel.hyperblock import ImageBlock\nfrom autokeras.hypermodel.hyperblock import StructuredDataBlock\nfrom autokeras.hypermodel.hyperblock import TextBlock\nfrom autokeras.hypermodel.node import ImageInput\nfrom autokeras.hypermodel.node import Input\nfrom autokeras.hypermodel.node import StructuredDataInput\nfrom autokeras.hypermodel.node import TextInput\nfrom autokeras.hypermodel.preprocessor import FeatureEngineering\nfrom autokeras.hypermodel.preprocessor import ImageAugmentation\nfrom autokeras.hypermodel.preprocessor import LightGBM\nfrom autokeras.hypermodel.preprocessor import Normalization\nfrom autokeras.hypermodel.preprocessor import TextToIntSequence\nfrom autokeras.hypermodel.preprocessor import TextToNgramVector\nfrom autokeras.task import ImageClassifier\nfrom autokeras.task import ImageRegressor\nfrom autokeras.task import StructuredDataClassifier\nfrom autokeras.task import StructuredDataRegressor\nfrom autokeras.task import TextClassifier\nfrom autokeras.task import TextRegressor\n\nfrom .utils import check_tf_version\ncheck_tf_version()\n", "path": "autokeras/__init__.py"}, {"content": "from distutils.core import setup\nfrom pathlib import Path\n\nfrom setuptools import find_packages\n\nthis_file = Path(__file__).resolve()\nreadme = this_file.parent / 'README.md'\n\nsetup(\n name='autokeras',\n version='1.0.0a0',\n description='AutoML for deep learning',\n package_data={'': ['README.md']},\n long_description=readme.read_text(encoding='utf-8'),\n long_description_content_type='text/markdown',\n author='Data Analytics at Texas A&M (DATA) Lab, Keras Team',\n author_email='[email protected]',\n url='http://autokeras.com',\n download_url='https://github.com/keras-team/autokeras/archive/1.0.0a0.tar.gz',\n keywords=['AutoML', 'keras'],\n # TODO: Do not install tensorflow if tensorflow-gpu is installed.\n install_requires=[\n 'packaging',\n 'keras-tuner>=1.0.0',\n 'scikit-learn',\n 'numpy',\n 'pandas',\n 'lightgbm',\n ],\n extras_require={\n 'tests': ['pytest>=4.4.0',\n 'flake8',\n 'pytest-xdist',\n 'pytest-cov',\n # can be removed once coveralls is compatible with\n # coverage 5.0\n 'coverage==4.5.4'\n ],\n },\n packages=find_packages(exclude=('tests',)),\n)\n", "path": "setup.py"}, {"content": "import pickle\nimport re\nfrom packaging.version import parse\n\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.python.util import 
nest\n\n\ndef get_global_average_pooling(shape):\n return [tf.keras.layers.GlobalAveragePooling1D,\n tf.keras.layers.GlobalAveragePooling2D,\n tf.keras.layers.GlobalAveragePooling3D][len(shape) - 3]\n\n\ndef get_global_max_pooling(shape):\n return [tf.keras.layers.GlobalMaxPool1D,\n tf.keras.layers.GlobalMaxPool2D,\n tf.keras.layers.GlobalMaxPool3D][len(shape) - 3]\n\n\ndef get_max_pooling(shape):\n return [tf.keras.layers.MaxPool1D,\n tf.keras.layers.MaxPool2D,\n tf.keras.layers.MaxPool3D][len(shape) - 3]\n\n\ndef get_conv(shape):\n return [tf.keras.layers.Conv1D,\n tf.keras.layers.Conv2D,\n tf.keras.layers.Conv3D][len(shape) - 3]\n\n\ndef get_sep_conv(shape):\n return [tf.keras.layers.SeparableConv1D,\n tf.keras.layers.SeparableConv2D,\n tf.keras.layers.Conv3D][len(shape) - 3]\n\n\ndef get_dropout(shape):\n return [tf.keras.layers.SpatialDropout1D,\n tf.keras.layers.SpatialDropout2D,\n tf.keras.layers.SpatialDropout3D][len(shape) - 3]\n\n\ndef validate_num_inputs(inputs, num):\n inputs = nest.flatten(inputs)\n if not len(inputs) == num:\n raise ValueError('Expected {num} elements in the inputs list '\n 'but received {len} inputs.'.format(num=num,\n len=len(inputs)))\n\n\ndef split_dataset(dataset, validation_split):\n \"\"\"Split dataset into training and validation.\n\n # Arguments\n dataset: tf.data.Dataset. The entire dataset to be split.\n validation_split: Float. The split ratio for the validation set.\n\n # Raises\n ValueError: If the dataset provided is too small to be split.\n\n # Returns\n A tuple of two tf.data.Dataset. The training set and the validation set.\n \"\"\"\n num_instances = dataset.reduce(np.int64(0), lambda x, _: x + 1).numpy()\n if num_instances < 2:\n raise ValueError('The dataset should at least contain 2 '\n 'instances to be split.')\n validation_set_size = min(\n max(int(num_instances * validation_split), 1),\n num_instances - 1)\n train_set_size = num_instances - validation_set_size\n train_dataset = dataset.take(train_set_size)\n validation_dataset = dataset.skip(train_set_size)\n return train_dataset, validation_dataset\n\n\ndef get_name_scope():\n with tf.name_scope('a') as scope:\n name_scope = scope[:-2]\n return name_scope\n\n\ndef dataset_shape(dataset):\n return tf.compat.v1.data.get_output_shapes(dataset)\n\n\ndef is_label(y):\n \"\"\"Check if the targets are one-hot encoded or plain labels.\n\n # Arguments\n y: numpy.ndarray. The targets.\n\n # Returns\n Boolean. Whether the targets are plain label, not encoded.\n \"\"\"\n return len(y.flatten()) == len(y)\n\n\ndef pickle_from_file(path):\n \"\"\"Load the pickle file from the provided path and returns the object.\"\"\"\n return pickle.load(open(path, 'rb'))\n\n\ndef pickle_to_file(obj, path):\n \"\"\"Save the pickle file to the specified path.\"\"\"\n pickle.dump(obj, open(path, 'wb'))\n\n\ndef to_snake_case(name):\n intermediate = re.sub('(.)([A-Z][a-z0-9]+)', r'\\1_\\2', name)\n insecure = re.sub('([a-z])([A-Z])', r'\\1_\\2', intermediate).lower()\n # If the class is private the name starts with \"_\" which is not secure\n # for creating scopes. We prefix the name with \"private\" in this case.\n if insecure[0] != '_':\n return insecure\n return 'private' + insecure\n\n\ndef to_type_key(dictionary, convert_func):\n \"\"\"Convert the keys of a dictionary to a different type.\n\n # Arguments\n dictionary: Dictionary. The dictionary to be converted.\n convert_func: Function. 
The function to convert a key.\n \"\"\"\n return {convert_func(key): value\n for key, value in dictionary.items()}\n\n\ndef check_tf_version():\n if parse(tf.__version__) < parse('2.0.0'):\n raise ImportError(\n f'The Tensorflow package version needs to be at least v2.0.0 \\n'\n f'for AutoKeras to run. Currently, your TensorFlow version is \\n'\n f'v{tf.__version__}. Please upgrade with \\n'\n f'`$ pip install --upgrade tensorflow` -> GPU version \\n'\n f'or \\n'\n f'`$ pip install --upgrade tensorflow-cpu` -> CPU version. \\n'\n f'You can use `pip freeze` to check afterwards that everything is ok.'\n )\n", "path": "autokeras/utils.py"}]}
| 3,169 | 504 |
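For reference, a minimal runnable sketch of the runtime version check introduced by the patch above (assuming `packaging` is installed and that either `tensorflow` or a `tensorflow-gpu` build provides the `tensorflow` module):

```python
from packaging.version import parse

import tensorflow as tf  # provided by either the tensorflow or tensorflow-gpu wheel


def check_tf_version(minimum="2.0.0"):
    # Check whatever TensorFlow build is actually installed, so a GPU-only
    # install (e.g. NVIDIA's Jetson wheels) is accepted as long as it is new enough.
    if parse(tf.__version__) < parse(minimum):
        raise ImportError(
            f"AutoKeras needs TensorFlow >= {minimum}, found v{tf.__version__}."
        )


check_tf_version()
```

Moving the check from `install_requires` to import time sidesteps pip's inability to treat `tensorflow-gpu` as satisfying a `tensorflow` requirement.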
gh_patches_debug_30154
|
rasdani/github-patches
|
git_diff
|
fal-ai__dbt-fal-190
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python script should be able to handle relative imports
I was trying to execute a script using `fal`; it works fine when the full code is in a single script, but it breaks down when I split the script into separate modules. This is probably because fal internally uses Python's `exec` builtin to run the script after reading the file, so imports relative to the script's directory are not resolved. I would appreciate it very much if you could add this feature to fal as soon as possible. It is a great tool to work with dbt! :D
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/fal/cli/fal_runner.py`
Content:
```
1 import argparse
2 from typing import List
3 import os
4
5 import dbt.exceptions
6 import dbt.ui
7 from dbt.config.profile import DEFAULT_PROFILES_DIR
8
9 from fal.run_scripts import run_global_scripts, run_scripts
10 from fal.fal_script import FalScript
11 from faldbt.project import FalDbt, FalGeneralException, FalProject
12
13
14 def create_fal_dbt(
15 args: argparse.Namespace,
16 ):
17 real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))
18 real_profiles_dir = None
19 if args.profiles_dir is not None:
20 real_profiles_dir = os.path.realpath(os.path.normpath(args.profiles_dir))
21 elif os.getenv("DBT_PROFILES_DIR"):
22 real_profiles_dir = os.path.realpath(
23 os.path.normpath(os.getenv("DBT_PROFILES_DIR"))
24 )
25 else:
26 real_profiles_dir = DEFAULT_PROFILES_DIR
27
28 return FalDbt(
29 real_project_dir,
30 real_profiles_dir,
31 args.select,
32 args.exclude,
33 args.selector,
34 args.keyword,
35 )
36
37
38 def fal_run(
39 args: argparse.Namespace,
40 selects_count=0, # TODO: remove `action="extend"` to match exactly what dbt does
41 exclude_count=0,
42 script_count=0,
43 ):
44 "Runs the fal run command in a subprocess"
45
46 args_dict = vars(args)
47 selector_flags = args.select or args.exclude or args.selector
48 if args_dict.get("all") and selector_flags:
49 raise FalGeneralException(
50 "Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)"
51 )
52
53 faldbt = create_fal_dbt(args)
54 project = FalProject(faldbt)
55 models = project.get_filtered_models(
56 args_dict.get("all"), selector_flags, args_dict.get("before")
57 )
58
59 _handle_selector_warnings(selects_count, exclude_count, script_count, args)
60
61 scripts = _select_scripts(args_dict, models, project, args)
62
63 # run model specific scripts first
64 run_scripts(scripts, project)
65
66 # then run global scripts
67 if _should_run_global_scripts(args_dict):
68 _run_global_scripts(
69 project, faldbt, "before" if args_dict.get("before") else "after"
70 )
71
72
73 def _handle_selector_warnings(selects_count, exclude_count, script_count, args):
74 # TODO: remove `action="extend"` to match exactly what dbt does
75 if selects_count > 1:
76 dbt.exceptions.warn_or_error(
77 "Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\n"
78 + f"Please use model selection like dbt. Use: --select {' '.join(args.select)}",
79 log_fmt=dbt.ui.warning_tag("{}"),
80 )
81 if exclude_count > 1:
82 dbt.exceptions.warn_or_error(
83 "Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\n"
84 + f"Please use model exclusion like dbt. Use: --exclude {' '.join(args.exclude)}",
85 log_fmt=dbt.ui.warning_tag("{}"),
86 )
87 if script_count > 1:
88 dbt.exceptions.warn_or_error(
89 "Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\n"
90 + f"Please use: --script {' '.join(args.scripts)}",
91 log_fmt=dbt.ui.warning_tag("{}"),
92 )
93
94
95 def _should_run_global_scripts(args_dict) -> bool:
96 return args_dict.get("scripts")
97
98
99 def _select_scripts(args_dict, models, project, args) -> List[FalScript]:
100 scripts = []
101 # if --script selector is there only run selected scripts
102 if args_dict.get("scripts"):
103 scripts = []
104 for model in models:
105 model_scripts = model.get_scripts(args.keyword, args_dict.get("before"))
106 for el in args.scripts:
107 if el in model_scripts:
108 scripts.append(FalScript(model, el))
109 else:
110 real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))
111 for model in models:
112 for path in model.get_script_paths(
113 args.keyword, real_project_dir, args_dict.get("before")
114 ):
115 scripts.append(FalScript(model, path))
116
117 return scripts
118
119
120 def _run_global_scripts(project: FalProject, faldbt: FalDbt, global_key: str):
121 global_scripts = list(
122 map(
123 lambda path: FalScript(None, path),
124 faldbt._global_script_paths[global_key],
125 )
126 )
127
128 run_global_scripts(global_scripts, project)
129
```
Path: `src/fal/fal_script.py`
Content:
```
1 from dataclasses import dataclass, field
2 from typing import List, TypeVar, Dict, Union
3 from faldbt.project import DbtModel, FalDbt
4 from pathlib import Path
5
6 T = TypeVar("T", bound="FalScript")
7
8
9 class FalDagCycle(Exception):
10 pass
11
12
13 @dataclass(frozen=True)
14 class FalScript:
15 model: Union[DbtModel, None]
16 path: Path
17
18 def exec(self, context, faldbt: FalDbt):
19 """
20 Executes the script
21 """
22 with open(self.path) as file:
23 a_script = file.read()
24 exec(
25 a_script,
26 {
27 "context": context,
28 "ref": faldbt.ref,
29 "source": faldbt.source,
30 "write_to_source": faldbt.write_to_source,
31 "write_to_firestore": faldbt.write_to_firestore,
32 "list_models": faldbt.list_models,
33 "list_models_ids": faldbt.list_models_ids,
34 "list_sources": faldbt.list_sources,
35 "list_features": faldbt.list_features,
36 },
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py
--- a/src/fal/cli/fal_runner.py
+++ b/src/fal/cli/fal_runner.py
@@ -1,5 +1,6 @@
import argparse
from typing import List
+from pathlib import Path
import os
import dbt.exceptions
@@ -105,7 +106,7 @@
model_scripts = model.get_scripts(args.keyword, args_dict.get("before"))
for el in args.scripts:
if el in model_scripts:
- scripts.append(FalScript(model, el))
+ scripts.append(FalScript(model, Path(el)))
else:
real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))
for model in models:
diff --git a/src/fal/fal_script.py b/src/fal/fal_script.py
--- a/src/fal/fal_script.py
+++ b/src/fal/fal_script.py
@@ -2,6 +2,7 @@
from typing import List, TypeVar, Dict, Union
from faldbt.project import DbtModel, FalDbt
from pathlib import Path
+import sys
T = TypeVar("T", bound="FalScript")
@@ -19,6 +20,11 @@
"""
Executes the script
"""
+
+ # Enable local imports
+ local_path = str(self.path.parent)
+ sys.path.append(local_path)
+
with open(self.path) as file:
a_script = file.read()
exec(
@@ -35,3 +41,4 @@
"list_features": faldbt.list_features,
},
)
+ sys.path.remove(local_path)
|
{"golden_diff": "diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py\n--- a/src/fal/cli/fal_runner.py\n+++ b/src/fal/cli/fal_runner.py\n@@ -1,5 +1,6 @@\n import argparse\n from typing import List\n+from pathlib import Path\n import os\n \n import dbt.exceptions\n@@ -105,7 +106,7 @@\n model_scripts = model.get_scripts(args.keyword, args_dict.get(\"before\"))\n for el in args.scripts:\n if el in model_scripts:\n- scripts.append(FalScript(model, el))\n+ scripts.append(FalScript(model, Path(el)))\n else:\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n for model in models:\ndiff --git a/src/fal/fal_script.py b/src/fal/fal_script.py\n--- a/src/fal/fal_script.py\n+++ b/src/fal/fal_script.py\n@@ -2,6 +2,7 @@\n from typing import List, TypeVar, Dict, Union\n from faldbt.project import DbtModel, FalDbt\n from pathlib import Path\n+import sys\n \n T = TypeVar(\"T\", bound=\"FalScript\")\n \n@@ -19,6 +20,11 @@\n \"\"\"\n Executes the script\n \"\"\"\n+\n+ # Enable local imports\n+ local_path = str(self.path.parent)\n+ sys.path.append(local_path)\n+\n with open(self.path) as file:\n a_script = file.read()\n exec(\n@@ -35,3 +41,4 @@\n \"list_features\": faldbt.list_features,\n },\n )\n+ sys.path.remove(local_path)\n", "issue": "Python script should be able to handle relative imports\nI was trying execute a script using `fal`, it works fine when full code is in a single script but breaks down when I write down my script to different modules. Probably this is because fal is internally using python's `exec` builtins function to execute the script after reading the file. Would appreciate it very much if you guys can add this feature to fal as soon as possible. It is a great tool to work with dbt.! :D\n", "before_files": [{"content": "import argparse\nfrom typing import List\nimport os\n\nimport dbt.exceptions\nimport dbt.ui\nfrom dbt.config.profile import DEFAULT_PROFILES_DIR\n\nfrom fal.run_scripts import run_global_scripts, run_scripts\nfrom fal.fal_script import FalScript\nfrom faldbt.project import FalDbt, FalGeneralException, FalProject\n\n\ndef create_fal_dbt(\n args: argparse.Namespace,\n):\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n real_profiles_dir = None\n if args.profiles_dir is not None:\n real_profiles_dir = os.path.realpath(os.path.normpath(args.profiles_dir))\n elif os.getenv(\"DBT_PROFILES_DIR\"):\n real_profiles_dir = os.path.realpath(\n os.path.normpath(os.getenv(\"DBT_PROFILES_DIR\"))\n )\n else:\n real_profiles_dir = DEFAULT_PROFILES_DIR\n\n return FalDbt(\n real_project_dir,\n real_profiles_dir,\n args.select,\n args.exclude,\n args.selector,\n args.keyword,\n )\n\n\ndef fal_run(\n args: argparse.Namespace,\n selects_count=0, # TODO: remove `action=\"extend\"` to match exactly what dbt does\n exclude_count=0,\n script_count=0,\n):\n \"Runs the fal run command in a subprocess\"\n\n args_dict = vars(args)\n selector_flags = args.select or args.exclude or args.selector\n if args_dict.get(\"all\") and selector_flags:\n raise FalGeneralException(\n \"Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)\"\n )\n\n faldbt = create_fal_dbt(args)\n project = FalProject(faldbt)\n models = project.get_filtered_models(\n args_dict.get(\"all\"), selector_flags, args_dict.get(\"before\")\n )\n\n _handle_selector_warnings(selects_count, exclude_count, script_count, args)\n\n scripts = _select_scripts(args_dict, models, project, args)\n\n # run model specific scripts first\n 
run_scripts(scripts, project)\n\n # then run global scripts\n if _should_run_global_scripts(args_dict):\n _run_global_scripts(\n project, faldbt, \"before\" if args_dict.get(\"before\") else \"after\"\n )\n\n\ndef _handle_selector_warnings(selects_count, exclude_count, script_count, args):\n # TODO: remove `action=\"extend\"` to match exactly what dbt does\n if selects_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model selection like dbt. Use: --select {' '.join(args.select)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if exclude_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model exclusion like dbt. Use: --exclude {' '.join(args.exclude)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if script_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use: --script {' '.join(args.scripts)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n\n\ndef _should_run_global_scripts(args_dict) -> bool:\n return args_dict.get(\"scripts\")\n\n\ndef _select_scripts(args_dict, models, project, args) -> List[FalScript]:\n scripts = []\n # if --script selector is there only run selected scripts\n if args_dict.get(\"scripts\"):\n scripts = []\n for model in models:\n model_scripts = model.get_scripts(args.keyword, args_dict.get(\"before\"))\n for el in args.scripts:\n if el in model_scripts:\n scripts.append(FalScript(model, el))\n else:\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n for model in models:\n for path in model.get_script_paths(\n args.keyword, real_project_dir, args_dict.get(\"before\")\n ):\n scripts.append(FalScript(model, path))\n\n return scripts\n\n\ndef _run_global_scripts(project: FalProject, faldbt: FalDbt, global_key: str):\n global_scripts = list(\n map(\n lambda path: FalScript(None, path),\n faldbt._global_script_paths[global_key],\n )\n )\n\n run_global_scripts(global_scripts, project)\n", "path": "src/fal/cli/fal_runner.py"}, {"content": "from dataclasses import dataclass, field\nfrom typing import List, TypeVar, Dict, Union\nfrom faldbt.project import DbtModel, FalDbt\nfrom pathlib import Path\n\nT = TypeVar(\"T\", bound=\"FalScript\")\n\n\nclass FalDagCycle(Exception):\n pass\n\n\n@dataclass(frozen=True)\nclass FalScript:\n model: Union[DbtModel, None]\n path: Path\n\n def exec(self, context, faldbt: FalDbt):\n \"\"\"\n Executes the script\n \"\"\"\n with open(self.path) as file:\n a_script = file.read()\n exec(\n a_script,\n {\n \"context\": context,\n \"ref\": faldbt.ref,\n \"source\": faldbt.source,\n \"write_to_source\": faldbt.write_to_source,\n \"write_to_firestore\": faldbt.write_to_firestore,\n \"list_models\": faldbt.list_models,\n \"list_models_ids\": faldbt.list_models_ids,\n \"list_sources\": faldbt.list_sources,\n \"list_features\": faldbt.list_features,\n },\n )\n", "path": "src/fal/fal_script.py"}], "after_files": [{"content": "import argparse\nfrom typing import List\nfrom pathlib import Path\nimport os\n\nimport dbt.exceptions\nimport dbt.ui\nfrom dbt.config.profile import DEFAULT_PROFILES_DIR\n\nfrom fal.run_scripts import run_global_scripts, run_scripts\nfrom fal.fal_script import FalScript\nfrom faldbt.project import FalDbt, FalGeneralException, FalProject\n\n\ndef 
create_fal_dbt(\n args: argparse.Namespace,\n):\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n real_profiles_dir = None\n if args.profiles_dir is not None:\n real_profiles_dir = os.path.realpath(os.path.normpath(args.profiles_dir))\n elif os.getenv(\"DBT_PROFILES_DIR\"):\n real_profiles_dir = os.path.realpath(\n os.path.normpath(os.getenv(\"DBT_PROFILES_DIR\"))\n )\n else:\n real_profiles_dir = DEFAULT_PROFILES_DIR\n\n return FalDbt(\n real_project_dir,\n real_profiles_dir,\n args.select,\n args.exclude,\n args.selector,\n args.keyword,\n )\n\n\ndef fal_run(\n args: argparse.Namespace,\n selects_count=0, # TODO: remove `action=\"extend\"` to match exactly what dbt does\n exclude_count=0,\n script_count=0,\n):\n \"Runs the fal run command in a subprocess\"\n\n args_dict = vars(args)\n selector_flags = args.select or args.exclude or args.selector\n if args_dict.get(\"all\") and selector_flags:\n raise FalGeneralException(\n \"Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)\"\n )\n\n faldbt = create_fal_dbt(args)\n project = FalProject(faldbt)\n models = project.get_filtered_models(\n args_dict.get(\"all\"), selector_flags, args_dict.get(\"before\")\n )\n\n _handle_selector_warnings(selects_count, exclude_count, script_count, args)\n\n scripts = _select_scripts(args_dict, models, project, args)\n\n # run model specific scripts first\n run_scripts(scripts, project)\n\n # then run global scripts\n if _should_run_global_scripts(args_dict):\n _run_global_scripts(\n project, faldbt, \"before\" if args_dict.get(\"before\") else \"after\"\n )\n\n\ndef _handle_selector_warnings(selects_count, exclude_count, script_count, args):\n # TODO: remove `action=\"extend\"` to match exactly what dbt does\n if selects_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model selection like dbt. Use: --select {' '.join(args.select)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if exclude_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model exclusion like dbt. 
Use: --exclude {' '.join(args.exclude)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if script_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use: --script {' '.join(args.scripts)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n\n\ndef _should_run_global_scripts(args_dict) -> bool:\n return args_dict.get(\"scripts\")\n\n\ndef _select_scripts(args_dict, models, project, args) -> List[FalScript]:\n scripts = []\n # if --script selector is there only run selected scripts\n if args_dict.get(\"scripts\"):\n scripts = []\n for model in models:\n model_scripts = model.get_scripts(args.keyword, args_dict.get(\"before\"))\n for el in args.scripts:\n if el in model_scripts:\n scripts.append(FalScript(model, Path(el)))\n else:\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n for model in models:\n for path in model.get_script_paths(\n args.keyword, real_project_dir, args_dict.get(\"before\")\n ):\n scripts.append(FalScript(model, path))\n\n return scripts\n\n\ndef _run_global_scripts(project: FalProject, faldbt: FalDbt, global_key: str):\n global_scripts = list(\n map(\n lambda path: FalScript(None, path),\n faldbt._global_script_paths[global_key],\n )\n )\n\n run_global_scripts(global_scripts, project)\n", "path": "src/fal/cli/fal_runner.py"}, {"content": "from dataclasses import dataclass, field\nfrom typing import List, TypeVar, Dict, Union\nfrom faldbt.project import DbtModel, FalDbt\nfrom pathlib import Path\nimport sys\n\nT = TypeVar(\"T\", bound=\"FalScript\")\n\n\nclass FalDagCycle(Exception):\n pass\n\n\n@dataclass(frozen=True)\nclass FalScript:\n model: Union[DbtModel, None]\n path: Path\n\n def exec(self, context, faldbt: FalDbt):\n \"\"\"\n Executes the script\n \"\"\"\n\n # Enable local imports\n local_path = str(self.path.parent)\n sys.path.append(local_path)\n\n with open(self.path) as file:\n a_script = file.read()\n exec(\n a_script,\n {\n \"context\": context,\n \"ref\": faldbt.ref,\n \"source\": faldbt.source,\n \"write_to_source\": faldbt.write_to_source,\n \"write_to_firestore\": faldbt.write_to_firestore,\n \"list_models\": faldbt.list_models,\n \"list_models_ids\": faldbt.list_models_ids,\n \"list_sources\": faldbt.list_sources,\n \"list_features\": faldbt.list_features,\n },\n )\n sys.path.remove(local_path)\n", "path": "src/fal/fal_script.py"}]}
| 1,985 | 370 |
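For reference, a minimal sketch of the local-import workaround applied in the patch above; the cleanup is wrapped in try/finally as a variation on the original, and the `exec_script` helper name is illustrative:

```python
import sys
from pathlib import Path


def exec_script(path: Path, namespace: dict) -> None:
    # Temporarily expose the script's own directory so that
    # `import my_helper` style local imports resolve during exec().
    local_path = str(path.parent)
    sys.path.append(local_path)
    try:
        exec(path.read_text(), namespace)
    finally:
        sys.path.remove(local_path)
```

Appending the parent directory to `sys.path` is what lets modules that sit next to the fal script be imported, which plain `exec` of the file's text does not do on its own.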
gh_patches_debug_18860
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-2755
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
inconsistency between CU1 and CU3 gate definitions
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected enhancement?
This is not a bug or enhancement request as such, but seems like an internal inconsistency in Qiskit's gate definitions.
In [the gate definitions](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb), U1 is defined as [1,0,0,e^(iλ)], while an Rz is a [e^(-iλ/2),0,0,e^(iλ/2)].
U3 is defined in the docs similarly to U1 - ie. a U3 is a U1*Ry*U1. Therefore, a U3(0,0,a) = U1(a). However, CU3 is defined in the docs in such a way that CU3(0,0,a) != CU1(a). CU3 is instead defined using the Rz definition, rather than the U1.
So:
U3(0,0,a) = U1(a)
CU3(0,0,a) != CU1(a)
This is a confusing set of definitions. I assume that these definitions were a conscious decision, and that you are aware of the inconsistency, but I don't understand why?
I hope this hasn't been asked already - I couldn't find a duplicate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/extensions/standard/cu3.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 controlled-u3 gate.
17 """
18 from qiskit.circuit import Gate
19 from qiskit.circuit import QuantumCircuit
20 from qiskit.circuit import QuantumRegister
21 from qiskit.extensions.standard.u1 import U1Gate
22 from qiskit.extensions.standard.u3 import U3Gate
23 from qiskit.extensions.standard.cx import CnotGate
24
25
26 class Cu3Gate(Gate):
27 """controlled-u3 gate."""
28
29 def __init__(self, theta, phi, lam):
30 """Create new cu3 gate."""
31 super().__init__("cu3", 2, [theta, phi, lam])
32
33 def _define(self):
34 """
35 gate cu3(theta,phi,lambda) c, t
36 { u1((lambda-phi)/2) t; cx c,t;
37 u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;
38 u3(theta/2,phi,0) t;
39 }
40 """
41 definition = []
42 q = QuantumRegister(2, "q")
43 rule = [
44 (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),
45 (CnotGate(), [q[0], q[1]], []),
46 (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),
47 (CnotGate(), [q[0], q[1]], []),
48 (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])
49 ]
50 for inst in rule:
51 definition.append(inst)
52 self.definition = definition
53
54 def inverse(self):
55 """Invert this gate."""
56 return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])
57
58
59 def cu3(self, theta, phi, lam, ctl, tgt):
60 """Apply cu3 from ctl to tgt with angle theta, phi, lam."""
61 return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])
62
63
64 QuantumCircuit.cu3 = cu3
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qiskit/extensions/standard/cu3.py b/qiskit/extensions/standard/cu3.py
--- a/qiskit/extensions/standard/cu3.py
+++ b/qiskit/extensions/standard/cu3.py
@@ -33,7 +33,7 @@
def _define(self):
"""
gate cu3(theta,phi,lambda) c, t
- { u1((lambda-phi)/2) t; cx c,t;
+ { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;
u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;
u3(theta/2,phi,0) t;
}
@@ -41,6 +41,7 @@
definition = []
q = QuantumRegister(2, "q")
rule = [
+ (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),
(U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),
(CnotGate(), [q[0], q[1]], []),
(U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),
|
{"golden_diff": "diff --git a/qiskit/extensions/standard/cu3.py b/qiskit/extensions/standard/cu3.py\n--- a/qiskit/extensions/standard/cu3.py\n+++ b/qiskit/extensions/standard/cu3.py\n@@ -33,7 +33,7 @@\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n- { u1((lambda-phi)/2) t; cx c,t;\n+ { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n@@ -41,6 +41,7 @@\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n+ (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n", "issue": "inconsistency between CU1 and CU3 gate definitions\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues to confirm this idea does not exist. -->\r\n\r\n### What is the expected enhancement?\r\n\r\nThis is not a bug or enhancement request as such, but seems like an internal inconsistency in Qiskit's gate definitions.\r\nIn [the gate definitions](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb), U1 is defined as [1,0,0,e^(i\u03bb)], while an Rz is a [e^(-i\u03bb/2),0,0,e^(i\u03bb/2)].\r\n\r\nU3 is defined in the docs similarly to U1 - ie. a U3 is a U1*Ry*U1. Therefore, a U3(0,0,a) = U1(a). However, CU3 is defined in the docs in such a way that CU3(0,0,a) != CU1(a). CU3 is instead defined using the Rz definition, rather than the U1.\r\n\r\nSo: \r\nU3(0,0,a) = U1(a)\r\nCU3(0,0,a) != CU1(a)\r\n\r\nThis is a confusing set of definitions. I assume that these definitions were a conscious decision, and that you are aware of the inconsistency, but I don't understand why?\r\nI hope this hasn't been asked already - I couldn't find a duplicate.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\ncontrolled-u3 gate.\n\"\"\"\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.u3 import U3Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass Cu3Gate(Gate):\n \"\"\"controlled-u3 gate.\"\"\"\n\n def __init__(self, theta, phi, lam):\n \"\"\"Create new cu3 gate.\"\"\"\n super().__init__(\"cu3\", 2, [theta, phi, lam])\n\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n { u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n \"\"\"\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])\n ]\n for inst in rule:\n definition.append(inst)\n self.definition = definition\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])\n\n\ndef cu3(self, theta, phi, lam, ctl, tgt):\n \"\"\"Apply cu3 from ctl to tgt with angle theta, phi, lam.\"\"\"\n return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])\n\n\nQuantumCircuit.cu3 = cu3\n", "path": "qiskit/extensions/standard/cu3.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\ncontrolled-u3 gate.\n\"\"\"\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.u3 import U3Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass Cu3Gate(Gate):\n \"\"\"controlled-u3 gate.\"\"\"\n\n def __init__(self, theta, phi, lam):\n \"\"\"Create new cu3 gate.\"\"\"\n super().__init__(\"cu3\", 2, [theta, phi, lam])\n\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n \"\"\"\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])\n ]\n for inst in rule:\n definition.append(inst)\n self.definition = definition\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])\n\n\ndef cu3(self, theta, phi, lam, ctl, tgt):\n \"\"\"Apply cu3 from ctl to tgt with angle theta, phi, lam.\"\"\"\n return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])\n\n\nQuantumCircuit.cu3 = cu3\n", "path": "qiskit/extensions/standard/cu3.py"}]}
| 1,320 | 313 |
gh_patches_debug_8076 | rasdani/github-patches | git_diff | googleapis__python-bigquery-250 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RowIterator to_dataframe requires pyarrow >= 1.0.0 to work
Currently the google-cloud-bigquery library requires pyarrow > 0.16.0; however, the method RowIterator.to_dataframe adds the kwarg "timestamp_as_object", which is only supported in pyarrow >= 1.0.0. If we install pyarrow >= 1.0.0, everything works as expected, but we are using other libraries which require pyarrow < 1.0.0.
So the requirements should either be updated to require pyarrow >= 1.0.0, or the method should be made to work with pyarrow versions below 1.0.0.
#### Environment details
- OS type and version: Any
- Python version: 3.6.9
- pip version: 20.2.2
- `google-cloud-bigquery` version: 1.27.2
#### Steps to reproduce
1. Use pyarrow < 1.0.0
2. Run RowIterator to_dataframe
#### Stack trace
```
# result = future.result()
File "<path>/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "<path>/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "<path>/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "<path>", line 133, in run_query
bqstorage_client=client_storage
File "<path>/python3.6/site-packages/google/cloud/bigquery/table.py", line 1757, in to_dataframe
df = record_batch.to_pandas(date_as_object=date_as_object, **extra_kwargs)
File "pyarrow/array.pxi", line 503, in pyarrow.lib._PandasConvertible.to_pandas
TypeError: to_pandas() got an unexpected keyword argument 'timestamp_as_object'
```
--- END ISSUE ---
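For illustration, the backward-compatible route mentioned in the issue (supporting pyarrow releases below 1.0.0) could gate the kwarg on the installed version. This is only a hedged sketch — the helper name is made up and it is not the library's actual code:

```python
# Hypothetical compatibility helper -- not google-cloud-bigquery's real code.
import pyarrow
from packaging import version


def _extra_to_pandas_kwargs():
    # pyarrow only understands 'timestamp_as_object' from 1.0.0 onwards,
    # so older installations get an empty kwargs dict instead.
    if version.parse(pyarrow.__version__) >= version.parse("1.0.0"):
        return {"timestamp_as_object": True}
    return {}


# Usage mirroring the call in the traceback above (record_batch and
# date_as_object are assumed to exist in the caller):
# df = record_batch.to_pandas(date_as_object=date_as_object, **_extra_to_pandas_kwargs())
```

The simpler alternative, which the patch shown further below takes, is to require pyarrow >= 1.0.0 outright.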
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25 version = "1.27.2"
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 'enum34; python_version < "3.4"',
33 "google-api-core >= 1.21.0, < 2.0dev",
34 "google-cloud-core >= 1.4.1, < 2.0dev",
35 "google-resumable-media >= 0.5.0, < 2.0dev",
36 "six >=1.13.0,< 2.0.0dev",
37 ]
38 extras = {
39 "bqstorage": [
40 "google-cloud-bigquery-storage >= 1.0.0, <2.0.0dev",
41 # Due to an issue in pip's dependency resolver, the `grpc` extra is not
42 # installed, even though `google-cloud-bigquery-storage` specifies it
43 # as `google-api-core[grpc]`. We thus need to explicitly specify it here.
44 # See: https://github.com/googleapis/python-bigquery/issues/83
45 "grpcio >= 1.8.2, < 2.0dev",
46 "pyarrow>=0.16.0, < 2.0dev",
47 ],
48 "pandas": ["pandas>=0.17.1"],
49 # Exclude PyArrow dependency from Windows Python 2.7.
50 "pyarrow": [
51 "pyarrow >= 1.0.0, < 2.0dev; python_version >= '3.5'",
52 # Pyarrow >= 0.17.0 is not compatible with Python 2 anymore.
53 "pyarrow < 0.17.0; python_version < '3.0' and platform_system != 'Windows'",
54 ],
55 "tqdm": ["tqdm >= 4.0.0, <5.0.0dev"],
56 "fastparquet": [
57 "fastparquet",
58 "python-snappy",
59 # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below
60 # (building the wheel fails), thus needs to be restricted.
61 # See: https://github.com/googleapis/python-bigquery/issues/78
62 "llvmlite<=0.34.0;python_version>='3.6'",
63 "llvmlite<=0.31.0;python_version<'3.6'",
64 ],
65 "opentelemetry": [
66 "opentelemetry-api==0.9b0",
67 "opentelemetry-sdk==0.9b0",
68 "opentelemetry-instrumentation==0.9b0 ",
69 ],
70 }
71
72 all_extras = []
73
74 for extra in extras:
75 if extra in (
76 # Skip fastparquet from "all" because it is redundant with pyarrow and
77 # creates a dependency on pre-release versions of numpy. See:
78 # https://github.com/googleapis/google-cloud-python/issues/8549
79 "fastparquet",
80 # Skip opentelemetry because the library is not compatible with Python 2.
81 "opentelemetry",
82 ):
83 continue
84 all_extras.extend(extras[extra])
85
86 extras["all"] = all_extras
87
88 # Setup boilerplate below this line.
89
90 package_root = os.path.abspath(os.path.dirname(__file__))
91
92 readme_filename = os.path.join(package_root, "README.rst")
93 with io.open(readme_filename, encoding="utf-8") as readme_file:
94 readme = readme_file.read()
95
96 # Only include packages under the 'google' namespace. Do not include tests,
97 # benchmarks, etc.
98 packages = [
99 package for package in setuptools.find_packages() if package.startswith("google")
100 ]
101
102 # Determine which namespaces are needed.
103 namespaces = ["google"]
104 if "google.cloud" in packages:
105 namespaces.append("google.cloud")
106
107
108 setuptools.setup(
109 name=name,
110 version=version,
111 description=description,
112 long_description=readme,
113 author="Google LLC",
114 author_email="[email protected]",
115 license="Apache 2.0",
116 url="https://github.com/googleapis/python-bigquery",
117 classifiers=[
118 release_status,
119 "Intended Audience :: Developers",
120 "License :: OSI Approved :: Apache Software License",
121 "Programming Language :: Python",
122 "Programming Language :: Python :: 2",
123 "Programming Language :: Python :: 2.7",
124 "Programming Language :: Python :: 3",
125 "Programming Language :: Python :: 3.5",
126 "Programming Language :: Python :: 3.6",
127 "Programming Language :: Python :: 3.7",
128 "Programming Language :: Python :: 3.8",
129 "Operating System :: OS Independent",
130 "Topic :: Internet",
131 ],
132 platforms="Posix; MacOS X; Windows",
133 packages=packages,
134 namespace_packages=namespaces,
135 install_requires=dependencies,
136 extras_require=extras,
137 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
138 include_package_data=True,
139 zip_safe=False,
140 )
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -43,7 +43,7 @@
# as `google-api-core[grpc]`. We thus need to explicitly specify it here.
# See: https://github.com/googleapis/python-bigquery/issues/83
"grpcio >= 1.8.2, < 2.0dev",
- "pyarrow>=0.16.0, < 2.0dev",
+ "pyarrow >= 1.0.0, < 2.0dev; python_version >= '3.5'",
],
"pandas": ["pandas>=0.17.1"],
# Exclude PyArrow dependency from Windows Python 2.7.
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -43,7 +43,7 @@\n # as `google-api-core[grpc]`. We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83\n \"grpcio >= 1.8.2, < 2.0dev\",\n- \"pyarrow>=0.16.0, < 2.0dev\",\n+ \"pyarrow >= 1.0.0, < 2.0dev; python_version >= '3.5'\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n", "issue": "RowIterator to_dataframe requires pyarrow >= 1.0.0 to work\nCurrently the google-cloud-bigquery library requires pyarrow > 0.16.0, however the method RowIterator.to_dataframe adds the kwarg \"timestamp_as_object\", which is only supported in pyarrow >= 1.0.0. If install pyarrow >= 1.0.0, everything works as expected, however we are using other libraries which require pyarrow < 1.0.0.\r\n\r\nSo the requirements should either be updated to require pyarrow >= 1.0.0, or backported to support versions less than 1.\r\n\r\n#### Environment details\r\n\r\n - OS type and version: Any\r\n - Python version: 3.6.9\r\n - pip version: 20.2.2\r\n - `google-cloud-bigquery` version: 1.27.2\r\n\r\n#### Steps to reproduce\r\n\r\n 1. Use pyarrow < 1.0.0\r\n 2. Run RowIterator to_dataframe\r\n\r\n#### Stack trace\r\n```\r\n# result = future.result()\r\n File \"<path>/python3.6/concurrent/futures/_base.py\", line 425, in result\r\n return self.__get_result()\r\n File \"<path>/python3.6/concurrent/futures/_base.py\", line 384, in __get_result\r\n raise self._exception\r\n File \"<path>/python3.6/concurrent/futures/thread.py\", line 56, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"<path>\", line 133, in run_query\r\n bqstorage_client=client_storage\r\n File \"<path>/python3.6/site-packages/google/cloud/bigquery/table.py\", line 1757, in to_dataframe\r\n df = record_batch.to_pandas(date_as_object=date_as_object, **extra_kwargs)\r\n File \"pyarrow/array.pxi\", line 503, in pyarrow.lib._PandasConvertible.to_pandas\r\nTypeError: to_pandas() got an unexpected keyword argument 'timestamp_as_object'\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.27.2\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-api-core >= 1.21.0, < 2.0dev\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 2.0dev\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 1.0.0, <2.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though 
`google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83\n \"grpcio >= 1.8.2, < 2.0dev\",\n \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n \"pyarrow\": [\n \"pyarrow >= 1.0.0, < 2.0dev; python_version >= '3.5'\",\n # Pyarrow >= 0.17.0 is not compatible with Python 2 anymore.\n \"pyarrow < 0.17.0; python_version < '3.0' and platform_system != 'Windows'\",\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\n \"fastparquet\",\n \"python-snappy\",\n # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below\n # (building the wheel fails), thus needs to be restricted.\n # See: https://github.com/googleapis/python-bigquery/issues/78\n \"llvmlite<=0.34.0;python_version>='3.6'\",\n \"llvmlite<=0.31.0;python_version<'3.6'\",\n ],\n \"opentelemetry\": [\n \"opentelemetry-api==0.9b0\",\n \"opentelemetry-sdk==0.9b0\",\n \"opentelemetry-instrumentation==0.9b0 \",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra in (\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n \"fastparquet\",\n # Skip opentelemetry because the library is not compatible with Python 2.\n \"opentelemetry\",\n ):\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.27.2\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-api-core >= 1.21.0, < 2.0dev\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 2.0dev\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 1.0.0, <2.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. 
We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83\n \"grpcio >= 1.8.2, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 2.0dev; python_version >= '3.5'\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n \"pyarrow\": [\n \"pyarrow >= 1.0.0, < 2.0dev; python_version >= '3.5'\",\n # Pyarrow >= 0.17.0 is not compatible with Python 2 anymore.\n \"pyarrow < 0.17.0; python_version < '3.0' and platform_system != 'Windows'\",\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\n \"fastparquet\",\n \"python-snappy\",\n # llvmlite >= 0.32.0 cannot be installed on Python 3.5 and below\n # (building the wheel fails), thus needs to be restricted.\n # See: https://github.com/googleapis/python-bigquery/issues/78\n \"llvmlite<=0.34.0;python_version>='3.6'\",\n \"llvmlite<=0.31.0;python_version<'3.6'\",\n ],\n \"opentelemetry\": [\n \"opentelemetry-api==0.9b0\",\n \"opentelemetry-sdk==0.9b0\",\n \"opentelemetry-instrumentation==0.9b0 \",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra in (\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n \"fastparquet\",\n # Skip opentelemetry because the library is not compatible with Python 2.\n \"opentelemetry\",\n ):\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]}
| 2,333 | 169 |
gh_patches_debug_251 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-497 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] Clarify Python version requirements
# Brief Description of Fix
I was looking through documentation (for users and contributors), and it was unclear to me which python versions we actually support. It seems that we support python 3.6 + 3.7. This arose as I was updating the `pyproject.toml` file to avoid the warning:
```
--py36 is deprecated and will be removed in a future version. Use --target-version py36 instead.
```
Our current locations of explicit python versions are in:
- `pyproject.toml`
- `py36 = true`
- `environment-dev.yml`
- `- python >= 3.6`
- `.azure-pipelines/pipeline-master.yml`
- `python.version: "3.7"`
# Proposed Fix
If `pyjanitor` is in fact meant to function on 3.6+, we should
- Explicitly inform contributors that their code should be 3.6+ compatible
- Inform users which python versions the package requires, on the documentation site, PyPI etc
- Add `python_requires=">=3.6"` to `setup.py`
--- END ISSUE ---
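As a quick illustration of the last bullet in the proposed fix, a hedged sketch of the `setup()` call with the interpreter requirement declared (other arguments left as they are in the existing file):

```python
# Sketch only: the real setup() call keeps its existing arguments.
from setuptools import setup

setup(
    name="pyjanitor",
    # ... existing metadata and install_requires stay unchanged ...
    python_requires=">=3.6",  # pip will skip this release on Python < 3.6
)
```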
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3
4 def requirements():
5 with open("requirements.txt", "r+") as f:
6 return f.read()
7
8
9 setup(
10 name="pyjanitor",
11 version="0.18.0",
12 description="Tools for cleaning pandas DataFrames",
13 author="Eric J. Ma",
14 author_email="[email protected]",
15 url="https://github.com/ericmjl/pyjanitor",
16 packages=["janitor"],
17 install_requires=requirements(),
18 )
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,4 +15,5 @@
url="https://github.com/ericmjl/pyjanitor",
packages=["janitor"],
install_requires=requirements(),
+ python_requires=">=3.6",
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,4 +15,5 @@\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n+ python_requires=\">=3.6\",\n )\n", "issue": "[DOC] Clarify Python version requirements\n# Brief Description of Fix\r\n\r\nI was looking through documentation (for users and contributors), and it was unclear to me which python versions we actually support. It seems that we support python 3.6 + 3.7. This arose as I was updating the `pyproject.toml` file to avoid the warning:\r\n```\r\n--py36 is deprecated and will be removed in a future version. Use --target-version py36 instead.\r\n```\r\n\r\nOur current locations of explicit python versions are in:\r\n- `pyproject.toml`\r\n - `py36 = true`\r\n- `environment-dev.yml`\r\n - `- python >= 3.6`\r\n- `.azure-pipelines/pipeline-master.yml`\r\n - `python.version: \"3.7\"`\r\n\r\n# Proposed Fix\r\n\r\nIf `pyjanitor` is in fact meant to function on 3.6+, we should\r\n- Explicitly inform contributors that their code should be 3.6+ compatible\r\n- Inform users which python versions the package requires, on the documentation site, PyPI etc\r\n- Add `python_requires=\">=3.6\"` to `setup.py`\r\n\n", "before_files": [{"content": "from setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.18.0\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.18.0\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n python_requires=\">=3.6\",\n)\n", "path": "setup.py"}]}
| 636 | 70 |
gh_patches_debug_31343 | rasdani/github-patches | git_diff | DDMAL__CantusDB-114 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
the source create form has no required fields
fix this
--- END ISSUE ---
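For background, explicitly declared Django form fields are required by default, and a field only becomes optional when `required=False` is passed. A minimal sketch with hypothetical field names:

```python
# Generic Django example with made-up fields -- not CantusDB's actual form.
from django import forms


class ExampleSourceForm(forms.Form):
    title = forms.CharField()                # required=True by default
    notes = forms.CharField(required=False)  # explicitly optional
```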
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/forms.py`
Content:
```
1 from django import forms
2 from .models import Chant, Office, Genre, Feast, Source, RismSiglum, Provenance, Century, Indexer
3 from .widgets import *
4 from django.contrib.auth import get_user_model
5
6 # ModelForm allows to build a form directly from a model
7 # see https://docs.djangoproject.com/en/3.0/topics/forms/modelforms/
8
9 """
10 # 3 ways of doing it
11 #1 worst, helptext in the model will be missing
12 class CommetnForm(forms.Form):
13 marginalia = forms.CharField(
14 label="Marginalia", widget=forms.TextInput(), help_text="help"
15 )
16 url = forms.URLField()
17 comment = forms.CharField()
18
19 url.widget.attrs.update({'class': 'special'})
20 comment.widget.attrs.update(size='40')
21 #2
22 class CommentForm(forms.ModelForm):
23 def __init__(self, *args, **kwargs):
24 super().__init__(*args, **kwargs)
25 self.fields['name'].widget.attrs.update({'class': 'special'})
26 self.fields['comment'].widget.attrs.update(size='40')
27 """
28 # 3 best
29 class ChantCreateForm(forms.ModelForm):
30 class Meta:
31 model = Chant
32 # specify either 'fields' or 'excludes' so that django knows which fields to use
33 fields = [
34 "marginalia",
35 "folio",
36 "sequence_number",
37 "office",
38 "genre",
39 "position",
40 "cantus_id",
41 "feast",
42 "mode",
43 "differentia",
44 "finalis",
45 "extra",
46 "chant_range",
47 "manuscript_full_text_std_spelling",
48 "manuscript_full_text",
49 "volpiano",
50 "image_link",
51 "melody_id",
52 "content_structure",
53 "indexing_notes",
54 "addendum",
55 ]
56 widgets = {
57 "marginalia": TextInputWidget(),
58 "folio": TextInputWidget(),
59 "sequence_number": TextInputWidget(),
60 # the widgets dictionary is ignored for a model field with a non-empty choices attribute.
61 # In this case, you must override the form field to use a different widget.
62 # this goes for all foreignkey fields here, which are written explicitly below to override form field
63 "position": TextInputWidget(),
64 "cantus_id": TextInputWidget(),
65 #'feast': SelectWidget(),
66 "mode": TextInputWidget(),
67 "differentia": TextInputWidget(),
68 "finalis": TextInputWidget(),
69 "extra": TextInputWidget(),
70 "chant_range": VolpianoInputWidget(),
71 "manuscript_full_text_std_spelling": TextAreaWidget(),
72 "manuscript_full_text": TextAreaWidget(),
73 "volpiano": VolpianoAreaWidget(),
74 "image_link": TextInputWidget(),
75 "melody_id": TextInputWidget(),
76 "content_structure": TextInputWidget(),
77 "indexing_notes": TextAreaWidget(),
78 "addendum": TextInputWidget(),
79 }
80 # error_messages = {
81 # # specify custom error messages for each field here
82 # }
83
84 manuscript_full_text_std_spelling = forms.CharField(
85 required=True,
86 widget=TextAreaWidget,
87 help_text="Manuscript full text with standardized spelling. Enter the words "
88 "according to the manuscript but normalize their spellings following "
89 "Classical Latin forms. Use upper-case letters for proper nouns, "
90 'the first word of each chant, and the first word after "Alleluia" for '
91 "Mass Alleluias. Punctuation is omitted.",
92 )
93
94 folio = forms.CharField(
95 required=True, widget=TextInputWidget, help_text="Binding order",
96 )
97
98 sequence_number = forms.CharField(
99 required=True, widget=TextInputWidget, help_text="Each folio starts with '1'",
100 )
101
102 office = forms.ModelChoiceField(
103 queryset=Office.objects.all().order_by("name"), required=False
104 )
105 office.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
106
107 genre = forms.ModelChoiceField(
108 queryset=Genre.objects.all().order_by("name"), required=False
109 )
110 genre.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
111
112 feast = forms.ModelChoiceField(
113 queryset=Feast.objects.all().order_by("name"), required=False
114 )
115 feast.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
116
117 # automatically computed fields
118 # source and incipit are mandatory fields in model,
119 # but have to be optional in the form, otherwise the field validation won't pass
120 source = forms.ModelChoiceField(
121 queryset=Source.objects.all().order_by("title"),
122 required=False,
123 error_messages={
124 "invalid_choice": "This source does not exist, please switch to a different source."
125 },
126 )
127 incipit = forms.CharField(required=False)
128
129
130 class ContactForm(forms.Form):
131 name = forms.CharField(max_length=100)
132 sender_email = forms.EmailField()
133 subject = forms.CharField(max_length=100)
134 message = forms.CharField(widget=forms.Textarea)
135
136 class SourceCreateForm(forms.ModelForm):
137 class Meta:
138 model = Source
139 fields = [
140 "title",
141 "rism_siglum",
142 "siglum",
143 "provenance",
144 "provenance_notes",
145 "full_source",
146 "date",
147 "century",
148 "cursus",
149 "current_editors",
150 "melodies_entered_by",
151 "complete_inventory",
152 "summary",
153 "description",
154 "selected_bibliography",
155 "image_link",
156 "fragmentarium_id",
157 "dact_id",
158 "indexing_notes"
159 ]
160 widgets = {
161 "title": TextInputWidget(),
162 "siglum": TextInputWidget(),
163 "provenance_notes": TextInputWidget(),
164 "date": TextInputWidget(),
165 "cursus": SelectWidget(),
166 "summary": TextAreaWidget(),
167 "description": TextAreaWidget(),
168 "selected_bibliography": TextAreaWidget(),
169 "image_link": TextInputWidget(),
170 "fragmentarium_id": TextInputWidget(),
171 "dact_id": TextInputWidget(),
172 "indexing_notes": TextAreaWidget()
173 }
174 rism_siglum = forms.ModelChoiceField(
175 queryset=RismSiglum.objects.all().order_by("name"), required=False
176 )
177 rism_siglum.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
178
179 provenance = forms.ModelChoiceField(
180 queryset=Provenance.objects.all().order_by("name"), required=False
181 )
182 provenance.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
183
184 TRUE_FALSE_CHOICES_SOURCE = (
185 (True, "Full"),
186 (False, "Fragment")
187 )
188
189 full_source = forms.ChoiceField(
190 choices=TRUE_FALSE_CHOICES_SOURCE,
191 )
192 full_source.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
193
194 century = forms.ModelMultipleChoiceField(
195 queryset=Century.objects.all().order_by("name"), required=False
196 )
197 century.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
198
199 current_editors = forms.ModelMultipleChoiceField(
200 queryset=get_user_model().objects.all().order_by("last_name"), required=False
201 )
202 current_editors.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
203
204 melodies_entered_by = forms.ModelMultipleChoiceField(
205 queryset=Indexer.objects.all().order_by("family_name"), required=False
206 )
207 melodies_entered_by.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
208
209 TRUE_FALSE_CHOICES_INVEN = (
210 (True, "Complete"),
211 (False, "Incomplete")
212 )
213
214 complete_inventory = forms.ChoiceField(
215 choices=TRUE_FALSE_CHOICES_INVEN,
216 )
217 complete_inventory.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
218
219 class ChantEditForm(forms.ModelForm):
220 class Meta:
221 model = Chant
222 fields = [
223 "manuscript_full_text_std_spelling",
224 "manuscript_full_text",
225 "volpiano",
226 "marginalia",
227 "folio",
228 "sequence",
229 "feast",
230 "office",
231 "genre",
232 "position",
233 "cantus_id",
234 "melody_id",
235 "mode",
236 "finalis",
237 "differentia",
238 "extra",
239 "image_link",
240 "indexing_notes"
241 ]
242 widgets = {
243 "manuscript_full_text_std_spelling": TextAreaWidget(),
244 "manuscript_full_text": TextAreaWidget(),
245 "volpiano": VolpianoAreaWidget(),
246 "marginalia": TextInputWidget(),
247 "folio": TextInputWidget(),
248 "sequence": TextInputWidget(),
249 "office": TextInputWidget(),
250 "genre": TextInputWidget(),
251 "position": TextInputWidget(),
252 "cantus_id": TextInputWidget(),
253 "melody_id": TextInputWidget(),
254 "mode": TextInputWidget(),
255 "finalis": TextInputWidget(),
256 "differentia": TextInputWidget(),
257 "extra": TextInputWidget(),
258 "image_link": TextInputWidget(),
259 "indexing_notes": TextAreaWidget()
260 }
261 feast = forms.ModelChoiceField(
262 queryset=Feast.objects.all().order_by("name"), required=False
263 )
264 feast.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/forms.py b/django/cantusdb_project/main_app/forms.py
--- a/django/cantusdb_project/main_app/forms.py
+++ b/django/cantusdb_project/main_app/forms.py
@@ -187,22 +187,22 @@
)
full_source = forms.ChoiceField(
- choices=TRUE_FALSE_CHOICES_SOURCE,
+ choices=TRUE_FALSE_CHOICES_SOURCE, required=False
)
full_source.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
century = forms.ModelMultipleChoiceField(
- queryset=Century.objects.all().order_by("name"), required=False
+ queryset=Century.objects.all().order_by("name")
)
century.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
current_editors = forms.ModelMultipleChoiceField(
- queryset=get_user_model().objects.all().order_by("last_name"), required=False
+ queryset=get_user_model().objects.all().order_by("last_name")
)
current_editors.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
melodies_entered_by = forms.ModelMultipleChoiceField(
- queryset=Indexer.objects.all().order_by("family_name"), required=False
+ queryset=Indexer.objects.all().order_by("family_name")
)
melodies_entered_by.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
@@ -212,7 +212,7 @@
)
complete_inventory = forms.ChoiceField(
- choices=TRUE_FALSE_CHOICES_INVEN,
+ choices=TRUE_FALSE_CHOICES_INVEN, required=False
)
complete_inventory.widget.attrs.update({"class": "form-control custom-select custom-select-sm"})
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/forms.py b/django/cantusdb_project/main_app/forms.py\n--- a/django/cantusdb_project/main_app/forms.py\n+++ b/django/cantusdb_project/main_app/forms.py\n@@ -187,22 +187,22 @@\n )\n \n full_source = forms.ChoiceField(\n- choices=TRUE_FALSE_CHOICES_SOURCE,\n+ choices=TRUE_FALSE_CHOICES_SOURCE, required=False\n )\n full_source.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n \n century = forms.ModelMultipleChoiceField(\n- queryset=Century.objects.all().order_by(\"name\"), required=False\n+ queryset=Century.objects.all().order_by(\"name\")\n )\n century.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n \n current_editors = forms.ModelMultipleChoiceField(\n- queryset=get_user_model().objects.all().order_by(\"last_name\"), required=False\n+ queryset=get_user_model().objects.all().order_by(\"last_name\")\n )\n current_editors.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n \n melodies_entered_by = forms.ModelMultipleChoiceField(\n- queryset=Indexer.objects.all().order_by(\"family_name\"), required=False\n+ queryset=Indexer.objects.all().order_by(\"family_name\")\n )\n melodies_entered_by.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n \n@@ -212,7 +212,7 @@\n )\n \n complete_inventory = forms.ChoiceField(\n- choices=TRUE_FALSE_CHOICES_INVEN,\n+ choices=TRUE_FALSE_CHOICES_INVEN, required=False\n )\n complete_inventory.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n", "issue": "the source create form has no required fields\nfix this\n", "before_files": [{"content": "from django import forms\nfrom .models import Chant, Office, Genre, Feast, Source, RismSiglum, Provenance, Century, Indexer\nfrom .widgets import *\nfrom django.contrib.auth import get_user_model\n\n# ModelForm allows to build a form directly from a model\n# see https://docs.djangoproject.com/en/3.0/topics/forms/modelforms/\n\n\"\"\"\n# 3 ways of doing it\n#1 worst, helptext in the model will be missing\nclass CommetnForm(forms.Form):\n marginalia = forms.CharField(\n label=\"Marginalia\", widget=forms.TextInput(), help_text=\"help\"\n )\n url = forms.URLField()\n comment = forms.CharField()\n\n url.widget.attrs.update({'class': 'special'})\n comment.widget.attrs.update(size='40')\n#2\nclass CommentForm(forms.ModelForm):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['name'].widget.attrs.update({'class': 'special'})\n self.fields['comment'].widget.attrs.update(size='40')\n\"\"\"\n# 3 best\nclass ChantCreateForm(forms.ModelForm):\n class Meta:\n model = Chant\n # specify either 'fields' or 'excludes' so that django knows which fields to use\n fields = [\n \"marginalia\",\n \"folio\",\n \"sequence_number\",\n \"office\",\n \"genre\",\n \"position\",\n \"cantus_id\",\n \"feast\",\n \"mode\",\n \"differentia\",\n \"finalis\",\n \"extra\",\n \"chant_range\",\n \"manuscript_full_text_std_spelling\",\n \"manuscript_full_text\",\n \"volpiano\",\n \"image_link\",\n \"melody_id\",\n \"content_structure\",\n \"indexing_notes\",\n \"addendum\",\n ]\n widgets = {\n \"marginalia\": TextInputWidget(),\n \"folio\": TextInputWidget(),\n \"sequence_number\": TextInputWidget(),\n # the widgets dictionary is ignored for a model field with a non-empty choices attribute.\n # In this case, you must override the form field to use a different widget.\n # this goes for all foreignkey 
fields here, which are written explicitly below to override form field\n \"position\": TextInputWidget(),\n \"cantus_id\": TextInputWidget(),\n #'feast': SelectWidget(),\n \"mode\": TextInputWidget(),\n \"differentia\": TextInputWidget(),\n \"finalis\": TextInputWidget(),\n \"extra\": TextInputWidget(),\n \"chant_range\": VolpianoInputWidget(),\n \"manuscript_full_text_std_spelling\": TextAreaWidget(),\n \"manuscript_full_text\": TextAreaWidget(),\n \"volpiano\": VolpianoAreaWidget(),\n \"image_link\": TextInputWidget(),\n \"melody_id\": TextInputWidget(),\n \"content_structure\": TextInputWidget(),\n \"indexing_notes\": TextAreaWidget(),\n \"addendum\": TextInputWidget(),\n }\n # error_messages = {\n # # specify custom error messages for each field here\n # }\n\n manuscript_full_text_std_spelling = forms.CharField(\n required=True,\n widget=TextAreaWidget,\n help_text=\"Manuscript full text with standardized spelling. Enter the words \"\n \"according to the manuscript but normalize their spellings following \"\n \"Classical Latin forms. Use upper-case letters for proper nouns, \"\n 'the first word of each chant, and the first word after \"Alleluia\" for '\n \"Mass Alleluias. Punctuation is omitted.\",\n )\n\n folio = forms.CharField(\n required=True, widget=TextInputWidget, help_text=\"Binding order\",\n )\n\n sequence_number = forms.CharField(\n required=True, widget=TextInputWidget, help_text=\"Each folio starts with '1'\",\n )\n\n office = forms.ModelChoiceField(\n queryset=Office.objects.all().order_by(\"name\"), required=False\n )\n office.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n genre = forms.ModelChoiceField(\n queryset=Genre.objects.all().order_by(\"name\"), required=False\n )\n genre.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n feast = forms.ModelChoiceField(\n queryset=Feast.objects.all().order_by(\"name\"), required=False\n )\n feast.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n # automatically computed fields\n # source and incipit are mandatory fields in model,\n # but have to be optional in the form, otherwise the field validation won't pass\n source = forms.ModelChoiceField(\n queryset=Source.objects.all().order_by(\"title\"),\n required=False,\n error_messages={\n \"invalid_choice\": \"This source does not exist, please switch to a different source.\"\n },\n )\n incipit = forms.CharField(required=False)\n\n\nclass ContactForm(forms.Form):\n name = forms.CharField(max_length=100)\n sender_email = forms.EmailField()\n subject = forms.CharField(max_length=100)\n message = forms.CharField(widget=forms.Textarea)\n\nclass SourceCreateForm(forms.ModelForm):\n class Meta:\n model = Source\n fields = [\n \"title\",\n \"rism_siglum\",\n \"siglum\",\n \"provenance\",\n \"provenance_notes\",\n \"full_source\",\n \"date\",\n \"century\",\n \"cursus\",\n \"current_editors\",\n \"melodies_entered_by\",\n \"complete_inventory\",\n \"summary\",\n \"description\",\n \"selected_bibliography\",\n \"image_link\",\n \"fragmentarium_id\",\n \"dact_id\",\n \"indexing_notes\"\n ]\n widgets = {\n \"title\": TextInputWidget(),\n \"siglum\": TextInputWidget(),\n \"provenance_notes\": TextInputWidget(),\n \"date\": TextInputWidget(),\n \"cursus\": SelectWidget(),\n \"summary\": TextAreaWidget(),\n \"description\": TextAreaWidget(),\n \"selected_bibliography\": TextAreaWidget(),\n \"image_link\": TextInputWidget(),\n \"fragmentarium_id\": TextInputWidget(),\n \"dact_id\": 
TextInputWidget(),\n \"indexing_notes\": TextAreaWidget()\n }\n rism_siglum = forms.ModelChoiceField(\n queryset=RismSiglum.objects.all().order_by(\"name\"), required=False\n )\n rism_siglum.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n provenance = forms.ModelChoiceField(\n queryset=Provenance.objects.all().order_by(\"name\"), required=False\n )\n provenance.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n TRUE_FALSE_CHOICES_SOURCE = (\n (True, \"Full\"), \n (False, \"Fragment\")\n )\n\n full_source = forms.ChoiceField(\n choices=TRUE_FALSE_CHOICES_SOURCE,\n )\n full_source.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n century = forms.ModelMultipleChoiceField(\n queryset=Century.objects.all().order_by(\"name\"), required=False\n )\n century.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n current_editors = forms.ModelMultipleChoiceField(\n queryset=get_user_model().objects.all().order_by(\"last_name\"), required=False\n )\n current_editors.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n melodies_entered_by = forms.ModelMultipleChoiceField(\n queryset=Indexer.objects.all().order_by(\"family_name\"), required=False\n )\n melodies_entered_by.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n TRUE_FALSE_CHOICES_INVEN = (\n (True, \"Complete\"), \n (False, \"Incomplete\")\n )\n\n complete_inventory = forms.ChoiceField(\n choices=TRUE_FALSE_CHOICES_INVEN,\n )\n complete_inventory.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\nclass ChantEditForm(forms.ModelForm):\n class Meta:\n model = Chant\n fields = [\n \"manuscript_full_text_std_spelling\",\n \"manuscript_full_text\",\n \"volpiano\",\n \"marginalia\",\n \"folio\",\n \"sequence\",\n \"feast\",\n \"office\",\n \"genre\",\n \"position\",\n \"cantus_id\",\n \"melody_id\",\n \"mode\",\n \"finalis\",\n \"differentia\",\n \"extra\",\n \"image_link\",\n \"indexing_notes\"\n ]\n widgets = {\n \"manuscript_full_text_std_spelling\": TextAreaWidget(),\n \"manuscript_full_text\": TextAreaWidget(),\n \"volpiano\": VolpianoAreaWidget(),\n \"marginalia\": TextInputWidget(),\n \"folio\": TextInputWidget(),\n \"sequence\": TextInputWidget(),\n \"office\": TextInputWidget(),\n \"genre\": TextInputWidget(),\n \"position\": TextInputWidget(),\n \"cantus_id\": TextInputWidget(),\n \"melody_id\": TextInputWidget(),\n \"mode\": TextInputWidget(),\n \"finalis\": TextInputWidget(),\n \"differentia\": TextInputWidget(),\n \"extra\": TextInputWidget(),\n \"image_link\": TextInputWidget(),\n \"indexing_notes\": TextAreaWidget()\n }\n feast = forms.ModelChoiceField(\n queryset=Feast.objects.all().order_by(\"name\"), required=False\n )\n feast.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n", "path": "django/cantusdb_project/main_app/forms.py"}], "after_files": [{"content": "from django import forms\nfrom .models import Chant, Office, Genre, Feast, Source, RismSiglum, Provenance, Century, Indexer\nfrom .widgets import *\nfrom django.contrib.auth import get_user_model\n\n# ModelForm allows to build a form directly from a model\n# see https://docs.djangoproject.com/en/3.0/topics/forms/modelforms/\n\n\"\"\"\n# 3 ways of doing it\n#1 worst, helptext in the model will be missing\nclass CommetnForm(forms.Form):\n marginalia = forms.CharField(\n label=\"Marginalia\", 
widget=forms.TextInput(), help_text=\"help\"\n )\n url = forms.URLField()\n comment = forms.CharField()\n\n url.widget.attrs.update({'class': 'special'})\n comment.widget.attrs.update(size='40')\n#2\nclass CommentForm(forms.ModelForm):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['name'].widget.attrs.update({'class': 'special'})\n self.fields['comment'].widget.attrs.update(size='40')\n\"\"\"\n# 3 best\nclass ChantCreateForm(forms.ModelForm):\n class Meta:\n model = Chant\n # specify either 'fields' or 'excludes' so that django knows which fields to use\n fields = [\n \"marginalia\",\n \"folio\",\n \"sequence_number\",\n \"office\",\n \"genre\",\n \"position\",\n \"cantus_id\",\n \"feast\",\n \"mode\",\n \"differentia\",\n \"finalis\",\n \"extra\",\n \"chant_range\",\n \"manuscript_full_text_std_spelling\",\n \"manuscript_full_text\",\n \"volpiano\",\n \"image_link\",\n \"melody_id\",\n \"content_structure\",\n \"indexing_notes\",\n \"addendum\",\n ]\n widgets = {\n \"marginalia\": TextInputWidget(),\n \"folio\": TextInputWidget(),\n \"sequence_number\": TextInputWidget(),\n # the widgets dictionary is ignored for a model field with a non-empty choices attribute.\n # In this case, you must override the form field to use a different widget.\n # this goes for all foreignkey fields here, which are written explicitly below to override form field\n \"position\": TextInputWidget(),\n \"cantus_id\": TextInputWidget(),\n #'feast': SelectWidget(),\n \"mode\": TextInputWidget(),\n \"differentia\": TextInputWidget(),\n \"finalis\": TextInputWidget(),\n \"extra\": TextInputWidget(),\n \"chant_range\": VolpianoInputWidget(),\n \"manuscript_full_text_std_spelling\": TextAreaWidget(),\n \"manuscript_full_text\": TextAreaWidget(),\n \"volpiano\": VolpianoAreaWidget(),\n \"image_link\": TextInputWidget(),\n \"melody_id\": TextInputWidget(),\n \"content_structure\": TextInputWidget(),\n \"indexing_notes\": TextAreaWidget(),\n \"addendum\": TextInputWidget(),\n }\n # error_messages = {\n # # specify custom error messages for each field here\n # }\n\n manuscript_full_text_std_spelling = forms.CharField(\n required=True,\n widget=TextAreaWidget,\n help_text=\"Manuscript full text with standardized spelling. Enter the words \"\n \"according to the manuscript but normalize their spellings following \"\n \"Classical Latin forms. Use upper-case letters for proper nouns, \"\n 'the first word of each chant, and the first word after \"Alleluia\" for '\n \"Mass Alleluias. 
Punctuation is omitted.\",\n )\n\n folio = forms.CharField(\n required=True, widget=TextInputWidget, help_text=\"Binding order\",\n )\n\n sequence_number = forms.CharField(\n required=True, widget=TextInputWidget, help_text=\"Each folio starts with '1'\",\n )\n\n office = forms.ModelChoiceField(\n queryset=Office.objects.all().order_by(\"name\"), required=False\n )\n office.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n genre = forms.ModelChoiceField(\n queryset=Genre.objects.all().order_by(\"name\"), required=False\n )\n genre.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n feast = forms.ModelChoiceField(\n queryset=Feast.objects.all().order_by(\"name\"), required=False\n )\n feast.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n # automatically computed fields\n # source and incipit are mandatory fields in model,\n # but have to be optional in the form, otherwise the field validation won't pass\n source = forms.ModelChoiceField(\n queryset=Source.objects.all().order_by(\"title\"),\n required=False,\n error_messages={\n \"invalid_choice\": \"This source does not exist, please switch to a different source.\"\n },\n )\n incipit = forms.CharField(required=False)\n\n\nclass ContactForm(forms.Form):\n name = forms.CharField(max_length=100)\n sender_email = forms.EmailField()\n subject = forms.CharField(max_length=100)\n message = forms.CharField(widget=forms.Textarea)\n\nclass SourceCreateForm(forms.ModelForm):\n class Meta:\n model = Source\n fields = [\n \"title\",\n \"rism_siglum\",\n \"siglum\",\n \"provenance\",\n \"provenance_notes\",\n \"full_source\",\n \"date\",\n \"century\",\n \"cursus\",\n \"current_editors\",\n \"melodies_entered_by\",\n \"complete_inventory\",\n \"summary\",\n \"description\",\n \"selected_bibliography\",\n \"image_link\",\n \"fragmentarium_id\",\n \"dact_id\",\n \"indexing_notes\"\n ]\n widgets = {\n \"title\": TextInputWidget(),\n \"siglum\": TextInputWidget(),\n \"provenance_notes\": TextInputWidget(),\n \"date\": TextInputWidget(),\n \"cursus\": SelectWidget(),\n \"summary\": TextAreaWidget(),\n \"description\": TextAreaWidget(),\n \"selected_bibliography\": TextAreaWidget(),\n \"image_link\": TextInputWidget(),\n \"fragmentarium_id\": TextInputWidget(),\n \"dact_id\": TextInputWidget(),\n \"indexing_notes\": TextAreaWidget()\n }\n rism_siglum = forms.ModelChoiceField(\n queryset=RismSiglum.objects.all().order_by(\"name\"), required=False\n )\n rism_siglum.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n provenance = forms.ModelChoiceField(\n queryset=Provenance.objects.all().order_by(\"name\"), required=False\n )\n provenance.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n TRUE_FALSE_CHOICES_SOURCE = (\n (True, \"Full\"), \n (False, \"Fragment\")\n )\n\n full_source = forms.ChoiceField(\n choices=TRUE_FALSE_CHOICES_SOURCE, required=False\n )\n full_source.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n century = forms.ModelMultipleChoiceField(\n queryset=Century.objects.all().order_by(\"name\")\n )\n century.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n current_editors = forms.ModelMultipleChoiceField(\n queryset=get_user_model().objects.all().order_by(\"last_name\")\n )\n current_editors.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n melodies_entered_by = 
forms.ModelMultipleChoiceField(\n queryset=Indexer.objects.all().order_by(\"family_name\")\n )\n melodies_entered_by.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\n TRUE_FALSE_CHOICES_INVEN = (\n (True, \"Complete\"), \n (False, \"Incomplete\")\n )\n\n complete_inventory = forms.ChoiceField(\n choices=TRUE_FALSE_CHOICES_INVEN, required=False\n )\n complete_inventory.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n\nclass ChantEditForm(forms.ModelForm):\n class Meta:\n model = Chant\n fields = [\n \"manuscript_full_text_std_spelling\",\n \"manuscript_full_text\",\n \"volpiano\",\n \"marginalia\",\n \"folio\",\n \"sequence\",\n \"feast\",\n \"office\",\n \"genre\",\n \"position\",\n \"cantus_id\",\n \"melody_id\",\n \"mode\",\n \"finalis\",\n \"differentia\",\n \"extra\",\n \"image_link\",\n \"indexing_notes\"\n ]\n widgets = {\n \"manuscript_full_text_std_spelling\": TextAreaWidget(),\n \"manuscript_full_text\": TextAreaWidget(),\n \"volpiano\": VolpianoAreaWidget(),\n \"marginalia\": TextInputWidget(),\n \"folio\": TextInputWidget(),\n \"sequence\": TextInputWidget(),\n \"office\": TextInputWidget(),\n \"genre\": TextInputWidget(),\n \"position\": TextInputWidget(),\n \"cantus_id\": TextInputWidget(),\n \"melody_id\": TextInputWidget(),\n \"mode\": TextInputWidget(),\n \"finalis\": TextInputWidget(),\n \"differentia\": TextInputWidget(),\n \"extra\": TextInputWidget(),\n \"image_link\": TextInputWidget(),\n \"indexing_notes\": TextAreaWidget()\n }\n feast = forms.ModelChoiceField(\n queryset=Feast.objects.all().order_by(\"name\"), required=False\n )\n feast.widget.attrs.update({\"class\": \"form-control custom-select custom-select-sm\"})\n", "path": "django/cantusdb_project/main_app/forms.py"}]}
| 2,988 | 389 |
gh_patches_debug_18864 | rasdani/github-patches | git_diff | mars-project__mars-348 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for replacing actor classes given registration
**Is your feature request related to a problem? Please describe.**
Sometimes it is convenient for us to replace existing actor implementations when running or deploying Mars. For instance, when writing tests, we need to replace some functions of actors to report extra information or introduce delays. We need a native mechanism in the actor system to simplify implementing these functions.
--- END ISSUE ---
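To make the request concrete, here is a framework-agnostic sketch of what such a registration hook could look like; every name below is hypothetical and does not reflect Mars's actual API:

```python
# Hypothetical registry sketch -- not Mars's real implementation.
_actor_implementations = {}


def register_actor_implementation(actor_cls, new_actor_cls):
    """Ask the pool to instantiate new_actor_cls wherever actor_cls is requested."""
    _actor_implementations[actor_cls] = new_actor_cls


def unregister_actor_implementation(actor_cls):
    _actor_implementations.pop(actor_cls, None)


def resolve_actor_cls(actor_cls):
    # Fall back to the original class when no replacement is registered,
    # e.g. outside of tests.
    return _actor_implementations.get(actor_cls, actor_cls)
```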
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 1999-2017 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import sys
17 from setuptools import setup, find_packages, Extension
18
19 import numpy as np
20 from Cython.Build import cythonize
21 from Cython.Distutils import build_ext
22
23 repo_root = os.path.dirname(os.path.abspath(__file__))
24
25 try:
26 execfile
27 except NameError:
28 def execfile(fname, globs, locs=None):
29 locs = locs or globs
30 exec(compile(open(fname).read(), fname, "exec"), globs, locs)
31
32 version_file_path = os.path.join(repo_root, 'mars', '_version.py')
33 version_ns = {'__file__': version_file_path}
34 execfile(version_file_path, version_ns)
35
36 requirements = []
37 with open(os.path.join(repo_root, 'requirements.txt'), 'r') as f:
38 requirements.extend(f.read().splitlines())
39
40 extra_requirements = []
41 with open(os.path.join(repo_root, 'requirements-extra.txt'), 'r') as f:
42 extra_requirements.extend(f.read().splitlines())
43
44 dev_requirements = []
45 with open(os.path.join(repo_root, 'requirements-dev.txt'), 'r') as f:
46 dev_requirements.extend(f.read().splitlines())
47
48 long_description = None
49 if os.path.exists(os.path.join(repo_root, 'README.rst')):
50 with open(os.path.join(repo_root, 'README.rst')) as f:
51 long_description = f.read()
52
53
54 if os.path.exists(os.path.join(repo_root, '.git')):
55 git_info = version_ns['get_git_info']()
56 if git_info:
57 with open(os.path.join(repo_root, 'mars', '.git-branch'), 'w') as git_file:
58 git_file.write('%s %s' % git_info)
59
60 cythonize_kw = dict(language_level=sys.version_info[0])
61 extension_kw = dict()
62 if 'CYTHON_TRACE' in os.environ:
63 extension_kw['define_macros'] = [('CYTHON_TRACE_NOGIL', '1'), ('CYTHON_TRACE', '1')]
64 cythonize_kw['compiler_directives'] = {'linetrace': True, 'binding': True}
65
66 if 'MSC' in sys.version:
67 extra_compile_args = ['/Ot', '/I' + os.path.join(repo_root, 'misc')]
68 extension_kw['extra_compile_args'] = extra_compile_args
69 else:
70 extra_compile_args = ['-O3']
71 extension_kw['extra_compile_args'] = extra_compile_args
72
73 extension_kw['include_dirs'] = [np.get_include()]
74 extensions = [
75 Extension('mars.graph', ['mars/graph.pyx'], **extension_kw),
76 Extension('mars.fuse', ['mars/fuse.pyx'], **extension_kw),
77 Extension('mars._utils', ['mars/_utils.pyx'], **extension_kw),
78 Extension('mars.lib.gipc', ['mars/lib/gipc.pyx'], **extension_kw),
79 Extension('mars.actors.core', ['mars/actors/core.pyx'], **extension_kw),
80 Extension('mars.actors.distributor', ['mars/actors/distributor.pyx'], **extension_kw),
81 Extension('mars.actors.cluster', ['mars/actors/cluster.pyx'], **extension_kw),
82 Extension('mars.actors.pool.messages', ['mars/actors/pool/messages.pyx'], **extension_kw),
83 Extension('mars.actors.pool.utils', ['mars/actors/pool/utils.pyx'], **extension_kw),
84 Extension('mars.actors.pool.gevent_pool', ['mars/actors/pool/gevent_pool.pyx'], **extension_kw),
85 Extension('mars.serialize.core', ['mars/serialize/core.pyx'], **extension_kw),
86 Extension('mars.serialize.pbserializer', ['mars/serialize/pbserializer.pyx'], **extension_kw),
87 Extension('mars.serialize.jsonserializer', ['mars/serialize/jsonserializer.pyx'], **extension_kw),
88 ]
89
90
91 setup_options = dict(
92 name='pymars',
93 version=version_ns['__version__'],
94 description='MARS: a tensor-based unified framework for large-scale data computation.',
95 long_description=long_description,
96 author='Qin Xuye',
97 author_email='[email protected]',
98 maintainer='Qin Xuye',
99 maintainer_email='[email protected]',
100 url='http://github.com/mars-project/mars',
101 license='Apache License 2.0',
102 classifiers=[
103 'Operating System :: OS Independent',
104 'Programming Language :: Python',
105 'Programming Language :: Python :: 2',
106 'Programming Language :: Python :: 2.7',
107 'Programming Language :: Python :: 3',
108 'Programming Language :: Python :: 3.5',
109 'Programming Language :: Python :: 3.6',
110 'Programming Language :: Python :: 3.7',
111 'Programming Language :: Python :: Implementation :: CPython',
112 'Topic :: Software Development :: Libraries',
113 ],
114 packages=find_packages(exclude=('*.tests.*', '*.tests')),
115 include_package_data=True,
116 entry_points={'console_scripts': [
117 'mars-scheduler = mars.scheduler.__main__:main',
118 'mars-worker = mars.worker.__main__:main',
119 'mars-web = mars.web.__main__:main',
120 ]},
121 install_requires=requirements,
122 cmdclass={'build_ext': build_ext},
123 ext_modules=cythonize(extensions, **cythonize_kw),
124 extras_require={
125 'distributed': extra_requirements,
126 'dev': extra_requirements + dev_requirements,
127 }
128 )
129 setup(**setup_options)
130
```
Path: `mars/actors/__init__.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17
18 from .core import create_actor_pool, Actor, FunctionActor, new_client
19 from .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist
20 from .distributor import Distributor
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mars/actors/__init__.py b/mars/actors/__init__.py
--- a/mars/actors/__init__.py
+++ b/mars/actors/__init__.py
@@ -15,6 +15,7 @@
# limitations under the License.
-from .core import create_actor_pool, Actor, FunctionActor, new_client
+from .core import create_actor_pool, Actor, FunctionActor, new_client, \
+ register_actor_implementation, unregister_actor_implementation
from .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist
from .distributor import Distributor
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,7 +61,7 @@
extension_kw = dict()
if 'CYTHON_TRACE' in os.environ:
extension_kw['define_macros'] = [('CYTHON_TRACE_NOGIL', '1'), ('CYTHON_TRACE', '1')]
- cythonize_kw['compiler_directives'] = {'linetrace': True, 'binding': True}
+ cythonize_kw['compiler_directives'] = {'linetrace': True}
if 'MSC' in sys.version:
extra_compile_args = ['/Ot', '/I' + os.path.join(repo_root, 'misc')]
|
{"golden_diff": "diff --git a/mars/actors/__init__.py b/mars/actors/__init__.py\n--- a/mars/actors/__init__.py\n+++ b/mars/actors/__init__.py\n@@ -15,6 +15,7 @@\n # limitations under the License.\n \n \n-from .core import create_actor_pool, Actor, FunctionActor, new_client\n+from .core import create_actor_pool, Actor, FunctionActor, new_client, \\\n+ register_actor_implementation, unregister_actor_implementation\n from .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist\n from .distributor import Distributor\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,7 +61,7 @@\n extension_kw = dict()\n if 'CYTHON_TRACE' in os.environ:\n extension_kw['define_macros'] = [('CYTHON_TRACE_NOGIL', '1'), ('CYTHON_TRACE', '1')]\n- cythonize_kw['compiler_directives'] = {'linetrace': True, 'binding': True}\n+ cythonize_kw['compiler_directives'] = {'linetrace': True}\n \n if 'MSC' in sys.version:\n extra_compile_args = ['/Ot', '/I' + os.path.join(repo_root, 'misc')]\n", "issue": "Add support for replacing actor classes given registration\n**Is your feature request related to a problem? Please describe.**\r\nSometimes it is convenient for us to replace existing actor implementations when running or deploying Mars. For instance, when doing tests, we need to replace some functions of actors to report something or make some delays. We need a native mechanism in actor system to simplify implementation of these functions.\n", "before_files": [{"content": "# Copyright 1999-2017 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nfrom setuptools import setup, find_packages, Extension\n\nimport numpy as np\nfrom Cython.Build import cythonize\nfrom Cython.Distutils import build_ext\n\nrepo_root = os.path.dirname(os.path.abspath(__file__))\n\ntry:\n execfile\nexcept NameError:\n def execfile(fname, globs, locs=None):\n locs = locs or globs\n exec(compile(open(fname).read(), fname, \"exec\"), globs, locs)\n\nversion_file_path = os.path.join(repo_root, 'mars', '_version.py')\nversion_ns = {'__file__': version_file_path}\nexecfile(version_file_path, version_ns)\n\nrequirements = []\nwith open(os.path.join(repo_root, 'requirements.txt'), 'r') as f:\n requirements.extend(f.read().splitlines())\n\nextra_requirements = []\nwith open(os.path.join(repo_root, 'requirements-extra.txt'), 'r') as f:\n extra_requirements.extend(f.read().splitlines())\n\ndev_requirements = []\nwith open(os.path.join(repo_root, 'requirements-dev.txt'), 'r') as f:\n dev_requirements.extend(f.read().splitlines())\n\nlong_description = None\nif os.path.exists(os.path.join(repo_root, 'README.rst')):\n with open(os.path.join(repo_root, 'README.rst')) as f:\n long_description = f.read()\n\n\nif os.path.exists(os.path.join(repo_root, '.git')):\n git_info = version_ns['get_git_info']()\n if git_info:\n with open(os.path.join(repo_root, 'mars', '.git-branch'), 'w') as git_file:\n git_file.write('%s %s' % git_info)\n\ncythonize_kw = 
dict(language_level=sys.version_info[0])\nextension_kw = dict()\nif 'CYTHON_TRACE' in os.environ:\n extension_kw['define_macros'] = [('CYTHON_TRACE_NOGIL', '1'), ('CYTHON_TRACE', '1')]\n cythonize_kw['compiler_directives'] = {'linetrace': True, 'binding': True}\n\nif 'MSC' in sys.version:\n extra_compile_args = ['/Ot', '/I' + os.path.join(repo_root, 'misc')]\n extension_kw['extra_compile_args'] = extra_compile_args\nelse:\n extra_compile_args = ['-O3']\n extension_kw['extra_compile_args'] = extra_compile_args\n\nextension_kw['include_dirs'] = [np.get_include()]\nextensions = [\n Extension('mars.graph', ['mars/graph.pyx'], **extension_kw),\n Extension('mars.fuse', ['mars/fuse.pyx'], **extension_kw),\n Extension('mars._utils', ['mars/_utils.pyx'], **extension_kw),\n Extension('mars.lib.gipc', ['mars/lib/gipc.pyx'], **extension_kw),\n Extension('mars.actors.core', ['mars/actors/core.pyx'], **extension_kw),\n Extension('mars.actors.distributor', ['mars/actors/distributor.pyx'], **extension_kw),\n Extension('mars.actors.cluster', ['mars/actors/cluster.pyx'], **extension_kw),\n Extension('mars.actors.pool.messages', ['mars/actors/pool/messages.pyx'], **extension_kw),\n Extension('mars.actors.pool.utils', ['mars/actors/pool/utils.pyx'], **extension_kw),\n Extension('mars.actors.pool.gevent_pool', ['mars/actors/pool/gevent_pool.pyx'], **extension_kw),\n Extension('mars.serialize.core', ['mars/serialize/core.pyx'], **extension_kw),\n Extension('mars.serialize.pbserializer', ['mars/serialize/pbserializer.pyx'], **extension_kw),\n Extension('mars.serialize.jsonserializer', ['mars/serialize/jsonserializer.pyx'], **extension_kw),\n]\n\n\nsetup_options = dict(\n name='pymars',\n version=version_ns['__version__'],\n description='MARS: a tensor-based unified framework for large-scale data computation.',\n long_description=long_description,\n author='Qin Xuye',\n author_email='[email protected]',\n maintainer='Qin Xuye',\n maintainer_email='[email protected]',\n url='http://github.com/mars-project/mars',\n license='Apache License 2.0',\n classifiers=[\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Software Development :: Libraries',\n ],\n packages=find_packages(exclude=('*.tests.*', '*.tests')),\n include_package_data=True,\n entry_points={'console_scripts': [\n 'mars-scheduler = mars.scheduler.__main__:main',\n 'mars-worker = mars.worker.__main__:main',\n 'mars-web = mars.web.__main__:main',\n ]},\n install_requires=requirements,\n cmdclass={'build_ext': build_ext},\n ext_modules=cythonize(extensions, **cythonize_kw),\n extras_require={\n 'distributed': extra_requirements,\n 'dev': extra_requirements + dev_requirements,\n }\n)\nsetup(**setup_options)\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" 
BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom .core import create_actor_pool, Actor, FunctionActor, new_client\nfrom .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist\nfrom .distributor import Distributor\n", "path": "mars/actors/__init__.py"}], "after_files": [{"content": "# Copyright 1999-2017 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nfrom setuptools import setup, find_packages, Extension\n\nimport numpy as np\nfrom Cython.Build import cythonize\nfrom Cython.Distutils import build_ext\n\nrepo_root = os.path.dirname(os.path.abspath(__file__))\n\ntry:\n execfile\nexcept NameError:\n def execfile(fname, globs, locs=None):\n locs = locs or globs\n exec(compile(open(fname).read(), fname, \"exec\"), globs, locs)\n\nversion_file_path = os.path.join(repo_root, 'mars', '_version.py')\nversion_ns = {'__file__': version_file_path}\nexecfile(version_file_path, version_ns)\n\nrequirements = []\nwith open(os.path.join(repo_root, 'requirements.txt'), 'r') as f:\n requirements.extend(f.read().splitlines())\n\nextra_requirements = []\nwith open(os.path.join(repo_root, 'requirements-extra.txt'), 'r') as f:\n extra_requirements.extend(f.read().splitlines())\n\ndev_requirements = []\nwith open(os.path.join(repo_root, 'requirements-dev.txt'), 'r') as f:\n dev_requirements.extend(f.read().splitlines())\n\nlong_description = None\nif os.path.exists(os.path.join(repo_root, 'README.rst')):\n with open(os.path.join(repo_root, 'README.rst')) as f:\n long_description = f.read()\n\n\nif os.path.exists(os.path.join(repo_root, '.git')):\n git_info = version_ns['get_git_info']()\n if git_info:\n with open(os.path.join(repo_root, 'mars', '.git-branch'), 'w') as git_file:\n git_file.write('%s %s' % git_info)\n\ncythonize_kw = dict(language_level=sys.version_info[0])\nextension_kw = dict()\nif 'CYTHON_TRACE' in os.environ:\n extension_kw['define_macros'] = [('CYTHON_TRACE_NOGIL', '1'), ('CYTHON_TRACE', '1')]\n cythonize_kw['compiler_directives'] = {'linetrace': True}\n\nif 'MSC' in sys.version:\n extra_compile_args = ['/Ot', '/I' + os.path.join(repo_root, 'misc')]\n extension_kw['extra_compile_args'] = extra_compile_args\nelse:\n extra_compile_args = ['-O3']\n extension_kw['extra_compile_args'] = extra_compile_args\n\nextension_kw['include_dirs'] = [np.get_include()]\nextensions = [\n Extension('mars.graph', ['mars/graph.pyx'], **extension_kw),\n Extension('mars.fuse', ['mars/fuse.pyx'], **extension_kw),\n Extension('mars._utils', ['mars/_utils.pyx'], **extension_kw),\n Extension('mars.lib.gipc', ['mars/lib/gipc.pyx'], **extension_kw),\n Extension('mars.actors.core', ['mars/actors/core.pyx'], **extension_kw),\n Extension('mars.actors.distributor', ['mars/actors/distributor.pyx'], **extension_kw),\n Extension('mars.actors.cluster', ['mars/actors/cluster.pyx'], **extension_kw),\n 
Extension('mars.actors.pool.messages', ['mars/actors/pool/messages.pyx'], **extension_kw),\n Extension('mars.actors.pool.utils', ['mars/actors/pool/utils.pyx'], **extension_kw),\n Extension('mars.actors.pool.gevent_pool', ['mars/actors/pool/gevent_pool.pyx'], **extension_kw),\n Extension('mars.serialize.core', ['mars/serialize/core.pyx'], **extension_kw),\n Extension('mars.serialize.pbserializer', ['mars/serialize/pbserializer.pyx'], **extension_kw),\n Extension('mars.serialize.jsonserializer', ['mars/serialize/jsonserializer.pyx'], **extension_kw),\n]\n\n\nsetup_options = dict(\n name='pymars',\n version=version_ns['__version__'],\n description='MARS: a tensor-based unified framework for large-scale data computation.',\n long_description=long_description,\n author='Qin Xuye',\n author_email='[email protected]',\n maintainer='Qin Xuye',\n maintainer_email='[email protected]',\n url='http://github.com/mars-project/mars',\n license='Apache License 2.0',\n classifiers=[\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Software Development :: Libraries',\n ],\n packages=find_packages(exclude=('*.tests.*', '*.tests')),\n include_package_data=True,\n entry_points={'console_scripts': [\n 'mars-scheduler = mars.scheduler.__main__:main',\n 'mars-worker = mars.worker.__main__:main',\n 'mars-web = mars.web.__main__:main',\n ]},\n install_requires=requirements,\n cmdclass={'build_ext': build_ext},\n ext_modules=cythonize(extensions, **cythonize_kw),\n extras_require={\n 'distributed': extra_requirements,\n 'dev': extra_requirements + dev_requirements,\n }\n)\nsetup(**setup_options)\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom .core import create_actor_pool, Actor, FunctionActor, new_client, \\\n register_actor_implementation, unregister_actor_implementation\nfrom .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist\nfrom .distributor import Distributor\n", "path": "mars/actors/__init__.py"}]}
| 2,143 | 280 |
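The patch above only re-exports `register_actor_implementation` and `unregister_actor_implementation` from `mars.actors.core`; the Cython module that defines them is not included in this record, so the calling convention sketched below (original class, replacement class) is an assumption rather than the documented API. The sketch follows the issue's motivation of swapping in a test double that adds delays.

```python
# Illustrative sketch only: the registration signature is assumed, since
# mars/actors/core.pyx is not part of the record above.
import time

from mars.actors import (
    FunctionActor,
    create_actor_pool,
    register_actor_implementation,
    unregister_actor_implementation,
)


class EchoActor(FunctionActor):
    def echo(self, value):
        return value


class SlowEchoActor(EchoActor):
    """Test double that delays replies, e.g. to exercise timeout handling."""

    def echo(self, value):
        time.sleep(0.1)
        return super(SlowEchoActor, self).echo(value)


# Assumed semantics: build SlowEchoActor wherever EchoActor is requested.
register_actor_implementation(EchoActor, SlowEchoActor)
try:
    with create_actor_pool(n_process=1) as pool:
        ref = pool.create_actor(EchoActor)
        assert ref.echo('ping') == 'ping'  # answered by the registered override
finally:
    unregister_actor_implementation(EchoActor)
```

Wrapping the registration in try/finally keeps the override scoped to a single test, which matches the issue's request for a native hook instead of ad hoc monkey-patching.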
gh_patches_debug_15738
|
rasdani/github-patches
|
git_diff
|
kornia__kornia-1584
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Gradient computation failed for distance_transform
### Describe the bug
Using `kornia.contrib.distance_transform` before computing an MSE loss would raise RuntimeError for gradient computation.
The error message looks like this:
```
{...}/site-packages/torch/autograd/__init__.py:145: UserWarning: Error detected in ReplicationPad2DBackward. Traceback of forward call that caused the error:
File "distance_transform_loss.py", line 104, in <module>
a_dist = distance_transform(a)
File "{...}/site-packages/kornia/contrib/distance_transform.py", line 59, in distance_transform
cdt = filter2d(boundary, kernel, border_type='replicate')
File "{...}/site-packages/kornia/filters/filter.py", line 114, in filter2d
input = F.pad(input, padding_shape, mode=border_type)
File "{...}/site-packages/torch/nn/functional.py", line 4019, in _pad
return torch._C._nn.replication_pad2d(input, pad)
(Triggered internally at /opt/conda/conda-bld/pytorch_1616554788289/work/torch/csrc/autograd/python_anomaly_mode.cpp:104.)
Variable._execution_engine.run_backward(
Traceback (most recent call last):
File "distance_transform_loss.py", line 110, in <module>
loss.backward()
File "{...}/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "{...}/site-packages/torch/autograd/__init__.py", line 145, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [12, 1, 384, 384]], which is output 0 of IndexPutBackward, is at version 384; expected version 383 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
A minimum reproducible script is provided below.
### Reproduction steps
```bash
import torch
torch.autograd.set_detect_anomaly(True)
import torch.nn as nn
import torch.nn.functional as F
from kornia.contrib import distance_transform
a = torch.rand((12, 1, 384, 384)).to(torch.float32)
b = torch.rand((12, 1, 384, 384)).to(torch.float32)
layer = nn.Conv2d(1, 1, (3, 3), (1, 1), (1, 1))
a = layer(a)
a_dist = distance_transform(a)
b_dist = distance_transform(b)
loss = F.mse_loss(a_dist, b_dist)
loss.backward()
```
### Expected behavior
Gradient back-propagated successfully and there should not be any console outputs
### Environment
```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): 1.8.1
- OS (e.g., Linux): Ubuntu 18.04.5 LTS (x86_64)
- How you installed PyTorch (`conda`, `pip`, source): conda
- Build command you used (if compiling from source): None
- Python version: 3.8.10
- CUDA/cuDNN version: 10.1.243/7.6.5
- GPU models and configuration: Tesla V100-SXM2-16GB
- Any other relevant information:
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/contrib/distance_transform.py`
Content:
```
1 import math
2
3 import torch
4 import torch.nn as nn
5
6 from kornia.filters import filter2d
7 from kornia.utils import create_meshgrid
8
9
10 def distance_transform(
11 image: torch.Tensor,
12 kernel_size: int = 3,
13 h: float = 0.35
14 ) -> torch.Tensor:
15 r"""Approximates the Manhattan distance transform of images using cascaded convolution operations.
16
17 The value at each pixel in the output represents the distance to the nearest non-zero pixel in the image image.
18 It uses the method described in :cite:`pham2021dtlayer`.
19 The transformation is applied independently across the channel dimension of the images.
20
21 Args:
22 image: Image with shape :math:`(B,C,H,W)`.
23 kernel_size: size of the convolution kernel.
24 h: value that influence the approximation of the min function.
25
26 Returns:
27 tensor with shape :math:`(B,C,H,W)`.
28
29 Example:
30 >>> tensor = torch.zeros(1, 1, 5, 5)
31 >>> tensor[:,:, 1, 2] = 1
32 >>> dt = kornia.contrib.distance_transform(tensor)
33 """
34 if not isinstance(image, torch.Tensor):
35 raise TypeError(f"image type is not a torch.Tensor. Got {type(image)}")
36
37 if not len(image.shape) == 4:
38 raise ValueError(f"Invalid image shape, we expect BxCxHxW. Got: {image.shape}")
39
40 if kernel_size % 2 == 0:
41 raise ValueError("Kernel size must be an odd number.")
42
43 # n_iters is set such that the DT will be able to propagate from any corner of the image to its far,
44 # diagonally opposite corner
45 n_iters: int = math.ceil(max(image.shape[2], image.shape[3]) / math.floor(kernel_size / 2))
46 grid = create_meshgrid(kernel_size, kernel_size, normalized_coordinates=False,
47 device=image.device, dtype=image.dtype)
48
49 grid -= math.floor(kernel_size / 2)
50 kernel = torch.hypot(grid[0, :, :, 0], grid[0, :, :, 1])
51 kernel = torch.exp(kernel / -h).unsqueeze(0)
52
53 out = torch.zeros_like(image)
54
55 # It is possible to avoid cloning the image if boundary = image, but this would require modifying the image tensor.
56 boundary = image.clone()
57
58 for i in range(n_iters):
59 cdt = filter2d(boundary, kernel, border_type='replicate')
60 cdt = -h * torch.log(cdt)
61
62 # We are calculating log(0) above.
63 cdt = torch.nan_to_num(cdt, posinf=0.0)
64
65 mask = torch.where(cdt > 0, 1.0, 0.0)
66 if mask.sum() == 0:
67 break
68
69 offset: int = i * kernel_size // 2
70 out += (offset + cdt) * mask
71 boundary[mask == 1] = 1
72
73 return out
74
75
76 class DistanceTransform(nn.Module):
77 r"""Module that approximates the Manhattan (city block) distance transform of images using convolutions.
78
79 Args:
80 kernel_size: size of the convolution kernel.
81 h: value that influence the approximation of the min function.
82
83 """
84 def __init__(
85 self,
86 kernel_size: int = 3,
87 h: float = 0.35
88 ):
89 super().__init__()
90 self.kernel_size = kernel_size
91 self.h = h
92
93 def forward(self, image: torch.Tensor) -> torch.Tensor:
94 # If images have multiple channels, view the channels in the batch dimension to match kernel shape.
95 if image.shape[1] > 1:
96 image_in = image.view(-1, 1, image.shape[-2], image.shape[-1])
97 else:
98 image_in = image
99
100 return distance_transform(image_in, self.kernel_size, self.h).view_as(image)
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kornia/contrib/distance_transform.py b/kornia/contrib/distance_transform.py
--- a/kornia/contrib/distance_transform.py
+++ b/kornia/contrib/distance_transform.py
@@ -54,6 +54,7 @@
# It is possible to avoid cloning the image if boundary = image, but this would require modifying the image tensor.
boundary = image.clone()
+ signal_ones = torch.ones_like(boundary)
for i in range(n_iters):
cdt = filter2d(boundary, kernel, border_type='replicate')
@@ -68,7 +69,7 @@
offset: int = i * kernel_size // 2
out += (offset + cdt) * mask
- boundary[mask == 1] = 1
+ boundary = torch.where(mask == 1, signal_ones, boundary)
return out
|
{"golden_diff": "diff --git a/kornia/contrib/distance_transform.py b/kornia/contrib/distance_transform.py\n--- a/kornia/contrib/distance_transform.py\n+++ b/kornia/contrib/distance_transform.py\n@@ -54,6 +54,7 @@\n \r\n # It is possible to avoid cloning the image if boundary = image, but this would require modifying the image tensor.\r\n boundary = image.clone()\r\n+ signal_ones = torch.ones_like(boundary)\r\n \r\n for i in range(n_iters):\r\n cdt = filter2d(boundary, kernel, border_type='replicate')\r\n@@ -68,7 +69,7 @@\n \r\n offset: int = i * kernel_size // 2\r\n out += (offset + cdt) * mask\r\n- boundary[mask == 1] = 1\r\n+ boundary = torch.where(mask == 1, signal_ones, boundary)\r\n \r\n return out\n", "issue": "Gradient computation failed for distance_transform\n### Describe the bug\n\nUsing `kornia.contrib.distance_transform` before computing an MSE loss would raise RuntimeError for gradient computation. \r\nThe error message looks like this:\r\n\r\n```\r\n{...}/site-packages/torch/autograd/__init__.py:145: UserWarning: Error detected in ReplicationPad2DBackward. Traceback of forward call that caused the error:\r\n File \"distance_transform_loss.py\", line 104, in <module>\r\n a_dist = distance_transform(a)\r\n File \"{...}/site-packages/kornia/contrib/distance_transform.py\", line 59, in distance_transform\r\n cdt = filter2d(boundary, kernel, border_type='replicate')\r\n File \"{...}/site-packages/kornia/filters/filter.py\", line 114, in filter2d\r\n input = F.pad(input, padding_shape, mode=border_type)\r\n File \"{...}/site-packages/torch/nn/functional.py\", line 4019, in _pad\r\n return torch._C._nn.replication_pad2d(input, pad)\r\n (Triggered internally at /opt/conda/conda-bld/pytorch_1616554788289/work/torch/csrc/autograd/python_anomaly_mode.cpp:104.)\r\n Variable._execution_engine.run_backward(\r\nTraceback (most recent call last):\r\n File \"distance_transform_loss.py\", line 110, in <module>\r\n loss.backward()\r\n File \"{...}/site-packages/torch/tensor.py\", line 245, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n File \"{...}/site-packages/torch/autograd/__init__.py\", line 145, in backward\r\n Variable._execution_engine.run_backward(\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [12, 1, 384, 384]], which is output 0 of IndexPutBackward, is at version 384; expected version 383 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. 
Good luck!\r\n```\r\n\r\nA minimum reproducible script is provided below.\r\n\n\n### Reproduction steps\n\n```bash\nimport torch\r\ntorch.autograd.set_detect_anomaly(True)\r\nimport torch.nn.functional as F\r\n\r\nfrom kornia.contrib import distance_transform\r\n\r\na = torch.rand((12, 1, 384, 384)).to(torch.float32)\r\nb = torch.rand((12, 1, 384, 384)).to(torch.float32)\r\n\r\nlayer = nn.Conv2d(1, 1, (3, 3), (1, 1), (1, 1))\r\na = layer(a)\r\n\r\na_dist = distance_transform(a)\r\nb_dist = distance_transform(b)\r\nloss = F.mse_loss(a_dist, b_dist)\r\n\r\nloss.backward()\n```\n\n\n### Expected behavior\n\nGradient back-propagated successfully and there should not be any console outputs\n\n### Environment\n\n```shell\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n- PyTorch Version (e.g., 1.0): 1.8.1\r\n- OS (e.g., Linux): Ubuntu 18.04.5 LTS (x86_64)\r\n- How you installed PyTorch (`conda`, `pip`, source): conda\r\n- Build command you used (if compiling from source): None\r\n- Python version: 3.8.10\r\n- CUDA/cuDNN version: 10.1.243/7.6.5\r\n- GPU models and configuration: Tesla V100-SXM2-16GB\r\n- Any other relevant information:\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "import math\r\n\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nfrom kornia.filters import filter2d\r\nfrom kornia.utils import create_meshgrid\r\n\r\n\r\ndef distance_transform(\r\n image: torch.Tensor,\r\n kernel_size: int = 3,\r\n h: float = 0.35\r\n) -> torch.Tensor:\r\n r\"\"\"Approximates the Manhattan distance transform of images using cascaded convolution operations.\r\n\r\n The value at each pixel in the output represents the distance to the nearest non-zero pixel in the image image.\r\n It uses the method described in :cite:`pham2021dtlayer`.\r\n The transformation is applied independently across the channel dimension of the images.\r\n\r\n Args:\r\n image: Image with shape :math:`(B,C,H,W)`.\r\n kernel_size: size of the convolution kernel.\r\n h: value that influence the approximation of the min function.\r\n\r\n Returns:\r\n tensor with shape :math:`(B,C,H,W)`.\r\n\r\n Example:\r\n >>> tensor = torch.zeros(1, 1, 5, 5)\r\n >>> tensor[:,:, 1, 2] = 1\r\n >>> dt = kornia.contrib.distance_transform(tensor)\r\n \"\"\"\r\n if not isinstance(image, torch.Tensor):\r\n raise TypeError(f\"image type is not a torch.Tensor. Got {type(image)}\")\r\n\r\n if not len(image.shape) == 4:\r\n raise ValueError(f\"Invalid image shape, we expect BxCxHxW. 
Got: {image.shape}\")\r\n\r\n if kernel_size % 2 == 0:\r\n raise ValueError(\"Kernel size must be an odd number.\")\r\n\r\n # n_iters is set such that the DT will be able to propagate from any corner of the image to its far,\r\n # diagonally opposite corner\r\n n_iters: int = math.ceil(max(image.shape[2], image.shape[3]) / math.floor(kernel_size / 2))\r\n grid = create_meshgrid(kernel_size, kernel_size, normalized_coordinates=False,\r\n device=image.device, dtype=image.dtype)\r\n\r\n grid -= math.floor(kernel_size / 2)\r\n kernel = torch.hypot(grid[0, :, :, 0], grid[0, :, :, 1])\r\n kernel = torch.exp(kernel / -h).unsqueeze(0)\r\n\r\n out = torch.zeros_like(image)\r\n\r\n # It is possible to avoid cloning the image if boundary = image, but this would require modifying the image tensor.\r\n boundary = image.clone()\r\n\r\n for i in range(n_iters):\r\n cdt = filter2d(boundary, kernel, border_type='replicate')\r\n cdt = -h * torch.log(cdt)\r\n\r\n # We are calculating log(0) above.\r\n cdt = torch.nan_to_num(cdt, posinf=0.0)\r\n\r\n mask = torch.where(cdt > 0, 1.0, 0.0)\r\n if mask.sum() == 0:\r\n break\r\n\r\n offset: int = i * kernel_size // 2\r\n out += (offset + cdt) * mask\r\n boundary[mask == 1] = 1\r\n\r\n return out\r\n\r\n\r\nclass DistanceTransform(nn.Module):\r\n r\"\"\"Module that approximates the Manhattan (city block) distance transform of images using convolutions.\r\n\r\n Args:\r\n kernel_size: size of the convolution kernel.\r\n h: value that influence the approximation of the min function.\r\n\r\n \"\"\"\r\n def __init__(\r\n self,\r\n kernel_size: int = 3,\r\n h: float = 0.35\r\n ):\r\n super().__init__()\r\n self.kernel_size = kernel_size\r\n self.h = h\r\n\r\n def forward(self, image: torch.Tensor) -> torch.Tensor:\r\n # If images have multiple channels, view the channels in the batch dimension to match kernel shape.\r\n if image.shape[1] > 1:\r\n image_in = image.view(-1, 1, image.shape[-2], image.shape[-1])\r\n else:\r\n image_in = image\r\n\r\n return distance_transform(image_in, self.kernel_size, self.h).view_as(image)\r\n", "path": "kornia/contrib/distance_transform.py"}], "after_files": [{"content": "import math\r\n\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nfrom kornia.filters import filter2d\r\nfrom kornia.utils import create_meshgrid\r\n\r\n\r\ndef distance_transform(\r\n image: torch.Tensor,\r\n kernel_size: int = 3,\r\n h: float = 0.35\r\n) -> torch.Tensor:\r\n r\"\"\"Approximates the Manhattan distance transform of images using cascaded convolution operations.\r\n\r\n The value at each pixel in the output represents the distance to the nearest non-zero pixel in the image image.\r\n It uses the method described in :cite:`pham2021dtlayer`.\r\n The transformation is applied independently across the channel dimension of the images.\r\n\r\n Args:\r\n image: Image with shape :math:`(B,C,H,W)`.\r\n kernel_size: size of the convolution kernel.\r\n h: value that influence the approximation of the min function.\r\n\r\n Returns:\r\n tensor with shape :math:`(B,C,H,W)`.\r\n\r\n Example:\r\n >>> tensor = torch.zeros(1, 1, 5, 5)\r\n >>> tensor[:,:, 1, 2] = 1\r\n >>> dt = kornia.contrib.distance_transform(tensor)\r\n \"\"\"\r\n if not isinstance(image, torch.Tensor):\r\n raise TypeError(f\"image type is not a torch.Tensor. Got {type(image)}\")\r\n\r\n if not len(image.shape) == 4:\r\n raise ValueError(f\"Invalid image shape, we expect BxCxHxW. 
Got: {image.shape}\")\r\n\r\n if kernel_size % 2 == 0:\r\n raise ValueError(\"Kernel size must be an odd number.\")\r\n\r\n # n_iters is set such that the DT will be able to propagate from any corner of the image to its far,\r\n # diagonally opposite corner\r\n n_iters: int = math.ceil(max(image.shape[2], image.shape[3]) / math.floor(kernel_size / 2))\r\n grid = create_meshgrid(kernel_size, kernel_size, normalized_coordinates=False,\r\n device=image.device, dtype=image.dtype)\r\n\r\n grid -= math.floor(kernel_size / 2)\r\n kernel = torch.hypot(grid[0, :, :, 0], grid[0, :, :, 1])\r\n kernel = torch.exp(kernel / -h).unsqueeze(0)\r\n\r\n out = torch.zeros_like(image)\r\n\r\n # It is possible to avoid cloning the image if boundary = image, but this would require modifying the image tensor.\r\n boundary = image.clone()\r\n signal_ones = torch.ones_like(boundary)\r\n\r\n for i in range(n_iters):\r\n cdt = filter2d(boundary, kernel, border_type='replicate')\r\n cdt = -h * torch.log(cdt)\r\n\r\n # We are calculating log(0) above.\r\n cdt = torch.nan_to_num(cdt, posinf=0.0)\r\n\r\n mask = torch.where(cdt > 0, 1.0, 0.0)\r\n if mask.sum() == 0:\r\n break\r\n\r\n offset: int = i * kernel_size // 2\r\n out += (offset + cdt) * mask\r\n boundary = torch.where(mask == 1, signal_ones, boundary)\r\n\r\n return out\r\n\r\n\r\nclass DistanceTransform(nn.Module):\r\n r\"\"\"Module that approximates the Manhattan (city block) distance transform of images using convolutions.\r\n\r\n Args:\r\n kernel_size: size of the convolution kernel.\r\n h: value that influence the approximation of the min function.\r\n\r\n \"\"\"\r\n def __init__(\r\n self,\r\n kernel_size: int = 3,\r\n h: float = 0.35\r\n ):\r\n super().__init__()\r\n self.kernel_size = kernel_size\r\n self.h = h\r\n\r\n def forward(self, image: torch.Tensor) -> torch.Tensor:\r\n # If images have multiple channels, view the channels in the batch dimension to match kernel shape.\r\n if image.shape[1] > 1:\r\n image_in = image.view(-1, 1, image.shape[-2], image.shape[-1])\r\n else:\r\n image_in = image\r\n\r\n return distance_transform(image_in, self.kernel_size, self.h).view_as(image)\r\n", "path": "kornia/contrib/distance_transform.py"}]}
| 2,215 | 202 |
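The root cause in the record above is the in-place indexed assignment `boundary[mask == 1] = 1`: each `filter2d(boundary, ...)` call saves `boundary` for its backward pass, and the in-place write later in the same loop iteration bumps the tensor's version counter, so `backward()` aborts with the error quoted in the issue. The golden diff swaps the write for an out-of-place `torch.where`. A standalone sketch of the same failure mode and fix, in plain PyTorch rather than kornia:

```python
# Reproduces the autograd rule behind the fix: a tensor saved for backward
# must not be mutated in place afterwards.
import torch

x = torch.rand(3, requires_grad=True)
y = x * 2.0             # non-leaf tensor inside the graph
z = (y ** 2).sum()      # pow() saves y for its backward pass

y[0] = 1.0              # in-place write bumps y's version counter
try:
    z.backward()        # RuntimeError: ... modified by an inplace operation
except RuntimeError as err:
    print(err)

# Out-of-place replacement leaves the saved tensor untouched, mirroring the
# torch.where(...) line introduced by the golden diff.
x = torch.rand(3, requires_grad=True)
y = x * 2.0
mask = y > 1.0
y_new = torch.where(mask, torch.ones_like(y), y)  # new tensor, y stays intact
(y_new ** 2).sum().backward()                     # gradients flow normally
```

This also explains why the issue's script only fails for the tensor routed through `nn.Conv2d`: the other input requires no grad, so nothing is saved for backward and the in-place writes there are harmless.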
gh_patches_debug_42386
|
rasdani/github-patches
|
git_diff
|
hydroshare__hydroshare-5336
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ability to repair_resource ignoring recently repaired resources (batch)
**Describe the feature you'd like and what it will do**
Jenkins agent times out on repair_resource command
https://new-hs-ci.hydroshare.org/view/02-Deployment/job/Production-Deployment/job/prod-hsctl-command/268/console
```
FATAL: command execution failed
java.io.IOException
at hudson.remoting.Channel.close(Channel.java:1499)
at hudson.remoting.Channel.close(Channel.java:1455)
at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:884)
at hudson.slaves.SlaveComputer.kill(SlaveComputer.java:851)
at hudson.model.AbstractCIBase.killComputer(AbstractCIBase.java:87)
at jenkins.model.Jenkins.lambda$_cleanUpDisconnectComputers$11(Jenkins.java:3559)
at hudson.model.Queue._withLock(Queue.java:1382)
at hudson.model.Queue.withLock(Queue.java:1258)
at jenkins.model.Jenkins._cleanUpDisconnectComputers(Jenkins.java:3555)
at jenkins.model.Jenkins.cleanUp(Jenkins.java:3438)
at hudson.WebAppMain.contextDestroyed(WebAppMain.java:441)
at org.eclipse.jetty.server.handler.ContextHandler.callContextDestroyed(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextDestroyed(ServletContextHandler.java:584)
at org.eclipse.jetty.server.handler.ContextHandler.contextDestroyed(ContextHandler.java:1038)
at org.eclipse.jetty.servlet.ServletHandler.doStop(ServletHandler.java:319)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.security.SecurityHandler.doStop(SecurityHandler.java:437)
at org.eclipse.jetty.security.ConstraintSecurityHandler.doStop(ConstraintSecurityHandler.java:423)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.server.session.SessionHandler.doStop(SessionHandler.java:520)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.server.handler.ContextHandler.stopContext(ContextHandler.java:1061)
at org.eclipse.jetty.servlet.ServletContextHandler.stopContext(ServletContextHandler.java:386)
at org.eclipse.jetty.webapp.WebAppContext.stopWebapp(WebAppContext.java:1454)
at org.eclipse.jetty.webapp.WebAppContext.stopContext(WebAppContext.java:1420)
at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:1115)
at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:297)
at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:547)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.server.Server.doStop(Server.java:470)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at winstone.Launcher.shutdown(Launcher.java:318)
at winstone.ShutdownHook.run(ShutdownHook.java:25)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@30214e98:prod-nginx": Remote call on prod-nginx failed. The channel is closing down or has closed down
```
**Why is this feature important?**
We want to bulk repair resources, but this process times out because it is very long-running.
We could make the process asynchronous and run the repairs in parallel, but we run the risk of race conditions and these are potentially sensitive file operations on published resources.
Instead of making the process async, I suggest that we add the ability to run repairs in batches, or to ignore resources that have recently been repaired.
**Additional context**
Related to https://github.com/hydroshare/hydroshare/issues/5300
HS v2.12.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_core/management/commands/repair_resource.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 Check synchronization between iRODS and Django for multiple resources
5
6 This checks that:
7
8 1. every ResourceFile corresponds to an iRODS file
9 2. every iRODS file in {short_id}/data/contents corresponds to a ResourceFile
10 3. every iRODS directory {short_id} corresponds to a Django resource
11 """
12
13 from django.core.management.base import BaseCommand, CommandError
14 from django.core.exceptions import ValidationError
15 from hs_core.models import BaseResource
16 from hs_core.management.utils import repair_resource
17 from hs_core.views.utils import get_default_admin_user
18 from hs_core import hydroshare
19 from django.utils import timezone
20 from django.db.models import F
21 from datetime import timedelta
22
23 import logging
24
25
26 class Command(BaseCommand):
27 help = "Check synchronization between iRODS and Django."
28
29 def add_arguments(self, parser):
30 parser.add_argument('resource_ids', nargs='*', type=str)
31 parser.add_argument('--days', type=int, dest='days', help='include resources updated in the last X days')
32 parser.add_argument(
33 '--admin',
34 action='store_true', # True for presence, False for absence
35 dest='admin', # value is options['dry_run']
36 help='run process as admin user - this allows published resources to be modified',
37 )
38 parser.add_argument(
39 '--dryrun',
40 action='store_true', # True for presence, False for absence
41 dest='dry_run', # value is options['dry_run']
42 help='run process without saving changes',
43 )
44 parser.add_argument(
45 '--published',
46 action='store_true', # True for presence, False for absence
47 dest='published', # value is options['published']
48 help='filter to just published resources',
49 )
50
51 def handle(self, *args, **options):
52 logger = logging.getLogger(__name__)
53 resources_ids = options['resource_ids']
54 resources = BaseResource.objects.all()
55 days = options['days']
56 admin = options['admin']
57 dry_run = options['dry_run']
58 published = options['published']
59 site_url = hydroshare.utils.current_site_url()
60
61 if resources_ids: # an array of resource short_id to check.
62 print("CHECKING RESOURCES PROVIDED")
63 resources = resources.filter(short_id__in=resources_ids)
64 if published:
65 if not dry_run:
66 print("WARNING: Executing with --published arg without --dryrun. Published resources will be modified.")
67 print("FILTERING TO INCLUDE PUBLISHED RESOURCES ONLY")
68 resources = resources.filter(raccess__published=True)
69
70 if days:
71 print(f"FILTERING TO INCLUDE RESOURCES UPDATED IN LAST {days} DAYS")
72 if resources_ids:
73 print("Your supplied resource_ids will be filtered by the --days that you provided. ")
74 cuttoff_time = timezone.now() - timedelta(days)
75 resources = resources.filter(updated__gte=cuttoff_time)
76
77 if dry_run:
78 print("CONDUCTING A DRY RUN: FIXES WILL NOT BE SAVED")
79
80 if not resources:
81 print("NO RESOURCES FOUND MATCHING YOUR FILTER ARGUMENTS")
82 return
83
84 if admin:
85 print("PROCESSES WILL BE RUN AS ADMIN USER. ALLOWS DELETING DJANGO RESOURCE FILES ON PUBLISHED RESOURCES")
86 user = get_default_admin_user()
87 else:
88 user = None
89
90 resources = resources.order_by(F('updated').asc(nulls_first=True))
91
92 total_res_to_check = resources.count()
93 current_resource = 0
94 impacted_resources = 0
95 total_files_missing_in_django = 0
96 total_files_dangling_in_django = 0
97 resources_with_missing_django = []
98 resources_with_missing_irods = []
99 failed_resources = []
100 for resource in resources.iterator():
101 current_resource += 1
102 res_url = site_url + resource.absolute_url
103 print("*" * 100)
104 print(f"{current_resource}/{total_res_to_check}: Checking resource {res_url}")
105 if resource.raccess.published:
106 print("This Resource is published")
107 if admin:
108 print("Command running with --admin. Published resources will be repaired if needed.")
109 else:
110 print("Command running without --admin. Fixing a published resource raise ValidationError")
111 try:
112 _, missing_in_django, dangling_in_django = repair_resource(resource, logger, dry_run=dry_run, user=user)
113 except ValidationError as ve:
114 failed_resources.append(res_url)
115 print("Exception while attempting to repair resource:")
116 print(ve)
117 continue
118 if dangling_in_django > 0 or missing_in_django > 0:
119 impacted_resources += 1
120 total_files_missing_in_django += missing_in_django
121 total_files_dangling_in_django += dangling_in_django
122 if missing_in_django > 0:
123 resources_with_missing_django.append(res_url)
124 if dangling_in_django > 0:
125 resources_with_missing_irods.append(res_url)
126 print(f"{dangling_in_django} files dangling in Django for this resource.")
127 print(f"{missing_in_django} files missing in Django for this resource.")
128 print(f"Resources thus far with at least one missing django file: {len(resources_with_missing_django)}")
129 print(f"Resources thus far with at least one dangling django file: {len(resources_with_missing_irods)}")
130 print(f"Total resources with discrepancies thus far: {impacted_resources}")
131 print("*" * 100)
132 print("*" * 100)
133 print(f"Number of resources that had at least one file issue: {impacted_resources}")
134
135 print("*" * 100)
136 print(f"Total number of files missing in Django (across all checked resources): \
137 {total_files_missing_in_django}")
138 print(f"Number of resources with at least one missing django file: {len(resources_with_missing_django)}")
139 for res in resources_with_missing_django:
140 print(res)
141
142 print("*" * 100)
143 print(f"Total number of files dangling in Django (across all checked resources): \
144 {total_files_dangling_in_django}")
145 print(f"Number of resources with at least one dangling Django file: {len(resources_with_missing_irods)}")
146 for res in resources_with_missing_irods:
147 print(res)
148
149 # Make it simple to detect clean/fail run in Jenkins
150 if impacted_resources and dry_run:
151 raise CommandError("repair_resources detected resources in need of repair during dry run")
152 else:
153 print("Completed run of repair_resource")
154 if failed_resources:
155 print("*" * 100)
156 print("Repair was attempted but failed for the following resources:")
157 for res in resources_with_missing_irods:
158 print(res)
159 raise CommandError("Repair was attempted but failed on at least one resource")
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hs_core/management/commands/repair_resource.py b/hs_core/management/commands/repair_resource.py
--- a/hs_core/management/commands/repair_resource.py
+++ b/hs_core/management/commands/repair_resource.py
@@ -17,7 +17,7 @@
from hs_core.views.utils import get_default_admin_user
from hs_core import hydroshare
from django.utils import timezone
-from django.db.models import F
+from django.db.models import F, Q
from datetime import timedelta
import logging
@@ -28,7 +28,10 @@
def add_arguments(self, parser):
parser.add_argument('resource_ids', nargs='*', type=str)
- parser.add_argument('--days', type=int, dest='days', help='include resources updated in the last X days')
+ parser.add_argument('--updated_since', type=int, dest='updated_since',
+ help='include only resources updated in the last X days')
+ parser.add_argument('--ignore_repaired_since', type=int, dest='ignore_repaired_since',
+ help='ignore resources repaired since X days ago')
parser.add_argument(
'--admin',
action='store_true', # True for presence, False for absence
@@ -52,11 +55,12 @@
logger = logging.getLogger(__name__)
resources_ids = options['resource_ids']
resources = BaseResource.objects.all()
- days = options['days']
+ updated_since = options['updated_since']
admin = options['admin']
dry_run = options['dry_run']
published = options['published']
site_url = hydroshare.utils.current_site_url()
+ ignore_repaired_since = options['ignore_repaired_since']
if resources_ids: # an array of resource short_id to check.
print("CHECKING RESOURCES PROVIDED")
@@ -67,13 +71,20 @@
print("FILTERING TO INCLUDE PUBLISHED RESOURCES ONLY")
resources = resources.filter(raccess__published=True)
- if days:
- print(f"FILTERING TO INCLUDE RESOURCES UPDATED IN LAST {days} DAYS")
+ if updated_since:
+ print(f"FILTERING TO INCLUDE RESOURCES UPDATED IN LAST {updated_since} DAYS")
if resources_ids:
- print("Your supplied resource_ids will be filtered by the --days that you provided. ")
- cuttoff_time = timezone.now() - timedelta(days)
+ print("Your supplied resource_ids will be filtered by the --updated_since days that you provided. ")
+ cuttoff_time = timezone.now() - timedelta(days=updated_since)
resources = resources.filter(updated__gte=cuttoff_time)
+ if ignore_repaired_since:
+ print(f"FILTERING TO INCLUDE RESOURCES NOT REPAIRED IN THE LAST {ignore_repaired_since} DAYS")
+ if resources_ids:
+ print("Your supplied resource_ids will be filtered by the --ignore_repaired_since days provided. ")
+ cuttoff_time = timezone.now() - timedelta(days=ignore_repaired_since)
+ resources = resources.filter(Q(repaired__lt=cuttoff_time) | Q(repaired__isnull=True))
+
if dry_run:
print("CONDUCTING A DRY RUN: FIXES WILL NOT BE SAVED")
@@ -87,7 +98,7 @@
else:
user = None
- resources = resources.order_by(F('updated').asc(nulls_first=True))
+ resources = resources.order_by(F('repaired').asc(nulls_first=True))
total_res_to_check = resources.count()
current_resource = 0
|
{"golden_diff": "diff --git a/hs_core/management/commands/repair_resource.py b/hs_core/management/commands/repair_resource.py\n--- a/hs_core/management/commands/repair_resource.py\n+++ b/hs_core/management/commands/repair_resource.py\n@@ -17,7 +17,7 @@\n from hs_core.views.utils import get_default_admin_user\n from hs_core import hydroshare\n from django.utils import timezone\n-from django.db.models import F\n+from django.db.models import F, Q\n from datetime import timedelta\n \n import logging\n@@ -28,7 +28,10 @@\n \n def add_arguments(self, parser):\n parser.add_argument('resource_ids', nargs='*', type=str)\n- parser.add_argument('--days', type=int, dest='days', help='include resources updated in the last X days')\n+ parser.add_argument('--updated_since', type=int, dest='updated_since',\n+ help='include only resources updated in the last X days')\n+ parser.add_argument('--ignore_repaired_since', type=int, dest='ignore_repaired_since',\n+ help='ignore resources repaired since X days ago')\n parser.add_argument(\n '--admin',\n action='store_true', # True for presence, False for absence\n@@ -52,11 +55,12 @@\n logger = logging.getLogger(__name__)\n resources_ids = options['resource_ids']\n resources = BaseResource.objects.all()\n- days = options['days']\n+ updated_since = options['updated_since']\n admin = options['admin']\n dry_run = options['dry_run']\n published = options['published']\n site_url = hydroshare.utils.current_site_url()\n+ ignore_repaired_since = options['ignore_repaired_since']\n \n if resources_ids: # an array of resource short_id to check.\n print(\"CHECKING RESOURCES PROVIDED\")\n@@ -67,13 +71,20 @@\n print(\"FILTERING TO INCLUDE PUBLISHED RESOURCES ONLY\")\n resources = resources.filter(raccess__published=True)\n \n- if days:\n- print(f\"FILTERING TO INCLUDE RESOURCES UPDATED IN LAST {days} DAYS\")\n+ if updated_since:\n+ print(f\"FILTERING TO INCLUDE RESOURCES UPDATED IN LAST {updated_since} DAYS\")\n if resources_ids:\n- print(\"Your supplied resource_ids will be filtered by the --days that you provided. \")\n- cuttoff_time = timezone.now() - timedelta(days)\n+ print(\"Your supplied resource_ids will be filtered by the --updated_since days that you provided. \")\n+ cuttoff_time = timezone.now() - timedelta(days=updated_since)\n resources = resources.filter(updated__gte=cuttoff_time)\n \n+ if ignore_repaired_since:\n+ print(f\"FILTERING TO INCLUDE RESOURCES NOT REPAIRED IN THE LAST {ignore_repaired_since} DAYS\")\n+ if resources_ids:\n+ print(\"Your supplied resource_ids will be filtered by the --ignore_repaired_since days provided. 
\")\n+ cuttoff_time = timezone.now() - timedelta(days=ignore_repaired_since)\n+ resources = resources.filter(Q(repaired__lt=cuttoff_time) | Q(repaired__isnull=True))\n+\n if dry_run:\n print(\"CONDUCTING A DRY RUN: FIXES WILL NOT BE SAVED\")\n \n@@ -87,7 +98,7 @@\n else:\n user = None\n \n- resources = resources.order_by(F('updated').asc(nulls_first=True))\n+ resources = resources.order_by(F('repaired').asc(nulls_first=True))\n \n total_res_to_check = resources.count()\n current_resource = 0\n", "issue": "Add ability to repair_resource ignoring recently repaired resources (batch)\n**Describe the feature you'd like and what it will do**\r\n\r\nJenkins agent times out on repair_resource command\r\nhttps://new-hs-ci.hydroshare.org/view/02-Deployment/job/Production-Deployment/job/prod-hsctl-command/268/console\r\n```\r\nFATAL: command execution failed\r\njava.io.IOException\r\n\tat hudson.remoting.Channel.close(Channel.java:1499)\r\n\tat hudson.remoting.Channel.close(Channel.java:1455)\r\n\tat hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:884)\r\n\tat hudson.slaves.SlaveComputer.kill(SlaveComputer.java:851)\r\n\tat hudson.model.AbstractCIBase.killComputer(AbstractCIBase.java:87)\r\n\tat jenkins.model.Jenkins.lambda$_cleanUpDisconnectComputers$11(Jenkins.java:3559)\r\n\tat hudson.model.Queue._withLock(Queue.java:1382)\r\n\tat hudson.model.Queue.withLock(Queue.java:1258)\r\n\tat jenkins.model.Jenkins._cleanUpDisconnectComputers(Jenkins.java:3555)\r\n\tat jenkins.model.Jenkins.cleanUp(Jenkins.java:3438)\r\n\tat hudson.WebAppMain.contextDestroyed(WebAppMain.java:441)\r\n\tat org.eclipse.jetty.server.handler.ContextHandler.callContextDestroyed(ContextHandler.java:1075)\r\n\tat org.eclipse.jetty.servlet.ServletContextHandler.callContextDestroyed(ServletContextHandler.java:584)\r\n\tat org.eclipse.jetty.server.handler.ContextHandler.contextDestroyed(ContextHandler.java:1038)\r\n\tat org.eclipse.jetty.servlet.ServletHandler.doStop(ServletHandler.java:319)\r\n\tat org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)\r\n\tat org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)\r\n\tat org.eclipse.jetty.security.SecurityHandler.doStop(SecurityHandler.java:437)\r\n\tat org.eclipse.jetty.security.ConstraintSecurityHandler.doStop(ConstraintSecurityHandler.java:423)\r\n\tat org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)\r\n\tat org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)\r\n\tat org.eclipse.jetty.server.session.SessionHandler.doStop(SessionHandler.java:520)\r\n\tat org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)\r\n\tat org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)\r\n\tat org.eclipse.jetty.server.handler.ContextHandler.stopContext(ContextHandler.java:1061)\r\n\tat 
org.eclipse.jetty.servlet.ServletContextHandler.stopContext(ServletContextHandler.java:386)\r\n\tat org.eclipse.jetty.webapp.WebAppContext.stopWebapp(WebAppContext.java:1454)\r\n\tat org.eclipse.jetty.webapp.WebAppContext.stopContext(WebAppContext.java:1420)\r\n\tat org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:1115)\r\n\tat org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:297)\r\n\tat org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:547)\r\n\tat org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)\r\n\tat org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)\r\n\tat org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)\r\n\tat org.eclipse.jetty.server.Server.doStop(Server.java:470)\r\n\tat org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)\r\n\tat winstone.Launcher.shutdown(Launcher.java:318)\r\n\tat winstone.ShutdownHook.run(ShutdownHook.java:25)\r\nCaused: hudson.remoting.ChannelClosedException: Channel \"hudson.remoting.Channel@30214e98:prod-nginx\": Remote call on prod-nginx failed. The channel is closing down or has closed down\r\n```\r\n\r\n**Why is this feature important?**\r\nWe want to bulk repair resources, but this process times out because it is very long-running.\r\nWe could make the process asynchronous and run the repairs in parallel, but we run the risk of race conditions and these are potentially sensitive file operations on published resources.\r\nInstead of making the process async, I suggest that we add the ability to run repairs in batches. Or to ignore resources that have recently been repaired\r\n\r\n**Additional context**\r\nRelated to https://github.com/hydroshare/hydroshare/issues/5300\r\nHS v2.12.3\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nCheck synchronization between iRODS and Django for multiple resources\n\nThis checks that:\n\n1. every ResourceFile corresponds to an iRODS file\n2. every iRODS file in {short_id}/data/contents corresponds to a ResourceFile\n3. 
every iRODS directory {short_id} corresponds to a Django resource\n\"\"\"\n\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.core.exceptions import ValidationError\nfrom hs_core.models import BaseResource\nfrom hs_core.management.utils import repair_resource\nfrom hs_core.views.utils import get_default_admin_user\nfrom hs_core import hydroshare\nfrom django.utils import timezone\nfrom django.db.models import F\nfrom datetime import timedelta\n\nimport logging\n\n\nclass Command(BaseCommand):\n help = \"Check synchronization between iRODS and Django.\"\n\n def add_arguments(self, parser):\n parser.add_argument('resource_ids', nargs='*', type=str)\n parser.add_argument('--days', type=int, dest='days', help='include resources updated in the last X days')\n parser.add_argument(\n '--admin',\n action='store_true', # True for presence, False for absence\n dest='admin', # value is options['dry_run']\n help='run process as admin user - this allows published resources to be modified',\n )\n parser.add_argument(\n '--dryrun',\n action='store_true', # True for presence, False for absence\n dest='dry_run', # value is options['dry_run']\n help='run process without saving changes',\n )\n parser.add_argument(\n '--published',\n action='store_true', # True for presence, False for absence\n dest='published', # value is options['published']\n help='filter to just published resources',\n )\n\n def handle(self, *args, **options):\n logger = logging.getLogger(__name__)\n resources_ids = options['resource_ids']\n resources = BaseResource.objects.all()\n days = options['days']\n admin = options['admin']\n dry_run = options['dry_run']\n published = options['published']\n site_url = hydroshare.utils.current_site_url()\n\n if resources_ids: # an array of resource short_id to check.\n print(\"CHECKING RESOURCES PROVIDED\")\n resources = resources.filter(short_id__in=resources_ids)\n if published:\n if not dry_run:\n print(\"WARNING: Executing with --published arg without --dryrun. Published resources will be modified.\")\n print(\"FILTERING TO INCLUDE PUBLISHED RESOURCES ONLY\")\n resources = resources.filter(raccess__published=True)\n\n if days:\n print(f\"FILTERING TO INCLUDE RESOURCES UPDATED IN LAST {days} DAYS\")\n if resources_ids:\n print(\"Your supplied resource_ids will be filtered by the --days that you provided. \")\n cuttoff_time = timezone.now() - timedelta(days)\n resources = resources.filter(updated__gte=cuttoff_time)\n\n if dry_run:\n print(\"CONDUCTING A DRY RUN: FIXES WILL NOT BE SAVED\")\n\n if not resources:\n print(\"NO RESOURCES FOUND MATCHING YOUR FILTER ARGUMENTS\")\n return\n\n if admin:\n print(\"PROCESSES WILL BE RUN AS ADMIN USER. ALLOWS DELETING DJANGO RESOURCE FILES ON PUBLISHED RESOURCES\")\n user = get_default_admin_user()\n else:\n user = None\n\n resources = resources.order_by(F('updated').asc(nulls_first=True))\n\n total_res_to_check = resources.count()\n current_resource = 0\n impacted_resources = 0\n total_files_missing_in_django = 0\n total_files_dangling_in_django = 0\n resources_with_missing_django = []\n resources_with_missing_irods = []\n failed_resources = []\n for resource in resources.iterator():\n current_resource += 1\n res_url = site_url + resource.absolute_url\n print(\"*\" * 100)\n print(f\"{current_resource}/{total_res_to_check}: Checking resource {res_url}\")\n if resource.raccess.published:\n print(\"This Resource is published\")\n if admin:\n print(\"Command running with --admin. 
Published resources will be repaired if needed.\")\n else:\n print(\"Command running without --admin. Fixing a published resource raise ValidationError\")\n try:\n _, missing_in_django, dangling_in_django = repair_resource(resource, logger, dry_run=dry_run, user=user)\n except ValidationError as ve:\n failed_resources.append(res_url)\n print(\"Exception while attempting to repair resource:\")\n print(ve)\n continue\n if dangling_in_django > 0 or missing_in_django > 0:\n impacted_resources += 1\n total_files_missing_in_django += missing_in_django\n total_files_dangling_in_django += dangling_in_django\n if missing_in_django > 0:\n resources_with_missing_django.append(res_url)\n if dangling_in_django > 0:\n resources_with_missing_irods.append(res_url)\n print(f\"{dangling_in_django} files dangling in Django for this resource.\")\n print(f\"{missing_in_django} files missing in Django for this resource.\")\n print(f\"Resources thus far with at least one missing django file: {len(resources_with_missing_django)}\")\n print(f\"Resources thus far with at least one dangling django file: {len(resources_with_missing_irods)}\")\n print(f\"Total resources with discrepancies thus far: {impacted_resources}\")\n print(\"*\" * 100)\n print(\"*\" * 100)\n print(f\"Number of resources that had at least one file issue: {impacted_resources}\")\n\n print(\"*\" * 100)\n print(f\"Total number of files missing in Django (across all checked resources): \\\n {total_files_missing_in_django}\")\n print(f\"Number of resources with at least one missing django file: {len(resources_with_missing_django)}\")\n for res in resources_with_missing_django:\n print(res)\n\n print(\"*\" * 100)\n print(f\"Total number of files dangling in Django (across all checked resources): \\\n {total_files_dangling_in_django}\")\n print(f\"Number of resources with at least one dangling Django file: {len(resources_with_missing_irods)}\")\n for res in resources_with_missing_irods:\n print(res)\n\n # Make it simple to detect clean/fail run in Jenkins\n if impacted_resources and dry_run:\n raise CommandError(\"repair_resources detected resources in need of repair during dry run\")\n else:\n print(\"Completed run of repair_resource\")\n if failed_resources:\n print(\"*\" * 100)\n print(\"Repair was attempted but failed for the following resources:\")\n for res in resources_with_missing_irods:\n print(res)\n raise CommandError(\"Repair was attempted but failed on at least one resource\")\n", "path": "hs_core/management/commands/repair_resource.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nCheck synchronization between iRODS and Django for multiple resources\n\nThis checks that:\n\n1. every ResourceFile corresponds to an iRODS file\n2. every iRODS file in {short_id}/data/contents corresponds to a ResourceFile\n3. 
every iRODS directory {short_id} corresponds to a Django resource\n\"\"\"\n\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.core.exceptions import ValidationError\nfrom hs_core.models import BaseResource\nfrom hs_core.management.utils import repair_resource\nfrom hs_core.views.utils import get_default_admin_user\nfrom hs_core import hydroshare\nfrom django.utils import timezone\nfrom django.db.models import F, Q\nfrom datetime import timedelta\n\nimport logging\n\n\nclass Command(BaseCommand):\n help = \"Check synchronization between iRODS and Django.\"\n\n def add_arguments(self, parser):\n parser.add_argument('resource_ids', nargs='*', type=str)\n parser.add_argument('--updated_since', type=int, dest='updated_since',\n help='include only resources updated in the last X days')\n parser.add_argument('--ignore_repaired_since', type=int, dest='ignore_repaired_since',\n help='ignore resources repaired since X days ago')\n parser.add_argument(\n '--admin',\n action='store_true', # True for presence, False for absence\n dest='admin', # value is options['dry_run']\n help='run process as admin user - this allows published resources to be modified',\n )\n parser.add_argument(\n '--dryrun',\n action='store_true', # True for presence, False for absence\n dest='dry_run', # value is options['dry_run']\n help='run process without saving changes',\n )\n parser.add_argument(\n '--published',\n action='store_true', # True for presence, False for absence\n dest='published', # value is options['published']\n help='filter to just published resources',\n )\n\n def handle(self, *args, **options):\n logger = logging.getLogger(__name__)\n resources_ids = options['resource_ids']\n resources = BaseResource.objects.all()\n updated_since = options['updated_since']\n admin = options['admin']\n dry_run = options['dry_run']\n published = options['published']\n site_url = hydroshare.utils.current_site_url()\n ignore_repaired_since = options['ignore_repaired_since']\n\n if resources_ids: # an array of resource short_id to check.\n print(\"CHECKING RESOURCES PROVIDED\")\n resources = resources.filter(short_id__in=resources_ids)\n if published:\n if not dry_run:\n print(\"WARNING: Executing with --published arg without --dryrun. Published resources will be modified.\")\n print(\"FILTERING TO INCLUDE PUBLISHED RESOURCES ONLY\")\n resources = resources.filter(raccess__published=True)\n\n if updated_since:\n print(f\"FILTERING TO INCLUDE RESOURCES UPDATED IN LAST {updated_since} DAYS\")\n if resources_ids:\n print(\"Your supplied resource_ids will be filtered by the --updated_since days that you provided. \")\n cuttoff_time = timezone.now() - timedelta(days=updated_since)\n resources = resources.filter(updated__gte=cuttoff_time)\n\n if ignore_repaired_since:\n print(f\"FILTERING TO INCLUDE RESOURCES NOT REPAIRED IN THE LAST {ignore_repaired_since} DAYS\")\n if resources_ids:\n print(\"Your supplied resource_ids will be filtered by the --ignore_repaired_since days provided. \")\n cuttoff_time = timezone.now() - timedelta(days=ignore_repaired_since)\n resources = resources.filter(Q(repaired__lt=cuttoff_time) | Q(repaired__isnull=True))\n\n if dry_run:\n print(\"CONDUCTING A DRY RUN: FIXES WILL NOT BE SAVED\")\n\n if not resources:\n print(\"NO RESOURCES FOUND MATCHING YOUR FILTER ARGUMENTS\")\n return\n\n if admin:\n print(\"PROCESSES WILL BE RUN AS ADMIN USER. 
ALLOWS DELETING DJANGO RESOURCE FILES ON PUBLISHED RESOURCES\")\n user = get_default_admin_user()\n else:\n user = None\n\n resources = resources.order_by(F('repaired').asc(nulls_first=True))\n\n total_res_to_check = resources.count()\n current_resource = 0\n impacted_resources = 0\n total_files_missing_in_django = 0\n total_files_dangling_in_django = 0\n resources_with_missing_django = []\n resources_with_missing_irods = []\n failed_resources = []\n for resource in resources.iterator():\n current_resource += 1\n res_url = site_url + resource.absolute_url\n print(\"*\" * 100)\n print(f\"{current_resource}/{total_res_to_check}: Checking resource {res_url}\")\n if resource.raccess.published:\n print(\"This Resource is published\")\n if admin:\n print(\"Command running with --admin. Published resources will be repaired if needed.\")\n else:\n print(\"Command running without --admin. Fixing a published resource raise ValidationError\")\n try:\n _, missing_in_django, dangling_in_django = repair_resource(resource, logger, dry_run=dry_run, user=user)\n except ValidationError as ve:\n failed_resources.append(res_url)\n print(\"Exception while attempting to repair resource:\")\n print(ve)\n continue\n if dangling_in_django > 0 or missing_in_django > 0:\n impacted_resources += 1\n total_files_missing_in_django += missing_in_django\n total_files_dangling_in_django += dangling_in_django\n if missing_in_django > 0:\n resources_with_missing_django.append(res_url)\n if dangling_in_django > 0:\n resources_with_missing_irods.append(res_url)\n print(f\"{dangling_in_django} files dangling in Django for this resource.\")\n print(f\"{missing_in_django} files missing in Django for this resource.\")\n print(f\"Resources thus far with at least one missing django file: {len(resources_with_missing_django)}\")\n print(f\"Resources thus far with at least one dangling django file: {len(resources_with_missing_irods)}\")\n print(f\"Total resources with discrepancies thus far: {impacted_resources}\")\n print(\"*\" * 100)\n print(\"*\" * 100)\n print(f\"Number of resources that had at least one file issue: {impacted_resources}\")\n\n print(\"*\" * 100)\n print(f\"Total number of files missing in Django (across all checked resources): \\\n {total_files_missing_in_django}\")\n print(f\"Number of resources with at least one missing django file: {len(resources_with_missing_django)}\")\n for res in resources_with_missing_django:\n print(res)\n\n print(\"*\" * 100)\n print(f\"Total number of files dangling in Django (across all checked resources): \\\n {total_files_dangling_in_django}\")\n print(f\"Number of resources with at least one dangling Django file: {len(resources_with_missing_irods)}\")\n for res in resources_with_missing_irods:\n print(res)\n\n # Make it simple to detect clean/fail run in Jenkins\n if impacted_resources and dry_run:\n raise CommandError(\"repair_resources detected resources in need of repair during dry run\")\n else:\n print(\"Completed run of repair_resource\")\n if failed_resources:\n print(\"*\" * 100)\n print(\"Repair was attempted but failed for the following resources:\")\n for res in resources_with_missing_irods:\n print(res)\n raise CommandError(\"Repair was attempted but failed on at least one resource\")\n", "path": "hs_core/management/commands/repair_resource.py"}]}
| 3,258 | 786 |
gh_patches_debug_1186
|
rasdani/github-patches
|
git_diff
|
oppia__oppia-8773
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
All the Frontend services should be documented with jsdoc.
**This starter issue is currently on hold because we do not have the capacity to support new contributors working on it.**
--------------
We aim to document all the files listed below.
Each of the below-listed files should have a file overview signifying the purpose of the file,
and each function should have its meaning, arguments and return statement documented with the help of jsdoc decorators like `@fileoverview`, `@param`, `@return`.
You can go through these services to get some reference:
- graph-input-rules.service.ts
- exploration-html-formatter.service.ts
- graph-utils.service.ts
- alerts.service.ts
- playthrough-issues.service.ts
**Deducing variable's significance and the meaning from the code:**
Try to execute the code by running a dev server locally, log the variable type (you can use typeof for this), and try to find out the purpose of the variable (what is the variable storing, what is it being used for, what would break if we remove the variable?). To figure out how to execute the code, grep to see what methods call the function, and add console logs to ensure that the code is being executed when you perform the corresponding action in the UI. (As a sanity check, you might also want to ensure that the suspected variable type is consistent with any TypeScript types that are already provided.)
**Overview of the function:**
Finding or deducing the overview or the purpose of the function can sometimes be a bit tricky; some general advice is to think about the following:
- why is this function even required, and what does it help us achieve? Try to think from the perspective of the person who created the function and try to mimic the thought process of the original author.
- Look at the callers of the function, see all the places where this function is being called, and try to get a better understanding of the function.
- If you are unable to understand the purpose of the function, feel free to reach out to your mentor (always happy to help).
Please go through this [doc](https://docs.google.com/document/d/1jr8X3oqW7WqKxOgsK8b4TxIraODAV23vDJgYso1R7Pk/edit?usp=sharing) for a deeper context.
**Please don't include types in the JSDoc, use the TypeScript annotations for that.**
PR's for reference: [#8773](https://github.com/oppia/oppia/pull/8773)
**To be assigned to a file or for any queries, comment on the thread and tag @nithusha21.**
The listed services file below needs to be documented:
- [ ] admin-config-tab-backend-api.service.ts
- [ ] admin-data.service.ts
- [ ] admin-router.service.ts @anumehaagrawal
- [ ] admin-task-manager.service.ts @larakhdavies
- [ ] alerts.service.ts
- [ ] angular-name.service.ts @parulpriyedarshani
- [ ] answer-classification.service.ts
- [ ] answer-groups-cache.service.ts
- [ ] assets-backend-api.service.ts
- [ ] audio-pFlayer.service.ts
- [ ] audio-preloader.service.ts
- [ ] audio-translation-language.service.ts @kaylahardie
- [ ] audio-translation-manager.service.ts
- [ ] autogenerated-audio-player.service.ts @BlakeHan01
- [ ] autoplayed-videos.service.ts @darkpsychic
- [ ] autosave-info-modals.service.ts
- [ ] background-mask.service.ts
- [ ] base-undo-redo.service.ts
- [ ] browser-checker.service.ts
- [ ] change-list.service.ts
- [ ] changes-in-human-readable-form.service.ts
- [ ] classroom-backend-api.service.ts @ReshuKumari
- [ ] code-normalizer.service.ts
- [ ] collection-creation-backend-api.service.ts
- [ ] collection-creation.service.ts
- [ ] collection-editor-state.service.ts
- [ ] collection-linearizer.service.ts
- [ ] collection-rights-backend-api.service.ts
- [ ] collection-update.service.ts
- [ ] collection-validation.service.ts
- [ ] compare-versions.service.ts
- [ ] compute-graph.service.ts
- [ ] concept-card-backend-api.service.ts
- [ ] construct-translation-ids.service.ts @BlakeHan01
- [ ] context.service.ts
- [ ] contribution-and-review.service.ts @lelouchB
- [ ] contribution-opportunities-backend-api.service.ts
- [ ] contribution-opportunities.service.ts
- [ ] creator-dashboard-backend-api.service.ts
- [ ] csrf-token.service.ts
- [ ] current-interaction.service.ts
- [ ] date-time-format.service.ts @linnhallonqvist
- [ ] debouncer.service.ts
- [ ] debug-info-tracker.service.ts
- [ ] device-info.service.ts
- [ ] document-attribute-customization.service.ts
- [ ] editability.service.ts
- [ ] editable-collection-backend-api.service.ts
- [ ] editable-exploration-backend-api.service.ts
- [ ] editable-question-backend-api.service.ts
- [ ] editable-skill-backend-api.service.ts
- [ ] editable-story-backend-api.service.ts
- [ ] editable-topic-backend-api.service.ts
- [ ] editor-first-time-events.service.ts
- [ ] email-dashboard-data.service.ts
- [ ] exploration-automatic-text-to-speech.service.ts
- [ ] exploration-category.service.ts
- [ ] exploration-correctness-feedback.service.ts
- [ ] exploration-creation.service.ts
- [ ] exploration-data.service.ts
- [ ] exploration-diff.service.ts
- [ ] exploration-embed-button.service.ts
- [ ] exploration-engine.service.ts
- [ ] exploration-features-backend-api.service.ts
- [ ] exploration-features.service.ts @parulpriyedarshani
- [ ] exploration-html-formatter.service.ts
- [ ] exploration-init-state-name.service.ts
- [ ] exploration-language-code.service.ts
- [ ] exploration-objective.service.ts
- [ ] exploration-param-changes.service.ts
- [ ] exploration-param-specs.service.ts
- [ ] exploration-player-state.service.ts
- [ ] exploration-property.service.ts
- [ ] exploration-recommendations.service.ts
- [ ] exploration-rights.service.ts
- [ ] exploration-save.service.ts
- [ ] exploration-states.service.ts
- [ ] exploration-summary-backend-api.service.ts
- [ ] exploration-tags.service.ts @shrutisatish00
- [ ] exploration-title.service.ts
- [ ] exploration-warnings.service.ts
- [ ] expression-evaluator.service.ts
- [ ] expression-interpolation.service.ts
- [ ] expression-parser.service.ts
- [ ] expression-syntax-tree.service.ts
- [ ] expression-type-parser.service.ts
- [ ] extension-tag-assembler.service.ts
- [ ] extract-image-filenames-from-state.service.ts
- [ ] fatigue-detection.service.ts
- [ ] focus-manager.service.ts
- [ ] generate-content-id.service.ts
- [ ] graph-data.service.ts
- [ ] graph-layout.service.ts
- [ ] guest-collection-progress.service.ts
- [ ] hint-and-solution-modal.service.ts
- [ ] hints-and-solution-manager.service.ts
- [ ] html-escaper.service.ts @tianqi-wu
- [ ] id-generation.service.ts
- [ ] image-preloader.service.ts
- [ ] image-upload-helper.service.ts
- [ ] improvement-modal.service.ts
- [ ] improvement-task.service.ts
- [ ] improvements-display.service.ts
- [ ] improvements.service.ts
- [ ] interaction-details-cache.service.ts
- [ ] language-util.service.ts
- [ ] learner-action-render.service.ts
- [ ] learner-answer-details-backend-api.service.ts
- [ ] learner-answer-details-data.service.ts
- [ ] learner-answer-info.service.ts
- [ ] learner-dashboard-backend-api.service.ts
- [ ] learner-dashboard-ids-backend-api.service.ts
- [ ] learner-params.service.ts
- [ ] learner-playlist.service.ts
- [ ] learner-view-rating.service.ts
- [ ] local-storage.service.ts
- [ ] logger.service.ts @remigourdon
- [ ] messenger.service.ts @remigourdon
- [ ] meta-tag-customization.service.ts
- [ ] navigation.service.ts
- [ ] nested-directives-recursion-timeout-prevention.service.ts
- [ ] number-attempts.service.ts @gp201
- [ ] page-title.service.ts
- [ ] parameter-metadata.service.ts
- [ ] player-correctness-feedback-enabled.service.ts
- [ ] player-position.service.ts @tianqi-wu
- [ ] player-transcript.service.ts
- [ ] playthrough-issues-backend-api.service.ts
- [ ] playthrough-issues.service.ts
- [ ] playthrough.service.ts
- [ ] prediction-algorithm-registry.service.ts
- [ ] pretest-question-backend-api.service.ts
- [ ] promo-bar.service.ts
- [ ] question-backend-api.service.ts
- [ ] question-creation.service.ts
- [ ] question-player-engine.service.ts
- [ ] question-player-state.service.ts
- [ ] question-suggestion.service.ts
- [ ] question-undo-redo.service.ts
- [ ] question-update.service.ts
- [ ] questions-list.service.ts
- [ ] rating-computation.service.ts
- [ ] read-only-collection-backend-api.service.ts
- [ ] read-only-exploration-backend-api.service.ts
- [ ] refresher-exploration-confirmation-modal.service.ts
- [ ] request-interceptor.service.ts
- [ ] responses.service.ts
- [ ] review-test-backend-api.service.ts
- [ ] review-test-engine.service.ts
- [ ] router.service.ts
- [ ] rte-helper.service.ts
- [ ] schema-default-value.service.ts
- [ ] schema-undefined-last-element.service.ts
- [ ] search-explorations-backend-api.service.ts
- [ ] search.service.ts
- [ ] sidebar-status.service.ts
- [ ] site-analytics.service.ts
- [ ] skill-creation.service.ts
- [ ] skill-editor-routing.service.ts
- [ ] skill-editor-state.service.ts
- [ ] skill-mastery-backend-api.service.ts
- [ ] skill-rights-backend-api.service.ts
- [ ] skill-update.service.ts
- [ ] solution-validity.service.ts
- [ ] solution-verification.service.ts
- [ ] speech-synthesis-chunker.service.ts
- [ ] state-classifier-mapping.service.ts
- [ ] state-content.service.ts
- [ ] state-customization-args.service.ts
- [ ] state-editor.service.ts
- [ ] state-hints.service.ts
- [ ] state-improvement-suggestion.service.ts @bobbychen1999
- [ ] state-interaction-id.service.ts
- [ ] state-name.service.ts
- [ ] state-param-changes.service.ts
- [ ] state-property.service.ts
- [ ] state-recorded-voiceovers.service.ts
- [ ] state-rules-stats.service.ts
- [ ] state-solicit-answer-details.service.ts
- [ ] state-solution.service.ts
- [ ] state-top-answers-stats-backend-api.service.ts
- [ ] state-top-answers-stats.service.ts
- [ ] state-tutorial-first-time.service.ts @akeeoaobh
- [ ] state-written-translations.service.ts
- [ ] stats-reporting.service.ts
- [ ] story-creation.service.ts
- [ ] story-editor-state.service.ts @pengcheng95
- [ ] story-update.service.ts
- [ ] story-viewer-backend-api.service.ts
- [ ] subtopic-viewer-backend-api.service.ts
- [ ] suggestion-modal-for-creator-view.service.ts
- [ ] suggestion-modal-for-exploration-editor.service.ts
- [ ] suggestion-modal-for-exploration-player.service.ts
- [ ] suggestion-modal-for-learner-dashboard.service.ts
- [ ] suggestion-modal.service.ts
- [ ] thread-data.service.ts
- [ ] thread-status-display.service.ts
- [ ] topic-creation.service.ts
- [ ] topic-editor-routing.service.ts
- [ ] topic-editor-state.service.ts
- [ ] topic-rights-backend-api.service.ts
- [ ] topic-update.service.ts
- [ ] topic-viewer-backend-api.service.ts
- [ ] topics-and-skills-dashboard-backend-api.service.ts
- [ ] training-data-editor-panel.service.ts
- [ ] training-data.service.ts @felicityzhao99
- [ ] training-modal.service.ts @varuncj02
- [ ] translate-text.service.ts
- [ ] translation-file-hash-loader.service.ts
- [ ] translation-language.service.ts
- [ ] translation-status.service.ts
- [ ] translation-tab-active-content-id.service.ts
- [ ] translation-tab-active-mode.service.ts
- [ ] undo-redo.service.ts
- [ ] url-interpolation.service.ts @qinghaoyang
- [ ] url.service.ts @tianqi-wu
- [ ] user-email-preferences.service.ts @felicityzhao99
- [ ] user-exploration-permissions.service.ts
- [ ] user.service.ts
- [ ] utils.service.ts @rriyaldhi
- [ ] validators.service.ts
- [ ] version-tree.service.ts
- [ ] voiceover-recording.service.ts
- [ ] window-dimensions.service.ts @asafprivman
- [ ] window-ref.service.ts @larakhdavies
Note: For a guide on how to access Oppia's webpages, see [this](https://github.com/oppia/oppia/wiki/How-to-access-Oppia-webpages).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/create_expression_parser.py`
Content:
```
1 # Copyright 2019 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """This script produces the expression parser."""
16
17 from __future__ import absolute_import # pylint: disable=import-only-modules
18 from __future__ import unicode_literals # pylint: disable=import-only-modules
19
20 import argparse
21 import fileinput
22 import os
23 import re
24 import subprocess
25
26 import python_utils
27
28 from . import common
29 from . import setup
30
31 _PARSER = argparse.ArgumentParser(description="""
32 Run this script from the oppia root folder:
33 python -m scripts.create_expression_parser
34 The root folder MUST be named 'oppia'.
35 """)
36
37
38 def main(args=None):
39 """Produces the expression parser."""
40 unused_parsed_args = _PARSER.parse_args(args=args)
41 setup.main(args=[])
42
43 expression_parser_definition = os.path.join(
44 'core', 'templates', 'expressions', 'parser.pegjs')
45 expression_parser_js = os.path.join(
46 'core', 'templates', 'expressions', 'parser.js')
47
48 common.install_npm_library('pegjs', '0.8.0', common.OPPIA_TOOLS_DIR)
49
50 subprocess.check_call([
51 os.path.join(common.NODE_MODULES_PATH, 'pegjs', 'bin', 'pegjs'),
52 expression_parser_definition, expression_parser_js])
53
54 python_utils.PRINT('Done!')
55
56
57 if __name__ == '__main__':
58 main()
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/create_expression_parser.py b/scripts/create_expression_parser.py
--- a/scripts/create_expression_parser.py
+++ b/scripts/create_expression_parser.py
@@ -18,9 +18,7 @@
from __future__ import unicode_literals # pylint: disable=import-only-modules
import argparse
-import fileinput
import os
-import re
import subprocess
import python_utils
|
{"golden_diff": "diff --git a/scripts/create_expression_parser.py b/scripts/create_expression_parser.py\n--- a/scripts/create_expression_parser.py\n+++ b/scripts/create_expression_parser.py\n@@ -18,9 +18,7 @@\n from __future__ import unicode_literals # pylint: disable=import-only-modules\n \n import argparse\n-import fileinput\n import os\n-import re\n import subprocess\n \n import python_utils\n", "issue": "All the Frontend services should be documented with jsdoc.\n**This starter issue is currently on hold because we do not have the capacity to support new contributors working on it.**\r\n\r\n--------------\r\n\r\nWe aim to document all the files listed below. \r\n\r\nEach of the below-listed files should have a file overview signifying the purpose of the file, \r\nand each function should have its meaning, arguments and return statement documented with the help of jsdoc decorators like `@fileoverview`, `@param`, `@return`.\r\n\r\nYou can go through these services to get some reference:\r\n- graph-input-rules.service.ts\r\n- exploration-html-formatter.service.ts\r\n- graph-utils.service.ts\r\n- alerts.service.ts\r\n- playthrough-issues.service.ts\r\n\r\n**Deducing variable's significance and the meaning from the code:**\r\nTry and execute the code by running a dev server locally, and log the variable type (you can use typeof for this) and try to find out the purpose of the variable(what's the variable storing, what is it being used for, what would break if we remove the variable?). To figure out how to execute the code, grep to see what methods call the function, and add console logs to ensure that the code is being executed when you perform the corresponding action in the UI. (As a sanity check, you might also want to ensure that the suspected variable type is consistent with any TypeScript types that are already provided.)\r\n\r\n**Overview of the function:**\r\nFinding or deducing the overview or the purpose of the function can be sometimes a bit tricky, some general advice can be to think--\r\n\r\n- why is this function even required, what does it helps us achieve. 
Try to think from the perspective of the person who created the function and try to mimic the thought process of the original author.\r\n- Look at the callers of the function, see all the places where this function is being called at and try to get a better understanding of the function.\r\n- If you are unable to understand the purpose of the function, feel free to reach out to your mentor(always happy to help).\r\n\r\nPlease go through this [doc](https://docs.google.com/document/d/1jr8X3oqW7WqKxOgsK8b4TxIraODAV23vDJgYso1R7Pk/edit?usp=sharing) for a deeper context.\r\n\r\n**Please don't include types in the JSDoc, use the TypeScript annotations for that.**\r\n\r\nPR's for reference: [#8773](https://github.com/oppia/oppia/pull/8773)\r\n\r\n**To be assigned to a file or for any queries, comment on the thread and tag @nithusha21.** \r\n\r\nThe listed services file below needs to be documented:\r\n\r\n- [ ] admin-config-tab-backend-api.service.ts\r\n- [ ] admin-data.service.ts\r\n- [ ] admin-router.service.ts @anumehaagrawal\r\n- [ ] admin-task-manager.service.ts @larakhdavies\r\n- [ ] alerts.service.ts\r\n- [ ] angular-name.service.ts @parulpriyedarshani\r\n- [ ] answer-classification.service.ts\r\n- [ ] answer-groups-cache.service.ts\r\n- [ ] assets-backend-api.service.ts\r\n- [ ] audio-pFlayer.service.ts\r\n- [ ] audio-preloader.service.ts\r\n- [ ] audio-translation-language.service.ts @kaylahardie \r\n- [ ] audio-translation-manager.service.ts\r\n- [ ] autogenerated-audio-player.service.ts @BlakeHan01\r\n- [ ] autoplayed-videos.service.ts @darkpsychic\r\n- [ ] autosave-info-modals.service.ts\r\n- [ ] background-mask.service.ts\r\n- [ ] base-undo-redo.service.ts\r\n- [ ] browser-checker.service.ts\r\n- [ ] change-list.service.ts\r\n- [ ] changes-in-human-readable-form.service.ts\r\n- [ ] classroom-backend-api.service.ts @ReshuKumari \r\n- [ ] code-normalizer.service.ts\r\n- [ ] collection-creation-backend-api.service.ts\r\n- [ ] collection-creation.service.ts\r\n- [ ] collection-editor-state.service.ts\r\n- [ ] collection-linearizer.service.ts\r\n- [ ] collection-rights-backend-api.service.ts\r\n- [ ] collection-update.service.ts\r\n- [ ] collection-validation.service.ts\r\n- [ ] compare-versions.service.ts\r\n- [ ] compute-graph.service.ts\r\n- [ ] concept-card-backend-api.service.ts\r\n- [ ] construct-translation-ids.service.ts @BlakeHan01\r\n- [ ] context.service.ts\r\n- [ ] contribution-and-review.service.ts @lelouchB\r\n- [ ] contribution-opportunities-backend-api.service.ts\r\n- [ ] contribution-opportunities.service.ts\r\n- [ ] creator-dashboard-backend-api.service.ts\r\n- [ ] csrf-token.service.ts\r\n- [ ] current-interaction.service.ts\r\n- [ ] date-time-format.service.ts @linnhallonqvist\r\n- [ ] debouncer.service.ts\r\n- [ ] debug-info-tracker.service.ts\r\n- [ ] device-info.service.ts\r\n- [ ] document-attribute-customization.service.ts\r\n- [ ] editability.service.ts\r\n- [ ] editable-collection-backend-api.service.ts\r\n- [ ] editable-exploration-backend-api.service.ts\r\n- [ ] editable-question-backend-api.service.ts\r\n- [ ] editable-skill-backend-api.service.ts\r\n- [ ] editable-story-backend-api.service.ts\r\n- [ ] editable-topic-backend-api.service.ts\r\n- [ ] editor-first-time-events.service.ts\r\n- [ ] email-dashboard-data.service.ts\r\n- [ ] exploration-automatic-text-to-speech.service.ts\r\n- [ ] exploration-category.service.ts\r\n- [ ] exploration-correctness-feedback.service.ts\r\n- [ ] exploration-creation.service.ts\r\n- [ ] exploration-data.service.ts\r\n- [ ] 
exploration-diff.service.ts\r\n- [ ] exploration-embed-button.service.ts\r\n- [ ] exploration-engine.service.ts\r\n- [ ] exploration-features-backend-api.service.ts\r\n- [ ] exploration-features.service.ts @parulpriyedarshani\r\n- [ ] exploration-html-formatter.service.ts\r\n- [ ] exploration-init-state-name.service.ts\r\n- [ ] exploration-language-code.service.ts\r\n- [ ] exploration-objective.service.ts\r\n- [ ] exploration-param-changes.service.ts\r\n- [ ] exploration-param-specs.service.ts\r\n- [ ] exploration-player-state.service.ts\r\n- [ ] exploration-property.service.ts\r\n- [ ] exploration-recommendations.service.ts\r\n- [ ] exploration-rights.service.ts\r\n- [ ] exploration-save.service.ts\r\n- [ ] exploration-states.service.ts\r\n- [ ] exploration-summary-backend-api.service.ts\r\n- [ ] exploration-tags.service.ts @shrutisatish00 \r\n- [ ] exploration-title.service.ts\r\n- [ ] exploration-warnings.service.ts\r\n- [ ] expression-evaluator.service.ts\r\n- [ ] expression-interpolation.service.ts\r\n- [ ] expression-parser.service.ts\r\n- [ ] expression-syntax-tree.service.ts\r\n- [ ] expression-type-parser.service.ts\r\n- [ ] extension-tag-assembler.service.ts\r\n- [ ] extract-image-filenames-from-state.service.ts\r\n- [ ] fatigue-detection.service.ts\r\n- [ ] focus-manager.service.ts\r\n- [ ] generate-content-id.service.ts\r\n- [ ] graph-data.service.ts\r\n- [ ] graph-layout.service.ts\r\n- [ ] guest-collection-progress.service.ts\r\n- [ ] hint-and-solution-modal.service.ts\r\n- [ ] hints-and-solution-manager.service.ts\r\n- [ ] html-escaper.service.ts @tianqi-wu \r\n- [ ] id-generation.service.ts\r\n- [ ] image-preloader.service.ts\r\n- [ ] image-upload-helper.service.ts\r\n- [ ] improvement-modal.service.ts\r\n- [ ] improvement-task.service.ts\r\n- [ ] improvements-display.service.ts\r\n- [ ] improvements.service.ts\r\n- [ ] interaction-details-cache.service.ts\r\n- [ ] language-util.service.ts\r\n- [ ] learner-action-render.service.ts\r\n- [ ] learner-answer-details-backend-api.service.ts\r\n- [ ] learner-answer-details-data.service.ts\r\n- [ ] learner-answer-info.service.ts\r\n- [ ] learner-dashboard-backend-api.service.ts\r\n- [ ] learner-dashboard-ids-backend-api.service.ts\r\n- [ ] learner-params.service.ts\r\n- [ ] learner-playlist.service.ts\r\n- [ ] learner-view-rating.service.ts\r\n- [ ] local-storage.service.ts\r\n- [ ] logger.service.ts @remigourdon \r\n- [ ] messenger.service.ts @remigourdon \r\n- [ ] meta-tag-customization.service.ts\r\n- [ ] navigation.service.ts\r\n- [ ] nested-directives-recursion-timeout-prevention.service.ts\r\n- [ ] number-attempts.service.ts @gp201\r\n- [ ] page-title.service.ts\r\n- [ ] parameter-metadata.service.ts\r\n- [ ] player-correctness-feedback-enabled.service.ts\r\n- [ ] player-position.service.ts @tianqi-wu \r\n- [ ] player-transcript.service.ts\r\n- [ ] playthrough-issues-backend-api.service.ts\r\n- [ ] playthrough-issues.service.ts\r\n- [ ] playthrough.service.ts\r\n- [ ] prediction-algorithm-registry.service.ts\r\n- [ ] pretest-question-backend-api.service.ts\r\n- [ ] promo-bar.service.ts\r\n- [ ] question-backend-api.service.ts\r\n- [ ] question-creation.service.ts\r\n- [ ] question-player-engine.service.ts\r\n- [ ] question-player-state.service.ts\r\n- [ ] question-suggestion.service.ts\r\n- [ ] question-undo-redo.service.ts\r\n- [ ] question-update.service.ts\r\n- [ ] questions-list.service.ts\r\n- [ ] rating-computation.service.ts\r\n- [ ] read-only-collection-backend-api.service.ts\r\n- [ ] 
read-only-exploration-backend-api.service.ts\r\n- [ ] refresher-exploration-confirmation-modal.service.ts\r\n- [ ] request-interceptor.service.ts\r\n- [ ] responses.service.ts\r\n- [ ] review-test-backend-api.service.ts\r\n- [ ] review-test-engine.service.ts\r\n- [ ] router.service.ts\r\n- [ ] rte-helper.service.ts\r\n- [ ] schema-default-value.service.ts\r\n- [ ] schema-undefined-last-element.service.ts\r\n- [ ] search-explorations-backend-api.service.ts\r\n- [ ] search.service.ts\r\n- [ ] sidebar-status.service.ts\r\n- [ ] site-analytics.service.ts\r\n- [ ] skill-creation.service.ts\r\n- [ ] skill-editor-routing.service.ts\r\n- [ ] skill-editor-state.service.ts\r\n- [ ] skill-mastery-backend-api.service.ts\r\n- [ ] skill-rights-backend-api.service.ts\r\n- [ ] skill-update.service.ts\r\n- [ ] solution-validity.service.ts\r\n- [ ] solution-verification.service.ts\r\n- [ ] speech-synthesis-chunker.service.ts\r\n- [ ] state-classifier-mapping.service.ts\r\n- [ ] state-content.service.ts\r\n- [ ] state-customization-args.service.ts\r\n- [ ] state-editor.service.ts\r\n- [ ] state-hints.service.ts\r\n- [ ] state-improvement-suggestion.service.ts @bobbychen1999 \r\n- [ ] state-interaction-id.service.ts\r\n- [ ] state-name.service.ts\r\n- [ ] state-param-changes.service.ts\r\n- [ ] state-property.service.ts\r\n- [ ] state-recorded-voiceovers.service.ts\r\n- [ ] state-rules-stats.service.ts\r\n- [ ] state-solicit-answer-details.service.ts\r\n- [ ] state-solution.service.ts\r\n- [ ] state-top-answers-stats-backend-api.service.ts\r\n- [ ] state-top-answers-stats.service.ts\r\n- [ ] state-tutorial-first-time.service.ts @akeeoaobh \r\n- [ ] state-written-translations.service.ts\r\n- [ ] stats-reporting.service.ts\r\n- [ ] story-creation.service.ts\r\n- [ ] story-editor-state.service.ts @pengcheng95\r\n- [ ] story-update.service.ts\r\n- [ ] story-viewer-backend-api.service.ts\r\n- [ ] subtopic-viewer-backend-api.service.ts\r\n- [ ] suggestion-modal-for-creator-view.service.ts\r\n- [ ] suggestion-modal-for-exploration-editor.service.ts\r\n- [ ] suggestion-modal-for-exploration-player.service.ts\r\n- [ ] suggestion-modal-for-learner-dashboard.service.ts\r\n- [ ] suggestion-modal.service.ts\r\n- [ ] thread-data.service.ts\r\n- [ ] thread-status-display.service.ts\r\n- [ ] topic-creation.service.ts\r\n- [ ] topic-editor-routing.service.ts\r\n- [ ] topic-editor-state.service.ts\r\n- [ ] topic-rights-backend-api.service.ts\r\n- [ ] topic-update.service.ts\r\n- [ ] topic-viewer-backend-api.service.ts\r\n- [ ] topics-and-skills-dashboard-backend-api.service.ts\r\n- [ ] training-data-editor-panel.service.ts\r\n- [ ] training-data.service.ts @felicityzhao99 \r\n- [ ] training-modal.service.ts @varuncj02\r\n- [ ] translate-text.service.ts\r\n- [ ] translation-file-hash-loader.service.ts\r\n- [ ] translation-language.service.ts\r\n- [ ] translation-status.service.ts\r\n- [ ] translation-tab-active-content-id.service.ts\r\n- [ ] translation-tab-active-mode.service.ts\r\n- [ ] undo-redo.service.ts\r\n- [ ] url-interpolation.service.ts @qinghaoyang\r\n- [ ] url.service.ts @tianqi-wu \r\n- [ ] user-email-preferences.service.ts @felicityzhao99 \r\n- [ ] user-exploration-permissions.service.ts\r\n- [ ] user.service.ts\r\n- [ ] utils.service.ts @rriyaldhi \r\n- [ ] validators.service.ts\r\n- [ ] version-tree.service.ts\r\n- [ ] voiceover-recording.service.ts\r\n- [ ] window-dimensions.service.ts @asafprivman \r\n- [ ] window-ref.service.ts @larakhdavies\r\n\r\nNote: For a guide on how to access Oppia's webpages, see 
[this](https://github.com/oppia/oppia/wiki/How-to-access-Oppia-webpages).\n", "before_files": [{"content": "# Copyright 2019 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script produces the expression parser.\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport argparse\nimport fileinput\nimport os\nimport re\nimport subprocess\n\nimport python_utils\n\nfrom . import common\nfrom . import setup\n\n_PARSER = argparse.ArgumentParser(description=\"\"\"\nRun this script from the oppia root folder:\n python -m scripts.create_expression_parser\nThe root folder MUST be named 'oppia'.\n\"\"\")\n\n\ndef main(args=None):\n \"\"\"Produces the expression parser.\"\"\"\n unused_parsed_args = _PARSER.parse_args(args=args)\n setup.main(args=[])\n\n expression_parser_definition = os.path.join(\n 'core', 'templates', 'expressions', 'parser.pegjs')\n expression_parser_js = os.path.join(\n 'core', 'templates', 'expressions', 'parser.js')\n\n common.install_npm_library('pegjs', '0.8.0', common.OPPIA_TOOLS_DIR)\n\n subprocess.check_call([\n os.path.join(common.NODE_MODULES_PATH, 'pegjs', 'bin', 'pegjs'),\n expression_parser_definition, expression_parser_js])\n\n python_utils.PRINT('Done!')\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/create_expression_parser.py"}], "after_files": [{"content": "# Copyright 2019 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script produces the expression parser.\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport argparse\nimport os\nimport subprocess\n\nimport python_utils\n\nfrom . import common\nfrom . 
import setup\n\n_PARSER = argparse.ArgumentParser(description=\"\"\"\nRun this script from the oppia root folder:\n python -m scripts.create_expression_parser\nThe root folder MUST be named 'oppia'.\n\"\"\")\n\n\ndef main(args=None):\n \"\"\"Produces the expression parser.\"\"\"\n unused_parsed_args = _PARSER.parse_args(args=args)\n setup.main(args=[])\n\n expression_parser_definition = os.path.join(\n 'core', 'templates', 'expressions', 'parser.pegjs')\n expression_parser_js = os.path.join(\n 'core', 'templates', 'expressions', 'parser.js')\n\n common.install_npm_library('pegjs', '0.8.0', common.OPPIA_TOOLS_DIR)\n\n subprocess.check_call([\n os.path.join(common.NODE_MODULES_PATH, 'pegjs', 'bin', 'pegjs'),\n expression_parser_definition, expression_parser_js])\n\n python_utils.PRINT('Done!')\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/create_expression_parser.py"}]}
| 3,689 | 81 |
gh_patches_debug_16797
|
rasdani/github-patches
|
git_diff
|
semgrep__semgrep-rules-1457
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive for return-in-init when return in internal function
**Describe the bug**
[`return-in-init`](https://github.com/returntocorp/semgrep-rules/blob/master/python/lang/correctness/return-in-init.yaml) warns about a return statement in `__init__`. However, this may be valid if another function is defined within `__init__` and return is used there.
**To Reproduce**
```
class Odd:
def __init__(self, numbers):
def is_odd(n):
return n % 2 == 1
self.numbers = filter(is_odd, numbers)
```
```
$ semgrep --config=p/ci
test1.py
severity:error rule:python.lang.correctness.return-in-init.return-in-init: `return` should never appear inside a class __init__ function. This will cause a runtime error.
4: return n % 2 == 1
```
**Expected behavior**
I expect no error from `return-in-init` in this case.
**Priority**
How important is this to you?
- P2: annoying but not blocking me
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/lang/correctness/return-in-init.py`
Content:
```
1 class A:
2 def __init__(a, b, c):
3 # ruleid:return-in-init
4 return A(a, b, c)
5
6
7 class B:
8 def __init__(a, b, c):
9 # ok:return-in-init
10 return
11
12
13 class C:
14 def __init__(a, b, c):
15 # ruleid:yield-in-init
16 yield
17
18
19 class D:
20 def __init__():
21 # ruleid:yield-in-init
22 yield 5
23
24
25 def __init__(a, b, c):
26 # ok:yield-in-init
27 return A(a, b, c)
28
29
30 def __init__(a, b, c):
31 # ok:yield-in-init
32 yield
33
34
35 def __init__():
36 # ok:yield-in-init
37 yield 5
38
39
40 class E:
41 def func1():
42 if not hello:
43 # ok:yield-in-init
44 yield 5
45 # ok:yield-in-init
46 yield other
47
48
49 class F:
50 def __init__():
51 pass
52
53 def func1():
54 # ok:return-in-init
55 return 5
56
57 def func2():
58 # ok:return-in-init
59 return
60
61
62 class G:
63 def __init__():
64 pass
65
66 def func1():
67 # ok:yield-in-init
68 yield 5
69
70 def func2():
71 # ok:yield-in-init
72 yield
73
74 class H:
75 def __init__(self, x):
76 # ok:return-in-init
77 return None
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/lang/correctness/return-in-init.py b/python/lang/correctness/return-in-init.py
--- a/python/lang/correctness/return-in-init.py
+++ b/python/lang/correctness/return-in-init.py
@@ -75,3 +75,41 @@
def __init__(self, x):
# ok:return-in-init
return None
+
+class Odd:
+ def __init__(self, numbers):
+ def is_odd(n):
+ # ok:return-in-init
+ return n % 2 == 1
+ self.numbers = filter(is_odd, numbers)
+
+ # todoruleid:return-in-init
+ return self.numbers
+
+class Even:
+ def __init__(self):
+ class EvenNumber:
+ def __init__(self, n):
+ self.n = n
+ # todoruleid:return-in-init
+ return n
+
+ def is_even(self):
+ # ok:return-in-init
+ return self.n % 2 == 0
+
+ self.number = EvenNumber()
+
+ def not_init(self):
+ class EvenNumber:
+ def __init__(self, n):
+ self.n = n
+ # ruleid:return-in-init
+ return n
+
+ def is_even(self):
+ # ok:return-in-init
+ return self.n % 2 == 0
+
+ # ok:return-in-init
+ return EvenNumber()
|
{"golden_diff": "diff --git a/python/lang/correctness/return-in-init.py b/python/lang/correctness/return-in-init.py\n--- a/python/lang/correctness/return-in-init.py\n+++ b/python/lang/correctness/return-in-init.py\n@@ -75,3 +75,41 @@\n def __init__(self, x):\n # ok:return-in-init\n return None\n+\n+class Odd:\n+ def __init__(self, numbers):\n+ def is_odd(n):\n+ # ok:return-in-init\n+ return n % 2 == 1\n+ self.numbers = filter(is_odd, numbers)\n+\n+ # todoruleid:return-in-init\n+ return self.numbers\n+\n+class Even:\n+ def __init__(self):\n+ class EvenNumber:\n+ def __init__(self, n):\n+ self.n = n\n+ # todoruleid:return-in-init\n+ return n\n+\n+ def is_even(self):\n+ # ok:return-in-init\n+ return self.n % 2 == 0\n+\n+ self.number = EvenNumber()\n+\n+ def not_init(self):\n+ class EvenNumber:\n+ def __init__(self, n):\n+ self.n = n\n+ # ruleid:return-in-init\n+ return n\n+\n+ def is_even(self):\n+ # ok:return-in-init\n+ return self.n % 2 == 0\n+\n+ # ok:return-in-init\n+ return EvenNumber()\n", "issue": "False positive for return-in-init when return in internal function\n**Describe the bug**\r\n\r\n[`return-in-init`](https://github.com/returntocorp/semgrep-rules/blob/master/python/lang/correctness/return-in-init.yaml) warns about a return statement in `__init__`. However, this may be valid if another function is defined within `__init__` and return is used there.\r\n\r\n**To Reproduce**\r\n\r\n```\r\nclass Odd:\r\n def __init__(self, numbers):\r\n def is_odd(n):\r\n return n % 2 == 1\r\n self.numbers = filter(is_odd, numbers)\r\n```\r\n\r\n```\r\n$ semgrep --config=p/ci\r\ntest1.py\r\nseverity:error rule:python.lang.correctness.return-in-init.return-in-init: `return` should never appear inside a class __init__ function. This will cause a runtime error.\r\n4: return n % 2 == 1\r\n```\r\n\r\n**Expected behavior**\r\n\r\nI expect no error from `return-in-init` in this case.\r\n\r\n**Priority**\r\nHow important is this to you?\r\n- P2: annoying but not blocking me\r\n\n", "before_files": [{"content": "class A:\n def __init__(a, b, c):\n # ruleid:return-in-init\n return A(a, b, c)\n\n\nclass B:\n def __init__(a, b, c):\n # ok:return-in-init\n return\n\n\nclass C:\n def __init__(a, b, c):\n # ruleid:yield-in-init\n yield\n\n\nclass D:\n def __init__():\n # ruleid:yield-in-init\n yield 5\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n return A(a, b, c)\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n yield\n\n\ndef __init__():\n # ok:yield-in-init\n yield 5\n\n\nclass E:\n def func1():\n if not hello:\n # ok:yield-in-init\n yield 5\n # ok:yield-in-init\n yield other\n\n\nclass F:\n def __init__():\n pass\n\n def func1():\n # ok:return-in-init\n return 5\n\n def func2():\n # ok:return-in-init\n return\n\n\nclass G:\n def __init__():\n pass\n\n def func1():\n # ok:yield-in-init\n yield 5\n\n def func2():\n # ok:yield-in-init\n yield\n\nclass H:\n def __init__(self, x):\n # ok:return-in-init\n return None\n", "path": "python/lang/correctness/return-in-init.py"}], "after_files": [{"content": "class A:\n def __init__(a, b, c):\n # ruleid:return-in-init\n return A(a, b, c)\n\n\nclass B:\n def __init__(a, b, c):\n # ok:return-in-init\n return\n\n\nclass C:\n def __init__(a, b, c):\n # ruleid:yield-in-init\n yield\n\n\nclass D:\n def __init__():\n # ruleid:yield-in-init\n yield 5\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n return A(a, b, c)\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n yield\n\n\ndef __init__():\n # ok:yield-in-init\n yield 5\n\n\nclass E:\n def func1():\n if not hello:\n 
# ok:yield-in-init\n yield 5\n # ok:yield-in-init\n yield other\n\n\nclass F:\n def __init__():\n pass\n\n def func1():\n # ok:return-in-init\n return 5\n\n def func2():\n # ok:return-in-init\n return\n\n\nclass G:\n def __init__():\n pass\n\n def func1():\n # ok:yield-in-init\n yield 5\n\n def func2():\n # ok:yield-in-init\n yield\n\nclass H:\n def __init__(self, x):\n # ok:return-in-init\n return None\n\nclass Odd:\n def __init__(self, numbers):\n def is_odd(n):\n # ok:return-in-init\n return n % 2 == 1\n self.numbers = filter(is_odd, numbers)\n\n # todoruleid:return-in-init\n return self.numbers\n\nclass Even:\n def __init__(self):\n class EvenNumber:\n def __init__(self, n):\n self.n = n\n # todoruleid:return-in-init\n return n\n\n def is_even(self):\n # ok:return-in-init\n return self.n % 2 == 0\n\n self.number = EvenNumber()\n\n def not_init(self):\n class EvenNumber:\n def __init__(self, n):\n self.n = n\n # ruleid:return-in-init\n return n\n\n def is_even(self):\n # ok:return-in-init\n return self.n % 2 == 0\n\n # ok:return-in-init\n return EvenNumber()\n", "path": "python/lang/correctness/return-in-init.py"}]}
| 992 | 334 |
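The record above exercises semgrep's `return-in-init` rule, which should flag a value-returning `return` written directly inside `__init__` but allow one inside a nested helper function. As a standalone illustration of why that distinction matters at runtime (this sketch is not part of the semgrep-rules test corpus), consider:

```python
# Returning a value from a nested helper defined in __init__ is legal Python;
# returning a non-None value directly from __init__ raises TypeError.


class Odd:
    def __init__(self, numbers):
        def is_odd(n):
            return n % 2 == 1  # belongs to is_odd, not __init__

        self.numbers = list(filter(is_odd, numbers))


class Broken:
    def __init__(self, n):
        return n  # fails when instantiated


if __name__ == "__main__":
    print(Odd([1, 2, 3]).numbers)  # [1, 3]
    try:
        Broken(5)
    except TypeError as exc:
        print(exc)  # __init__() should return None, not 'int'
```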
gh_patches_debug_40443
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-1280
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
We need a new API that displays concordances information for all chants in the database
In an email from Jan:
> The intensive process of getting all the data from CD via individual json-cid requests (59.000+) is running already on the 3rd day (and not finished yet) but this will not keep the Cantus data fresh in the Cantus Index API in the long term.
>
> The solution would be to regularly create a large JSON file export of all the CD chants (with the same fields as in json-cid exports) and make it available as a file to download. An example of such json export is here: https://austriamanus.org/files/concordances-export.json
> This kind of data transfer works also with the MMMO database which has approximately half the amount of data compared to a CD. I believe it would also be the best solution for CD.
This will not be difficult. We can use the code in our `json-con` API, but return all chants rather than filtering them by Cantus ID.
What's a good path for this API to live at? `/json-concordances-export`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/management/commands/update_cached_concordances.py`
Content:
```
1 import ujson
2 import os
3 from sys import stdout
4 from datetime import datetime
5 from collections import defaultdict
6 from django.db.models.query import QuerySet
7 from django.core.management.base import BaseCommand
8 from main_app.models import Chant
9
10
11 class Command(BaseCommand):
12 def handle(self, *args, **kwargs) -> None:
13 CACHE_DIR: str = "api_cache"
14 FILEPATH: str = f"{CACHE_DIR}/concordances.json"
15 start_time: str = datetime.now().isoformat()
16 stdout.write(f"Running update_cached_concordances at {start_time}.\n")
17 concordances: dict = get_concordances()
18 write_time: str = datetime.now().isoformat()
19 metadata: dict = {
20 "last_updated": write_time,
21 }
22 data_and_metadata: dict = {
23 "data": concordances,
24 "metadata": metadata,
25 }
26 stdout.write(f"Attempting to make directory at {CACHE_DIR} to hold cache: ")
27 try:
28 os.mkdir(CACHE_DIR)
29 stdout.write(f"successfully created directory at {CACHE_DIR}.\n")
30 except FileExistsError:
31 stdout.write(f"directory at {CACHE_DIR} already exists.\n")
32 stdout.write(f"Writing concordances to {FILEPATH} at {write_time}.\n")
33 with open(FILEPATH, "w") as json_file:
34 ujson.dump(data_and_metadata, json_file)
35 end_time = datetime.now().isoformat()
36 stdout.write(
37 f"Concordances successfully written to {FILEPATH} at {end_time}.\n\n"
38 )
39
40
41 def get_concordances() -> dict:
42 DOMAIN: str = "https://cantusdatabase.org"
43
44 stdout.write("Querying database for published chants\n")
45 published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)
46 values: QuerySet[dict] = published_chants.select_related(
47 "source",
48 "feast",
49 "genre",
50 "office",
51 ).values(
52 "id",
53 "source_id",
54 "source__siglum",
55 "folio",
56 "c_sequence",
57 "incipit",
58 "feast__name",
59 "genre__name",
60 "office__name",
61 "position",
62 "cantus_id",
63 "image_link",
64 "mode",
65 "manuscript_full_text_std_spelling",
66 "volpiano",
67 )
68
69 stdout.write("Processing chants\n")
70 concordances: defaultdict = defaultdict(list)
71 for chant in values:
72 source_id: int = chant["source_id"]
73 source_absolute_url: str = f"{DOMAIN}/source/{source_id}/"
74 chant_id: int = chant["id"]
75 chant_absolute_url: str = f"{DOMAIN}/chant/{chant_id}/"
76
77 concordances[chant["cantus_id"]].append(
78 {
79 "siglum": chant["source__siglum"],
80 "srclink": source_absolute_url,
81 "chantlink": chant_absolute_url,
82 "folio": chant["folio"],
83 "sequence": chant["c_sequence"],
84 "incipit": chant["incipit"],
85 "feast": chant["feast__name"],
86 "genre": chant["genre__name"],
87 "office": chant["office__name"],
88 "position": chant["position"],
89 "cantus_id": chant["cantus_id"],
90 "image": chant["image_link"],
91 "mode": chant["mode"],
92 "full_text": chant["manuscript_full_text_std_spelling"],
93 "melody": chant["volpiano"],
94 "db": "CD",
95 }
96 )
97
98 stdout.write(f"All chants processed - found {len(concordances)} Cantus IDs\n")
99
100 return dict(concordances)
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
--- a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
+++ b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
@@ -1,6 +1,7 @@
import ujson
import os
from sys import stdout
+from typing import Optional
from datetime import datetime
from collections import defaultdict
from django.db.models.query import QuerySet
@@ -8,10 +9,27 @@
from main_app.models import Chant
+# Usage: `python manage.py update_cached_concordances`
+# or `python manage.py update_cached_concordances -d "/path/to/directory/in/which/to/save/concordances"`
+
+
class Command(BaseCommand):
+ def add_arguments(self, parser):
+ parser.add_argument(
+ "-d",
+ "--directory",
+ help="Optional filepath specifying a directory to output concordances",
+ type=str,
+ )
+
def handle(self, *args, **kwargs) -> None:
- CACHE_DIR: str = "api_cache"
- FILEPATH: str = f"{CACHE_DIR}/concordances.json"
+ cache_dir: Optional[str] = kwargs["directory"]
+ if not cache_dir:
+ # this default directory should match the value in docker-compose.yml,
+ # at services:django:volumes:api_cache_volume
+ cache_dir = "/resources/api_cache"
+
+ filepath: str = f"{cache_dir}/concordances.json"
start_time: str = datetime.now().isoformat()
stdout.write(f"Running update_cached_concordances at {start_time}.\n")
concordances: dict = get_concordances()
@@ -23,22 +41,29 @@
"data": concordances,
"metadata": metadata,
}
- stdout.write(f"Attempting to make directory at {CACHE_DIR} to hold cache: ")
+ stdout.write(f"Attempting to make directory at {cache_dir} to hold cache: ")
try:
- os.mkdir(CACHE_DIR)
- stdout.write(f"successfully created directory at {CACHE_DIR}.\n")
+ os.mkdir(cache_dir)
+ stdout.write(f"successfully created directory at {cache_dir}.\n")
except FileExistsError:
- stdout.write(f"directory at {CACHE_DIR} already exists.\n")
- stdout.write(f"Writing concordances to {FILEPATH} at {write_time}.\n")
- with open(FILEPATH, "w") as json_file:
+ stdout.write(f"directory at {cache_dir} already exists.\n")
+ stdout.write(f"Writing concordances to {filepath} at {write_time}.\n")
+ with open(filepath, "w") as json_file:
ujson.dump(data_and_metadata, json_file)
end_time = datetime.now().isoformat()
stdout.write(
- f"Concordances successfully written to {FILEPATH} at {end_time}.\n\n"
+ f"Concordances successfully written to {filepath} at {end_time}.\n\n"
)
def get_concordances() -> dict:
+ """Fetch all published chants in the database, group them by Cantus ID, and return
+ a dictionary containing information on each of these chants.
+
+ Returns:
+ dict: A dictionary where each key is a Cantus ID and each value is a list all
+ published chants in the database with that Cantus ID.
+ """
DOMAIN: str = "https://cantusdatabase.org"
stdout.write("Querying database for published chants\n")
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n--- a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n+++ b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n@@ -1,6 +1,7 @@\n import ujson\n import os\n from sys import stdout\n+from typing import Optional\n from datetime import datetime\n from collections import defaultdict\n from django.db.models.query import QuerySet\n@@ -8,10 +9,27 @@\n from main_app.models import Chant\n \n \n+# Usage: `python manage.py update_cached_concordances`\n+# or `python manage.py update_cached_concordances -d \"/path/to/directory/in/which/to/save/concordances\"`\n+\n+\n class Command(BaseCommand):\n+ def add_arguments(self, parser):\n+ parser.add_argument(\n+ \"-d\",\n+ \"--directory\",\n+ help=\"Optional filepath specifying a directory to output concordances\",\n+ type=str,\n+ )\n+\n def handle(self, *args, **kwargs) -> None:\n- CACHE_DIR: str = \"api_cache\"\n- FILEPATH: str = f\"{CACHE_DIR}/concordances.json\"\n+ cache_dir: Optional[str] = kwargs[\"directory\"]\n+ if not cache_dir:\n+ # this default directory should match the value in docker-compose.yml,\n+ # at services:django:volumes:api_cache_volume\n+ cache_dir = \"/resources/api_cache\"\n+\n+ filepath: str = f\"{cache_dir}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n@@ -23,22 +41,29 @@\n \"data\": concordances,\n \"metadata\": metadata,\n }\n- stdout.write(f\"Attempting to make directory at {CACHE_DIR} to hold cache: \")\n+ stdout.write(f\"Attempting to make directory at {cache_dir} to hold cache: \")\n try:\n- os.mkdir(CACHE_DIR)\n- stdout.write(f\"successfully created directory at {CACHE_DIR}.\\n\")\n+ os.mkdir(cache_dir)\n+ stdout.write(f\"successfully created directory at {cache_dir}.\\n\")\n except FileExistsError:\n- stdout.write(f\"directory at {CACHE_DIR} already exists.\\n\")\n- stdout.write(f\"Writing concordances to {FILEPATH} at {write_time}.\\n\")\n- with open(FILEPATH, \"w\") as json_file:\n+ stdout.write(f\"directory at {cache_dir} already exists.\\n\")\n+ stdout.write(f\"Writing concordances to {filepath} at {write_time}.\\n\")\n+ with open(filepath, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n- f\"Concordances successfully written to {FILEPATH} at {end_time}.\\n\\n\"\n+ f\"Concordances successfully written to {filepath} at {end_time}.\\n\\n\"\n )\n \n \n def get_concordances() -> dict:\n+ \"\"\"Fetch all published chants in the database, group them by Cantus ID, and return\n+ a dictionary containing information on each of these chants.\n+\n+ Returns:\n+ dict: A dictionary where each key is a Cantus ID and each value is a list all\n+ published chants in the database with that Cantus ID.\n+ \"\"\"\n DOMAIN: str = \"https://cantusdatabase.org\"\n \n stdout.write(\"Querying database for published chants\\n\")\n", "issue": "We need a new API that displays concordances information for all chants in the database\nIn an email from Jan:\r\n\r\n> The intensive process of getting all the data from CD via individual json-cid requests (59.000+) is running already on the 3rd day (and not finished yet) but this will not keep the Cantus data fresh in the Cantus Index 
API in the long term.\r\n> \r\n> The solution would be to regularly create a large JSON file export of all the CD chants (with the same fields as in json-cid exports) and make it available as a file to download. An example of such json export is here: https://austriamanus.org/files/concordances-export.json\r\n> This kind of data transfer works also with the MMMO database which has approximately half the amount of data compared to a CD. I believe it would also be the best solution for CD.\r\n\r\nThis will not be difficult. We can use the code in our `json-con` API, but return all chants rather than filtering them by Cantus ID.\r\n\r\nWhat's a good path for this API to live at? `/json-concordances-export`?\n", "before_files": [{"content": "import ujson\nimport os\nfrom sys import stdout\nfrom datetime import datetime\nfrom collections import defaultdict\nfrom django.db.models.query import QuerySet\nfrom django.core.management.base import BaseCommand\nfrom main_app.models import Chant\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **kwargs) -> None:\n CACHE_DIR: str = \"api_cache\"\n FILEPATH: str = f\"{CACHE_DIR}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n write_time: str = datetime.now().isoformat()\n metadata: dict = {\n \"last_updated\": write_time,\n }\n data_and_metadata: dict = {\n \"data\": concordances,\n \"metadata\": metadata,\n }\n stdout.write(f\"Attempting to make directory at {CACHE_DIR} to hold cache: \")\n try:\n os.mkdir(CACHE_DIR)\n stdout.write(f\"successfully created directory at {CACHE_DIR}.\\n\")\n except FileExistsError:\n stdout.write(f\"directory at {CACHE_DIR} already exists.\\n\")\n stdout.write(f\"Writing concordances to {FILEPATH} at {write_time}.\\n\")\n with open(FILEPATH, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n f\"Concordances successfully written to {FILEPATH} at {end_time}.\\n\\n\"\n )\n\n\ndef get_concordances() -> dict:\n DOMAIN: str = \"https://cantusdatabase.org\"\n\n stdout.write(\"Querying database for published chants\\n\")\n published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)\n values: QuerySet[dict] = published_chants.select_related(\n \"source\",\n \"feast\",\n \"genre\",\n \"office\",\n ).values(\n \"id\",\n \"source_id\",\n \"source__siglum\",\n \"folio\",\n \"c_sequence\",\n \"incipit\",\n \"feast__name\",\n \"genre__name\",\n \"office__name\",\n \"position\",\n \"cantus_id\",\n \"image_link\",\n \"mode\",\n \"manuscript_full_text_std_spelling\",\n \"volpiano\",\n )\n\n stdout.write(\"Processing chants\\n\")\n concordances: defaultdict = defaultdict(list)\n for chant in values:\n source_id: int = chant[\"source_id\"]\n source_absolute_url: str = f\"{DOMAIN}/source/{source_id}/\"\n chant_id: int = chant[\"id\"]\n chant_absolute_url: str = f\"{DOMAIN}/chant/{chant_id}/\"\n\n concordances[chant[\"cantus_id\"]].append(\n {\n \"siglum\": chant[\"source__siglum\"],\n \"srclink\": source_absolute_url,\n \"chantlink\": chant_absolute_url,\n \"folio\": chant[\"folio\"],\n \"sequence\": chant[\"c_sequence\"],\n \"incipit\": chant[\"incipit\"],\n \"feast\": chant[\"feast__name\"],\n \"genre\": chant[\"genre__name\"],\n \"office\": chant[\"office__name\"],\n \"position\": chant[\"position\"],\n \"cantus_id\": chant[\"cantus_id\"],\n \"image\": chant[\"image_link\"],\n \"mode\": chant[\"mode\"],\n 
\"full_text\": chant[\"manuscript_full_text_std_spelling\"],\n \"melody\": chant[\"volpiano\"],\n \"db\": \"CD\",\n }\n )\n\n stdout.write(f\"All chants processed - found {len(concordances)} Cantus IDs\\n\")\n\n return dict(concordances)\n", "path": "django/cantusdb_project/main_app/management/commands/update_cached_concordances.py"}], "after_files": [{"content": "import ujson\nimport os\nfrom sys import stdout\nfrom typing import Optional\nfrom datetime import datetime\nfrom collections import defaultdict\nfrom django.db.models.query import QuerySet\nfrom django.core.management.base import BaseCommand\nfrom main_app.models import Chant\n\n\n# Usage: `python manage.py update_cached_concordances`\n# or `python manage.py update_cached_concordances -d \"/path/to/directory/in/which/to/save/concordances\"`\n\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"-d\",\n \"--directory\",\n help=\"Optional filepath specifying a directory to output concordances\",\n type=str,\n )\n\n def handle(self, *args, **kwargs) -> None:\n cache_dir: Optional[str] = kwargs[\"directory\"]\n if not cache_dir:\n # this default directory should match the value in docker-compose.yml,\n # at services:django:volumes:api_cache_volume\n cache_dir = \"/resources/api_cache\"\n\n filepath: str = f\"{cache_dir}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n write_time: str = datetime.now().isoformat()\n metadata: dict = {\n \"last_updated\": write_time,\n }\n data_and_metadata: dict = {\n \"data\": concordances,\n \"metadata\": metadata,\n }\n stdout.write(f\"Attempting to make directory at {cache_dir} to hold cache: \")\n try:\n os.mkdir(cache_dir)\n stdout.write(f\"successfully created directory at {cache_dir}.\\n\")\n except FileExistsError:\n stdout.write(f\"directory at {cache_dir} already exists.\\n\")\n stdout.write(f\"Writing concordances to {filepath} at {write_time}.\\n\")\n with open(filepath, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n f\"Concordances successfully written to {filepath} at {end_time}.\\n\\n\"\n )\n\n\ndef get_concordances() -> dict:\n \"\"\"Fetch all published chants in the database, group them by Cantus ID, and return\n a dictionary containing information on each of these chants.\n\n Returns:\n dict: A dictionary where each key is a Cantus ID and each value is a list all\n published chants in the database with that Cantus ID.\n \"\"\"\n DOMAIN: str = \"https://cantusdatabase.org\"\n\n stdout.write(\"Querying database for published chants\\n\")\n published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)\n values: QuerySet[dict] = published_chants.select_related(\n \"source\",\n \"feast\",\n \"genre\",\n \"office\",\n ).values(\n \"id\",\n \"source_id\",\n \"source__siglum\",\n \"folio\",\n \"c_sequence\",\n \"incipit\",\n \"feast__name\",\n \"genre__name\",\n \"office__name\",\n \"position\",\n \"cantus_id\",\n \"image_link\",\n \"mode\",\n \"manuscript_full_text_std_spelling\",\n \"volpiano\",\n )\n\n stdout.write(\"Processing chants\\n\")\n concordances: defaultdict = defaultdict(list)\n for chant in values:\n source_id: int = chant[\"source_id\"]\n source_absolute_url: str = f\"{DOMAIN}/source/{source_id}/\"\n chant_id: int = chant[\"id\"]\n chant_absolute_url: str = f\"{DOMAIN}/chant/{chant_id}/\"\n\n 
concordances[chant[\"cantus_id\"]].append(\n {\n \"siglum\": chant[\"source__siglum\"],\n \"srclink\": source_absolute_url,\n \"chantlink\": chant_absolute_url,\n \"folio\": chant[\"folio\"],\n \"sequence\": chant[\"c_sequence\"],\n \"incipit\": chant[\"incipit\"],\n \"feast\": chant[\"feast__name\"],\n \"genre\": chant[\"genre__name\"],\n \"office\": chant[\"office__name\"],\n \"position\": chant[\"position\"],\n \"cantus_id\": chant[\"cantus_id\"],\n \"image\": chant[\"image_link\"],\n \"mode\": chant[\"mode\"],\n \"full_text\": chant[\"manuscript_full_text_std_spelling\"],\n \"melody\": chant[\"volpiano\"],\n \"db\": \"CD\",\n }\n )\n\n stdout.write(f\"All chants processed - found {len(concordances)} Cantus IDs\\n\")\n\n return dict(concordances)\n", "path": "django/cantusdb_project/main_app/management/commands/update_cached_concordances.py"}]}
| 1,524 | 831 |
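The golden diff above replaces a hard-coded cache directory with an optional `-d/--directory` argument on a Django management command. As a minimal standalone sketch of that pattern (a hypothetical command, not code from the CantusDB repository; the default path is a placeholder), assuming a Django project is configured:

```python
# Minimal Django management command with an optional output directory flag,
# mirroring the add_arguments/handle pattern from the diff above.
import json
import os
from typing import Optional

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Write a JSON export to an optional output directory."

    def add_arguments(self, parser):
        parser.add_argument(
            "-d",
            "--directory",
            type=str,
            help="Optional directory in which to write the export",
        )

    def handle(self, *args, **options) -> None:
        cache_dir: Optional[str] = options["directory"]
        if not cache_dir:
            cache_dir = "/tmp/api_cache"  # placeholder default
        os.makedirs(cache_dir, exist_ok=True)
        filepath = os.path.join(cache_dir, "export.json")
        with open(filepath, "w") as json_file:
            json.dump({"data": {}, "metadata": {"last_updated": None}}, json_file)
        self.stdout.write(f"Export written to {filepath}\n")
```

Invoked as `python manage.py <command_name> -d /path/to/dir`, where `<command_name>` comes from the module's filename.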
gh_patches_debug_40357
|
rasdani/github-patches
|
git_diff
|
napari__napari-2410
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cancel doesn't work on preference dialog
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
The cancel button on the preferences dialog isn't working properly. I think it's possible that the function I removed in the last PR that I thought was unnecessary was actually necessary. 
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
- Please copy and paste the information at napari info option in help menubar here:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_qt/dialogs/preferences_dialog.py`
Content:
```
1 import json
2
3 from qtpy.QtCore import Signal
4 from qtpy.QtWidgets import (
5 QDialog,
6 QHBoxLayout,
7 QLabel,
8 QListWidget,
9 QPushButton,
10 QStackedWidget,
11 QVBoxLayout,
12 QWidget,
13 )
14
15 from ..._vendor.qt_json_builder.qt_jsonschema_form import WidgetBuilder
16 from ...utils.settings import SETTINGS
17 from ...utils.settings._defaults import ApplicationSettings, PluginSettings
18 from ...utils.translations import translator
19
20 trans = translator.load()
21
22
23 class PreferencesDialog(QDialog):
24 """Preferences Dialog for Napari user settings."""
25
26 def __init__(self, parent=None):
27 super().__init__(parent)
28
29 self._list = QListWidget(self)
30 self._stack = QStackedWidget(self)
31
32 # Set up buttons
33 self._button_cancel = QPushButton(trans._("Cancel"))
34 self._button_ok = QPushButton(trans._("OK"))
35 self._default_restore = QPushButton(trans._("Restore defaults"))
36
37 # Setup
38 self.setWindowTitle(trans._("Preferences"))
39
40 # Layout
41 main_layout = QHBoxLayout()
42 main_layout.addWidget(self._list)
43 main_layout.addWidget(self._stack)
44
45 buttons_layout = QHBoxLayout()
46 buttons_layout.addWidget(self._button_cancel)
47 buttons_layout.addWidget(self._button_ok)
48
49 layout = QVBoxLayout()
50 layout.addLayout(main_layout)
51 layout.addWidget(self._default_restore)
52 layout.addLayout(buttons_layout)
53
54 self.setLayout(layout)
55
56 # Signals
57
58 self._list.currentRowChanged.connect(
59 lambda index: self._stack.setCurrentIndex(index)
60 )
61 self._button_cancel.clicked.connect(self.on_click_cancel)
62 self._button_ok.clicked.connect(self.on_click_ok)
63 self._default_restore.clicked.connect(self.restore_defaults)
64
65 # Make widget
66
67 self.make_dialog()
68 self._list.setCurrentRow(0)
69
70 def make_dialog(self):
71 """Removes settings not to be exposed to user and creates dialog pages."""
72
73 settings_list = [ApplicationSettings(), PluginSettings()]
74 cnt = 0
75 for key, setting in SETTINGS.schemas().items():
76
77 schema = json.loads(setting['json_schema'])
78 # need to remove certain properties that will not be displayed on the GUI
79 properties = schema.pop('properties')
80 values = setting['model'].dict()
81 for val in settings_list[cnt].NapariConfig().preferences_exclude:
82 properties.pop(val)
83 values.pop(val)
84
85 cnt += 1
86 schema['properties'] = properties
87
88 self.add_page(schema, values)
89
90 def restore_defaults(self):
91 """Launches dialog to confirm restore settings choice."""
92
93 widget = ConfirmDialog(
94 parent=self,
95 text=trans._("Are you sure you want to restore default settings?"),
96 )
97 widget.valueChanged.connect(self._reset_widgets)
98 widget.exec_()
99
100 def _reset_widgets(self):
101 """Deletes the widgets and rebuilds with defaults."""
102 self.close()
103 self._list.clear()
104
105 for n in range(self._stack.count()):
106 widget = self._stack.removeWidget(self._stack.currentWidget())
107 del widget
108
109 self.make_dialog()
110 self._list.setCurrentRow(0)
111 self.show()
112
113 def on_click_ok(self):
114 """Keeps the selected preferences saved to SETTINGS."""
115 self.close()
116
117 def on_click_cancel(self):
118 """Restores the settings in place when dialog was launched."""
119 self.check_differences(self._values_orig_set, self._values_set)
120 self.close()
121
122 def add_page(self, schema, values):
123 """Creates a new page for each section in dialog.
124
125 Parameters
126 ----------
127 schema : dict
128 Json schema including all information to build each page in the
129 preferences dialog.
130 values : dict
131 Dictionary of current values set in preferences.
132 """
133 widget = self.build_page_dialog(schema, values)
134 self._list.addItem(schema["title"])
135 self._stack.addWidget(widget)
136
137 def build_page_dialog(self, schema, values):
138 """Builds the preferences widget using the json schema builder.
139
140 Parameters
141 ----------
142 schema : dict
143 Json schema including all information to build each page in the
144 preferences dialog.
145 values : dict
146 Dictionary of current values set in preferences.
147 """
148 self._values_orig_set = set(values.items())
149 self._values_set = set(values.items())
150
151 builder = WidgetBuilder()
152 form = builder.create_form(schema, {})
153 # set state values for widget
154 form.widget.state = values
155 form.widget.on_changed.connect(
156 lambda d: self.check_differences(set(d.items()), self._values_set)
157 )
158
159 return form
160
161 def check_differences(self, new_set, values_set):
162 """Changes settings in settings manager with changes from dialog.
163
164 Parameters
165 ----------
166 new_set : set
167 The set of new values, with tuples of key value pairs for each
168 setting.
169 values_set : set
170 The old set of values.
171 """
172
173 page = self._list.currentItem().text().split(" ")[0].lower()
174 different_values = list(new_set - values_set)
175
176 if len(different_values) > 0:
177 # change the values in SETTINGS
178 for val in different_values:
179 try:
180 setattr(SETTINGS._settings[page], val[0], val[1])
181 self._values_set = new_set
182 except: # noqa: E722
183 continue
184
185
186 class ConfirmDialog(QDialog):
187 """Dialog to confirms a user's choice to restore default settings."""
188
189 valueChanged = Signal(bool)
190
191 def __init__(
192 self,
193 parent: QWidget = None,
194 text: str = "",
195 ):
196 super().__init__(parent)
197
198 # Set up components
199 self._question = QLabel(self)
200 self._button_restore = QPushButton(trans._("Restore"))
201 self._button_cancel = QPushButton(trans._("Cancel"))
202
203 # Widget set up
204 self._question.setText(text)
205
206 # Layout
207 button_layout = QHBoxLayout()
208 button_layout.addWidget(self._button_cancel)
209 button_layout.addWidget(self._button_restore)
210
211 main_layout = QVBoxLayout()
212 main_layout.addWidget(self._question)
213 main_layout.addLayout(button_layout)
214
215 self.setLayout(main_layout)
216
217 # Signals
218 self._button_cancel.clicked.connect(self.on_click_cancel)
219 self._button_restore.clicked.connect(self.on_click_restore)
220
221 def on_click_cancel(self):
222 """Do not restore defaults and close window."""
223 self.close()
224
225 def on_click_restore(self):
226 """Restore defaults and close window."""
227 SETTINGS.reset()
228 self.valueChanged.emit(True)
229 self.close()
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/napari/_qt/dialogs/preferences_dialog.py b/napari/_qt/dialogs/preferences_dialog.py
--- a/napari/_qt/dialogs/preferences_dialog.py
+++ b/napari/_qt/dialogs/preferences_dialog.py
@@ -72,8 +72,10 @@
settings_list = [ApplicationSettings(), PluginSettings()]
cnt = 0
+ # Because there are multiple pages, need to keep a list of values sets.
+ self._values_orig_set_list = []
+ self._values_set_list = []
for key, setting in SETTINGS.schemas().items():
-
schema = json.loads(setting['json_schema'])
# need to remove certain properties that will not be displayed on the GUI
properties = schema.pop('properties')
@@ -84,7 +86,8 @@
cnt += 1
schema['properties'] = properties
-
+ self._values_orig_set_list.append(set(values.items()))
+ self._values_set_list.append(set(values.items()))
self.add_page(schema, values)
def restore_defaults(self):
@@ -116,7 +119,16 @@
def on_click_cancel(self):
"""Restores the settings in place when dialog was launched."""
- self.check_differences(self._values_orig_set, self._values_set)
+ # Need to check differences for each page.
+ for n in range(self._stack.count()):
+ # Must set the current row so that the proper set list is updated
+ # in check differences.
+ self._list.setCurrentRow(n)
+ self.check_differences(
+ self._values_orig_set_list[n],
+ self._values_set_list[n],
+ )
+ self._list.setCurrentRow(0)
self.close()
def add_page(self, schema, values):
@@ -145,15 +157,16 @@
values : dict
Dictionary of current values set in preferences.
"""
- self._values_orig_set = set(values.items())
- self._values_set = set(values.items())
builder = WidgetBuilder()
form = builder.create_form(schema, {})
# set state values for widget
form.widget.state = values
form.widget.on_changed.connect(
- lambda d: self.check_differences(set(d.items()), self._values_set)
+ lambda d: self.check_differences(
+ set(d.items()),
+ self._values_set_list[self._list.currentIndex().row()],
+ )
)
return form
@@ -178,7 +191,9 @@
for val in different_values:
try:
setattr(SETTINGS._settings[page], val[0], val[1])
- self._values_set = new_set
+ self._values_set_list[
+ self._list.currentIndex().row()
+ ] = new_set
except: # noqa: E722
continue
|
{"golden_diff": "diff --git a/napari/_qt/dialogs/preferences_dialog.py b/napari/_qt/dialogs/preferences_dialog.py\n--- a/napari/_qt/dialogs/preferences_dialog.py\n+++ b/napari/_qt/dialogs/preferences_dialog.py\n@@ -72,8 +72,10 @@\n \n settings_list = [ApplicationSettings(), PluginSettings()]\n cnt = 0\n+ # Because there are multiple pages, need to keep a list of values sets.\n+ self._values_orig_set_list = []\n+ self._values_set_list = []\n for key, setting in SETTINGS.schemas().items():\n-\n schema = json.loads(setting['json_schema'])\n # need to remove certain properties that will not be displayed on the GUI\n properties = schema.pop('properties')\n@@ -84,7 +86,8 @@\n \n cnt += 1\n schema['properties'] = properties\n-\n+ self._values_orig_set_list.append(set(values.items()))\n+ self._values_set_list.append(set(values.items()))\n self.add_page(schema, values)\n \n def restore_defaults(self):\n@@ -116,7 +119,16 @@\n \n def on_click_cancel(self):\n \"\"\"Restores the settings in place when dialog was launched.\"\"\"\n- self.check_differences(self._values_orig_set, self._values_set)\n+ # Need to check differences for each page.\n+ for n in range(self._stack.count()):\n+ # Must set the current row so that the proper set list is updated\n+ # in check differences.\n+ self._list.setCurrentRow(n)\n+ self.check_differences(\n+ self._values_orig_set_list[n],\n+ self._values_set_list[n],\n+ )\n+ self._list.setCurrentRow(0)\n self.close()\n \n def add_page(self, schema, values):\n@@ -145,15 +157,16 @@\n values : dict\n Dictionary of current values set in preferences.\n \"\"\"\n- self._values_orig_set = set(values.items())\n- self._values_set = set(values.items())\n \n builder = WidgetBuilder()\n form = builder.create_form(schema, {})\n # set state values for widget\n form.widget.state = values\n form.widget.on_changed.connect(\n- lambda d: self.check_differences(set(d.items()), self._values_set)\n+ lambda d: self.check_differences(\n+ set(d.items()),\n+ self._values_set_list[self._list.currentIndex().row()],\n+ )\n )\n \n return form\n@@ -178,7 +191,9 @@\n for val in different_values:\n try:\n setattr(SETTINGS._settings[page], val[0], val[1])\n- self._values_set = new_set\n+ self._values_set_list[\n+ self._list.currentIndex().row()\n+ ] = new_set\n except: # noqa: E722\n continue\n", "issue": "Cancel doesn't work on preference dialog\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nThe cancel button on the preferences dialog isn't working properly. I think its possible that the function I removed in the last PR that I thought was unnecessary was actually necessary. \r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.\r\n2.\r\n3.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n - Please copy and paste the information at napari info option in help menubar here:\r\n\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n\n", "before_files": [{"content": "import json\n\nfrom qtpy.QtCore import Signal\nfrom qtpy.QtWidgets import (\n QDialog,\n QHBoxLayout,\n QLabel,\n QListWidget,\n QPushButton,\n QStackedWidget,\n QVBoxLayout,\n QWidget,\n)\n\nfrom ..._vendor.qt_json_builder.qt_jsonschema_form import WidgetBuilder\nfrom ...utils.settings import SETTINGS\nfrom ...utils.settings._defaults import ApplicationSettings, PluginSettings\nfrom ...utils.translations import translator\n\ntrans = translator.load()\n\n\nclass PreferencesDialog(QDialog):\n \"\"\"Preferences Dialog for Napari user settings.\"\"\"\n\n def __init__(self, parent=None):\n super().__init__(parent)\n\n self._list = QListWidget(self)\n self._stack = QStackedWidget(self)\n\n # Set up buttons\n self._button_cancel = QPushButton(trans._(\"Cancel\"))\n self._button_ok = QPushButton(trans._(\"OK\"))\n self._default_restore = QPushButton(trans._(\"Restore defaults\"))\n\n # Setup\n self.setWindowTitle(trans._(\"Preferences\"))\n\n # Layout\n main_layout = QHBoxLayout()\n main_layout.addWidget(self._list)\n main_layout.addWidget(self._stack)\n\n buttons_layout = QHBoxLayout()\n buttons_layout.addWidget(self._button_cancel)\n buttons_layout.addWidget(self._button_ok)\n\n layout = QVBoxLayout()\n layout.addLayout(main_layout)\n layout.addWidget(self._default_restore)\n layout.addLayout(buttons_layout)\n\n self.setLayout(layout)\n\n # Signals\n\n self._list.currentRowChanged.connect(\n lambda index: self._stack.setCurrentIndex(index)\n )\n self._button_cancel.clicked.connect(self.on_click_cancel)\n self._button_ok.clicked.connect(self.on_click_ok)\n self._default_restore.clicked.connect(self.restore_defaults)\n\n # Make widget\n\n self.make_dialog()\n self._list.setCurrentRow(0)\n\n def make_dialog(self):\n \"\"\"Removes settings not to be exposed to user and creates dialog pages.\"\"\"\n\n settings_list = [ApplicationSettings(), PluginSettings()]\n cnt = 0\n for key, setting in SETTINGS.schemas().items():\n\n schema = json.loads(setting['json_schema'])\n # need to remove certain properties that will not be displayed on the GUI\n properties = schema.pop('properties')\n values = setting['model'].dict()\n for val in settings_list[cnt].NapariConfig().preferences_exclude:\n properties.pop(val)\n values.pop(val)\n\n cnt += 1\n schema['properties'] = properties\n\n self.add_page(schema, values)\n\n def restore_defaults(self):\n \"\"\"Launches dialog to confirm restore settings choice.\"\"\"\n\n widget = ConfirmDialog(\n parent=self,\n text=trans._(\"Are you sure you want to restore default settings?\"),\n )\n widget.valueChanged.connect(self._reset_widgets)\n widget.exec_()\n\n def _reset_widgets(self):\n \"\"\"Deletes the widgets and rebuilds with defaults.\"\"\"\n self.close()\n self._list.clear()\n\n for n in range(self._stack.count()):\n widget = self._stack.removeWidget(self._stack.currentWidget())\n del widget\n\n self.make_dialog()\n self._list.setCurrentRow(0)\n self.show()\n\n def on_click_ok(self):\n \"\"\"Keeps the selected preferences saved to SETTINGS.\"\"\"\n self.close()\n\n def on_click_cancel(self):\n \"\"\"Restores the settings in place when dialog was launched.\"\"\"\n self.check_differences(self._values_orig_set, self._values_set)\n self.close()\n\n def add_page(self, schema, values):\n \"\"\"Creates a new page for each section in dialog.\n\n Parameters\n ----------\n schema : dict\n Json schema including all information to build each page in the\n preferences dialog.\n values : dict\n Dictionary of current values set in 
preferences.\n \"\"\"\n widget = self.build_page_dialog(schema, values)\n self._list.addItem(schema[\"title\"])\n self._stack.addWidget(widget)\n\n def build_page_dialog(self, schema, values):\n \"\"\"Builds the preferences widget using the json schema builder.\n\n Parameters\n ----------\n schema : dict\n Json schema including all information to build each page in the\n preferences dialog.\n values : dict\n Dictionary of current values set in preferences.\n \"\"\"\n self._values_orig_set = set(values.items())\n self._values_set = set(values.items())\n\n builder = WidgetBuilder()\n form = builder.create_form(schema, {})\n # set state values for widget\n form.widget.state = values\n form.widget.on_changed.connect(\n lambda d: self.check_differences(set(d.items()), self._values_set)\n )\n\n return form\n\n def check_differences(self, new_set, values_set):\n \"\"\"Changes settings in settings manager with changes from dialog.\n\n Parameters\n ----------\n new_set : set\n The set of new values, with tuples of key value pairs for each\n setting.\n values_set : set\n The old set of values.\n \"\"\"\n\n page = self._list.currentItem().text().split(\" \")[0].lower()\n different_values = list(new_set - values_set)\n\n if len(different_values) > 0:\n # change the values in SETTINGS\n for val in different_values:\n try:\n setattr(SETTINGS._settings[page], val[0], val[1])\n self._values_set = new_set\n except: # noqa: E722\n continue\n\n\nclass ConfirmDialog(QDialog):\n \"\"\"Dialog to confirms a user's choice to restore default settings.\"\"\"\n\n valueChanged = Signal(bool)\n\n def __init__(\n self,\n parent: QWidget = None,\n text: str = \"\",\n ):\n super().__init__(parent)\n\n # Set up components\n self._question = QLabel(self)\n self._button_restore = QPushButton(trans._(\"Restore\"))\n self._button_cancel = QPushButton(trans._(\"Cancel\"))\n\n # Widget set up\n self._question.setText(text)\n\n # Layout\n button_layout = QHBoxLayout()\n button_layout.addWidget(self._button_cancel)\n button_layout.addWidget(self._button_restore)\n\n main_layout = QVBoxLayout()\n main_layout.addWidget(self._question)\n main_layout.addLayout(button_layout)\n\n self.setLayout(main_layout)\n\n # Signals\n self._button_cancel.clicked.connect(self.on_click_cancel)\n self._button_restore.clicked.connect(self.on_click_restore)\n\n def on_click_cancel(self):\n \"\"\"Do not restore defaults and close window.\"\"\"\n self.close()\n\n def on_click_restore(self):\n \"\"\"Restore defaults and close window.\"\"\"\n SETTINGS.reset()\n self.valueChanged.emit(True)\n self.close()\n", "path": "napari/_qt/dialogs/preferences_dialog.py"}], "after_files": [{"content": "import json\n\nfrom qtpy.QtCore import Signal\nfrom qtpy.QtWidgets import (\n QDialog,\n QHBoxLayout,\n QLabel,\n QListWidget,\n QPushButton,\n QStackedWidget,\n QVBoxLayout,\n QWidget,\n)\n\nfrom ..._vendor.qt_json_builder.qt_jsonschema_form import WidgetBuilder\nfrom ...utils.settings import SETTINGS\nfrom ...utils.settings._defaults import ApplicationSettings, PluginSettings\nfrom ...utils.translations import translator\n\ntrans = translator.load()\n\n\nclass PreferencesDialog(QDialog):\n \"\"\"Preferences Dialog for Napari user settings.\"\"\"\n\n def __init__(self, parent=None):\n super().__init__(parent)\n\n self._list = QListWidget(self)\n self._stack = QStackedWidget(self)\n\n # Set up buttons\n self._button_cancel = QPushButton(trans._(\"Cancel\"))\n self._button_ok = QPushButton(trans._(\"OK\"))\n self._default_restore = QPushButton(trans._(\"Restore 
defaults\"))\n\n # Setup\n self.setWindowTitle(trans._(\"Preferences\"))\n\n # Layout\n main_layout = QHBoxLayout()\n main_layout.addWidget(self._list)\n main_layout.addWidget(self._stack)\n\n buttons_layout = QHBoxLayout()\n buttons_layout.addWidget(self._button_cancel)\n buttons_layout.addWidget(self._button_ok)\n\n layout = QVBoxLayout()\n layout.addLayout(main_layout)\n layout.addWidget(self._default_restore)\n layout.addLayout(buttons_layout)\n\n self.setLayout(layout)\n\n # Signals\n\n self._list.currentRowChanged.connect(\n lambda index: self._stack.setCurrentIndex(index)\n )\n self._button_cancel.clicked.connect(self.on_click_cancel)\n self._button_ok.clicked.connect(self.on_click_ok)\n self._default_restore.clicked.connect(self.restore_defaults)\n\n # Make widget\n\n self.make_dialog()\n self._list.setCurrentRow(0)\n\n def make_dialog(self):\n \"\"\"Removes settings not to be exposed to user and creates dialog pages.\"\"\"\n\n settings_list = [ApplicationSettings(), PluginSettings()]\n cnt = 0\n # Because there are multiple pages, need to keep a list of values sets.\n self._values_orig_set_list = []\n self._values_set_list = []\n for key, setting in SETTINGS.schemas().items():\n schema = json.loads(setting['json_schema'])\n # need to remove certain properties that will not be displayed on the GUI\n properties = schema.pop('properties')\n values = setting['model'].dict()\n for val in settings_list[cnt].NapariConfig().preferences_exclude:\n properties.pop(val)\n values.pop(val)\n\n cnt += 1\n schema['properties'] = properties\n self._values_orig_set_list.append(set(values.items()))\n self._values_set_list.append(set(values.items()))\n self.add_page(schema, values)\n\n def restore_defaults(self):\n \"\"\"Launches dialog to confirm restore settings choice.\"\"\"\n\n widget = ConfirmDialog(\n parent=self,\n text=trans._(\"Are you sure you want to restore default settings?\"),\n )\n widget.valueChanged.connect(self._reset_widgets)\n widget.exec_()\n\n def _reset_widgets(self):\n \"\"\"Deletes the widgets and rebuilds with defaults.\"\"\"\n self.close()\n self._list.clear()\n\n for n in range(self._stack.count()):\n widget = self._stack.removeWidget(self._stack.currentWidget())\n del widget\n\n self.make_dialog()\n self._list.setCurrentRow(0)\n self.show()\n\n def on_click_ok(self):\n \"\"\"Keeps the selected preferences saved to SETTINGS.\"\"\"\n self.close()\n\n def on_click_cancel(self):\n \"\"\"Restores the settings in place when dialog was launched.\"\"\"\n # Need to check differences for each page.\n for n in range(self._stack.count()):\n # Must set the current row so that the proper set list is updated\n # in check differences.\n self._list.setCurrentRow(n)\n self.check_differences(\n self._values_orig_set_list[n],\n self._values_set_list[n],\n )\n self._list.setCurrentRow(0)\n self.close()\n\n def add_page(self, schema, values):\n \"\"\"Creates a new page for each section in dialog.\n\n Parameters\n ----------\n schema : dict\n Json schema including all information to build each page in the\n preferences dialog.\n values : dict\n Dictionary of current values set in preferences.\n \"\"\"\n widget = self.build_page_dialog(schema, values)\n self._list.addItem(schema[\"title\"])\n self._stack.addWidget(widget)\n\n def build_page_dialog(self, schema, values):\n \"\"\"Builds the preferences widget using the json schema builder.\n\n Parameters\n ----------\n schema : dict\n Json schema including all information to build each page in the\n preferences dialog.\n values : dict\n Dictionary 
of current values set in preferences.\n \"\"\"\n\n builder = WidgetBuilder()\n form = builder.create_form(schema, {})\n # set state values for widget\n form.widget.state = values\n form.widget.on_changed.connect(\n lambda d: self.check_differences(\n set(d.items()),\n self._values_set_list[self._list.currentIndex().row()],\n )\n )\n\n return form\n\n def check_differences(self, new_set, values_set):\n \"\"\"Changes settings in settings manager with changes from dialog.\n\n Parameters\n ----------\n new_set : set\n The set of new values, with tuples of key value pairs for each\n setting.\n values_set : set\n The old set of values.\n \"\"\"\n\n page = self._list.currentItem().text().split(\" \")[0].lower()\n different_values = list(new_set - values_set)\n\n if len(different_values) > 0:\n # change the values in SETTINGS\n for val in different_values:\n try:\n setattr(SETTINGS._settings[page], val[0], val[1])\n self._values_set_list[\n self._list.currentIndex().row()\n ] = new_set\n except: # noqa: E722\n continue\n\n\nclass ConfirmDialog(QDialog):\n \"\"\"Dialog to confirms a user's choice to restore default settings.\"\"\"\n\n valueChanged = Signal(bool)\n\n def __init__(\n self,\n parent: QWidget = None,\n text: str = \"\",\n ):\n super().__init__(parent)\n\n # Set up components\n self._question = QLabel(self)\n self._button_restore = QPushButton(trans._(\"Restore\"))\n self._button_cancel = QPushButton(trans._(\"Cancel\"))\n\n # Widget set up\n self._question.setText(text)\n\n # Layout\n button_layout = QHBoxLayout()\n button_layout.addWidget(self._button_cancel)\n button_layout.addWidget(self._button_restore)\n\n main_layout = QVBoxLayout()\n main_layout.addWidget(self._question)\n main_layout.addLayout(button_layout)\n\n self.setLayout(main_layout)\n\n # Signals\n self._button_cancel.clicked.connect(self.on_click_cancel)\n self._button_restore.clicked.connect(self.on_click_restore)\n\n def on_click_cancel(self):\n \"\"\"Do not restore defaults and close window.\"\"\"\n self.close()\n\n def on_click_restore(self):\n \"\"\"Restore defaults and close window.\"\"\"\n SETTINGS.reset()\n self.valueChanged.emit(True)\n self.close()\n", "path": "napari/_qt/dialogs/preferences_dialog.py"}]}
| 2,417 | 638 |
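The napari fix above works by keeping one original and one current set of (key, value) pairs per preferences page, so that Cancel can walk every page and revert it. Stripped of the Qt widgets, the bookkeeping can be sketched as follows (an illustration only, not napari code):

```python
# Framework-free sketch of per-page preference tracking: remember the original
# values for each page, record edits, and compute what Cancel must restore.


class PreferenceTracker:
    def __init__(self, pages: dict):
        # pages maps a page name to a dict of its current setting values
        self._originals = {name: set(vals.items()) for name, vals in pages.items()}
        self._current = {name: set(vals.items()) for name, vals in pages.items()}

    def apply(self, page: str, values: dict) -> set:
        """Record edits on one page; return the (key, value) pairs that changed."""
        new_set = set(values.items())
        changed = new_set - self._current[page]
        self._current[page] = new_set
        return changed

    def cancel(self) -> dict:
        """Return, per page, the pairs needed to restore the original values."""
        restores = {}
        for page, original in self._originals.items():
            restores[page] = original - self._current[page]
            self._current[page] = set(original)
        return restores


if __name__ == "__main__":
    tracker = PreferenceTracker(
        {"application": {"theme": "dark", "save_window_geometry": True}}
    )
    tracker.apply("application", {"theme": "light", "save_window_geometry": True})
    print(tracker.cancel())  # {'application': {('theme', 'dark')}}
```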
gh_patches_debug_1320
|
rasdani/github-patches
|
git_diff
|
conda__conda-5124
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
export toposort for conda-build
export toposort for conda-build
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda/exports.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from functools import partial
5 from logging import getLogger
6 from warnings import warn
7
8 log = getLogger(__name__)
9
10 from . import CondaError # NOQA
11 CondaError = CondaError
12
13 from . import compat, plan # NOQA
14 compat, plan = compat, plan
15
16 from .api import get_index # NOQA
17 get_index = get_index
18
19 from .cli.common import specs_from_args, spec_from_line, specs_from_url # NOQA
20 from .cli.conda_argparse import add_parser_prefix, add_parser_channels # NOQA
21 add_parser_channels, add_parser_prefix = add_parser_channels, add_parser_prefix
22 specs_from_args, spec_from_line = specs_from_args, spec_from_line
23 specs_from_url = specs_from_url
24
25 from .cli.conda_argparse import ArgumentParser # NOQA
26 ArgumentParser = ArgumentParser
27
28 from .common.compat import PY3, StringIO, input, iteritems, string_types, text_type # NOQA
29 PY3, StringIO, input, iteritems, string_types, text_type = PY3, StringIO, input, iteritems, string_types, text_type # NOQA
30 from .gateways.connection import CondaSession # NOQA
31 CondaSession = CondaSession
32
33 from .gateways.disk.link import lchmod # NOQA
34 lchmod = lchmod
35
36 from .fetch import TmpDownload # NOQA
37 TmpDownload = TmpDownload
38 handle_proxy_407 = lambda x, y: warn("handle_proxy_407 is deprecated. "
39 "Now handled by CondaSession.")
40 from .core.index import dist_str_in_index, fetch_index # NOQA
41 dist_str_in_index, fetch_index = dist_str_in_index, fetch_index
42 from .core.package_cache import download, rm_fetched # NOQA
43 download, rm_fetched = download, rm_fetched
44
45 from .install import package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA
46 package_cache, prefix_placeholder, rm_rf, symlink_conda = package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA
47
48 from .gateways.disk.delete import delete_trash, move_to_trash # NOQA
49 delete_trash, move_to_trash = delete_trash, move_to_trash
50
51 from .core.linked_data import is_linked, linked, linked_data # NOQA
52 is_linked, linked, linked_data = is_linked, linked, linked_data
53
54 from .misc import untracked, walk_prefix # NOQA
55 untracked, walk_prefix = untracked, walk_prefix
56
57 from .resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA
58 MatchSpec, NoPackagesFound, Resolve = MatchSpec, NoPackagesFound, Resolve
59 Unsatisfiable, normalized_version = Unsatisfiable, normalized_version
60
61 from .signature import KEYS, KEYS_DIR, hash_file, verify # NOQA
62 KEYS, KEYS_DIR = KEYS, KEYS_DIR
63 hash_file, verify = hash_file, verify
64
65 from .utils import hashsum_file, human_bytes, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA
66 hashsum_file, human_bytes = hashsum_file, human_bytes
67 memoized, unix_path_to_win = memoized, unix_path_to_win
68 win_path_to_unix, url_path = win_path_to_unix, url_path
69
70 from .gateways.disk.read import compute_md5sum # NOQA
71 md5_file = compute_md5sum
72
73 from .config import sys_rc_path # NOQA
74 sys_rc_path = sys_rc_path
75
76 from .models.version import VersionOrder # NOQA
77 VersionOrder = VersionOrder
78
79 import conda.base.context # NOQA
80 from .base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA
81 non_x86_linux_machines = non_x86_linux_machines
82
83 from ._vendor.auxlib.entity import EntityEncoder # NOQA
84 EntityEncoder = EntityEncoder
85 from .base.constants import DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA
86 DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX = DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA
87 get_prefix = partial(context_get_prefix, conda.base.context.context)
88 get_default_urls = lambda: DEFAULT_CHANNELS
89
90 arch_name = conda.base.context.context.arch_name
91 binstar_upload = conda.base.context.context.anaconda_upload
92 bits = conda.base.context.context.bits
93 default_prefix = conda.base.context.context.default_prefix
94 default_python = conda.base.context.context.default_python
95 envs_dirs = conda.base.context.context.envs_dirs
96 pkgs_dirs = conda.base.context.context.pkgs_dirs
97 platform = conda.base.context.context.platform
98 root_dir = conda.base.context.context.root_prefix
99 root_writable = conda.base.context.context.root_writable
100 subdir = conda.base.context.context.subdir
101 from .models.channel import get_conda_build_local_url # NOQA
102 get_rc_urls = lambda: list(conda.base.context.context.channels)
103 get_local_urls = lambda: list(get_conda_build_local_url()) or []
104 load_condarc = lambda fn: conda.base.context.reset_context([fn])
105 from .exceptions import PaddingError # NOQA
106 PaddingError = PaddingError
107 from .gateways.disk.link import CrossPlatformStLink # NOQA
108 CrossPlatformStLink = CrossPlatformStLink
109
110 from .models.enums import FileMode # NOQA
111 FileMode = FileMode
112 from .models.enums import PathType # NOQA
113 PathType = PathType
114
115
116 if PY3:
117 import configparser # NOQA # pragma: py2 no cover
118 else:
119 import ConfigParser as configparser # NOQA # pragma: py3 no cover
120 configparser = configparser
121
122
123 from .compat import TemporaryDirectory # NOQA
124 TemporaryDirectory = TemporaryDirectory
125
126 from .gateways.subprocess import ACTIVE_SUBPROCESSES, subprocess_call # NOQA
127 ACTIVE_SUBPROCESSES, subprocess_call = ACTIVE_SUBPROCESSES, subprocess_call
128
129 from .core.repodata import cache_fn_url # NOQA
130 cache_fn_url = cache_fn_url
131
132
133 class Completer(object):
134 def get_items(self):
135 return self._get_items()
136
137 def __contains__(self, item):
138 return True
139
140 def __iter__(self):
141 return iter(self.get_items())
142
143 class InstalledPackages(object): pass # NOQA
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda/exports.py b/conda/exports.py
--- a/conda/exports.py
+++ b/conda/exports.py
@@ -30,6 +30,9 @@
from .gateways.connection import CondaSession # NOQA
CondaSession = CondaSession
+from .common.toposort import _toposort
+_toposort = _toposort
+
from .gateways.disk.link import lchmod # NOQA
lchmod = lchmod
|
{"golden_diff": "diff --git a/conda/exports.py b/conda/exports.py\n--- a/conda/exports.py\n+++ b/conda/exports.py\n@@ -30,6 +30,9 @@\n from .gateways.connection import CondaSession # NOQA\n CondaSession = CondaSession\n \n+from .common.toposort import _toposort\n+_toposort = _toposort\n+\n from .gateways.disk.link import lchmod # NOQA\n lchmod = lchmod\n", "issue": "export toposort for conda-build\n\nexport toposort for conda-build\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom functools import partial\nfrom logging import getLogger\nfrom warnings import warn\n\nlog = getLogger(__name__)\n\nfrom . import CondaError # NOQA\nCondaError = CondaError\n\nfrom . import compat, plan # NOQA\ncompat, plan = compat, plan\n\nfrom .api import get_index # NOQA\nget_index = get_index\n\nfrom .cli.common import specs_from_args, spec_from_line, specs_from_url # NOQA\nfrom .cli.conda_argparse import add_parser_prefix, add_parser_channels # NOQA\nadd_parser_channels, add_parser_prefix = add_parser_channels, add_parser_prefix\nspecs_from_args, spec_from_line = specs_from_args, spec_from_line\nspecs_from_url = specs_from_url\n\nfrom .cli.conda_argparse import ArgumentParser # NOQA\nArgumentParser = ArgumentParser\n\nfrom .common.compat import PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nPY3, StringIO, input, iteritems, string_types, text_type = PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nfrom .gateways.connection import CondaSession # NOQA\nCondaSession = CondaSession\n\nfrom .gateways.disk.link import lchmod # NOQA\nlchmod = lchmod\n\nfrom .fetch import TmpDownload # NOQA\nTmpDownload = TmpDownload\nhandle_proxy_407 = lambda x, y: warn(\"handle_proxy_407 is deprecated. 
\"\n \"Now handled by CondaSession.\")\nfrom .core.index import dist_str_in_index, fetch_index # NOQA\ndist_str_in_index, fetch_index = dist_str_in_index, fetch_index\nfrom .core.package_cache import download, rm_fetched # NOQA\ndownload, rm_fetched = download, rm_fetched\n\nfrom .install import package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\npackage_cache, prefix_placeholder, rm_rf, symlink_conda = package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\n\nfrom .gateways.disk.delete import delete_trash, move_to_trash # NOQA\ndelete_trash, move_to_trash = delete_trash, move_to_trash\n\nfrom .core.linked_data import is_linked, linked, linked_data # NOQA\nis_linked, linked, linked_data = is_linked, linked, linked_data\n\nfrom .misc import untracked, walk_prefix # NOQA\nuntracked, walk_prefix = untracked, walk_prefix\n\nfrom .resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA\nMatchSpec, NoPackagesFound, Resolve = MatchSpec, NoPackagesFound, Resolve\nUnsatisfiable, normalized_version = Unsatisfiable, normalized_version\n\nfrom .signature import KEYS, KEYS_DIR, hash_file, verify # NOQA\nKEYS, KEYS_DIR = KEYS, KEYS_DIR\nhash_file, verify = hash_file, verify\n\nfrom .utils import hashsum_file, human_bytes, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA\nhashsum_file, human_bytes = hashsum_file, human_bytes\nmemoized, unix_path_to_win = memoized, unix_path_to_win\nwin_path_to_unix, url_path = win_path_to_unix, url_path\n\nfrom .gateways.disk.read import compute_md5sum # NOQA\nmd5_file = compute_md5sum\n\nfrom .config import sys_rc_path # NOQA\nsys_rc_path = sys_rc_path\n\nfrom .models.version import VersionOrder # NOQA\nVersionOrder = VersionOrder\n\nimport conda.base.context # NOQA\nfrom .base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA\nnon_x86_linux_machines = non_x86_linux_machines\n\nfrom ._vendor.auxlib.entity import EntityEncoder # NOQA\nEntityEncoder = EntityEncoder\nfrom .base.constants import DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nDEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX = DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nget_prefix = partial(context_get_prefix, conda.base.context.context)\nget_default_urls = lambda: DEFAULT_CHANNELS\n\narch_name = conda.base.context.context.arch_name\nbinstar_upload = conda.base.context.context.anaconda_upload\nbits = conda.base.context.context.bits\ndefault_prefix = conda.base.context.context.default_prefix\ndefault_python = conda.base.context.context.default_python\nenvs_dirs = conda.base.context.context.envs_dirs\npkgs_dirs = conda.base.context.context.pkgs_dirs\nplatform = conda.base.context.context.platform\nroot_dir = conda.base.context.context.root_prefix\nroot_writable = conda.base.context.context.root_writable\nsubdir = conda.base.context.context.subdir\nfrom .models.channel import get_conda_build_local_url # NOQA\nget_rc_urls = lambda: list(conda.base.context.context.channels)\nget_local_urls = lambda: list(get_conda_build_local_url()) or []\nload_condarc = lambda fn: conda.base.context.reset_context([fn])\nfrom .exceptions import PaddingError # NOQA\nPaddingError = PaddingError\nfrom .gateways.disk.link import CrossPlatformStLink # NOQA\nCrossPlatformStLink = CrossPlatformStLink\n\nfrom .models.enums import FileMode # NOQA\nFileMode = FileMode\nfrom .models.enums import PathType # NOQA\nPathType = PathType\n\n\nif PY3:\n import configparser # NOQA # pragma: 
py2 no cover\nelse:\n import ConfigParser as configparser # NOQA # pragma: py3 no cover\nconfigparser = configparser\n\n\nfrom .compat import TemporaryDirectory # NOQA\nTemporaryDirectory = TemporaryDirectory\n\nfrom .gateways.subprocess import ACTIVE_SUBPROCESSES, subprocess_call # NOQA\nACTIVE_SUBPROCESSES, subprocess_call = ACTIVE_SUBPROCESSES, subprocess_call\n\nfrom .core.repodata import cache_fn_url # NOQA\ncache_fn_url = cache_fn_url\n\n\nclass Completer(object):\n def get_items(self):\n return self._get_items()\n\n def __contains__(self, item):\n return True\n\n def __iter__(self):\n return iter(self.get_items())\n\nclass InstalledPackages(object): pass # NOQA\n", "path": "conda/exports.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom functools import partial\nfrom logging import getLogger\nfrom warnings import warn\n\nlog = getLogger(__name__)\n\nfrom . import CondaError # NOQA\nCondaError = CondaError\n\nfrom . import compat, plan # NOQA\ncompat, plan = compat, plan\n\nfrom .api import get_index # NOQA\nget_index = get_index\n\nfrom .cli.common import specs_from_args, spec_from_line, specs_from_url # NOQA\nfrom .cli.conda_argparse import add_parser_prefix, add_parser_channels # NOQA\nadd_parser_channels, add_parser_prefix = add_parser_channels, add_parser_prefix\nspecs_from_args, spec_from_line = specs_from_args, spec_from_line\nspecs_from_url = specs_from_url\n\nfrom .cli.conda_argparse import ArgumentParser # NOQA\nArgumentParser = ArgumentParser\n\nfrom .common.compat import PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nPY3, StringIO, input, iteritems, string_types, text_type = PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nfrom .gateways.connection import CondaSession # NOQA\nCondaSession = CondaSession\n\nfrom .common.toposort import _toposort\n_toposort = _toposort\n\nfrom .gateways.disk.link import lchmod # NOQA\nlchmod = lchmod\n\nfrom .fetch import TmpDownload # NOQA\nTmpDownload = TmpDownload\nhandle_proxy_407 = lambda x, y: warn(\"handle_proxy_407 is deprecated. 
\"\n \"Now handled by CondaSession.\")\nfrom .core.index import dist_str_in_index, fetch_index # NOQA\ndist_str_in_index, fetch_index = dist_str_in_index, fetch_index\nfrom .core.package_cache import download, rm_fetched # NOQA\ndownload, rm_fetched = download, rm_fetched\n\nfrom .install import package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\npackage_cache, prefix_placeholder, rm_rf, symlink_conda = package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\n\nfrom .gateways.disk.delete import delete_trash, move_to_trash # NOQA\ndelete_trash, move_to_trash = delete_trash, move_to_trash\n\nfrom .core.linked_data import is_linked, linked, linked_data # NOQA\nis_linked, linked, linked_data = is_linked, linked, linked_data\n\nfrom .misc import untracked, walk_prefix # NOQA\nuntracked, walk_prefix = untracked, walk_prefix\n\nfrom .resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA\nMatchSpec, NoPackagesFound, Resolve = MatchSpec, NoPackagesFound, Resolve\nUnsatisfiable, normalized_version = Unsatisfiable, normalized_version\n\nfrom .signature import KEYS, KEYS_DIR, hash_file, verify # NOQA\nKEYS, KEYS_DIR = KEYS, KEYS_DIR\nhash_file, verify = hash_file, verify\n\nfrom .utils import hashsum_file, human_bytes, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA\nhashsum_file, human_bytes = hashsum_file, human_bytes\nmemoized, unix_path_to_win = memoized, unix_path_to_win\nwin_path_to_unix, url_path = win_path_to_unix, url_path\n\nfrom .gateways.disk.read import compute_md5sum # NOQA\nmd5_file = compute_md5sum\n\nfrom .config import sys_rc_path # NOQA\nsys_rc_path = sys_rc_path\n\nfrom .models.version import VersionOrder # NOQA\nVersionOrder = VersionOrder\n\nimport conda.base.context # NOQA\nfrom .base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA\nnon_x86_linux_machines = non_x86_linux_machines\n\nfrom ._vendor.auxlib.entity import EntityEncoder # NOQA\nEntityEncoder = EntityEncoder\nfrom .base.constants import DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nDEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX = DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nget_prefix = partial(context_get_prefix, conda.base.context.context)\nget_default_urls = lambda: DEFAULT_CHANNELS\n\narch_name = conda.base.context.context.arch_name\nbinstar_upload = conda.base.context.context.anaconda_upload\nbits = conda.base.context.context.bits\ndefault_prefix = conda.base.context.context.default_prefix\ndefault_python = conda.base.context.context.default_python\nenvs_dirs = conda.base.context.context.envs_dirs\npkgs_dirs = conda.base.context.context.pkgs_dirs\nplatform = conda.base.context.context.platform\nroot_dir = conda.base.context.context.root_prefix\nroot_writable = conda.base.context.context.root_writable\nsubdir = conda.base.context.context.subdir\nfrom .models.channel import get_conda_build_local_url # NOQA\nget_rc_urls = lambda: list(conda.base.context.context.channels)\nget_local_urls = lambda: list(get_conda_build_local_url()) or []\nload_condarc = lambda fn: conda.base.context.reset_context([fn])\nfrom .exceptions import PaddingError # NOQA\nPaddingError = PaddingError\nfrom .gateways.disk.link import CrossPlatformStLink # NOQA\nCrossPlatformStLink = CrossPlatformStLink\n\nfrom .models.enums import FileMode # NOQA\nFileMode = FileMode\nfrom .models.enums import PathType # NOQA\nPathType = PathType\n\n\nif PY3:\n import configparser # NOQA # pragma: 
py2 no cover\nelse:\n import ConfigParser as configparser # NOQA # pragma: py3 no cover\nconfigparser = configparser\n\n\nfrom .compat import TemporaryDirectory # NOQA\nTemporaryDirectory = TemporaryDirectory\n\nfrom .gateways.subprocess import ACTIVE_SUBPROCESSES, subprocess_call # NOQA\nACTIVE_SUBPROCESSES, subprocess_call = ACTIVE_SUBPROCESSES, subprocess_call\n\nfrom .core.repodata import cache_fn_url # NOQA\ncache_fn_url = cache_fn_url\n\n\nclass Completer(object):\n def get_items(self):\n return self._get_items()\n\n def __contains__(self, item):\n return True\n\n def __iter__(self):\n return iter(self.get_items())\n\nclass InstalledPackages(object): pass # NOQA\n", "path": "conda/exports.py"}]}
| 2,030 | 110 |
gh_patches_debug_1450
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-3731
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
release infrastructure doesn't handle "out of order" releases
Specifically if we issue an `0.X` release, then an `0.X+1` release, and then we go to do an `0.X.1` release, the wheel automation won't work, since it builds a wheel for the latest release.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `release.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import getpass
8 import io
9 import os
10 import subprocess
11 import time
12
13 import click
14
15 from clint.textui.progress import Bar as ProgressBar
16
17 import requests
18
19
20 JENKINS_URL = (
21 "https://ci.cryptography.io/job/cryptography-support-jobs/"
22 "job/wheel-builder"
23 )
24
25
26 def run(*args, **kwargs):
27 kwargs.setdefault("stderr", subprocess.STDOUT)
28 try:
29 subprocess.check_output(list(args), **kwargs)
30 except subprocess.CalledProcessError as e:
31 # Reraise this with a different type so that str(e) is something with
32 # stdout in it.
33 raise Exception(e.cmd, e.returncode, e.output)
34
35
36 def wait_for_build_completed(session):
37 # Wait 20 seconds before actually checking if the build is complete, to
38 # ensure that it had time to really start.
39 time.sleep(20)
40 while True:
41 response = session.get(
42 "{0}/lastBuild/api/json/".format(JENKINS_URL),
43 headers={
44 "Accept": "application/json",
45 }
46 )
47 response.raise_for_status()
48 if not response.json()["building"]:
49 assert response.json()["result"] == "SUCCESS"
50 break
51 time.sleep(0.1)
52
53
54 def download_artifacts(session):
55 response = session.get(
56 "{0}/lastBuild/api/json/".format(JENKINS_URL),
57 headers={
58 "Accept": "application/json"
59 }
60 )
61 response.raise_for_status()
62 json_response = response.json()
63 assert not json_response["building"]
64 assert json_response["result"] == "SUCCESS"
65
66 paths = []
67
68 for artifact in json_response["artifacts"]:
69 response = session.get(
70 "{0}artifact/{1}".format(
71 json_response["url"], artifact["relativePath"]
72 ), stream=True
73 )
74 assert response.headers["content-length"]
75 print("Downloading {0}".format(artifact["fileName"]))
76 bar = ProgressBar(
77 expected_size=int(response.headers["content-length"]),
78 filled_char="="
79 )
80 content = io.BytesIO()
81 for data in response.iter_content(chunk_size=8192):
82 content.write(data)
83 bar.show(content.tell())
84 assert bar.expected_size == content.tell()
85 bar.done()
86 out_path = os.path.join(
87 os.path.dirname(__file__),
88 "dist",
89 artifact["fileName"],
90 )
91 with open(out_path, "wb") as f:
92 f.write(content.getvalue())
93 paths.append(out_path)
94 return paths
95
96
97 @click.command()
98 @click.argument("version")
99 def release(version):
100 """
101 ``version`` should be a string like '0.4' or '1.0'.
102 """
103 run("git", "tag", "-s", version, "-m", "{0} release".format(version))
104 run("git", "push", "--tags")
105
106 run("python", "setup.py", "sdist")
107 run("python", "setup.py", "sdist", "bdist_wheel", cwd="vectors/")
108
109 run(
110 "twine", "upload", "-s", "dist/cryptography-{0}*".format(version),
111 "vectors/dist/cryptography_vectors-{0}*".format(version), shell=True
112 )
113
114 session = requests.Session()
115
116 # This tells the CDN to delete the cached response for the URL. We do this
117 # so that the Jenkins builders will see the new sdist immediately when they
118 # go to build the wheels.
119 response = session.request(
120 "PURGE", "https://pypi.python.org/simple/cryptography/"
121 )
122 response.raise_for_status()
123
124 token = getpass.getpass("Input the Jenkins token: ")
125 response = session.get(
126 "{0}/build".format(JENKINS_URL),
127 params={
128 "token": token,
129 "cause": "Building wheels for {0}".format(version)
130 }
131 )
132 response.raise_for_status()
133 wait_for_build_completed(session)
134 paths = download_artifacts(session)
135 run("twine", "upload", " ".join(paths))
136
137
138 if __name__ == "__main__":
139 release()
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/release.py b/release.py
--- a/release.py
+++ b/release.py
@@ -126,6 +126,7 @@
"{0}/build".format(JENKINS_URL),
params={
"token": token,
+ "BUILD_VERSION": version,
"cause": "Building wheels for {0}".format(version)
}
)
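
The patch passes the requested version through to the wheel-builder job as an explicit build parameter, so the job no longer builds wheels for whatever release happens to be latest. Below is a minimal sketch of the parameterized trigger, reusing the `JENKINS_URL` value from release.py; `session` is assumed to be a `requests.Session` and `token` the Jenkins token, and the artifact filename check at the end is purely illustrative rather than part of the patch:

```python
# Sketch only: trigger the wheel builder for an explicit version.
# The filename assertion below is an illustrative safeguard, not part of the patch.
import os

JENKINS_URL = (
    "https://ci.cryptography.io/job/cryptography-support-jobs/"
    "job/wheel-builder"
)


def trigger_wheel_build(session, token, version):
    # BUILD_VERSION is the build parameter the patch adds to the request.
    response = session.get(
        "{0}/build".format(JENKINS_URL),
        params={
            "token": token,
            "BUILD_VERSION": version,
            "cause": "Building wheels for {0}".format(version),
        },
    )
    response.raise_for_status()


def check_artifact_versions(paths, version):
    # Guard against uploading wheels left over from a different (newer) release.
    for path in paths:
        assert version in os.path.basename(path), path
```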
|
{"golden_diff": "diff --git a/release.py b/release.py\n--- a/release.py\n+++ b/release.py\n@@ -126,6 +126,7 @@\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n+ \"BUILD_VERSION\": version,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n", "issue": "release infrastrucutre doesn't handle \"out of order\" releases\nSpecifically if we issue an `0.X` release, then an `0.X+1` release, and then we go to do an `0.X.1` release, the wheel automation won't work, since it builds a wheel for the latest release.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport io\nimport os\nimport subprocess\nimport time\n\nimport click\n\nfrom clint.textui.progress import Bar as ProgressBar\n\nimport requests\n\n\nJENKINS_URL = (\n \"https://ci.cryptography.io/job/cryptography-support-jobs/\"\n \"job/wheel-builder\"\n)\n\n\ndef run(*args, **kwargs):\n kwargs.setdefault(\"stderr\", subprocess.STDOUT)\n try:\n subprocess.check_output(list(args), **kwargs)\n except subprocess.CalledProcessError as e:\n # Reraise this with a different type so that str(e) is something with\n # stdout in it.\n raise Exception(e.cmd, e.returncode, e.output)\n\n\ndef wait_for_build_completed(session):\n # Wait 20 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(20)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n json_response = response.json()\n assert not json_response[\"building\"]\n assert json_response[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for artifact in json_response[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(\n json_response[\"url\"], artifact[\"relativePath\"]\n ), stream=True\n )\n assert response.headers[\"content-length\"]\n print(\"Downloading {0}\".format(artifact[\"fileName\"]))\n bar = ProgressBar(\n expected_size=int(response.headers[\"content-length\"]),\n filled_char=\"=\"\n )\n content = io.BytesIO()\n for data in response.iter_content(chunk_size=8192):\n content.write(data)\n bar.show(content.tell())\n assert bar.expected_size == content.tell()\n bar.done()\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(content.getvalue())\n paths.append(out_path)\n return paths\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n run(\"python\", \"setup.py\", \"sdist\")\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n\n run(\n \"twine\", \"upload\", \"-s\", \"dist/cryptography-{0}*\".format(version),\n 
\"vectors/dist/cryptography_vectors-{0}*\".format(version), shell=True\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.get(\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n run(\"twine\", \"upload\", \" \".join(paths))\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport io\nimport os\nimport subprocess\nimport time\n\nimport click\n\nfrom clint.textui.progress import Bar as ProgressBar\n\nimport requests\n\n\nJENKINS_URL = (\n \"https://ci.cryptography.io/job/cryptography-support-jobs/\"\n \"job/wheel-builder\"\n)\n\n\ndef run(*args, **kwargs):\n kwargs.setdefault(\"stderr\", subprocess.STDOUT)\n try:\n subprocess.check_output(list(args), **kwargs)\n except subprocess.CalledProcessError as e:\n # Reraise this with a different type so that str(e) is something with\n # stdout in it.\n raise Exception(e.cmd, e.returncode, e.output)\n\n\ndef wait_for_build_completed(session):\n # Wait 20 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(20)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n json_response = response.json()\n assert not json_response[\"building\"]\n assert json_response[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for artifact in json_response[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(\n json_response[\"url\"], artifact[\"relativePath\"]\n ), stream=True\n )\n assert response.headers[\"content-length\"]\n print(\"Downloading {0}\".format(artifact[\"fileName\"]))\n bar = ProgressBar(\n expected_size=int(response.headers[\"content-length\"]),\n filled_char=\"=\"\n )\n content = io.BytesIO()\n for data in response.iter_content(chunk_size=8192):\n content.write(data)\n bar.show(content.tell())\n assert bar.expected_size == content.tell()\n bar.done()\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(content.getvalue())\n paths.append(out_path)\n return paths\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} 
release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n run(\"python\", \"setup.py\", \"sdist\")\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n\n run(\n \"twine\", \"upload\", \"-s\", \"dist/cryptography-{0}*\".format(version),\n \"vectors/dist/cryptography_vectors-{0}*\".format(version), shell=True\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.get(\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n \"BUILD_VERSION\": version,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n run(\"twine\", \"upload\", \" \".join(paths))\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}]}
| 1,590 | 82 |
gh_patches_debug_18161
|
rasdani/github-patches
|
git_diff
|
nipy__nipype-2595
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
config.enable_debug_mode() does not work as advertised
config.enable_debug_mode() is not equivalent to:
```
mkdir ~/.nipype
echo "[logging]" > ~/.nipype/nipype.cfg
echo "workflow_level = DEBUG" ~/.nipype/nipype.cfg
echo "interface_level = DEBUG" >> ~/.nipype/nipype.cfg
echo "filemanip_level = DEBUG" >> ~/.nipype/nipype.cfg
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/utils/config.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 '''
5 Created on 20 Apr 2010
6
7 logging options : INFO, DEBUG
8 hash_method : content, timestamp
9
10 @author: Chris Filo Gorgolewski
11 '''
12 from __future__ import (print_function, division, unicode_literals,
13 absolute_import)
14 import os
15 import sys
16 import errno
17 import atexit
18 from warnings import warn
19 from distutils.version import LooseVersion
20 import configparser
21 import numpy as np
22
23 from builtins import bytes, str, object, open
24 from simplejson import load, dump
25 from future import standard_library
26
27 from .misc import str2bool
28 from ..external import portalocker
29
30 standard_library.install_aliases()
31
32 CONFIG_DEPRECATIONS = {
33 'profile_runtime': ('monitoring.enabled', '1.0'),
34 'filemanip_level': ('logging.utils_level', '1.0'),
35 }
36
37 NUMPY_MMAP = LooseVersion(np.__version__) >= LooseVersion('1.12.0')
38
39 DEFAULT_CONFIG_TPL = """\
40 [logging]
41 workflow_level = INFO
42 utils_level = INFO
43 interface_level = INFO
44 log_to_file = false
45 log_directory = {log_dir}
46 log_size = 16384000
47 log_rotate = 4
48
49 [execution]
50 create_report = true
51 crashdump_dir = {crashdump_dir}
52 hash_method = timestamp
53 job_finished_timeout = 5
54 keep_inputs = false
55 local_hash_check = true
56 matplotlib_backend = Agg
57 plugin = Linear
58 remove_node_directories = false
59 remove_unnecessary_outputs = true
60 try_hard_link_datasink = true
61 single_thread_matlab = true
62 crashfile_format = pklz
63 stop_on_first_crash = false
64 stop_on_first_rerun = false
65 use_relative_paths = false
66 stop_on_unknown_version = false
67 write_provenance = false
68 parameterize_dirs = true
69 poll_sleep_duration = 2
70 xvfb_max_wait = 10
71
72 [monitoring]
73 enabled = false
74 sample_frequency = 1
75 summary_append = true
76
77 [check]
78 interval = 1209600
79 """.format
80
81
82 def mkdir_p(path):
83 try:
84 os.makedirs(path)
85 except OSError as exc:
86 if exc.errno == errno.EEXIST and os.path.isdir(path):
87 pass
88 else:
89 raise
90
91
92 class NipypeConfig(object):
93 """Base nipype config class"""
94
95 def __init__(self, *args, **kwargs):
96 self._config = configparser.ConfigParser()
97 self._cwd = None
98
99 config_dir = os.path.expanduser('~/.nipype')
100 self.data_file = os.path.join(config_dir, 'nipype.json')
101
102 self.set_default_config()
103 self._display = None
104 self._resource_monitor = None
105
106 if os.path.exists(config_dir):
107 self._config.read(
108 [os.path.join(config_dir, 'nipype.cfg'), 'nipype.cfg'])
109
110 for option in CONFIG_DEPRECATIONS:
111 for section in ['execution', 'logging', 'monitoring']:
112 if self.has_option(section, option):
113 new_section, new_option = CONFIG_DEPRECATIONS[option][
114 0].split('.')
115 if not self.has_option(new_section, new_option):
116 # Warn implicit in get
117 self.set(new_section, new_option,
118 self.get(section, option))
119
120 @property
121 def cwd(self):
122 """Cache current working directory ASAP"""
123 # Run getcwd only once, preventing multiproc to finish
124 # with error having changed to the wrong path
125 if self._cwd is None:
126 try:
127 self._cwd = os.getcwd()
128 except OSError:
129 warn('Trying to run Nipype from a nonexistent directory "{}".'.
130 format(os.getenv('PWD', 'unknown')), RuntimeWarning)
131 raise
132 return self._cwd
133
134 def set_default_config(self):
135 """Read default settings template and set into config object"""
136 default_cfg = DEFAULT_CONFIG_TPL(
137 log_dir=os.path.expanduser(
138 '~'), # Get $HOME in a platform-agnostic way
139 crashdump_dir=self.cwd # Read cached cwd
140 )
141
142 try:
143 self._config.read_string(default_cfg) # Python >= 3.2
144 except AttributeError:
145 from io import StringIO
146 self._config.readfp(StringIO(default_cfg))
147
148 def enable_debug_mode(self):
149 """Enables debug configuration"""
150 self._config.set('execution', 'stop_on_first_crash', 'true')
151 self._config.set('execution', 'remove_unnecessary_outputs', 'false')
152 self._config.set('execution', 'keep_inputs', 'true')
153 self._config.set('logging', 'workflow_level', 'DEBUG')
154 self._config.set('logging', 'interface_level', 'DEBUG')
155
156 def set_log_dir(self, log_dir):
157 """Sets logging directory
158
159 This should be the first thing that is done before any nipype class
160 with logging is imported.
161 """
162 self._config.set('logging', 'log_directory', log_dir)
163
164 def get(self, section, option, default=None):
165 """Get an option"""
166 if option in CONFIG_DEPRECATIONS:
167 msg = ('Config option "%s" has been deprecated as of nipype %s. '
168 'Please use "%s" instead.') % (
169 option, CONFIG_DEPRECATIONS[option][1],
170 CONFIG_DEPRECATIONS[option][0])
171 warn(msg)
172 section, option = CONFIG_DEPRECATIONS[option][0].split('.')
173
174 if self._config.has_option(section, option):
175 return self._config.get(section, option)
176 return default
177
178 def set(self, section, option, value):
179 """Set new value on option"""
180 if isinstance(value, bool):
181 value = str(value)
182
183 if option in CONFIG_DEPRECATIONS:
184 msg = ('Config option "%s" has been deprecated as of nipype %s. '
185 'Please use "%s" instead.') % (
186 option, CONFIG_DEPRECATIONS[option][1],
187 CONFIG_DEPRECATIONS[option][0])
188 warn(msg)
189 section, option = CONFIG_DEPRECATIONS[option][0].split('.')
190
191 return self._config.set(section, option, value)
192
193 def getboolean(self, section, option):
194 """Get a boolean option from section"""
195 return self._config.getboolean(section, option)
196
197 def has_option(self, section, option):
198 """Check if option exists in section"""
199 return self._config.has_option(section, option)
200
201 @property
202 def _sections(self):
203 return self._config._sections
204
205 def get_data(self, key):
206 """Read options file"""
207 if not os.path.exists(self.data_file):
208 return None
209 with open(self.data_file, 'rt') as file:
210 portalocker.lock(file, portalocker.LOCK_EX)
211 datadict = load(file)
212 if key in datadict:
213 return datadict[key]
214 return None
215
216 def save_data(self, key, value):
217 """Store config flie"""
218 datadict = {}
219 if os.path.exists(self.data_file):
220 with open(self.data_file, 'rt') as file:
221 portalocker.lock(file, portalocker.LOCK_EX)
222 datadict = load(file)
223 else:
224 dirname = os.path.dirname(self.data_file)
225 if not os.path.exists(dirname):
226 mkdir_p(dirname)
227 with open(self.data_file, 'wt') as file:
228 portalocker.lock(file, portalocker.LOCK_EX)
229 datadict[key] = value
230 dump(datadict, file)
231
232 def update_config(self, config_dict):
233 """Extend internal dictionary with config_dict"""
234 for section in ['execution', 'logging', 'check']:
235 if section in config_dict:
236 for key, val in list(config_dict[section].items()):
237 if not key.startswith('__'):
238 self._config.set(section, key, str(val))
239
240 def update_matplotlib(self):
241 """Set backend on matplotlib from options"""
242 import matplotlib
243 matplotlib.use(self.get('execution', 'matplotlib_backend'))
244
245 def enable_provenance(self):
246 """Sets provenance storing on"""
247 self._config.set('execution', 'write_provenance', 'true')
248 self._config.set('execution', 'hash_method', 'content')
249
250 @property
251 def resource_monitor(self):
252 """Check if resource_monitor is available"""
253 if self._resource_monitor is not None:
254 return self._resource_monitor
255
256 # Cache config from nipype config
257 self.resource_monitor = str2bool(
258 self._config.get('monitoring', 'enabled')) or False
259 return self._resource_monitor
260
261 @resource_monitor.setter
262 def resource_monitor(self, value):
263 # Accept string true/false values
264 if isinstance(value, (str, bytes)):
265 value = str2bool(value.lower())
266
267 if value is False:
268 self._resource_monitor = False
269 elif value is True:
270 if not self._resource_monitor:
271 # Before setting self._resource_monitor check psutil
272 # availability
273 self._resource_monitor = False
274 try:
275 import psutil
276 self._resource_monitor = LooseVersion(
277 psutil.__version__) >= LooseVersion('5.0')
278 except ImportError:
279 pass
280 finally:
281 if not self._resource_monitor:
282 warn('Could not enable the resource monitor: '
283 'psutil>=5.0 could not be imported.')
284 self._config.set('monitoring', 'enabled',
285 ('%s' % self._resource_monitor).lower())
286
287 def enable_resource_monitor(self):
288 """Sets the resource monitor on"""
289 self.resource_monitor = True
290
291 def get_display(self):
292 """Returns the first display available"""
293
294 # Check if an Xorg server is listening
295 # import subprocess as sp
296 # if not hasattr(sp, 'DEVNULL'):
297 # setattr(sp, 'DEVNULL', os.devnull)
298 # x_listening = bool(sp.call('ps au | grep -v grep | grep -i xorg',
299 # shell=True, stdout=sp.DEVNULL))
300
301 if self._display is not None:
302 return ':%d' % self._display.new_display
303
304 sysdisplay = None
305 if self._config.has_option('execution', 'display_variable'):
306 sysdisplay = self._config.get('execution', 'display_variable')
307
308 sysdisplay = sysdisplay or os.getenv('DISPLAY')
309 if sysdisplay:
310 from collections import namedtuple
311
312 def _mock():
313 pass
314
315 # Store a fake Xvfb object. Format - <host>:<display>[.<screen>]
316 ndisp = sysdisplay.split(':')[-1].split('.')[0]
317 Xvfb = namedtuple('Xvfb', ['new_display', 'stop'])
318 self._display = Xvfb(int(ndisp), _mock)
319 return self.get_display()
320 else:
321 if 'darwin' in sys.platform:
322 raise RuntimeError(
323 'Xvfb requires root permissions to run in OSX. Please '
324 'make sure that an X server is listening and set the '
325 'appropriate config on either $DISPLAY or nipype\'s '
326 '"display_variable" config. Valid X servers include '
327 'VNC, XQuartz, or manually started Xvfb.')
328
329 # If $DISPLAY is empty, it confuses Xvfb so unset
330 if sysdisplay == '':
331 del os.environ['DISPLAY']
332 try:
333 from xvfbwrapper import Xvfb
334 except ImportError:
335 raise RuntimeError(
336 'A display server was required, but $DISPLAY is not '
337 'defined and Xvfb could not be imported.')
338
339 self._display = Xvfb(nolisten='tcp')
340 self._display.start()
341
342 # Older versions of xvfbwrapper used vdisplay_num
343 if not hasattr(self._display, 'new_display'):
344 setattr(self._display, 'new_display',
345 self._display.vdisplay_num)
346 return self.get_display()
347
348 def stop_display(self):
349 """Closes the display if started"""
350 if self._display is not None:
351 from .. import logging
352 self._display.stop()
353 logging.getLogger('interface').debug(
354 'Closing display (if virtual)')
355
356
357 @atexit.register
358 def free_display():
359 """Stop virtual display (if it is up)"""
360 from .. import config
361 config.stop_display()
362
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nipype/utils/config.py b/nipype/utils/config.py
--- a/nipype/utils/config.py
+++ b/nipype/utils/config.py
@@ -147,11 +147,14 @@
def enable_debug_mode(self):
"""Enables debug configuration"""
+ from .. import logging
self._config.set('execution', 'stop_on_first_crash', 'true')
self._config.set('execution', 'remove_unnecessary_outputs', 'false')
self._config.set('execution', 'keep_inputs', 'true')
self._config.set('logging', 'workflow_level', 'DEBUG')
self._config.set('logging', 'interface_level', 'DEBUG')
+ self._config.set('logging', 'utils_level', 'DEBUG')
+ logging.update_logging(self._config)
def set_log_dir(self, log_dir):
"""Sets logging directory
|
{"golden_diff": "diff --git a/nipype/utils/config.py b/nipype/utils/config.py\n--- a/nipype/utils/config.py\n+++ b/nipype/utils/config.py\n@@ -147,11 +147,14 @@\n \n def enable_debug_mode(self):\n \"\"\"Enables debug configuration\"\"\"\n+ from .. import logging\n self._config.set('execution', 'stop_on_first_crash', 'true')\n self._config.set('execution', 'remove_unnecessary_outputs', 'false')\n self._config.set('execution', 'keep_inputs', 'true')\n self._config.set('logging', 'workflow_level', 'DEBUG')\n self._config.set('logging', 'interface_level', 'DEBUG')\n+ self._config.set('logging', 'utils_level', 'DEBUG')\n+ logging.update_logging(self._config)\n \n def set_log_dir(self, log_dir):\n \"\"\"Sets logging directory\n", "issue": "config.enable_debug_mode() does not work as advertised\nconfig.enable_debug_mode() is not equivalent to:\n\n```\nmkdir ~/.nipype\necho \"[logging]\" > ~/.nipype/nipype.cfg\necho \"workflow_level = DEBUG\" ~/.nipype/nipype.cfg\necho \"interface_level = DEBUG\" >> ~/.nipype/nipype.cfg\necho \"filemanip_level = DEBUG\" >> ~/.nipype/nipype.cfg\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n'''\nCreated on 20 Apr 2010\n\nlogging options : INFO, DEBUG\nhash_method : content, timestamp\n\n@author: Chris Filo Gorgolewski\n'''\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nimport os\nimport sys\nimport errno\nimport atexit\nfrom warnings import warn\nfrom distutils.version import LooseVersion\nimport configparser\nimport numpy as np\n\nfrom builtins import bytes, str, object, open\nfrom simplejson import load, dump\nfrom future import standard_library\n\nfrom .misc import str2bool\nfrom ..external import portalocker\n\nstandard_library.install_aliases()\n\nCONFIG_DEPRECATIONS = {\n 'profile_runtime': ('monitoring.enabled', '1.0'),\n 'filemanip_level': ('logging.utils_level', '1.0'),\n}\n\nNUMPY_MMAP = LooseVersion(np.__version__) >= LooseVersion('1.12.0')\n\nDEFAULT_CONFIG_TPL = \"\"\"\\\n[logging]\nworkflow_level = INFO\nutils_level = INFO\ninterface_level = INFO\nlog_to_file = false\nlog_directory = {log_dir}\nlog_size = 16384000\nlog_rotate = 4\n\n[execution]\ncreate_report = true\ncrashdump_dir = {crashdump_dir}\nhash_method = timestamp\njob_finished_timeout = 5\nkeep_inputs = false\nlocal_hash_check = true\nmatplotlib_backend = Agg\nplugin = Linear\nremove_node_directories = false\nremove_unnecessary_outputs = true\ntry_hard_link_datasink = true\nsingle_thread_matlab = true\ncrashfile_format = pklz\nstop_on_first_crash = false\nstop_on_first_rerun = false\nuse_relative_paths = false\nstop_on_unknown_version = false\nwrite_provenance = false\nparameterize_dirs = true\npoll_sleep_duration = 2\nxvfb_max_wait = 10\n\n[monitoring]\nenabled = false\nsample_frequency = 1\nsummary_append = true\n\n[check]\ninterval = 1209600\n\"\"\".format\n\n\ndef mkdir_p(path):\n try:\n os.makedirs(path)\n except OSError as exc:\n if exc.errno == errno.EEXIST and os.path.isdir(path):\n pass\n else:\n raise\n\n\nclass NipypeConfig(object):\n \"\"\"Base nipype config class\"\"\"\n\n def __init__(self, *args, **kwargs):\n self._config = configparser.ConfigParser()\n self._cwd = None\n\n config_dir = os.path.expanduser('~/.nipype')\n self.data_file = os.path.join(config_dir, 'nipype.json')\n\n self.set_default_config()\n self._display = None\n self._resource_monitor = None\n\n if os.path.exists(config_dir):\n 
self._config.read(\n [os.path.join(config_dir, 'nipype.cfg'), 'nipype.cfg'])\n\n for option in CONFIG_DEPRECATIONS:\n for section in ['execution', 'logging', 'monitoring']:\n if self.has_option(section, option):\n new_section, new_option = CONFIG_DEPRECATIONS[option][\n 0].split('.')\n if not self.has_option(new_section, new_option):\n # Warn implicit in get\n self.set(new_section, new_option,\n self.get(section, option))\n\n @property\n def cwd(self):\n \"\"\"Cache current working directory ASAP\"\"\"\n # Run getcwd only once, preventing multiproc to finish\n # with error having changed to the wrong path\n if self._cwd is None:\n try:\n self._cwd = os.getcwd()\n except OSError:\n warn('Trying to run Nipype from a nonexistent directory \"{}\".'.\n format(os.getenv('PWD', 'unknown')), RuntimeWarning)\n raise\n return self._cwd\n\n def set_default_config(self):\n \"\"\"Read default settings template and set into config object\"\"\"\n default_cfg = DEFAULT_CONFIG_TPL(\n log_dir=os.path.expanduser(\n '~'), # Get $HOME in a platform-agnostic way\n crashdump_dir=self.cwd # Read cached cwd\n )\n\n try:\n self._config.read_string(default_cfg) # Python >= 3.2\n except AttributeError:\n from io import StringIO\n self._config.readfp(StringIO(default_cfg))\n\n def enable_debug_mode(self):\n \"\"\"Enables debug configuration\"\"\"\n self._config.set('execution', 'stop_on_first_crash', 'true')\n self._config.set('execution', 'remove_unnecessary_outputs', 'false')\n self._config.set('execution', 'keep_inputs', 'true')\n self._config.set('logging', 'workflow_level', 'DEBUG')\n self._config.set('logging', 'interface_level', 'DEBUG')\n\n def set_log_dir(self, log_dir):\n \"\"\"Sets logging directory\n\n This should be the first thing that is done before any nipype class\n with logging is imported.\n \"\"\"\n self._config.set('logging', 'log_directory', log_dir)\n\n def get(self, section, option, default=None):\n \"\"\"Get an option\"\"\"\n if option in CONFIG_DEPRECATIONS:\n msg = ('Config option \"%s\" has been deprecated as of nipype %s. '\n 'Please use \"%s\" instead.') % (\n option, CONFIG_DEPRECATIONS[option][1],\n CONFIG_DEPRECATIONS[option][0])\n warn(msg)\n section, option = CONFIG_DEPRECATIONS[option][0].split('.')\n\n if self._config.has_option(section, option):\n return self._config.get(section, option)\n return default\n\n def set(self, section, option, value):\n \"\"\"Set new value on option\"\"\"\n if isinstance(value, bool):\n value = str(value)\n\n if option in CONFIG_DEPRECATIONS:\n msg = ('Config option \"%s\" has been deprecated as of nipype %s. 
'\n 'Please use \"%s\" instead.') % (\n option, CONFIG_DEPRECATIONS[option][1],\n CONFIG_DEPRECATIONS[option][0])\n warn(msg)\n section, option = CONFIG_DEPRECATIONS[option][0].split('.')\n\n return self._config.set(section, option, value)\n\n def getboolean(self, section, option):\n \"\"\"Get a boolean option from section\"\"\"\n return self._config.getboolean(section, option)\n\n def has_option(self, section, option):\n \"\"\"Check if option exists in section\"\"\"\n return self._config.has_option(section, option)\n\n @property\n def _sections(self):\n return self._config._sections\n\n def get_data(self, key):\n \"\"\"Read options file\"\"\"\n if not os.path.exists(self.data_file):\n return None\n with open(self.data_file, 'rt') as file:\n portalocker.lock(file, portalocker.LOCK_EX)\n datadict = load(file)\n if key in datadict:\n return datadict[key]\n return None\n\n def save_data(self, key, value):\n \"\"\"Store config flie\"\"\"\n datadict = {}\n if os.path.exists(self.data_file):\n with open(self.data_file, 'rt') as file:\n portalocker.lock(file, portalocker.LOCK_EX)\n datadict = load(file)\n else:\n dirname = os.path.dirname(self.data_file)\n if not os.path.exists(dirname):\n mkdir_p(dirname)\n with open(self.data_file, 'wt') as file:\n portalocker.lock(file, portalocker.LOCK_EX)\n datadict[key] = value\n dump(datadict, file)\n\n def update_config(self, config_dict):\n \"\"\"Extend internal dictionary with config_dict\"\"\"\n for section in ['execution', 'logging', 'check']:\n if section in config_dict:\n for key, val in list(config_dict[section].items()):\n if not key.startswith('__'):\n self._config.set(section, key, str(val))\n\n def update_matplotlib(self):\n \"\"\"Set backend on matplotlib from options\"\"\"\n import matplotlib\n matplotlib.use(self.get('execution', 'matplotlib_backend'))\n\n def enable_provenance(self):\n \"\"\"Sets provenance storing on\"\"\"\n self._config.set('execution', 'write_provenance', 'true')\n self._config.set('execution', 'hash_method', 'content')\n\n @property\n def resource_monitor(self):\n \"\"\"Check if resource_monitor is available\"\"\"\n if self._resource_monitor is not None:\n return self._resource_monitor\n\n # Cache config from nipype config\n self.resource_monitor = str2bool(\n self._config.get('monitoring', 'enabled')) or False\n return self._resource_monitor\n\n @resource_monitor.setter\n def resource_monitor(self, value):\n # Accept string true/false values\n if isinstance(value, (str, bytes)):\n value = str2bool(value.lower())\n\n if value is False:\n self._resource_monitor = False\n elif value is True:\n if not self._resource_monitor:\n # Before setting self._resource_monitor check psutil\n # availability\n self._resource_monitor = False\n try:\n import psutil\n self._resource_monitor = LooseVersion(\n psutil.__version__) >= LooseVersion('5.0')\n except ImportError:\n pass\n finally:\n if not self._resource_monitor:\n warn('Could not enable the resource monitor: '\n 'psutil>=5.0 could not be imported.')\n self._config.set('monitoring', 'enabled',\n ('%s' % self._resource_monitor).lower())\n\n def enable_resource_monitor(self):\n \"\"\"Sets the resource monitor on\"\"\"\n self.resource_monitor = True\n\n def get_display(self):\n \"\"\"Returns the first display available\"\"\"\n\n # Check if an Xorg server is listening\n # import subprocess as sp\n # if not hasattr(sp, 'DEVNULL'):\n # setattr(sp, 'DEVNULL', os.devnull)\n # x_listening = bool(sp.call('ps au | grep -v grep | grep -i xorg',\n # shell=True, stdout=sp.DEVNULL))\n\n if 
self._display is not None:\n return ':%d' % self._display.new_display\n\n sysdisplay = None\n if self._config.has_option('execution', 'display_variable'):\n sysdisplay = self._config.get('execution', 'display_variable')\n\n sysdisplay = sysdisplay or os.getenv('DISPLAY')\n if sysdisplay:\n from collections import namedtuple\n\n def _mock():\n pass\n\n # Store a fake Xvfb object. Format - <host>:<display>[.<screen>]\n ndisp = sysdisplay.split(':')[-1].split('.')[0]\n Xvfb = namedtuple('Xvfb', ['new_display', 'stop'])\n self._display = Xvfb(int(ndisp), _mock)\n return self.get_display()\n else:\n if 'darwin' in sys.platform:\n raise RuntimeError(\n 'Xvfb requires root permissions to run in OSX. Please '\n 'make sure that an X server is listening and set the '\n 'appropriate config on either $DISPLAY or nipype\\'s '\n '\"display_variable\" config. Valid X servers include '\n 'VNC, XQuartz, or manually started Xvfb.')\n\n # If $DISPLAY is empty, it confuses Xvfb so unset\n if sysdisplay == '':\n del os.environ['DISPLAY']\n try:\n from xvfbwrapper import Xvfb\n except ImportError:\n raise RuntimeError(\n 'A display server was required, but $DISPLAY is not '\n 'defined and Xvfb could not be imported.')\n\n self._display = Xvfb(nolisten='tcp')\n self._display.start()\n\n # Older versions of xvfbwrapper used vdisplay_num\n if not hasattr(self._display, 'new_display'):\n setattr(self._display, 'new_display',\n self._display.vdisplay_num)\n return self.get_display()\n\n def stop_display(self):\n \"\"\"Closes the display if started\"\"\"\n if self._display is not None:\n from .. import logging\n self._display.stop()\n logging.getLogger('interface').debug(\n 'Closing display (if virtual)')\n\n\[email protected]\ndef free_display():\n \"\"\"Stop virtual display (if it is up)\"\"\"\n from .. 
import config\n config.stop_display()\n", "path": "nipype/utils/config.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n'''\nCreated on 20 Apr 2010\n\nlogging options : INFO, DEBUG\nhash_method : content, timestamp\n\n@author: Chris Filo Gorgolewski\n'''\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nimport os\nimport sys\nimport errno\nimport atexit\nfrom warnings import warn\nfrom distutils.version import LooseVersion\nimport configparser\nimport numpy as np\n\nfrom builtins import bytes, str, object, open\nfrom simplejson import load, dump\nfrom future import standard_library\n\nfrom .misc import str2bool\nfrom ..external import portalocker\n\nstandard_library.install_aliases()\n\nCONFIG_DEPRECATIONS = {\n 'profile_runtime': ('monitoring.enabled', '1.0'),\n 'filemanip_level': ('logging.utils_level', '1.0'),\n}\n\nNUMPY_MMAP = LooseVersion(np.__version__) >= LooseVersion('1.12.0')\n\nDEFAULT_CONFIG_TPL = \"\"\"\\\n[logging]\nworkflow_level = INFO\nutils_level = INFO\ninterface_level = INFO\nlog_to_file = false\nlog_directory = {log_dir}\nlog_size = 16384000\nlog_rotate = 4\n\n[execution]\ncreate_report = true\ncrashdump_dir = {crashdump_dir}\nhash_method = timestamp\njob_finished_timeout = 5\nkeep_inputs = false\nlocal_hash_check = true\nmatplotlib_backend = Agg\nplugin = Linear\nremove_node_directories = false\nremove_unnecessary_outputs = true\ntry_hard_link_datasink = true\nsingle_thread_matlab = true\ncrashfile_format = pklz\nstop_on_first_crash = false\nstop_on_first_rerun = false\nuse_relative_paths = false\nstop_on_unknown_version = false\nwrite_provenance = false\nparameterize_dirs = true\npoll_sleep_duration = 2\nxvfb_max_wait = 10\n\n[monitoring]\nenabled = false\nsample_frequency = 1\nsummary_append = true\n\n[check]\ninterval = 1209600\n\"\"\".format\n\n\ndef mkdir_p(path):\n try:\n os.makedirs(path)\n except OSError as exc:\n if exc.errno == errno.EEXIST and os.path.isdir(path):\n pass\n else:\n raise\n\n\nclass NipypeConfig(object):\n \"\"\"Base nipype config class\"\"\"\n\n def __init__(self, *args, **kwargs):\n self._config = configparser.ConfigParser()\n self._cwd = None\n\n config_dir = os.path.expanduser('~/.nipype')\n self.data_file = os.path.join(config_dir, 'nipype.json')\n\n self.set_default_config()\n self._display = None\n self._resource_monitor = None\n\n if os.path.exists(config_dir):\n self._config.read(\n [os.path.join(config_dir, 'nipype.cfg'), 'nipype.cfg'])\n\n for option in CONFIG_DEPRECATIONS:\n for section in ['execution', 'logging', 'monitoring']:\n if self.has_option(section, option):\n new_section, new_option = CONFIG_DEPRECATIONS[option][\n 0].split('.')\n if not self.has_option(new_section, new_option):\n # Warn implicit in get\n self.set(new_section, new_option,\n self.get(section, option))\n\n @property\n def cwd(self):\n \"\"\"Cache current working directory ASAP\"\"\"\n # Run getcwd only once, preventing multiproc to finish\n # with error having changed to the wrong path\n if self._cwd is None:\n try:\n self._cwd = os.getcwd()\n except OSError:\n warn('Trying to run Nipype from a nonexistent directory \"{}\".'.\n format(os.getenv('PWD', 'unknown')), RuntimeWarning)\n raise\n return self._cwd\n\n def set_default_config(self):\n \"\"\"Read default settings template and set into config object\"\"\"\n default_cfg = DEFAULT_CONFIG_TPL(\n log_dir=os.path.expanduser(\n '~'), # Get 
$HOME in a platform-agnostic way\n crashdump_dir=self.cwd # Read cached cwd\n )\n\n try:\n self._config.read_string(default_cfg) # Python >= 3.2\n except AttributeError:\n from io import StringIO\n self._config.readfp(StringIO(default_cfg))\n\n def enable_debug_mode(self):\n \"\"\"Enables debug configuration\"\"\"\n from .. import logging\n self._config.set('execution', 'stop_on_first_crash', 'true')\n self._config.set('execution', 'remove_unnecessary_outputs', 'false')\n self._config.set('execution', 'keep_inputs', 'true')\n self._config.set('logging', 'workflow_level', 'DEBUG')\n self._config.set('logging', 'interface_level', 'DEBUG')\n self._config.set('logging', 'utils_level', 'DEBUG')\n logging.update_logging(self._config)\n\n def set_log_dir(self, log_dir):\n \"\"\"Sets logging directory\n\n This should be the first thing that is done before any nipype class\n with logging is imported.\n \"\"\"\n self._config.set('logging', 'log_directory', log_dir)\n\n def get(self, section, option, default=None):\n \"\"\"Get an option\"\"\"\n if option in CONFIG_DEPRECATIONS:\n msg = ('Config option \"%s\" has been deprecated as of nipype %s. '\n 'Please use \"%s\" instead.') % (\n option, CONFIG_DEPRECATIONS[option][1],\n CONFIG_DEPRECATIONS[option][0])\n warn(msg)\n section, option = CONFIG_DEPRECATIONS[option][0].split('.')\n\n if self._config.has_option(section, option):\n return self._config.get(section, option)\n return default\n\n def set(self, section, option, value):\n \"\"\"Set new value on option\"\"\"\n if isinstance(value, bool):\n value = str(value)\n\n if option in CONFIG_DEPRECATIONS:\n msg = ('Config option \"%s\" has been deprecated as of nipype %s. '\n 'Please use \"%s\" instead.') % (\n option, CONFIG_DEPRECATIONS[option][1],\n CONFIG_DEPRECATIONS[option][0])\n warn(msg)\n section, option = CONFIG_DEPRECATIONS[option][0].split('.')\n\n return self._config.set(section, option, value)\n\n def getboolean(self, section, option):\n \"\"\"Get a boolean option from section\"\"\"\n return self._config.getboolean(section, option)\n\n def has_option(self, section, option):\n \"\"\"Check if option exists in section\"\"\"\n return self._config.has_option(section, option)\n\n @property\n def _sections(self):\n return self._config._sections\n\n def get_data(self, key):\n \"\"\"Read options file\"\"\"\n if not os.path.exists(self.data_file):\n return None\n with open(self.data_file, 'rt') as file:\n portalocker.lock(file, portalocker.LOCK_EX)\n datadict = load(file)\n if key in datadict:\n return datadict[key]\n return None\n\n def save_data(self, key, value):\n \"\"\"Store config flie\"\"\"\n datadict = {}\n if os.path.exists(self.data_file):\n with open(self.data_file, 'rt') as file:\n portalocker.lock(file, portalocker.LOCK_EX)\n datadict = load(file)\n else:\n dirname = os.path.dirname(self.data_file)\n if not os.path.exists(dirname):\n mkdir_p(dirname)\n with open(self.data_file, 'wt') as file:\n portalocker.lock(file, portalocker.LOCK_EX)\n datadict[key] = value\n dump(datadict, file)\n\n def update_config(self, config_dict):\n \"\"\"Extend internal dictionary with config_dict\"\"\"\n for section in ['execution', 'logging', 'check']:\n if section in config_dict:\n for key, val in list(config_dict[section].items()):\n if not key.startswith('__'):\n self._config.set(section, key, str(val))\n\n def update_matplotlib(self):\n \"\"\"Set backend on matplotlib from options\"\"\"\n import matplotlib\n matplotlib.use(self.get('execution', 'matplotlib_backend'))\n\n def enable_provenance(self):\n 
\"\"\"Sets provenance storing on\"\"\"\n self._config.set('execution', 'write_provenance', 'true')\n self._config.set('execution', 'hash_method', 'content')\n\n @property\n def resource_monitor(self):\n \"\"\"Check if resource_monitor is available\"\"\"\n if self._resource_monitor is not None:\n return self._resource_monitor\n\n # Cache config from nipype config\n self.resource_monitor = str2bool(\n self._config.get('monitoring', 'enabled')) or False\n return self._resource_monitor\n\n @resource_monitor.setter\n def resource_monitor(self, value):\n # Accept string true/false values\n if isinstance(value, (str, bytes)):\n value = str2bool(value.lower())\n\n if value is False:\n self._resource_monitor = False\n elif value is True:\n if not self._resource_monitor:\n # Before setting self._resource_monitor check psutil\n # availability\n self._resource_monitor = False\n try:\n import psutil\n self._resource_monitor = LooseVersion(\n psutil.__version__) >= LooseVersion('5.0')\n except ImportError:\n pass\n finally:\n if not self._resource_monitor:\n warn('Could not enable the resource monitor: '\n 'psutil>=5.0 could not be imported.')\n self._config.set('monitoring', 'enabled',\n ('%s' % self._resource_monitor).lower())\n\n def enable_resource_monitor(self):\n \"\"\"Sets the resource monitor on\"\"\"\n self.resource_monitor = True\n\n def get_display(self):\n \"\"\"Returns the first display available\"\"\"\n\n # Check if an Xorg server is listening\n # import subprocess as sp\n # if not hasattr(sp, 'DEVNULL'):\n # setattr(sp, 'DEVNULL', os.devnull)\n # x_listening = bool(sp.call('ps au | grep -v grep | grep -i xorg',\n # shell=True, stdout=sp.DEVNULL))\n\n if self._display is not None:\n return ':%d' % self._display.new_display\n\n sysdisplay = None\n if self._config.has_option('execution', 'display_variable'):\n sysdisplay = self._config.get('execution', 'display_variable')\n\n sysdisplay = sysdisplay or os.getenv('DISPLAY')\n if sysdisplay:\n from collections import namedtuple\n\n def _mock():\n pass\n\n # Store a fake Xvfb object. Format - <host>:<display>[.<screen>]\n ndisp = sysdisplay.split(':')[-1].split('.')[0]\n Xvfb = namedtuple('Xvfb', ['new_display', 'stop'])\n self._display = Xvfb(int(ndisp), _mock)\n return self.get_display()\n else:\n if 'darwin' in sys.platform:\n raise RuntimeError(\n 'Xvfb requires root permissions to run in OSX. Please '\n 'make sure that an X server is listening and set the '\n 'appropriate config on either $DISPLAY or nipype\\'s '\n '\"display_variable\" config. Valid X servers include '\n 'VNC, XQuartz, or manually started Xvfb.')\n\n # If $DISPLAY is empty, it confuses Xvfb so unset\n if sysdisplay == '':\n del os.environ['DISPLAY']\n try:\n from xvfbwrapper import Xvfb\n except ImportError:\n raise RuntimeError(\n 'A display server was required, but $DISPLAY is not '\n 'defined and Xvfb could not be imported.')\n\n self._display = Xvfb(nolisten='tcp')\n self._display.start()\n\n # Older versions of xvfbwrapper used vdisplay_num\n if not hasattr(self._display, 'new_display'):\n setattr(self._display, 'new_display',\n self._display.vdisplay_num)\n return self.get_display()\n\n def stop_display(self):\n \"\"\"Closes the display if started\"\"\"\n if self._display is not None:\n from .. import logging\n self._display.stop()\n logging.getLogger('interface').debug(\n 'Closing display (if virtual)')\n\n\[email protected]\ndef free_display():\n \"\"\"Stop virtual display (if it is up)\"\"\"\n from .. 
import config\n config.stop_display()\n", "path": "nipype/utils/config.py"}]}
| 4,046 | 196 |
gh_patches_debug_20409
|
rasdani/github-patches
|
git_diff
|
flairNLP__flair-557
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PermissionError when downloading and copying models
I am on Windows 10, python 3.6. Today I installed flair with pip, and have been going through the documentation.
Whenever the flair package downloads a model and tries to remove it from temp, I get the following PermissionError, which I guess is a Windows-specific thing.
```
2019-02-21 11:28:27,309 https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/embeddings-v0.4/de-wiki-fasttext-300d-1M.vectors.npy not found in cache, downloading to C:\Users\EXTERN~1\AppData\Local\Temp\2\tmp5uq0mv3m
100%|██████████| 1199998928/1199998928 [00:23<00:00, 50211190.58B/s]
2019-02-21 11:28:51,427 copying C:\Users\EXTERN~1\AppData\Local\Temp\2\tmp5uq0mv3m to cache at C:\Users\external-dsvm-1\.flair\embeddings\de-wiki-fasttext-300d-1M.vectors.npy
2019-02-21 11:29:04,958 removing temp file C:\Users\EXTERN~1\AppData\Local\Temp\2\tmp5uq0mv3m
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
<ipython-input-12-a2ec2b8c11d7> in <module>
----> 1 german_embedding = WordEmbeddings('de')
C:\anaconda\envs\tf_gpu\lib\site-packages\flair\embeddings.py in __init__(self, embeddings)
185 # two-letter language code wiki embeddings
186 elif len(embeddings.lower()) == 2:
--> 187 cached_path(f'{embeddings_path_v4}{embeddings}-wiki-fasttext-300d-1M.vectors.npy', cache_dir=cache_dir)
188 embeddings = cached_path(f'{embeddings_path_v4}{embeddings}-wiki-fasttext-300d-1M', cache_dir=cache_dir)
189
C:\anaconda\envs\tf_gpu\lib\site-packages\flair\file_utils.py in cached_path(url_or_filename, cache_dir)
72 if parsed.scheme in ('http', 'https'):
73 # URL, so get it from the cache (downloading if necessary)
---> 74 return get_from_cache(url_or_filename, dataset_cache)
75 elif parsed.scheme == '' and Path(url_or_filename).exists():
76 # File, and it exists.
C:\anaconda\envs\tf_gpu\lib\site-packages\flair\file_utils.py in get_from_cache(url, cache_dir)
128 shutil.copyfile(temp_filename, str(cache_path))
129 logger.info("removing temp file %s", temp_filename)
--> 130 os.remove(temp_filename)
131
132 return cache_path
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\EXTERN~1\\AppData\\Local\\Temp\\2\\tmp5uq0mv3m'
```
--- END ISSUE ---
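The traceback above comes down to a small Windows-specific detail: `tempfile.mkstemp()` returns an already-open OS-level file descriptor together with the path, and Windows refuses to delete a file while any handle to it is still open. The snippet below is only an illustrative sketch of the safe pattern (it is not flair's code, and the payload bytes are made up): the descriptor is wrapped and closed before the delete, so `os.remove()` cannot hit WinError 32.

```python
import os
import tempfile

# mkstemp() returns (fd, path); the fd is an OPEN handle to the new file.
fd, temp_path = tempfile.mkstemp()
try:
    # os.fdopen() wraps the existing fd and closes it when the block exits,
    # so no handle to the temp file remains open afterwards.
    with os.fdopen(fd, "wb") as tmp:
        tmp.write(b"downloaded bytes")  # placeholder payload
    # ... copy temp_path into the cache here ...
finally:
    os.remove(temp_path)  # safe on Windows: every handle has been closed
```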
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flair/file_utils.py`
Content:
```
1 """
2 Utilities for working with the local dataset cache. Copied from AllenNLP
3 """
4 from pathlib import Path
5 from typing import Tuple
6 import os
7 import base64
8 import logging
9 import shutil
10 import tempfile
11 import re
12 from urllib.parse import urlparse
13
14 import mmap
15 import requests
16
17 # from allennlp.common.tqdm import Tqdm
18
19
20 logger = logging.getLogger('flair')
21
22
23 CACHE_ROOT = os.path.expanduser(os.path.join('~', '.flair'))
24
25
26 def load_big_file(f):
27 """
28 Workaround for loading a big pickle file. Files over 2GB cause pickle errors on certain Mac and Windows distributions.
29 :param f:
30 :return:
31 """
32 logger.info(f'loading file {f}')
33 with open(f, 'r+b') as f_in:
34 # mmap seems to be much more memory efficient
35 bf = mmap.mmap(f_in.fileno(), 0)
36 f_in.close()
37 return bf
38
39
40 def url_to_filename(url: str, etag: str = None) -> str:
41 """
42 Converts a url into a filename in a reversible way.
43 If `etag` is specified, add it on the end, separated by a period
44 (which necessarily won't appear in the base64-encoded filename).
45 Get rid of the quotes in the etag, since Windows doesn't like them.
46 """
47 url_bytes = url.encode('utf-8')
48 b64_bytes = base64.b64encode(url_bytes)
49 decoded = b64_bytes.decode('utf-8')
50
51 if etag:
52 # Remove quotes from etag
53 etag = etag.replace('"', '')
54 return f"{decoded}.{etag}"
55 else:
56 return decoded
57
58
59 def filename_to_url(filename: str) -> Tuple[str, str]:
60 """
61 Recovers the url from the encoded filename. Returns it and the ETag
62 (which may be ``None``)
63 """
64 try:
65 # If there is an etag, it's everything after the first period
66 decoded, etag = filename.split(".", 1)
67 except ValueError:
68 # Otherwise, use None
69 decoded, etag = filename, None
70
71 filename_bytes = decoded.encode('utf-8')
72 url_bytes = base64.b64decode(filename_bytes)
73 return url_bytes.decode('utf-8'), etag
74
75
76 def cached_path(url_or_filename: str, cache_dir: Path) -> Path:
77 """
78 Given something that might be a URL (or might be a local path),
79 determine which. If it's a URL, download the file and cache it, and
80 return the path to the cached file. If it's already a local path,
81 make sure the file exists and then return the path.
82 """
83 dataset_cache = Path(CACHE_ROOT) / cache_dir
84
85 parsed = urlparse(url_or_filename)
86
87 if parsed.scheme in ('http', 'https'):
88 # URL, so get it from the cache (downloading if necessary)
89 return get_from_cache(url_or_filename, dataset_cache)
90 elif parsed.scheme == '' and Path(url_or_filename).exists():
91 # File, and it exists.
92 return Path(url_or_filename)
93 elif parsed.scheme == '':
94 # File, but it doesn't exist.
95 raise FileNotFoundError("file {} not found".format(url_or_filename))
96 else:
97 # Something unknown
98 raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename))
99
100
101 # TODO(joelgrus): do we want to do checksums or anything like that?
102 def get_from_cache(url: str, cache_dir: Path = None) -> Path:
103 """
104 Given a URL, look for the corresponding dataset in the local cache.
105 If it's not there, download it. Then return the path to the cached file.
106 """
107 cache_dir.mkdir(parents=True, exist_ok=True)
108
109 filename = re.sub(r'.+/', '', url)
110 # get cache path to put the file
111 cache_path = cache_dir / filename
112 if cache_path.exists():
113 return cache_path
114
115 # make HEAD request to check ETag
116 response = requests.head(url)
117 if response.status_code != 200:
118 raise IOError("HEAD request failed for url {}".format(url))
119
120 # add ETag to filename if it exists
121 # etag = response.headers.get("ETag")
122
123 if not cache_path.exists():
124 # Download to temporary file, then copy to cache dir once finished.
125 # Otherwise you get corrupt cache entries if the download gets interrupted.
126 _, temp_filename = tempfile.mkstemp()
127 logger.info("%s not found in cache, downloading to %s", url, temp_filename)
128
129 # GET file object
130 req = requests.get(url, stream=True)
131 content_length = req.headers.get('Content-Length')
132 total = int(content_length) if content_length is not None else None
133 progress = Tqdm.tqdm(unit="B", total=total)
134 with open(temp_filename, 'wb') as temp_file:
135 for chunk in req.iter_content(chunk_size=1024):
136 if chunk: # filter out keep-alive new chunks
137 progress.update(len(chunk))
138 temp_file.write(chunk)
139
140 progress.close()
141
142 logger.info("copying %s to cache at %s", temp_filename, cache_path)
143 shutil.copyfile(temp_filename, str(cache_path))
144 logger.info("removing temp file %s", temp_filename)
145 os.remove(temp_filename)
146
147 return cache_path
148
149
150 from tqdm import tqdm as _tqdm
151
152
153 class Tqdm:
154 # These defaults are the same as the argument defaults in tqdm.
155 default_mininterval: float = 0.1
156
157 @staticmethod
158 def set_default_mininterval(value: float) -> None:
159 Tqdm.default_mininterval = value
160
161 @staticmethod
162 def set_slower_interval(use_slower_interval: bool) -> None:
163 """
164 If ``use_slower_interval`` is ``True``, we will dramatically slow down ``tqdm's`` default
165 output rate. ``tqdm's`` default output rate is great for interactively watching progress,
166 but it is not great for log files. You might want to set this if you are primarily going
167 to be looking at output through log files, not the terminal.
168 """
169 if use_slower_interval:
170 Tqdm.default_mininterval = 10.0
171 else:
172 Tqdm.default_mininterval = 0.1
173
174 @staticmethod
175 def tqdm(*args, **kwargs):
176 new_kwargs = {
177 'mininterval': Tqdm.default_mininterval,
178 **kwargs
179 }
180
181 return _tqdm(*args, **new_kwargs)
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flair/file_utils.py b/flair/file_utils.py
--- a/flair/file_utils.py
+++ b/flair/file_utils.py
@@ -123,7 +123,7 @@
if not cache_path.exists():
# Download to temporary file, then copy to cache dir once finished.
# Otherwise you get corrupt cache entries if the download gets interrupted.
- _, temp_filename = tempfile.mkstemp()
+ fd, temp_filename = tempfile.mkstemp()
logger.info("%s not found in cache, downloading to %s", url, temp_filename)
# GET file object
@@ -142,6 +142,7 @@
logger.info("copying %s to cache at %s", temp_filename, cache_path)
shutil.copyfile(temp_filename, str(cache_path))
logger.info("removing temp file %s", temp_filename)
+ os.close(fd)
os.remove(temp_filename)
return cache_path
|
{"golden_diff": "diff --git a/flair/file_utils.py b/flair/file_utils.py\n--- a/flair/file_utils.py\n+++ b/flair/file_utils.py\n@@ -123,7 +123,7 @@\n if not cache_path.exists():\n # Download to temporary file, then copy to cache dir once finished.\n # Otherwise you get corrupt cache entries if the download gets interrupted.\n- _, temp_filename = tempfile.mkstemp()\n+ fd, temp_filename = tempfile.mkstemp()\n logger.info(\"%s not found in cache, downloading to %s\", url, temp_filename)\n \n # GET file object\n@@ -142,6 +142,7 @@\n logger.info(\"copying %s to cache at %s\", temp_filename, cache_path)\n shutil.copyfile(temp_filename, str(cache_path))\n logger.info(\"removing temp file %s\", temp_filename)\n+ os.close(fd)\n os.remove(temp_filename)\n \n return cache_path\n", "issue": "PermissionError when downloading and copying models\nI am on Windows 10, python 3.6. Today I installed flair with pip, and have been going through the documentation. \r\n\r\nWhenever the flair package downloads a model and tries to remove it from temp, I get the following PermissionError, which my guess is a Window's specific thing.\r\n\r\n```\r\n2019-02-21 11:28:27,309 https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/embeddings-v0.4/de-wiki-fasttext-300d-1M.vectors.npy not found in cache, downloading to C:\\Users\\EXTERN~1\\AppData\\Local\\Temp\\2\\tmp5uq0mv3m\r\n\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1199998928/1199998928 [00:23<00:00, 50211190.58B/s]\r\n\r\n2019-02-21 11:28:51,427 copying C:\\Users\\EXTERN~1\\AppData\\Local\\Temp\\2\\tmp5uq0mv3m to cache at C:\\Users\\external-dsvm-1\\.flair\\embeddings\\de-wiki-fasttext-300d-1M.vectors.npy\r\n2019-02-21 11:29:04,958 removing temp file C:\\Users\\EXTERN~1\\AppData\\Local\\Temp\\2\\tmp5uq0mv3m\r\n\r\n---------------------------------------------------------------------------\r\nPermissionError Traceback (most recent call last)\r\n<ipython-input-12-a2ec2b8c11d7> in <module>\r\n----> 1 german_embedding = WordEmbeddings('de')\r\n\r\nC:\\anaconda\\envs\\tf_gpu\\lib\\site-packages\\flair\\embeddings.py in __init__(self, embeddings)\r\n 185 # two-letter language code wiki embeddings\r\n 186 elif len(embeddings.lower()) == 2:\r\n--> 187 cached_path(f'{embeddings_path_v4}{embeddings}-wiki-fasttext-300d-1M.vectors.npy', cache_dir=cache_dir)\r\n 188 embeddings = cached_path(f'{embeddings_path_v4}{embeddings}-wiki-fasttext-300d-1M', cache_dir=cache_dir)\r\n 189 \r\n\r\nC:\\anaconda\\envs\\tf_gpu\\lib\\site-packages\\flair\\file_utils.py in cached_path(url_or_filename, cache_dir)\r\n 72 if parsed.scheme in ('http', 'https'):\r\n 73 # URL, so get it from the cache (downloading if necessary)\r\n---> 74 return get_from_cache(url_or_filename, dataset_cache)\r\n 75 elif parsed.scheme == '' and Path(url_or_filename).exists():\r\n 76 # File, and it exists.\r\n\r\nC:\\anaconda\\envs\\tf_gpu\\lib\\site-packages\\flair\\file_utils.py in get_from_cache(url, cache_dir)\r\n 128 shutil.copyfile(temp_filename, str(cache_path))\r\n 129 logger.info(\"removing temp file %s\", temp_filename)\r\n--> 130 os.remove(temp_filename)\r\n 131 \r\n 132 return cache_path\r\n\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\EXTERN~1\\\\AppData\\\\Local\\\\Temp\\\\2\\\\tmp5uq0mv3m'\r\n```\n", "before_files": [{"content": "\"\"\"\nUtilities for working with the local dataset cache. 
Copied from AllenNLP\n\"\"\"\nfrom pathlib import Path\nfrom typing import Tuple\nimport os\nimport base64\nimport logging\nimport shutil\nimport tempfile\nimport re\nfrom urllib.parse import urlparse\n\nimport mmap\nimport requests\n\n# from allennlp.common.tqdm import Tqdm\n\n\nlogger = logging.getLogger('flair')\n\n\nCACHE_ROOT = os.path.expanduser(os.path.join('~', '.flair'))\n\n\ndef load_big_file(f):\n \"\"\"\n Workaround for loading a big pickle file. Files over 2GB cause pickle errors on certin Mac and Windows distributions.\n :param f:\n :return:\n \"\"\"\n logger.info(f'loading file {f}')\n with open(f, 'r+b') as f_in:\n # mmap seems to be much more memory efficient\n bf = mmap.mmap(f_in.fileno(), 0)\n f_in.close()\n return bf\n\n\ndef url_to_filename(url: str, etag: str = None) -> str:\n \"\"\"\n Converts a url into a filename in a reversible way.\n If `etag` is specified, add it on the end, separated by a period\n (which necessarily won't appear in the base64-encoded filename).\n Get rid of the quotes in the etag, since Windows doesn't like them.\n \"\"\"\n url_bytes = url.encode('utf-8')\n b64_bytes = base64.b64encode(url_bytes)\n decoded = b64_bytes.decode('utf-8')\n\n if etag:\n # Remove quotes from etag\n etag = etag.replace('\"', '')\n return f\"{decoded}.{etag}\"\n else:\n return decoded\n\n\ndef filename_to_url(filename: str) -> Tuple[str, str]:\n \"\"\"\n Recovers the the url from the encoded filename. Returns it and the ETag\n (which may be ``None``)\n \"\"\"\n try:\n # If there is an etag, it's everything after the first period\n decoded, etag = filename.split(\".\", 1)\n except ValueError:\n # Otherwise, use None\n decoded, etag = filename, None\n\n filename_bytes = decoded.encode('utf-8')\n url_bytes = base64.b64decode(filename_bytes)\n return url_bytes.decode('utf-8'), etag\n\n\ndef cached_path(url_or_filename: str, cache_dir: Path) -> Path:\n \"\"\"\n Given something that might be a URL (or might be a local path),\n determine which. If it's a URL, download the file and cache it, and\n return the path to the cached file. If it's already a local path,\n make sure the file exists and then return the path.\n \"\"\"\n dataset_cache = Path(CACHE_ROOT) / cache_dir\n\n parsed = urlparse(url_or_filename)\n\n if parsed.scheme in ('http', 'https'):\n # URL, so get it from the cache (downloading if necessary)\n return get_from_cache(url_or_filename, dataset_cache)\n elif parsed.scheme == '' and Path(url_or_filename).exists():\n # File, and it exists.\n return Path(url_or_filename)\n elif parsed.scheme == '':\n # File, but it doesn't exist.\n raise FileNotFoundError(\"file {} not found\".format(url_or_filename))\n else:\n # Something unknown\n raise ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\n\n\n# TODO(joelgrus): do we want to do checksums or anything like that?\ndef get_from_cache(url: str, cache_dir: Path = None) -> Path:\n \"\"\"\n Given a URL, look for the corresponding dataset in the local cache.\n If it's not there, download it. 
Then return the path to the cached file.\n \"\"\"\n cache_dir.mkdir(parents=True, exist_ok=True)\n\n filename = re.sub(r'.+/', '', url)\n # get cache path to put the file\n cache_path = cache_dir / filename\n if cache_path.exists():\n return cache_path\n\n # make HEAD request to check ETag\n response = requests.head(url)\n if response.status_code != 200:\n raise IOError(\"HEAD request failed for url {}\".format(url))\n\n # add ETag to filename if it exists\n # etag = response.headers.get(\"ETag\")\n\n if not cache_path.exists():\n # Download to temporary file, then copy to cache dir once finished.\n # Otherwise you get corrupt cache entries if the download gets interrupted.\n _, temp_filename = tempfile.mkstemp()\n logger.info(\"%s not found in cache, downloading to %s\", url, temp_filename)\n\n # GET file object\n req = requests.get(url, stream=True)\n content_length = req.headers.get('Content-Length')\n total = int(content_length) if content_length is not None else None\n progress = Tqdm.tqdm(unit=\"B\", total=total)\n with open(temp_filename, 'wb') as temp_file:\n for chunk in req.iter_content(chunk_size=1024):\n if chunk: # filter out keep-alive new chunks\n progress.update(len(chunk))\n temp_file.write(chunk)\n\n progress.close()\n\n logger.info(\"copying %s to cache at %s\", temp_filename, cache_path)\n shutil.copyfile(temp_filename, str(cache_path))\n logger.info(\"removing temp file %s\", temp_filename)\n os.remove(temp_filename)\n\n return cache_path\n\n\nfrom tqdm import tqdm as _tqdm\n\n\nclass Tqdm:\n # These defaults are the same as the argument defaults in tqdm.\n default_mininterval: float = 0.1\n\n @staticmethod\n def set_default_mininterval(value: float) -> None:\n Tqdm.default_mininterval = value\n\n @staticmethod\n def set_slower_interval(use_slower_interval: bool) -> None:\n \"\"\"\n If ``use_slower_interval`` is ``True``, we will dramatically slow down ``tqdm's`` default\n output rate. ``tqdm's`` default output rate is great for interactively watching progress,\n but it is not great for log files. You might want to set this if you are primarily going\n to be looking at output through log files, not the terminal.\n \"\"\"\n if use_slower_interval:\n Tqdm.default_mininterval = 10.0\n else:\n Tqdm.default_mininterval = 0.1\n\n @staticmethod\n def tqdm(*args, **kwargs):\n new_kwargs = {\n 'mininterval': Tqdm.default_mininterval,\n **kwargs\n }\n\n return _tqdm(*args, **new_kwargs)\n", "path": "flair/file_utils.py"}], "after_files": [{"content": "\"\"\"\nUtilities for working with the local dataset cache. Copied from AllenNLP\n\"\"\"\nfrom pathlib import Path\nfrom typing import Tuple\nimport os\nimport base64\nimport logging\nimport shutil\nimport tempfile\nimport re\nfrom urllib.parse import urlparse\n\nimport mmap\nimport requests\n\n# from allennlp.common.tqdm import Tqdm\n\n\nlogger = logging.getLogger('flair')\n\n\nCACHE_ROOT = os.path.expanduser(os.path.join('~', '.flair'))\n\n\ndef load_big_file(f):\n \"\"\"\n Workaround for loading a big pickle file. 
Files over 2GB cause pickle errors on certin Mac and Windows distributions.\n :param f:\n :return:\n \"\"\"\n logger.info(f'loading file {f}')\n with open(f, 'r+b') as f_in:\n # mmap seems to be much more memory efficient\n bf = mmap.mmap(f_in.fileno(), 0)\n f_in.close()\n return bf\n\n\ndef url_to_filename(url: str, etag: str = None) -> str:\n \"\"\"\n Converts a url into a filename in a reversible way.\n If `etag` is specified, add it on the end, separated by a period\n (which necessarily won't appear in the base64-encoded filename).\n Get rid of the quotes in the etag, since Windows doesn't like them.\n \"\"\"\n url_bytes = url.encode('utf-8')\n b64_bytes = base64.b64encode(url_bytes)\n decoded = b64_bytes.decode('utf-8')\n\n if etag:\n # Remove quotes from etag\n etag = etag.replace('\"', '')\n return f\"{decoded}.{etag}\"\n else:\n return decoded\n\n\ndef filename_to_url(filename: str) -> Tuple[str, str]:\n \"\"\"\n Recovers the the url from the encoded filename. Returns it and the ETag\n (which may be ``None``)\n \"\"\"\n try:\n # If there is an etag, it's everything after the first period\n decoded, etag = filename.split(\".\", 1)\n except ValueError:\n # Otherwise, use None\n decoded, etag = filename, None\n\n filename_bytes = decoded.encode('utf-8')\n url_bytes = base64.b64decode(filename_bytes)\n return url_bytes.decode('utf-8'), etag\n\n\ndef cached_path(url_or_filename: str, cache_dir: Path) -> Path:\n \"\"\"\n Given something that might be a URL (or might be a local path),\n determine which. If it's a URL, download the file and cache it, and\n return the path to the cached file. If it's already a local path,\n make sure the file exists and then return the path.\n \"\"\"\n dataset_cache = Path(CACHE_ROOT) / cache_dir\n\n parsed = urlparse(url_or_filename)\n\n if parsed.scheme in ('http', 'https'):\n # URL, so get it from the cache (downloading if necessary)\n return get_from_cache(url_or_filename, dataset_cache)\n elif parsed.scheme == '' and Path(url_or_filename).exists():\n # File, and it exists.\n return Path(url_or_filename)\n elif parsed.scheme == '':\n # File, but it doesn't exist.\n raise FileNotFoundError(\"file {} not found\".format(url_or_filename))\n else:\n # Something unknown\n raise ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\n\n\n# TODO(joelgrus): do we want to do checksums or anything like that?\ndef get_from_cache(url: str, cache_dir: Path = None) -> Path:\n \"\"\"\n Given a URL, look for the corresponding dataset in the local cache.\n If it's not there, download it. 
Then return the path to the cached file.\n \"\"\"\n cache_dir.mkdir(parents=True, exist_ok=True)\n\n filename = re.sub(r'.+/', '', url)\n # get cache path to put the file\n cache_path = cache_dir / filename\n if cache_path.exists():\n return cache_path\n\n # make HEAD request to check ETag\n response = requests.head(url)\n if response.status_code != 200:\n raise IOError(\"HEAD request failed for url {}\".format(url))\n\n # add ETag to filename if it exists\n # etag = response.headers.get(\"ETag\")\n\n if not cache_path.exists():\n # Download to temporary file, then copy to cache dir once finished.\n # Otherwise you get corrupt cache entries if the download gets interrupted.\n fd, temp_filename = tempfile.mkstemp()\n logger.info(\"%s not found in cache, downloading to %s\", url, temp_filename)\n\n # GET file object\n req = requests.get(url, stream=True)\n content_length = req.headers.get('Content-Length')\n total = int(content_length) if content_length is not None else None\n progress = Tqdm.tqdm(unit=\"B\", total=total)\n with open(temp_filename, 'wb') as temp_file:\n for chunk in req.iter_content(chunk_size=1024):\n if chunk: # filter out keep-alive new chunks\n progress.update(len(chunk))\n temp_file.write(chunk)\n\n progress.close()\n\n logger.info(\"copying %s to cache at %s\", temp_filename, cache_path)\n shutil.copyfile(temp_filename, str(cache_path))\n logger.info(\"removing temp file %s\", temp_filename)\n os.close(fd)\n os.remove(temp_filename)\n\n return cache_path\n\n\nfrom tqdm import tqdm as _tqdm\n\n\nclass Tqdm:\n # These defaults are the same as the argument defaults in tqdm.\n default_mininterval: float = 0.1\n\n @staticmethod\n def set_default_mininterval(value: float) -> None:\n Tqdm.default_mininterval = value\n\n @staticmethod\n def set_slower_interval(use_slower_interval: bool) -> None:\n \"\"\"\n If ``use_slower_interval`` is ``True``, we will dramatically slow down ``tqdm's`` default\n output rate. ``tqdm's`` default output rate is great for interactively watching progress,\n but it is not great for log files. You might want to set this if you are primarily going\n to be looking at output through log files, not the terminal.\n \"\"\"\n if use_slower_interval:\n Tqdm.default_mininterval = 10.0\n else:\n Tqdm.default_mininterval = 0.1\n\n @staticmethod\n def tqdm(*args, **kwargs):\n new_kwargs = {\n 'mininterval': Tqdm.default_mininterval,\n **kwargs\n }\n\n return _tqdm(*args, **new_kwargs)\n", "path": "flair/file_utils.py"}]}
| 2,973 | 209 |
gh_patches_debug_37049
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-2492
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bulk_update() in content-stages can cause (very rare) deadlock
**Version**
3.14
**Describe the bug**
In high-concurrency environments, with overlapping content, calling bulk_update() can cause a deadlock. Specifically, this call:
https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/content_stages.py#L158-L164
Ordering the list-to-be-updated does not, alas, protect us - because Postgres doesn't guarantee order when doing an update like this.
**To Reproduce**
We have only seen this "in the wild" once, syncing 8-10 repos with similar content at the same time with 10 workers available.
**Expected behavior**
Don't deadlock.
**Additional context**
This is the traceback from the initial description for
https://bugzilla.redhat.com/show_bug.cgi?id=2062526
We fixed the deadlock noted in https://bugzilla.redhat.com/show_bug.cgi?id=2062526#c2 under #2420
--- END ISSUE ---
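Since a plain `UPDATE ... WHERE pk IN (...)` gives no ordering guarantee in Postgres, the usual way to make concurrent bulk updates deadlock-safe is to acquire the row locks explicitly, in a deterministic order, before issuing the update. The sketch below only illustrates that general pattern with Django's ORM; the model and field names are taken from the listing below, and it is not necessarily the exact change adopted upstream.

```python
from django.db import transaction

from pulpcore.plugin.models import ContentArtifact


def bulk_update_artifacts_safely(content_artifacts):
    """Update `artifact` on many ContentArtifact rows without risking deadlock."""
    ids = sorted(ca.pulp_id for ca in content_artifacts)
    with transaction.atomic():
        # Lock the rows in a deterministic (pulp_id) order first; wrapping the
        # queryset in list() forces the SELECT ... FOR UPDATE to execute now.
        list(
            ContentArtifact.objects.filter(pulp_id__in=ids)
            .order_by("pulp_id")
            .select_for_update()
            .values_list("pulp_id", flat=True)
        )
        # Every worker acquires locks in the same order, so the UPDATE
        # statements issued by bulk_update() can no longer deadlock.
        ContentArtifact.objects.bulk_update(content_artifacts, ["artifact"])
```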
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/plugin/stages/content_stages.py`
Content:
```
1 from collections import defaultdict
2
3 from django.core.exceptions import ObjectDoesNotExist
4 from django.db import IntegrityError, transaction
5 from django.db.models import Q
6
7 from pulpcore.plugin.models import Content, ContentArtifact, ProgressReport
8
9 from .api import Stage
10
11
12 class QueryExistingContents(Stage):
13 """
14 A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related
15 :class:`~pulpcore.plugin.models.ContentArtifact` objects too.
16
17 This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`
18 and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each
19 :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one
20 :class:`~pulpcore.plugin.models.Artifact`.
21
22 This stage inspects any "unsaved" Content unit objects and searches for existing saved Content
23 units inside Pulp with the same unit key. Any existing Content objects found replace their
24 "unsaved" counterpart in the :class:`~pulpcore.plugin.stages.DeclarativeContent` object.
25
26 Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to `self._out_q` after it has
27 been handled.
28
29 This stage drains all available items from `self._in_q` and batches everything into one large
30 call to the db for efficiency.
31 """
32
33 async def run(self):
34 """
35 The coroutine for this stage.
36
37 Returns:
38 The coroutine for this stage.
39 """
40 async for batch in self.batches():
41 content_q_by_type = defaultdict(lambda: Q(pk__in=[]))
42 d_content_by_nat_key = defaultdict(list)
43 for d_content in batch:
44 if d_content.content._state.adding:
45 model_type = type(d_content.content)
46 unit_q = d_content.content.q()
47 content_q_by_type[model_type] = content_q_by_type[model_type] | unit_q
48 d_content_by_nat_key[d_content.content.natural_key()].append(d_content)
49
50 for model_type, content_q in content_q_by_type.items():
51 try:
52 model_type.objects.filter(content_q).touch()
53 except AttributeError:
54 from pulpcore.app.loggers import deprecation_logger
55 from gettext import gettext as _
56
57 deprecation_logger.warning(
58 _(
59 "As of pulpcore 3.14.5, plugins which declare custom ORM managers on "
60 "their content classes should have those managers inherit from "
61 "pulpcore.plugin.models.ContentManager. This will become a hard error "
62 "in the future."
63 )
64 )
65 for result in model_type.objects.filter(content_q).iterator():
66 for d_content in d_content_by_nat_key[result.natural_key()]:
67 d_content.content = result
68
69 for d_content in batch:
70 await self.put(d_content)
71
72
73 class ContentSaver(Stage):
74 """
75 A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related
76 :class:`~pulpcore.plugin.models.ContentArtifact` objects too.
77
78 This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`
79 and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each
80 :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one
81 :class:`~pulpcore.plugin.models.Artifact`.
82
83 Each "unsaved" Content object is saved, and a related :class:`~pulpcore.plugin.models.ContentArtifact`
84 object is created too.
85
86 Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to after it has been handled.
87
88 This stage drains all available items from `self._in_q` and batches everything into one large
89 call to the db for efficiency.
90 """
91
92 async def run(self):
93 """
94 The coroutine for this stage.
95
96 Returns:
97 The coroutine for this stage.
98 """
99 async for batch in self.batches():
100 content_artifact_bulk = []
101 to_update_ca_query = ContentArtifact.objects.none()
102 to_update_ca_bulk = []
103 to_update_ca_artifact = {}
104 with transaction.atomic():
105 await self._pre_save(batch)
106
107 # Process the batch in dc.content.natural_keys order.
108 # This prevents deadlocks when we're processing the same/similar content
109 # in concurrent workers.
110 batch.sort(key=lambda x: "".join(map(str, x.content.natural_key())))
111 for d_content in batch:
112 # Are we saving to the database for the first time?
113 content_already_saved = not d_content.content._state.adding
114 if not content_already_saved:
115 try:
116 with transaction.atomic():
117 d_content.content.save()
118 except IntegrityError as e:
119 try:
120 d_content.content = d_content.content.__class__.objects.get(
121 d_content.content.q()
122 )
123 except ObjectDoesNotExist:
124 raise e
125 else:
126 for d_artifact in d_content.d_artifacts:
127 if not d_artifact.artifact._state.adding:
128 artifact = d_artifact.artifact
129 else:
130 # set to None for on-demand synced artifacts
131 artifact = None
132 content_artifact = ContentArtifact(
133 content=d_content.content,
134 artifact=artifact,
135 relative_path=d_artifact.relative_path,
136 )
137 content_artifact_bulk.append(content_artifact)
138 continue
139 # When the Content already exists, check if ContentArtifacts need to be updated
140 for d_artifact in d_content.d_artifacts:
141 if not d_artifact.artifact._state.adding:
142 # the artifact is already present in the database; update references
143 # Creating one large query and one large dictionary
144 to_update_ca_query |= ContentArtifact.objects.filter(
145 content=d_content.content, relative_path=d_artifact.relative_path
146 )
147 key = (d_content.content.pk, d_artifact.relative_path)
148 to_update_ca_artifact[key] = d_artifact.artifact
149 # Query db once and update each object in memory for bulk_update call
150 for content_artifact in to_update_ca_query.iterator():
151 key = (content_artifact.content_id, content_artifact.relative_path)
152 # Maybe remove dict elements after to reduce memory?
153 content_artifact.artifact = to_update_ca_artifact[key]
154 to_update_ca_bulk.append(content_artifact)
155 # Sort the lists we're about to do bulk updates/creates on.
156 # We know to_update_ca_bulk entries already are in the DB, so we can enforce
157 # order just using pulp_id.
158 to_update_ca_bulk.sort(key=lambda x: x.pulp_id)
159 content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
160 ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
161 ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
162 await self._post_save(batch)
163 for declarative_content in batch:
164 await self.put(declarative_content)
165
166 async def _pre_save(self, batch):
167 """
168 A hook plugin-writers can override to save related objects prior to content unit saving.
169
170 This is run within the same transaction as the content unit saving.
171
172 Args:
173 batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of
174 :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.
175
176 """
177 pass
178
179 async def _post_save(self, batch):
180 """
181 A hook plugin-writers can override to save related objects after content unit saving.
182
183 This is run within the same transaction as the content unit saving.
184
185 Args:
186 batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of
187 :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.
188
189 """
190 pass
191
192
193 class ResolveContentFutures(Stage):
194 """
195 This stage resolves the futures in :class:`~pulpcore.plugin.stages.DeclarativeContent`.
196
197 Futures results are set to the found/created :class:`~pulpcore.plugin.models.Content`.
198
199 This is useful when data downloaded from the plugin API needs to be parsed by FirstStage to
200 create additional :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be send down
201 the pipeline. Consider an example where content type `Foo` references additional instances of a
202 different content type `Bar`. Consider this code in FirstStage::
203
204 # Create d_content and d_artifact for a `foo_a`
205 foo_a = DeclarativeContent(...)
206 # Send it in the pipeline
207 await self.put(foo_a)
208
209 ...
210
211 foo_a_content = await foo_a.resolution() # awaits until the foo_a reaches this stage
212
213 This creates a "looping" pattern, of sorts, where downloaded content at the end of the pipeline
214 can introduce new additional to-be-downloaded content at the beginning of the pipeline.
215 On the other hand, it can impose a substantial performance decrement of batching content in the
216 earlier stages.
217 If you want to drop a declarative content prematurely from the pipeline, use the function
218 `resolve()` to unblock the coroutines awaiting the attached future and do not hand the content
219 to the next stage.
220 As a rule of thumb, sending more items into the pipeline first and awaiting their resolution
221 later is better.
222 """
223
224 async def run(self):
225 """
226 The coroutine for this stage.
227
228 Returns:
229 The coroutine for this stage.
230 """
231 async for d_content in self.items():
232 d_content.resolve()
233 await self.put(d_content)
234
235
236 class ContentAssociation(Stage):
237 """
238 A Stages API stage that associates content units with `new_version`.
239
240 This stage stores all content unit primary keys in memory before running. This is done to
241 compute the units already associated but not received from `self._in_q`. These units are passed
242 via `self._out_q` to the next stage as a :class:`django.db.models.query.QuerySet`.
243
244 This stage creates a ProgressReport named 'Associating Content' that counts the number of units
245 associated. Since it's a stream the total count isn't known until it's finished.
246
247 If `mirror` was enabled, then content units may also be un-associated (removed) from
248 `new_version`. A ProgressReport named 'Un-Associating Content' is created that counts the number
249 of units un-associated.
250
251 Args:
252 new_version (:class:`~pulpcore.plugin.models.RepositoryVersion`): The repo version this
253 stage associates content with.
254 mirror (bool): Whether or not to "mirror" the stream of DeclarativeContent - whether content
255 not in the stream should be removed from the repository.
256 args: unused positional arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.
257 kwargs: unused keyword arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.
258 """
259
260 def __init__(self, new_version, mirror, *args, **kwargs):
261 super().__init__(*args, **kwargs)
262 self.new_version = new_version
263 self.allow_delete = mirror
264
265 async def run(self):
266 """
267 The coroutine for this stage.
268
269 Returns:
270 The coroutine for this stage.
271 """
272 with ProgressReport(message="Associating Content", code="associating.content") as pb:
273 to_delete = set(self.new_version.content.values_list("pk", flat=True))
274 async for batch in self.batches():
275 to_add = set()
276 for d_content in batch:
277 try:
278 to_delete.remove(d_content.content.pk)
279 except KeyError:
280 to_add.add(d_content.content.pk)
281 await self.put(d_content)
282
283 if to_add:
284 self.new_version.add_content(Content.objects.filter(pk__in=to_add))
285 pb.increase_by(len(to_add))
286
287 if self.allow_delete:
288 with ProgressReport(
289 message="Un-Associating Content", code="unassociating.content"
290 ) as pb:
291 if to_delete:
292 self.new_version.remove_content(Content.objects.filter(pk__in=to_delete))
293 pb.increase_by(len(to_delete))
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -152,12 +152,40 @@
# Maybe remove dict elements after to reduce memory?
content_artifact.artifact = to_update_ca_artifact[key]
to_update_ca_bulk.append(content_artifact)
- # Sort the lists we're about to do bulk updates/creates on.
- # We know to_update_ca_bulk entries already are in the DB, so we can enforce
- # order just using pulp_id.
- to_update_ca_bulk.sort(key=lambda x: x.pulp_id)
+
+ # to_update_ca_bulk are the CAs that we know are already persisted.
+ # We need to update their artifact_ids, and wish to do it in bulk to
+ # avoid hundreds of round-trips to the database.
+ #
+ # To avoid deadlocks in high-concurrency environments with overlapping
+ # content, we need to update the rows in some defined order. Unfortunately,
+ # postgres doesn't support order-on-update - but it *does* support ordering
+ # on select-for-update. So, we select-for-update, in pulp_id order, the
+ # rows we're about to update as one db-call, and then do the update in a
+ # second.
+ ids = [k.pulp_id for k in to_update_ca_bulk]
+ with transaction.atomic():
+ # "len()" forces the QA to be evaluated. Using exist() or count() won't
+ # work for us - Django is smart enough to either not-order, or even
+ # not-emit, a select-for-update in these cases.
+ #
+ # To maximize performance, we make sure to only ask for pulp_ids, and
+ # avoid instantiating a python-object for the affected CAs by using
+ # values_list()
+ len(
+ ContentArtifact.objects.filter(pulp_id__in=ids)
+ .only("pulp_id")
+ .order_by("pulp_id")
+ .select_for_update()
+ .values_list()
+ )
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+
+ # To avoid a similar deadlock issue when calling get_or_create, we sort the
+ # "new" CAs to make sure inserts happen in a defined order. Since we can't
+ # trust the pulp_id (by the time we go to create a CA, it may already exist,
+ # and be replaced by the 'real' one), we sort by their "natural key".
content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
await self._post_save(batch)
for declarative_content in batch:
|
{"golden_diff": "diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py\n--- a/pulpcore/plugin/stages/content_stages.py\n+++ b/pulpcore/plugin/stages/content_stages.py\n@@ -152,12 +152,40 @@\n # Maybe remove dict elements after to reduce memory?\n content_artifact.artifact = to_update_ca_artifact[key]\n to_update_ca_bulk.append(content_artifact)\n- # Sort the lists we're about to do bulk updates/creates on.\n- # We know to_update_ca_bulk entries already are in the DB, so we can enforce\n- # order just using pulp_id.\n- to_update_ca_bulk.sort(key=lambda x: x.pulp_id)\n+\n+ # to_update_ca_bulk are the CAs that we know are already persisted.\n+ # We need to update their artifact_ids, and wish to do it in bulk to\n+ # avoid hundreds of round-trips to the database.\n+ #\n+ # To avoid deadlocks in high-concurrency environments with overlapping\n+ # content, we need to update the rows in some defined order. Unfortunately,\n+ # postgres doesn't support order-on-update - but it *does* support ordering\n+ # on select-for-update. So, we select-for-update, in pulp_id order, the\n+ # rows we're about to update as one db-call, and then do the update in a\n+ # second.\n+ ids = [k.pulp_id for k in to_update_ca_bulk]\n+ with transaction.atomic():\n+ # \"len()\" forces the QA to be evaluated. Using exist() or count() won't\n+ # work for us - Django is smart enough to either not-order, or even\n+ # not-emit, a select-for-update in these cases.\n+ #\n+ # To maximize performance, we make sure to only ask for pulp_ids, and\n+ # avoid instantiating a python-object for the affected CAs by using\n+ # values_list()\n+ len(\n+ ContentArtifact.objects.filter(pulp_id__in=ids)\n+ .only(\"pulp_id\")\n+ .order_by(\"pulp_id\")\n+ .select_for_update()\n+ .values_list()\n+ )\n+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n+\n+ # To avoid a similar deadlock issue when calling get_or_create, we sort the\n+ # \"new\" CAs to make sure inserts happen in a defined order. Since we can't\n+ # trust the pulp_id (by the time we go to create a CA, it may already exist,\n+ # and be replaced by the 'real' one), we sort by their \"natural key\".\n content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))\n- ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)\n await self._post_save(batch)\n for declarative_content in batch:\n", "issue": "bulk_update() in content-stages can cause (very rare) deadlock\n**Version**\r\n3.14\r\n\r\n**Describe the bug**\r\nIn high-concurrency environments, with overlapping content, calling bulk_update() can cause a deadlock. 
Specifically, this call:\r\n\r\nhttps://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/content_stages.py#L158-L164\r\n\r\nOrdering the list-to-be-updated does not, alas, protect us - because Postgres doesn't guarantee order when doing an update like this.\r\n\r\n**To Reproduce**\r\nWe have only seen this \"in the wild\" once, syncing 8-10 repos with similar content at the same time with 10 workers available.\r\n\r\n**Expected behavior**\r\nDon't deadlock.\r\n\r\n**Additional context**\r\nThis is the traceback from the initial description for\r\n\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2062526\r\n\r\nWe fixed the deadlock noted in https://bugzilla.redhat.com/show_bug.cgi?id=2062526#c2 under #2420 \r\n\r\n\n", "before_files": [{"content": "from collections import defaultdict\n\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import IntegrityError, transaction\nfrom django.db.models import Q\n\nfrom pulpcore.plugin.models import Content, ContentArtifact, ProgressReport\n\nfrom .api import Stage\n\n\nclass QueryExistingContents(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n This stage inspects any \"unsaved\" Content unit objects and searches for existing saved Content\n units inside Pulp with the same unit key. Any existing Content objects found, replace their\n \"unsaved\" counterpart in the :class:`~pulpcore.plugin.stages.DeclarativeContent` object.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to `self._out_q` after it has\n been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n content_q_by_type = defaultdict(lambda: Q(pk__in=[]))\n d_content_by_nat_key = defaultdict(list)\n for d_content in batch:\n if d_content.content._state.adding:\n model_type = type(d_content.content)\n unit_q = d_content.content.q()\n content_q_by_type[model_type] = content_q_by_type[model_type] | unit_q\n d_content_by_nat_key[d_content.content.natural_key()].append(d_content)\n\n for model_type, content_q in content_q_by_type.items():\n try:\n model_type.objects.filter(content_q).touch()\n except AttributeError:\n from pulpcore.app.loggers import deprecation_logger\n from gettext import gettext as _\n\n deprecation_logger.warning(\n _(\n \"As of pulpcore 3.14.5, plugins which declare custom ORM managers on \"\n \"their content classes should have those managers inherit from \"\n \"pulpcore.plugin.models.ContentManager. 
This will become a hard error \"\n \"in the future.\"\n )\n )\n for result in model_type.objects.filter(content_q).iterator():\n for d_content in d_content_by_nat_key[result.natural_key()]:\n d_content.content = result\n\n for d_content in batch:\n await self.put(d_content)\n\n\nclass ContentSaver(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n Each \"unsaved\" Content objects is saved and a :class:`~pulpcore.plugin.models.ContentArtifact`\n objects too.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to after it has been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n content_artifact_bulk = []\n to_update_ca_query = ContentArtifact.objects.none()\n to_update_ca_bulk = []\n to_update_ca_artifact = {}\n with transaction.atomic():\n await self._pre_save(batch)\n\n # Process the batch in dc.content.natural_keys order.\n # This prevents deadlocks when we're processing the same/similar content\n # in concurrent workers.\n batch.sort(key=lambda x: \"\".join(map(str, x.content.natural_key())))\n for d_content in batch:\n # Are we saving to the database for the first time?\n content_already_saved = not d_content.content._state.adding\n if not content_already_saved:\n try:\n with transaction.atomic():\n d_content.content.save()\n except IntegrityError as e:\n try:\n d_content.content = d_content.content.__class__.objects.get(\n d_content.content.q()\n )\n except ObjectDoesNotExist:\n raise e\n else:\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n artifact = d_artifact.artifact\n else:\n # set to None for on-demand synced artifacts\n artifact = None\n content_artifact = ContentArtifact(\n content=d_content.content,\n artifact=artifact,\n relative_path=d_artifact.relative_path,\n )\n content_artifact_bulk.append(content_artifact)\n continue\n # When the Content already exists, check if ContentArtifacts need to be updated\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n # the artifact is already present in the database; update references\n # Creating one large query and one large dictionary\n to_update_ca_query |= ContentArtifact.objects.filter(\n content=d_content.content, relative_path=d_artifact.relative_path\n )\n key = (d_content.content.pk, d_artifact.relative_path)\n to_update_ca_artifact[key] = d_artifact.artifact\n # Query db once and update each object in memory for bulk_update call\n for content_artifact in to_update_ca_query.iterator():\n key = (content_artifact.content_id, content_artifact.relative_path)\n # Maybe remove dict elements after to reduce memory?\n content_artifact.artifact = to_update_ca_artifact[key]\n to_update_ca_bulk.append(content_artifact)\n # Sort the lists we're about to do bulk updates/creates on.\n # We know to_update_ca_bulk entries already are in the DB, so we can 
enforce\n # order just using pulp_id.\n to_update_ca_bulk.sort(key=lambda x: x.pulp_id)\n content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))\n ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)\n await self._post_save(batch)\n for declarative_content in batch:\n await self.put(declarative_content)\n\n async def _pre_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects prior to content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n async def _post_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects after content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n\nclass ResolveContentFutures(Stage):\n \"\"\"\n This stage resolves the futures in :class:`~pulpcore.plugin.stages.DeclarativeContent`.\n\n Futures results are set to the found/created :class:`~pulpcore.plugin.models.Content`.\n\n This is useful when data downloaded from the plugin API needs to be parsed by FirstStage to\n create additional :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be send down\n the pipeline. Consider an example where content type `Foo` references additional instances of a\n different content type `Bar`. Consider this code in FirstStage::\n\n # Create d_content and d_artifact for a `foo_a`\n foo_a = DeclarativeContent(...)\n # Send it in the pipeline\n await self.put(foo_a)\n\n ...\n\n foo_a_content = await foo_a.resolution() # awaits until the foo_a reaches this stage\n\n This creates a \"looping\" pattern, of sorts, where downloaded content at the end of the pipeline\n can introduce new additional to-be-downloaded content at the beginning of the pipeline.\n On the other hand, it can impose a substantial performance decrement of batching content in the\n earlier stages.\n If you want to drop a declarative content prematurely from the pipeline, use the function\n `resolve()` to unblock the coroutines awaiting the attached future and do not hand the content\n to the next stage.\n As a rule of thumb, sending more items into the pipeline first and awaiting their resolution\n later is better.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for d_content in self.items():\n d_content.resolve()\n await self.put(d_content)\n\n\nclass ContentAssociation(Stage):\n \"\"\"\n A Stages API stage that associates content units with `new_version`.\n\n This stage stores all content unit primary keys in memory before running. This is done to\n compute the units already associated but not received from `self._in_q`. These units are passed\n via `self._out_q` to the next stage as a :class:`django.db.models.query.QuerySet`.\n\n This stage creates a ProgressReport named 'Associating Content' that counts the number of units\n associated. Since it's a stream the total count isn't known until it's finished.\n\n If `mirror` was enabled, then content units may also be un-assocated (removed) from\n `new_version`. 
A ProgressReport named 'Un-Associating Content' is created that counts the number\n of units un-associated.\n\n Args:\n new_version (:class:`~pulpcore.plugin.models.RepositoryVersion`): The repo version this\n stage associates content with.\n mirror (bool): Whether or not to \"mirror\" the stream of DeclarativeContent - whether content\n not in the stream should be removed from the repository.\n args: unused positional arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n kwargs: unused keyword arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n \"\"\"\n\n def __init__(self, new_version, mirror, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.new_version = new_version\n self.allow_delete = mirror\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n with ProgressReport(message=\"Associating Content\", code=\"associating.content\") as pb:\n to_delete = set(self.new_version.content.values_list(\"pk\", flat=True))\n async for batch in self.batches():\n to_add = set()\n for d_content in batch:\n try:\n to_delete.remove(d_content.content.pk)\n except KeyError:\n to_add.add(d_content.content.pk)\n await self.put(d_content)\n\n if to_add:\n self.new_version.add_content(Content.objects.filter(pk__in=to_add))\n pb.increase_by(len(to_add))\n\n if self.allow_delete:\n with ProgressReport(\n message=\"Un-Associating Content\", code=\"unassociating.content\"\n ) as pb:\n if to_delete:\n self.new_version.remove_content(Content.objects.filter(pk__in=to_delete))\n pb.increase_by(len(to_delete))\n", "path": "pulpcore/plugin/stages/content_stages.py"}], "after_files": [{"content": "from collections import defaultdict\n\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import IntegrityError, transaction\nfrom django.db.models import Q\n\nfrom pulpcore.plugin.models import Content, ContentArtifact, ProgressReport\n\nfrom .api import Stage\n\n\nclass QueryExistingContents(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n This stage inspects any \"unsaved\" Content unit objects and searches for existing saved Content\n units inside Pulp with the same unit key. 
Any existing Content objects found, replace their\n \"unsaved\" counterpart in the :class:`~pulpcore.plugin.stages.DeclarativeContent` object.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to `self._out_q` after it has\n been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n content_q_by_type = defaultdict(lambda: Q(pk__in=[]))\n d_content_by_nat_key = defaultdict(list)\n for d_content in batch:\n if d_content.content._state.adding:\n model_type = type(d_content.content)\n unit_q = d_content.content.q()\n content_q_by_type[model_type] = content_q_by_type[model_type] | unit_q\n d_content_by_nat_key[d_content.content.natural_key()].append(d_content)\n\n for model_type, content_q in content_q_by_type.items():\n try:\n model_type.objects.filter(content_q).touch()\n except AttributeError:\n from pulpcore.app.loggers import deprecation_logger\n from gettext import gettext as _\n\n deprecation_logger.warning(\n _(\n \"As of pulpcore 3.14.5, plugins which declare custom ORM managers on \"\n \"their content classes should have those managers inherit from \"\n \"pulpcore.plugin.models.ContentManager. This will become a hard error \"\n \"in the future.\"\n )\n )\n for result in model_type.objects.filter(content_q).iterator():\n for d_content in d_content_by_nat_key[result.natural_key()]:\n d_content.content = result\n\n for d_content in batch:\n await self.put(d_content)\n\n\nclass ContentSaver(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. 
Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n Each \"unsaved\" Content objects is saved and a :class:`~pulpcore.plugin.models.ContentArtifact`\n objects too.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to after it has been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n content_artifact_bulk = []\n to_update_ca_query = ContentArtifact.objects.none()\n to_update_ca_bulk = []\n to_update_ca_artifact = {}\n with transaction.atomic():\n await self._pre_save(batch)\n\n # Process the batch in dc.content.natural_keys order.\n # This prevents deadlocks when we're processing the same/similar content\n # in concurrent workers.\n batch.sort(key=lambda x: \"\".join(map(str, x.content.natural_key())))\n for d_content in batch:\n # Are we saving to the database for the first time?\n content_already_saved = not d_content.content._state.adding\n if not content_already_saved:\n try:\n with transaction.atomic():\n d_content.content.save()\n except IntegrityError as e:\n try:\n d_content.content = d_content.content.__class__.objects.get(\n d_content.content.q()\n )\n except ObjectDoesNotExist:\n raise e\n else:\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n artifact = d_artifact.artifact\n else:\n # set to None for on-demand synced artifacts\n artifact = None\n content_artifact = ContentArtifact(\n content=d_content.content,\n artifact=artifact,\n relative_path=d_artifact.relative_path,\n )\n content_artifact_bulk.append(content_artifact)\n continue\n # When the Content already exists, check if ContentArtifacts need to be updated\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n # the artifact is already present in the database; update references\n # Creating one large query and one large dictionary\n to_update_ca_query |= ContentArtifact.objects.filter(\n content=d_content.content, relative_path=d_artifact.relative_path\n )\n key = (d_content.content.pk, d_artifact.relative_path)\n to_update_ca_artifact[key] = d_artifact.artifact\n # Query db once and update each object in memory for bulk_update call\n for content_artifact in to_update_ca_query.iterator():\n key = (content_artifact.content_id, content_artifact.relative_path)\n # Maybe remove dict elements after to reduce memory?\n content_artifact.artifact = to_update_ca_artifact[key]\n to_update_ca_bulk.append(content_artifact)\n\n # to_update_ca_bulk are the CAs that we know are already persisted.\n # We need to update their artifact_ids, and wish to do it in bulk to\n # avoid hundreds of round-trips to the database.\n #\n # To avoid deadlocks in high-concurrency environments with overlapping\n # content, we need to update the rows in some defined order. Unfortunately,\n # postgres doesn't support order-on-update - but it *does* support ordering\n # on select-for-update. So, we select-for-update, in pulp_id order, the\n # rows we're about to update as one db-call, and then do the update in a\n # second.\n ids = [k.pulp_id for k in to_update_ca_bulk]\n with transaction.atomic():\n # \"len()\" forces the QA to be evaluated. 
Using exist() or count() won't\n # work for us - Django is smart enough to either not-order, or even\n # not-emit, a select-for-update in these cases.\n #\n # To maximize performance, we make sure to only ask for pulp_ids, and\n # avoid instantiating a python-object for the affected CAs by using\n # values_list()\n len(\n ContentArtifact.objects.filter(pulp_id__in=ids)\n .only(\"pulp_id\")\n .order_by(\"pulp_id\")\n .select_for_update()\n .values_list()\n )\n ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n\n # To avoid a similar deadlock issue when calling get_or_create, we sort the\n # \"new\" CAs to make sure inserts happen in a defined order. Since we can't\n # trust the pulp_id (by the time we go to create a CA, it may already exist,\n # and be replaced by the 'real' one), we sort by their \"natural key\".\n content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))\n ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)\n await self._post_save(batch)\n for declarative_content in batch:\n await self.put(declarative_content)\n\n async def _pre_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects prior to content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n async def _post_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects after content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n\nclass ResolveContentFutures(Stage):\n \"\"\"\n This stage resolves the futures in :class:`~pulpcore.plugin.stages.DeclarativeContent`.\n\n Futures results are set to the found/created :class:`~pulpcore.plugin.models.Content`.\n\n This is useful when data downloaded from the plugin API needs to be parsed by FirstStage to\n create additional :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be send down\n the pipeline. Consider an example where content type `Foo` references additional instances of a\n different content type `Bar`. 
Consider this code in FirstStage::\n\n # Create d_content and d_artifact for a `foo_a`\n foo_a = DeclarativeContent(...)\n # Send it in the pipeline\n await self.put(foo_a)\n\n ...\n\n foo_a_content = await foo_a.resolution() # awaits until the foo_a reaches this stage\n\n This creates a \"looping\" pattern, of sorts, where downloaded content at the end of the pipeline\n can introduce new additional to-be-downloaded content at the beginning of the pipeline.\n On the other hand, it can impose a substantial performance decrement of batching content in the\n earlier stages.\n If you want to drop a declarative content prematurely from the pipeline, use the function\n `resolve()` to unblock the coroutines awaiting the attached future and do not hand the content\n to the next stage.\n As a rule of thumb, sending more items into the pipeline first and awaiting their resolution\n later is better.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for d_content in self.items():\n d_content.resolve()\n await self.put(d_content)\n\n\nclass ContentAssociation(Stage):\n \"\"\"\n A Stages API stage that associates content units with `new_version`.\n\n This stage stores all content unit primary keys in memory before running. This is done to\n compute the units already associated but not received from `self._in_q`. These units are passed\n via `self._out_q` to the next stage as a :class:`django.db.models.query.QuerySet`.\n\n This stage creates a ProgressReport named 'Associating Content' that counts the number of units\n associated. Since it's a stream the total count isn't known until it's finished.\n\n If `mirror` was enabled, then content units may also be un-assocated (removed) from\n `new_version`. A ProgressReport named 'Un-Associating Content' is created that counts the number\n of units un-associated.\n\n Args:\n new_version (:class:`~pulpcore.plugin.models.RepositoryVersion`): The repo version this\n stage associates content with.\n mirror (bool): Whether or not to \"mirror\" the stream of DeclarativeContent - whether content\n not in the stream should be removed from the repository.\n args: unused positional arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n kwargs: unused keyword arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n \"\"\"\n\n def __init__(self, new_version, mirror, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.new_version = new_version\n self.allow_delete = mirror\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n with ProgressReport(message=\"Associating Content\", code=\"associating.content\") as pb:\n to_delete = set(self.new_version.content.values_list(\"pk\", flat=True))\n async for batch in self.batches():\n to_add = set()\n for d_content in batch:\n try:\n to_delete.remove(d_content.content.pk)\n except KeyError:\n to_add.add(d_content.content.pk)\n await self.put(d_content)\n\n if to_add:\n self.new_version.add_content(Content.objects.filter(pk__in=to_add))\n pb.increase_by(len(to_add))\n\n if self.allow_delete:\n with ProgressReport(\n message=\"Un-Associating Content\", code=\"unassociating.content\"\n ) as pb:\n if to_delete:\n self.new_version.remove_content(Content.objects.filter(pk__in=to_delete))\n pb.increase_by(len(to_delete))\n", "path": "pulpcore/plugin/stages/content_stages.py"}]}
| 3,900 | 661 |
gh_patches_debug_19144
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-2765
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change in mpv options for two-dash options breaks its connection to Streamlink
Lately mpv changed the way it handles [two-dash options](https://github.com/mpv-player/mpv/commit/d3cef97ad38fb027262a905bd82e1d3d2549aec7):
```
--- mpv 0.31.1 ---
- change behavior when using legacy option syntax with options that start
with two dashes (``--`` instead of a ``-``). Now, using the recommended
syntax is required for options starting with ``--``, which means an option
value must be strictly passed after a ``=``, instead of as separate
argument. For example, ``--log-file f.txt`` was previously accepted and
behaved like ``--log-file=f.txt``, but now causes an error. Use of legacy
syntax that is still supported now prints a deprecation warning.
```
So now mpv just closes with no errors when used with streamlink, because streamlink creates an mpv command with `--title "title"` instead of `--title="title"`:
```
[15:57] zouhair@box <5402:63> [~] -> :)
┖╴$ streamlink --verbose-player --loglevel debug https://www.twitch.tv/videos/533706108 best
[cli][debug] OS: CYGWIN_NT-10.0-18362-3.1.2-340.x86_64-x86_64-64bit-WindowsPE
[cli][debug] Python: 3.7.4
[cli][debug] Streamlink: 1.3.0
[cli][debug] Requests(2.22.0), Socks(1.7.0), Websocket(0.56.0)
[cli][info] Found matching plugin twitch for URL https://www.twitch.tv/videos/533706108
[plugin.twitch][debug] Getting video HLS streams for gamesdonequick
[utils.l10n][debug] Language code: en_US
[cli][info] Available streams: audio, 160p (worst), 360p, 480p, 720p, 720p60, 1080p60 (best)
[cli][info] Opening stream: 1080p60 (hls)
[cli][info] Starting player: /home/zouhair/mpv.exe --cache 2048
[cli.output][debug] Calling: /home/zouhair/mpv.exe --cache 2048 --title https://www.twitch.tv/videos/533706108 https://vod-secure.twitch.tv/7dc8f3a102cc1793f7bc_gamesdonequick_36629634272_1358097847/chunked/index-dvr.m3u8
[15:57] zouhair@box <5403:64> [~] -> :)
┖╴$ echo $?
0
[15:57] zouhair@box <5404:65> [~] -> :)
┖╴$
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink_cli/output.py`
Content:
```
1 import logging
2 import os
3 import shlex
4 import subprocess
5 import sys
6 from time import sleep
7
8 from streamlink.utils.encoding import get_filesystem_encoding, maybe_encode, maybe_decode
9 from .compat import is_win32, stdout
10 from .constants import DEFAULT_PLAYER_ARGUMENTS, SUPPORTED_PLAYERS
11 from .utils import ignored
12
13 if is_win32:
14 import msvcrt
15
16 log = logging.getLogger("streamlink.cli.output")
17
18
19 class Output(object):
20 def __init__(self):
21 self.opened = False
22
23 def open(self):
24 self._open()
25 self.opened = True
26
27 def close(self):
28 if self.opened:
29 self._close()
30
31 self.opened = False
32
33 def write(self, data):
34 if not self.opened:
35 raise IOError("Output is not opened")
36
37 return self._write(data)
38
39 def _open(self):
40 pass
41
42 def _close(self):
43 pass
44
45 def _write(self, data):
46 pass
47
48
49 class FileOutput(Output):
50 def __init__(self, filename=None, fd=None, record=None):
51 super(FileOutput, self).__init__()
52 self.filename = filename
53 self.fd = fd
54 self.record = record
55
56 def _open(self):
57 if self.filename:
58 self.fd = open(self.filename, "wb")
59
60 if self.record:
61 self.record.open()
62
63 if is_win32:
64 msvcrt.setmode(self.fd.fileno(), os.O_BINARY)
65
66 def _close(self):
67 if self.fd is not stdout:
68 self.fd.close()
69 if self.record:
70 self.record.close()
71
72 def _write(self, data):
73 self.fd.write(data)
74 if self.record:
75 self.record.write(data)
76
77
78 class PlayerOutput(Output):
79 PLAYER_TERMINATE_TIMEOUT = 10.0
80
81 def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=None,
82 namedpipe=None, record=None, title=None):
83 super(PlayerOutput, self).__init__()
84 self.cmd = cmd
85 self.args = args
86 self.kill = kill
87 self.call = call
88 self.quiet = quiet
89
90 self.filename = filename
91 self.namedpipe = namedpipe
92 self.http = http
93 self.title = title
94 self.player = None
95 self.player_name = self.supported_player(self.cmd)
96 self.record = record
97
98 if self.namedpipe or self.filename or self.http:
99 self.stdin = sys.stdin
100 else:
101 self.stdin = subprocess.PIPE
102
103 if self.quiet:
104 self.stdout = open(os.devnull, "w")
105 self.stderr = open(os.devnull, "w")
106 else:
107 self.stdout = sys.stdout
108 self.stderr = sys.stderr
109
110 @property
111 def running(self):
112 sleep(0.5)
113 return self.player.poll() is None
114
115 @classmethod
116 def supported_player(cls, cmd):
117 """
118 Check if the current player supports adding a title
119
120 :param cmd: command to test
121 :return: name of the player|None
122 """
123 if not is_win32:
124 # under a POSIX system use shlex to find the actual command
125 # under windows this is not an issue because executables end in .exe
126 cmd = shlex.split(cmd)[0]
127
128 cmd = os.path.basename(cmd.lower())
129 for player, possiblecmds in SUPPORTED_PLAYERS.items():
130 for possiblecmd in possiblecmds:
131 if cmd.startswith(possiblecmd):
132 return player
133
134 @classmethod
135 def _mpv_title_escape(cls, title_string):
136 # mpv has a "disable property-expansion" token which must be handled in order to accurately represent $$ in title
137 if r'\$>' in title_string:
138 processed_title = ""
139 double_dollars = True
140 i = dollars = 0
141 while i < len(title_string):
142 if double_dollars:
143 if title_string[i] == "\\":
144 if title_string[i + 1] == "$":
145 processed_title += "$"
146 dollars += 1
147 i += 1
148 if title_string[i + 1] == ">" and dollars % 2 == 1:
149 double_dollars = False
150 processed_title += ">"
151 i += 1
152 else:
153 processed_title += "\\"
154 elif title_string[i] == "$":
155 processed_title += "$$"
156 else:
157 dollars = 0
158 processed_title += title_string[i]
159 else:
160 if title_string[i:i + 2] == "\\$":
161 processed_title += "$"
162 i += 1
163 else:
164 processed_title += title_string[i]
165 i += 1
166 return processed_title
167 else:
168 # not possible for property-expansion to be disabled, happy days
169 return title_string.replace("$", "$$").replace(r'\$$', "$")
170
171 def _create_arguments(self):
172 if self.namedpipe:
173 filename = self.namedpipe.path
174 elif self.filename:
175 filename = self.filename
176 elif self.http:
177 filename = self.http.url
178 else:
179 filename = "-"
180 extra_args = []
181
182 if self.title is not None:
183 # vlc
184 if self.player_name == "vlc":
185 # see https://wiki.videolan.org/Documentation:Format_String/, allow escaping with \$
186 self.title = self.title.replace("$", "$$").replace(r'\$$', "$")
187 extra_args.extend(["--input-title-format", self.title])
188
189 # mpv
190 if self.player_name == "mpv":
191 # see https://mpv.io/manual/stable/#property-expansion, allow escaping with \$, respect mpv's $>
192 self.title = self._mpv_title_escape(self.title)
193 extra_args.extend(["--title", self.title])
194
195 # potplayer
196 if self.player_name == "potplayer":
197 if filename != "-":
198 # PotPlayer - About - Command Line
199 # You can specify titles for URLs by separating them with a backslash (\) at the end of URLs. ("http://...\title of this url")
200 self.title = self.title.replace('"', '')
201 filename = filename[:-1] + '\\' + self.title + filename[-1]
202
203 args = self.args.format(filename=filename)
204 cmd = self.cmd
205
206 # player command
207 if is_win32:
208 eargs = maybe_decode(subprocess.list2cmdline(extra_args))
209 # do not insert and extra " " when there are no extra_args
210 return maybe_encode(u' '.join([cmd] + ([eargs] if eargs else []) + [args]),
211 encoding=get_filesystem_encoding())
212 return shlex.split(cmd) + extra_args + shlex.split(args)
213
214 def _open(self):
215 try:
216 if self.record:
217 self.record.open()
218 if self.call and self.filename:
219 self._open_call()
220 else:
221 self._open_subprocess()
222 finally:
223 if self.quiet:
224 # Output streams no longer needed in parent process
225 self.stdout.close()
226 self.stderr.close()
227
228 def _open_call(self):
229 args = self._create_arguments()
230 if is_win32:
231 fargs = args
232 else:
233 fargs = subprocess.list2cmdline(args)
234 log.debug(u"Calling: {0}".format(fargs))
235 subprocess.call(args,
236 stdout=self.stdout,
237 stderr=self.stderr)
238
239 def _open_subprocess(self):
240 # Force bufsize=0 on all Python versions to avoid writing the
241 # unflushed buffer when closing a broken input pipe
242 args = self._create_arguments()
243 if is_win32:
244 fargs = args
245 else:
246 fargs = subprocess.list2cmdline(args)
247 log.debug(u"Opening subprocess: {0}".format(fargs))
248 self.player = subprocess.Popen(args,
249 stdin=self.stdin, bufsize=0,
250 stdout=self.stdout,
251 stderr=self.stderr)
252 # Wait 0.5 seconds to see if program exited prematurely
253 if not self.running:
254 raise OSError("Process exited prematurely")
255
256 if self.namedpipe:
257 self.namedpipe.open("wb")
258 elif self.http:
259 self.http.open()
260
261 def _close(self):
262 # Close input to the player first to signal the end of the
263 # stream and allow the player to terminate of its own accord
264 if self.namedpipe:
265 self.namedpipe.close()
266 elif self.http:
267 self.http.close()
268 elif not self.filename:
269 self.player.stdin.close()
270
271 if self.record:
272 self.record.close()
273
274 if self.kill:
275 with ignored(Exception):
276 self.player.terminate()
277 if not is_win32:
278 t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT
279 while self.player.poll() is None and t < timeout:
280 sleep(0.5)
281 t += 0.5
282
283 if not self.player.returncode:
284 self.player.kill()
285 self.player.wait()
286
287 def _write(self, data):
288 if self.record:
289 self.record.write(data)
290
291 if self.namedpipe:
292 self.namedpipe.write(data)
293 elif self.http:
294 self.http.write(data)
295 else:
296 self.player.stdin.write(data)
297
298
299 __all__ = ["PlayerOutput", "FileOutput"]
300
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py
--- a/src/streamlink_cli/output.py
+++ b/src/streamlink_cli/output.py
@@ -190,7 +190,7 @@
if self.player_name == "mpv":
# see https://mpv.io/manual/stable/#property-expansion, allow escaping with \$, respect mpv's $>
self.title = self._mpv_title_escape(self.title)
- extra_args.extend(["--title", self.title])
+ extra_args.append("--title={}".format(self.title))
# potplayer
if self.player_name == "potplayer":
@@ -202,7 +202,7 @@
args = self.args.format(filename=filename)
cmd = self.cmd
-
+
# player command
if is_win32:
eargs = maybe_decode(subprocess.list2cmdline(extra_args))
|
{"golden_diff": "diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py\n--- a/src/streamlink_cli/output.py\n+++ b/src/streamlink_cli/output.py\n@@ -190,7 +190,7 @@\n if self.player_name == \"mpv\":\n # see https://mpv.io/manual/stable/#property-expansion, allow escaping with \\$, respect mpv's $>\n self.title = self._mpv_title_escape(self.title)\n- extra_args.extend([\"--title\", self.title])\n+ extra_args.append(\"--title={}\".format(self.title))\n \n # potplayer\n if self.player_name == \"potplayer\":\n@@ -202,7 +202,7 @@\n \n args = self.args.format(filename=filename)\n cmd = self.cmd\n- \n+\n # player command\n if is_win32:\n eargs = maybe_decode(subprocess.list2cmdline(extra_args))\n", "issue": "Change in mpv options for 2 dashes options breaks it connection to Streamlink\nlately mpv changed the way they handle [2 dashes options](https://github.com/mpv-player/mpv/commit/d3cef97ad38fb027262a905bd82e1d3d2549aec7)\r\n\r\n```\r\n --- mpv 0.31.1 ---\r\n - change behavior when using legacy option syntax with options that start\r\n with two dashes (``--`` instead of a ``-``). Now, using the recommended\r\n syntax is required for options starting with ``--``, which means an option\r\n value must be strictly passed after a ``=``, instead of as separate\r\n argument. For example, ``--log-file f.txt`` was previously accepted and\r\n behaved like ``--log-file=f.txt``, but now causes an error. Use of legacy\r\n syntax that is still supported now prints a deprecation warning.\r\n```\r\n\r\nSo now mpv just closes with no errors when used with streamlink as streamlink creates an mpv command with --title \"title\" instead of --title=\"title\"\r\n\r\n```\r\n[15:57] zouhair@box <5402:63> [~] -> :)\r\n\u2516\u2574$ streamlink --verbose-player --loglevel debug https://www.twitch.tv/videos/533706108 best\r\n[cli][debug] OS: CYGWIN_NT-10.0-18362-3.1.2-340.x86_64-x86_64-64bit-WindowsPE\r\n[cli][debug] Python: 3.7.4\r\n[cli][debug] Streamlink: 1.3.0\r\n[cli][debug] Requests(2.22.0), Socks(1.7.0), Websocket(0.56.0)\r\n[cli][info] Found matching plugin twitch for URL https://www.twitch.tv/videos/533706108\r\n[plugin.twitch][debug] Getting video HLS streams for gamesdonequick\r\n[utils.l10n][debug] Language code: en_US\r\n[cli][info] Available streams: audio, 160p (worst), 360p, 480p, 720p, 720p60, 1080p60 (best)\r\n[cli][info] Opening stream: 1080p60 (hls)\r\n[cli][info] Starting player: /home/zouhair/mpv.exe --cache 2048\r\n[cli.output][debug] Calling: /home/zouhair/mpv.exe --cache 2048 --title https://www.twitch.tv/videos/533706108 https://vod-secure.twitch.tv/7dc8f3a102cc1793f7bc_gamesdonequick_36629634272_1358097847/chunked/index-dvr.m3u8\r\n[15:57] zouhair@box <5403:64> [~] -> :)\r\n\u2516\u2574$ echo $?\r\n0\r\n[15:57] zouhair@box <5404:65> [~] -> :)\r\n\u2516\u2574$\r\n```\n", "before_files": [{"content": "import logging\nimport os\nimport shlex\nimport subprocess\nimport sys\nfrom time import sleep\n\nfrom streamlink.utils.encoding import get_filesystem_encoding, maybe_encode, maybe_decode\nfrom .compat import is_win32, stdout\nfrom .constants import DEFAULT_PLAYER_ARGUMENTS, SUPPORTED_PLAYERS\nfrom .utils import ignored\n\nif is_win32:\n import msvcrt\n\nlog = logging.getLogger(\"streamlink.cli.output\")\n\n\nclass Output(object):\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise IOError(\"Output is 
not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None, record=None):\n super(FileOutput, self).__init__()\n self.filename = filename\n self.fd = fd\n self.record = record\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if self.record:\n self.record.open()\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n if self.record:\n self.record.close()\n\n def _write(self, data):\n self.fd.write(data)\n if self.record:\n self.record.write(data)\n\n\nclass PlayerOutput(Output):\n PLAYER_TERMINATE_TIMEOUT = 10.0\n\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=None,\n namedpipe=None, record=None, title=None):\n super(PlayerOutput, self).__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n self.title = title\n self.player = None\n self.player_name = self.supported_player(self.cmd)\n self.record = record\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n @property\n def running(self):\n sleep(0.5)\n return self.player.poll() is None\n\n @classmethod\n def supported_player(cls, cmd):\n \"\"\"\n Check if the current player supports adding a title\n\n :param cmd: command to test\n :return: name of the player|None\n \"\"\"\n if not is_win32:\n # under a POSIX system use shlex to find the actual command\n # under windows this is not an issue because executables end in .exe\n cmd = shlex.split(cmd)[0]\n\n cmd = os.path.basename(cmd.lower())\n for player, possiblecmds in SUPPORTED_PLAYERS.items():\n for possiblecmd in possiblecmds:\n if cmd.startswith(possiblecmd):\n return player\n\n @classmethod\n def _mpv_title_escape(cls, title_string):\n # mpv has a \"disable property-expansion\" token which must be handled in order to accurately represent $$ in title\n if r'\\$>' in title_string:\n processed_title = \"\"\n double_dollars = True\n i = dollars = 0\n while i < len(title_string):\n if double_dollars:\n if title_string[i] == \"\\\\\":\n if title_string[i + 1] == \"$\":\n processed_title += \"$\"\n dollars += 1\n i += 1\n if title_string[i + 1] == \">\" and dollars % 2 == 1:\n double_dollars = False\n processed_title += \">\"\n i += 1\n else:\n processed_title += \"\\\\\"\n elif title_string[i] == \"$\":\n processed_title += \"$$\"\n else:\n dollars = 0\n processed_title += title_string[i]\n else:\n if title_string[i:i + 2] == \"\\\\$\":\n processed_title += \"$\"\n i += 1\n else:\n processed_title += title_string[i]\n i += 1\n return processed_title\n else:\n # not possible for property-expansion to be disabled, happy days\n return title_string.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n extra_args = []\n\n if self.title is not None:\n # vlc\n if self.player_name == \"vlc\":\n # see 
https://wiki.videolan.org/Documentation:Format_String/, allow escaping with \\$\n self.title = self.title.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n extra_args.extend([\"--input-title-format\", self.title])\n\n # mpv\n if self.player_name == \"mpv\":\n # see https://mpv.io/manual/stable/#property-expansion, allow escaping with \\$, respect mpv's $>\n self.title = self._mpv_title_escape(self.title)\n extra_args.extend([\"--title\", self.title])\n\n # potplayer\n if self.player_name == \"potplayer\":\n if filename != \"-\":\n # PotPlayer - About - Command Line\n # You can specify titles for URLs by separating them with a backslash (\\) at the end of URLs. (\"http://...\\title of this url\")\n self.title = self.title.replace('\"', '')\n filename = filename[:-1] + '\\\\' + self.title + filename[-1]\n\n args = self.args.format(filename=filename)\n cmd = self.cmd\n \n # player command\n if is_win32:\n eargs = maybe_decode(subprocess.list2cmdline(extra_args))\n # do not insert and extra \" \" when there are no extra_args\n return maybe_encode(u' '.join([cmd] + ([eargs] if eargs else []) + [args]),\n encoding=get_filesystem_encoding())\n return shlex.split(cmd) + extra_args + shlex.split(args)\n\n def _open(self):\n try:\n if self.record:\n self.record.open()\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(u\"Calling: {0}\".format(fargs))\n subprocess.call(args,\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(u\"Opening subprocess: {0}\".format(fargs))\n self.player = subprocess.Popen(args,\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.record:\n self.record.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.terminate()\n if not is_win32:\n t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n while self.player.poll() is None and t < timeout:\n sleep(0.5)\n t += 0.5\n\n if not self.player.returncode:\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.record:\n self.record.write(data)\n\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}], "after_files": [{"content": "import logging\nimport os\nimport shlex\nimport subprocess\nimport sys\nfrom time import sleep\n\nfrom streamlink.utils.encoding import get_filesystem_encoding, maybe_encode, maybe_decode\nfrom .compat import 
is_win32, stdout\nfrom .constants import DEFAULT_PLAYER_ARGUMENTS, SUPPORTED_PLAYERS\nfrom .utils import ignored\n\nif is_win32:\n import msvcrt\n\nlog = logging.getLogger(\"streamlink.cli.output\")\n\n\nclass Output(object):\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise IOError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None, record=None):\n super(FileOutput, self).__init__()\n self.filename = filename\n self.fd = fd\n self.record = record\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if self.record:\n self.record.open()\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n if self.record:\n self.record.close()\n\n def _write(self, data):\n self.fd.write(data)\n if self.record:\n self.record.write(data)\n\n\nclass PlayerOutput(Output):\n PLAYER_TERMINATE_TIMEOUT = 10.0\n\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=None,\n namedpipe=None, record=None, title=None):\n super(PlayerOutput, self).__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n self.title = title\n self.player = None\n self.player_name = self.supported_player(self.cmd)\n self.record = record\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n @property\n def running(self):\n sleep(0.5)\n return self.player.poll() is None\n\n @classmethod\n def supported_player(cls, cmd):\n \"\"\"\n Check if the current player supports adding a title\n\n :param cmd: command to test\n :return: name of the player|None\n \"\"\"\n if not is_win32:\n # under a POSIX system use shlex to find the actual command\n # under windows this is not an issue because executables end in .exe\n cmd = shlex.split(cmd)[0]\n\n cmd = os.path.basename(cmd.lower())\n for player, possiblecmds in SUPPORTED_PLAYERS.items():\n for possiblecmd in possiblecmds:\n if cmd.startswith(possiblecmd):\n return player\n\n @classmethod\n def _mpv_title_escape(cls, title_string):\n # mpv has a \"disable property-expansion\" token which must be handled in order to accurately represent $$ in title\n if r'\\$>' in title_string:\n processed_title = \"\"\n double_dollars = True\n i = dollars = 0\n while i < len(title_string):\n if double_dollars:\n if title_string[i] == \"\\\\\":\n if title_string[i + 1] == \"$\":\n processed_title += \"$\"\n dollars += 1\n i += 1\n if title_string[i + 1] == \">\" and dollars % 2 == 1:\n double_dollars = False\n processed_title += \">\"\n i += 1\n else:\n processed_title += \"\\\\\"\n elif title_string[i] == \"$\":\n processed_title += \"$$\"\n else:\n dollars = 0\n processed_title += title_string[i]\n else:\n if title_string[i:i + 2] == \"\\\\$\":\n processed_title += \"$\"\n i += 1\n else:\n processed_title += title_string[i]\n i += 1\n return 
processed_title\n else:\n # not possible for property-expansion to be disabled, happy days\n return title_string.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n extra_args = []\n\n if self.title is not None:\n # vlc\n if self.player_name == \"vlc\":\n # see https://wiki.videolan.org/Documentation:Format_String/, allow escaping with \\$\n self.title = self.title.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n extra_args.extend([\"--input-title-format\", self.title])\n\n # mpv\n if self.player_name == \"mpv\":\n # see https://mpv.io/manual/stable/#property-expansion, allow escaping with \\$, respect mpv's $>\n self.title = self._mpv_title_escape(self.title)\n extra_args.append(\"--title={}\".format(self.title))\n\n # potplayer\n if self.player_name == \"potplayer\":\n if filename != \"-\":\n # PotPlayer - About - Command Line\n # You can specify titles for URLs by separating them with a backslash (\\) at the end of URLs. (\"http://...\\title of this url\")\n self.title = self.title.replace('\"', '')\n filename = filename[:-1] + '\\\\' + self.title + filename[-1]\n\n args = self.args.format(filename=filename)\n cmd = self.cmd\n\n # player command\n if is_win32:\n eargs = maybe_decode(subprocess.list2cmdline(extra_args))\n # do not insert and extra \" \" when there are no extra_args\n return maybe_encode(u' '.join([cmd] + ([eargs] if eargs else []) + [args]),\n encoding=get_filesystem_encoding())\n return shlex.split(cmd) + extra_args + shlex.split(args)\n\n def _open(self):\n try:\n if self.record:\n self.record.open()\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(u\"Calling: {0}\".format(fargs))\n subprocess.call(args,\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(u\"Opening subprocess: {0}\".format(fargs))\n self.player = subprocess.Popen(args,\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.record:\n self.record.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.terminate()\n if not is_win32:\n t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n while self.player.poll() is None and t < timeout:\n sleep(0.5)\n t += 0.5\n\n if not self.player.returncode:\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.record:\n 
self.record.write(data)\n\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}]}
| 3,828 | 202 |
gh_patches_debug_13191
|
rasdani/github-patches
|
git_diff
|
davanstrien__flyswot-156
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Catch the case where no files are found before running the prediction function
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/flyswot/inference.py`
Content:
```
1 """Inference functionality"""
2 import csv
3 import mimetypes
4 import time
5 from abc import ABC
6 from abc import abstractmethod
7 from dataclasses import asdict
8 from dataclasses import dataclass
9 from datetime import datetime
10 from datetime import timedelta
11 from pathlib import Path
12 from typing import Iterable
13 from typing import Iterator
14 from typing import List
15 from typing import Union
16
17 import numpy as np
18 import onnxruntime as rt # type: ignore
19 import typer
20 from PIL import Image # type: ignore
21 from rich.table import Table
22 from toolz import itertoolz
23
24 from flyswot import core
25 from flyswot import models
26 from flyswot.console import console
27
28 app = typer.Typer()
29
30
31 @dataclass
32 class ImagePredictionItem:
33 """Prediction for an image.
34
35 Attributes:
36 path: The Path to the image
37 predicted_label: The predicted label i.e. the argmax value for the prediction tensor
38 condidence: The confidence for `predicted_label` i.e. the max value for prediction tensor
39 """
40
41 path: Path
42 predicted_label: str
43 confidence: float
44
45 def __post_init__(self) -> Union[Path, None]:
46 """attempt to get absolute path"""
47 try:
48 self.path: Path = self.path.absolute()
49 except AttributeError:
50 pass
51
52
53 @dataclass
54 class PredictionBatch:
55 """Container for ImagePredictionItems"""
56
57 batch: List[ImagePredictionItem]
58
59 def __post_init__(self):
60 """Returns a list of all predicted labels in batch"""
61 self.batch_labels: Iterator[str] = (item.predicted_label for item in self.batch)
62
63
64 image_extensions = {k for k, v in mimetypes.types_map.items() if v.startswith("image/")}
65
66
67 @app.command()
68 def predict_image(
69 image: Path = typer.Argument(..., readable=True, resolve_path=True)
70 ) -> None:
71 """Predict a single image"""
72 pass # pragma: no cover
73
74
75 @app.command(name="directory")
76 def predict_directory(
77 directory: Path = typer.Argument(
78 ...,
79 readable=True,
80 resolve_path=True,
81 help="Directory to start searching for images from",
82 ),
83 csv_save_dir: Path = typer.Argument(
84 ...,
85 writable=True,
86 resolve_path=True,
87 help="Directory used to store the csv report",
88 ),
89 pattern: str = typer.Option("fse", help="Pattern used to filter image filenames"),
90 bs: int = typer.Option(16, help="Batch Size"),
91 image_format: str = typer.Option(
92 ".tif", help="Image format for flyswot to use for predictions"
93 ),
94 check_latest: bool = typer.Option(True, help="Use latest available model"),
95 ):
96 """Predicts against all images stored under DIRECTORY which match PATTERN in the filename.
97
98 By default searches for filenames containing 'fse'.
99
100 Creates a CSV report saved to `csv_save_dir`
101 """
102 start_time = time.perf_counter()
103 model_dir = models.ensure_model_dir()
104 # TODO add load learner function that can be passed a model name
105 model_parts = models.ensure_model(model_dir, check_latest)
106 model = model_parts.model
107 vocab = models.load_vocab(model_parts.vocab)
108 onnxinference = OnnxInferenceSession(model, vocab)
109 files = list(core.get_image_files_from_pattern(directory, pattern, image_format))
110 typer.echo(f"Found {len(files)} files matching {pattern} in {directory}")
111 csv_fname = create_csv_fname(csv_save_dir)
112 create_csv_header(csv_fname)
113 with typer.progressbar(length=len(files)) as progress:
114 all_preds = []
115 predictions = []
116 for batch in itertoolz.partition_all(bs, files):
117 batch_predictions = onnxinference.predict_batch(batch, bs)
118 all_preds.append(batch_predictions.batch_labels)
119 predictions.append(batch_predictions)
120 progress.update(len(batch))
121 write_batch_preds_to_csv(csv_fname, batch_predictions)
122 all_preds = list(itertoolz.concat(all_preds))
123 typer.echo(f"CSV report stored in {csv_fname}")
124 delta = timedelta(seconds=time.perf_counter() - start_time)
125 typer.echo(f"Time taken to run: {str(delta)}")
126 print_table(all_preds)
127
128
129 def print_table(decoded) -> None:
130 """Prints table summary of predicted labels"""
131 table = Table(show_header=True, title="Prediction summary")
132 table.add_column(
133 "Class",
134 )
135 table.add_column("Count")
136 table.add_column("Percentage")
137 total = len(decoded)
138 frequencies = itertoolz.frequencies(decoded)
139 for is_last_element, var in core.signal_last(frequencies.items()):
140 key, value = var
141 count = value
142 percentage = round((count / total) * 100, 2)
143 if is_last_element:
144 table.add_row(key, str(count), f"{percentage}", end_section=True)
145 table.add_row("Total", str(total), "")
146 else:
147 table.add_row(key, str(count), f"{percentage}")
148 console.print(table)
149
150
151 def create_csv_fname(csv_directory: Path) -> Path:
152 """Creates a csv filename"""
153 date_now = datetime.now()
154 date_now = date_now.strftime("%Y_%m_%d_%H_%M")
155 fname = Path(date_now + ".csv")
156 return Path(csv_directory / fname)
157
158
159 def create_csv_header(csv_path: Path) -> None:
160 """Creates a header for csv `csv_path`"""
161 with open(csv_path, mode="w", newline="") as csv_file:
162 field_names = ["path", "directory", "predicted_label", "confidence"]
163 writer = csv.DictWriter(csv_file, fieldnames=field_names)
164 writer.writeheader()
165
166
167 def write_batch_preds_to_csv(csv_fpath: Path, predictions: PredictionBatch) -> None:
168 """Appends `predictions` batch to `csv_path`"""
169 with open(csv_fpath, mode="a", newline="") as csv_file:
170 field_names = ["path", "directory", "predicted_label", "confidence"]
171 writer = csv.DictWriter(csv_file, fieldnames=field_names)
172 for pred in predictions.batch:
173 row = asdict(pred)
174 row["directory"] = pred.path.parent
175 writer.writerow(row)
176
177
178 class InferenceSession(ABC):
179 """Abstract class for inference sessions"""
180
181 @abstractmethod
182 def __init__(self, model: Path, vocab: List):
183 """Inference Sessions should init from a model file and vocab"""
184 self.model = model
185 self.vocab = vocab
186
187 @abstractmethod
188 def predict_image(self, image: Path):
189 """Predict a single image"""
190 pass
191
192 @abstractmethod
193 def predict_batch(self, model: Path, batch: Iterable[Path], bs: int):
194 """Predict a batch"""
195 pass
196
197
198 def softmax(x):
199 """return softmax of `x`"""
200 x = x.reshape(-1)
201 e_x = np.exp(x - np.max(x))
202 return e_x / e_x.sum(axis=0)
203
204
205 # class FastaiInferenceModel(InferenceSession):
206 # def __init__(self, model):
207 # self.model = model
208 # self.learn = load_learner(model)
209
210 # def predict_image(self, image: Path) -> Any:
211 # return self.learn.predict(image)
212
213 # def predict_batch(self, batch: Iterable[Path], bs: int) -> PredictionBatch:
214 # test_dl = self.learn.dls.test_dl(batch, bs=bs)
215 # vocab = dict(enumerate(self.learn.dls.vocab))
216 # with self.learn.no_bar():
217 # fastai_preds: Any = self.learn.get_preds(dl=test_dl, with_decoded=True)
218 # prediction_tensors: Iterable[Any] = fastai_preds[0]
219 # prediction_items = []
220 # for file, pred in zip(batch, prediction_tensors):
221 # arg_max = int(np.array(pred).argmax())
222 # predicted_label = vocab[int(arg_max)]
223 # confidence = float(np.array(pred).max())
224 # prediction_items.append(
225 # ImagePredictionItem(file, predicted_label, confidence)
226 # )
227 # return PredictionBatch(prediction_items)
228
229
230 class OnnxInferenceSession(InferenceSession):
231 """onnx inference session"""
232
233 def __init__(self, model: Path, vocab: Path):
234 """Create onnx session"""
235 self.model = model
236 self.session = rt.InferenceSession(str(model))
237
238 self.vocab = vocab
239 self.vocab_mapping = dict(enumerate(self.vocab))
240
241 def _load_vocab(self, vocab: Path) -> List:
242 with open(vocab, "r") as f:
243 return [item.strip("\n") for item in f.readlines()]
244
245 def predict_image(self, image: Path):
246 """Predict a single image"""
247 img = self._load_image(image)
248 raw_result = self.session.run(["output"], {"image": img})
249 pred = self._postprocess(raw_result)
250 arg_max = int(np.array(pred).argmax())
251 predicted_label = self.vocab_mapping[int(arg_max)]
252 confidence = float(np.array(pred).max())
253 return ImagePredictionItem(image, predicted_label, confidence)
254
255 def _preprocess(self, input_data: np.ndarray) -> np.ndarray:
256 # converts the input data into the float32 input for onnx
257 img_data = input_data.astype("float32")
258
259 # normalize
260 mean_vec = np.array([0.485, 0.456, 0.406])
261 stddev_vec = np.array([0.229, 0.224, 0.225])
262 norm_img_data = np.zeros(img_data.shape).astype("float32")
263 for i in range(img_data.shape[0]):
264 norm_img_data[i, :, :] = (
265 img_data[i, :, :] / 255 - mean_vec[i]
266 ) / stddev_vec[i]
267
268 # add batch channel
269 norm_img_data = norm_img_data.reshape(1, 3, 512, 512).astype("float32")
270 return norm_img_data
271
272 def _load_image(self, file: Path) -> np.ndarray:
273 """loads image and carries out preprocessing for inference"""
274 image = Image.open(file, mode="r")
275 image = image.resize((512, 512), Image.BILINEAR)
276 image_data = np.array(image).transpose(2, 0, 1)
277 return self._preprocess(image_data)
278
279 def _postprocess(self, result: List):
280 """process results from onnx session"""
281 return softmax(np.array(result)).tolist()
282
283 def predict_batch(self, batch: Iterable[Path], bs: int):
284 """predicts a batch of images"""
285 prediction_items = [self.predict_image(file) for file in batch]
286 return PredictionBatch(prediction_items)
287
288
289 if __name__ == "__main__":
290 app()
291
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/flyswot/inference.py b/src/flyswot/inference.py
--- a/src/flyswot/inference.py
+++ b/src/flyswot/inference.py
@@ -107,6 +107,11 @@
vocab = models.load_vocab(model_parts.vocab)
onnxinference = OnnxInferenceSession(model, vocab)
files = list(core.get_image_files_from_pattern(directory, pattern, image_format))
+ if not files:
+ typer.echo(
+ f"Didn't find any files maching {pattern} in {directory}, please check the inputs to flyswot"
+ )
+ raise typer.Exit(code=1)
typer.echo(f"Found {len(files)} files matching {pattern} in {directory}")
csv_fname = create_csv_fname(csv_save_dir)
create_csv_header(csv_fname)
|
{"golden_diff": "diff --git a/src/flyswot/inference.py b/src/flyswot/inference.py\n--- a/src/flyswot/inference.py\n+++ b/src/flyswot/inference.py\n@@ -107,6 +107,11 @@\n vocab = models.load_vocab(model_parts.vocab)\n onnxinference = OnnxInferenceSession(model, vocab)\n files = list(core.get_image_files_from_pattern(directory, pattern, image_format))\n+ if not files:\n+ typer.echo(\n+ f\"Didn't find any files maching {pattern} in {directory}, please check the inputs to flyswot\"\n+ )\n+ raise typer.Exit(code=1)\n typer.echo(f\"Found {len(files)} files matching {pattern} in {directory}\")\n csv_fname = create_csv_fname(csv_save_dir)\n create_csv_header(csv_fname)\n", "issue": "catch no files found before running prediction function\n\n", "before_files": [{"content": "\"\"\"Inference functionality\"\"\"\nimport csv\nimport mimetypes\nimport time\nfrom abc import ABC\nfrom abc import abstractmethod\nfrom dataclasses import asdict\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom pathlib import Path\nfrom typing import Iterable\nfrom typing import Iterator\nfrom typing import List\nfrom typing import Union\n\nimport numpy as np\nimport onnxruntime as rt # type: ignore\nimport typer\nfrom PIL import Image # type: ignore\nfrom rich.table import Table\nfrom toolz import itertoolz\n\nfrom flyswot import core\nfrom flyswot import models\nfrom flyswot.console import console\n\napp = typer.Typer()\n\n\n@dataclass\nclass ImagePredictionItem:\n \"\"\"Prediction for an image.\n\n Attributes:\n path: The Path to the image\n predicted_label: The predicted label i.e. the argmax value for the prediction tensor\n condidence: The confidence for `predicted_label` i.e. the max value for prediction tensor\n \"\"\"\n\n path: Path\n predicted_label: str\n confidence: float\n\n def __post_init__(self) -> Union[Path, None]:\n \"\"\"attempt to get absolute path\"\"\"\n try:\n self.path: Path = self.path.absolute()\n except AttributeError:\n pass\n\n\n@dataclass\nclass PredictionBatch:\n \"\"\"Container for ImagePredictionItems\"\"\"\n\n batch: List[ImagePredictionItem]\n\n def __post_init__(self):\n \"\"\"Returns a list of all predicted labels in batch\"\"\"\n self.batch_labels: Iterator[str] = (item.predicted_label for item in self.batch)\n\n\nimage_extensions = {k for k, v in mimetypes.types_map.items() if v.startswith(\"image/\")}\n\n\[email protected]()\ndef predict_image(\n image: Path = typer.Argument(..., readable=True, resolve_path=True)\n) -> None:\n \"\"\"Predict a single image\"\"\"\n pass # pragma: no cover\n\n\[email protected](name=\"directory\")\ndef predict_directory(\n directory: Path = typer.Argument(\n ...,\n readable=True,\n resolve_path=True,\n help=\"Directory to start searching for images from\",\n ),\n csv_save_dir: Path = typer.Argument(\n ...,\n writable=True,\n resolve_path=True,\n help=\"Directory used to store the csv report\",\n ),\n pattern: str = typer.Option(\"fse\", help=\"Pattern used to filter image filenames\"),\n bs: int = typer.Option(16, help=\"Batch Size\"),\n image_format: str = typer.Option(\n \".tif\", help=\"Image format for flyswot to use for predictions\"\n ),\n check_latest: bool = typer.Option(True, help=\"Use latest available model\"),\n):\n \"\"\"Predicts against all images stored under DIRECTORY which match PATTERN in the filename.\n\n By default searches for filenames containing 'fse'.\n\n Creates a CSV report saved to `csv_save_dir`\n \"\"\"\n start_time = time.perf_counter()\n model_dir = 
models.ensure_model_dir()\n # TODO add load learner function that can be passed a model name\n model_parts = models.ensure_model(model_dir, check_latest)\n model = model_parts.model\n vocab = models.load_vocab(model_parts.vocab)\n onnxinference = OnnxInferenceSession(model, vocab)\n files = list(core.get_image_files_from_pattern(directory, pattern, image_format))\n typer.echo(f\"Found {len(files)} files matching {pattern} in {directory}\")\n csv_fname = create_csv_fname(csv_save_dir)\n create_csv_header(csv_fname)\n with typer.progressbar(length=len(files)) as progress:\n all_preds = []\n predictions = []\n for batch in itertoolz.partition_all(bs, files):\n batch_predictions = onnxinference.predict_batch(batch, bs)\n all_preds.append(batch_predictions.batch_labels)\n predictions.append(batch_predictions)\n progress.update(len(batch))\n write_batch_preds_to_csv(csv_fname, batch_predictions)\n all_preds = list(itertoolz.concat(all_preds))\n typer.echo(f\"CSV report stored in {csv_fname}\")\n delta = timedelta(seconds=time.perf_counter() - start_time)\n typer.echo(f\"Time taken to run: {str(delta)}\")\n print_table(all_preds)\n\n\ndef print_table(decoded) -> None:\n \"\"\"Prints table summary of predicted labels\"\"\"\n table = Table(show_header=True, title=\"Prediction summary\")\n table.add_column(\n \"Class\",\n )\n table.add_column(\"Count\")\n table.add_column(\"Percentage\")\n total = len(decoded)\n frequencies = itertoolz.frequencies(decoded)\n for is_last_element, var in core.signal_last(frequencies.items()):\n key, value = var\n count = value\n percentage = round((count / total) * 100, 2)\n if is_last_element:\n table.add_row(key, str(count), f\"{percentage}\", end_section=True)\n table.add_row(\"Total\", str(total), \"\")\n else:\n table.add_row(key, str(count), f\"{percentage}\")\n console.print(table)\n\n\ndef create_csv_fname(csv_directory: Path) -> Path:\n \"\"\"Creates a csv filename\"\"\"\n date_now = datetime.now()\n date_now = date_now.strftime(\"%Y_%m_%d_%H_%M\")\n fname = Path(date_now + \".csv\")\n return Path(csv_directory / fname)\n\n\ndef create_csv_header(csv_path: Path) -> None:\n \"\"\"Creates a header for csv `csv_path`\"\"\"\n with open(csv_path, mode=\"w\", newline=\"\") as csv_file:\n field_names = [\"path\", \"directory\", \"predicted_label\", \"confidence\"]\n writer = csv.DictWriter(csv_file, fieldnames=field_names)\n writer.writeheader()\n\n\ndef write_batch_preds_to_csv(csv_fpath: Path, predictions: PredictionBatch) -> None:\n \"\"\"Appends `predictions` batch to `csv_path`\"\"\"\n with open(csv_fpath, mode=\"a\", newline=\"\") as csv_file:\n field_names = [\"path\", \"directory\", \"predicted_label\", \"confidence\"]\n writer = csv.DictWriter(csv_file, fieldnames=field_names)\n for pred in predictions.batch:\n row = asdict(pred)\n row[\"directory\"] = pred.path.parent\n writer.writerow(row)\n\n\nclass InferenceSession(ABC):\n \"\"\"Abstract class for inference sessions\"\"\"\n\n @abstractmethod\n def __init__(self, model: Path, vocab: List):\n \"\"\"Inference Sessions should init from a model file and vocab\"\"\"\n self.model = model\n self.vocab = vocab\n\n @abstractmethod\n def predict_image(self, image: Path):\n \"\"\"Predict a single image\"\"\"\n pass\n\n @abstractmethod\n def predict_batch(self, model: Path, batch: Iterable[Path], bs: int):\n \"\"\"Predict a batch\"\"\"\n pass\n\n\ndef softmax(x):\n \"\"\"return softmax of `x`\"\"\"\n x = x.reshape(-1)\n e_x = np.exp(x - np.max(x))\n return e_x / e_x.sum(axis=0)\n\n\n# class 
FastaiInferenceModel(InferenceSession):\n# def __init__(self, model):\n# self.model = model\n# self.learn = load_learner(model)\n\n# def predict_image(self, image: Path) -> Any:\n# return self.learn.predict(image)\n\n# def predict_batch(self, batch: Iterable[Path], bs: int) -> PredictionBatch:\n# test_dl = self.learn.dls.test_dl(batch, bs=bs)\n# vocab = dict(enumerate(self.learn.dls.vocab))\n# with self.learn.no_bar():\n# fastai_preds: Any = self.learn.get_preds(dl=test_dl, with_decoded=True)\n# prediction_tensors: Iterable[Any] = fastai_preds[0]\n# prediction_items = []\n# for file, pred in zip(batch, prediction_tensors):\n# arg_max = int(np.array(pred).argmax())\n# predicted_label = vocab[int(arg_max)]\n# confidence = float(np.array(pred).max())\n# prediction_items.append(\n# ImagePredictionItem(file, predicted_label, confidence)\n# )\n# return PredictionBatch(prediction_items)\n\n\nclass OnnxInferenceSession(InferenceSession):\n \"\"\"onnx inference session\"\"\"\n\n def __init__(self, model: Path, vocab: Path):\n \"\"\"Create onnx session\"\"\"\n self.model = model\n self.session = rt.InferenceSession(str(model))\n\n self.vocab = vocab\n self.vocab_mapping = dict(enumerate(self.vocab))\n\n def _load_vocab(self, vocab: Path) -> List:\n with open(vocab, \"r\") as f:\n return [item.strip(\"\\n\") for item in f.readlines()]\n\n def predict_image(self, image: Path):\n \"\"\"Predict a single image\"\"\"\n img = self._load_image(image)\n raw_result = self.session.run([\"output\"], {\"image\": img})\n pred = self._postprocess(raw_result)\n arg_max = int(np.array(pred).argmax())\n predicted_label = self.vocab_mapping[int(arg_max)]\n confidence = float(np.array(pred).max())\n return ImagePredictionItem(image, predicted_label, confidence)\n\n def _preprocess(self, input_data: np.ndarray) -> np.ndarray:\n # converts the input data into the float32 input for onnx\n img_data = input_data.astype(\"float32\")\n\n # normalize\n mean_vec = np.array([0.485, 0.456, 0.406])\n stddev_vec = np.array([0.229, 0.224, 0.225])\n norm_img_data = np.zeros(img_data.shape).astype(\"float32\")\n for i in range(img_data.shape[0]):\n norm_img_data[i, :, :] = (\n img_data[i, :, :] / 255 - mean_vec[i]\n ) / stddev_vec[i]\n\n # add batch channel\n norm_img_data = norm_img_data.reshape(1, 3, 512, 512).astype(\"float32\")\n return norm_img_data\n\n def _load_image(self, file: Path) -> np.ndarray:\n \"\"\"loads image and carries out preprocessing for inference\"\"\"\n image = Image.open(file, mode=\"r\")\n image = image.resize((512, 512), Image.BILINEAR)\n image_data = np.array(image).transpose(2, 0, 1)\n return self._preprocess(image_data)\n\n def _postprocess(self, result: List):\n \"\"\"process results from onnx session\"\"\"\n return softmax(np.array(result)).tolist()\n\n def predict_batch(self, batch: Iterable[Path], bs: int):\n \"\"\"predicts a batch of images\"\"\"\n prediction_items = [self.predict_image(file) for file in batch]\n return PredictionBatch(prediction_items)\n\n\nif __name__ == \"__main__\":\n app()\n", "path": "src/flyswot/inference.py"}], "after_files": [{"content": "\"\"\"Inference functionality\"\"\"\nimport csv\nimport mimetypes\nimport time\nfrom abc import ABC\nfrom abc import abstractmethod\nfrom dataclasses import asdict\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom pathlib import Path\nfrom typing import Iterable\nfrom typing import Iterator\nfrom typing import List\nfrom typing import Union\n\nimport numpy as np\nimport onnxruntime as rt 
# type: ignore\nimport typer\nfrom PIL import Image # type: ignore\nfrom rich.table import Table\nfrom toolz import itertoolz\n\nfrom flyswot import core\nfrom flyswot import models\nfrom flyswot.console import console\n\napp = typer.Typer()\n\n\n@dataclass\nclass ImagePredictionItem:\n \"\"\"Prediction for an image.\n\n Attributes:\n path: The Path to the image\n predicted_label: The predicted label i.e. the argmax value for the prediction tensor\n condidence: The confidence for `predicted_label` i.e. the max value for prediction tensor\n \"\"\"\n\n path: Path\n predicted_label: str\n confidence: float\n\n def __post_init__(self) -> Union[Path, None]:\n \"\"\"attempt to get absolute path\"\"\"\n try:\n self.path: Path = self.path.absolute()\n except AttributeError:\n pass\n\n\n@dataclass\nclass PredictionBatch:\n \"\"\"Container for ImagePredictionItems\"\"\"\n\n batch: List[ImagePredictionItem]\n\n def __post_init__(self):\n \"\"\"Returns a list of all predicted labels in batch\"\"\"\n self.batch_labels: Iterator[str] = (item.predicted_label for item in self.batch)\n\n\nimage_extensions = {k for k, v in mimetypes.types_map.items() if v.startswith(\"image/\")}\n\n\[email protected]()\ndef predict_image(\n image: Path = typer.Argument(..., readable=True, resolve_path=True)\n) -> None:\n \"\"\"Predict a single image\"\"\"\n pass # pragma: no cover\n\n\[email protected](name=\"directory\")\ndef predict_directory(\n directory: Path = typer.Argument(\n ...,\n readable=True,\n resolve_path=True,\n help=\"Directory to start searching for images from\",\n ),\n csv_save_dir: Path = typer.Argument(\n ...,\n writable=True,\n resolve_path=True,\n help=\"Directory used to store the csv report\",\n ),\n pattern: str = typer.Option(\"fse\", help=\"Pattern used to filter image filenames\"),\n bs: int = typer.Option(16, help=\"Batch Size\"),\n image_format: str = typer.Option(\n \".tif\", help=\"Image format for flyswot to use for predictions\"\n ),\n check_latest: bool = typer.Option(True, help=\"Use latest available model\"),\n):\n \"\"\"Predicts against all images stored under DIRECTORY which match PATTERN in the filename.\n\n By default searches for filenames containing 'fse'.\n\n Creates a CSV report saved to `csv_save_dir`\n \"\"\"\n start_time = time.perf_counter()\n model_dir = models.ensure_model_dir()\n # TODO add load learner function that can be passed a model name\n model_parts = models.ensure_model(model_dir, check_latest)\n model = model_parts.model\n vocab = models.load_vocab(model_parts.vocab)\n onnxinference = OnnxInferenceSession(model, vocab)\n files = list(core.get_image_files_from_pattern(directory, pattern, image_format))\n if not files:\n typer.echo(\n f\"Didn't find any files maching {pattern} in {directory}, please check the inputs to flyswot\"\n )\n raise typer.Exit(code=1)\n typer.echo(f\"Found {len(files)} files matching {pattern} in {directory}\")\n csv_fname = create_csv_fname(csv_save_dir)\n create_csv_header(csv_fname)\n with typer.progressbar(length=len(files)) as progress:\n all_preds = []\n predictions = []\n for batch in itertoolz.partition_all(bs, files):\n batch_predictions = onnxinference.predict_batch(batch, bs)\n all_preds.append(batch_predictions.batch_labels)\n predictions.append(batch_predictions)\n progress.update(len(batch))\n write_batch_preds_to_csv(csv_fname, batch_predictions)\n all_preds = list(itertoolz.concat(all_preds))\n typer.echo(f\"CSV report stored in {csv_fname}\")\n delta = timedelta(seconds=time.perf_counter() - start_time)\n 
typer.echo(f\"Time taken to run: {str(delta)}\")\n print_table(all_preds)\n\n\ndef print_table(decoded) -> None:\n \"\"\"Prints table summary of predicted labels\"\"\"\n table = Table(show_header=True, title=\"Prediction summary\")\n table.add_column(\n \"Class\",\n )\n table.add_column(\"Count\")\n table.add_column(\"Percentage\")\n total = len(decoded)\n frequencies = itertoolz.frequencies(decoded)\n for is_last_element, var in core.signal_last(frequencies.items()):\n key, value = var\n count = value\n percentage = round((count / total) * 100, 2)\n if is_last_element:\n table.add_row(key, str(count), f\"{percentage}\", end_section=True)\n table.add_row(\"Total\", str(total), \"\")\n else:\n table.add_row(key, str(count), f\"{percentage}\")\n console.print(table)\n\n\ndef create_csv_fname(csv_directory: Path) -> Path:\n \"\"\"Creates a csv filename\"\"\"\n date_now = datetime.now()\n date_now = date_now.strftime(\"%Y_%m_%d_%H_%M\")\n fname = Path(date_now + \".csv\")\n return Path(csv_directory / fname)\n\n\ndef create_csv_header(csv_path: Path) -> None:\n \"\"\"Creates a header for csv `csv_path`\"\"\"\n with open(csv_path, mode=\"w\", newline=\"\") as csv_file:\n field_names = [\"path\", \"directory\", \"predicted_label\", \"confidence\"]\n writer = csv.DictWriter(csv_file, fieldnames=field_names)\n writer.writeheader()\n\n\ndef write_batch_preds_to_csv(csv_fpath: Path, predictions: PredictionBatch) -> None:\n \"\"\"Appends `predictions` batch to `csv_path`\"\"\"\n with open(csv_fpath, mode=\"a\", newline=\"\") as csv_file:\n field_names = [\"path\", \"directory\", \"predicted_label\", \"confidence\"]\n writer = csv.DictWriter(csv_file, fieldnames=field_names)\n for pred in predictions.batch:\n row = asdict(pred)\n row[\"directory\"] = pred.path.parent\n writer.writerow(row)\n\n\nclass InferenceSession(ABC):\n \"\"\"Abstract class for inference sessions\"\"\"\n\n @abstractmethod\n def __init__(self, model: Path, vocab: List):\n \"\"\"Inference Sessions should init from a model file and vocab\"\"\"\n self.model = model\n self.vocab = vocab\n\n @abstractmethod\n def predict_image(self, image: Path):\n \"\"\"Predict a single image\"\"\"\n pass\n\n @abstractmethod\n def predict_batch(self, model: Path, batch: Iterable[Path], bs: int):\n \"\"\"Predict a batch\"\"\"\n pass\n\n\ndef softmax(x):\n \"\"\"return softmax of `x`\"\"\"\n x = x.reshape(-1)\n e_x = np.exp(x - np.max(x))\n return e_x / e_x.sum(axis=0)\n\n\n# class FastaiInferenceModel(InferenceSession):\n# def __init__(self, model):\n# self.model = model\n# self.learn = load_learner(model)\n\n# def predict_image(self, image: Path) -> Any:\n# return self.learn.predict(image)\n\n# def predict_batch(self, batch: Iterable[Path], bs: int) -> PredictionBatch:\n# test_dl = self.learn.dls.test_dl(batch, bs=bs)\n# vocab = dict(enumerate(self.learn.dls.vocab))\n# with self.learn.no_bar():\n# fastai_preds: Any = self.learn.get_preds(dl=test_dl, with_decoded=True)\n# prediction_tensors: Iterable[Any] = fastai_preds[0]\n# prediction_items = []\n# for file, pred in zip(batch, prediction_tensors):\n# arg_max = int(np.array(pred).argmax())\n# predicted_label = vocab[int(arg_max)]\n# confidence = float(np.array(pred).max())\n# prediction_items.append(\n# ImagePredictionItem(file, predicted_label, confidence)\n# )\n# return PredictionBatch(prediction_items)\n\n\nclass OnnxInferenceSession(InferenceSession):\n \"\"\"onnx inference session\"\"\"\n\n def __init__(self, model: Path, vocab: Path):\n \"\"\"Create onnx session\"\"\"\n self.model = model\n 
self.session = rt.InferenceSession(str(model))\n\n self.vocab = vocab\n self.vocab_mapping = dict(enumerate(self.vocab))\n\n def _load_vocab(self, vocab: Path) -> List:\n with open(vocab, \"r\") as f:\n return [item.strip(\"\\n\") for item in f.readlines()]\n\n def predict_image(self, image: Path):\n \"\"\"Predict a single image\"\"\"\n img = self._load_image(image)\n raw_result = self.session.run([\"output\"], {\"image\": img})\n pred = self._postprocess(raw_result)\n arg_max = int(np.array(pred).argmax())\n predicted_label = self.vocab_mapping[int(arg_max)]\n confidence = float(np.array(pred).max())\n return ImagePredictionItem(image, predicted_label, confidence)\n\n def _preprocess(self, input_data: np.ndarray) -> np.ndarray:\n # converts the input data into the float32 input for onnx\n img_data = input_data.astype(\"float32\")\n\n # normalize\n mean_vec = np.array([0.485, 0.456, 0.406])\n stddev_vec = np.array([0.229, 0.224, 0.225])\n norm_img_data = np.zeros(img_data.shape).astype(\"float32\")\n for i in range(img_data.shape[0]):\n norm_img_data[i, :, :] = (\n img_data[i, :, :] / 255 - mean_vec[i]\n ) / stddev_vec[i]\n\n # add batch channel\n norm_img_data = norm_img_data.reshape(1, 3, 512, 512).astype(\"float32\")\n return norm_img_data\n\n def _load_image(self, file: Path) -> np.ndarray:\n \"\"\"loads image and carries out preprocessing for inference\"\"\"\n image = Image.open(file, mode=\"r\")\n image = image.resize((512, 512), Image.BILINEAR)\n image_data = np.array(image).transpose(2, 0, 1)\n return self._preprocess(image_data)\n\n def _postprocess(self, result: List):\n \"\"\"process results from onnx session\"\"\"\n return softmax(np.array(result)).tolist()\n\n def predict_batch(self, batch: Iterable[Path], bs: int):\n \"\"\"predicts a batch of images\"\"\"\n prediction_items = [self.predict_image(file) for file in batch]\n return PredictionBatch(prediction_items)\n\n\nif __name__ == \"__main__\":\n app()\n", "path": "src/flyswot/inference.py"}]}
| 3,404 | 191 |
gh_patches_debug_7406
|
rasdani/github-patches
|
git_diff
|
interlegis__sapl-1191
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Integração do SAPL 3.1 e Portal Modelo
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/base/templatetags/common_tags.py`
Content:
```
1 from compressor.utils import get_class
2 from django import template
3
4 from sapl.base.models import AppConfig
5 from sapl.materia.models import DocumentoAcessorio, MateriaLegislativa
6 from sapl.norma.models import NormaJuridica
7 from sapl.parlamentares.models import Filiacao
8
9 register = template.Library()
10
11
12 @register.simple_tag
13 def field_verbose_name(instance, field_name):
14 return instance._meta.get_field(field_name).verbose_name
15
16
17 @register.simple_tag
18 def fieldclass_verbose_name(class_name, field_name):
19 cls = get_class(class_name)
20 return cls._meta.get_field(field_name).verbose_name
21
22
23 @register.simple_tag
24 def model_verbose_name(class_name):
25 model = get_class(class_name)
26 return model._meta.verbose_name
27
28
29 @register.simple_tag
30 def model_verbose_name_plural(class_name):
31 model = get_class(class_name)
32 return model._meta.verbose_name_plural
33
34
35 @register.filter
36 def lookup(d, key):
37 return d[key] if key in d else []
38
39
40 @register.filter
41 def isinst(value, class_str):
42 classe = value.__class__.__name__
43 return classe == class_str
44
45
46 @register.filter
47 def get_add_perm(value, arg):
48 perm = value
49 view = arg
50
51 try:
52 nome_app = view.__class__.model._meta.app_label
53 except AttributeError:
54 return None
55 nome_model = view.__class__.model.__name__.lower()
56 can_add = '.add_' + nome_model
57
58 return perm.__contains__(nome_app + can_add)
59
60
61 @register.filter
62 def get_change_perm(value, arg):
63 perm = value
64 view = arg
65
66 try:
67 nome_app = view.__class__.model._meta.app_label
68 except AttributeError:
69 return None
70 nome_model = view.__class__.model.__name__.lower()
71 can_change = '.change_' + nome_model
72
73 return perm.__contains__(nome_app + can_change)
74
75
76 @register.filter
77 def get_delete_perm(value, arg):
78 perm = value
79 view = arg
80
81 try:
82 nome_app = view.__class__.model._meta.app_label
83 except AttributeError:
84 return None
85 nome_model = view.__class__.model.__name__.lower()
86 can_delete = '.delete_' + nome_model
87
88 return perm.__contains__(nome_app + can_delete)
89
90
91 @register.filter
92 def ultima_filiacao(value):
93 parlamentar = value
94
95 ultima_filiacao = Filiacao.objects.filter(
96 parlamentar=parlamentar).order_by('-data').first()
97
98 if ultima_filiacao:
99 return ultima_filiacao.partido
100 else:
101 return None
102
103
104 @register.filter
105 def get_config_attr(attribute):
106 return AppConfig.attr(attribute)
107
108
109 @register.filter
110 def str2intabs(value):
111 if not isinstance(value, str):
112 return ''
113 try:
114 v = int(value)
115 v = abs(v)
116 return v
117 except:
118 return ''
119
120
121 @register.filter
122 def url(value):
123 if value.startswith('http://') or value.startswith('https://'):
124 return True
125 return False
126
127
128 @register.filter
129 def cronometro_to_seconds(value):
130 if not AppConfig.attr('cronometro_' + value):
131 return 0
132
133 m, s, x = AppConfig.attr(
134 'cronometro_' + value).isoformat().split(':')
135
136 return 60 * int(m) + int(s)
137
138
139 @register.filter
140 def to_list_pk(object_list):
141 return [o.pk for o in object_list]
142
143
144 @register.filter
145 def search_get_model(object):
146 if type(object) == MateriaLegislativa:
147 return 'm'
148 elif type(object) == DocumentoAcessorio:
149 return 'd'
150 elif type(object) == NormaJuridica:
151 return 'n'
152
153 return None
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sapl/base/templatetags/common_tags.py b/sapl/base/templatetags/common_tags.py
--- a/sapl/base/templatetags/common_tags.py
+++ b/sapl/base/templatetags/common_tags.py
@@ -117,6 +117,23 @@
except:
return ''
[email protected]
+def has_iframe(request):
+
+ iframe = request.session.get('iframe', False)
+ if not iframe and 'iframe' in request.GET:
+ ival = request.GET['iframe']
+ if ival and int(ival) == 1:
+ request.session['iframe'] = True
+ return True
+ elif 'iframe' in request.GET:
+ ival = request.GET['iframe']
+ if ival and int(ival) == 0:
+ del request.session['iframe']
+ return False
+
+ return iframe
+
@register.filter
def url(value):
|
{"golden_diff": "diff --git a/sapl/base/templatetags/common_tags.py b/sapl/base/templatetags/common_tags.py\n--- a/sapl/base/templatetags/common_tags.py\n+++ b/sapl/base/templatetags/common_tags.py\n@@ -117,6 +117,23 @@\n except:\n return ''\n \[email protected]\n+def has_iframe(request):\n+\n+ iframe = request.session.get('iframe', False)\n+ if not iframe and 'iframe' in request.GET:\n+ ival = request.GET['iframe']\n+ if ival and int(ival) == 1:\n+ request.session['iframe'] = True\n+ return True\n+ elif 'iframe' in request.GET:\n+ ival = request.GET['iframe']\n+ if ival and int(ival) == 0:\n+ del request.session['iframe']\n+ return False\n+\n+ return iframe\n+\n \n @register.filter\n def url(value):\n", "issue": "Integra\u00e7\u00e3o do SAPL 3.1 e Portal Modelo\n\n", "before_files": [{"content": "from compressor.utils import get_class\nfrom django import template\n\nfrom sapl.base.models import AppConfig\nfrom sapl.materia.models import DocumentoAcessorio, MateriaLegislativa\nfrom sapl.norma.models import NormaJuridica\nfrom sapl.parlamentares.models import Filiacao\n\nregister = template.Library()\n\n\[email protected]_tag\ndef field_verbose_name(instance, field_name):\n return instance._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef fieldclass_verbose_name(class_name, field_name):\n cls = get_class(class_name)\n return cls._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef model_verbose_name(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name\n\n\[email protected]_tag\ndef model_verbose_name_plural(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name_plural\n\n\[email protected]\ndef lookup(d, key):\n return d[key] if key in d else []\n\n\[email protected]\ndef isinst(value, class_str):\n classe = value.__class__.__name__\n return classe == class_str\n\n\[email protected]\ndef get_add_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_add = '.add_' + nome_model\n\n return perm.__contains__(nome_app + can_add)\n\n\[email protected]\ndef get_change_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_change = '.change_' + nome_model\n\n return perm.__contains__(nome_app + can_change)\n\n\[email protected]\ndef get_delete_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_delete = '.delete_' + nome_model\n\n return perm.__contains__(nome_app + can_delete)\n\n\[email protected]\ndef ultima_filiacao(value):\n parlamentar = value\n\n ultima_filiacao = Filiacao.objects.filter(\n parlamentar=parlamentar).order_by('-data').first()\n\n if ultima_filiacao:\n return ultima_filiacao.partido\n else:\n return None\n\n\[email protected]\ndef get_config_attr(attribute):\n return AppConfig.attr(attribute)\n\n\[email protected]\ndef str2intabs(value):\n if not isinstance(value, str):\n return ''\n try:\n v = int(value)\n v = abs(v)\n return v\n except:\n return ''\n\n\[email protected]\ndef url(value):\n if value.startswith('http://') or value.startswith('https://'):\n return True\n return False\n\n\[email protected]\ndef cronometro_to_seconds(value):\n if not 
AppConfig.attr('cronometro_' + value):\n return 0\n\n m, s, x = AppConfig.attr(\n 'cronometro_' + value).isoformat().split(':')\n\n return 60 * int(m) + int(s)\n\n\[email protected]\ndef to_list_pk(object_list):\n return [o.pk for o in object_list]\n\n\[email protected]\ndef search_get_model(object):\n if type(object) == MateriaLegislativa:\n return 'm'\n elif type(object) == DocumentoAcessorio:\n return 'd'\n elif type(object) == NormaJuridica:\n return 'n'\n\n return None\n", "path": "sapl/base/templatetags/common_tags.py"}], "after_files": [{"content": "from compressor.utils import get_class\nfrom django import template\n\nfrom sapl.base.models import AppConfig\nfrom sapl.materia.models import DocumentoAcessorio, MateriaLegislativa\nfrom sapl.norma.models import NormaJuridica\nfrom sapl.parlamentares.models import Filiacao\n\nregister = template.Library()\n\n\[email protected]_tag\ndef field_verbose_name(instance, field_name):\n return instance._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef fieldclass_verbose_name(class_name, field_name):\n cls = get_class(class_name)\n return cls._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef model_verbose_name(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name\n\n\[email protected]_tag\ndef model_verbose_name_plural(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name_plural\n\n\[email protected]\ndef lookup(d, key):\n return d[key] if key in d else []\n\n\[email protected]\ndef isinst(value, class_str):\n classe = value.__class__.__name__\n return classe == class_str\n\n\[email protected]\ndef get_add_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_add = '.add_' + nome_model\n\n return perm.__contains__(nome_app + can_add)\n\n\[email protected]\ndef get_change_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_change = '.change_' + nome_model\n\n return perm.__contains__(nome_app + can_change)\n\n\[email protected]\ndef get_delete_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_delete = '.delete_' + nome_model\n\n return perm.__contains__(nome_app + can_delete)\n\n\[email protected]\ndef ultima_filiacao(value):\n parlamentar = value\n\n ultima_filiacao = Filiacao.objects.filter(\n parlamentar=parlamentar).order_by('-data').first()\n\n if ultima_filiacao:\n return ultima_filiacao.partido\n else:\n return None\n\n\[email protected]\ndef get_config_attr(attribute):\n return AppConfig.attr(attribute)\n\n\[email protected]\ndef str2intabs(value):\n if not isinstance(value, str):\n return ''\n try:\n v = int(value)\n v = abs(v)\n return v\n except:\n return ''\n\[email protected]\ndef has_iframe(request):\n\n iframe = request.session.get('iframe', False)\n if not iframe and 'iframe' in request.GET:\n ival = request.GET['iframe']\n if ival and int(ival) == 1:\n request.session['iframe'] = True\n return True\n elif 'iframe' in request.GET:\n ival = request.GET['iframe']\n if ival and int(ival) == 0:\n del request.session['iframe']\n return False\n\n return iframe\n\n\[email protected]\ndef url(value):\n 
if value.startswith('http://') or value.startswith('https://'):\n return True\n return False\n\n\[email protected]\ndef cronometro_to_seconds(value):\n if not AppConfig.attr('cronometro_' + value):\n return 0\n\n m, s, x = AppConfig.attr(\n 'cronometro_' + value).isoformat().split(':')\n\n return 60 * int(m) + int(s)\n\n\[email protected]\ndef to_list_pk(object_list):\n return [o.pk for o in object_list]\n\n\[email protected]\ndef search_get_model(object):\n if type(object) == MateriaLegislativa:\n return 'm'\n elif type(object) == DocumentoAcessorio:\n return 'd'\n elif type(object) == NormaJuridica:\n return 'n'\n\n return None\n", "path": "sapl/base/templatetags/common_tags.py"}]}
| 1,496 | 218 |
gh_patches_debug_29041
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1699
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unnecessary ping event
**Environment**:
- CTFd Version/Commit: 3.1.1, latest commit
- Operating System: any
- Web Browser and Version: any
in the comment you said "Immediately yield a ping event to force Response headers to be set", but this event seems to lies inside the while True loop, which results to an unnecessary ping event every 5 seconds.
I believe that's an unintended behavior, though it doesn't break anything.
https://github.com/CTFd/CTFd/blob/4c31dc23e8cfa0308367732d603b16e01871b00e/CTFd/utils/events/__init__.py#L57-L67
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/events/__init__.py`
Content:
```
1 import json
2 from collections import defaultdict
3 from queue import Queue
4
5 from gevent import Timeout, spawn
6 from tenacity import retry, wait_exponential
7
8 from CTFd.cache import cache
9 from CTFd.utils import string_types
10
11
12 class ServerSentEvent(object):
13 def __init__(self, data, type=None, id=None):
14 self.data = data
15 self.type = type
16 self.id = id
17
18 def __str__(self):
19 if isinstance(self.data, string_types):
20 data = self.data
21 else:
22 data = json.dumps(self.data)
23 lines = ["data:{value}".format(value=line) for line in data.splitlines()]
24 if self.type:
25 lines.insert(0, "event:{value}".format(value=self.type))
26 if self.id:
27 lines.append("id:{value}".format(value=self.id))
28 return "\n".join(lines) + "\n\n"
29
30 def to_dict(self):
31 d = {"data": self.data}
32 if self.type:
33 d["type"] = self.type
34 if self.id:
35 d["id"] = self.id
36 return d
37
38
39 class EventManager(object):
40 def __init__(self):
41 self.clients = {}
42
43 def publish(self, data, type=None, channel="ctf"):
44 event = ServerSentEvent(data, type=type)
45 message = event.to_dict()
46 for client in list(self.clients.values()):
47 client[channel].put(message)
48 return len(self.clients)
49
50 def listen(self):
51 pass
52
53 def subscribe(self, channel="ctf"):
54 q = defaultdict(Queue)
55 self.clients[id(q)] = q
56 try:
57 while True:
58 try:
59 # Immediately yield a ping event to force Response headers to be set
60 # or else some reverse proxies will incorrectly buffer SSE
61 yield ServerSentEvent(data="", type="ping")
62
63 with Timeout(5):
64 message = q[channel].get()
65 yield ServerSentEvent(**message)
66 except Timeout:
67 yield ServerSentEvent(data="", type="ping")
68 finally:
69 del self.clients[id(q)]
70 del q
71
72
73 class RedisEventManager(EventManager):
74 def __init__(self):
75 super(EventManager, self).__init__()
76 self.client = cache.cache._write_client
77 self.clients = {}
78
79 def publish(self, data, type=None, channel="ctf"):
80 event = ServerSentEvent(data, type=type)
81 message = json.dumps(event.to_dict())
82 return self.client.publish(message=message, channel=channel)
83
84 def listen(self, channel="ctf"):
85 @retry(wait=wait_exponential(min=1, max=30))
86 def _listen():
87 while True:
88 pubsub = self.client.pubsub()
89 pubsub.subscribe(channel)
90 try:
91 while True:
92 message = pubsub.get_message(
93 ignore_subscribe_messages=True, timeout=5
94 )
95 if message:
96 if message["type"] == "message":
97 event = json.loads(message["data"])
98 for client in list(self.clients.values()):
99 client[channel].put(event)
100 finally:
101 pubsub.close()
102
103 spawn(_listen)
104
105 def subscribe(self, channel="ctf"):
106 q = defaultdict(Queue)
107 self.clients[id(q)] = q
108 try:
109 while True:
110 try:
111 # Immediately yield a ping event to force Response headers to be set
112 # or else some reverse proxies will incorrectly buffer SSE
113 yield ServerSentEvent(data="", type="ping")
114
115 with Timeout(5):
116 message = q[channel].get()
117 yield ServerSentEvent(**message)
118 except Timeout:
119 yield ServerSentEvent(data="", type="ping")
120 finally:
121 del self.clients[id(q)]
122 del q
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/CTFd/utils/events/__init__.py b/CTFd/utils/events/__init__.py
--- a/CTFd/utils/events/__init__.py
+++ b/CTFd/utils/events/__init__.py
@@ -54,12 +54,11 @@
q = defaultdict(Queue)
self.clients[id(q)] = q
try:
+ # Immediately yield a ping event to force Response headers to be set
+ # or else some reverse proxies will incorrectly buffer SSE
+ yield ServerSentEvent(data="", type="ping")
while True:
try:
- # Immediately yield a ping event to force Response headers to be set
- # or else some reverse proxies will incorrectly buffer SSE
- yield ServerSentEvent(data="", type="ping")
-
with Timeout(5):
message = q[channel].get()
yield ServerSentEvent(**message)
@@ -106,12 +105,11 @@
q = defaultdict(Queue)
self.clients[id(q)] = q
try:
+ # Immediately yield a ping event to force Response headers to be set
+ # or else some reverse proxies will incorrectly buffer SSE
+ yield ServerSentEvent(data="", type="ping")
while True:
try:
- # Immediately yield a ping event to force Response headers to be set
- # or else some reverse proxies will incorrectly buffer SSE
- yield ServerSentEvent(data="", type="ping")
-
with Timeout(5):
message = q[channel].get()
yield ServerSentEvent(**message)
|
{"golden_diff": "diff --git a/CTFd/utils/events/__init__.py b/CTFd/utils/events/__init__.py\n--- a/CTFd/utils/events/__init__.py\n+++ b/CTFd/utils/events/__init__.py\n@@ -54,12 +54,11 @@\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n+ # Immediately yield a ping event to force Response headers to be set\n+ # or else some reverse proxies will incorrectly buffer SSE\n+ yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n- # Immediately yield a ping event to force Response headers to be set\n- # or else some reverse proxies will incorrectly buffer SSE\n- yield ServerSentEvent(data=\"\", type=\"ping\")\n-\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n@@ -106,12 +105,11 @@\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n+ # Immediately yield a ping event to force Response headers to be set\n+ # or else some reverse proxies will incorrectly buffer SSE\n+ yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n- # Immediately yield a ping event to force Response headers to be set\n- # or else some reverse proxies will incorrectly buffer SSE\n- yield ServerSentEvent(data=\"\", type=\"ping\")\n-\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n", "issue": "Unnecessary ping event\n**Environment**:\r\n\r\n- CTFd Version/Commit: 3.1.1, latest commit\r\n- Operating System: any\r\n- Web Browser and Version: any\r\n\r\nin the comment you said \"Immediately yield a ping event to force Response headers to be set\", but this event seems to lies inside the while True loop, which results to an unnecessary ping event every 5 seconds.\r\nI believe that's an unintended behavior, though it doesn't break anything.\r\n\r\nhttps://github.com/CTFd/CTFd/blob/4c31dc23e8cfa0308367732d603b16e01871b00e/CTFd/utils/events/__init__.py#L57-L67\n", "before_files": [{"content": "import json\nfrom collections import defaultdict\nfrom queue import Queue\n\nfrom gevent import Timeout, spawn\nfrom tenacity import retry, wait_exponential\n\nfrom CTFd.cache import cache\nfrom CTFd.utils import string_types\n\n\nclass ServerSentEvent(object):\n def __init__(self, data, type=None, id=None):\n self.data = data\n self.type = type\n self.id = id\n\n def __str__(self):\n if isinstance(self.data, string_types):\n data = self.data\n else:\n data = json.dumps(self.data)\n lines = [\"data:{value}\".format(value=line) for line in data.splitlines()]\n if self.type:\n lines.insert(0, \"event:{value}\".format(value=self.type))\n if self.id:\n lines.append(\"id:{value}\".format(value=self.id))\n return \"\\n\".join(lines) + \"\\n\\n\"\n\n def to_dict(self):\n d = {\"data\": self.data}\n if self.type:\n d[\"type\"] = self.type\n if self.id:\n d[\"id\"] = self.id\n return d\n\n\nclass EventManager(object):\n def __init__(self):\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = event.to_dict()\n for client in list(self.clients.values()):\n client[channel].put(message)\n return len(self.clients)\n\n def listen(self):\n pass\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n while True:\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield 
ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n\n\nclass RedisEventManager(EventManager):\n def __init__(self):\n super(EventManager, self).__init__()\n self.client = cache.cache._write_client\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = json.dumps(event.to_dict())\n return self.client.publish(message=message, channel=channel)\n\n def listen(self, channel=\"ctf\"):\n @retry(wait=wait_exponential(min=1, max=30))\n def _listen():\n while True:\n pubsub = self.client.pubsub()\n pubsub.subscribe(channel)\n try:\n while True:\n message = pubsub.get_message(\n ignore_subscribe_messages=True, timeout=5\n )\n if message:\n if message[\"type\"] == \"message\":\n event = json.loads(message[\"data\"])\n for client in list(self.clients.values()):\n client[channel].put(event)\n finally:\n pubsub.close()\n\n spawn(_listen)\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n while True:\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n", "path": "CTFd/utils/events/__init__.py"}], "after_files": [{"content": "import json\nfrom collections import defaultdict\nfrom queue import Queue\n\nfrom gevent import Timeout, spawn\nfrom tenacity import retry, wait_exponential\n\nfrom CTFd.cache import cache\nfrom CTFd.utils import string_types\n\n\nclass ServerSentEvent(object):\n def __init__(self, data, type=None, id=None):\n self.data = data\n self.type = type\n self.id = id\n\n def __str__(self):\n if isinstance(self.data, string_types):\n data = self.data\n else:\n data = json.dumps(self.data)\n lines = [\"data:{value}\".format(value=line) for line in data.splitlines()]\n if self.type:\n lines.insert(0, \"event:{value}\".format(value=self.type))\n if self.id:\n lines.append(\"id:{value}\".format(value=self.id))\n return \"\\n\".join(lines) + \"\\n\\n\"\n\n def to_dict(self):\n d = {\"data\": self.data}\n if self.type:\n d[\"type\"] = self.type\n if self.id:\n d[\"id\"] = self.id\n return d\n\n\nclass EventManager(object):\n def __init__(self):\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = event.to_dict()\n for client in list(self.clients.values()):\n client[channel].put(message)\n return len(self.clients)\n\n def listen(self):\n pass\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n\n\nclass RedisEventManager(EventManager):\n def __init__(self):\n super(EventManager, self).__init__()\n self.client = cache.cache._write_client\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = 
json.dumps(event.to_dict())\n return self.client.publish(message=message, channel=channel)\n\n def listen(self, channel=\"ctf\"):\n @retry(wait=wait_exponential(min=1, max=30))\n def _listen():\n while True:\n pubsub = self.client.pubsub()\n pubsub.subscribe(channel)\n try:\n while True:\n message = pubsub.get_message(\n ignore_subscribe_messages=True, timeout=5\n )\n if message:\n if message[\"type\"] == \"message\":\n event = json.loads(message[\"data\"])\n for client in list(self.clients.values()):\n client[channel].put(event)\n finally:\n pubsub.close()\n\n spawn(_listen)\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n", "path": "CTFd/utils/events/__init__.py"}]}
| 1,492 | 340 |
gh_patches_debug_36465
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-464
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Provide a free text field for the "Other" license and remove the CC-noDerives license
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-metadata_fields/ckanext/metadata_fields/plugin.py`
Content:
```
1 '''
2 Created on Apr 10, 2014
3
4 @author:alexandru-m-g
5 '''
6 import logging
7
8 import ckan.plugins as plugins
9 import ckan.plugins.toolkit as tk
10 from routes.mapper import SubMapper
11
12 import ckanext.metadata_fields.custom_validator as vd
13 import ckanext.metadata_fields.update as update
14
15 def list_of_all_groups():
16 groups = tk.get_action('group_list')(data_dict={'all_fields': True})
17 return groups
18
19
20 class HdxMetadataFieldsPlugin(plugins.SingletonPlugin, tk.DefaultDatasetForm):
21 plugins.implements(plugins.IConfigurer, inherit=False)
22 plugins.implements(plugins.IRoutes, inherit=True)
23 plugins.implements(plugins.IDatasetForm, inherit=False)
24 plugins.implements(plugins.ITemplateHelpers)
25 plugins.implements(plugins.IActions)
26
27 def update_config(self, config):
28 tk.add_template_directory(config, 'templates')
29
30 def before_map(self, map):
31 with SubMapper(map, controller='ckanext.metadata_fields.dataset_controller:DatasetController') as m:
32 m.connect('add dataset', '/dataset/new', action='new')
33 m.connect('/dataset/{action}/{id}',
34 requirements=dict(action='|'.join([
35 'new_metadata',
36 'new_resource',
37 ])))
38 return map
39
40 def is_fallback(self):
41 return True
42
43 def package_types(self):
44 # default - no specific package type
45 return []
46
47 def _modify_package_schema(self, schema):
48
49 schema.update({
50 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required
51 'package_creator': [tk.get_validator('not_empty'),
52 tk.get_converter('convert_to_extras')],
53 'groups_list': [vd.groups_not_empty],
54 'caveats' : [tk.get_validator('ignore_missing'),
55 tk.get_converter('convert_to_extras')],
56 'dataset_source' : [tk.get_validator('not_empty'),
57 tk.get_converter('convert_to_extras')],
58 'dataset_date' : [tk.get_validator('ignore_missing'),
59 tk.get_converter('convert_to_extras')],
60 'methodology' : [tk.get_validator('ignore_missing'),
61 tk.get_converter('convert_to_extras')],
62 })
63
64 return schema
65
66
67 def create_package_schema(self):
68 schema = super(HdxMetadataFieldsPlugin, self).create_package_schema()
69 schema = self._modify_package_schema(schema)
70 return schema
71
72 def update_package_schema(self):
73 schema = super(HdxMetadataFieldsPlugin, self).update_package_schema()
74 schema = self._modify_package_schema(schema)
75 return schema
76
77 def show_package_schema(self):
78 schema = super(HdxMetadataFieldsPlugin, self).show_package_schema()
79 schema.update({
80 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required
81 'package_creator': [tk.get_converter('convert_from_extras'),
82 tk.get_validator('ignore_missing')],
83 'caveats' : [tk.get_converter('convert_from_extras'),
84 tk.get_validator('ignore_missing')],
85 'dataset_source' : [tk.get_converter('convert_from_extras'),
86 tk.get_validator('ignore_missing')],
87 'dataset_date' : [tk.get_converter('convert_from_extras'),
88 tk.get_validator('ignore_missing')],
89 'methodology' : [tk.get_converter('convert_from_extras'),
90 tk.get_validator('ignore_missing')],
91 })
92 return schema
93
94
95 def get_helpers(self):
96 return {'list_of_all_groups': list_of_all_groups}
97
98 def get_actions(self):
99 return {'package_update': update.package_update}
100
101
102
```
Path: `ckanext-hdx_theme/ckanext/hdx_theme/licenses.py`
Content:
```
1 '''
2 Created on May 12, 2014
3
4 @author: alexandru-m-g
5 '''
6
7 from ckan.common import _
8 from ckan.model.license import DefaultLicense
9
10
11 class LicenseCreativeCommonsIntergovernmentalOrgs(DefaultLicense):
12 # domain_content = True
13 # domain_data = True
14 id = "cc-by-igo"
15 is_okd_compliant = False
16 url = "http://creativecommons.org/licenses/by/3.0/igo/legalcode"
17
18 @property
19 def title(self):
20 return _("Creative Commons Attribution for Intergovernmental Organisations")
21
22 class LicenseCreativeCommonsNoDerives(DefaultLicense):
23 # domain_content = True
24 # domain_data = True
25 id = "cc-by-nd"
26 is_okd_compliant = False
27 url = "http://creativecommons.org/licenses/by-nd/3.0/legalcode"
28
29 @property
30 def title(self):
31 return _("Creative Commons Attribution-NoDerives")
32
33 class LicenseOtherPublicDomainNoRestrictions(DefaultLicense):
34 # domain_content = True
35 id = "other-pd-nr"
36 is_generic = True
37 is_okd_compliant = True
38
39 @property
40 def title(self):
41 return _("Public Domain / No Restrictions")
42
43 class LicenseHdxOther(DefaultLicense):
44 # domain_content = True
45 id = "hdx-other"
46 # is_generic = True
47 # is_okd_compliant = True
48
49 @property
50 def title(self):
51 return _("Other")
52
53
```
Path: `ckanext-hdx_theme/ckanext/hdx_theme/plugin.py`
Content:
```
1 import ckanext.hdx_theme.licenses as hdx_licenses
2 from beaker.cache import cache_regions
3
4 import ckan.plugins as plugins
5 import ckan.plugins.toolkit as toolkit
6 import ckan.model.package as package
7 import ckan.model.license as license
8 import version;
9
10 cache_regions.update({
11 'hdx_memory_cache':{
12 'expire': 172800, # 2 days
13 'type':'memory',
14 'key_length': 250
15 }
16 })
17
18 def _generate_license_list():
19 package.Package._license_register = license.LicenseRegister()
20 package.Package._license_register.licenses = [
21 license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),
22 license.License(license.LicenseCreativeCommonsAttribution()),
23 license.License(license.LicenseCreativeCommonsAttributionShareAlike()),
24 license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),
25 license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),
26 license.License(hdx_licenses.LicenseHdxOther())
27 ]
28
29 class HDXThemePlugin(plugins.SingletonPlugin):
30 plugins.implements(plugins.IConfigurer)
31 plugins.implements(plugins.IRoutes, inherit=True)
32 plugins.implements(plugins.ITemplateHelpers)
33 plugins.implements(plugins.IActions)
34
35 def update_config(self, config):
36 toolkit.add_template_directory(config, 'templates')
37 toolkit.add_public_directory(config, 'public')
38 toolkit.add_resource('fanstatic', 'hdx_theme')
39
40 def before_map(self, map):
41 map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')
42 map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')
43 map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')
44 map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')
45 map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')
46 map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')
47
48 map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')
49
50 # this is actually a HACK to force the customization of the license list.
51 # the license list should be changed to be based on a JSON rest service
52 _generate_license_list()
53
54 return map
55
56 def get_helpers(self):
57 from ckanext.hdx_theme import helpers as hdx_helpers
58 return {
59 'is_downloadable': hdx_helpers.is_downloadable,
60 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,
61 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,
62 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,
63 'get_last_revision_package':hdx_helpers.get_last_revision_package,
64 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,
65 'get_last_revision_group':hdx_helpers.get_last_revision_group,
66 'get_group_followers':hdx_helpers.get_group_followers,
67 'get_group_members':hdx_helpers.get_group_members,
68 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,
69 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,
70 'hdx_version':hdx_helpers.hdx_version,
71 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,
72 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items
73 }
74
75 def get_actions(self):
76 from ckanext.hdx_theme import actions as hdx_actions
77 return {
78 'organization_list_for_user':hdx_actions.organization_list_for_user
79 }
80
81
82
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py
@@ -40,6 +40,16 @@
def title(self):
return _("Public Domain / No Restrictions")
+class LicenseHdxMultiple(DefaultLicense):
+# domain_content = True
+ id = "hdx-multi"
+# is_generic = True
+# is_okd_compliant = True
+
+ @property
+ def title(self):
+ return _("Multiple Licenses")
+
class LicenseHdxOther(DefaultLicense):
# domain_content = True
id = "hdx-other"
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
@@ -23,6 +23,7 @@
license.License(license.LicenseCreativeCommonsAttributionShareAlike()),
license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),
license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),
+ license.License(hdx_licenses.LicenseHdxMultiple()),
license.License(hdx_licenses.LicenseHdxOther())
]
diff --git a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py
--- a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py
+++ b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py
@@ -59,6 +59,8 @@
tk.get_converter('convert_to_extras')],
'methodology' : [tk.get_validator('ignore_missing'),
tk.get_converter('convert_to_extras')],
+ 'license_other' : [tk.get_validator('ignore_missing'),
+ tk.get_converter('convert_to_extras')],
})
return schema
@@ -88,6 +90,8 @@
tk.get_validator('ignore_missing')],
'methodology' : [tk.get_converter('convert_from_extras'),
tk.get_validator('ignore_missing')],
+ 'license_other' : [tk.get_converter('convert_from_extras'),
+ tk.get_validator('ignore_missing')],
})
return schema
|
{"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py\n@@ -40,6 +40,16 @@\n def title(self):\n return _(\"Public Domain / No Restrictions\")\n \n+class LicenseHdxMultiple(DefaultLicense):\n+# domain_content = True\n+ id = \"hdx-multi\"\n+# is_generic = True\n+# is_okd_compliant = True\n+\n+ @property\n+ def title(self):\n+ return _(\"Multiple Licenses\")\n+\n class LicenseHdxOther(DefaultLicense):\n # domain_content = True\n id = \"hdx-other\"\ndiff --git a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n@@ -23,6 +23,7 @@\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n+ license.License(hdx_licenses.LicenseHdxMultiple()),\n license.License(hdx_licenses.LicenseHdxOther())\n ]\n \ndiff --git a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py\n--- a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py\n+++ b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py\n@@ -59,6 +59,8 @@\n tk.get_converter('convert_to_extras')],\n 'methodology' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n+ 'license_other' : [tk.get_validator('ignore_missing'),\n+ tk.get_converter('convert_to_extras')],\n })\n \n return schema\n@@ -88,6 +90,8 @@\n tk.get_validator('ignore_missing')],\n 'methodology' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n+ 'license_other' : [tk.get_converter('convert_from_extras'),\n+ tk.get_validator('ignore_missing')],\n })\n return schema\n", "issue": "Provide a free text field for the \"Other\" license and remove the CC-noDerives license\n\n", "before_files": [{"content": "'''\nCreated on Apr 10, 2014\n\n@author:alexandru-m-g\n'''\nimport logging\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nfrom routes.mapper import SubMapper\n\nimport ckanext.metadata_fields.custom_validator as vd\nimport ckanext.metadata_fields.update as update\n\ndef list_of_all_groups():\n groups = tk.get_action('group_list')(data_dict={'all_fields': True})\n return groups\n\n\nclass HdxMetadataFieldsPlugin(plugins.SingletonPlugin, tk.DefaultDatasetForm):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.IDatasetForm, inherit=False)\n plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def before_map(self, map):\n with SubMapper(map, controller='ckanext.metadata_fields.dataset_controller:DatasetController') as m:\n m.connect('add dataset', '/dataset/new', action='new')\n m.connect('/dataset/{action}/{id}',\n requirements=dict(action='|'.join([\n 'new_metadata',\n 'new_resource',\n ])))\n return map\n \n def is_fallback(self):\n return True\n\n def package_types(self):\n # default - no specific package type\n return []\n\n def _modify_package_schema(self, schema):\n \n schema.update({\n 'notes': [tk.get_validator('not_empty')], #Notes == 
description. Makes description required\n 'package_creator': [tk.get_validator('not_empty'),\n tk.get_converter('convert_to_extras')],\n 'groups_list': [vd.groups_not_empty],\n 'caveats' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n 'dataset_source' : [tk.get_validator('not_empty'),\n tk.get_converter('convert_to_extras')],\n 'dataset_date' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n 'methodology' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n })\n\n return schema\n\n\n def create_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).create_package_schema()\n schema = self._modify_package_schema(schema)\n return schema\n\n def update_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).update_package_schema()\n schema = self._modify_package_schema(schema)\n return schema\n\n def show_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).show_package_schema()\n schema.update({\n 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required\n 'package_creator': [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'caveats' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'dataset_source' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'dataset_date' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'methodology' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n })\n return schema\n \n \n def get_helpers(self):\n return {'list_of_all_groups': list_of_all_groups}\n \n def get_actions(self):\n return {'package_update': update.package_update}\n\n\n", "path": "ckanext-metadata_fields/ckanext/metadata_fields/plugin.py"}, {"content": "'''\nCreated on May 12, 2014\n\n@author: alexandru-m-g\n'''\n\nfrom ckan.common import _\nfrom ckan.model.license import DefaultLicense\n\n\nclass LicenseCreativeCommonsIntergovernmentalOrgs(DefaultLicense):\n# domain_content = True\n# domain_data = True\n id = \"cc-by-igo\"\n is_okd_compliant = False\n url = \"http://creativecommons.org/licenses/by/3.0/igo/legalcode\"\n\n @property\n def title(self):\n return _(\"Creative Commons Attribution for Intergovernmental Organisations\")\n \nclass LicenseCreativeCommonsNoDerives(DefaultLicense):\n# domain_content = True\n# domain_data = True\n id = \"cc-by-nd\"\n is_okd_compliant = False\n url = \"http://creativecommons.org/licenses/by-nd/3.0/legalcode\"\n\n @property\n def title(self):\n return _(\"Creative Commons Attribution-NoDerives\")\n \nclass LicenseOtherPublicDomainNoRestrictions(DefaultLicense):\n# domain_content = True\n id = \"other-pd-nr\"\n is_generic = True\n is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Public Domain / No Restrictions\")\n\nclass LicenseHdxOther(DefaultLicense):\n# domain_content = True\n id = \"hdx-other\"\n# is_generic = True\n# is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Other\")\n\n ", "path": "ckanext-hdx_theme/ckanext/hdx_theme/licenses.py"}, {"content": "import ckanext.hdx_theme.licenses as hdx_licenses\nfrom beaker.cache import cache_regions\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as toolkit\nimport ckan.model.package as package\nimport ckan.model.license as license\nimport version;\n\ncache_regions.update({\n 'hdx_memory_cache':{\n 'expire': 172800, # 2 
days\n 'type':'memory',\n 'key_length': 250\n }\n })\n\ndef _generate_license_list():\n package.Package._license_register = license.LicenseRegister() \n package.Package._license_register.licenses = [\n license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),\n license.License(license.LicenseCreativeCommonsAttribution()),\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n license.License(hdx_licenses.LicenseHdxOther())\n ]\n\nclass HDXThemePlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n\n def update_config(self, config):\n toolkit.add_template_directory(config, 'templates')\n toolkit.add_public_directory(config, 'public')\n toolkit.add_resource('fanstatic', 'hdx_theme')\n\n def before_map(self, map):\n map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')\n map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')\n map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')\n map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')\n map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')\n map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')\n \n map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')\n \n # this is actually a HACK to force the customization of the license list.\n # the license list should be changed to be based on a JSON rest service\n _generate_license_list()\n \n return map\n\n def get_helpers(self):\n from ckanext.hdx_theme import helpers as hdx_helpers\n return {\n 'is_downloadable': hdx_helpers.is_downloadable,\n 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,\n 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,\n 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,\n 'get_last_revision_package':hdx_helpers.get_last_revision_package,\n 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,\n 'get_last_revision_group':hdx_helpers.get_last_revision_group,\n 'get_group_followers':hdx_helpers.get_group_followers,\n 'get_group_members':hdx_helpers.get_group_members,\n 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,\n 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,\n 'hdx_version':hdx_helpers.hdx_version,\n 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,\n 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items\n }\n \n def get_actions(self):\n from ckanext.hdx_theme import actions as hdx_actions\n return {\n 'organization_list_for_user':hdx_actions.organization_list_for_user\n }\n \n \n\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/plugin.py"}], "after_files": [{"content": "'''\nCreated on Apr 10, 2014\n\n@author:alexandru-m-g\n'''\nimport logging\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nfrom routes.mapper import SubMapper\n\nimport ckanext.metadata_fields.custom_validator as vd\nimport ckanext.metadata_fields.update as update\n\ndef list_of_all_groups():\n 
groups = tk.get_action('group_list')(data_dict={'all_fields': True})\n return groups\n\n\nclass HdxMetadataFieldsPlugin(plugins.SingletonPlugin, tk.DefaultDatasetForm):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.IDatasetForm, inherit=False)\n plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def before_map(self, map):\n with SubMapper(map, controller='ckanext.metadata_fields.dataset_controller:DatasetController') as m:\n m.connect('add dataset', '/dataset/new', action='new')\n m.connect('/dataset/{action}/{id}',\n requirements=dict(action='|'.join([\n 'new_metadata',\n 'new_resource',\n ])))\n return map\n \n def is_fallback(self):\n return True\n\n def package_types(self):\n # default - no specific package type\n return []\n\n def _modify_package_schema(self, schema):\n \n schema.update({\n 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required\n 'package_creator': [tk.get_validator('not_empty'),\n tk.get_converter('convert_to_extras')],\n 'groups_list': [vd.groups_not_empty],\n 'caveats' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n 'dataset_source' : [tk.get_validator('not_empty'),\n tk.get_converter('convert_to_extras')],\n 'dataset_date' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n 'methodology' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n 'license_other' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n })\n\n return schema\n\n\n def create_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).create_package_schema()\n schema = self._modify_package_schema(schema)\n return schema\n\n def update_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).update_package_schema()\n schema = self._modify_package_schema(schema)\n return schema\n\n def show_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).show_package_schema()\n schema.update({\n 'notes': [tk.get_validator('not_empty')], #Notes == description. 
Makes description required\n 'package_creator': [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'caveats' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'dataset_source' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'dataset_date' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'methodology' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'license_other' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n })\n return schema\n \n \n def get_helpers(self):\n return {'list_of_all_groups': list_of_all_groups}\n \n def get_actions(self):\n return {'package_update': update.package_update}\n\n\n", "path": "ckanext-metadata_fields/ckanext/metadata_fields/plugin.py"}, {"content": "'''\nCreated on May 12, 2014\n\n@author: alexandru-m-g\n'''\n\nfrom ckan.common import _\nfrom ckan.model.license import DefaultLicense\n\n\nclass LicenseCreativeCommonsIntergovernmentalOrgs(DefaultLicense):\n# domain_content = True\n# domain_data = True\n id = \"cc-by-igo\"\n is_okd_compliant = False\n url = \"http://creativecommons.org/licenses/by/3.0/igo/legalcode\"\n\n @property\n def title(self):\n return _(\"Creative Commons Attribution for Intergovernmental Organisations\")\n \nclass LicenseCreativeCommonsNoDerives(DefaultLicense):\n# domain_content = True\n# domain_data = True\n id = \"cc-by-nd\"\n is_okd_compliant = False\n url = \"http://creativecommons.org/licenses/by-nd/3.0/legalcode\"\n\n @property\n def title(self):\n return _(\"Creative Commons Attribution-NoDerives\")\n \nclass LicenseOtherPublicDomainNoRestrictions(DefaultLicense):\n# domain_content = True\n id = \"other-pd-nr\"\n is_generic = True\n is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Public Domain / No Restrictions\")\n\nclass LicenseHdxMultiple(DefaultLicense):\n# domain_content = True\n id = \"hdx-multi\"\n# is_generic = True\n# is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Multiple Licenses\")\n\nclass LicenseHdxOther(DefaultLicense):\n# domain_content = True\n id = \"hdx-other\"\n# is_generic = True\n# is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Other\")\n\n ", "path": "ckanext-hdx_theme/ckanext/hdx_theme/licenses.py"}, {"content": "import ckanext.hdx_theme.licenses as hdx_licenses\nfrom beaker.cache import cache_regions\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as toolkit\nimport ckan.model.package as package\nimport ckan.model.license as license\nimport version;\n\ncache_regions.update({\n 'hdx_memory_cache':{\n 'expire': 172800, # 2 days\n 'type':'memory',\n 'key_length': 250\n }\n })\n\ndef _generate_license_list():\n package.Package._license_register = license.LicenseRegister() \n package.Package._license_register.licenses = [\n license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),\n license.License(license.LicenseCreativeCommonsAttribution()),\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n license.License(hdx_licenses.LicenseHdxMultiple()),\n license.License(hdx_licenses.LicenseHdxOther())\n ]\n\nclass HDXThemePlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer)\n plugins.implements(plugins.IRoutes, inherit=True)\n 
plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n\n def update_config(self, config):\n toolkit.add_template_directory(config, 'templates')\n toolkit.add_public_directory(config, 'public')\n toolkit.add_resource('fanstatic', 'hdx_theme')\n\n def before_map(self, map):\n map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')\n map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')\n map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')\n map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')\n map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')\n map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')\n \n map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')\n \n # this is actually a HACK to force the customization of the license list.\n # the license list should be changed to be based on a JSON rest service\n _generate_license_list()\n \n return map\n\n def get_helpers(self):\n from ckanext.hdx_theme import helpers as hdx_helpers\n return {\n 'is_downloadable': hdx_helpers.is_downloadable,\n 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,\n 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,\n 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,\n 'get_last_revision_package':hdx_helpers.get_last_revision_package,\n 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,\n 'get_last_revision_group':hdx_helpers.get_last_revision_group,\n 'get_group_followers':hdx_helpers.get_group_followers,\n 'get_group_members':hdx_helpers.get_group_members,\n 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,\n 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,\n 'hdx_version':hdx_helpers.hdx_version,\n 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,\n 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items\n }\n \n def get_actions(self):\n from ckanext.hdx_theme import actions as hdx_actions\n return {\n 'organization_list_for_user':hdx_actions.organization_list_for_user\n }\n \n \n\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/plugin.py"}]}
| 2,764 | 588 |
gh_patches_debug_2780
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-40614
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
asa_config Python3 Compatibility Issue for "backup"
<!---
Verify first that your issue/request is not already reported on GitHub.
THIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.
Also test if the latest release, and devel branch are affected too.
ALWAYS add information AFTER (OUTSIDE) these html comments.
Otherwise it may end up being automatically closed by our bot. -->
##### SUMMARY
"backup" in asa_config fails on Python 3.6.3 with Ansible 2.5.2. Same issue as [36717](https://github.com/ansible/ansible/issues/36717) but for asa_config.
Changing line 58 of asa_config.py from: `for key in result.keys()`
To either: `for key in result.copy().keys():`
Or: `for key in list(result)`
Should sort this out for py2 or py3.
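
A minimal, self-contained sketch (illustrative only, not code from the plugin; the `startswith`/`endswith` check merely stands in for its `PRIVATE_KEYS_RE`) of why the current loop breaks under Python 3 and why `list(result)` is safe:

```python
# Illustration of the Python 2 vs Python 3 behaviour behind this report.
result = {"__backup__": "running-config", "changed": True}

# Python 2: result.keys() returns a *list* copy, so deleting entries while
# looping over it is safe.
# Python 3: result.keys() is a live dict_keys view, so deleting while
# iterating raises "RuntimeError: dictionary changed size during iteration",
# and dict_keys has no .copy() method (the AttributeError in the traceback).

for key in list(result):  # materialise the keys first; works on both 2 and 3
    if key.startswith("__") and key.endswith("__"):  # stand-in for PRIVATE_KEYS_RE
        del result[key]

print(result)  # {'changed': True}
```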
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.
Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path-->
asa_config
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
ansible 2.5.2
config file = /home/ignw/my_network_as_code/ansible.cfg
configured module search path = ['/usr/local/lib/python3.6/dist-packages/napalm_ansible/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0]
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).-->
DEFAULT_ACTION_PLUGIN_PATH(/home/ignw/my_network_as_code/ansible.cfg) = ['/usr/local/lib/python3.6/dist-packages/napalm_ansible/plug
DEFAULT_HOST_LIST(/home/ignw/my_network_as_code/ansible.cfg) = ['/home/ignw/my_network_as_code/inventory']
DEFAULT_MODULE_PATH(/home/ignw/my_network_as_code/ansible.cfg) = ['/usr/local/lib/python3.6/dist-packages/napalm_ansible/modules']
HOST_KEY_CHECKING(/home/ignw/my_network_as_code/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/ignw/my_network_as_code/ansible.cfg) = False
##### OS / ENVIRONMENT
<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.-->
Distributor ID: Ubuntu
Description: Ubuntu 17.10
Release: 17.10
Codename: artful
Network device (Cisco ASAv):
Cisco Adaptive Security Appliance Software Version 9.9(2)
Firepower Extensible Operating System Version 2.3(1.84)
Device Manager Version 7.9(2)
Compiled on Sun 25-Mar-18 17:34 PDT by builders
System image file is "boot:/asa992-smp-k8.bin"
Config file at boot was "startup-config"
Hardware: ASAv, 1024 MB RAM, CPU Clarkdale 2300 MHz,
Model Id: ASAv5
Internal ATA Compact Flash, 1024MB
Slot 1: ATA Compact Flash, 8192MB
BIOS Flash Firmware Hub @ 0x0, 0KB
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Backup Cisco ASA Configurations
connection: local
hosts: cisco-asa
gather_facts: no
vars:
creds:
host: "{{ ansible_host }}"
username: "{{ username }}"
password: "{{ username }}"
authorize: yes
auth_pass: "{{ enable_password }}"
tags: asa
tasks:
- asa_config:
provider: "{{ creds }}"
backup: yes
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Backup of configuration to be placed in backup directory
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
<10.0.0.8> <10.0.0.8> ssh connection has completed successfully
<10.0.0.8> connection to remote device started successfully
<10.0.0.8> local domain socket listeners started successfully
<10.0.0.8>
<10.0.0.8> local domain socket path is /home/ignw/.ansible/pc/8617761c70
<10.0.0.8> socket_path: /home/ignw/.ansible/pc/8617761c70
Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/network/asa/asa_config.py
<10.0.0.8> ESTABLISH LOCAL CONNECTION FOR USER: ignw
<10.0.0.8> EXEC /bin/sh -c 'echo ~ && sleep 0'
<10.0.0.8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411 `" && echo ansible-tmp-1526941893.6014657-134187020317411="` echo /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411 `" ) && sleep 0'
<10.0.0.8> PUT /home/ignw/.ansible/tmp/ansible-local-24856l3y7x_n7/tmpq9jw7ue_ TO /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/asa_config.py
<10.0.0.8> EXEC /bin/sh -c 'chmod u+x /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/ /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/asa_config.py && sleep 0'
<10.0.0.8> EXEC /bin/sh -c '/usr/bin/python3 /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/asa_config.py && sleep 0'
<10.0.0.8> EXEC /bin/sh -c 'rm -f -r /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/ansible/executor/task_executor.py", line 138, in run
res = self._execute()
File "/usr/local/lib/python3.6/dist-packages/ansible/executor/task_executor.py", line 558, in _execute
result = self._handler.run(task_vars=variables)
File "/usr/local/lib/python3.6/dist-packages/ansible/plugins/action/asa_config.py", line 58, in run
for key in result.keys().copy():
AttributeError: 'dict_keys' object has no attribute 'copy'
fatal: [acme-sea-asa1]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP *************************************************************************************************************************
acme-sea-asa1 : ok=0 changed=0 unreachable=0 failed=1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/plugins/action/asa_config.py`
Content:
```
1 #
2 # (c) 2017, Red Hat, Inc.
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18 #
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 import os
23 import re
24 import time
25 import glob
26
27 from ansible.plugins.action.asa import ActionModule as _ActionModule
28 from ansible.module_utils._text import to_text
29 from ansible.module_utils.six.moves.urllib.parse import urlsplit
30 from ansible.utils.vars import merge_hash
31
32 PRIVATE_KEYS_RE = re.compile('__.+__')
33
34
35 class ActionModule(_ActionModule):
36
37 def run(self, tmp=None, task_vars=None):
38
39 if self._task.args.get('src'):
40 try:
41 self._handle_template()
42 except ValueError as exc:
43 return dict(failed=True, msg=exc.message)
44
45 result = super(ActionModule, self).run(tmp, task_vars)
46 del tmp # tmp no longer has any effect
47
48 if self._task.args.get('backup') and result.get('__backup__'):
49 # User requested backup and no error occurred in module.
50 # NOTE: If there is a parameter error, _backup key may not be in results.
51 filepath = self._write_backup(task_vars['inventory_hostname'],
52 result['__backup__'])
53
54 result['backup_path'] = filepath
55
56 # strip out any keys that have two leading and two trailing
57 # underscore characters
58 for key in result.keys():
59 if PRIVATE_KEYS_RE.match(key):
60 del result[key]
61
62 return result
63
64 def _get_working_path(self):
65 cwd = self._loader.get_basedir()
66 if self._task._role is not None:
67 cwd = self._task._role._role_path
68 return cwd
69
70 def _write_backup(self, host, contents):
71 backup_path = self._get_working_path() + '/backup'
72 if not os.path.exists(backup_path):
73 os.mkdir(backup_path)
74 for fn in glob.glob('%s/%s*' % (backup_path, host)):
75 os.remove(fn)
76 tstamp = time.strftime("%Y-%m-%d@%H:%M:%S", time.localtime(time.time()))
77 filename = '%s/%s_config.%s' % (backup_path, host, tstamp)
78 open(filename, 'w').write(contents)
79 return filename
80
81 def _handle_template(self):
82 src = self._task.args.get('src')
83 working_path = self._get_working_path()
84
85 if os.path.isabs(src) or urlsplit('src').scheme:
86 source = src
87 else:
88 source = self._loader.path_dwim_relative(working_path, 'templates', src)
89 if not source:
90 source = self._loader.path_dwim_relative(working_path, src)
91
92 if not os.path.exists(source):
93 raise ValueError('path specified in src not found')
94
95 try:
96 with open(source, 'r') as f:
97 template_data = to_text(f.read())
98 except IOError:
99 return dict(failed=True, msg='unable to load src file')
100
101 # Create a template search path in the following order:
102 # [working_path, self_role_path, dependent_role_paths, dirname(source)]
103 searchpath = [working_path]
104 if self._task._role is not None:
105 searchpath.append(self._task._role._role_path)
106 if hasattr(self._task, "_block:"):
107 dep_chain = self._task._block.get_dep_chain()
108 if dep_chain is not None:
109 for role in dep_chain:
110 searchpath.append(role._role_path)
111 searchpath.append(os.path.dirname(source))
112 self._templar.environment.loader.searchpath = searchpath
113 self._task.args['src'] = self._templar.template(template_data)
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/ansible/plugins/action/asa_config.py b/lib/ansible/plugins/action/asa_config.py
--- a/lib/ansible/plugins/action/asa_config.py
+++ b/lib/ansible/plugins/action/asa_config.py
@@ -55,7 +55,7 @@
# strip out any keys that have two leading and two trailing
# underscore characters
- for key in result.keys():
+ for key in list(result):
if PRIVATE_KEYS_RE.match(key):
del result[key]
|
{"golden_diff": "diff --git a/lib/ansible/plugins/action/asa_config.py b/lib/ansible/plugins/action/asa_config.py\n--- a/lib/ansible/plugins/action/asa_config.py\n+++ b/lib/ansible/plugins/action/asa_config.py\n@@ -55,7 +55,7 @@\n \n # strip out any keys that have two leading and two trailing\n # underscore characters\n- for key in result.keys():\n+ for key in list(result):\n if PRIVATE_KEYS_RE.match(key):\n del result[key]\n", "issue": "asa_config Python3 Compatibility Issue for \"backup\"\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nTHIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.\r\nAlso test if the latest release, and devel branch are affected too.\r\nALWAYS add information AFTER (OUTSIDE) these html comments.\r\nOtherwise it may end up being automatically closed by our bot. -->\r\n\r\n##### SUMMARY\r\n\"backup\" in asa_config fails on Python 3.6.3 with Ansible 2.5.2. Same issue as [36717](https://github.com/ansible/ansible/issues/36717) but for asa_config.\r\n\r\nChanging line 58 of asa_config.py from:` for key in result.keys()`\r\nTo either: `for key in result.copy().keys():`\r\nOr: `for key in list(result)`\r\n\r\nShould sort this out for py2 or py3.\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.\r\nDo not include extra details here, e.g. \"vyos_command\" not \"the network module vyos_command\" or the full path-->\r\nasa_config\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.5.2\r\n config file = /home/ignw/my_network_as_code/ansible.cfg\r\n configured module search path = ['/usr/local/lib/python3.6/dist-packages/napalm_ansible/modules']\r\n ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).-->\r\nDEFAULT_ACTION_PLUGIN_PATH(/home/ignw/my_network_as_code/ansible.cfg) = ['/usr/local/lib/python3.6/dist-packages/napalm_ansible/plug\r\nDEFAULT_HOST_LIST(/home/ignw/my_network_as_code/ansible.cfg) = ['/home/ignw/my_network_as_code/inventory']\r\nDEFAULT_MODULE_PATH(/home/ignw/my_network_as_code/ansible.cfg) = ['/usr/local/lib/python3.6/dist-packages/napalm_ansible/modules']\r\nHOST_KEY_CHECKING(/home/ignw/my_network_as_code/ansible.cfg) = False\r\nRETRY_FILES_ENABLED(/home/ignw/my_network_as_code/ansible.cfg) = False\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. 
if this is a network bug the version of firmware on the network device.-->\r\nDistributor ID:\tUbuntu\r\nDescription:\tUbuntu 17.10\r\nRelease:\t17.10\r\nCodename:\tartful\r\n\r\nNetwork device (Cisco ASAv):\r\nCisco Adaptive Security Appliance Software Version 9.9(2)\r\nFirepower Extensible Operating System Version 2.3(1.84)\r\nDevice Manager Version 7.9(2)\r\n\r\nCompiled on Sun 25-Mar-18 17:34 PDT by builders\r\nSystem image file is \"boot:/asa992-smp-k8.bin\"\r\nConfig file at boot was \"startup-config\"\r\n\r\nHardware: ASAv, 1024 MB RAM, CPU Clarkdale 2300 MHz,\r\nModel Id: ASAv5\r\nInternal ATA Compact Flash, 1024MB\r\nSlot 1: ATA Compact Flash, 8192MB\r\nBIOS Flash Firmware Hub @ 0x0, 0KB\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: Backup Cisco ASA Configurations\r\n connection: local\r\n hosts: cisco-asa\r\n gather_facts: no\r\n vars:\r\n creds:\r\n host: \"{{ ansible_host }}\"\r\n username: \"{{ username }}\"\r\n password: \"{{ username }}\"\r\n authorize: yes\r\n auth_pass: \"{{ enable_password }}\"\r\n tags: asa\r\n tasks:\r\n - asa_config:\r\n provider: \"{{ creds }}\"\r\n backup: yes\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nBackup of configuration to be placed in backup directory\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\n<10.0.0.8> <10.0.0.8> ssh connection has completed successfully\r\n<10.0.0.8> connection to remote device started successfully\r\n<10.0.0.8> local domain socket listeners started successfully\r\n<10.0.0.8>\r\n<10.0.0.8> local domain socket path is /home/ignw/.ansible/pc/8617761c70\r\n<10.0.0.8> socket_path: /home/ignw/.ansible/pc/8617761c70\r\nUsing module file /usr/local/lib/python3.6/dist-packages/ansible/modules/network/asa/asa_config.py\r\n<10.0.0.8> ESTABLISH LOCAL CONNECTION FOR USER: ignw\r\n<10.0.0.8> EXEC /bin/sh -c 'echo ~ && sleep 0'\r\n<10.0.0.8> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411 `\" && echo ansible-tmp-1526941893.6014657-134187020317411=\"` echo /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411 `\" ) && sleep 0'\r\n<10.0.0.8> PUT /home/ignw/.ansible/tmp/ansible-local-24856l3y7x_n7/tmpq9jw7ue_ TO /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/asa_config.py\r\n<10.0.0.8> EXEC /bin/sh -c 'chmod u+x /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/ /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/asa_config.py && sleep 0'\r\n<10.0.0.8> EXEC /bin/sh -c '/usr/bin/python3 /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/asa_config.py && sleep 0'\r\n<10.0.0.8> EXEC /bin/sh -c 'rm -f -r /home/ignw/.ansible/tmp/ansible-tmp-1526941893.6014657-134187020317411/ > /dev/null 2>&1 && sleep 0'\r\nThe full traceback is:\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/ansible/executor/task_executor.py\", line 138, in run\r\n res = self._execute()\r\n File 
\"/usr/local/lib/python3.6/dist-packages/ansible/executor/task_executor.py\", line 558, in _execute\r\n result = self._handler.run(task_vars=variables)\r\n File \"/usr/local/lib/python3.6/dist-packages/ansible/plugins/action/asa_config.py\", line 58, in run\r\n for key in result.keys().copy():\r\nAttributeError: 'dict_keys' object has no attribute 'copy'\r\n\r\nfatal: [acme-sea-asa1]: FAILED! => {\r\n \"msg\": \"Unexpected failure during module execution.\",\r\n \"stdout\": \"\"\r\n}\r\n\r\nPLAY RECAP *************************************************************************************************************************\r\nacme-sea-asa1 : ok=0 changed=0 unreachable=0 failed=1\r\n```\r\n\n", "before_files": [{"content": "#\n# (c) 2017, Red Hat, Inc.\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\nimport re\nimport time\nimport glob\n\nfrom ansible.plugins.action.asa import ActionModule as _ActionModule\nfrom ansible.module_utils._text import to_text\nfrom ansible.module_utils.six.moves.urllib.parse import urlsplit\nfrom ansible.utils.vars import merge_hash\n\nPRIVATE_KEYS_RE = re.compile('__.+__')\n\n\nclass ActionModule(_ActionModule):\n\n def run(self, tmp=None, task_vars=None):\n\n if self._task.args.get('src'):\n try:\n self._handle_template()\n except ValueError as exc:\n return dict(failed=True, msg=exc.message)\n\n result = super(ActionModule, self).run(tmp, task_vars)\n del tmp # tmp no longer has any effect\n\n if self._task.args.get('backup') and result.get('__backup__'):\n # User requested backup and no error occurred in module.\n # NOTE: If there is a parameter error, _backup key may not be in results.\n filepath = self._write_backup(task_vars['inventory_hostname'],\n result['__backup__'])\n\n result['backup_path'] = filepath\n\n # strip out any keys that have two leading and two trailing\n # underscore characters\n for key in result.keys():\n if PRIVATE_KEYS_RE.match(key):\n del result[key]\n\n return result\n\n def _get_working_path(self):\n cwd = self._loader.get_basedir()\n if self._task._role is not None:\n cwd = self._task._role._role_path\n return cwd\n\n def _write_backup(self, host, contents):\n backup_path = self._get_working_path() + '/backup'\n if not os.path.exists(backup_path):\n os.mkdir(backup_path)\n for fn in glob.glob('%s/%s*' % (backup_path, host)):\n os.remove(fn)\n tstamp = time.strftime(\"%Y-%m-%d@%H:%M:%S\", time.localtime(time.time()))\n filename = '%s/%s_config.%s' % (backup_path, host, tstamp)\n open(filename, 'w').write(contents)\n return filename\n\n def _handle_template(self):\n src = self._task.args.get('src')\n working_path = self._get_working_path()\n\n if os.path.isabs(src) or urlsplit('src').scheme:\n source = src\n else:\n source = self._loader.path_dwim_relative(working_path, 'templates', src)\n if not source:\n source = 
self._loader.path_dwim_relative(working_path, src)\n\n if not os.path.exists(source):\n raise ValueError('path specified in src not found')\n\n try:\n with open(source, 'r') as f:\n template_data = to_text(f.read())\n except IOError:\n return dict(failed=True, msg='unable to load src file')\n\n # Create a template search path in the following order:\n # [working_path, self_role_path, dependent_role_paths, dirname(source)]\n searchpath = [working_path]\n if self._task._role is not None:\n searchpath.append(self._task._role._role_path)\n if hasattr(self._task, \"_block:\"):\n dep_chain = self._task._block.get_dep_chain()\n if dep_chain is not None:\n for role in dep_chain:\n searchpath.append(role._role_path)\n searchpath.append(os.path.dirname(source))\n self._templar.environment.loader.searchpath = searchpath\n self._task.args['src'] = self._templar.template(template_data)\n", "path": "lib/ansible/plugins/action/asa_config.py"}], "after_files": [{"content": "#\n# (c) 2017, Red Hat, Inc.\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\nimport re\nimport time\nimport glob\n\nfrom ansible.plugins.action.asa import ActionModule as _ActionModule\nfrom ansible.module_utils._text import to_text\nfrom ansible.module_utils.six.moves.urllib.parse import urlsplit\nfrom ansible.utils.vars import merge_hash\n\nPRIVATE_KEYS_RE = re.compile('__.+__')\n\n\nclass ActionModule(_ActionModule):\n\n def run(self, tmp=None, task_vars=None):\n\n if self._task.args.get('src'):\n try:\n self._handle_template()\n except ValueError as exc:\n return dict(failed=True, msg=exc.message)\n\n result = super(ActionModule, self).run(tmp, task_vars)\n del tmp # tmp no longer has any effect\n\n if self._task.args.get('backup') and result.get('__backup__'):\n # User requested backup and no error occurred in module.\n # NOTE: If there is a parameter error, _backup key may not be in results.\n filepath = self._write_backup(task_vars['inventory_hostname'],\n result['__backup__'])\n\n result['backup_path'] = filepath\n\n # strip out any keys that have two leading and two trailing\n # underscore characters\n for key in list(result):\n if PRIVATE_KEYS_RE.match(key):\n del result[key]\n\n return result\n\n def _get_working_path(self):\n cwd = self._loader.get_basedir()\n if self._task._role is not None:\n cwd = self._task._role._role_path\n return cwd\n\n def _write_backup(self, host, contents):\n backup_path = self._get_working_path() + '/backup'\n if not os.path.exists(backup_path):\n os.mkdir(backup_path)\n for fn in glob.glob('%s/%s*' % (backup_path, host)):\n os.remove(fn)\n tstamp = time.strftime(\"%Y-%m-%d@%H:%M:%S\", time.localtime(time.time()))\n filename = '%s/%s_config.%s' % (backup_path, host, tstamp)\n open(filename, 'w').write(contents)\n return filename\n\n def _handle_template(self):\n src = 
self._task.args.get('src')\n working_path = self._get_working_path()\n\n if os.path.isabs(src) or urlsplit('src').scheme:\n source = src\n else:\n source = self._loader.path_dwim_relative(working_path, 'templates', src)\n if not source:\n source = self._loader.path_dwim_relative(working_path, src)\n\n if not os.path.exists(source):\n raise ValueError('path specified in src not found')\n\n try:\n with open(source, 'r') as f:\n template_data = to_text(f.read())\n except IOError:\n return dict(failed=True, msg='unable to load src file')\n\n # Create a template search path in the following order:\n # [working_path, self_role_path, dependent_role_paths, dirname(source)]\n searchpath = [working_path]\n if self._task._role is not None:\n searchpath.append(self._task._role._role_path)\n if hasattr(self._task, \"_block:\"):\n dep_chain = self._task._block.get_dep_chain()\n if dep_chain is not None:\n for role in dep_chain:\n searchpath.append(role._role_path)\n searchpath.append(os.path.dirname(source))\n self._templar.environment.loader.searchpath = searchpath\n self._task.args['src'] = self._templar.template(template_data)\n", "path": "lib/ansible/plugins/action/asa_config.py"}]}
| 3,550 | 109 |
gh_patches_debug_6853
|
rasdani/github-patches
|
git_diff
|
encode__httpx-2355
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multipart doesn't support tuple data value
This works:
```python
client.post(
url,
data={"foo": ("1", "2")}, # tuple
)
```
This works:
```python
client.post(
url,
data={"foo": ["1", "2"]}, # list
files={"test": b"test"},
)
```
This fails:
```python
client.post(
url,
data={"foo": ("1", "2")}, # tuple
files={"test": b"test"},
)
```
<details>
<summary>Traceback</summary>
```
File "httpx/_client.py", line 356, in build_request
return Request(
File "httpx/_models.py", line 336, in __init__
headers, stream = encode_request(content, data, files, json)
File "httpx/_content.py", line 210, in encode_request
return encode_multipart_data(data or {}, files, boundary)
File "httpx/_content.py", line 155, in encode_multipart_data
multipart = MultipartStream(data=data, files=files, boundary=boundary)
File "httpx/_multipart.py", line 188, in __init__
self.fields = list(self._iter_fields(data, files))
File "httpx/_multipart.py", line 198, in _iter_fields
yield DataField(name=name, value=value)
File "httpx/_multipart.py", line 36, in __init__
raise TypeError(
TypeError: Invalid type for value. Expected primitive type, got <class 'tuple'>: ('1', '2')
```
</details>
I guess this line:
https://github.com/encode/httpx/blob/93de1980fa77f15c6b23cbaf2422c0a812caf243/httpx/_multipart.py#L194
should be implemented in the same way as this line:
https://github.com/encode/httpx/blob/93de1980fa77f15c6b23cbaf2422c0a812caf243/httpx/_content.py#L141
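
A self-contained sketch of the behaviour that check should have (tuples fanned out like lists, as `_content.py` line 141 already does). This is a simplified stand-alone function, not the `MultipartStream._iter_fields` method itself:

```python
# Standalone illustration of the proposed check, not the full MultipartStream class.
def expand_field(name, value):
    """Yield (name, item) pairs, fanning out list *and* tuple values."""
    if isinstance(value, (tuple, list)):  # tuple added, mirroring _content.py line 141
        for item in value:
            yield name, item
    else:
        yield name, value

print(list(expand_field("foo", ("1", "2"))))  # [('foo', '1'), ('foo', '2')]
print(list(expand_field("bar", "baz")))       # [('bar', 'baz')]
```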
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/_multipart.py`
Content:
```
1 import binascii
2 import io
3 import os
4 import typing
5 from pathlib import Path
6
7 from ._types import (
8 AsyncByteStream,
9 FileContent,
10 FileTypes,
11 RequestFiles,
12 SyncByteStream,
13 )
14 from ._utils import (
15 format_form_param,
16 guess_content_type,
17 peek_filelike_length,
18 primitive_value_to_str,
19 to_bytes,
20 )
21
22
23 def get_multipart_boundary_from_content_type(
24 content_type: typing.Optional[bytes],
25 ) -> typing.Optional[bytes]:
26 if not content_type or not content_type.startswith(b"multipart/form-data"):
27 return None
28 # parse boundary according to
29 # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1
30 if b";" in content_type:
31 for section in content_type.split(b";"):
32 if section.strip().lower().startswith(b"boundary="):
33 return section.strip()[len(b"boundary=") :].strip(b'"')
34 return None
35
36
37 class DataField:
38 """
39 A single form field item, within a multipart form field.
40 """
41
42 def __init__(
43 self, name: str, value: typing.Union[str, bytes, int, float, None]
44 ) -> None:
45 if not isinstance(name, str):
46 raise TypeError(
47 f"Invalid type for name. Expected str, got {type(name)}: {name!r}"
48 )
49 if value is not None and not isinstance(value, (str, bytes, int, float)):
50 raise TypeError(
51 f"Invalid type for value. Expected primitive type, got {type(value)}: {value!r}"
52 )
53 self.name = name
54 self.value: typing.Union[str, bytes] = (
55 value if isinstance(value, bytes) else primitive_value_to_str(value)
56 )
57
58 def render_headers(self) -> bytes:
59 if not hasattr(self, "_headers"):
60 name = format_form_param("name", self.name)
61 self._headers = b"".join(
62 [b"Content-Disposition: form-data; ", name, b"\r\n\r\n"]
63 )
64
65 return self._headers
66
67 def render_data(self) -> bytes:
68 if not hasattr(self, "_data"):
69 self._data = to_bytes(self.value)
70
71 return self._data
72
73 def get_length(self) -> int:
74 headers = self.render_headers()
75 data = self.render_data()
76 return len(headers) + len(data)
77
78 def render(self) -> typing.Iterator[bytes]:
79 yield self.render_headers()
80 yield self.render_data()
81
82
83 class FileField:
84 """
85 A single file field item, within a multipart form field.
86 """
87
88 CHUNK_SIZE = 64 * 1024
89
90 def __init__(self, name: str, value: FileTypes) -> None:
91 self.name = name
92
93 fileobj: FileContent
94
95 headers: typing.Dict[str, str] = {}
96 content_type: typing.Optional[str] = None
97
98 # This large tuple based API largely mirror's requests' API
99 # It would be good to think of better APIs for this that we could include in httpx 2.0
100 # since variable length tuples (especially of 4 elements) are quite unwieldly
101 if isinstance(value, tuple):
102 if len(value) == 2:
103 # neither the 3rd parameter (content_type) nor the 4th (headers) was included
104 filename, fileobj = value # type: ignore
105 elif len(value) == 3:
106 filename, fileobj, content_type = value # type: ignore
107 else:
108 # all 4 parameters included
109 filename, fileobj, content_type, headers = value # type: ignore
110 else:
111 filename = Path(str(getattr(value, "name", "upload"))).name
112 fileobj = value
113
114 if content_type is None:
115 content_type = guess_content_type(filename)
116
117 has_content_type_header = any("content-type" in key.lower() for key in headers)
118 if content_type is not None and not has_content_type_header:
119 # note that unlike requests, we ignore the content_type
120 # provided in the 3rd tuple element if it is also included in the headers
121 # requests does the opposite (it overwrites the header with the 3rd tuple element)
122 headers["Content-Type"] = content_type
123
124 if isinstance(fileobj, (str, io.StringIO)):
125 raise TypeError(f"Expected bytes or bytes-like object got: {type(fileobj)}")
126
127 self.filename = filename
128 self.file = fileobj
129 self.headers = headers
130
131 def get_length(self) -> int:
132 headers = self.render_headers()
133
134 if isinstance(self.file, (str, bytes)):
135 return len(headers) + len(to_bytes(self.file))
136
137 # Let's do our best not to read `file` into memory.
138 file_length = peek_filelike_length(self.file)
139 if file_length is None:
140 # As a last resort, read file and cache contents for later.
141 assert not hasattr(self, "_data")
142 self._data = to_bytes(self.file.read())
143 file_length = len(self._data)
144
145 return len(headers) + file_length
146
147 def render_headers(self) -> bytes:
148 if not hasattr(self, "_headers"):
149 parts = [
150 b"Content-Disposition: form-data; ",
151 format_form_param("name", self.name),
152 ]
153 if self.filename:
154 filename = format_form_param("filename", self.filename)
155 parts.extend([b"; ", filename])
156 for header_name, header_value in self.headers.items():
157 key, val = f"\r\n{header_name}: ".encode(), header_value.encode()
158 parts.extend([key, val])
159 parts.append(b"\r\n\r\n")
160 self._headers = b"".join(parts)
161
162 return self._headers
163
164 def render_data(self) -> typing.Iterator[bytes]:
165 if isinstance(self.file, (str, bytes)):
166 yield to_bytes(self.file)
167 return
168
169 if hasattr(self, "_data"):
170 # Already rendered.
171 yield self._data
172 return
173
174 if hasattr(self.file, "seek"):
175 self.file.seek(0)
176
177 chunk = self.file.read(self.CHUNK_SIZE)
178 while chunk:
179 yield to_bytes(chunk)
180 chunk = self.file.read(self.CHUNK_SIZE)
181
182 def render(self) -> typing.Iterator[bytes]:
183 yield self.render_headers()
184 yield from self.render_data()
185
186
187 class MultipartStream(SyncByteStream, AsyncByteStream):
188 """
189 Request content as streaming multipart encoded form data.
190 """
191
192 def __init__(
193 self, data: dict, files: RequestFiles, boundary: typing.Optional[bytes] = None
194 ) -> None:
195 if boundary is None:
196 boundary = binascii.hexlify(os.urandom(16))
197
198 self.boundary = boundary
199 self.content_type = "multipart/form-data; boundary=%s" % boundary.decode(
200 "ascii"
201 )
202 self.fields = list(self._iter_fields(data, files))
203
204 def _iter_fields(
205 self, data: dict, files: RequestFiles
206 ) -> typing.Iterator[typing.Union[FileField, DataField]]:
207 for name, value in data.items():
208 if isinstance(value, list):
209 for item in value:
210 yield DataField(name=name, value=item)
211 else:
212 yield DataField(name=name, value=value)
213
214 file_items = files.items() if isinstance(files, typing.Mapping) else files
215 for name, value in file_items:
216 yield FileField(name=name, value=value)
217
218 def iter_chunks(self) -> typing.Iterator[bytes]:
219 for field in self.fields:
220 yield b"--%s\r\n" % self.boundary
221 yield from field.render()
222 yield b"\r\n"
223 yield b"--%s--\r\n" % self.boundary
224
225 def iter_chunks_lengths(self) -> typing.Iterator[int]:
226 boundary_length = len(self.boundary)
227 # Follow closely what `.iter_chunks()` does.
228 for field in self.fields:
229 yield 2 + boundary_length + 2
230 yield field.get_length()
231 yield 2
232 yield 2 + boundary_length + 4
233
234 def get_content_length(self) -> int:
235 return sum(self.iter_chunks_lengths())
236
237 # Content stream interface.
238
239 def get_headers(self) -> typing.Dict[str, str]:
240 content_length = str(self.get_content_length())
241 content_type = self.content_type
242 return {"Content-Length": content_length, "Content-Type": content_type}
243
244 def __iter__(self) -> typing.Iterator[bytes]:
245 for chunk in self.iter_chunks():
246 yield chunk
247
248 async def __aiter__(self) -> typing.AsyncIterator[bytes]:
249 for chunk in self.iter_chunks():
250 yield chunk
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/httpx/_multipart.py b/httpx/_multipart.py
--- a/httpx/_multipart.py
+++ b/httpx/_multipart.py
@@ -205,7 +205,7 @@
self, data: dict, files: RequestFiles
) -> typing.Iterator[typing.Union[FileField, DataField]]:
for name, value in data.items():
- if isinstance(value, list):
+ if isinstance(value, (tuple, list)):
for item in value:
yield DataField(name=name, value=item)
else:
|
{"golden_diff": "diff --git a/httpx/_multipart.py b/httpx/_multipart.py\n--- a/httpx/_multipart.py\n+++ b/httpx/_multipart.py\n@@ -205,7 +205,7 @@\n self, data: dict, files: RequestFiles\n ) -> typing.Iterator[typing.Union[FileField, DataField]]:\n for name, value in data.items():\n- if isinstance(value, list):\n+ if isinstance(value, (tuple, list)):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n", "issue": "Multipart doesn't support tuple data value\nThis works:\r\n\r\n```python\r\nclient.post(\r\n url,\r\n data={\"foo\": (\"1\", \"2\")}, # tuple\r\n)\r\n```\r\n\r\nThis works:\r\n\r\n```python\r\nclient.post(\r\n url,\r\n data={\"foo\": [\"1\", \"2\"]}, # list\r\n files={\"test\": b\"test\"},\r\n)\r\n```\r\n\r\nThis fails:\r\n\r\n```python\r\nclient.post(\r\n url,\r\n data={\"foo\": (\"1\", \"2\")}, # tuple\r\n files={\"test\": b\"test\"},\r\n)\r\n```\r\n\r\n<details>\r\n<summary>Traceback</summary>\r\n\r\n```\r\n File \"httpx/_client.py\", line 356, in build_request\r\n return Request(\r\n File \"httpx/_models.py\", line 336, in __init__\r\n headers, stream = encode_request(content, data, files, json)\r\n File \"httpx/_content.py\", line 210, in encode_request\r\n return encode_multipart_data(data or {}, files, boundary)\r\n File \"httpx/_content.py\", line 155, in encode_multipart_data\r\n multipart = MultipartStream(data=data, files=files, boundary=boundary)\r\n File \"httpx/_multipart.py\", line 188, in __init__\r\n self.fields = list(self._iter_fields(data, files))\r\n File \"httpx/_multipart.py\", line 198, in _iter_fields\r\n yield DataField(name=name, value=value)\r\n File \"httpx/_multipart.py\", line 36, in __init__\r\n raise TypeError(\r\nTypeError: Invalid type for value. Expected primitive type, got <class 'tuple'>: ('1', '2')\r\n```\r\n\r\n</details>\r\n\r\nI guess this line:\r\n\r\nhttps://github.com/encode/httpx/blob/93de1980fa77f15c6b23cbaf2422c0a812caf243/httpx/_multipart.py#L194\r\n\r\nshould be implemented in the same way as this line:\r\n\r\nhttps://github.com/encode/httpx/blob/93de1980fa77f15c6b23cbaf2422c0a812caf243/httpx/_content.py#L141\n", "before_files": [{"content": "import binascii\nimport io\nimport os\nimport typing\nfrom pathlib import Path\n\nfrom ._types import (\n AsyncByteStream,\n FileContent,\n FileTypes,\n RequestFiles,\n SyncByteStream,\n)\nfrom ._utils import (\n format_form_param,\n guess_content_type,\n peek_filelike_length,\n primitive_value_to_str,\n to_bytes,\n)\n\n\ndef get_multipart_boundary_from_content_type(\n content_type: typing.Optional[bytes],\n) -> typing.Optional[bytes]:\n if not content_type or not content_type.startswith(b\"multipart/form-data\"):\n return None\n # parse boundary according to\n # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1\n if b\";\" in content_type:\n for section in content_type.split(b\";\"):\n if section.strip().lower().startswith(b\"boundary=\"):\n return section.strip()[len(b\"boundary=\") :].strip(b'\"')\n return None\n\n\nclass DataField:\n \"\"\"\n A single form field item, within a multipart form field.\n \"\"\"\n\n def __init__(\n self, name: str, value: typing.Union[str, bytes, int, float, None]\n ) -> None:\n if not isinstance(name, str):\n raise TypeError(\n f\"Invalid type for name. Expected str, got {type(name)}: {name!r}\"\n )\n if value is not None and not isinstance(value, (str, bytes, int, float)):\n raise TypeError(\n f\"Invalid type for value. 
Expected primitive type, got {type(value)}: {value!r}\"\n )\n self.name = name\n self.value: typing.Union[str, bytes] = (\n value if isinstance(value, bytes) else primitive_value_to_str(value)\n )\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n name = format_form_param(\"name\", self.name)\n self._headers = b\"\".join(\n [b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"]\n )\n\n return self._headers\n\n def render_data(self) -> bytes:\n if not hasattr(self, \"_data\"):\n self._data = to_bytes(self.value)\n\n return self._data\n\n def get_length(self) -> int:\n headers = self.render_headers()\n data = self.render_data()\n return len(headers) + len(data)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield self.render_data()\n\n\nclass FileField:\n \"\"\"\n A single file field item, within a multipart form field.\n \"\"\"\n\n CHUNK_SIZE = 64 * 1024\n\n def __init__(self, name: str, value: FileTypes) -> None:\n self.name = name\n\n fileobj: FileContent\n\n headers: typing.Dict[str, str] = {}\n content_type: typing.Optional[str] = None\n\n # This large tuple based API largely mirror's requests' API\n # It would be good to think of better APIs for this that we could include in httpx 2.0\n # since variable length tuples (especially of 4 elements) are quite unwieldly\n if isinstance(value, tuple):\n if len(value) == 2:\n # neither the 3rd parameter (content_type) nor the 4th (headers) was included\n filename, fileobj = value # type: ignore\n elif len(value) == 3:\n filename, fileobj, content_type = value # type: ignore\n else:\n # all 4 parameters included\n filename, fileobj, content_type, headers = value # type: ignore\n else:\n filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n fileobj = value\n\n if content_type is None:\n content_type = guess_content_type(filename)\n\n has_content_type_header = any(\"content-type\" in key.lower() for key in headers)\n if content_type is not None and not has_content_type_header:\n # note that unlike requests, we ignore the content_type\n # provided in the 3rd tuple element if it is also included in the headers\n # requests does the opposite (it overwrites the header with the 3rd tuple element)\n headers[\"Content-Type\"] = content_type\n\n if isinstance(fileobj, (str, io.StringIO)):\n raise TypeError(f\"Expected bytes or bytes-like object got: {type(fileobj)}\")\n\n self.filename = filename\n self.file = fileobj\n self.headers = headers\n\n def get_length(self) -> int:\n headers = self.render_headers()\n\n if isinstance(self.file, (str, bytes)):\n return len(headers) + len(to_bytes(self.file))\n\n # Let's do our best not to read `file` into memory.\n file_length = peek_filelike_length(self.file)\n if file_length is None:\n # As a last resort, read file and cache contents for later.\n assert not hasattr(self, \"_data\")\n self._data = to_bytes(self.file.read())\n file_length = len(self._data)\n\n return len(headers) + file_length\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n parts = [\n b\"Content-Disposition: form-data; \",\n format_form_param(\"name\", self.name),\n ]\n if self.filename:\n filename = format_form_param(\"filename\", self.filename)\n parts.extend([b\"; \", filename])\n for header_name, header_value in self.headers.items():\n key, val = f\"\\r\\n{header_name}: \".encode(), header_value.encode()\n parts.extend([key, val])\n parts.append(b\"\\r\\n\\r\\n\")\n self._headers = b\"\".join(parts)\n\n return self._headers\n\n 
def render_data(self) -> typing.Iterator[bytes]:\n if isinstance(self.file, (str, bytes)):\n yield to_bytes(self.file)\n return\n\n if hasattr(self, \"_data\"):\n # Already rendered.\n yield self._data\n return\n\n if hasattr(self.file, \"seek\"):\n self.file.seek(0)\n\n chunk = self.file.read(self.CHUNK_SIZE)\n while chunk:\n yield to_bytes(chunk)\n chunk = self.file.read(self.CHUNK_SIZE)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield from self.render_data()\n\n\nclass MultipartStream(SyncByteStream, AsyncByteStream):\n \"\"\"\n Request content as streaming multipart encoded form data.\n \"\"\"\n\n def __init__(\n self, data: dict, files: RequestFiles, boundary: typing.Optional[bytes] = None\n ) -> None:\n if boundary is None:\n boundary = binascii.hexlify(os.urandom(16))\n\n self.boundary = boundary\n self.content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\n \"ascii\"\n )\n self.fields = list(self._iter_fields(data, files))\n\n def _iter_fields(\n self, data: dict, files: RequestFiles\n ) -> typing.Iterator[typing.Union[FileField, DataField]]:\n for name, value in data.items():\n if isinstance(value, list):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n file_items = files.items() if isinstance(files, typing.Mapping) else files\n for name, value in file_items:\n yield FileField(name=name, value=value)\n\n def iter_chunks(self) -> typing.Iterator[bytes]:\n for field in self.fields:\n yield b\"--%s\\r\\n\" % self.boundary\n yield from field.render()\n yield b\"\\r\\n\"\n yield b\"--%s--\\r\\n\" % self.boundary\n\n def iter_chunks_lengths(self) -> typing.Iterator[int]:\n boundary_length = len(self.boundary)\n # Follow closely what `.iter_chunks()` does.\n for field in self.fields:\n yield 2 + boundary_length + 2\n yield field.get_length()\n yield 2\n yield 2 + boundary_length + 4\n\n def get_content_length(self) -> int:\n return sum(self.iter_chunks_lengths())\n\n # Content stream interface.\n\n def get_headers(self) -> typing.Dict[str, str]:\n content_length = str(self.get_content_length())\n content_type = self.content_type\n return {\"Content-Length\": content_length, \"Content-Type\": content_type}\n\n def __iter__(self) -> typing.Iterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n", "path": "httpx/_multipart.py"}], "after_files": [{"content": "import binascii\nimport io\nimport os\nimport typing\nfrom pathlib import Path\n\nfrom ._types import (\n AsyncByteStream,\n FileContent,\n FileTypes,\n RequestFiles,\n SyncByteStream,\n)\nfrom ._utils import (\n format_form_param,\n guess_content_type,\n peek_filelike_length,\n primitive_value_to_str,\n to_bytes,\n)\n\n\ndef get_multipart_boundary_from_content_type(\n content_type: typing.Optional[bytes],\n) -> typing.Optional[bytes]:\n if not content_type or not content_type.startswith(b\"multipart/form-data\"):\n return None\n # parse boundary according to\n # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1\n if b\";\" in content_type:\n for section in content_type.split(b\";\"):\n if section.strip().lower().startswith(b\"boundary=\"):\n return section.strip()[len(b\"boundary=\") :].strip(b'\"')\n return None\n\n\nclass DataField:\n \"\"\"\n A single form field item, within a multipart form field.\n \"\"\"\n\n def __init__(\n self, name: str, value: typing.Union[str, bytes, int, float, 
None]\n ) -> None:\n if not isinstance(name, str):\n raise TypeError(\n f\"Invalid type for name. Expected str, got {type(name)}: {name!r}\"\n )\n if value is not None and not isinstance(value, (str, bytes, int, float)):\n raise TypeError(\n f\"Invalid type for value. Expected primitive type, got {type(value)}: {value!r}\"\n )\n self.name = name\n self.value: typing.Union[str, bytes] = (\n value if isinstance(value, bytes) else primitive_value_to_str(value)\n )\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n name = format_form_param(\"name\", self.name)\n self._headers = b\"\".join(\n [b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"]\n )\n\n return self._headers\n\n def render_data(self) -> bytes:\n if not hasattr(self, \"_data\"):\n self._data = to_bytes(self.value)\n\n return self._data\n\n def get_length(self) -> int:\n headers = self.render_headers()\n data = self.render_data()\n return len(headers) + len(data)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield self.render_data()\n\n\nclass FileField:\n \"\"\"\n A single file field item, within a multipart form field.\n \"\"\"\n\n CHUNK_SIZE = 64 * 1024\n\n def __init__(self, name: str, value: FileTypes) -> None:\n self.name = name\n\n fileobj: FileContent\n\n headers: typing.Dict[str, str] = {}\n content_type: typing.Optional[str] = None\n\n # This large tuple based API largely mirror's requests' API\n # It would be good to think of better APIs for this that we could include in httpx 2.0\n # since variable length tuples (especially of 4 elements) are quite unwieldly\n if isinstance(value, tuple):\n if len(value) == 2:\n # neither the 3rd parameter (content_type) nor the 4th (headers) was included\n filename, fileobj = value # type: ignore\n elif len(value) == 3:\n filename, fileobj, content_type = value # type: ignore\n else:\n # all 4 parameters included\n filename, fileobj, content_type, headers = value # type: ignore\n else:\n filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n fileobj = value\n\n if content_type is None:\n content_type = guess_content_type(filename)\n\n has_content_type_header = any(\"content-type\" in key.lower() for key in headers)\n if content_type is not None and not has_content_type_header:\n # note that unlike requests, we ignore the content_type\n # provided in the 3rd tuple element if it is also included in the headers\n # requests does the opposite (it overwrites the header with the 3rd tuple element)\n headers[\"Content-Type\"] = content_type\n\n if isinstance(fileobj, (str, io.StringIO)):\n raise TypeError(f\"Expected bytes or bytes-like object got: {type(fileobj)}\")\n\n self.filename = filename\n self.file = fileobj\n self.headers = headers\n\n def get_length(self) -> int:\n headers = self.render_headers()\n\n if isinstance(self.file, (str, bytes)):\n return len(headers) + len(to_bytes(self.file))\n\n # Let's do our best not to read `file` into memory.\n file_length = peek_filelike_length(self.file)\n if file_length is None:\n # As a last resort, read file and cache contents for later.\n assert not hasattr(self, \"_data\")\n self._data = to_bytes(self.file.read())\n file_length = len(self._data)\n\n return len(headers) + file_length\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n parts = [\n b\"Content-Disposition: form-data; \",\n format_form_param(\"name\", self.name),\n ]\n if self.filename:\n filename = format_form_param(\"filename\", self.filename)\n parts.extend([b\"; \", 
filename])\n for header_name, header_value in self.headers.items():\n key, val = f\"\\r\\n{header_name}: \".encode(), header_value.encode()\n parts.extend([key, val])\n parts.append(b\"\\r\\n\\r\\n\")\n self._headers = b\"\".join(parts)\n\n return self._headers\n\n def render_data(self) -> typing.Iterator[bytes]:\n if isinstance(self.file, (str, bytes)):\n yield to_bytes(self.file)\n return\n\n if hasattr(self, \"_data\"):\n # Already rendered.\n yield self._data\n return\n\n if hasattr(self.file, \"seek\"):\n self.file.seek(0)\n\n chunk = self.file.read(self.CHUNK_SIZE)\n while chunk:\n yield to_bytes(chunk)\n chunk = self.file.read(self.CHUNK_SIZE)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield from self.render_data()\n\n\nclass MultipartStream(SyncByteStream, AsyncByteStream):\n \"\"\"\n Request content as streaming multipart encoded form data.\n \"\"\"\n\n def __init__(\n self, data: dict, files: RequestFiles, boundary: typing.Optional[bytes] = None\n ) -> None:\n if boundary is None:\n boundary = binascii.hexlify(os.urandom(16))\n\n self.boundary = boundary\n self.content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\n \"ascii\"\n )\n self.fields = list(self._iter_fields(data, files))\n\n def _iter_fields(\n self, data: dict, files: RequestFiles\n ) -> typing.Iterator[typing.Union[FileField, DataField]]:\n for name, value in data.items():\n if isinstance(value, (tuple, list)):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n file_items = files.items() if isinstance(files, typing.Mapping) else files\n for name, value in file_items:\n yield FileField(name=name, value=value)\n\n def iter_chunks(self) -> typing.Iterator[bytes]:\n for field in self.fields:\n yield b\"--%s\\r\\n\" % self.boundary\n yield from field.render()\n yield b\"\\r\\n\"\n yield b\"--%s--\\r\\n\" % self.boundary\n\n def iter_chunks_lengths(self) -> typing.Iterator[int]:\n boundary_length = len(self.boundary)\n # Follow closely what `.iter_chunks()` does.\n for field in self.fields:\n yield 2 + boundary_length + 2\n yield field.get_length()\n yield 2\n yield 2 + boundary_length + 4\n\n def get_content_length(self) -> int:\n return sum(self.iter_chunks_lengths())\n\n # Content stream interface.\n\n def get_headers(self) -> typing.Dict[str, str]:\n content_length = str(self.get_content_length())\n content_type = self.content_type\n return {\"Content-Length\": content_length, \"Content-Type\": content_type}\n\n def __iter__(self) -> typing.Iterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n", "path": "httpx/_multipart.py"}]}
| 3,315 | 121 |
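The httpx entry that ends above centres on the multipart field classes (DataField, FileField, MultipartStream). For orientation, here is a minimal sketch of how that machinery is exercised from the public API; the URL, field names, and file contents are placeholder assumptions, not part of the dataset row.

```python
# Sketch only: building (not sending) a multipart request so the encoded body
# produced by MultipartStream can be inspected. All values are placeholders.
import io
import httpx

data = {"description": "nightly export"}
files = {
    # (filename, fileobj, content_type) mirrors the tuple handling in FileField
    "report": ("report.csv", io.BytesIO(b"id,value\n1,2\n"), "text/csv"),
}

request = httpx.Request("POST", "https://example.com/upload", data=data, files=files)
print(request.headers["content-type"])  # multipart/form-data; boundary=...
print(request.read()[:120])             # first bytes of the encoded multipart body
```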
gh_patches_debug_971
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1204
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue with requests dependency
I found that commit 95d9306d2a1fd22dffb12a0548abf2d2f744ed9d excludes requests 2.11 because of a bug that has since been fixed in requests 2.11.1, and that exclusion now causes a version conflict with another module in my project:
```
pkg_resources.ContextualVersionConflict: (requests 2.11.1 (..............), Requirement.parse('requests<2.11,>=2.5.2'), {'docker-py'})
```
Can we allow requests 2.11.1 ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup
6
7
8 ROOT_DIR = os.path.dirname(__file__)
9 SOURCE_DIR = os.path.join(ROOT_DIR)
10
11 requirements = [
12 'requests >= 2.5.2, < 2.11',
13 'six >= 1.4.0',
14 'websocket-client >= 0.32.0',
15 'docker-pycreds >= 0.2.1'
16 ]
17
18 if sys.platform == 'win32':
19 requirements.append('pypiwin32 >= 219')
20
21 extras_require = {
22 ':python_version < "3.5"': 'backports.ssl_match_hostname >= 3.5',
23 ':python_version < "3.3"': 'ipaddress >= 1.0.16',
24 }
25
26 version = None
27 exec(open('docker/version.py').read())
28
29 with open('./test-requirements.txt') as test_reqs_txt:
30 test_requirements = [line for line in test_reqs_txt]
31
32
33 setup(
34 name="docker-py",
35 version=version,
36 description="Python client for Docker.",
37 url='https://github.com/docker/docker-py/',
38 packages=[
39 'docker', 'docker.api', 'docker.auth', 'docker.transport',
40 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',
41 'docker.types',
42 ],
43 install_requires=requirements,
44 tests_require=test_requirements,
45 extras_require=extras_require,
46 zip_safe=False,
47 test_suite='tests',
48 classifiers=[
49 'Development Status :: 4 - Beta',
50 'Environment :: Other Environment',
51 'Intended Audience :: Developers',
52 'Operating System :: OS Independent',
53 'Programming Language :: Python',
54 'Programming Language :: Python :: 2',
55 'Programming Language :: Python :: 2.6',
56 'Programming Language :: Python :: 2.7',
57 'Programming Language :: Python :: 3',
58 'Programming Language :: Python :: 3.3',
59 'Programming Language :: Python :: 3.4',
60 'Programming Language :: Python :: 3.5',
61 'Topic :: Utilities',
62 'License :: OSI Approved :: Apache Software License',
63 ],
64 )
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
SOURCE_DIR = os.path.join(ROOT_DIR)
requirements = [
- 'requests >= 2.5.2, < 2.11',
+ 'requests >= 2.5.2',
'six >= 1.4.0',
'websocket-client >= 0.32.0',
'docker-pycreds >= 0.2.1'
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n SOURCE_DIR = os.path.join(ROOT_DIR)\n \n requirements = [\n- 'requests >= 2.5.2, < 2.11',\n+ 'requests >= 2.5.2',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n", "issue": "Issue with requests dependency\nI found that commit 95d9306d2a1fd22dffb12a0548abf2d2f744ed9d excludes requests 2.11 for a bug that is fixed now on requests 2.11.1. And that's giving me a version conflict with another of the modules on my project:\n\n```\npkg_resources.ContextualVersionConflict: (requests 2.11.1 (..............), Requirement.parse('requests<2.11,>=2.5.2'), {'docker-py'})\n```\n\nCan we allow requests 2.11.1 ?\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n 
tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}]}
| 993 | 112 |
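As a quick illustration of the conflict described in the docker-py entry above: the traceback comes from pkg_resources rejecting an installed requests 2.11.1 against the old `<2.11` pin. The sketch below, using an assumed installed version string, shows how the old and relaxed specifiers evaluate; it is not part of the repository's patch.

```python
# Illustrative check of the dependency pin; the installed version is an assumption.
from pkg_resources import Requirement

installed = "2.11.1"  # hypothetical requests version present in the environment

old_pin = Requirement.parse("requests>=2.5.2,<2.11")
relaxed = Requirement.parse("requests>=2.5.2")

print(installed in old_pin)  # False -> pkg_resources raises ContextualVersionConflict
print(installed in relaxed)  # True  -> the relaxed pin accepts requests 2.11.1
```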
gh_patches_debug_6506
|
rasdani/github-patches
|
git_diff
|
GeotrekCE__Geotrek-admin-4142
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Sensitivity module] Deleted zones are still present in the open-air export
A clause is missing to exclude deleted sensitive areas in OpenAir API queryset.
assigned to myself.
[Sensitivity module] Deleted zones are still present in the open-air export
A clause is missing to exclude deleted sensitive areas in OpenAir API queryset.
assigned to myself.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/sensitivity/views.py`
Content:
```
1 import json
2 import logging
3 from datetime import datetime
4
5 from django.conf import settings
6 from django.contrib.gis.db.models.functions import Transform
7 from django.http import HttpResponse
8 from django.utils.translation import gettext_lazy as _
9 from django.views.generic import ListView
10 from django.views.generic.detail import BaseDetailView
11 from mapentity.views import (MapEntityCreate, MapEntityUpdate, MapEntityList, MapEntityDetail,
12 MapEntityDelete, MapEntityFormat, LastModifiedMixin)
13
14 from geotrek.authent.decorators import same_structure_required
15 from geotrek.common.mixins.views import CustomColumnsMixin
16 from geotrek.common.permissions import PublicOrReadPermMixin
17 from geotrek.common.viewsets import GeotrekMapentityViewSet
18 from .filters import SensitiveAreaFilterSet
19 from .forms import SensitiveAreaForm, RegulatorySensitiveAreaForm
20 from .models import SensitiveArea, Species, SportPractice
21 from .serializers import SensitiveAreaSerializer, SensitiveAreaGeojsonSerializer
22
23
24 logger = logging.getLogger(__name__)
25
26
27 class SensitiveAreaList(CustomColumnsMixin, MapEntityList):
28 queryset = SensitiveArea.objects.existing()
29 filterform = SensitiveAreaFilterSet
30 mandatory_columns = ['id', 'species']
31 default_extra_columns = ['category']
32
33
34 class SensitiveAreaFormatList(MapEntityFormat, SensitiveAreaList):
35 mandatory_columns = ['id']
36 default_extra_columns = [
37 'species', 'published', 'description', 'contact', 'radius', 'pretty_period', 'pretty_practices',
38 ]
39
40
41 class SensitiveAreaDetail(MapEntityDetail):
42 queryset = SensitiveArea.objects.existing()
43
44 def get_context_data(self, *args, **kwargs):
45 context = super().get_context_data(*args, **kwargs)
46 context['can_edit'] = self.object.same_structure(self.request.user)
47 return context
48
49
50 class SensitiveAreaRadiiMixin:
51 def get_context_data(self, *args, **kwargs):
52 context = super().get_context_data(*args, **kwargs)
53 species = Species.objects.filter(category=Species.SPECIES)
54 context['radii'] = json.dumps({
55 str(s.id): settings.SENSITIVITY_DEFAULT_RADIUS if s.radius is None else s.radius for s in species
56 })
57 return context
58
59
60 class SensitiveAreaCreate(SensitiveAreaRadiiMixin, MapEntityCreate):
61 model = SensitiveArea
62
63 def get_form_class(self):
64 if self.request.GET.get('category') == str(Species.REGULATORY):
65 return RegulatorySensitiveAreaForm
66 return SensitiveAreaForm
67
68
69 class SensitiveAreaUpdate(SensitiveAreaRadiiMixin, MapEntityUpdate):
70 queryset = SensitiveArea.objects.existing()
71
72 def get_form_class(self):
73 if self.object.species.category == Species.REGULATORY:
74 return RegulatorySensitiveAreaForm
75 return SensitiveAreaForm
76
77 @same_structure_required('sensitivity:sensitivearea_detail')
78 def dispatch(self, *args, **kwargs):
79 return super().dispatch(*args, **kwargs)
80
81
82 class SensitiveAreaDelete(MapEntityDelete):
83 model = SensitiveArea
84
85 @same_structure_required('sensitivity:sensitivearea_detail')
86 def dispatch(self, *args, **kwargs):
87 return super().dispatch(*args, **kwargs)
88
89
90 class SensitiveAreaViewSet(GeotrekMapentityViewSet):
91 model = SensitiveArea
92 serializer_class = SensitiveAreaSerializer
93 geojson_serializer_class = SensitiveAreaGeojsonSerializer
94 filterset_class = SensitiveAreaFilterSet
95 mapentity_list_class = SensitiveAreaList
96
97 def get_queryset(self):
98 qs = self.model.objects.existing().select_related('species')
99 if self.format_kwarg == 'geojson':
100 qs = qs.annotate(api_geom=Transform('geom', settings.API_SRID))
101 qs = qs.only('id', 'species')
102 return qs
103
104
105 class SensitiveAreaKMLDetail(LastModifiedMixin, PublicOrReadPermMixin, BaseDetailView):
106 queryset = SensitiveArea.objects.existing()
107
108 def render_to_response(self, context):
109 area = self.get_object()
110 response = HttpResponse(area.kml(),
111 content_type='application/vnd.google-earth.kml+xml')
112 return response
113
114
115 class SensitiveAreaOpenAirDetail(LastModifiedMixin, PublicOrReadPermMixin, BaseDetailView):
116 queryset = SensitiveArea.objects.existing()
117
118 def render_to_response(self, context):
119 area = self.get_object()
120 file_header = """* This file has been produced from GeoTrek sensitivity (https://geotrek.fr/) module from website {scheme}://{domain}
121 * Using pyopenair library (https://github.com/lpoaura/pyopenair)
122 * This file was created on: {timestamp}\n\n""".format(scheme=self.request.scheme, domain=self.request.headers['host'], timestamp=datetime.now())
123 is_aerial = area.species.practices.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES).exists()
124 if is_aerial and area.openair():
125 result = file_header + area.openair()
126 response = HttpResponse(result, content_type='application/octet-stream; charset=UTF-8')
127 response['Content-Disposition'] = 'inline; filename=sensitivearea_openair_' + str(area.id) + '.txt'
128 return response
129 else:
130 message = _('This is not an aerial area')
131 response = HttpResponse(message, content_type='text/plain; charset=UTF-8')
132
133 return response
134
135
136 class SensitiveAreaOpenAirList(PublicOrReadPermMixin, ListView):
137
138 def get_queryset(self):
139 aerial_practice = SportPractice.objects.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES)
140 return SensitiveArea.objects.filter(
141 species__practices__in=aerial_practice, published=True
142 ).select_related('species')
143
144 def render_to_response(self, context):
145 areas = self.get_queryset()
146 file_header = """* This file has been produced from GeoTrek sensitivity (https://geotrek.fr/) module from website {scheme}://{domain}
147 * Using pyopenair library (https://github.com/lpoaura/pyopenair)
148 * This file was created on: {timestamp}\n\n""".format(scheme=self.request.scheme, domain=self.request.headers['host'], timestamp=datetime.now())
149 airspace_list = [a.openair() for a in areas if a.openair()]
150 airspace_core = '\n\n'.join(airspace_list)
151 airspace_file = file_header + airspace_core
152 response = HttpResponse(airspace_file, content_type='application/octet-stream; charset=UTF-8')
153 response['Content-Disposition'] = 'inline; filename=sensitivearea_openair.txt'
154 return response
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/geotrek/sensitivity/views.py b/geotrek/sensitivity/views.py
--- a/geotrek/sensitivity/views.py
+++ b/geotrek/sensitivity/views.py
@@ -137,7 +137,7 @@
def get_queryset(self):
aerial_practice = SportPractice.objects.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES)
- return SensitiveArea.objects.filter(
+ return SensitiveArea.objects.existing().filter(
species__practices__in=aerial_practice, published=True
).select_related('species')
|
{"golden_diff": "diff --git a/geotrek/sensitivity/views.py b/geotrek/sensitivity/views.py\n--- a/geotrek/sensitivity/views.py\n+++ b/geotrek/sensitivity/views.py\n@@ -137,7 +137,7 @@\n \n def get_queryset(self):\n aerial_practice = SportPractice.objects.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES)\n- return SensitiveArea.objects.filter(\n+ return SensitiveArea.objects.existing().filter(\n species__practices__in=aerial_practice, published=True\n ).select_related('species')\n", "issue": "[Sensitivity module] Deleted zones are still present in the open-air export\nA clause is missing to exclude deleted sensitive areas in OpenAir API queryset.\r\n\r\nassigned to my self.\n[Sensitivity module] Deleted zones are still present in the open-air export\nA clause is missing to exclude deleted sensitive areas in OpenAir API queryset.\r\n\r\nassigned to my self.\n", "before_files": [{"content": "import json\nimport logging\nfrom datetime import datetime\n\nfrom django.conf import settings\nfrom django.contrib.gis.db.models.functions import Transform\nfrom django.http import HttpResponse\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.generic import ListView\nfrom django.views.generic.detail import BaseDetailView\nfrom mapentity.views import (MapEntityCreate, MapEntityUpdate, MapEntityList, MapEntityDetail,\n MapEntityDelete, MapEntityFormat, LastModifiedMixin)\n\nfrom geotrek.authent.decorators import same_structure_required\nfrom geotrek.common.mixins.views import CustomColumnsMixin\nfrom geotrek.common.permissions import PublicOrReadPermMixin\nfrom geotrek.common.viewsets import GeotrekMapentityViewSet\nfrom .filters import SensitiveAreaFilterSet\nfrom .forms import SensitiveAreaForm, RegulatorySensitiveAreaForm\nfrom .models import SensitiveArea, Species, SportPractice\nfrom .serializers import SensitiveAreaSerializer, SensitiveAreaGeojsonSerializer\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass SensitiveAreaList(CustomColumnsMixin, MapEntityList):\n queryset = SensitiveArea.objects.existing()\n filterform = SensitiveAreaFilterSet\n mandatory_columns = ['id', 'species']\n default_extra_columns = ['category']\n\n\nclass SensitiveAreaFormatList(MapEntityFormat, SensitiveAreaList):\n mandatory_columns = ['id']\n default_extra_columns = [\n 'species', 'published', 'description', 'contact', 'radius', 'pretty_period', 'pretty_practices',\n ]\n\n\nclass SensitiveAreaDetail(MapEntityDetail):\n queryset = SensitiveArea.objects.existing()\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context['can_edit'] = self.object.same_structure(self.request.user)\n return context\n\n\nclass SensitiveAreaRadiiMixin:\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n species = Species.objects.filter(category=Species.SPECIES)\n context['radii'] = json.dumps({\n str(s.id): settings.SENSITIVITY_DEFAULT_RADIUS if s.radius is None else s.radius for s in species\n })\n return context\n\n\nclass SensitiveAreaCreate(SensitiveAreaRadiiMixin, MapEntityCreate):\n model = SensitiveArea\n\n def get_form_class(self):\n if self.request.GET.get('category') == str(Species.REGULATORY):\n return RegulatorySensitiveAreaForm\n return SensitiveAreaForm\n\n\nclass SensitiveAreaUpdate(SensitiveAreaRadiiMixin, MapEntityUpdate):\n queryset = SensitiveArea.objects.existing()\n\n def get_form_class(self):\n if self.object.species.category == Species.REGULATORY:\n return 
RegulatorySensitiveAreaForm\n return SensitiveAreaForm\n\n @same_structure_required('sensitivity:sensitivearea_detail')\n def dispatch(self, *args, **kwargs):\n return super().dispatch(*args, **kwargs)\n\n\nclass SensitiveAreaDelete(MapEntityDelete):\n model = SensitiveArea\n\n @same_structure_required('sensitivity:sensitivearea_detail')\n def dispatch(self, *args, **kwargs):\n return super().dispatch(*args, **kwargs)\n\n\nclass SensitiveAreaViewSet(GeotrekMapentityViewSet):\n model = SensitiveArea\n serializer_class = SensitiveAreaSerializer\n geojson_serializer_class = SensitiveAreaGeojsonSerializer\n filterset_class = SensitiveAreaFilterSet\n mapentity_list_class = SensitiveAreaList\n\n def get_queryset(self):\n qs = self.model.objects.existing().select_related('species')\n if self.format_kwarg == 'geojson':\n qs = qs.annotate(api_geom=Transform('geom', settings.API_SRID))\n qs = qs.only('id', 'species')\n return qs\n\n\nclass SensitiveAreaKMLDetail(LastModifiedMixin, PublicOrReadPermMixin, BaseDetailView):\n queryset = SensitiveArea.objects.existing()\n\n def render_to_response(self, context):\n area = self.get_object()\n response = HttpResponse(area.kml(),\n content_type='application/vnd.google-earth.kml+xml')\n return response\n\n\nclass SensitiveAreaOpenAirDetail(LastModifiedMixin, PublicOrReadPermMixin, BaseDetailView):\n queryset = SensitiveArea.objects.existing()\n\n def render_to_response(self, context):\n area = self.get_object()\n file_header = \"\"\"* This file has been produced from GeoTrek sensitivity (https://geotrek.fr/) module from website {scheme}://{domain}\n* Using pyopenair library (https://github.com/lpoaura/pyopenair)\n* This file was created on: {timestamp}\\n\\n\"\"\".format(scheme=self.request.scheme, domain=self.request.headers['host'], timestamp=datetime.now())\n is_aerial = area.species.practices.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES).exists()\n if is_aerial and area.openair():\n result = file_header + area.openair()\n response = HttpResponse(result, content_type='application/octet-stream; charset=UTF-8')\n response['Content-Disposition'] = 'inline; filename=sensitivearea_openair_' + str(area.id) + '.txt'\n return response\n else:\n message = _('This is not an aerial area')\n response = HttpResponse(message, content_type='text/plain; charset=UTF-8')\n\n return response\n\n\nclass SensitiveAreaOpenAirList(PublicOrReadPermMixin, ListView):\n\n def get_queryset(self):\n aerial_practice = SportPractice.objects.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES)\n return SensitiveArea.objects.filter(\n species__practices__in=aerial_practice, published=True\n ).select_related('species')\n\n def render_to_response(self, context):\n areas = self.get_queryset()\n file_header = \"\"\"* This file has been produced from GeoTrek sensitivity (https://geotrek.fr/) module from website {scheme}://{domain}\n* Using pyopenair library (https://github.com/lpoaura/pyopenair)\n* This file was created on: {timestamp}\\n\\n\"\"\".format(scheme=self.request.scheme, domain=self.request.headers['host'], timestamp=datetime.now())\n airspace_list = [a.openair() for a in areas if a.openair()]\n airspace_core = '\\n\\n'.join(airspace_list)\n airspace_file = file_header + airspace_core\n response = HttpResponse(airspace_file, content_type='application/octet-stream; charset=UTF-8')\n response['Content-Disposition'] = 'inline; filename=sensitivearea_openair.txt'\n return response\n", "path": "geotrek/sensitivity/views.py"}], "after_files": [{"content": "import 
json\nimport logging\nfrom datetime import datetime\n\nfrom django.conf import settings\nfrom django.contrib.gis.db.models.functions import Transform\nfrom django.http import HttpResponse\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.generic import ListView\nfrom django.views.generic.detail import BaseDetailView\nfrom mapentity.views import (MapEntityCreate, MapEntityUpdate, MapEntityList, MapEntityDetail,\n MapEntityDelete, MapEntityFormat, LastModifiedMixin)\n\nfrom geotrek.authent.decorators import same_structure_required\nfrom geotrek.common.mixins.views import CustomColumnsMixin\nfrom geotrek.common.permissions import PublicOrReadPermMixin\nfrom geotrek.common.viewsets import GeotrekMapentityViewSet\nfrom .filters import SensitiveAreaFilterSet\nfrom .forms import SensitiveAreaForm, RegulatorySensitiveAreaForm\nfrom .models import SensitiveArea, Species, SportPractice\nfrom .serializers import SensitiveAreaSerializer, SensitiveAreaGeojsonSerializer\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass SensitiveAreaList(CustomColumnsMixin, MapEntityList):\n queryset = SensitiveArea.objects.existing()\n filterform = SensitiveAreaFilterSet\n mandatory_columns = ['id', 'species']\n default_extra_columns = ['category']\n\n\nclass SensitiveAreaFormatList(MapEntityFormat, SensitiveAreaList):\n mandatory_columns = ['id']\n default_extra_columns = [\n 'species', 'published', 'description', 'contact', 'radius', 'pretty_period', 'pretty_practices',\n ]\n\n\nclass SensitiveAreaDetail(MapEntityDetail):\n queryset = SensitiveArea.objects.existing()\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context['can_edit'] = self.object.same_structure(self.request.user)\n return context\n\n\nclass SensitiveAreaRadiiMixin:\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n species = Species.objects.filter(category=Species.SPECIES)\n context['radii'] = json.dumps({\n str(s.id): settings.SENSITIVITY_DEFAULT_RADIUS if s.radius is None else s.radius for s in species\n })\n return context\n\n\nclass SensitiveAreaCreate(SensitiveAreaRadiiMixin, MapEntityCreate):\n model = SensitiveArea\n\n def get_form_class(self):\n if self.request.GET.get('category') == str(Species.REGULATORY):\n return RegulatorySensitiveAreaForm\n return SensitiveAreaForm\n\n\nclass SensitiveAreaUpdate(SensitiveAreaRadiiMixin, MapEntityUpdate):\n queryset = SensitiveArea.objects.existing()\n\n def get_form_class(self):\n if self.object.species.category == Species.REGULATORY:\n return RegulatorySensitiveAreaForm\n return SensitiveAreaForm\n\n @same_structure_required('sensitivity:sensitivearea_detail')\n def dispatch(self, *args, **kwargs):\n return super().dispatch(*args, **kwargs)\n\n\nclass SensitiveAreaDelete(MapEntityDelete):\n model = SensitiveArea\n\n @same_structure_required('sensitivity:sensitivearea_detail')\n def dispatch(self, *args, **kwargs):\n return super().dispatch(*args, **kwargs)\n\n\nclass SensitiveAreaViewSet(GeotrekMapentityViewSet):\n model = SensitiveArea\n serializer_class = SensitiveAreaSerializer\n geojson_serializer_class = SensitiveAreaGeojsonSerializer\n filterset_class = SensitiveAreaFilterSet\n mapentity_list_class = SensitiveAreaList\n\n def get_queryset(self):\n qs = self.model.objects.existing().select_related('species')\n if self.format_kwarg == 'geojson':\n qs = qs.annotate(api_geom=Transform('geom', settings.API_SRID))\n qs = qs.only('id', 'species')\n return qs\n\n\nclass 
SensitiveAreaKMLDetail(LastModifiedMixin, PublicOrReadPermMixin, BaseDetailView):\n queryset = SensitiveArea.objects.existing()\n\n def render_to_response(self, context):\n area = self.get_object()\n response = HttpResponse(area.kml(),\n content_type='application/vnd.google-earth.kml+xml')\n return response\n\n\nclass SensitiveAreaOpenAirDetail(LastModifiedMixin, PublicOrReadPermMixin, BaseDetailView):\n queryset = SensitiveArea.objects.existing()\n\n def render_to_response(self, context):\n area = self.get_object()\n file_header = \"\"\"* This file has been produced from GeoTrek sensitivity (https://geotrek.fr/) module from website {scheme}://{domain}\n* Using pyopenair library (https://github.com/lpoaura/pyopenair)\n* This file was created on: {timestamp}\\n\\n\"\"\".format(scheme=self.request.scheme, domain=self.request.headers['host'], timestamp=datetime.now())\n is_aerial = area.species.practices.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES).exists()\n if is_aerial and area.openair():\n result = file_header + area.openair()\n response = HttpResponse(result, content_type='application/octet-stream; charset=UTF-8')\n response['Content-Disposition'] = 'inline; filename=sensitivearea_openair_' + str(area.id) + '.txt'\n return response\n else:\n message = _('This is not an aerial area')\n response = HttpResponse(message, content_type='text/plain; charset=UTF-8')\n\n return response\n\n\nclass SensitiveAreaOpenAirList(PublicOrReadPermMixin, ListView):\n\n def get_queryset(self):\n aerial_practice = SportPractice.objects.filter(name__in=settings.SENSITIVITY_OPENAIR_SPORT_PRACTICES)\n return SensitiveArea.objects.existing().filter(\n species__practices__in=aerial_practice, published=True\n ).select_related('species')\n\n def render_to_response(self, context):\n areas = self.get_queryset()\n file_header = \"\"\"* This file has been produced from GeoTrek sensitivity (https://geotrek.fr/) module from website {scheme}://{domain}\n* Using pyopenair library (https://github.com/lpoaura/pyopenair)\n* This file was created on: {timestamp}\\n\\n\"\"\".format(scheme=self.request.scheme, domain=self.request.headers['host'], timestamp=datetime.now())\n airspace_list = [a.openair() for a in areas if a.openair()]\n airspace_core = '\\n\\n'.join(airspace_list)\n airspace_file = file_header + airspace_core\n response = HttpResponse(airspace_file, content_type='application/octet-stream; charset=UTF-8')\n response['Content-Disposition'] = 'inline; filename=sensitivearea_openair.txt'\n return response\n", "path": "geotrek/sensitivity/views.py"}]}
| 2,120 | 129 |
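The Geotrek patch above swaps `SensitiveArea.objects.filter(...)` for `SensitiveArea.objects.existing().filter(...)`. Geotrek's actual soft-delete manager is defined elsewhere in its codebase, so the sketch below only illustrates the general pattern such an `existing()` helper relies on; the class names, field names, and `app_label` are assumptions.

```python
# Minimal soft-delete sketch (assumed names, not Geotrek's real implementation).
from django.db import models


class ExistingManager(models.Manager):
    def existing(self):
        # Every public queryset should start here so soft-deleted rows never
        # leak into exports such as the OpenAir list.
        return self.get_queryset().filter(deleted=False)


class SensitiveAreaSketch(models.Model):
    deleted = models.BooleanField(default=False)
    published = models.BooleanField(default=False)

    objects = ExistingManager()

    class Meta:
        app_label = "sensitivity"  # assumption, only needed to make the sketch importable


# Usage mirroring the patched view:
#     SensitiveAreaSketch.objects.existing().filter(published=True)
```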
gh_patches_debug_63551
|
rasdani/github-patches
|
git_diff
|
falconry__falcon-602
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hoist HTTPStatus into falcon top-level namespace
I.e., add an import line to `falcon/__init__.py`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/__init__.py`
Content:
```
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 HTTP_METHODS = (
16 'CONNECT',
17 'DELETE',
18 'GET',
19 'HEAD',
20 'OPTIONS',
21 'PATCH',
22 'POST',
23 'PUT',
24 'TRACE',
25 )
26
27 DEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'
28
29
30 # Hoist classes and functions into the falcon namespace
31 from falcon.version import __version__ # NOQA
32 from falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA
33 from falcon.status_codes import * # NOQA
34 from falcon.errors import * # NOQA
35 from falcon.redirects import * # NOQA
36 from falcon.http_error import HTTPError # NOQA
37 from falcon.util import * # NOQA
38 from falcon.hooks import before, after # NOQA
39 from falcon.request import Request, RequestOptions # NOQA
40 from falcon.response import Response # NOQA
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/falcon/__init__.py b/falcon/__init__.py
--- a/falcon/__init__.py
+++ b/falcon/__init__.py
@@ -34,6 +34,7 @@
from falcon.errors import * # NOQA
from falcon.redirects import * # NOQA
from falcon.http_error import HTTPError # NOQA
+from falcon.http_status import HTTPStatus # NOQA
from falcon.util import * # NOQA
from falcon.hooks import before, after # NOQA
from falcon.request import Request, RequestOptions # NOQA
|
{"golden_diff": "diff --git a/falcon/__init__.py b/falcon/__init__.py\n--- a/falcon/__init__.py\n+++ b/falcon/__init__.py\n@@ -34,6 +34,7 @@\n from falcon.errors import * # NOQA\n from falcon.redirects import * # NOQA\n from falcon.http_error import HTTPError # NOQA\n+from falcon.http_status import HTTPStatus # NOQA\n from falcon.util import * # NOQA\n from falcon.hooks import before, after # NOQA\n from falcon.request import Request, RequestOptions # NOQA\n", "issue": "Hoist HTTPStatus into falcon top-level namespace\nI.e., add an import line to `falcon/__init__.py`\n\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nHTTP_METHODS = (\n 'CONNECT',\n 'DELETE',\n 'GET',\n 'HEAD',\n 'OPTIONS',\n 'PATCH',\n 'POST',\n 'PUT',\n 'TRACE',\n)\n\nDEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'\n\n\n# Hoist classes and functions into the falcon namespace\nfrom falcon.version import __version__ # NOQA\nfrom falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA\nfrom falcon.status_codes import * # NOQA\nfrom falcon.errors import * # NOQA\nfrom falcon.redirects import * # NOQA\nfrom falcon.http_error import HTTPError # NOQA\nfrom falcon.util import * # NOQA\nfrom falcon.hooks import before, after # NOQA\nfrom falcon.request import Request, RequestOptions # NOQA\nfrom falcon.response import Response # NOQA\n", "path": "falcon/__init__.py"}], "after_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nHTTP_METHODS = (\n 'CONNECT',\n 'DELETE',\n 'GET',\n 'HEAD',\n 'OPTIONS',\n 'PATCH',\n 'POST',\n 'PUT',\n 'TRACE',\n)\n\nDEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'\n\n\n# Hoist classes and functions into the falcon namespace\nfrom falcon.version import __version__ # NOQA\nfrom falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA\nfrom falcon.status_codes import * # NOQA\nfrom falcon.errors import * # NOQA\nfrom falcon.redirects import * # NOQA\nfrom falcon.http_error import HTTPError # NOQA\nfrom falcon.http_status import HTTPStatus # NOQA\nfrom falcon.util import * # NOQA\nfrom falcon.hooks import before, after # NOQA\nfrom falcon.request import Request, RequestOptions # NOQA\nfrom falcon.response import Response # NOQA\n", "path": "falcon/__init__.py"}]}
| 692 | 136 |
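Once the import added by the falcon patch above is in place, `HTTPStatus` becomes reachable from the top-level package. A small usage sketch follows; the resource and route are illustrative only, and the `falcon.API` / `body=` spellings reflect the falcon 1.x era this issue targets.

```python
# Illustrative only: raising HTTPStatus via the hoisted top-level name.
import falcon


class PingResource:
    def on_get(self, req, resp):
        # Previously this required importing falcon.http_status.HTTPStatus.
        raise falcon.HTTPStatus(falcon.HTTP_200, body='pong')


app = falcon.API()
app.add_route('/ping', PingResource())
```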
gh_patches_debug_13977
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-374
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add OS to usage stats collection
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/dataflow/usage_tracking/usage.py`
Content:
```
1 import uuid
2 import time
3 import hashlib
4 import os
5 import getpass
6 import json
7 import logging
8 import socket
9 import sys
10 import multiprocessing as mp
11
12 from parsl.dataflow.states import States
13 from parsl.version import VERSION as PARSL_VERSION
14
15 logger = logging.getLogger(__name__)
16
17
18 def async_process(fn):
19 """ Decorator function to launch a function as a separate process """
20
21 def run(*args, **kwargs):
22 proc = mp.Process(target=fn, args=args, kwargs=kwargs)
23 proc.start()
24 return proc
25
26 return run
27
28
29 @async_process
30 def udp_messenger(domain_name, UDP_IP, UDP_PORT, sock_timeout, message):
31 """Send UDP messages to usage tracker asynchronously
32
33 This multiprocessing based messenger was written to overcome the limitations
34 of signalling/terminating a thread that is blocked on a system call. This
35 messenger is created as a separate process, and initialized with 2 queues,
36 to_send to receive messages to be sent to the internet.
37
38 Args:
39 - domain_name (str) : Domain name string
40 - UDP_IP (str) : IP address YYY.YYY.YYY.YYY
41 - UDP_PORT (int) : UDP port to send out on
42 - sock_timeout (int) : Socket timeout
43 - to_send (multiprocessing.Queue) : Queue of outgoing messages to internet
44 """
45 try:
46 if message is None:
47 raise ValueError("message was none")
48
49 encoded_message = bytes(message, "utf-8")
50
51 if encoded_message is None:
52 raise ValueError("utf-8 encoding of message failed")
53
54 if domain_name:
55 try:
56 UDP_IP = socket.gethostbyname(domain_name)
57 except Exception:
58 # (False, "Domain lookup failed, defaulting to {0}".format(UDP_IP))
59 pass
60
61 if UDP_IP is None:
62 raise Exception("UDP_IP is None")
63
64 if UDP_PORT is None:
65 raise Exception("UDP_PORT is None")
66
67 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP
68 sock.settimeout(sock_timeout)
69 sock.sendto(bytes(message, "utf-8"), (UDP_IP, UDP_PORT))
70 sock.close()
71
72 except socket.timeout:
73 logger.debug("Failed to send usage tracking data: socket timeout")
74 except OSError as e:
75 logger.debug("Failed to send usage tracking data: OSError: {}".format(e))
76 except Exception as e:
77 logger.debug("Failed to send usage tracking data: Exception: {}".format(e))
78
79
80 class UsageTracker (object):
81 """Anonymized Usage Tracking for Parsl.
82
83 Client for this is here : https://github.com/Parsl/parsl_tracking
84 This issue captures the discussion that went into functionality
85 implemented here : https://github.com/Parsl/parsl/issues/34
86
87 """
88
89 def __init__(self, dfk, ip='52.3.111.203', port=50077,
90 domain_name='tracking.parsl-project.org'):
91 """Initialize usage tracking unless the user has opted-out.
92
93 We will try to resolve the hostname specified in kwarg:domain_name
94 and if that fails attempt to use the kwarg:ip. Determining the
95 IP and sending message is threaded to avoid slowing down DFK
96 initialization.
97
98 Tracks usage stats by inspecting the internal state of the dfk.
99
100 Args:
101 - dfk (DFK object) : Data Flow Kernel object
102
103 KWargs:
104 - ip (string) : IP address
105 - port (int) : Port number, Default:50077
106 - domain_name (string) : Domain name, will override IP
107 Default: tracking.parsl-project.org
108 """
109
110 self.domain_name = domain_name
111 self.ip = ip
112 # The sock timeout will only apply to UDP send and not domain resolution
113 self.sock_timeout = 5
114 self.UDP_PORT = port
115 self.UDP_IP = None
116 self.procs = []
117 self.dfk = dfk
118 self.config = self.dfk.config
119 self.uuid = str(uuid.uuid4())
120 self.parsl_version = PARSL_VERSION
121 self.python_version = "{}.{}.{}".format(sys.version_info.major,
122 sys.version_info.minor,
123 sys.version_info.micro)
124 self.test_mode, self.tracking_enabled = self.check_tracking_enabled()
125 logger.debug("Tracking status: {}".format(self.tracking_enabled))
126 logger.debug("Testing mode : {}".format(self.test_mode))
127 self.initialized = False # Once first message is sent this will be True
128
129 def check_tracking_enabled(self):
130 """By default tracking is enabled.
131
132 If Test mode is set via env variable PARSL_TESTING, a test flag is set
133
134 Tracking is disabled if :
135 1. config["globals"]["usageTracking"] is set to False (Bool)
136 2. Environment variable PARSL_TRACKING is set to false (case insensitive)
137
138 """
139 track = True # By default we track usage
140 test = False # By default we are not in testing mode
141
142 testvar = str(os.environ.get("PARSL_TESTING", 'None')).lower()
143 if testvar == 'true':
144 test = True
145
146 if not self.config.usage_tracking:
147 track = False
148
149 envvar = str(os.environ.get("PARSL_TRACKING", True)).lower()
150 if envvar == "false":
151 track = False
152
153 return test, track
154
155 def construct_start_message(self):
156 """Collect preliminary run info at the start of the DFK.
157
158 Returns :
159 - Message dict dumped as json string, ready for UDP
160 """
161 uname = getpass.getuser().encode('latin1')
162 hashed_username = hashlib.sha256(uname).hexdigest()[0:10]
163 hname = socket.gethostname().encode('latin1')
164 hashed_hostname = hashlib.sha256(hname).hexdigest()[0:10]
165 message = {'uuid': self.uuid,
166 'uname': hashed_username,
167 'hname': hashed_hostname,
168 'test': self.test_mode,
169 'parsl_v': self.parsl_version,
170 'python_v': self.python_version,
171 'start': time.time()}
172
173 return json.dumps(message)
174
175 def construct_end_message(self):
176 """Collect the final run information at the time of DFK cleanup.
177
178 Returns:
179 - Message dict dumped as json string, ready for UDP
180 """
181 app_count = self.dfk.task_count
182
183 site_count = len([x for x in self.dfk.config.executors if x.managed])
184
185 failed_states = (States.failed, States.dep_fail)
186 app_fails = len([t for t in self.dfk.tasks if
187 self.dfk.tasks[t]['status'] in failed_states])
188
189 message = {'uuid': self.uuid,
190 'end': time.time(),
191 't_apps': app_count,
192 'sites': site_count,
193 'c_time': None,
194 'failed': app_fails,
195 'test': self.test_mode,
196 }
197
198 return json.dumps(message)
199
200 def send_UDP_message(self, message):
201 """Send UDP message."""
202 x = 0
203 if self.tracking_enabled:
204 try:
205 proc = udp_messenger(self.domain_name, self.UDP_IP, self.UDP_PORT, self.sock_timeout, message)
206 self.procs.append(proc)
207 except Exception as e:
208 logger.debug("Usage tracking failed: {}".format(e))
209 else:
210 x = -1
211
212 return x
213
214 def send_message(self):
215 """Send message over UDP.
216
217 If tracking is disables, the bytes_sent will always be set to -1
218
219 Returns:
220 (bytes_sent, time_taken)
221 """
222 start = time.time()
223 message = None
224 if not self.initialized:
225 message = self.construct_start_message()
226 self.initialized = True
227 else:
228 message = self.construct_end_message()
229
230 self.send_UDP_message(message)
231 end = time.time()
232
233 return end - start
234
235 def __del__(self):
236 return self.close()
237
238 def close(self):
239 """We terminate (SIGTERM) the processes added to the self.procs list """
240 for proc in self.procs:
241 proc.terminate()
242
243
244 if __name__ == '__main__':
245
246 from parsl import *
247
248 set_stream_logger()
249 workers = ThreadPoolExecutor(max_workers=4)
250 dfk = DataFlowKernel(executors=[workers])
251
252 dfk.cleanup()
253
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsl/dataflow/usage_tracking/usage.py b/parsl/dataflow/usage_tracking/usage.py
--- a/parsl/dataflow/usage_tracking/usage.py
+++ b/parsl/dataflow/usage_tracking/usage.py
@@ -7,6 +7,7 @@
import logging
import socket
import sys
+import platform
import multiprocessing as mp
from parsl.dataflow.states import States
@@ -168,6 +169,8 @@
'test': self.test_mode,
'parsl_v': self.parsl_version,
'python_v': self.python_version,
+ 'os': platform.system(),
+ 'os_v': platform.release(),
'start': time.time()}
return json.dumps(message)
|
{"golden_diff": "diff --git a/parsl/dataflow/usage_tracking/usage.py b/parsl/dataflow/usage_tracking/usage.py\n--- a/parsl/dataflow/usage_tracking/usage.py\n+++ b/parsl/dataflow/usage_tracking/usage.py\n@@ -7,6 +7,7 @@\n import logging\n import socket\n import sys\n+import platform\n import multiprocessing as mp\n \n from parsl.dataflow.states import States\n@@ -168,6 +169,8 @@\n 'test': self.test_mode,\n 'parsl_v': self.parsl_version,\n 'python_v': self.python_version,\n+ 'os': platform.system(),\n+ 'os_v': platform.release(),\n 'start': time.time()}\n \n return json.dumps(message)\n", "issue": "Add OS to usage stats collection\n\n", "before_files": [{"content": "import uuid\nimport time\nimport hashlib\nimport os\nimport getpass\nimport json\nimport logging\nimport socket\nimport sys\nimport multiprocessing as mp\n\nfrom parsl.dataflow.states import States\nfrom parsl.version import VERSION as PARSL_VERSION\n\nlogger = logging.getLogger(__name__)\n\n\ndef async_process(fn):\n \"\"\" Decorator function to launch a function as a separate process \"\"\"\n\n def run(*args, **kwargs):\n proc = mp.Process(target=fn, args=args, kwargs=kwargs)\n proc.start()\n return proc\n\n return run\n\n\n@async_process\ndef udp_messenger(domain_name, UDP_IP, UDP_PORT, sock_timeout, message):\n \"\"\"Send UDP messages to usage tracker asynchronously\n\n This multiprocessing based messenger was written to overcome the limitations\n of signalling/terminating a thread that is blocked on a system call. This\n messenger is created as a separate process, and initialized with 2 queues,\n to_send to receive messages to be sent to the internet.\n\n Args:\n - domain_name (str) : Domain name string\n - UDP_IP (str) : IP address YYY.YYY.YYY.YYY\n - UDP_PORT (int) : UDP port to send out on\n - sock_timeout (int) : Socket timeout\n - to_send (multiprocessing.Queue) : Queue of outgoing messages to internet\n \"\"\"\n try:\n if message is None:\n raise ValueError(\"message was none\")\n\n encoded_message = bytes(message, \"utf-8\")\n\n if encoded_message is None:\n raise ValueError(\"utf-8 encoding of message failed\")\n\n if domain_name:\n try:\n UDP_IP = socket.gethostbyname(domain_name)\n except Exception:\n # (False, \"Domain lookup failed, defaulting to {0}\".format(UDP_IP))\n pass\n\n if UDP_IP is None:\n raise Exception(\"UDP_IP is None\")\n\n if UDP_PORT is None:\n raise Exception(\"UDP_PORT is None\")\n\n sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP\n sock.settimeout(sock_timeout)\n sock.sendto(bytes(message, \"utf-8\"), (UDP_IP, UDP_PORT))\n sock.close()\n\n except socket.timeout:\n logger.debug(\"Failed to send usage tracking data: socket timeout\")\n except OSError as e:\n logger.debug(\"Failed to send usage tracking data: OSError: {}\".format(e))\n except Exception as e:\n logger.debug(\"Failed to send usage tracking data: Exception: {}\".format(e))\n\n\nclass UsageTracker (object):\n \"\"\"Anonymized Usage Tracking for Parsl.\n\n Client for this is here : https://github.com/Parsl/parsl_tracking\n This issue captures the discussion that went into functionality\n implemented here : https://github.com/Parsl/parsl/issues/34\n\n \"\"\"\n\n def __init__(self, dfk, ip='52.3.111.203', port=50077,\n domain_name='tracking.parsl-project.org'):\n \"\"\"Initialize usage tracking unless the user has opted-out.\n\n We will try to resolve the hostname specified in kwarg:domain_name\n and if that fails attempt to use the kwarg:ip. 
Determining the\n IP and sending message is threaded to avoid slowing down DFK\n initialization.\n\n Tracks usage stats by inspecting the internal state of the dfk.\n\n Args:\n - dfk (DFK object) : Data Flow Kernel object\n\n KWargs:\n - ip (string) : IP address\n - port (int) : Port number, Default:50077\n - domain_name (string) : Domain name, will override IP\n Default: tracking.parsl-project.org\n \"\"\"\n\n self.domain_name = domain_name\n self.ip = ip\n # The sock timeout will only apply to UDP send and not domain resolution\n self.sock_timeout = 5\n self.UDP_PORT = port\n self.UDP_IP = None\n self.procs = []\n self.dfk = dfk\n self.config = self.dfk.config\n self.uuid = str(uuid.uuid4())\n self.parsl_version = PARSL_VERSION\n self.python_version = \"{}.{}.{}\".format(sys.version_info.major,\n sys.version_info.minor,\n sys.version_info.micro)\n self.test_mode, self.tracking_enabled = self.check_tracking_enabled()\n logger.debug(\"Tracking status: {}\".format(self.tracking_enabled))\n logger.debug(\"Testing mode : {}\".format(self.test_mode))\n self.initialized = False # Once first message is sent this will be True\n\n def check_tracking_enabled(self):\n \"\"\"By default tracking is enabled.\n\n If Test mode is set via env variable PARSL_TESTING, a test flag is set\n\n Tracking is disabled if :\n 1. config[\"globals\"][\"usageTracking\"] is set to False (Bool)\n 2. Environment variable PARSL_TRACKING is set to false (case insensitive)\n\n \"\"\"\n track = True # By default we track usage\n test = False # By default we are not in testing mode\n\n testvar = str(os.environ.get(\"PARSL_TESTING\", 'None')).lower()\n if testvar == 'true':\n test = True\n\n if not self.config.usage_tracking:\n track = False\n\n envvar = str(os.environ.get(\"PARSL_TRACKING\", True)).lower()\n if envvar == \"false\":\n track = False\n\n return test, track\n\n def construct_start_message(self):\n \"\"\"Collect preliminary run info at the start of the DFK.\n\n Returns :\n - Message dict dumped as json string, ready for UDP\n \"\"\"\n uname = getpass.getuser().encode('latin1')\n hashed_username = hashlib.sha256(uname).hexdigest()[0:10]\n hname = socket.gethostname().encode('latin1')\n hashed_hostname = hashlib.sha256(hname).hexdigest()[0:10]\n message = {'uuid': self.uuid,\n 'uname': hashed_username,\n 'hname': hashed_hostname,\n 'test': self.test_mode,\n 'parsl_v': self.parsl_version,\n 'python_v': self.python_version,\n 'start': time.time()}\n\n return json.dumps(message)\n\n def construct_end_message(self):\n \"\"\"Collect the final run information at the time of DFK cleanup.\n\n Returns:\n - Message dict dumped as json string, ready for UDP\n \"\"\"\n app_count = self.dfk.task_count\n\n site_count = len([x for x in self.dfk.config.executors if x.managed])\n\n failed_states = (States.failed, States.dep_fail)\n app_fails = len([t for t in self.dfk.tasks if\n self.dfk.tasks[t]['status'] in failed_states])\n\n message = {'uuid': self.uuid,\n 'end': time.time(),\n 't_apps': app_count,\n 'sites': site_count,\n 'c_time': None,\n 'failed': app_fails,\n 'test': self.test_mode,\n }\n\n return json.dumps(message)\n\n def send_UDP_message(self, message):\n \"\"\"Send UDP message.\"\"\"\n x = 0\n if self.tracking_enabled:\n try:\n proc = udp_messenger(self.domain_name, self.UDP_IP, self.UDP_PORT, self.sock_timeout, message)\n self.procs.append(proc)\n except Exception as e:\n logger.debug(\"Usage tracking failed: {}\".format(e))\n else:\n x = -1\n\n return x\n\n def send_message(self):\n \"\"\"Send message over UDP.\n\n If 
tracking is disables, the bytes_sent will always be set to -1\n\n Returns:\n (bytes_sent, time_taken)\n \"\"\"\n start = time.time()\n message = None\n if not self.initialized:\n message = self.construct_start_message()\n self.initialized = True\n else:\n message = self.construct_end_message()\n\n self.send_UDP_message(message)\n end = time.time()\n\n return end - start\n\n def __del__(self):\n return self.close()\n\n def close(self):\n \"\"\"We terminate (SIGTERM) the processes added to the self.procs list \"\"\"\n for proc in self.procs:\n proc.terminate()\n\n\nif __name__ == '__main__':\n\n from parsl import *\n\n set_stream_logger()\n workers = ThreadPoolExecutor(max_workers=4)\n dfk = DataFlowKernel(executors=[workers])\n\n dfk.cleanup()\n", "path": "parsl/dataflow/usage_tracking/usage.py"}], "after_files": [{"content": "import uuid\nimport time\nimport hashlib\nimport os\nimport getpass\nimport json\nimport logging\nimport socket\nimport sys\nimport platform\nimport multiprocessing as mp\n\nfrom parsl.dataflow.states import States\nfrom parsl.version import VERSION as PARSL_VERSION\n\nlogger = logging.getLogger(__name__)\n\n\ndef async_process(fn):\n \"\"\" Decorator function to launch a function as a separate process \"\"\"\n\n def run(*args, **kwargs):\n proc = mp.Process(target=fn, args=args, kwargs=kwargs)\n proc.start()\n return proc\n\n return run\n\n\n@async_process\ndef udp_messenger(domain_name, UDP_IP, UDP_PORT, sock_timeout, message):\n \"\"\"Send UDP messages to usage tracker asynchronously\n\n This multiprocessing based messenger was written to overcome the limitations\n of signalling/terminating a thread that is blocked on a system call. This\n messenger is created as a separate process, and initialized with 2 queues,\n to_send to receive messages to be sent to the internet.\n\n Args:\n - domain_name (str) : Domain name string\n - UDP_IP (str) : IP address YYY.YYY.YYY.YYY\n - UDP_PORT (int) : UDP port to send out on\n - sock_timeout (int) : Socket timeout\n - to_send (multiprocessing.Queue) : Queue of outgoing messages to internet\n \"\"\"\n try:\n if message is None:\n raise ValueError(\"message was none\")\n\n encoded_message = bytes(message, \"utf-8\")\n\n if encoded_message is None:\n raise ValueError(\"utf-8 encoding of message failed\")\n\n if domain_name:\n try:\n UDP_IP = socket.gethostbyname(domain_name)\n except Exception:\n # (False, \"Domain lookup failed, defaulting to {0}\".format(UDP_IP))\n pass\n\n if UDP_IP is None:\n raise Exception(\"UDP_IP is None\")\n\n if UDP_PORT is None:\n raise Exception(\"UDP_PORT is None\")\n\n sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP\n sock.settimeout(sock_timeout)\n sock.sendto(bytes(message, \"utf-8\"), (UDP_IP, UDP_PORT))\n sock.close()\n\n except socket.timeout:\n logger.debug(\"Failed to send usage tracking data: socket timeout\")\n except OSError as e:\n logger.debug(\"Failed to send usage tracking data: OSError: {}\".format(e))\n except Exception as e:\n logger.debug(\"Failed to send usage tracking data: Exception: {}\".format(e))\n\n\nclass UsageTracker (object):\n \"\"\"Anonymized Usage Tracking for Parsl.\n\n Client for this is here : https://github.com/Parsl/parsl_tracking\n This issue captures the discussion that went into functionality\n implemented here : https://github.com/Parsl/parsl/issues/34\n\n \"\"\"\n\n def __init__(self, dfk, ip='52.3.111.203', port=50077,\n domain_name='tracking.parsl-project.org'):\n \"\"\"Initialize usage tracking unless the user has opted-out.\n\n We will try to 
resolve the hostname specified in kwarg:domain_name\n and if that fails attempt to use the kwarg:ip. Determining the\n IP and sending message is threaded to avoid slowing down DFK\n initialization.\n\n Tracks usage stats by inspecting the internal state of the dfk.\n\n Args:\n - dfk (DFK object) : Data Flow Kernel object\n\n KWargs:\n - ip (string) : IP address\n - port (int) : Port number, Default:50077\n - domain_name (string) : Domain name, will override IP\n Default: tracking.parsl-project.org\n \"\"\"\n\n self.domain_name = domain_name\n self.ip = ip\n # The sock timeout will only apply to UDP send and not domain resolution\n self.sock_timeout = 5\n self.UDP_PORT = port\n self.UDP_IP = None\n self.procs = []\n self.dfk = dfk\n self.config = self.dfk.config\n self.uuid = str(uuid.uuid4())\n self.parsl_version = PARSL_VERSION\n self.python_version = \"{}.{}.{}\".format(sys.version_info.major,\n sys.version_info.minor,\n sys.version_info.micro)\n self.test_mode, self.tracking_enabled = self.check_tracking_enabled()\n logger.debug(\"Tracking status: {}\".format(self.tracking_enabled))\n logger.debug(\"Testing mode : {}\".format(self.test_mode))\n self.initialized = False # Once first message is sent this will be True\n\n def check_tracking_enabled(self):\n \"\"\"By default tracking is enabled.\n\n If Test mode is set via env variable PARSL_TESTING, a test flag is set\n\n Tracking is disabled if :\n 1. config[\"globals\"][\"usageTracking\"] is set to False (Bool)\n 2. Environment variable PARSL_TRACKING is set to false (case insensitive)\n\n \"\"\"\n track = True # By default we track usage\n test = False # By default we are not in testing mode\n\n testvar = str(os.environ.get(\"PARSL_TESTING\", 'None')).lower()\n if testvar == 'true':\n test = True\n\n if not self.config.usage_tracking:\n track = False\n\n envvar = str(os.environ.get(\"PARSL_TRACKING\", True)).lower()\n if envvar == \"false\":\n track = False\n\n return test, track\n\n def construct_start_message(self):\n \"\"\"Collect preliminary run info at the start of the DFK.\n\n Returns :\n - Message dict dumped as json string, ready for UDP\n \"\"\"\n uname = getpass.getuser().encode('latin1')\n hashed_username = hashlib.sha256(uname).hexdigest()[0:10]\n hname = socket.gethostname().encode('latin1')\n hashed_hostname = hashlib.sha256(hname).hexdigest()[0:10]\n message = {'uuid': self.uuid,\n 'uname': hashed_username,\n 'hname': hashed_hostname,\n 'test': self.test_mode,\n 'parsl_v': self.parsl_version,\n 'python_v': self.python_version,\n 'os': platform.system(),\n 'os_v': platform.release(),\n 'start': time.time()}\n\n return json.dumps(message)\n\n def construct_end_message(self):\n \"\"\"Collect the final run information at the time of DFK cleanup.\n\n Returns:\n - Message dict dumped as json string, ready for UDP\n \"\"\"\n app_count = self.dfk.task_count\n\n site_count = len([x for x in self.dfk.config.executors if x.managed])\n\n failed_states = (States.failed, States.dep_fail)\n app_fails = len([t for t in self.dfk.tasks if\n self.dfk.tasks[t]['status'] in failed_states])\n\n message = {'uuid': self.uuid,\n 'end': time.time(),\n 't_apps': app_count,\n 'sites': site_count,\n 'c_time': None,\n 'failed': app_fails,\n 'test': self.test_mode,\n }\n\n return json.dumps(message)\n\n def send_UDP_message(self, message):\n \"\"\"Send UDP message.\"\"\"\n x = 0\n if self.tracking_enabled:\n try:\n proc = udp_messenger(self.domain_name, self.UDP_IP, self.UDP_PORT, self.sock_timeout, message)\n self.procs.append(proc)\n except Exception 
as e:\n logger.debug(\"Usage tracking failed: {}\".format(e))\n else:\n x = -1\n\n return x\n\n def send_message(self):\n \"\"\"Send message over UDP.\n\n If tracking is disables, the bytes_sent will always be set to -1\n\n Returns:\n (bytes_sent, time_taken)\n \"\"\"\n start = time.time()\n message = None\n if not self.initialized:\n message = self.construct_start_message()\n self.initialized = True\n else:\n message = self.construct_end_message()\n\n self.send_UDP_message(message)\n end = time.time()\n\n return end - start\n\n def __del__(self):\n return self.close()\n\n def close(self):\n \"\"\"We terminate (SIGTERM) the processes added to the self.procs list \"\"\"\n for proc in self.procs:\n proc.terminate()\n\n\nif __name__ == '__main__':\n\n from parsl import *\n\n set_stream_logger()\n workers = ThreadPoolExecutor(max_workers=4)\n dfk = DataFlowKernel(executors=[workers])\n\n dfk.cleanup()\n", "path": "parsl/dataflow/usage_tracking/usage.py"}]}
| 2,783 | 167 |
gh_patches_debug_34452
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-7685
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
make `mmdet.apis.init_detector` work with pathlib
**Describe the feature**
make `mmdet.apis.init_detector` work with pathlib
**Motivation**
Since mmcv works with pathlib, there is no reason to use str path only. (ref https://github.com/open-mmlab/mmcv/issues/3).
https://github.com/open-mmlab/mmdetection/blob/3e2693151add9b5d6db99b944da020cba837266b/mmdet/apis/inference.py#L31
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/models/detectors/kd_one_stage.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import mmcv
3 import torch
4 from mmcv.runner import load_checkpoint
5
6 from .. import build_detector
7 from ..builder import DETECTORS
8 from .single_stage import SingleStageDetector
9
10
11 @DETECTORS.register_module()
12 class KnowledgeDistillationSingleStageDetector(SingleStageDetector):
13 r"""Implementation of `Distilling the Knowledge in a Neural Network.
14 <https://arxiv.org/abs/1503.02531>`_.
15
16 Args:
17 teacher_config (str | dict): Config file path
18 or the config object of teacher model.
19 teacher_ckpt (str, optional): Checkpoint path of teacher model.
20 If left as None, the model will not load any weights.
21 """
22
23 def __init__(self,
24 backbone,
25 neck,
26 bbox_head,
27 teacher_config,
28 teacher_ckpt=None,
29 eval_teacher=True,
30 train_cfg=None,
31 test_cfg=None,
32 pretrained=None):
33 super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg,
34 pretrained)
35 self.eval_teacher = eval_teacher
36 # Build teacher model
37 if isinstance(teacher_config, str):
38 teacher_config = mmcv.Config.fromfile(teacher_config)
39 self.teacher_model = build_detector(teacher_config['model'])
40 if teacher_ckpt is not None:
41 load_checkpoint(
42 self.teacher_model, teacher_ckpt, map_location='cpu')
43
44 def forward_train(self,
45 img,
46 img_metas,
47 gt_bboxes,
48 gt_labels,
49 gt_bboxes_ignore=None):
50 """
51 Args:
52 img (Tensor): Input images of shape (N, C, H, W).
53 Typically these should be mean centered and std scaled.
54 img_metas (list[dict]): A List of image info dict where each dict
55 has: 'img_shape', 'scale_factor', 'flip', and may also contain
56 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
57 For details on the values of these keys see
58 :class:`mmdet.datasets.pipelines.Collect`.
59 gt_bboxes (list[Tensor]): Each item are the truth boxes for each
60 image in [tl_x, tl_y, br_x, br_y] format.
61 gt_labels (list[Tensor]): Class indices corresponding to each box
62 gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
63 boxes can be ignored when computing the loss.
64 Returns:
65 dict[str, Tensor]: A dictionary of loss components.
66 """
67 x = self.extract_feat(img)
68 with torch.no_grad():
69 teacher_x = self.teacher_model.extract_feat(img)
70 out_teacher = self.teacher_model.bbox_head(teacher_x)
71 losses = self.bbox_head.forward_train(x, out_teacher, img_metas,
72 gt_bboxes, gt_labels,
73 gt_bboxes_ignore)
74 return losses
75
76 def cuda(self, device=None):
77 """Since teacher_model is registered as a plain object, it is necessary
78 to put the teacher model to cuda when calling cuda function."""
79 self.teacher_model.cuda(device=device)
80 return super().cuda(device=device)
81
82 def train(self, mode=True):
83 """Set the same train mode for teacher and student model."""
84 if self.eval_teacher:
85 self.teacher_model.train(False)
86 else:
87 self.teacher_model.train(mode)
88 super().train(mode)
89
90 def __setattr__(self, name, value):
91 """Set attribute, i.e. self.name = value
92
93 This reloading prevent the teacher model from being registered as a
94 nn.Module. The teacher module is registered as a plain object, so that
95 the teacher parameters will not show up when calling
96 ``self.parameters``, ``self.modules``, ``self.children`` methods.
97 """
98 if name == 'teacher_model':
99 object.__setattr__(self, name, value)
100 else:
101 super().__setattr__(name, value)
102
```
Path: `mmdet/apis/inference.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import warnings
3
4 import mmcv
5 import numpy as np
6 import torch
7 from mmcv.ops import RoIPool
8 from mmcv.parallel import collate, scatter
9 from mmcv.runner import load_checkpoint
10
11 from mmdet.core import get_classes
12 from mmdet.datasets import replace_ImageToTensor
13 from mmdet.datasets.pipelines import Compose
14 from mmdet.models import build_detector
15
16
17 def init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):
18 """Initialize a detector from config file.
19
20 Args:
21 config (str or :obj:`mmcv.Config`): Config file path or the config
22 object.
23 checkpoint (str, optional): Checkpoint path. If left as None, the model
24 will not load any weights.
25 cfg_options (dict): Options to override some settings in the used
26 config.
27
28 Returns:
29 nn.Module: The constructed detector.
30 """
31 if isinstance(config, str):
32 config = mmcv.Config.fromfile(config)
33 elif not isinstance(config, mmcv.Config):
34 raise TypeError('config must be a filename or Config object, '
35 f'but got {type(config)}')
36 if cfg_options is not None:
37 config.merge_from_dict(cfg_options)
38 if 'pretrained' in config.model:
39 config.model.pretrained = None
40 elif 'init_cfg' in config.model.backbone:
41 config.model.backbone.init_cfg = None
42 config.model.train_cfg = None
43 model = build_detector(config.model, test_cfg=config.get('test_cfg'))
44 if checkpoint is not None:
45 checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
46 if 'CLASSES' in checkpoint.get('meta', {}):
47 model.CLASSES = checkpoint['meta']['CLASSES']
48 else:
49 warnings.simplefilter('once')
50 warnings.warn('Class names are not saved in the checkpoint\'s '
51 'meta data, use COCO classes by default.')
52 model.CLASSES = get_classes('coco')
53 model.cfg = config # save the config in the model for convenience
54 model.to(device)
55 model.eval()
56 return model
57
58
59 class LoadImage:
60 """Deprecated.
61
62 A simple pipeline to load image.
63 """
64
65 def __call__(self, results):
66 """Call function to load images into results.
67
68 Args:
69 results (dict): A result dict contains the file name
70 of the image to be read.
71 Returns:
72 dict: ``results`` will be returned containing loaded image.
73 """
74 warnings.simplefilter('once')
75 warnings.warn('`LoadImage` is deprecated and will be removed in '
76 'future releases. You may use `LoadImageFromWebcam` '
77 'from `mmdet.datasets.pipelines.` instead.')
78 if isinstance(results['img'], str):
79 results['filename'] = results['img']
80 results['ori_filename'] = results['img']
81 else:
82 results['filename'] = None
83 results['ori_filename'] = None
84 img = mmcv.imread(results['img'])
85 results['img'] = img
86 results['img_fields'] = ['img']
87 results['img_shape'] = img.shape
88 results['ori_shape'] = img.shape
89 return results
90
91
92 def inference_detector(model, imgs):
93 """Inference image(s) with the detector.
94
95 Args:
96 model (nn.Module): The loaded detector.
97 imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]):
98 Either image files or loaded images.
99
100 Returns:
101 If imgs is a list or tuple, the same length list type results
102 will be returned, otherwise return the detection results directly.
103 """
104
105 if isinstance(imgs, (list, tuple)):
106 is_batch = True
107 else:
108 imgs = [imgs]
109 is_batch = False
110
111 cfg = model.cfg
112 device = next(model.parameters()).device # model device
113
114 if isinstance(imgs[0], np.ndarray):
115 cfg = cfg.copy()
116 # set loading pipeline type
117 cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'
118
119 cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
120 test_pipeline = Compose(cfg.data.test.pipeline)
121
122 datas = []
123 for img in imgs:
124 # prepare data
125 if isinstance(img, np.ndarray):
126 # directly add img
127 data = dict(img=img)
128 else:
129 # add information into dict
130 data = dict(img_info=dict(filename=img), img_prefix=None)
131 # build the data pipeline
132 data = test_pipeline(data)
133 datas.append(data)
134
135 data = collate(datas, samples_per_gpu=len(imgs))
136 # just get the actual data from DataContainer
137 data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]
138 data['img'] = [img.data[0] for img in data['img']]
139 if next(model.parameters()).is_cuda:
140 # scatter to specified GPU
141 data = scatter(data, [device])[0]
142 else:
143 for m in model.modules():
144 assert not isinstance(
145 m, RoIPool
146 ), 'CPU inference with RoIPool is not supported currently.'
147
148 # forward the model
149 with torch.no_grad():
150 results = model(return_loss=False, rescale=True, **data)
151
152 if not is_batch:
153 return results[0]
154 else:
155 return results
156
157
158 async def async_inference_detector(model, imgs):
159 """Async inference image(s) with the detector.
160
161 Args:
162 model (nn.Module): The loaded detector.
163 img (str | ndarray): Either image files or loaded images.
164
165 Returns:
166 Awaitable detection results.
167 """
168 if not isinstance(imgs, (list, tuple)):
169 imgs = [imgs]
170
171 cfg = model.cfg
172 device = next(model.parameters()).device # model device
173
174 if isinstance(imgs[0], np.ndarray):
175 cfg = cfg.copy()
176 # set loading pipeline type
177 cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'
178
179 cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
180 test_pipeline = Compose(cfg.data.test.pipeline)
181
182 datas = []
183 for img in imgs:
184 # prepare data
185 if isinstance(img, np.ndarray):
186 # directly add img
187 data = dict(img=img)
188 else:
189 # add information into dict
190 data = dict(img_info=dict(filename=img), img_prefix=None)
191 # build the data pipeline
192 data = test_pipeline(data)
193 datas.append(data)
194
195 data = collate(datas, samples_per_gpu=len(imgs))
196 # just get the actual data from DataContainer
197 data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]
198 data['img'] = [img.data[0] for img in data['img']]
199 if next(model.parameters()).is_cuda:
200 # scatter to specified GPU
201 data = scatter(data, [device])[0]
202 else:
203 for m in model.modules():
204 assert not isinstance(
205 m, RoIPool
206 ), 'CPU inference with RoIPool is not supported currently.'
207
208 # We don't restore `torch.is_grad_enabled()` value during concurrent
209 # inference since execution can overlap
210 torch.set_grad_enabled(False)
211 results = await model.aforward_test(rescale=True, **data)
212 return results
213
214
215 def show_result_pyplot(model,
216 img,
217 result,
218 score_thr=0.3,
219 title='result',
220 wait_time=0,
221 palette=None,
222 out_file=None):
223 """Visualize the detection results on the image.
224
225 Args:
226 model (nn.Module): The loaded detector.
227 img (str or np.ndarray): Image filename or loaded image.
228 result (tuple[list] or list): The detection result, can be either
229 (bbox, segm) or just bbox.
230 score_thr (float): The threshold to visualize the bboxes and masks.
231 title (str): Title of the pyplot figure.
232 wait_time (float): Value of waitKey param. Default: 0.
233 palette (str or tuple(int) or :obj:`Color`): Color.
234 The tuple of color should be in BGR order.
235 out_file (str or None): The path to write the image.
236 Default: None.
237 """
238 if hasattr(model, 'module'):
239 model = model.module
240 model.show_result(
241 img,
242 result,
243 score_thr=score_thr,
244 show=True,
245 wait_time=wait_time,
246 win_name=title,
247 bbox_color=palette,
248 text_color=(200, 200, 200),
249 mask_color=palette,
250 out_file=out_file)
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmdet/apis/inference.py b/mmdet/apis/inference.py
--- a/mmdet/apis/inference.py
+++ b/mmdet/apis/inference.py
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import warnings
+from pathlib import Path
import mmcv
import numpy as np
@@ -18,8 +19,8 @@
"""Initialize a detector from config file.
Args:
- config (str or :obj:`mmcv.Config`): Config file path or the config
- object.
+ config (str, :obj:`Path`, or :obj:`mmcv.Config`): Config file path,
+ :obj:`Path`, or the config object.
checkpoint (str, optional): Checkpoint path. If left as None, the model
will not load any weights.
cfg_options (dict): Options to override some settings in the used
@@ -28,7 +29,7 @@
Returns:
nn.Module: The constructed detector.
"""
- if isinstance(config, str):
+ if isinstance(config, (str, Path)):
config = mmcv.Config.fromfile(config)
elif not isinstance(config, mmcv.Config):
raise TypeError('config must be a filename or Config object, '
diff --git a/mmdet/models/detectors/kd_one_stage.py b/mmdet/models/detectors/kd_one_stage.py
--- a/mmdet/models/detectors/kd_one_stage.py
+++ b/mmdet/models/detectors/kd_one_stage.py
@@ -1,4 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
+from pathlib import Path
+
import mmcv
import torch
from mmcv.runner import load_checkpoint
@@ -34,7 +36,7 @@
pretrained)
self.eval_teacher = eval_teacher
# Build teacher model
- if isinstance(teacher_config, str):
+ if isinstance(teacher_config, (str, Path)):
teacher_config = mmcv.Config.fromfile(teacher_config)
self.teacher_model = build_detector(teacher_config['model'])
if teacher_ckpt is not None:
|
{"golden_diff": "diff --git a/mmdet/apis/inference.py b/mmdet/apis/inference.py\n--- a/mmdet/apis/inference.py\n+++ b/mmdet/apis/inference.py\n@@ -1,5 +1,6 @@\n # Copyright (c) OpenMMLab. All rights reserved.\n import warnings\n+from pathlib import Path\n \n import mmcv\n import numpy as np\n@@ -18,8 +19,8 @@\n \"\"\"Initialize a detector from config file.\n \n Args:\n- config (str or :obj:`mmcv.Config`): Config file path or the config\n- object.\n+ config (str, :obj:`Path`, or :obj:`mmcv.Config`): Config file path,\n+ :obj:`Path`, or the config object.\n checkpoint (str, optional): Checkpoint path. If left as None, the model\n will not load any weights.\n cfg_options (dict): Options to override some settings in the used\n@@ -28,7 +29,7 @@\n Returns:\n nn.Module: The constructed detector.\n \"\"\"\n- if isinstance(config, str):\n+ if isinstance(config, (str, Path)):\n config = mmcv.Config.fromfile(config)\n elif not isinstance(config, mmcv.Config):\n raise TypeError('config must be a filename or Config object, '\ndiff --git a/mmdet/models/detectors/kd_one_stage.py b/mmdet/models/detectors/kd_one_stage.py\n--- a/mmdet/models/detectors/kd_one_stage.py\n+++ b/mmdet/models/detectors/kd_one_stage.py\n@@ -1,4 +1,6 @@\n # Copyright (c) OpenMMLab. All rights reserved.\n+from pathlib import Path\n+\n import mmcv\n import torch\n from mmcv.runner import load_checkpoint\n@@ -34,7 +36,7 @@\n pretrained)\n self.eval_teacher = eval_teacher\n # Build teacher model\n- if isinstance(teacher_config, str):\n+ if isinstance(teacher_config, (str, Path)):\n teacher_config = mmcv.Config.fromfile(teacher_config)\n self.teacher_model = build_detector(teacher_config['model'])\n if teacher_ckpt is not None:\n", "issue": "make `mmdet.apis.init_detector` work with pathlib\n**Describe the feature**\r\nmake `mmdet.apis.init_detector` work with pathlib \r\n**Motivation**\r\nSince mmcv works with pathlib, there is no reason to use str path only. (ref https://github.com/open-mmlab/mmcv/issues/3).\r\nhttps://github.com/open-mmlab/mmdetection/blob/3e2693151add9b5d6db99b944da020cba837266b/mmdet/apis/inference.py#L31\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport mmcv\nimport torch\nfrom mmcv.runner import load_checkpoint\n\nfrom .. 
import build_detector\nfrom ..builder import DETECTORS\nfrom .single_stage import SingleStageDetector\n\n\[email protected]_module()\nclass KnowledgeDistillationSingleStageDetector(SingleStageDetector):\n r\"\"\"Implementation of `Distilling the Knowledge in a Neural Network.\n <https://arxiv.org/abs/1503.02531>`_.\n\n Args:\n teacher_config (str | dict): Config file path\n or the config object of teacher model.\n teacher_ckpt (str, optional): Checkpoint path of teacher model.\n If left as None, the model will not load any weights.\n \"\"\"\n\n def __init__(self,\n backbone,\n neck,\n bbox_head,\n teacher_config,\n teacher_ckpt=None,\n eval_teacher=True,\n train_cfg=None,\n test_cfg=None,\n pretrained=None):\n super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg,\n pretrained)\n self.eval_teacher = eval_teacher\n # Build teacher model\n if isinstance(teacher_config, str):\n teacher_config = mmcv.Config.fromfile(teacher_config)\n self.teacher_model = build_detector(teacher_config['model'])\n if teacher_ckpt is not None:\n load_checkpoint(\n self.teacher_model, teacher_ckpt, map_location='cpu')\n\n def forward_train(self,\n img,\n img_metas,\n gt_bboxes,\n gt_labels,\n gt_bboxes_ignore=None):\n \"\"\"\n Args:\n img (Tensor): Input images of shape (N, C, H, W).\n Typically these should be mean centered and std scaled.\n img_metas (list[dict]): A List of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n :class:`mmdet.datasets.pipelines.Collect`.\n gt_bboxes (list[Tensor]): Each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n gt_labels (list[Tensor]): Class indices corresponding to each box\n gt_bboxes_ignore (None | list[Tensor]): Specify which bounding\n boxes can be ignored when computing the loss.\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n x = self.extract_feat(img)\n with torch.no_grad():\n teacher_x = self.teacher_model.extract_feat(img)\n out_teacher = self.teacher_model.bbox_head(teacher_x)\n losses = self.bbox_head.forward_train(x, out_teacher, img_metas,\n gt_bboxes, gt_labels,\n gt_bboxes_ignore)\n return losses\n\n def cuda(self, device=None):\n \"\"\"Since teacher_model is registered as a plain object, it is necessary\n to put the teacher model to cuda when calling cuda function.\"\"\"\n self.teacher_model.cuda(device=device)\n return super().cuda(device=device)\n\n def train(self, mode=True):\n \"\"\"Set the same train mode for teacher and student model.\"\"\"\n if self.eval_teacher:\n self.teacher_model.train(False)\n else:\n self.teacher_model.train(mode)\n super().train(mode)\n\n def __setattr__(self, name, value):\n \"\"\"Set attribute, i.e. self.name = value\n\n This reloading prevent the teacher model from being registered as a\n nn.Module. The teacher module is registered as a plain object, so that\n the teacher parameters will not show up when calling\n ``self.parameters``, ``self.modules``, ``self.children`` methods.\n \"\"\"\n if name == 'teacher_model':\n object.__setattr__(self, name, value)\n else:\n super().__setattr__(name, value)\n", "path": "mmdet/models/detectors/kd_one_stage.py"}, {"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport warnings\n\nimport mmcv\nimport numpy as np\nimport torch\nfrom mmcv.ops import RoIPool\nfrom mmcv.parallel import collate, scatter\nfrom mmcv.runner import load_checkpoint\n\nfrom mmdet.core import get_classes\nfrom mmdet.datasets import replace_ImageToTensor\nfrom mmdet.datasets.pipelines import Compose\nfrom mmdet.models import build_detector\n\n\ndef init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):\n \"\"\"Initialize a detector from config file.\n\n Args:\n config (str or :obj:`mmcv.Config`): Config file path or the config\n object.\n checkpoint (str, optional): Checkpoint path. If left as None, the model\n will not load any weights.\n cfg_options (dict): Options to override some settings in the used\n config.\n\n Returns:\n nn.Module: The constructed detector.\n \"\"\"\n if isinstance(config, str):\n config = mmcv.Config.fromfile(config)\n elif not isinstance(config, mmcv.Config):\n raise TypeError('config must be a filename or Config object, '\n f'but got {type(config)}')\n if cfg_options is not None:\n config.merge_from_dict(cfg_options)\n if 'pretrained' in config.model:\n config.model.pretrained = None\n elif 'init_cfg' in config.model.backbone:\n config.model.backbone.init_cfg = None\n config.model.train_cfg = None\n model = build_detector(config.model, test_cfg=config.get('test_cfg'))\n if checkpoint is not None:\n checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')\n if 'CLASSES' in checkpoint.get('meta', {}):\n model.CLASSES = checkpoint['meta']['CLASSES']\n else:\n warnings.simplefilter('once')\n warnings.warn('Class names are not saved in the checkpoint\\'s '\n 'meta data, use COCO classes by default.')\n model.CLASSES = get_classes('coco')\n model.cfg = config # save the config in the model for convenience\n model.to(device)\n model.eval()\n return model\n\n\nclass LoadImage:\n \"\"\"Deprecated.\n\n A simple pipeline to load image.\n \"\"\"\n\n def __call__(self, results):\n \"\"\"Call function to load images into results.\n\n Args:\n results (dict): A result dict contains the file name\n of the image to be read.\n Returns:\n dict: ``results`` will be returned containing loaded image.\n \"\"\"\n warnings.simplefilter('once')\n warnings.warn('`LoadImage` is deprecated and will be removed in '\n 'future releases. 
You may use `LoadImageFromWebcam` '\n 'from `mmdet.datasets.pipelines.` instead.')\n if isinstance(results['img'], str):\n results['filename'] = results['img']\n results['ori_filename'] = results['img']\n else:\n results['filename'] = None\n results['ori_filename'] = None\n img = mmcv.imread(results['img'])\n results['img'] = img\n results['img_fields'] = ['img']\n results['img_shape'] = img.shape\n results['ori_shape'] = img.shape\n return results\n\n\ndef inference_detector(model, imgs):\n \"\"\"Inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]):\n Either image files or loaded images.\n\n Returns:\n If imgs is a list or tuple, the same length list type results\n will be returned, otherwise return the detection results directly.\n \"\"\"\n\n if isinstance(imgs, (list, tuple)):\n is_batch = True\n else:\n imgs = [imgs]\n is_batch = False\n\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n\n if isinstance(imgs[0], np.ndarray):\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n\n cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n\n datas = []\n for img in imgs:\n # prepare data\n if isinstance(img, np.ndarray):\n # directly add img\n data = dict(img=img)\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n data = test_pipeline(data)\n datas.append(data)\n\n data = collate(datas, samples_per_gpu=len(imgs))\n # just get the actual data from DataContainer\n data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]\n data['img'] = [img.data[0] for img in data['img']]\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n\n # forward the model\n with torch.no_grad():\n results = model(return_loss=False, rescale=True, **data)\n\n if not is_batch:\n return results[0]\n else:\n return results\n\n\nasync def async_inference_detector(model, imgs):\n \"\"\"Async inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str | ndarray): Either image files or loaded images.\n\n Returns:\n Awaitable detection results.\n \"\"\"\n if not isinstance(imgs, (list, tuple)):\n imgs = [imgs]\n\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n\n if isinstance(imgs[0], np.ndarray):\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n\n cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n\n datas = []\n for img in imgs:\n # prepare data\n if isinstance(img, np.ndarray):\n # directly add img\n data = dict(img=img)\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n data = test_pipeline(data)\n datas.append(data)\n\n data = collate(datas, samples_per_gpu=len(imgs))\n # just get the actual data from DataContainer\n data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]\n data['img'] = [img.data[0] for img in data['img']]\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = 
scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n\n # We don't restore `torch.is_grad_enabled()` value during concurrent\n # inference since execution can overlap\n torch.set_grad_enabled(False)\n results = await model.aforward_test(rescale=True, **data)\n return results\n\n\ndef show_result_pyplot(model,\n img,\n result,\n score_thr=0.3,\n title='result',\n wait_time=0,\n palette=None,\n out_file=None):\n \"\"\"Visualize the detection results on the image.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str or np.ndarray): Image filename or loaded image.\n result (tuple[list] or list): The detection result, can be either\n (bbox, segm) or just bbox.\n score_thr (float): The threshold to visualize the bboxes and masks.\n title (str): Title of the pyplot figure.\n wait_time (float): Value of waitKey param. Default: 0.\n palette (str or tuple(int) or :obj:`Color`): Color.\n The tuple of color should be in BGR order.\n out_file (str or None): The path to write the image.\n Default: None.\n \"\"\"\n if hasattr(model, 'module'):\n model = model.module\n model.show_result(\n img,\n result,\n score_thr=score_thr,\n show=True,\n wait_time=wait_time,\n win_name=title,\n bbox_color=palette,\n text_color=(200, 200, 200),\n mask_color=palette,\n out_file=out_file)\n", "path": "mmdet/apis/inference.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nfrom pathlib import Path\n\nimport mmcv\nimport torch\nfrom mmcv.runner import load_checkpoint\n\nfrom .. import build_detector\nfrom ..builder import DETECTORS\nfrom .single_stage import SingleStageDetector\n\n\[email protected]_module()\nclass KnowledgeDistillationSingleStageDetector(SingleStageDetector):\n r\"\"\"Implementation of `Distilling the Knowledge in a Neural Network.\n <https://arxiv.org/abs/1503.02531>`_.\n\n Args:\n teacher_config (str | dict): Config file path\n or the config object of teacher model.\n teacher_ckpt (str, optional): Checkpoint path of teacher model.\n If left as None, the model will not load any weights.\n \"\"\"\n\n def __init__(self,\n backbone,\n neck,\n bbox_head,\n teacher_config,\n teacher_ckpt=None,\n eval_teacher=True,\n train_cfg=None,\n test_cfg=None,\n pretrained=None):\n super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg,\n pretrained)\n self.eval_teacher = eval_teacher\n # Build teacher model\n if isinstance(teacher_config, (str, Path)):\n teacher_config = mmcv.Config.fromfile(teacher_config)\n self.teacher_model = build_detector(teacher_config['model'])\n if teacher_ckpt is not None:\n load_checkpoint(\n self.teacher_model, teacher_ckpt, map_location='cpu')\n\n def forward_train(self,\n img,\n img_metas,\n gt_bboxes,\n gt_labels,\n gt_bboxes_ignore=None):\n \"\"\"\n Args:\n img (Tensor): Input images of shape (N, C, H, W).\n Typically these should be mean centered and std scaled.\n img_metas (list[dict]): A List of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n :class:`mmdet.datasets.pipelines.Collect`.\n gt_bboxes (list[Tensor]): Each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n gt_labels (list[Tensor]): Class indices corresponding to each box\n gt_bboxes_ignore (None | list[Tensor]): Specify which bounding\n boxes can be ignored when computing the 
loss.\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n x = self.extract_feat(img)\n with torch.no_grad():\n teacher_x = self.teacher_model.extract_feat(img)\n out_teacher = self.teacher_model.bbox_head(teacher_x)\n losses = self.bbox_head.forward_train(x, out_teacher, img_metas,\n gt_bboxes, gt_labels,\n gt_bboxes_ignore)\n return losses\n\n def cuda(self, device=None):\n \"\"\"Since teacher_model is registered as a plain object, it is necessary\n to put the teacher model to cuda when calling cuda function.\"\"\"\n self.teacher_model.cuda(device=device)\n return super().cuda(device=device)\n\n def train(self, mode=True):\n \"\"\"Set the same train mode for teacher and student model.\"\"\"\n if self.eval_teacher:\n self.teacher_model.train(False)\n else:\n self.teacher_model.train(mode)\n super().train(mode)\n\n def __setattr__(self, name, value):\n \"\"\"Set attribute, i.e. self.name = value\n\n This reloading prevent the teacher model from being registered as a\n nn.Module. The teacher module is registered as a plain object, so that\n the teacher parameters will not show up when calling\n ``self.parameters``, ``self.modules``, ``self.children`` methods.\n \"\"\"\n if name == 'teacher_model':\n object.__setattr__(self, name, value)\n else:\n super().__setattr__(name, value)\n", "path": "mmdet/models/detectors/kd_one_stage.py"}, {"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\nfrom pathlib import Path\n\nimport mmcv\nimport numpy as np\nimport torch\nfrom mmcv.ops import RoIPool\nfrom mmcv.parallel import collate, scatter\nfrom mmcv.runner import load_checkpoint\n\nfrom mmdet.core import get_classes\nfrom mmdet.datasets import replace_ImageToTensor\nfrom mmdet.datasets.pipelines import Compose\nfrom mmdet.models import build_detector\n\n\ndef init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):\n \"\"\"Initialize a detector from config file.\n\n Args:\n config (str, :obj:`Path`, or :obj:`mmcv.Config`): Config file path,\n :obj:`Path`, or the config object.\n checkpoint (str, optional): Checkpoint path. 
If left as None, the model\n will not load any weights.\n cfg_options (dict): Options to override some settings in the used\n config.\n\n Returns:\n nn.Module: The constructed detector.\n \"\"\"\n if isinstance(config, (str, Path)):\n config = mmcv.Config.fromfile(config)\n elif not isinstance(config, mmcv.Config):\n raise TypeError('config must be a filename or Config object, '\n f'but got {type(config)}')\n if cfg_options is not None:\n config.merge_from_dict(cfg_options)\n if 'pretrained' in config.model:\n config.model.pretrained = None\n elif 'init_cfg' in config.model.backbone:\n config.model.backbone.init_cfg = None\n config.model.train_cfg = None\n model = build_detector(config.model, test_cfg=config.get('test_cfg'))\n if checkpoint is not None:\n checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')\n if 'CLASSES' in checkpoint.get('meta', {}):\n model.CLASSES = checkpoint['meta']['CLASSES']\n else:\n warnings.simplefilter('once')\n warnings.warn('Class names are not saved in the checkpoint\\'s '\n 'meta data, use COCO classes by default.')\n model.CLASSES = get_classes('coco')\n model.cfg = config # save the config in the model for convenience\n model.to(device)\n model.eval()\n return model\n\n\nclass LoadImage:\n \"\"\"Deprecated.\n\n A simple pipeline to load image.\n \"\"\"\n\n def __call__(self, results):\n \"\"\"Call function to load images into results.\n\n Args:\n results (dict): A result dict contains the file name\n of the image to be read.\n Returns:\n dict: ``results`` will be returned containing loaded image.\n \"\"\"\n warnings.simplefilter('once')\n warnings.warn('`LoadImage` is deprecated and will be removed in '\n 'future releases. You may use `LoadImageFromWebcam` '\n 'from `mmdet.datasets.pipelines.` instead.')\n if isinstance(results['img'], str):\n results['filename'] = results['img']\n results['ori_filename'] = results['img']\n else:\n results['filename'] = None\n results['ori_filename'] = None\n img = mmcv.imread(results['img'])\n results['img'] = img\n results['img_fields'] = ['img']\n results['img_shape'] = img.shape\n results['ori_shape'] = img.shape\n return results\n\n\ndef inference_detector(model, imgs):\n \"\"\"Inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]):\n Either image files or loaded images.\n\n Returns:\n If imgs is a list or tuple, the same length list type results\n will be returned, otherwise return the detection results directly.\n \"\"\"\n\n if isinstance(imgs, (list, tuple)):\n is_batch = True\n else:\n imgs = [imgs]\n is_batch = False\n\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n\n if isinstance(imgs[0], np.ndarray):\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n\n cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n\n datas = []\n for img in imgs:\n # prepare data\n if isinstance(img, np.ndarray):\n # directly add img\n data = dict(img=img)\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n data = test_pipeline(data)\n datas.append(data)\n\n data = collate(datas, samples_per_gpu=len(imgs))\n # just get the actual data from DataContainer\n data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]\n data['img'] = [img.data[0] for img in data['img']]\n if 
next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n\n # forward the model\n with torch.no_grad():\n results = model(return_loss=False, rescale=True, **data)\n\n if not is_batch:\n return results[0]\n else:\n return results\n\n\nasync def async_inference_detector(model, imgs):\n \"\"\"Async inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str | ndarray): Either image files or loaded images.\n\n Returns:\n Awaitable detection results.\n \"\"\"\n if not isinstance(imgs, (list, tuple)):\n imgs = [imgs]\n\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n\n if isinstance(imgs[0], np.ndarray):\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n\n cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n\n datas = []\n for img in imgs:\n # prepare data\n if isinstance(img, np.ndarray):\n # directly add img\n data = dict(img=img)\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n data = test_pipeline(data)\n datas.append(data)\n\n data = collate(datas, samples_per_gpu=len(imgs))\n # just get the actual data from DataContainer\n data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]\n data['img'] = [img.data[0] for img in data['img']]\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n\n # We don't restore `torch.is_grad_enabled()` value during concurrent\n # inference since execution can overlap\n torch.set_grad_enabled(False)\n results = await model.aforward_test(rescale=True, **data)\n return results\n\n\ndef show_result_pyplot(model,\n img,\n result,\n score_thr=0.3,\n title='result',\n wait_time=0,\n palette=None):\n \"\"\"Visualize the detection results on the image.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str or np.ndarray): Image filename or loaded image.\n result (tuple[list] or list): The detection result, can be either\n (bbox, segm) or just bbox.\n score_thr (float): The threshold to visualize the bboxes and masks.\n title (str): Title of the pyplot figure.\n wait_time (float): Value of waitKey param.\n Default: 0.\n \"\"\"\n if hasattr(model, 'module'):\n model = model.module\n model.show_result(\n img,\n result,\n score_thr=score_thr,\n show=True,\n wait_time=wait_time,\n win_name=title,\n bbox_color=palette,\n text_color=(200, 200, 200),\n mask_color=palette)\n", "path": "mmdet/apis/inference.py"}]}
| 4,019 | 480 |
gh_patches_debug_18108
|
rasdani/github-patches
|
git_diff
|
projectmesa__mesa-1355
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
refactor: Remove dependency on jQuery
We should replace the `$(...)` with vanilla JS.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import re
3 import os
4 import urllib.request
5 import zipfile
6 import shutil
7
8 from setuptools import setup, find_packages
9 from codecs import open
10
11 requires = ["click", "cookiecutter", "networkx", "numpy", "pandas", "tornado", "tqdm"]
12
13 extras_require = {
14 "dev": ["black", "coverage", "flake8", "pytest >= 4.6", "pytest-cov", "sphinx"],
15 "docs": ["sphinx", "ipython"],
16 }
17
18 version = ""
19 with open("mesa/__init__.py") as fd:
20 version = re.search(
21 r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]', fd.read(), re.MULTILINE
22 ).group(1)
23
24 with open("README.rst", "rb", encoding="utf-8") as f:
25 readme = f.read()
26
27 # Ensure JS dependencies are downloaded
28 external_dir = "mesa/visualization/templates/external"
29 # We use a different path for single-file JS because some of them are loaded
30 # the same way as Mesa JS files
31 external_dir_single = "mesa/visualization/templates/js/external"
32 # First, ensure that the external directories exists
33 os.makedirs(external_dir, exist_ok=True)
34 os.makedirs(external_dir_single, exist_ok=True)
35
36
37 def ensure_JS_dep(dirname, url):
38 dst_path = os.path.join(external_dir, dirname)
39 if os.path.isdir(dst_path):
40 # Do nothing if already downloaded
41 return
42 print(f"Downloading the {dirname} dependency from the internet...")
43 zip_file = dirname + ".zip"
44 urllib.request.urlretrieve(url, zip_file)
45 with zipfile.ZipFile(zip_file, "r") as zip_ref:
46 zip_ref.extractall()
47 shutil.move(dirname, dst_path)
48 # Cleanup
49 os.remove(zip_file)
50 print("Done")
51
52
53 def ensure_JS_dep_single(url, out_name=None):
54 # Used for downloading e.g. jQuery single file
55 if out_name is None:
56 out_name = url.split("/")[-1]
57 dst_path = os.path.join(external_dir_single, out_name)
58 if os.path.isfile(dst_path):
59 return
60 print(f"Downloading the {out_name} dependency from the internet...")
61 urllib.request.urlretrieve(url, out_name)
62 shutil.move(out_name, dst_path)
63
64
65 # Important: when you update JS dependency version, make sure to also update the
66 # hardcoded included files and versions in: mesa/visualization/templates/modular_template.html
67
68 # Ensure Bootstrap
69 bootstrap_version = "5.1.3"
70 ensure_JS_dep(
71 f"bootstrap-{bootstrap_version}-dist",
72 f"https://github.com/twbs/bootstrap/releases/download/v{bootstrap_version}/bootstrap-{bootstrap_version}-dist.zip",
73 )
74
75 # Ensure Bootstrap Slider
76 bootstrap_slider_version = "11.0.2"
77 ensure_JS_dep(
78 f"bootstrap-slider-{bootstrap_slider_version}",
79 f"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip",
80 )
81
82 jquery_version = "2.2.4"
83 ensure_JS_dep_single(
84 f"https://code.jquery.com/jquery-{jquery_version}.min.js",
85 )
86 # Important: when updating the D3 version, make sure to update the constant
87 # D3_JS_FILE in mesa/visualization/ModularVisualization.py.
88 d3_version = "7.4.3"
89 ensure_JS_dep_single(
90 f"https://cdnjs.cloudflare.com/ajax/libs/d3/{d3_version}/d3.min.js",
91 out_name=f"d3-{d3_version}.min.js",
92 )
93 # Important: Make sure to update CHART_JS_FILE in
94 # mesa/visualization/ModularVisualization.py.
95 chartjs_version = "3.6.1"
96 ensure_JS_dep_single(
97 f"https://cdn.jsdelivr.net/npm/chart.js@{chartjs_version}/dist/chart.min.js",
98 out_name=f"chart-{chartjs_version}.min.js",
99 )
100
101
102 setup(
103 name="Mesa",
104 version=version,
105 description="Agent-based modeling (ABM) in Python 3+",
106 long_description=readme,
107 author="Project Mesa Team",
108 author_email="[email protected]",
109 url="https://github.com/projectmesa/mesa",
110 packages=find_packages(),
111 package_data={
112 "mesa": [
113 "visualization/templates/*.html",
114 "visualization/templates/css/*",
115 "visualization/templates/js/*",
116 "visualization/templates/external/**/*",
117 ],
118 "cookiecutter-mesa": ["cookiecutter-mesa/*"],
119 },
120 include_package_data=True,
121 install_requires=requires,
122 extras_require=extras_require,
123 keywords="agent based modeling model ABM simulation multi-agent",
124 license="Apache 2.0",
125 zip_safe=False,
126 classifiers=[
127 "Topic :: Scientific/Engineering",
128 "Topic :: Scientific/Engineering :: Artificial Life",
129 "Topic :: Scientific/Engineering :: Artificial Intelligence",
130 "Intended Audience :: Science/Research",
131 "Programming Language :: Python :: 3 :: Only",
132 "Programming Language :: Python :: 3.7",
133 "Programming Language :: Python :: 3.8",
134 "Programming Language :: Python :: 3.9",
135 "Programming Language :: Python :: 3.10",
136 "License :: OSI Approved :: Apache Software License",
137 "Operating System :: OS Independent",
138 "Development Status :: 3 - Alpha",
139 "Natural Language :: English",
140 ],
141 entry_points="""
142 [console_scripts]
143 mesa=mesa.main:cli
144 """,
145 python_requires=">=3.7",
146 )
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -51,7 +51,7 @@
def ensure_JS_dep_single(url, out_name=None):
- # Used for downloading e.g. jQuery single file
+ # Used for downloading e.g. D3.js single file
if out_name is None:
out_name = url.split("/")[-1]
dst_path = os.path.join(external_dir_single, out_name)
@@ -79,10 +79,6 @@
f"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip",
)
-jquery_version = "2.2.4"
-ensure_JS_dep_single(
- f"https://code.jquery.com/jquery-{jquery_version}.min.js",
-)
# Important: when updating the D3 version, make sure to update the constant
# D3_JS_FILE in mesa/visualization/ModularVisualization.py.
d3_version = "7.4.3"
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -51,7 +51,7 @@\n \n \n def ensure_JS_dep_single(url, out_name=None):\n- # Used for downloading e.g. jQuery single file\n+ # Used for downloading e.g. D3.js single file\n if out_name is None:\n out_name = url.split(\"/\")[-1]\n dst_path = os.path.join(external_dir_single, out_name)\n@@ -79,10 +79,6 @@\n f\"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip\",\n )\n \n-jquery_version = \"2.2.4\"\n-ensure_JS_dep_single(\n- f\"https://code.jquery.com/jquery-{jquery_version}.min.js\",\n-)\n # Important: when updating the D3 version, make sure to update the constant\n # D3_JS_FILE in mesa/visualization/ModularVisualization.py.\n d3_version = \"7.4.3\"\n", "issue": "refactor: Remove dependency on jQuery\nWe should replace the `$(...)` with vanilla JS.\n", "before_files": [{"content": "#!/usr/bin/env python\nimport re\nimport os\nimport urllib.request\nimport zipfile\nimport shutil\n\nfrom setuptools import setup, find_packages\nfrom codecs import open\n\nrequires = [\"click\", \"cookiecutter\", \"networkx\", \"numpy\", \"pandas\", \"tornado\", \"tqdm\"]\n\nextras_require = {\n \"dev\": [\"black\", \"coverage\", \"flake8\", \"pytest >= 4.6\", \"pytest-cov\", \"sphinx\"],\n \"docs\": [\"sphinx\", \"ipython\"],\n}\n\nversion = \"\"\nwith open(\"mesa/__init__.py\") as fd:\n version = re.search(\n r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]', fd.read(), re.MULTILINE\n ).group(1)\n\nwith open(\"README.rst\", \"rb\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n# Ensure JS dependencies are downloaded\nexternal_dir = \"mesa/visualization/templates/external\"\n# We use a different path for single-file JS because some of them are loaded\n# the same way as Mesa JS files\nexternal_dir_single = \"mesa/visualization/templates/js/external\"\n# First, ensure that the external directories exists\nos.makedirs(external_dir, exist_ok=True)\nos.makedirs(external_dir_single, exist_ok=True)\n\n\ndef ensure_JS_dep(dirname, url):\n dst_path = os.path.join(external_dir, dirname)\n if os.path.isdir(dst_path):\n # Do nothing if already downloaded\n return\n print(f\"Downloading the {dirname} dependency from the internet...\")\n zip_file = dirname + \".zip\"\n urllib.request.urlretrieve(url, zip_file)\n with zipfile.ZipFile(zip_file, \"r\") as zip_ref:\n zip_ref.extractall()\n shutil.move(dirname, dst_path)\n # Cleanup\n os.remove(zip_file)\n print(\"Done\")\n\n\ndef ensure_JS_dep_single(url, out_name=None):\n # Used for downloading e.g. 
jQuery single file\n if out_name is None:\n out_name = url.split(\"/\")[-1]\n dst_path = os.path.join(external_dir_single, out_name)\n if os.path.isfile(dst_path):\n return\n print(f\"Downloading the {out_name} dependency from the internet...\")\n urllib.request.urlretrieve(url, out_name)\n shutil.move(out_name, dst_path)\n\n\n# Important: when you update JS dependency version, make sure to also update the\n# hardcoded included files and versions in: mesa/visualization/templates/modular_template.html\n\n# Ensure Bootstrap\nbootstrap_version = \"5.1.3\"\nensure_JS_dep(\n f\"bootstrap-{bootstrap_version}-dist\",\n f\"https://github.com/twbs/bootstrap/releases/download/v{bootstrap_version}/bootstrap-{bootstrap_version}-dist.zip\",\n)\n\n# Ensure Bootstrap Slider\nbootstrap_slider_version = \"11.0.2\"\nensure_JS_dep(\n f\"bootstrap-slider-{bootstrap_slider_version}\",\n f\"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip\",\n)\n\njquery_version = \"2.2.4\"\nensure_JS_dep_single(\n f\"https://code.jquery.com/jquery-{jquery_version}.min.js\",\n)\n# Important: when updating the D3 version, make sure to update the constant\n# D3_JS_FILE in mesa/visualization/ModularVisualization.py.\nd3_version = \"7.4.3\"\nensure_JS_dep_single(\n f\"https://cdnjs.cloudflare.com/ajax/libs/d3/{d3_version}/d3.min.js\",\n out_name=f\"d3-{d3_version}.min.js\",\n)\n# Important: Make sure to update CHART_JS_FILE in\n# mesa/visualization/ModularVisualization.py.\nchartjs_version = \"3.6.1\"\nensure_JS_dep_single(\n f\"https://cdn.jsdelivr.net/npm/chart.js@{chartjs_version}/dist/chart.min.js\",\n out_name=f\"chart-{chartjs_version}.min.js\",\n)\n\n\nsetup(\n name=\"Mesa\",\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author=\"Project Mesa Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/projectmesa/mesa\",\n packages=find_packages(),\n package_data={\n \"mesa\": [\n \"visualization/templates/*.html\",\n \"visualization/templates/css/*\",\n \"visualization/templates/js/*\",\n \"visualization/templates/external/**/*\",\n ],\n \"cookiecutter-mesa\": [\"cookiecutter-mesa/*\"],\n },\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords=\"agent based modeling model ABM simulation multi-agent\",\n license=\"Apache 2.0\",\n zip_safe=False,\n classifiers=[\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Life\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Intended Audience :: Science/Research\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 3 - Alpha\",\n \"Natural Language :: English\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n mesa=mesa.main:cli\n \"\"\",\n python_requires=\">=3.7\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport re\nimport os\nimport urllib.request\nimport zipfile\nimport shutil\n\nfrom setuptools import setup, find_packages\nfrom codecs import open\n\nrequires = [\"click\", \"cookiecutter\", \"networkx\", \"numpy\", \"pandas\", \"tornado\", \"tqdm\"]\n\nextras_require = {\n \"dev\": [\"black\", \"coverage\", 
\"flake8\", \"pytest >= 4.6\", \"pytest-cov\", \"sphinx\"],\n \"docs\": [\"sphinx\", \"ipython\"],\n}\n\nversion = \"\"\nwith open(\"mesa/__init__.py\") as fd:\n version = re.search(\n r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]', fd.read(), re.MULTILINE\n ).group(1)\n\nwith open(\"README.rst\", \"rb\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n# Ensure JS dependencies are downloaded\nexternal_dir = \"mesa/visualization/templates/external\"\n# We use a different path for single-file JS because some of them are loaded\n# the same way as Mesa JS files\nexternal_dir_single = \"mesa/visualization/templates/js/external\"\n# First, ensure that the external directories exists\nos.makedirs(external_dir, exist_ok=True)\nos.makedirs(external_dir_single, exist_ok=True)\n\n\ndef ensure_JS_dep(dirname, url):\n dst_path = os.path.join(external_dir, dirname)\n if os.path.isdir(dst_path):\n # Do nothing if already downloaded\n return\n print(f\"Downloading the {dirname} dependency from the internet...\")\n zip_file = dirname + \".zip\"\n urllib.request.urlretrieve(url, zip_file)\n with zipfile.ZipFile(zip_file, \"r\") as zip_ref:\n zip_ref.extractall()\n shutil.move(dirname, dst_path)\n # Cleanup\n os.remove(zip_file)\n print(\"Done\")\n\n\ndef ensure_JS_dep_single(url, out_name=None):\n # Used for downloading e.g. D3.js single file\n if out_name is None:\n out_name = url.split(\"/\")[-1]\n dst_path = os.path.join(external_dir_single, out_name)\n if os.path.isfile(dst_path):\n return\n print(f\"Downloading the {out_name} dependency from the internet...\")\n urllib.request.urlretrieve(url, out_name)\n shutil.move(out_name, dst_path)\n\n\n# Important: when you update JS dependency version, make sure to also update the\n# hardcoded included files and versions in: mesa/visualization/templates/modular_template.html\n\n# Ensure Bootstrap\nbootstrap_version = \"5.1.3\"\nensure_JS_dep(\n f\"bootstrap-{bootstrap_version}-dist\",\n f\"https://github.com/twbs/bootstrap/releases/download/v{bootstrap_version}/bootstrap-{bootstrap_version}-dist.zip\",\n)\n\n# Ensure Bootstrap Slider\nbootstrap_slider_version = \"11.0.2\"\nensure_JS_dep(\n f\"bootstrap-slider-{bootstrap_slider_version}\",\n f\"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip\",\n)\n\n# Important: when updating the D3 version, make sure to update the constant\n# D3_JS_FILE in mesa/visualization/ModularVisualization.py.\nd3_version = \"7.4.3\"\nensure_JS_dep_single(\n f\"https://cdnjs.cloudflare.com/ajax/libs/d3/{d3_version}/d3.min.js\",\n out_name=f\"d3-{d3_version}.min.js\",\n)\n# Important: Make sure to update CHART_JS_FILE in\n# mesa/visualization/ModularVisualization.py.\nchartjs_version = \"3.6.1\"\nensure_JS_dep_single(\n f\"https://cdn.jsdelivr.net/npm/chart.js@{chartjs_version}/dist/chart.min.js\",\n out_name=f\"chart-{chartjs_version}.min.js\",\n)\n\n\nsetup(\n name=\"Mesa\",\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author=\"Project Mesa Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/projectmesa/mesa\",\n packages=find_packages(),\n package_data={\n \"mesa\": [\n \"visualization/templates/*.html\",\n \"visualization/templates/css/*\",\n \"visualization/templates/js/*\",\n \"visualization/templates/external/**/*\",\n ],\n \"cookiecutter-mesa\": [\"cookiecutter-mesa/*\"],\n },\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords=\"agent based modeling 
model ABM simulation multi-agent\",\n license=\"Apache 2.0\",\n zip_safe=False,\n classifiers=[\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Life\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Intended Audience :: Science/Research\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 3 - Alpha\",\n \"Natural Language :: English\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n mesa=mesa.main:cli\n \"\"\",\n python_requires=\">=3.7\",\n)\n", "path": "setup.py"}]}
| 1,815 | 220 |
gh_patches_debug_22620
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-1582
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Build fails with IPython 3.0
Trying to use IPython notebooks with the current dev version of IPython (3.0.0) fails to build, with some warnings, because the `nbformat` interface has changed a little:
```
...WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:19: IPython.nbformat.current is deprecated.
- use IPython.nbformat for read/write/validate public API
- use IPython.nbformat.vX directly to composing notebooks of a particular version
...
... WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:75: reads_json is deprecated, use reads
...
AttributeError: cells
```
This is fairly easily fixed and I will send a PR shortly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/compile/ipynb/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2013-2015 Damián Avila and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Implementation of compile_html based on nbconvert."""
28
29 from __future__ import unicode_literals, print_function
30 import io
31 import os
32
33 try:
34 from IPython.nbconvert.exporters import HTMLExporter
35 from IPython.nbformat import current as nbformat
36 from IPython.config import Config
37 flag = True
38 except ImportError:
39 flag = None
40
41 from nikola.plugin_categories import PageCompiler
42 from nikola.utils import makedirs, req_missing
43
44
45 class CompileIPynb(PageCompiler):
46 """Compile IPynb into HTML."""
47
48 name = "ipynb"
49 supports_onefile = False
50 demote_headers = True
51
52 def compile_html(self, source, dest, is_two_file=True):
53 if flag is None:
54 req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')
55 makedirs(os.path.dirname(dest))
56 HTMLExporter.default_template = 'basic'
57 c = Config(self.site.config['IPYNB_CONFIG'])
58 exportHtml = HTMLExporter(config=c)
59 with io.open(dest, "w+", encoding="utf8") as out_file:
60 with io.open(source, "r", encoding="utf8") as in_file:
61 nb = in_file.read()
62 nb_json = nbformat.reads_json(nb)
63 (body, resources) = exportHtml.from_notebook_node(nb_json)
64 out_file.write(body)
65
66 def create_post(self, path, **kw):
67 content = kw.pop('content', None)
68 onefile = kw.pop('onefile', False)
69 # is_page is not needed to create the file
70 kw.pop('is_page', False)
71
72 makedirs(os.path.dirname(path))
73 if onefile:
74 raise Exception('The one-file format is not supported by this compiler.')
75 with io.open(path, "w+", encoding="utf8") as fd:
76 if not content.startswith("Write your"):
77 fd.write(content)
78 else:
79 fd.write("""{
80 "metadata": {
81 "name": ""
82 },
83 "nbformat": 3,
84 "nbformat_minor": 0,
85 "worksheets": [
86 {
87 "cells": [
88 {
89 "cell_type": "code",
90 "collapsed": false,
91 "input": [],
92 "language": "python",
93 "metadata": {},
94 "outputs": []
95 }
96 ],
97 "metadata": {}
98 }
99 ]
100 }""")
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nikola/plugins/compile/ipynb/__init__.py b/nikola/plugins/compile/ipynb/__init__.py
--- a/nikola/plugins/compile/ipynb/__init__.py
+++ b/nikola/plugins/compile/ipynb/__init__.py
@@ -31,8 +31,15 @@
import os
try:
+ import IPython
from IPython.nbconvert.exporters import HTMLExporter
- from IPython.nbformat import current as nbformat
+ if IPython.version_info[0] >= 3: # API changed with 3.0.0
+ from IPython import nbformat
+ current_nbformat = nbformat.current_nbformat
+ else:
+ import IPython.nbformat.current as nbformat
+ current_nbformat = 'json'
+
from IPython.config import Config
flag = True
except ImportError:
@@ -58,8 +65,7 @@
exportHtml = HTMLExporter(config=c)
with io.open(dest, "w+", encoding="utf8") as out_file:
with io.open(source, "r", encoding="utf8") as in_file:
- nb = in_file.read()
- nb_json = nbformat.reads_json(nb)
+ nb_json = nbformat.read(in_file, current_nbformat)
(body, resources) = exportHtml.from_notebook_node(nb_json)
out_file.write(body)
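
For illustration, here is a minimal standalone sketch of the same version check the patch introduces, reading a notebook under both the IPython 2.x and 3.x `nbformat` APIs (the notebook path is an assumed placeholder, not a file from the repository):

```python
import io

import IPython

if IPython.version_info[0] >= 3:  # nbformat API moved in IPython 3.0
    from IPython import nbformat
    current_nbformat = nbformat.current_nbformat
else:
    import IPython.nbformat.current as nbformat
    current_nbformat = 'json'

# "example.ipynb" is an assumed placeholder path.
with io.open("example.ipynb", "r", encoding="utf8") as in_file:
    nb_json = nbformat.read(in_file, current_nbformat)
# nb_json can then be handed to nbconvert's HTMLExporter as before.
```

Since only the read call differs between the two APIs, the rest of the nbconvert pipeline (`HTMLExporter.from_notebook_node`) stays unchanged.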
|
{"golden_diff": "diff --git a/nikola/plugins/compile/ipynb/__init__.py b/nikola/plugins/compile/ipynb/__init__.py\n--- a/nikola/plugins/compile/ipynb/__init__.py\n+++ b/nikola/plugins/compile/ipynb/__init__.py\n@@ -31,8 +31,15 @@\n import os\n \n try:\n+ import IPython\n from IPython.nbconvert.exporters import HTMLExporter\n- from IPython.nbformat import current as nbformat\n+ if IPython.version_info[0] >= 3: # API changed with 3.0.0\n+ from IPython import nbformat\n+ current_nbformat = nbformat.current_nbformat\n+ else:\n+ import IPython.nbformat.current as nbformat\n+ current_nbformat = 'json'\n+\n from IPython.config import Config\n flag = True\n except ImportError:\n@@ -58,8 +65,7 @@\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n- nb = in_file.read()\n- nb_json = nbformat.reads_json(nb)\n+ nb_json = nbformat.read(in_file, current_nbformat)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n", "issue": "Build fails with IPython 3.0\nTrying to use ipython notebooks with the current dev version of IPython (3.0.0) fails building with some warnings etc. because the `nbformat` interface has changed a little:\n\n```\n...WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:19: IPython.nbformat.current is deprecated.\n\n- use IPython.nbformat for read/write/validate public API\n- use IPython.nbformat.vX directly to composing notebooks of a particular version\n...\n... WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:75: reads_json is deprecated, use reads\n...\nAttributeError: cells\n```\n\nThis is fairly easily fixed and I will send a PR shortly.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2013-2015 Dami\u00e1n Avila and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Implementation of compile_html based on nbconvert.\"\"\"\n\nfrom __future__ import unicode_literals, print_function\nimport io\nimport os\n\ntry:\n from IPython.nbconvert.exporters import HTMLExporter\n from IPython.nbformat import current as nbformat\n from IPython.config import Config\n flag = True\nexcept ImportError:\n flag = None\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import makedirs, req_missing\n\n\nclass CompileIPynb(PageCompiler):\n \"\"\"Compile IPynb into HTML.\"\"\"\n\n name = \"ipynb\"\n supports_onefile = False\n demote_headers = True\n\n def compile_html(self, source, dest, is_two_file=True):\n if flag is None:\n req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')\n makedirs(os.path.dirname(dest))\n HTMLExporter.default_template = 'basic'\n c = Config(self.site.config['IPYNB_CONFIG'])\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n nb = in_file.read()\n nb_json = nbformat.reads_json(nb)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n\n def create_post(self, path, **kw):\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not needed to create the file\n kw.pop('is_page', False)\n\n makedirs(os.path.dirname(path))\n if onefile:\n raise Exception('The one-file format is not supported by this compiler.')\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if not content.startswith(\"Write your\"):\n fd.write(content)\n else:\n fd.write(\"\"\"{\n \"metadata\": {\n \"name\": \"\"\n },\n \"nbformat\": 3,\n \"nbformat_minor\": 0,\n \"worksheets\": [\n {\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"collapsed\": false,\n \"input\": [],\n \"language\": \"python\",\n \"metadata\": {},\n \"outputs\": []\n }\n ],\n \"metadata\": {}\n }\n ]\n}\"\"\")\n", "path": "nikola/plugins/compile/ipynb/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2013-2015 Dami\u00e1n Avila and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Implementation of compile_html based on nbconvert.\"\"\"\n\nfrom __future__ import unicode_literals, print_function\nimport io\nimport os\n\ntry:\n import IPython\n from IPython.nbconvert.exporters import HTMLExporter\n if IPython.version_info[0] >= 3: # API changed with 3.0.0\n from IPython import nbformat\n current_nbformat = nbformat.current_nbformat\n else:\n import IPython.nbformat.current as nbformat\n current_nbformat = 'json'\n\n from IPython.config import Config\n flag = True\nexcept ImportError:\n flag = None\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import makedirs, req_missing\n\n\nclass CompileIPynb(PageCompiler):\n \"\"\"Compile IPynb into HTML.\"\"\"\n\n name = \"ipynb\"\n supports_onefile = False\n demote_headers = True\n\n def compile_html(self, source, dest, is_two_file=True):\n if flag is None:\n req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')\n makedirs(os.path.dirname(dest))\n HTMLExporter.default_template = 'basic'\n c = Config(self.site.config['IPYNB_CONFIG'])\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n nb_json = nbformat.read(in_file, current_nbformat)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n\n def create_post(self, path, **kw):\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not needed to create the file\n kw.pop('is_page', False)\n\n makedirs(os.path.dirname(path))\n if onefile:\n raise Exception('The one-file format is not supported by this compiler.')\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if not content.startswith(\"Write your\"):\n fd.write(content)\n else:\n fd.write(\"\"\"{\n \"metadata\": {\n \"name\": \"\"\n },\n \"nbformat\": 3,\n \"nbformat_minor\": 0,\n \"worksheets\": [\n {\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"collapsed\": false,\n \"input\": [],\n \"language\": \"python\",\n \"metadata\": {},\n \"outputs\": []\n }\n ],\n \"metadata\": {}\n }\n ]\n}\"\"\")\n", "path": "nikola/plugins/compile/ipynb/__init__.py"}]}
| 1,405 | 316 |
gh_patches_debug_10289
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-5661
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Include crawl date in data
I'm looking at an old output directory, trying to work out which release it is.
I think we could add the crawl time and/or build id to the dataset attributes easily.
I think @rjw62 asked for this before. Which I promptly forgot. Sorry.
I'll look at this later or Monday.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/exporters/geojson.py`
Content:
```
1 import base64
2 import hashlib
3 import io
4 import json
5 import logging
6 import uuid
7
8 from scrapy.exporters import JsonItemExporter
9 from scrapy.utils.misc import walk_modules
10 from scrapy.utils.python import to_bytes
11 from scrapy.utils.spider import iter_spider_classes
12
13 from locations.settings import SPIDER_MODULES
14
15 mapping = (
16 ("addr_full", "addr:full"),
17 ("housenumber", "addr:housenumber"),
18 ("street", "addr:street"),
19 ("street_address", "addr:street_address"),
20 ("city", "addr:city"),
21 ("state", "addr:state"),
22 ("postcode", "addr:postcode"),
23 ("country", "addr:country"),
24 ("name", "name"),
25 ("phone", "phone"),
26 ("website", "website"),
27 ("twitter", "contact:twitter"),
28 ("facebook", "contact:facebook"),
29 ("email", "contact:email"),
30 ("opening_hours", "opening_hours"),
31 ("image", "image"),
32 ("brand", "brand"),
33 ("brand_wikidata", "brand:wikidata"),
34 ("located_in", "located_in"),
35 ("located_in_wikidata", "located_in:wikidata"),
36 ("nsi_id", "nsi_id"),
37 )
38
39
40 def item_to_properties(item):
41 props = {}
42
43 # Ref is required, unless `no_refs = True` is set in spider
44 if ref := item.get("ref"):
45 props["ref"] = str(ref)
46
47 # Add in the extra bits
48 if extras := item.get("extras"):
49 for key, value in extras.items():
50 if value:
51 # Only export populated values
52 props[key] = value
53
54 # Bring in the optional stuff
55 for map_from, map_to in mapping:
56 if item_value := item.get(map_from):
57 props[map_to] = item_value
58
59 return props
60
61
62 def compute_hash(item):
63 ref = str(item.get("ref") or uuid.uuid1()).encode("utf8")
64 sha1 = hashlib.sha1(ref)
65
66 if spider_name := item.get("extras", {}).get("@spider"):
67 sha1.update(spider_name.encode("utf8"))
68
69 return base64.urlsafe_b64encode(sha1.digest()).decode("utf8")
70
71
72 def find_spider_class(spider_name):
73 if not spider_name:
74 return None
75 for mod in SPIDER_MODULES:
76 for module in walk_modules(mod):
77 for spider_class in iter_spider_classes(module):
78 if spider_name == spider_class.name:
79 return spider_class
80 return None
81
82
83 def get_dataset_attributes(spider_name) -> {}:
84 spider_class = find_spider_class(spider_name)
85 dataset_attributes = getattr(spider_class, "dataset_attributes", {})
86 settings = getattr(spider_class, "custom_settings", {}) or {}
87 if not settings.get("ROBOTSTXT_OBEY", True):
88 # See https://github.com/alltheplaces/alltheplaces/issues/4537
89 dataset_attributes["spider:robots_txt"] = "ignored"
90 dataset_attributes["@spider"] = spider_name
91
92 return dataset_attributes
93
94
95 class GeoJsonExporter(JsonItemExporter):
96 def __init__(self, file, **kwargs):
97 super().__init__(file, **kwargs)
98 self.spider_name = None
99
100 def start_exporting(self):
101 pass
102
103 def export_item(self, item):
104 spider_name = item.get("extras", {}).get("@spider")
105 if self.first_item:
106 self.spider_name = spider_name
107 self.write_geojson_header()
108 if spider_name != self.spider_name:
109 # It really should not happen that a single exporter instance
110 # handles output from different spiders. If it does happen,
111 # we rather crash than emit GeoJSON with the wrong dataset
112 # properties, which may include legally relevant license tags.
113 raise ValueError(
114 f"harvest from multiple spiders ({spider_name, self.spider_name}) cannot be written to same GeoJSON file"
115 )
116
117 super().export_item(item)
118
119 def _get_serialized_fields(self, item, default_value=None, include_empty=None):
120 feature = []
121 feature.append(("type", "Feature"))
122 feature.append(("id", compute_hash(item)))
123 feature.append(("properties", item_to_properties(item)))
124
125 lat = item.get("lat")
126 lon = item.get("lon")
127 geometry = item.get("geometry")
128 if lat and lon and not geometry:
129 try:
130 geometry = {
131 "type": "Point",
132 "coordinates": [float(item["lon"]), float(item["lat"])],
133 }
134 except ValueError:
135 logging.warning("Couldn't convert lat (%s) and lon (%s) to float", lat, lon)
136 feature.append(("geometry", geometry))
137
138 return feature
139
140 def write_geojson_header(self):
141 header = io.StringIO()
142 header.write('{"type":"FeatureCollection","dataset_attributes":')
143 json.dump(
144 get_dataset_attributes(self.spider_name), header, ensure_ascii=False, separators=(",", ":"), sort_keys=True
145 )
146 header.write(',"features":[\n')
147 self.file.write(to_bytes(header.getvalue(), self.encoding))
148
149 def finish_exporting(self):
150 self.file.write(b"\n]}\n")
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/exporters/geojson.py b/locations/exporters/geojson.py
--- a/locations/exporters/geojson.py
+++ b/locations/exporters/geojson.py
@@ -1,4 +1,5 @@
import base64
+import datetime
import hashlib
import io
import json
@@ -88,6 +89,7 @@
# See https://github.com/alltheplaces/alltheplaces/issues/4537
dataset_attributes["spider:robots_txt"] = "ignored"
dataset_attributes["@spider"] = spider_name
+ dataset_attributes["spider:collection_time"] = datetime.datetime.now().isoformat()
return dataset_attributes
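
As a quick, self-contained sketch of what the new attribute carries (the spider name is a made-up example):

```python
import datetime

dataset_attributes = {"@spider": "example_spider"}
dataset_attributes["spider:collection_time"] = datetime.datetime.now().isoformat()

print(dataset_attributes)
# e.g. {'@spider': 'example_spider', 'spider:collection_time': '2024-01-15T09:30:12.345678'}
```

The ISO-8601 string makes it possible to tell from the GeoJSON header alone when a given output was crawled.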
|
{"golden_diff": "diff --git a/locations/exporters/geojson.py b/locations/exporters/geojson.py\n--- a/locations/exporters/geojson.py\n+++ b/locations/exporters/geojson.py\n@@ -1,4 +1,5 @@\n import base64\n+import datetime\n import hashlib\n import io\n import json\n@@ -88,6 +89,7 @@\n # See https://github.com/alltheplaces/alltheplaces/issues/4537\n dataset_attributes[\"spider:robots_txt\"] = \"ignored\"\n dataset_attributes[\"@spider\"] = spider_name\n+ dataset_attributes[\"spider:collection_time\"] = datetime.datetime.now().isoformat()\n \n return dataset_attributes\n", "issue": "Include crawl date in data\nI'm looking at an old output directory, trying to workout which release it is.\r\n\r\nI think we could add the crawl time and/or build id to the dataset attributes easily.\r\n\r\nI think @rjw62 asked for this before. Which I promptly forgot. Sorry.\r\n\r\nI'll look at this later or Monday.\n", "before_files": [{"content": "import base64\nimport hashlib\nimport io\nimport json\nimport logging\nimport uuid\n\nfrom scrapy.exporters import JsonItemExporter\nfrom scrapy.utils.misc import walk_modules\nfrom scrapy.utils.python import to_bytes\nfrom scrapy.utils.spider import iter_spider_classes\n\nfrom locations.settings import SPIDER_MODULES\n\nmapping = (\n (\"addr_full\", \"addr:full\"),\n (\"housenumber\", \"addr:housenumber\"),\n (\"street\", \"addr:street\"),\n (\"street_address\", \"addr:street_address\"),\n (\"city\", \"addr:city\"),\n (\"state\", \"addr:state\"),\n (\"postcode\", \"addr:postcode\"),\n (\"country\", \"addr:country\"),\n (\"name\", \"name\"),\n (\"phone\", \"phone\"),\n (\"website\", \"website\"),\n (\"twitter\", \"contact:twitter\"),\n (\"facebook\", \"contact:facebook\"),\n (\"email\", \"contact:email\"),\n (\"opening_hours\", \"opening_hours\"),\n (\"image\", \"image\"),\n (\"brand\", \"brand\"),\n (\"brand_wikidata\", \"brand:wikidata\"),\n (\"located_in\", \"located_in\"),\n (\"located_in_wikidata\", \"located_in:wikidata\"),\n (\"nsi_id\", \"nsi_id\"),\n)\n\n\ndef item_to_properties(item):\n props = {}\n\n # Ref is required, unless `no_refs = True` is set in spider\n if ref := item.get(\"ref\"):\n props[\"ref\"] = str(ref)\n\n # Add in the extra bits\n if extras := item.get(\"extras\"):\n for key, value in extras.items():\n if value:\n # Only export populated values\n props[key] = value\n\n # Bring in the optional stuff\n for map_from, map_to in mapping:\n if item_value := item.get(map_from):\n props[map_to] = item_value\n\n return props\n\n\ndef compute_hash(item):\n ref = str(item.get(\"ref\") or uuid.uuid1()).encode(\"utf8\")\n sha1 = hashlib.sha1(ref)\n\n if spider_name := item.get(\"extras\", {}).get(\"@spider\"):\n sha1.update(spider_name.encode(\"utf8\"))\n\n return base64.urlsafe_b64encode(sha1.digest()).decode(\"utf8\")\n\n\ndef find_spider_class(spider_name):\n if not spider_name:\n return None\n for mod in SPIDER_MODULES:\n for module in walk_modules(mod):\n for spider_class in iter_spider_classes(module):\n if spider_name == spider_class.name:\n return spider_class\n return None\n\n\ndef get_dataset_attributes(spider_name) -> {}:\n spider_class = find_spider_class(spider_name)\n dataset_attributes = getattr(spider_class, \"dataset_attributes\", {})\n settings = getattr(spider_class, \"custom_settings\", {}) or {}\n if not settings.get(\"ROBOTSTXT_OBEY\", True):\n # See https://github.com/alltheplaces/alltheplaces/issues/4537\n dataset_attributes[\"spider:robots_txt\"] = \"ignored\"\n dataset_attributes[\"@spider\"] = spider_name\n\n return 
dataset_attributes\n\n\nclass GeoJsonExporter(JsonItemExporter):\n def __init__(self, file, **kwargs):\n super().__init__(file, **kwargs)\n self.spider_name = None\n\n def start_exporting(self):\n pass\n\n def export_item(self, item):\n spider_name = item.get(\"extras\", {}).get(\"@spider\")\n if self.first_item:\n self.spider_name = spider_name\n self.write_geojson_header()\n if spider_name != self.spider_name:\n # It really should not happen that a single exporter instance\n # handles output from different spiders. If it does happen,\n # we rather crash than emit GeoJSON with the wrong dataset\n # properties, which may include legally relevant license tags.\n raise ValueError(\n f\"harvest from multiple spiders ({spider_name, self.spider_name}) cannot be written to same GeoJSON file\"\n )\n\n super().export_item(item)\n\n def _get_serialized_fields(self, item, default_value=None, include_empty=None):\n feature = []\n feature.append((\"type\", \"Feature\"))\n feature.append((\"id\", compute_hash(item)))\n feature.append((\"properties\", item_to_properties(item)))\n\n lat = item.get(\"lat\")\n lon = item.get(\"lon\")\n geometry = item.get(\"geometry\")\n if lat and lon and not geometry:\n try:\n geometry = {\n \"type\": \"Point\",\n \"coordinates\": [float(item[\"lon\"]), float(item[\"lat\"])],\n }\n except ValueError:\n logging.warning(\"Couldn't convert lat (%s) and lon (%s) to float\", lat, lon)\n feature.append((\"geometry\", geometry))\n\n return feature\n\n def write_geojson_header(self):\n header = io.StringIO()\n header.write('{\"type\":\"FeatureCollection\",\"dataset_attributes\":')\n json.dump(\n get_dataset_attributes(self.spider_name), header, ensure_ascii=False, separators=(\",\", \":\"), sort_keys=True\n )\n header.write(',\"features\":[\\n')\n self.file.write(to_bytes(header.getvalue(), self.encoding))\n\n def finish_exporting(self):\n self.file.write(b\"\\n]}\\n\")\n", "path": "locations/exporters/geojson.py"}], "after_files": [{"content": "import base64\nimport datetime\nimport hashlib\nimport io\nimport json\nimport logging\nimport uuid\n\nfrom scrapy.exporters import JsonItemExporter\nfrom scrapy.utils.misc import walk_modules\nfrom scrapy.utils.python import to_bytes\nfrom scrapy.utils.spider import iter_spider_classes\n\nfrom locations.settings import SPIDER_MODULES\n\nmapping = (\n (\"addr_full\", \"addr:full\"),\n (\"housenumber\", \"addr:housenumber\"),\n (\"street\", \"addr:street\"),\n (\"street_address\", \"addr:street_address\"),\n (\"city\", \"addr:city\"),\n (\"state\", \"addr:state\"),\n (\"postcode\", \"addr:postcode\"),\n (\"country\", \"addr:country\"),\n (\"name\", \"name\"),\n (\"phone\", \"phone\"),\n (\"website\", \"website\"),\n (\"twitter\", \"contact:twitter\"),\n (\"facebook\", \"contact:facebook\"),\n (\"email\", \"contact:email\"),\n (\"opening_hours\", \"opening_hours\"),\n (\"image\", \"image\"),\n (\"brand\", \"brand\"),\n (\"brand_wikidata\", \"brand:wikidata\"),\n (\"located_in\", \"located_in\"),\n (\"located_in_wikidata\", \"located_in:wikidata\"),\n (\"nsi_id\", \"nsi_id\"),\n)\n\n\ndef item_to_properties(item):\n props = {}\n\n # Ref is required, unless `no_refs = True` is set in spider\n if ref := item.get(\"ref\"):\n props[\"ref\"] = str(ref)\n\n # Add in the extra bits\n if extras := item.get(\"extras\"):\n for key, value in extras.items():\n if value:\n # Only export populated values\n props[key] = value\n\n # Bring in the optional stuff\n for map_from, map_to in mapping:\n if item_value := item.get(map_from):\n props[map_to] = 
item_value\n\n return props\n\n\ndef compute_hash(item):\n ref = str(item.get(\"ref\") or uuid.uuid1()).encode(\"utf8\")\n sha1 = hashlib.sha1(ref)\n\n if spider_name := item.get(\"extras\", {}).get(\"@spider\"):\n sha1.update(spider_name.encode(\"utf8\"))\n\n return base64.urlsafe_b64encode(sha1.digest()).decode(\"utf8\")\n\n\ndef find_spider_class(spider_name):\n if not spider_name:\n return None\n for mod in SPIDER_MODULES:\n for module in walk_modules(mod):\n for spider_class in iter_spider_classes(module):\n if spider_name == spider_class.name:\n return spider_class\n return None\n\n\ndef get_dataset_attributes(spider_name) -> {}:\n spider_class = find_spider_class(spider_name)\n dataset_attributes = getattr(spider_class, \"dataset_attributes\", {})\n settings = getattr(spider_class, \"custom_settings\", {}) or {}\n if not settings.get(\"ROBOTSTXT_OBEY\", True):\n # See https://github.com/alltheplaces/alltheplaces/issues/4537\n dataset_attributes[\"spider:robots_txt\"] = \"ignored\"\n dataset_attributes[\"@spider\"] = spider_name\n dataset_attributes[\"spider:collection_time\"] = datetime.datetime.now().isoformat()\n\n return dataset_attributes\n\n\nclass GeoJsonExporter(JsonItemExporter):\n def __init__(self, file, **kwargs):\n super().__init__(file, **kwargs)\n self.spider_name = None\n\n def start_exporting(self):\n pass\n\n def export_item(self, item):\n spider_name = item.get(\"extras\", {}).get(\"@spider\")\n if self.first_item:\n self.spider_name = spider_name\n self.write_geojson_header()\n if spider_name != self.spider_name:\n # It really should not happen that a single exporter instance\n # handles output from different spiders. If it does happen,\n # we rather crash than emit GeoJSON with the wrong dataset\n # properties, which may include legally relevant license tags.\n raise ValueError(\n f\"harvest from multiple spiders ({spider_name, self.spider_name}) cannot be written to same GeoJSON file\"\n )\n\n super().export_item(item)\n\n def _get_serialized_fields(self, item, default_value=None, include_empty=None):\n feature = []\n feature.append((\"type\", \"Feature\"))\n feature.append((\"id\", compute_hash(item)))\n feature.append((\"properties\", item_to_properties(item)))\n\n lat = item.get(\"lat\")\n lon = item.get(\"lon\")\n geometry = item.get(\"geometry\")\n if lat and lon and not geometry:\n try:\n geometry = {\n \"type\": \"Point\",\n \"coordinates\": [float(item[\"lon\"]), float(item[\"lat\"])],\n }\n except ValueError:\n logging.warning(\"Couldn't convert lat (%s) and lon (%s) to float\", lat, lon)\n feature.append((\"geometry\", geometry))\n\n return feature\n\n def write_geojson_header(self):\n header = io.StringIO()\n header.write('{\"type\":\"FeatureCollection\",\"dataset_attributes\":')\n json.dump(\n get_dataset_attributes(self.spider_name), header, ensure_ascii=False, separators=(\",\", \":\"), sort_keys=True\n )\n header.write(',\"features\":[\\n')\n self.file.write(to_bytes(header.getvalue(), self.encoding))\n\n def finish_exporting(self):\n self.file.write(b\"\\n]}\\n\")\n", "path": "locations/exporters/geojson.py"}]}
| 1,818 | 154 |
gh_patches_debug_11428
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-11825
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Unable to update Warehouse address
### What are you trying to achieve?
I'm trying to update the warehouse address with the country set to "UK". According to the addressValidationRules query, the required fields are
```
streetAddress1",
"city",
"postalCode"
```
### Steps to reproduce the problem
1. In shipping zone update/creating a new on select country UK
2. Fill all fields with the necessary information
3. Try to save changes
### What did you expect to happen?
Being able to update the warehouse address properly.
### Logs
Api responds with error -> Error code REQUIRED on field countryAreaAPI
### Environment
Saleor version: 3.10
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/account/forms.py`
Content:
```
1 from phonenumbers.phonenumberutil import country_code_for_region
2
3 from .i18n import AddressMetaForm, get_address_form_class
4
5
6 def get_address_form(
7 data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs
8 ):
9 country_form = AddressMetaForm(data, initial=initial)
10 if country_form.is_valid():
11 country_code = country_form.cleaned_data["country"]
12
13 if initial is None and country_code:
14 initial = {}
15 if country_code:
16 initial["phone"] = "+{}".format(country_code_for_region(country_code))
17
18 address_form_class = get_address_form_class(country_code)
19
20 if instance is not None:
21 address_form_class = get_address_form_class(instance.country.code)
22 address_form = address_form_class(
23 data, instance=instance, enable_normalization=enable_normalization, **kwargs
24 )
25 else:
26 initial_address = initial
27 address_form = address_form_class(
28 data or None,
29 initial=initial_address,
30 enable_normalization=enable_normalization,
31 **kwargs,
32 )
33
34 if hasattr(address_form.fields["country_area"], "choices"):
35 choices = address_form.fields["country_area"].choices
36 choices = [(choice[1], choice[1]) for choice in choices]
37 address_form.fields["country_area"].choices = choices
38 return address_form
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/account/forms.py b/saleor/account/forms.py
--- a/saleor/account/forms.py
+++ b/saleor/account/forms.py
@@ -14,11 +14,9 @@
initial = {}
if country_code:
initial["phone"] = "+{}".format(country_code_for_region(country_code))
-
address_form_class = get_address_form_class(country_code)
if instance is not None:
- address_form_class = get_address_form_class(instance.country.code)
address_form = address_form_class(
data, instance=instance, enable_normalization=enable_normalization, **kwargs
)
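
A toy, self-contained sketch of the failure mode; the rule table is an assumption for illustration only, not Saleor's real validation data:

```python
# Hypothetical per-country required address fields, loosely based on the issue report.
REQUIRED_FIELDS = {
    "GB": {"street_address_1", "city", "postal_code"},
    "US": {"street_address_1", "city", "postal_code", "country_area"},
}


def required_fields(submitted_country, stored_country, use_stored=False):
    # use_stored=True mimics the pre-patch behaviour: rules follow the stored instance.
    country = stored_country if use_stored else submitted_country
    return REQUIRED_FIELDS[country]


# Editing a warehouse previously saved with a US address while submitting a UK one:
print(required_fields("GB", "US", use_stored=True))   # pre-patch: country_area is demanded
print(required_fields("GB", "US", use_stored=False))  # post-patch: UK rules only
```

After the patch, `get_address_form` keeps the form class chosen from the submitted country code instead of rebuilding it from `instance.country.code`.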
|
{"golden_diff": "diff --git a/saleor/account/forms.py b/saleor/account/forms.py\n--- a/saleor/account/forms.py\n+++ b/saleor/account/forms.py\n@@ -14,11 +14,9 @@\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n-\n address_form_class = get_address_form_class(country_code)\n \n if instance is not None:\n- address_form_class = get_address_form_class(instance.country.code)\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n", "issue": "Bug: Unable to update Warehouse address\n### What are you trying to achieve?\n\nI'm trying to update the warehouse update, with the country set to \"UK\", according to addressValidationRules query, the required fields are \r\n```\r\nstreetAddress1\",\r\n\"city\",\r\n\"postalCode\"\r\n```\n\n### Steps to reproduce the problem\n\n1. In shipping zone update/creating a new on select country UK\r\n2. Fill all fields with the necessary information\r\n3. Try to save changes\n\n### What did you expect to happen?\n\nBeing able to update the warehouse address properly.\n\n### Logs\n\nApi responds with error -> Error code REQUIRED on field countryAreaAPI\n\n### Environment\n\nSaleor version: 3.10\r\n\n", "before_files": [{"content": "from phonenumbers.phonenumberutil import country_code_for_region\n\nfrom .i18n import AddressMetaForm, get_address_form_class\n\n\ndef get_address_form(\n data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs\n):\n country_form = AddressMetaForm(data, initial=initial)\n if country_form.is_valid():\n country_code = country_form.cleaned_data[\"country\"]\n\n if initial is None and country_code:\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n\n address_form_class = get_address_form_class(country_code)\n\n if instance is not None:\n address_form_class = get_address_form_class(instance.country.code)\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n else:\n initial_address = initial\n address_form = address_form_class(\n data or None,\n initial=initial_address,\n enable_normalization=enable_normalization,\n **kwargs,\n )\n\n if hasattr(address_form.fields[\"country_area\"], \"choices\"):\n choices = address_form.fields[\"country_area\"].choices\n choices = [(choice[1], choice[1]) for choice in choices]\n address_form.fields[\"country_area\"].choices = choices\n return address_form\n", "path": "saleor/account/forms.py"}], "after_files": [{"content": "from phonenumbers.phonenumberutil import country_code_for_region\n\nfrom .i18n import AddressMetaForm, get_address_form_class\n\n\ndef get_address_form(\n data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs\n):\n country_form = AddressMetaForm(data, initial=initial)\n if country_form.is_valid():\n country_code = country_form.cleaned_data[\"country\"]\n\n if initial is None and country_code:\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n address_form_class = get_address_form_class(country_code)\n\n if instance is not None:\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n else:\n initial_address = initial\n address_form = address_form_class(\n data or None,\n initial=initial_address,\n enable_normalization=enable_normalization,\n **kwargs,\n 
)\n\n if hasattr(address_form.fields[\"country_area\"], \"choices\"):\n choices = address_form.fields[\"country_area\"].choices\n choices = [(choice[1], choice[1]) for choice in choices]\n address_form.fields[\"country_area\"].choices = choices\n return address_form\n", "path": "saleor/account/forms.py"}]}
| 764 | 137 |
gh_patches_debug_19127
|
rasdani/github-patches
|
git_diff
|
opendatacube__datacube-core-1279
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing Jupyter Notebook documentation in docs
### Expected behaviour
Hi all, a while ago I updated the docs to include a better intro to querying products and datasets and loading data: https://github.com/opendatacube/datacube-core/pull/1244/files
From what I can tell, those new files are correctly included in the index:
https://github.com/opendatacube/datacube-core/blob/develop/docs/data-access-analysis/index.rst
### Actual behaviour
However, I can't seem to see these in the docs:
https://opendatacube.readthedocs.io/en/latest/data-access-analysis/apis/datacube-class.html

Can anyone see anything obviously wrong? Do we need to do something special to get Jupyter Notebook docs to show up? (the only one that does show correctly is the .rst file in that list)
### Steps to reproduce the behaviour
Visit https://opendatacube.readthedocs.io/en/latest/data-access-analysis/apis/datacube-class.html
### Environment information
* N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/click_utils.py`
Content:
```
1 # This file is part of the Open Data Cube, see https://opendatacube.org for more information
2 #
3 # Copyright (c) 2015-2020 ODC Contributors
4 # SPDX-License-Identifier: Apache-2.0
5 import pkg_resources
6 from docutils.nodes import literal_block, section, title, make_id
7 from sphinx.domains import Domain
8 from docutils.parsers.rst import Directive
9 import importlib
10
11 import click
12
13
14 class ClickHelpDirective(Directive):
15 has_content = True
16 required_arguments = 1
17
18 def run(self):
19 root_cmd = self.arguments[0]
20
21 env = self.state.document.settings.env
22
23 group = find_script_callable_from_env(root_cmd, env)
24
25 return [generate_help_text(group, [root_cmd])]
26
27
28 def find_script_callable_from_env(name, env):
29 commands = env.config.click_utils_commands
30
31 module, function_name = commands[name].split(':')
32 module = importlib.import_module(module)
33 return getattr(module, function_name)
34
35
36 def find_script_callable(name):
37 return list(pkg_resources.iter_entry_points(
38 'console_scripts', name))[0].load()
39
40
41 def generate_help_text(command, prefix):
42 ctx = click.Context(command)
43 help_opts = command.get_help_option(ctx).opts
44 full_cmd = ' '.join(prefix)
45 block = section(None,
46 title(None, full_cmd),
47 ids=[make_id(full_cmd)], names=[full_cmd])
48 if help_opts:
49 h = "$ {} {}\n".format(full_cmd, help_opts[0]) + command.get_help(ctx)
50 block.append(literal_block(None, h, language='console'))
51
52 if isinstance(command, click.core.MultiCommand):
53 for c in command.list_commands(ctx):
54 c = command.resolve_command(ctx, [c])[1]
55 block.append(generate_help_text(c, prefix+[c.name]))
56
57 return block
58
59
60 def make_block(command, opt, content):
61 h = "$ {} {}\n".format(command, opt) + content
62 return section(None,
63 title(None, command),
64 literal_block(None, h, language='console'),
65 ids=[make_id(command)], names=[command])
66
67
68 class DatacubeDomain(Domain):
69 name = 'datacube'
70 label = 'Data Cube'
71 directives = {
72 'click-help': ClickHelpDirective,
73 }
74
75
76 def setup(app):
77 app.add_config_value('click_utils_commands', {}, 'html')
78
79 app.add_domain(DatacubeDomain)
80
```
Path: `docs/conf.py`
Content:
```
1 # This file is part of the Open Data Cube, see https://opendatacube.org for more information
2 #
3 # Copyright (c) 2015-2020 ODC Contributors
4 # SPDX-License-Identifier: Apache-2.0
5 import os
6 import sys
7
8 from bs4 import BeautifulSoup as bs
9
10 # If extensions (or modules to document with autodoc) are in another directory,
11 # add these directories to sys.path here. If the directory is relative to the
12 # documentation root, use os.path.abspath to make it absolute, like shown here.
13 sys.path.insert(0, os.path.abspath('..'))
14 sys.path.insert(0, os.path.abspath('.'))
15 print(sys.path)
16 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
17
18 # -- General configuration ------------------------------------------------
19
20 # If your documentation needs a minimal Sphinx version, state it here.
21 # needs_sphinx = '1.0'
22
23 # Add any Sphinx extension module names here, as strings. They can be
24 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
25 # ones.
26 extensions = [
27 'sphinx.ext.autodoc',
28 'sphinx.ext.autosummary',
29 'sphinx_autodoc_typehints',
30 'sphinx.ext.graphviz',
31 'sphinx.ext.viewcode',
32 'sphinx.ext.intersphinx',
33 'sphinx.ext.extlinks',
34 'sphinx.ext.mathjax',
35 'sphinx_click.ext',
36 'click_utils',
37 'autodocsumm',
38 'sphinx.ext.napoleon'
39 ]
40
41 # Add any paths that contain templates here, relative to this directory.
42 templates_path = ['_templates']
43
44 # The suffix of source filenames.
45 source_suffix = ['.rst', '.md']
46
47 # The master toctree document.
48 master_doc = 'index'
49
50 # General information about the project.
51 project = u'Open Data Cube'
52
53 # The version info for the project you're documenting, acts as replacement for
54 # |version| and |release|, also used in various other places throughout the
55 # built documents.
56 #
57 # The short X.Y version.
58 version = "1.8"
59 # The full version, including alpha/beta/rc tags.
60 # FIXME: obtain real version by running git
61 release = version
62
63 # There are two options for replacing |today|: either, you set today to some
64 # non-false value, then it is used:
65 # today = ''
66 # Else, today_fmt is used as the format for a strftime call.
67 # today_fmt = '%B %d, %Y'
68
69 # List of patterns, relative to source directory, that match files and
70 # directories to ignore when looking for source files.
71 exclude_patterns = ['README.rst']
72
73 # If true, '()' will be appended to :func: etc. cross-reference text.
74 add_function_parentheses = True
75
76 # If true, sectionauthor and moduleauthor directives will be shown in the
77 # output. They are ignored by default.
78 show_authors = False
79
80 # The name of the Pygments (syntax highlighting) style to use.
81 pygments_style = 'friendly'
82
83 autosummary_generate = True
84 autoclass_content = "both"
85
86 autodoc_default_options = {
87 'autosummary': True,
88 'inherited-members': True
89 }
90
91 extlinks = {'issue': ('https://github.com/opendatacube/datacube-core/issues/%s', 'issue '),
92 'pull': ('https://github.com/opendatacube/datacube-core/pulls/%s', 'PR ')}
93
94 intersphinx_mapping = {
95 'python': ('https://docs.python.org/3', None),
96 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None),
97 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
98 'xarray': ('https://xarray.pydata.org/en/stable/', None),
99 }
100
101 graphviz_output_format = 'svg'
102
103 # -- Options for HTML output ----------------------------------------------
104
105 # The theme to use for HTML and HTML Help pages. See the documentation for
106 # a list of builtin themes.
107 if on_rtd:
108 html_theme = 'pydata_sphinx_theme'
109 else:
110 html_theme = 'pydata_sphinx_theme'
111
112 html_theme_options = {
113 "navigation_depth": 1,
114 "show_prev_next": False,
115 "collapse_navigation": True,
116 "use_edit_page_button": True,
117 "footer_items": ["odc-footer"],
118 "page_sidebar_items": [
119 "page-toc",
120 "autoclass_page_toc",
121 "autosummary_page_toc",
122 "edit-this-page"
123 ],
124 "icon_links": [
125 {
126 "name": "GitHub",
127 "url": "https://github.com/opendatacube/datacube-core",
128 "icon": "fab fa-github",
129 },
130 {
131 "name": "Slack",
132 "url": "http://slack.opendatacube.org/",
133 "icon": "fab fa-slack",
134 },
135 ],
136 }
137
138 html_context = {
139 "github_user": "opendatacube",
140 "github_repo": "datacube-core",
141 "github_version": "develop",
142 "doc_path": "docs",
143 }
144
145 html_logo = '_static/odc-logo-horizontal.svg'
146 html_static_path = ['_static']
147
148 # The name of an image file (within the static path) to use as favicon of the
149 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
150 # pixels large.
151 # html_favicon = None
152
153 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
154 # using the given strftime format.
155 html_last_updated_fmt = '%b %d, %Y'
156
157
158 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
159 html_show_sphinx = False
160
161 # Output file base name for HTML help builder.
162 htmlhelp_basename = 'ODCdoc'
163
164 # Grouping the document tree into LaTeX files. List of tuples
165 # (source start file, target name, title,
166 # author, documentclass [howto, manual, or own class]).
167 latex_documents = [
168 ('index', 'ODC.tex', u'Open Data Cube Documentation', 'Open Data Cube', 'manual')
169 ]
170
171 numfig = True
172
173 def custom_page_funcs(app, pagename, templatename, context, doctree):
174
175 def get_autosummary_toc():
176 soup = bs(context["body"], "html.parser")
177
178 class_sections = soup.find(class_='class')
179 if class_sections != None:
180 return ""
181
182 matches = soup.find_all('dl')
183 if matches == None or len(matches) == 0:
184 return ""
185
186 out = {
187 'title': '',
188 'menu_items': []
189 }
190
191 # remove the class dt
192 pyclass = matches.pop(0)
193 pyclass = pyclass.find('dt')
194 if pyclass != None:
195 out['title'] = pyclass.get('id')
196
197 for match in matches:
198 match_dt = match.find('dt')
199 link = match.find(class_="headerlink")
200 if link != None:
201 out['menu_items'].append({
202 'title': match_dt.get('id'),
203 'link': link['href']
204 })
205
206 return out
207
208 def get_class_toc():
209 soup = bs(context["body"], "html.parser")
210
211 class_sections = soup.find_all(class_='autosummary')
212 if class_sections == None or len(class_sections) == 0:
213 return ""
214
215 out = {
216 'title': '',
217 'menu_items': []
218 }
219 class_title = soup.find(class_='class')
220 if class_title == None:
221 return ""
222
223 pyclass = class_title.find('dt')
224 if pyclass != None:
225 out['title'] = pyclass.get('id')
226
227 for section in class_sections:
228 out_section = {
229 'title': '',
230 'menu_items': []
231 }
232 out_section['title'] = section.find_previous_sibling('p').text.replace(':','')
233 matches = section.find_all('tr')
234 for match in matches:
235 link = match.find(class_="internal")
236
237 if link != None:
238 title = link['title']
239 if title != None:
240 title = title.replace(out['title'], '')
241 out_section['menu_items'].append({
242 'title': title,
243 'link': link['href']
244 })
245 if len(out_section['menu_items']) > 0:
246 out['menu_items'].append(out_section)
247
248 # print(out)
249 return out
250
251 context['get_class_toc'] = get_class_toc
252 context['get_autosummary_toc'] = get_autosummary_toc
253
254
255
256 def setup(app):
257 # Fix bug where code isn't being highlighted
258 app.add_css_file('pygments.css')
259 app.add_css_file('custom.css')
260
261 app.connect("html-page-context", custom_page_funcs)
262
263
264 # Clean up generated documentation files that RTD seems to be having trouble with
265 if on_rtd:
266 import shutil
267
268 shutil.rmtree('./dev/generate', ignore_errors=True)
269
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/click_utils.py b/docs/click_utils.py
--- a/docs/click_utils.py
+++ b/docs/click_utils.py
@@ -77,3 +77,8 @@
app.add_config_value('click_utils_commands', {}, 'html')
app.add_domain(DatacubeDomain)
+ return {
+ 'parallel_read_safe': False,
+ 'parallel_write_safe': False,
+ }
+
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -35,6 +35,7 @@
'sphinx_click.ext',
'click_utils',
'autodocsumm',
+ 'nbsphinx',
'sphinx.ext.napoleon'
]
@@ -68,7 +69,7 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
-exclude_patterns = ['README.rst']
+exclude_patterns = ['README.rst', '.condaenv', '.direnv']
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
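
A minimal `docs/conf.py` fragment with the relevant pieces, assuming the `nbsphinx` package is also installed in the docs build environment (e.g. via the docs requirements):

```python
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.autosummary',
    'sphinx_click.ext',
    'click_utils',
    'autodocsumm',
    'nbsphinx',  # source parser so .ipynb pages referenced from the toctree get built
    'sphinx.ext.napoleon',
]

# Keep environment directories out of the source scan.
exclude_patterns = ['README.rst', '.condaenv', '.direnv']
```

The dict returned from `setup()` in `click_utils.py` declares the local extension as not parallel-safe, following the standard Sphinx extension-metadata convention.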
|
{"golden_diff": "diff --git a/docs/click_utils.py b/docs/click_utils.py\n--- a/docs/click_utils.py\n+++ b/docs/click_utils.py\n@@ -77,3 +77,8 @@\n app.add_config_value('click_utils_commands', {}, 'html')\n \n app.add_domain(DatacubeDomain)\n+ return {\n+ 'parallel_read_safe': False,\n+ 'parallel_write_safe': False,\n+ }\n+\ndiff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -35,6 +35,7 @@\n 'sphinx_click.ext',\n 'click_utils',\n 'autodocsumm',\n+ 'nbsphinx',\n 'sphinx.ext.napoleon'\n ]\n \n@@ -68,7 +69,7 @@\n \n # List of patterns, relative to source directory, that match files and\n # directories to ignore when looking for source files.\n-exclude_patterns = ['README.rst']\n+exclude_patterns = ['README.rst', '.condaenv', '.direnv']\n \n # If true, '()' will be appended to :func: etc. cross-reference text.\n add_function_parentheses = True\n", "issue": "Missing Jupyter Notebook documentation in docs\n### Expected behaviour\r\nHi all, a while ago I updated the docs to include a better intro to querying products and datasets and loading data: https://github.com/opendatacube/datacube-core/pull/1244/files\r\n\r\nFrom what I can tell, those new files are correctly included in the index:\r\nhttps://github.com/opendatacube/datacube-core/blob/develop/docs/data-access-analysis/index.rst\r\n\r\n### Actual behaviour\r\nHowever, I can't seem to see these in the docs:\r\nhttps://opendatacube.readthedocs.io/en/latest/data-access-analysis/apis/datacube-class.html\r\n\r\n\r\n\r\n\r\nCan anyone see anything obviously wrong? Do we need to do something special to get Jupyter Notebook docs to show up? (the only one that does show correctly is the .rst file in that list)\r\n\r\n\r\n### Steps to reproduce the behaviour\r\n\r\nVisit https://opendatacube.readthedocs.io/en/latest/data-access-analysis/apis/datacube-class.html\r\n\r\n### Environment information\r\n\r\n* N/A\r\n\r\n\n", "before_files": [{"content": "# This file is part of the Open Data Cube, see https://opendatacube.org for more information\n#\n# Copyright (c) 2015-2020 ODC Contributors\n# SPDX-License-Identifier: Apache-2.0\nimport pkg_resources\nfrom docutils.nodes import literal_block, section, title, make_id\nfrom sphinx.domains import Domain\nfrom docutils.parsers.rst import Directive\nimport importlib\n\nimport click\n\n\nclass ClickHelpDirective(Directive):\n has_content = True\n required_arguments = 1\n\n def run(self):\n root_cmd = self.arguments[0]\n\n env = self.state.document.settings.env\n\n group = find_script_callable_from_env(root_cmd, env)\n\n return [generate_help_text(group, [root_cmd])]\n\n\ndef find_script_callable_from_env(name, env):\n commands = env.config.click_utils_commands\n\n module, function_name = commands[name].split(':')\n module = importlib.import_module(module)\n return getattr(module, function_name)\n\n\ndef find_script_callable(name):\n return list(pkg_resources.iter_entry_points(\n 'console_scripts', name))[0].load()\n\n\ndef generate_help_text(command, prefix):\n ctx = click.Context(command)\n help_opts = command.get_help_option(ctx).opts\n full_cmd = ' '.join(prefix)\n block = section(None,\n title(None, full_cmd),\n ids=[make_id(full_cmd)], names=[full_cmd])\n if help_opts:\n h = \"$ {} {}\\n\".format(full_cmd, help_opts[0]) + command.get_help(ctx)\n block.append(literal_block(None, h, language='console'))\n\n if isinstance(command, click.core.MultiCommand):\n for c in command.list_commands(ctx):\n c = command.resolve_command(ctx, [c])[1]\n block.append(generate_help_text(c, 
prefix+[c.name]))\n\n return block\n\n\ndef make_block(command, opt, content):\n h = \"$ {} {}\\n\".format(command, opt) + content\n return section(None,\n title(None, command),\n literal_block(None, h, language='console'),\n ids=[make_id(command)], names=[command])\n\n\nclass DatacubeDomain(Domain):\n name = 'datacube'\n label = 'Data Cube'\n directives = {\n 'click-help': ClickHelpDirective,\n }\n\n\ndef setup(app):\n app.add_config_value('click_utils_commands', {}, 'html')\n\n app.add_domain(DatacubeDomain)\n", "path": "docs/click_utils.py"}, {"content": "# This file is part of the Open Data Cube, see https://opendatacube.org for more information\n#\n# Copyright (c) 2015-2020 ODC Contributors\n# SPDX-License-Identifier: Apache-2.0\nimport os\nimport sys\n\nfrom bs4 import BeautifulSoup as bs\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.insert(0, os.path.abspath('.'))\nprint(sys.path)\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx_autodoc_typehints',\n 'sphinx.ext.graphviz',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.extlinks',\n 'sphinx.ext.mathjax',\n 'sphinx_click.ext',\n 'click_utils',\n 'autodocsumm',\n 'sphinx.ext.napoleon'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Open Data Cube'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"1.8\"\n# The full version, including alpha/beta/rc tags.\n# FIXME: obtain real version by running git\nrelease = version\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['README.rst']\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\nadd_function_parentheses = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\nshow_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'friendly'\n\nautosummary_generate = True\nautoclass_content = \"both\"\n\nautodoc_default_options = {\n 'autosummary': True,\n 'inherited-members': True\n}\n\nextlinks = {'issue': ('https://github.com/opendatacube/datacube-core/issues/%s', 'issue '),\n 'pull': ('https://github.com/opendatacube/datacube-core/pulls/%s', 'PR ')}\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy/', None),\n 'xarray': ('https://xarray.pydata.org/en/stable/', None),\n}\n\ngraphviz_output_format = 'svg'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'pydata_sphinx_theme'\nelse:\n html_theme = 'pydata_sphinx_theme'\n\nhtml_theme_options = {\n \"navigation_depth\": 1,\n \"show_prev_next\": False,\n \"collapse_navigation\": True,\n \"use_edit_page_button\": True,\n \"footer_items\": [\"odc-footer\"],\n \"page_sidebar_items\": [\n \"page-toc\",\n \"autoclass_page_toc\",\n \"autosummary_page_toc\",\n \"edit-this-page\"\n ],\n \"icon_links\": [\n {\n \"name\": \"GitHub\",\n \"url\": \"https://github.com/opendatacube/datacube-core\",\n \"icon\": \"fab fa-github\",\n },\n {\n \"name\": \"Slack\",\n \"url\": \"http://slack.opendatacube.org/\",\n \"icon\": \"fab fa-slack\",\n },\n ],\n}\n\nhtml_context = {\n \"github_user\": \"opendatacube\",\n \"github_repo\": \"datacube-core\",\n \"github_version\": \"develop\",\n \"doc_path\": \"docs\",\n}\n\nhtml_logo = '_static/odc-logo-horizontal.svg'\nhtml_static_path = ['_static']\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ODCdoc'\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'ODC.tex', u'Open Data Cube Documentation', 'Open Data Cube', 'manual')\n]\n\nnumfig = True\n\ndef custom_page_funcs(app, pagename, templatename, context, doctree):\n\n def get_autosummary_toc():\n soup = bs(context[\"body\"], \"html.parser\")\n\n class_sections = soup.find(class_='class')\n if class_sections != None:\n return \"\"\n\n matches = soup.find_all('dl')\n if matches == None or len(matches) == 0:\n return \"\"\n\n out = {\n 'title': '',\n 'menu_items': []\n }\n\n # remove the class dt\n pyclass = matches.pop(0)\n pyclass = pyclass.find('dt')\n if pyclass != None:\n out['title'] = pyclass.get('id')\n\n for match in matches:\n match_dt = match.find('dt')\n link = match.find(class_=\"headerlink\")\n if link != None:\n out['menu_items'].append({\n 'title': match_dt.get('id'),\n 'link': link['href']\n })\n\n return out\n\n def get_class_toc():\n soup = bs(context[\"body\"], \"html.parser\")\n\n class_sections = soup.find_all(class_='autosummary')\n if class_sections == None or len(class_sections) == 0:\n return \"\"\n\n out = {\n 'title': '',\n 'menu_items': []\n }\n class_title = soup.find(class_='class')\n if class_title == None:\n return \"\"\n\n pyclass = class_title.find('dt')\n if pyclass != None:\n out['title'] = pyclass.get('id')\n\n for section in class_sections:\n out_section = {\n 'title': '',\n 'menu_items': []\n }\n out_section['title'] = section.find_previous_sibling('p').text.replace(':','')\n matches = section.find_all('tr')\n for match in matches:\n link = match.find(class_=\"internal\")\n \n if link != None:\n title = link['title']\n if title != None:\n title = title.replace(out['title'], '')\n out_section['menu_items'].append({\n 'title': title,\n 'link': link['href']\n })\n if len(out_section['menu_items']) > 0:\n out['menu_items'].append(out_section)\n\n # print(out)\n return out\n\n context['get_class_toc'] = get_class_toc\n context['get_autosummary_toc'] = get_autosummary_toc\n\n\n\ndef setup(app):\n # Fix bug where code isn't being highlighted\n app.add_css_file('pygments.css')\n app.add_css_file('custom.css')\n\n app.connect(\"html-page-context\", custom_page_funcs)\n\n\n# Clean up generated documentation files that RTD seems to be having trouble with\nif on_rtd:\n import shutil\n\n shutil.rmtree('./dev/generate', ignore_errors=True)\n", "path": "docs/conf.py"}], "after_files": [{"content": "# This file is part of the Open Data Cube, see https://opendatacube.org for more information\n#\n# Copyright (c) 2015-2020 ODC Contributors\n# SPDX-License-Identifier: Apache-2.0\nimport pkg_resources\nfrom docutils.nodes import literal_block, section, title, make_id\nfrom sphinx.domains import Domain\nfrom docutils.parsers.rst import Directive\nimport importlib\n\nimport click\n\n\nclass ClickHelpDirective(Directive):\n has_content = True\n required_arguments = 1\n\n def run(self):\n root_cmd = self.arguments[0]\n\n env = self.state.document.settings.env\n\n group = find_script_callable_from_env(root_cmd, env)\n\n return [generate_help_text(group, [root_cmd])]\n\n\ndef find_script_callable_from_env(name, env):\n commands = env.config.click_utils_commands\n\n module, function_name = commands[name].split(':')\n module = importlib.import_module(module)\n return getattr(module, function_name)\n\n\ndef find_script_callable(name):\n return list(pkg_resources.iter_entry_points(\n 'console_scripts', name))[0].load()\n\n\ndef 
generate_help_text(command, prefix):\n ctx = click.Context(command)\n help_opts = command.get_help_option(ctx).opts\n full_cmd = ' '.join(prefix)\n block = section(None,\n title(None, full_cmd),\n ids=[make_id(full_cmd)], names=[full_cmd])\n if help_opts:\n h = \"$ {} {}\\n\".format(full_cmd, help_opts[0]) + command.get_help(ctx)\n block.append(literal_block(None, h, language='console'))\n\n if isinstance(command, click.core.MultiCommand):\n for c in command.list_commands(ctx):\n c = command.resolve_command(ctx, [c])[1]\n block.append(generate_help_text(c, prefix+[c.name]))\n\n return block\n\n\ndef make_block(command, opt, content):\n h = \"$ {} {}\\n\".format(command, opt) + content\n return section(None,\n title(None, command),\n literal_block(None, h, language='console'),\n ids=[make_id(command)], names=[command])\n\n\nclass DatacubeDomain(Domain):\n name = 'datacube'\n label = 'Data Cube'\n directives = {\n 'click-help': ClickHelpDirective,\n }\n\n\ndef setup(app):\n app.add_config_value('click_utils_commands', {}, 'html')\n\n app.add_domain(DatacubeDomain)\n return {\n 'parallel_read_safe': False,\n 'parallel_write_safe': False,\n }\n\n", "path": "docs/click_utils.py"}, {"content": "# This file is part of the Open Data Cube, see https://opendatacube.org for more information\n#\n# Copyright (c) 2015-2020 ODC Contributors\n# SPDX-License-Identifier: Apache-2.0\nimport os\nimport sys\n\nfrom bs4 import BeautifulSoup as bs\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.insert(0, os.path.abspath('.'))\nprint(sys.path)\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx_autodoc_typehints',\n 'sphinx.ext.graphviz',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.extlinks',\n 'sphinx.ext.mathjax',\n 'sphinx_click.ext',\n 'click_utils',\n 'autodocsumm',\n 'nbsphinx',\n 'sphinx.ext.napoleon'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Open Data Cube'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"1.8\"\n# The full version, including alpha/beta/rc tags.\n# FIXME: obtain real version by running git\nrelease = version\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['README.rst', '.condaenv', '.direnv']\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\nadd_function_parentheses = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\nshow_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'friendly'\n\nautosummary_generate = True\nautoclass_content = \"both\"\n\nautodoc_default_options = {\n 'autosummary': True,\n 'inherited-members': True\n}\n\nextlinks = {'issue': ('https://github.com/opendatacube/datacube-core/issues/%s', 'issue '),\n 'pull': ('https://github.com/opendatacube/datacube-core/pulls/%s', 'PR ')}\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy/', None),\n 'xarray': ('https://xarray.pydata.org/en/stable/', None),\n}\n\ngraphviz_output_format = 'svg'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'pydata_sphinx_theme'\nelse:\n html_theme = 'pydata_sphinx_theme'\n\nhtml_theme_options = {\n \"navigation_depth\": 1,\n \"show_prev_next\": False,\n \"collapse_navigation\": True,\n \"use_edit_page_button\": True,\n \"footer_items\": [\"odc-footer\"],\n \"page_sidebar_items\": [\n \"page-toc\",\n \"autoclass_page_toc\",\n \"autosummary_page_toc\",\n \"edit-this-page\"\n ],\n \"icon_links\": [\n {\n \"name\": \"GitHub\",\n \"url\": \"https://github.com/opendatacube/datacube-core\",\n \"icon\": \"fab fa-github\",\n },\n {\n \"name\": \"Slack\",\n \"url\": \"http://slack.opendatacube.org/\",\n \"icon\": \"fab fa-slack\",\n },\n ],\n}\n\nhtml_context = {\n \"github_user\": \"opendatacube\",\n \"github_repo\": \"datacube-core\",\n \"github_version\": \"develop\",\n \"doc_path\": \"docs\",\n}\n\nhtml_logo = '_static/odc-logo-horizontal.svg'\nhtml_static_path = ['_static']\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ODCdoc'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'ODC.tex', u'Open Data Cube Documentation', 'Open Data Cube', 'manual')\n]\n\nnumfig = True\n\ndef custom_page_funcs(app, pagename, templatename, context, doctree):\n\n def get_autosummary_toc():\n soup = bs(context[\"body\"], \"html.parser\")\n\n class_sections = soup.find(class_='class')\n if class_sections != None:\n return \"\"\n\n matches = soup.find_all('dl')\n if matches == None or len(matches) == 0:\n return \"\"\n\n out = {\n 'title': '',\n 'menu_items': []\n }\n\n # remove the class dt\n pyclass = matches.pop(0)\n pyclass = pyclass.find('dt')\n if pyclass != None:\n out['title'] = pyclass.get('id')\n\n for match in matches:\n match_dt = match.find('dt')\n link = match.find(class_=\"headerlink\")\n if link != None:\n out['menu_items'].append({\n 'title': match_dt.get('id'),\n 'link': link['href']\n })\n\n return out\n\n def get_class_toc():\n soup = bs(context[\"body\"], \"html.parser\")\n\n class_sections = soup.find_all(class_='autosummary')\n if class_sections == None or len(class_sections) == 0:\n return \"\"\n\n out = {\n 'title': '',\n 'menu_items': []\n }\n class_title = soup.find(class_='class')\n if class_title == None:\n return \"\"\n\n pyclass = class_title.find('dt')\n if pyclass != None:\n out['title'] = pyclass.get('id')\n\n for section in class_sections:\n out_section = {\n 'title': '',\n 'menu_items': []\n }\n out_section['title'] = section.find_previous_sibling('p').text.replace(':','')\n matches = section.find_all('tr')\n for match in matches:\n link = match.find(class_=\"internal\")\n \n if link != None:\n title = link['title']\n if title != None:\n title = title.replace(out['title'], '')\n out_section['menu_items'].append({\n 'title': title,\n 'link': link['href']\n })\n if len(out_section['menu_items']) > 0:\n out['menu_items'].append(out_section)\n\n # print(out)\n return out\n\n 
context['get_class_toc'] = get_class_toc\n context['get_autosummary_toc'] = get_autosummary_toc\n\n\n\ndef setup(app):\n # Fix bug where code isn't being highlighted\n app.add_css_file('pygments.css')\n app.add_css_file('custom.css')\n\n app.connect(\"html-page-context\", custom_page_funcs)\n\n\n# Clean up generated documentation files that RTD seems to be having trouble with\nif on_rtd:\n import shutil\n\n shutil.rmtree('./dev/generate', ignore_errors=True)\n", "path": "docs/conf.py"}]}
| 3,933 | 258 |
gh_patches_debug_35033
|
rasdani/github-patches
|
git_diff
|
kornia__kornia-2635
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
feat: LightGlue-ONNX
#### Changes
<!-- Please include a summary of the change and which issue is fixed. -->
This PR adds a wrapper class for loading and running LightGlue-ONNX models via ONNXRuntime.
<!-- Please also include relevant motivation and context. -->
Re: https://github.com/fabio-sim/LightGlue-ONNX/issues/40
<!-- List any dependencies that are required for this change. -->
`onnxruntime-gpu>=1.16` is required as a dependency to instantiate the new `OnnxLightGlue` class. (Importing it will still work without installing).
related to #2559
#### Type of change
<!-- Please delete options that are not relevant. -->
- [x] 📚 Documentation Update
- [x] 🧪 Tests Cases
- [x] 🔬 New feature (non-breaking change which adds functionality)
- [x] 📝 This change requires a documentation update
#### Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] Did you update CHANGELOG in case of a major change?
#### Example Usage
Sample images can be found [here](https://github.com/fabio-sim/LightGlue-ONNX/tree/main/assets).
```python
import torch
from kornia.feature import DISK, OnnxLightGlue
from kornia.io import ImageLoadType, load_image
device = torch.device("cuda")
img0 = load_image("sacre_coeur1.jpg", ImageLoadType.RGB32, device=device)[None]
img1 = load_image("sacre_coeur2.jpg", ImageLoadType.RGB32, device=device)[None]
extractor = DISK.from_pretrained("depth", device=device).eval().to(device)
data = {}
with torch.no_grad():
for key, img in [("image0", img0), ("image1", img1)]:
features = extractor(img, n=None, window_size=5, score_threshold=0.0, pad_if_not_divisible=True)
data[key] = {
"image": img,
"keypoints": features[0].keypoints[None],
"keypoint_scores": features[0].detection_scores[None],
"descriptors": features[0].descriptors[None],
}
matcher = OnnxLightGlue(weights="disk_fp16", device=device)
result = matcher(data)
print(result)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/feature/lightglue_onnx/lightglue.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import ClassVar
4
5 import torch
6
7 from kornia.core import Device, Tensor
8 from kornia.core.check import KORNIA_CHECK, KORNIA_CHECK_SAME_DEVICES, KORNIA_CHECK_SHAPE
9
10 from .utils import download_onnx_from_url, normalize_keypoints
11
12 try:
13 import numpy as np
14 import onnxruntime as ort
15 except ImportError:
16 np = None # type: ignore
17 ort = None
18
19 __all__ = ["OnnxLightGlue"]
20
21
22 class OnnxLightGlue:
23 r"""Wrapper for loading LightGlue-ONNX models and running inference via ONNXRuntime.
24
25 LightGlue :cite:`LightGlue2023` performs fast descriptor-based deep keypoint matching.
26 This module requires `onnxruntime` to be installed.
27
28 If you have trained your own LightGlue model, see https://github.com/fabio-sim/LightGlue-ONNX
29 for how to export the model to ONNX and optimize it.
30
31 Args:
32 weights: Pretrained weights, or a path to your own exported ONNX model. Available pretrained weights are:
33 `disk`, `superpoint`, `disk_fp16`, and `superpoint_fp16`. Defaults to `disk_fp16`.
34 device: Device to run inference on. Currently, only `cuda` is supported. Defaults to `cuda`.
35 """
36 MODEL_URLS: ClassVar[dict[str, str]] = {
37 "disk": "https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused.onnx",
38 "superpoint": "https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/superpoint_lightglue_fused.onnx",
39 "disk_fp16": "https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused_fp16.onnx",
40 "superpoint_fp16": "https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/superpoint_lightglue_fused_fp16.onnx",
41 }
42
43 required_data_keys: ClassVar[list[str]] = ["image0", "image1"]
44
45 def __init__(self, weights: str = "disk_fp16", device: Device = None) -> None:
46 KORNIA_CHECK(ort is not None, "onnxruntime is not installed.")
47 KORNIA_CHECK(np is not None, "numpy is not installed.")
48
49 if device is None:
50 device = torch.device("cuda")
51 elif isinstance(device, str):
52 device = torch.device(device)
53 self.device = device
54
55 if device.type == "cpu":
56 raise NotImplementedError("CPUExecutionProvider is not supported yet for Multihead-Attention op.")
57 elif device.type == "cuda":
58 providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
59 else:
60 raise ValueError(f"Unsupported device {device}")
61
62 if weights in self.MODEL_URLS:
63 weights = download_onnx_from_url(self.MODEL_URLS[weights])
64
65 self.session = ort.InferenceSession(weights, providers=providers)
66
67 def __call__(self, data: dict[str, dict[str, Tensor]]) -> dict[str, Tensor]:
68 return self.forward(data)
69
70 def forward(self, data: dict[str, dict[str, Tensor]]) -> dict[str, Tensor]:
71 r"""Match keypoints and descriptors between two images.
72
73 The output contains the matches (the indices of the matching keypoint pairs between the first and second image)
74 and the corresponding confidence scores.
75 Only a batch size of 1 is supported.
76
77 Args:
78 data: Dictionary containing both images and the keypoints and descriptors thereof.
79
80 Returns:
81 output: Dictionary containing the following matches and scores.
82
83 `data`:
84 image0: dict
85 keypoints (`float32`): [1 x M x 2]
86 descriptors (`float32`): [1 x M x D]
87 image: [1 x C x H x W] or image_size: [1 x 2]
88 image1: dict
89 keypoints (`float32`): [1 x N x 2]
90 descriptors (`float32`): [1 x N x D]
91 image: [1 x C x H x W] or image_size: [1 x 2]
92
93 `output`:
94 matches (`int64`): [S x 2]
95 scores (`float32`): [S]
96 """
97 # Input validation.
98 for key in self.required_data_keys:
99 KORNIA_CHECK(key in data, f'Missing key {key} in data')
100 data0, data1 = data['image0'], data['image1']
101 kpts0_, kpts1_ = data0['keypoints'].contiguous(), data1['keypoints'].contiguous()
102 desc0, desc1 = data0['descriptors'].contiguous(), data1['descriptors'].contiguous()
103 KORNIA_CHECK_SAME_DEVICES([kpts0_, desc0, kpts1_, desc1], "Wrong device")
104 KORNIA_CHECK(kpts0_.device.type == self.device.type, "Wrong device")
105 KORNIA_CHECK(torch.float32 == kpts0_.dtype == kpts1_.dtype == desc0.dtype == desc1.dtype, "Wrong dtype")
106 KORNIA_CHECK_SHAPE(kpts0_, ["1", "M", "2"])
107 KORNIA_CHECK_SHAPE(kpts1_, ["1", "N", "2"])
108 KORNIA_CHECK_SHAPE(desc0, ["1", "M", "D"])
109 KORNIA_CHECK_SHAPE(desc1, ["1", "N", "D"])
110 KORNIA_CHECK(kpts0_.shape[1] == desc0.shape[1], "Number of keypoints does not match number of descriptors")
111 KORNIA_CHECK(kpts1_.shape[1] == desc1.shape[1], "Number of keypoints does not match number of descriptors")
112 KORNIA_CHECK(desc0.shape[2] == desc1.shape[2], "Descriptors' dimensions do not match")
113
114 # Normalize keypoints.
115 size0, size1 = data0.get('image_size'), data1.get('image_size')
116 size0 = size0 if size0 is not None else data0['image'].shape[-2:][::-1] # type: ignore
117 size1 = size1 if size1 is not None else data1['image'].shape[-2:][::-1] # type: ignore
118
119 kpts0 = normalize_keypoints(kpts0_, size=size0) # type: ignore
120 kpts1 = normalize_keypoints(kpts1_, size=size1) # type: ignore
121
122 KORNIA_CHECK(torch.all(kpts0 >= -1).item() and torch.all(kpts0 <= 1).item(), "") # type: ignore
123 KORNIA_CHECK(torch.all(kpts1 >= -1).item() and torch.all(kpts1 <= 1).item(), "") # type: ignore
124
125 # Inference.
126 lightglue_inputs = {"kpts0": kpts0, "kpts1": kpts1, "desc0": desc0, "desc1": desc1}
127 lightglue_outputs = ["matches0", "mscores0"]
128 binding = self.session.io_binding()
129
130 for name, tensor in lightglue_inputs.items():
131 binding.bind_input(
132 name,
133 device_type=self.device.type,
134 device_id=0,
135 element_type=np.float32,
136 shape=tuple(tensor.shape),
137 buffer_ptr=tensor.data_ptr(),
138 )
139
140 for name in lightglue_outputs:
141 binding.bind_output(name, device_type=self.device.type, device_id=0)
142
143 self.session.run_with_iobinding(binding)
144
145 matches, mscores = binding.get_outputs()
146
147 # TODO: The following is an unnecessary copy. Replace with a better solution when torch supports
148 # constructing a tensor from a data pointer, or when ORT supports converting to torch tensor.
149 # https://github.com/microsoft/onnxruntime/issues/15963
150 outputs = {
151 "matches": torch.from_dlpack(matches.numpy()).to(self.device),
152 "scores": torch.from_dlpack(mscores.numpy()).to(self.device),
153 }
154 return outputs
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kornia/feature/lightglue_onnx/lightglue.py b/kornia/feature/lightglue_onnx/lightglue.py
--- a/kornia/feature/lightglue_onnx/lightglue.py
+++ b/kornia/feature/lightglue_onnx/lightglue.py
@@ -29,9 +29,9 @@
for how to export the model to ONNX and optimize it.
Args:
- weights: Pretrained weights, or a path to your own exported ONNX model. Available pretrained weights are:
- `disk`, `superpoint`, `disk_fp16`, and `superpoint_fp16`. Defaults to `disk_fp16`.
- device: Device to run inference on. Currently, only `cuda` is supported. Defaults to `cuda`.
+ weights: Pretrained weights, or a path to your own exported ONNX model. Available pretrained weights
+ are ``'disk'``, ``'superpoint'``, ``'disk_fp16'``, and ``'superpoint_fp16'``.
+ device: Device to run inference on. Currently, only ``'cuda'`` is supported. Defaults to ``'cuda'``.
"""
MODEL_URLS: ClassVar[dict[str, str]] = {
"disk": "https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused.onnx",
@@ -78,21 +78,27 @@
data: Dictionary containing both images and the keypoints and descriptors thereof.
Returns:
- output: Dictionary containing the following matches and scores.
-
- `data`:
- image0: dict
- keypoints (`float32`): [1 x M x 2]
- descriptors (`float32`): [1 x M x D]
- image: [1 x C x H x W] or image_size: [1 x 2]
- image1: dict
- keypoints (`float32`): [1 x N x 2]
- descriptors (`float32`): [1 x N x D]
- image: [1 x C x H x W] or image_size: [1 x 2]
-
- `output`:
- matches (`int64`): [S x 2]
- scores (`float32`): [S]
+ Dictionary containing the matches and scores.
+
+ ``data`` (``dict``):
+ ``image0`` (``dict``):
+ ``keypoints`` (`float32`): :math:`(1, M, 2)`
+
+ ``descriptors`` (`float32`): :math:`(1, M, D)`
+
+ ``image``: :math:`(1, C, H, W)` or ``image_size``: :math:`(1, 2)`
+
+ ``image1`` (``dict``):
+ ``keypoints`` (`float32`): :math:`(1, N, 2)`
+
+ ``descriptors`` (`float32`): :math:`(1, N, D)`
+
+ ``image``: :math:`(1, C, H, W)` or ``image_size``: :math:`(1, 2)`
+
+ ``output`` (``dict``):
+ ``matches`` (`int64`): :math:`(S, 2)`
+
+ ``scores`` (`float32`): :math:`(S)`
"""
# Input validation.
for key in self.required_data_keys:
|
{"golden_diff": "diff --git a/kornia/feature/lightglue_onnx/lightglue.py b/kornia/feature/lightglue_onnx/lightglue.py\n--- a/kornia/feature/lightglue_onnx/lightglue.py\n+++ b/kornia/feature/lightglue_onnx/lightglue.py\n@@ -29,9 +29,9 @@\n for how to export the model to ONNX and optimize it.\n \n Args:\n- weights: Pretrained weights, or a path to your own exported ONNX model. Available pretrained weights are:\n- `disk`, `superpoint`, `disk_fp16`, and `superpoint_fp16`. Defaults to `disk_fp16`.\n- device: Device to run inference on. Currently, only `cuda` is supported. Defaults to `cuda`.\n+ weights: Pretrained weights, or a path to your own exported ONNX model. Available pretrained weights\n+ are ``'disk'``, ``'superpoint'``, ``'disk_fp16'``, and ``'superpoint_fp16'``.\n+ device: Device to run inference on. Currently, only ``'cuda'`` is supported. Defaults to ``'cuda'``.\n \"\"\"\n MODEL_URLS: ClassVar[dict[str, str]] = {\n \"disk\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused.onnx\",\n@@ -78,21 +78,27 @@\n data: Dictionary containing both images and the keypoints and descriptors thereof.\n \n Returns:\n- output: Dictionary containing the following matches and scores.\n-\n- `data`:\n- image0: dict\n- keypoints (`float32`): [1 x M x 2]\n- descriptors (`float32`): [1 x M x D]\n- image: [1 x C x H x W] or image_size: [1 x 2]\n- image1: dict\n- keypoints (`float32`): [1 x N x 2]\n- descriptors (`float32`): [1 x N x D]\n- image: [1 x C x H x W] or image_size: [1 x 2]\n-\n- `output`:\n- matches (`int64`): [S x 2]\n- scores (`float32`): [S]\n+ Dictionary containing the matches and scores.\n+\n+ ``data`` (``dict``):\n+ ``image0`` (``dict``):\n+ ``keypoints`` (`float32`): :math:`(1, M, 2)`\n+\n+ ``descriptors`` (`float32`): :math:`(1, M, D)`\n+\n+ ``image``: :math:`(1, C, H, W)` or ``image_size``: :math:`(1, 2)`\n+\n+ ``image1`` (``dict``):\n+ ``keypoints`` (`float32`): :math:`(1, N, 2)`\n+\n+ ``descriptors`` (`float32`): :math:`(1, N, D)`\n+\n+ ``image``: :math:`(1, C, H, W)` or ``image_size``: :math:`(1, 2)`\n+\n+ ``output`` (``dict``):\n+ ``matches`` (`int64`): :math:`(S, 2)`\n+\n+ ``scores`` (`float32`): :math:`(S)`\n \"\"\"\n # Input validation.\n for key in self.required_data_keys:\n", "issue": "feat: LightGlue-ONNX\n#### Changes\r\n<!-- Please include a summary of the change and which issue is fixed. -->\r\nThis PR adds a wrapper class for loading and running LightGlue-ONNX models via ONNXRuntime.\r\n<!-- Please also include relevant motivation and context. -->\r\nRe: https://github.com/fabio-sim/LightGlue-ONNX/issues/40\r\n<!-- List any dependencies that are required for this change. -->\r\n`onnxruntime-gpu>=1.16` is required as a dependency to instantiate the new `OnnxLightGlue` class. (Importing it will still work without installing).\r\nrelated to #2559 \r\n\r\n#### Type of change\r\n<!-- Please delete options that are not relevant. 
-->\r\n- [x] \ud83d\udcda Documentation Update\r\n- [x] \ud83e\uddea Tests Cases\r\n- [x] \ud83d\udd2c New feature (non-breaking change which adds functionality)\r\n- [x] \ud83d\udcdd This change requires a documentation update\r\n\r\n#### Checklist\r\n\r\n- [x] My code follows the style guidelines of this project\r\n- [x] I have performed a self-review of my own code\r\n- [x] I have commented my code, particularly in hard-to-understand areas\r\n- [x] I have made corresponding changes to the documentation\r\n- [x] My changes generate no new warnings\r\n- [ ] Did you update CHANGELOG in case of a major change?\r\n\r\n#### Example Usage\r\nSample images can be found [here](https://github.com/fabio-sim/LightGlue-ONNX/tree/main/assets).\r\n```python\r\nimport torch\r\n\r\nfrom kornia.feature import DISK, OnnxLightGlue\r\nfrom kornia.io import ImageLoadType, load_image\r\n\r\ndevice = torch.device(\"cuda\")\r\n\r\nimg0 = load_image(\"sacre_coeur1.jpg\", ImageLoadType.RGB32, device=device)[None]\r\nimg1 = load_image(\"sacre_coeur2.jpg\", ImageLoadType.RGB32, device=device)[None]\r\n\r\nextractor = DISK.from_pretrained(\"depth\", device=device).eval().to(device)\r\n\r\ndata = {}\r\nwith torch.no_grad():\r\n for key, img in [(\"image0\", img0), (\"image1\", img1)]:\r\n features = extractor(img, n=None, window_size=5, score_threshold=0.0, pad_if_not_divisible=True)\r\n data[key] = {\r\n \"image\": img,\r\n \"keypoints\": features[0].keypoints[None],\r\n \"keypoint_scores\": features[0].detection_scores[None],\r\n \"descriptors\": features[0].descriptors[None],\r\n }\r\n\r\nmatcher = OnnxLightGlue(weights=\"disk_fp16\", device=device)\r\nresult = matcher(data)\r\nprint(result)\r\n```\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import ClassVar\n\nimport torch\n\nfrom kornia.core import Device, Tensor\nfrom kornia.core.check import KORNIA_CHECK, KORNIA_CHECK_SAME_DEVICES, KORNIA_CHECK_SHAPE\n\nfrom .utils import download_onnx_from_url, normalize_keypoints\n\ntry:\n import numpy as np\n import onnxruntime as ort\nexcept ImportError:\n np = None # type: ignore\n ort = None\n\n__all__ = [\"OnnxLightGlue\"]\n\n\nclass OnnxLightGlue:\n r\"\"\"Wrapper for loading LightGlue-ONNX models and running inference via ONNXRuntime.\n\n LightGlue :cite:`LightGlue2023` performs fast descriptor-based deep keypoint matching.\n This module requires `onnxruntime` to be installed.\n\n If you have trained your own LightGlue model, see https://github.com/fabio-sim/LightGlue-ONNX\n for how to export the model to ONNX and optimize it.\n\n Args:\n weights: Pretrained weights, or a path to your own exported ONNX model. Available pretrained weights are:\n `disk`, `superpoint`, `disk_fp16`, and `superpoint_fp16`. Defaults to `disk_fp16`.\n device: Device to run inference on. Currently, only `cuda` is supported. 
Defaults to `cuda`.\n \"\"\"\n MODEL_URLS: ClassVar[dict[str, str]] = {\n \"disk\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused.onnx\",\n \"superpoint\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/superpoint_lightglue_fused.onnx\",\n \"disk_fp16\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused_fp16.onnx\",\n \"superpoint_fp16\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/superpoint_lightglue_fused_fp16.onnx\",\n }\n\n required_data_keys: ClassVar[list[str]] = [\"image0\", \"image1\"]\n\n def __init__(self, weights: str = \"disk_fp16\", device: Device = None) -> None:\n KORNIA_CHECK(ort is not None, \"onnxruntime is not installed.\")\n KORNIA_CHECK(np is not None, \"numpy is not installed.\")\n\n if device is None:\n device = torch.device(\"cuda\")\n elif isinstance(device, str):\n device = torch.device(device)\n self.device = device\n\n if device.type == \"cpu\":\n raise NotImplementedError(\"CPUExecutionProvider is not supported yet for Multihead-Attention op.\")\n elif device.type == \"cuda\":\n providers = [\"CUDAExecutionProvider\", \"CPUExecutionProvider\"]\n else:\n raise ValueError(f\"Unsupported device {device}\")\n\n if weights in self.MODEL_URLS:\n weights = download_onnx_from_url(self.MODEL_URLS[weights])\n\n self.session = ort.InferenceSession(weights, providers=providers)\n\n def __call__(self, data: dict[str, dict[str, Tensor]]) -> dict[str, Tensor]:\n return self.forward(data)\n\n def forward(self, data: dict[str, dict[str, Tensor]]) -> dict[str, Tensor]:\n r\"\"\"Match keypoints and descriptors between two images.\n\n The output contains the matches (the indices of the matching keypoint pairs between the first and second image)\n and the corresponding confidence scores.\n Only a batch size of 1 is supported.\n\n Args:\n data: Dictionary containing both images and the keypoints and descriptors thereof.\n\n Returns:\n output: Dictionary containing the following matches and scores.\n\n `data`:\n image0: dict\n keypoints (`float32`): [1 x M x 2]\n descriptors (`float32`): [1 x M x D]\n image: [1 x C x H x W] or image_size: [1 x 2]\n image1: dict\n keypoints (`float32`): [1 x N x 2]\n descriptors (`float32`): [1 x N x D]\n image: [1 x C x H x W] or image_size: [1 x 2]\n\n `output`:\n matches (`int64`): [S x 2]\n scores (`float32`): [S]\n \"\"\"\n # Input validation.\n for key in self.required_data_keys:\n KORNIA_CHECK(key in data, f'Missing key {key} in data')\n data0, data1 = data['image0'], data['image1']\n kpts0_, kpts1_ = data0['keypoints'].contiguous(), data1['keypoints'].contiguous()\n desc0, desc1 = data0['descriptors'].contiguous(), data1['descriptors'].contiguous()\n KORNIA_CHECK_SAME_DEVICES([kpts0_, desc0, kpts1_, desc1], \"Wrong device\")\n KORNIA_CHECK(kpts0_.device.type == self.device.type, \"Wrong device\")\n KORNIA_CHECK(torch.float32 == kpts0_.dtype == kpts1_.dtype == desc0.dtype == desc1.dtype, \"Wrong dtype\")\n KORNIA_CHECK_SHAPE(kpts0_, [\"1\", \"M\", \"2\"])\n KORNIA_CHECK_SHAPE(kpts1_, [\"1\", \"N\", \"2\"])\n KORNIA_CHECK_SHAPE(desc0, [\"1\", \"M\", \"D\"])\n KORNIA_CHECK_SHAPE(desc1, [\"1\", \"N\", \"D\"])\n KORNIA_CHECK(kpts0_.shape[1] == desc0.shape[1], \"Number of keypoints does not match number of descriptors\")\n KORNIA_CHECK(kpts1_.shape[1] == desc1.shape[1], \"Number of keypoints does not match number of descriptors\")\n KORNIA_CHECK(desc0.shape[2] == desc1.shape[2], \"Descriptors' dimensions do not 
match\")\n\n # Normalize keypoints.\n size0, size1 = data0.get('image_size'), data1.get('image_size')\n size0 = size0 if size0 is not None else data0['image'].shape[-2:][::-1] # type: ignore\n size1 = size1 if size1 is not None else data1['image'].shape[-2:][::-1] # type: ignore\n\n kpts0 = normalize_keypoints(kpts0_, size=size0) # type: ignore\n kpts1 = normalize_keypoints(kpts1_, size=size1) # type: ignore\n\n KORNIA_CHECK(torch.all(kpts0 >= -1).item() and torch.all(kpts0 <= 1).item(), \"\") # type: ignore\n KORNIA_CHECK(torch.all(kpts1 >= -1).item() and torch.all(kpts1 <= 1).item(), \"\") # type: ignore\n\n # Inference.\n lightglue_inputs = {\"kpts0\": kpts0, \"kpts1\": kpts1, \"desc0\": desc0, \"desc1\": desc1}\n lightglue_outputs = [\"matches0\", \"mscores0\"]\n binding = self.session.io_binding()\n\n for name, tensor in lightglue_inputs.items():\n binding.bind_input(\n name,\n device_type=self.device.type,\n device_id=0,\n element_type=np.float32,\n shape=tuple(tensor.shape),\n buffer_ptr=tensor.data_ptr(),\n )\n\n for name in lightglue_outputs:\n binding.bind_output(name, device_type=self.device.type, device_id=0)\n\n self.session.run_with_iobinding(binding)\n\n matches, mscores = binding.get_outputs()\n\n # TODO: The following is an unnecessary copy. Replace with a better solution when torch supports\n # constructing a tensor from a data pointer, or when ORT supports converting to torch tensor.\n # https://github.com/microsoft/onnxruntime/issues/15963\n outputs = {\n \"matches\": torch.from_dlpack(matches.numpy()).to(self.device),\n \"scores\": torch.from_dlpack(mscores.numpy()).to(self.device),\n }\n return outputs\n", "path": "kornia/feature/lightglue_onnx/lightglue.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import ClassVar\n\nimport torch\n\nfrom kornia.core import Device, Tensor\nfrom kornia.core.check import KORNIA_CHECK, KORNIA_CHECK_SAME_DEVICES, KORNIA_CHECK_SHAPE\n\nfrom .utils import download_onnx_from_url, normalize_keypoints\n\ntry:\n import numpy as np\n import onnxruntime as ort\nexcept ImportError:\n np = None # type: ignore\n ort = None\n\n__all__ = [\"OnnxLightGlue\"]\n\n\nclass OnnxLightGlue:\n r\"\"\"Wrapper for loading LightGlue-ONNX models and running inference via ONNXRuntime.\n\n LightGlue :cite:`LightGlue2023` performs fast descriptor-based deep keypoint matching.\n This module requires `onnxruntime` to be installed.\n\n If you have trained your own LightGlue model, see https://github.com/fabio-sim/LightGlue-ONNX\n for how to export the model to ONNX and optimize it.\n\n Args:\n weights: Pretrained weights, or a path to your own exported ONNX model. Available pretrained weights\n are ``'disk'``, ``'superpoint'``, ``'disk_fp16'``, and ``'superpoint_fp16'``.\n device: Device to run inference on. Currently, only ``'cuda'`` is supported. 
Defaults to ``'cuda'``.\n \"\"\"\n MODEL_URLS: ClassVar[dict[str, str]] = {\n \"disk\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused.onnx\",\n \"superpoint\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/superpoint_lightglue_fused.onnx\",\n \"disk_fp16\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/disk_lightglue_fused_fp16.onnx\",\n \"superpoint_fp16\": \"https://github.com/fabio-sim/LightGlue-ONNX/releases/download/v1.0.0/superpoint_lightglue_fused_fp16.onnx\",\n }\n\n required_data_keys: ClassVar[list[str]] = [\"image0\", \"image1\"]\n\n def __init__(self, weights: str = \"disk_fp16\", device: Device = None) -> None:\n KORNIA_CHECK(ort is not None, \"onnxruntime is not installed.\")\n KORNIA_CHECK(np is not None, \"numpy is not installed.\")\n\n if device is None:\n device = torch.device(\"cuda\")\n elif isinstance(device, str):\n device = torch.device(device)\n self.device = device\n\n if device.type == \"cpu\":\n raise NotImplementedError(\"CPUExecutionProvider is not supported yet for Multihead-Attention op.\")\n elif device.type == \"cuda\":\n providers = [\"CUDAExecutionProvider\", \"CPUExecutionProvider\"]\n else:\n raise ValueError(f\"Unsupported device {device}\")\n\n if weights in self.MODEL_URLS:\n weights = download_onnx_from_url(self.MODEL_URLS[weights])\n\n self.session = ort.InferenceSession(weights, providers=providers)\n\n def __call__(self, data: dict[str, dict[str, Tensor]]) -> dict[str, Tensor]:\n return self.forward(data)\n\n def forward(self, data: dict[str, dict[str, Tensor]]) -> dict[str, Tensor]:\n r\"\"\"Match keypoints and descriptors between two images.\n\n The output contains the matches (the indices of the matching keypoint pairs between the first and second image)\n and the corresponding confidence scores.\n Only a batch size of 1 is supported.\n\n Args:\n data: Dictionary containing both images and the keypoints and descriptors thereof.\n\n Returns:\n Dictionary containing the matches and scores.\n\n ``data`` (``dict``):\n ``image0`` (``dict``):\n ``keypoints`` (`float32`): :math:`(1, M, 2)`\n\n ``descriptors`` (`float32`): :math:`(1, M, D)`\n\n ``image``: :math:`(1, C, H, W)` or ``image_size``: :math:`(1, 2)`\n\n ``image1`` (``dict``):\n ``keypoints`` (`float32`): :math:`(1, N, 2)`\n\n ``descriptors`` (`float32`): :math:`(1, N, D)`\n\n ``image``: :math:`(1, C, H, W)` or ``image_size``: :math:`(1, 2)`\n\n ``output`` (``dict``):\n ``matches`` (`int64`): :math:`(S, 2)`\n\n ``scores`` (`float32`): :math:`(S)`\n \"\"\"\n # Input validation.\n for key in self.required_data_keys:\n KORNIA_CHECK(key in data, f'Missing key {key} in data')\n data0, data1 = data['image0'], data['image1']\n kpts0_, kpts1_ = data0['keypoints'].contiguous(), data1['keypoints'].contiguous()\n desc0, desc1 = data0['descriptors'].contiguous(), data1['descriptors'].contiguous()\n KORNIA_CHECK_SAME_DEVICES([kpts0_, desc0, kpts1_, desc1], \"Wrong device\")\n KORNIA_CHECK(kpts0_.device.type == self.device.type, \"Wrong device\")\n KORNIA_CHECK(torch.float32 == kpts0_.dtype == kpts1_.dtype == desc0.dtype == desc1.dtype, \"Wrong dtype\")\n KORNIA_CHECK_SHAPE(kpts0_, [\"1\", \"M\", \"2\"])\n KORNIA_CHECK_SHAPE(kpts1_, [\"1\", \"N\", \"2\"])\n KORNIA_CHECK_SHAPE(desc0, [\"1\", \"M\", \"D\"])\n KORNIA_CHECK_SHAPE(desc1, [\"1\", \"N\", \"D\"])\n KORNIA_CHECK(kpts0_.shape[1] == desc0.shape[1], \"Number of keypoints does not match number of descriptors\")\n KORNIA_CHECK(kpts1_.shape[1] == 
desc1.shape[1], \"Number of keypoints does not match number of descriptors\")\n KORNIA_CHECK(desc0.shape[2] == desc1.shape[2], \"Descriptors' dimensions do not match\")\n\n # Normalize keypoints.\n size0, size1 = data0.get('image_size'), data1.get('image_size')\n size0 = size0 if size0 is not None else data0['image'].shape[-2:][::-1] # type: ignore\n size1 = size1 if size1 is not None else data1['image'].shape[-2:][::-1] # type: ignore\n\n kpts0 = normalize_keypoints(kpts0_, size=size0) # type: ignore\n kpts1 = normalize_keypoints(kpts1_, size=size1) # type: ignore\n\n KORNIA_CHECK(torch.all(kpts0 >= -1).item() and torch.all(kpts0 <= 1).item(), \"\") # type: ignore\n KORNIA_CHECK(torch.all(kpts1 >= -1).item() and torch.all(kpts1 <= 1).item(), \"\") # type: ignore\n\n # Inference.\n lightglue_inputs = {\"kpts0\": kpts0, \"kpts1\": kpts1, \"desc0\": desc0, \"desc1\": desc1}\n lightglue_outputs = [\"matches0\", \"mscores0\"]\n binding = self.session.io_binding()\n\n for name, tensor in lightglue_inputs.items():\n binding.bind_input(\n name,\n device_type=self.device.type,\n device_id=0,\n element_type=np.float32,\n shape=tuple(tensor.shape),\n buffer_ptr=tensor.data_ptr(),\n )\n\n for name in lightglue_outputs:\n binding.bind_output(name, device_type=self.device.type, device_id=0)\n\n self.session.run_with_iobinding(binding)\n\n matches, mscores = binding.get_outputs()\n\n # TODO: The following is an unnecessary copy. Replace with a better solution when torch supports\n # constructing a tensor from a data pointer, or when ORT supports converting to torch tensor.\n # https://github.com/microsoft/onnxruntime/issues/15963\n outputs = {\n \"matches\": torch.from_dlpack(matches.numpy()).to(self.device),\n \"scores\": torch.from_dlpack(mscores.numpy()).to(self.device),\n }\n return outputs\n", "path": "kornia/feature/lightglue_onnx/lightglue.py"}]}
| 3,052 | 797 |
gh_patches_debug_24695
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-484
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Recursively initialize Module parameters
Say I have an `ExactGP` called `gp`. It would be great if I could just do `gp.initialize(kwargs)` and all the parameters will be initialized recursively. This would allow us to put all the initialization values in one place.
Note that it would have to raise an error if there were a parameter name collision.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/module.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from collections import OrderedDict
4
5 import torch
6 from torch import nn
7 from torch.distributions import Distribution
8
9 from .lazy import LazyTensor
10 from .utils.deprecation import DeprecationError
11
12
13 class Module(nn.Module):
14 def __init__(self):
15 super().__init__()
16 self._added_loss_terms = OrderedDict()
17 self._priors = OrderedDict()
18
19 def __call__(self, *inputs, **kwargs):
20 outputs = self.forward(*inputs, **kwargs)
21 if isinstance(outputs, list):
22 return [_validate_module_outputs(output) for output in outputs]
23 return _validate_module_outputs(outputs)
24
25 def _get_module_and_name(self, parameter_name):
26 """Get module and name from full parameter name."""
27 module, name = parameter_name.split(".", 1)
28 if module in self._modules:
29 return self.__getattr__(module), name
30 else:
31 raise AttributeError(
32 "Invalid parameter name {}. {} has no module {}".format(parameter_name, type(self).__name__, module)
33 )
34
35 def added_loss_terms(self):
36 for _, strategy in self.named_added_loss_terms():
37 yield strategy
38
39 def forward(self, *inputs, **kwargs):
40 raise NotImplementedError
41
42 def hyperparameters(self):
43 for _, param in self.named_hyperparameters():
44 yield param
45
46 def initialize(self, **kwargs):
47 """
48 Set a value for a parameter
49
50 kwargs: (param_name, value) - parameter to initialize
51 Value can take the form of a tensor, a float, or an int
52 """
53
54 for name, val in kwargs.items():
55 if isinstance(val, int):
56 val = float(val)
57 if not hasattr(self, name):
58 raise AttributeError("Unknown parameter {p} for {c}".format(p=name, c=self.__class__.__name__))
59 elif name not in self._parameters:
60 setattr(self, name, val)
61 elif torch.is_tensor(val):
62 try:
63 self.__getattr__(name).data.copy_(val.expand_as(self.__getattr__(name)))
64 except RuntimeError:
65 self.__getattr__(name).data.copy_(val.view_as(self.__getattr__(name)))
66
67 elif isinstance(val, float):
68 self.__getattr__(name).data.fill_(val)
69 else:
70 raise AttributeError("Type {t} not valid for initializing parameter {p}".format(t=type(val), p=name))
71
72 # Ensure value is contained in support of prior (if present)
73 prior_name = "_".join([name, "prior"])
74 if prior_name in self._priors:
75 prior, closure, _ = self._priors[prior_name]
76 try:
77 prior._validate_sample(closure())
78 except ValueError as e:
79 raise ValueError("Invalid input value for prior {}. Error:\n{}".format(prior_name, e))
80
81 return self
82
83 def named_added_loss_terms(self):
84 """Returns an iterator over module variational strategies, yielding both
85 the name of the variational strategy as well as the strategy itself.
86
87 Yields:
88 (string, VariationalStrategy): Tuple containing the name of the
89 strategy and the strategy
90
91 """
92 return _extract_named_added_loss_terms(module=self, memo=None, prefix="")
93
94 def named_hyperparameters(self):
95 for name, param in self.named_parameters():
96 if "variational_" not in name:
97 yield name, param
98
99 def named_priors(self, memo=None, prefix=""):
100 """Returns an iterator over the module's priors, yielding the name of the prior,
101 the prior, the associated parameter names, and the transformation callable.
102
103 Yields:
104 (string, Prior, tuple((Parameter, callable)), callable): Tuple containing:
105 - the name of the prior
106 - the prior
107 - a tuple of tuples (param, transform), one for each of the parameters associated with the prior
108 - the prior's transform to be called on the parameters
109 """
110 return _extract_named_priors(module=self, memo=None, prefix="")
111
112 def named_variational_parameters(self):
113 for name, param in self.named_parameters():
114 if "variational_" in name:
115 yield name, param
116
117 def register_added_loss_term(self, name):
118 self._added_loss_terms[name] = None
119
120 def register_parameter(self, name, parameter, prior=None):
121 r"""
122 Adds a parameter to the module. The parameter can be accessed as an attribute using the given name.
123
124 Args:
125 :attr:`name` (str):
126 The name of the parameter
127 :attr:`parameter` (torch.nn.Parameter):
128 The parameter
129 """
130 if prior is not None:
131 raise DeprecationError(
132 "Setting a prior upon registering a parameter is deprecated. Please use "
133 ".register_prior('{name}_prior', prior, '{name}') instead.".format(name=name)
134 )
135 if "_parameters" not in self.__dict__:
136 raise AttributeError("Cannot assign parameter before Module.__init__() call")
137 super().register_parameter(name, parameter)
138
139 def register_prior(self, name, prior, param_or_closure, setting_closure=None):
140 """
141 Adds a prior to the module. The prior can be accessed as an attribute using the given name.
142
143 Args:
144 :attr:`name` (str):
145 The name of the prior
146 :attr:`prior` (Prior):
147 The prior to be registered`
148 :attr:`param_or_closure` (string or callable):
149 Either the name of the parameter, or a closure (which upon calling evalutes a function on
150 one or more parameters):
151 single parameter without a transform: `.register_prior("foo_prior", foo_prior, "foo_param")`
152 transform a single parameter (e.g. put a log-Normal prior on it):
153 `.register_prior("foo_prior", NormalPrior(0, 1), lambda: torch.log(self.foo_param))`
154 function of multiple parameters:
155 `.register_prior("foo2_prior", foo2_prior, lambda: f(self.param1, self.param2)))`
156 :attr:`setting_closure` (callable, optional):
157 A function taking in a tensor in (transformed) parameter space and initializing the
158 internal parameter representation to the proper value by applying the inverse transform.
159 Enables setting parametres directly in the transformed space, as well as sampling
160 parameter values from priors (see `sample_from_prior`)
161
162 """
163 if isinstance(param_or_closure, str):
164 if param_or_closure not in self._parameters:
165 raise AttributeError(
166 "Unknown parameter {name} for {module}".format(
167 name=param_or_closure, module=self.__class__.__name__
168 )
169 + "Make sure the parameter is registered before registering a prior."
170 )
171
172 def closure():
173 return self._parameters[param_or_closure]
174
175 if setting_closure is not None:
176 raise RuntimeError("Must specify a closure instead of a parameter name when providing setting_closure")
177
178 def setting_closure(val):
179 return self.initialize(**{param_or_closure: val})
180
181 else:
182 closure = param_or_closure
183 self.add_module(name, prior)
184 self._priors[name] = (prior, closure, setting_closure)
185
186 def sample_from_prior(self, prior_name):
187 """Sample parameter values from prior. Modifies the module's parameters in-place."""
188 if prior_name not in self._priors:
189 raise RuntimeError("Unknown prior name '{}'".format(prior_name))
190 prior, _, setting_closure = self._priors[prior_name]
191 if setting_closure is None:
192 raise RuntimeError("Must provide inverse transform to be able to sample from prior.")
193 setting_closure(prior.sample())
194
195 def update_added_loss_term(self, name, added_loss_term):
196 from .mlls import AddedLossTerm
197
198 if not isinstance(added_loss_term, AddedLossTerm):
199 raise RuntimeError("added_loss_term must be a AddedLossTerm")
200 if name not in self._added_loss_terms.keys():
201 raise RuntimeError("added_loss_term {} not registered".format(name))
202 self._added_loss_terms[name] = added_loss_term
203
204 def variational_parameters(self):
205 for _, param in self.named_variational_parameters():
206 yield param
207
208 def __getattr__(self, name):
209 try:
210 return super().__getattr__(name)
211 except AttributeError as e:
212 try:
213 return super().__getattribute__(name)
214 except AttributeError:
215 raise e
216
217
218 def _validate_module_outputs(outputs):
219 if isinstance(outputs, tuple):
220 if not all(
221 torch.is_tensor(output) or isinstance(output, Distribution) or isinstance(output, LazyTensor)
222 for output in outputs
223 ):
224 raise RuntimeError(
225 "All outputs must be a Distribution, torch.Tensor, or LazyTensor. "
226 "Got {}".format([output.__class__.__name__ for output in outputs])
227 )
228 if len(outputs) == 1:
229 outputs = outputs[0]
230 return outputs
231 elif torch.is_tensor(outputs) or isinstance(outputs, Distribution) or isinstance(outputs, LazyTensor):
232 return outputs
233 else:
234 raise RuntimeError(
235 "Output must be a Distribution, torch.Tensor, or LazyTensor. Got {}".format(outputs.__class__.__name__)
236 )
237
238
239 def _extract_named_added_loss_terms(module, memo=None, prefix=""):
240 if memo is None:
241 memo = set()
242 if hasattr(module, "_added_loss_terms"):
243 for name, strategy in module._added_loss_terms.items():
244 if strategy is not None and strategy not in memo:
245 memo.add(strategy)
246 yield prefix + ("." if prefix else "") + name, strategy
247 for mname, module_ in module.named_children():
248 submodule_prefix = prefix + ("." if prefix else "") + mname
249 for name, strategy in _extract_named_added_loss_terms(module=module_, memo=memo, prefix=submodule_prefix):
250 yield name, strategy
251
252
253 def _extract_named_priors(module, memo=None, prefix=""):
254 if memo is None:
255 memo = set()
256 if hasattr(module, "_priors"):
257 for name, (prior, closure, inv_closure) in module._priors.items():
258 if prior is not None and prior not in memo:
259 memo.add(prior)
260 full_name = ("." if prefix else "").join([prefix, name])
261 yield full_name, prior, closure, inv_closure
262 for mname, module_ in module.named_children():
263 submodule_prefix = prefix + ("." if prefix else "") + mname
264 for name, prior, closure, inv_closure in _extract_named_priors(module_, memo=memo, prefix=submodule_prefix):
265 yield name, prior, closure, inv_closure
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gpytorch/module.py b/gpytorch/module.py
--- a/gpytorch/module.py
+++ b/gpytorch/module.py
@@ -47,14 +47,26 @@
"""
Set a value for a parameter
- kwargs: (param_name, value) - parameter to initialize
+ kwargs: (param_name, value) - parameter to initialize.
+ Can also initialize recursively by passing in the full name of a
+ parameter. For example if model has attribute model.likelihood,
+ we can initialize the noise with either
+ `model.initialize(**{'likelihood.noise': 0.1})`
+ or
+ `model.likelihood.initialize(noise=0.1)`.
+ The former method would allow users to more easily store the
+ initialization values as one object.
+
Value can take the form of a tensor, a float, or an int
"""
for name, val in kwargs.items():
if isinstance(val, int):
val = float(val)
- if not hasattr(self, name):
+ if '.' in name:
+ module, name = self._get_module_and_name(name)
+ module.initialize(**{name: val})
+ elif not hasattr(self, name):
raise AttributeError("Unknown parameter {p} for {c}".format(p=name, c=self.__class__.__name__))
elif name not in self._parameters:
setattr(self, name, val)
|
{"golden_diff": "diff --git a/gpytorch/module.py b/gpytorch/module.py\n--- a/gpytorch/module.py\n+++ b/gpytorch/module.py\n@@ -47,14 +47,26 @@\n \"\"\"\n Set a value for a parameter\n \n- kwargs: (param_name, value) - parameter to initialize\n+ kwargs: (param_name, value) - parameter to initialize.\n+ Can also initialize recursively by passing in the full name of a\n+ parameter. For example if model has attribute model.likelihood,\n+ we can initialize the noise with either\n+ `model.initialize(**{'likelihood.noise': 0.1})`\n+ or\n+ `model.likelihood.initialize(noise=0.1)`.\n+ The former method would allow users to more easily store the\n+ initialization values as one object.\n+\n Value can take the form of a tensor, a float, or an int\n \"\"\"\n \n for name, val in kwargs.items():\n if isinstance(val, int):\n val = float(val)\n- if not hasattr(self, name):\n+ if '.' in name:\n+ module, name = self._get_module_and_name(name)\n+ module.initialize(**{name: val})\n+ elif not hasattr(self, name):\n raise AttributeError(\"Unknown parameter {p} for {c}\".format(p=name, c=self.__class__.__name__))\n elif name not in self._parameters:\n setattr(self, name, val)\n", "issue": "Recursively initialize Module parameters\nSay I have an `ExactGP` called `gp`. It would be great if I could just do `gp.initialize(kwargs)` and all the parameters will be initialized recursively. This would allow us to put all the initialization values in one place. \r\n\r\nNote that it would have to raise an error if there were a parameter name collision.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn\nfrom torch.distributions import Distribution\n\nfrom .lazy import LazyTensor\nfrom .utils.deprecation import DeprecationError\n\n\nclass Module(nn.Module):\n def __init__(self):\n super().__init__()\n self._added_loss_terms = OrderedDict()\n self._priors = OrderedDict()\n\n def __call__(self, *inputs, **kwargs):\n outputs = self.forward(*inputs, **kwargs)\n if isinstance(outputs, list):\n return [_validate_module_outputs(output) for output in outputs]\n return _validate_module_outputs(outputs)\n\n def _get_module_and_name(self, parameter_name):\n \"\"\"Get module and name from full parameter name.\"\"\"\n module, name = parameter_name.split(\".\", 1)\n if module in self._modules:\n return self.__getattr__(module), name\n else:\n raise AttributeError(\n \"Invalid parameter name {}. 
{} has no module {}\".format(parameter_name, type(self).__name__, module)\n )\n\n def added_loss_terms(self):\n for _, strategy in self.named_added_loss_terms():\n yield strategy\n\n def forward(self, *inputs, **kwargs):\n raise NotImplementedError\n\n def hyperparameters(self):\n for _, param in self.named_hyperparameters():\n yield param\n\n def initialize(self, **kwargs):\n \"\"\"\n Set a value for a parameter\n\n kwargs: (param_name, value) - parameter to initialize\n Value can take the form of a tensor, a float, or an int\n \"\"\"\n\n for name, val in kwargs.items():\n if isinstance(val, int):\n val = float(val)\n if not hasattr(self, name):\n raise AttributeError(\"Unknown parameter {p} for {c}\".format(p=name, c=self.__class__.__name__))\n elif name not in self._parameters:\n setattr(self, name, val)\n elif torch.is_tensor(val):\n try:\n self.__getattr__(name).data.copy_(val.expand_as(self.__getattr__(name)))\n except RuntimeError:\n self.__getattr__(name).data.copy_(val.view_as(self.__getattr__(name)))\n\n elif isinstance(val, float):\n self.__getattr__(name).data.fill_(val)\n else:\n raise AttributeError(\"Type {t} not valid for initializing parameter {p}\".format(t=type(val), p=name))\n\n # Ensure value is contained in support of prior (if present)\n prior_name = \"_\".join([name, \"prior\"])\n if prior_name in self._priors:\n prior, closure, _ = self._priors[prior_name]\n try:\n prior._validate_sample(closure())\n except ValueError as e:\n raise ValueError(\"Invalid input value for prior {}. Error:\\n{}\".format(prior_name, e))\n\n return self\n\n def named_added_loss_terms(self):\n \"\"\"Returns an iterator over module variational strategies, yielding both\n the name of the variational strategy as well as the strategy itself.\n\n Yields:\n (string, VariationalStrategy): Tuple containing the name of the\n strategy and the strategy\n\n \"\"\"\n return _extract_named_added_loss_terms(module=self, memo=None, prefix=\"\")\n\n def named_hyperparameters(self):\n for name, param in self.named_parameters():\n if \"variational_\" not in name:\n yield name, param\n\n def named_priors(self, memo=None, prefix=\"\"):\n \"\"\"Returns an iterator over the module's priors, yielding the name of the prior,\n the prior, the associated parameter names, and the transformation callable.\n\n Yields:\n (string, Prior, tuple((Parameter, callable)), callable): Tuple containing:\n - the name of the prior\n - the prior\n - a tuple of tuples (param, transform), one for each of the parameters associated with the prior\n - the prior's transform to be called on the parameters\n \"\"\"\n return _extract_named_priors(module=self, memo=None, prefix=\"\")\n\n def named_variational_parameters(self):\n for name, param in self.named_parameters():\n if \"variational_\" in name:\n yield name, param\n\n def register_added_loss_term(self, name):\n self._added_loss_terms[name] = None\n\n def register_parameter(self, name, parameter, prior=None):\n r\"\"\"\n Adds a parameter to the module. The parameter can be accessed as an attribute using the given name.\n\n Args:\n :attr:`name` (str):\n The name of the parameter\n :attr:`parameter` (torch.nn.Parameter):\n The parameter\n \"\"\"\n if prior is not None:\n raise DeprecationError(\n \"Setting a prior upon registering a parameter is deprecated. 
Please use \"\n \".register_prior('{name}_prior', prior, '{name}') instead.\".format(name=name)\n )\n if \"_parameters\" not in self.__dict__:\n raise AttributeError(\"Cannot assign parameter before Module.__init__() call\")\n super().register_parameter(name, parameter)\n\n def register_prior(self, name, prior, param_or_closure, setting_closure=None):\n \"\"\"\n Adds a prior to the module. The prior can be accessed as an attribute using the given name.\n\n Args:\n :attr:`name` (str):\n The name of the prior\n :attr:`prior` (Prior):\n The prior to be registered`\n :attr:`param_or_closure` (string or callable):\n Either the name of the parameter, or a closure (which upon calling evalutes a function on\n one or more parameters):\n single parameter without a transform: `.register_prior(\"foo_prior\", foo_prior, \"foo_param\")`\n transform a single parameter (e.g. put a log-Normal prior on it):\n `.register_prior(\"foo_prior\", NormalPrior(0, 1), lambda: torch.log(self.foo_param))`\n function of multiple parameters:\n `.register_prior(\"foo2_prior\", foo2_prior, lambda: f(self.param1, self.param2)))`\n :attr:`setting_closure` (callable, optional):\n A function taking in a tensor in (transformed) parameter space and initializing the\n internal parameter representation to the proper value by applying the inverse transform.\n Enables setting parametres directly in the transformed space, as well as sampling\n parameter values from priors (see `sample_from_prior`)\n\n \"\"\"\n if isinstance(param_or_closure, str):\n if param_or_closure not in self._parameters:\n raise AttributeError(\n \"Unknown parameter {name} for {module}\".format(\n name=param_or_closure, module=self.__class__.__name__\n )\n + \"Make sure the parameter is registered before registering a prior.\"\n )\n\n def closure():\n return self._parameters[param_or_closure]\n\n if setting_closure is not None:\n raise RuntimeError(\"Must specify a closure instead of a parameter name when providing setting_closure\")\n\n def setting_closure(val):\n return self.initialize(**{param_or_closure: val})\n\n else:\n closure = param_or_closure\n self.add_module(name, prior)\n self._priors[name] = (prior, closure, setting_closure)\n\n def sample_from_prior(self, prior_name):\n \"\"\"Sample parameter values from prior. 
Modifies the module's parameters in-place.\"\"\"\n if prior_name not in self._priors:\n raise RuntimeError(\"Unknown prior name '{}'\".format(prior_name))\n prior, _, setting_closure = self._priors[prior_name]\n if setting_closure is None:\n raise RuntimeError(\"Must provide inverse transform to be able to sample from prior.\")\n setting_closure(prior.sample())\n\n def update_added_loss_term(self, name, added_loss_term):\n from .mlls import AddedLossTerm\n\n if not isinstance(added_loss_term, AddedLossTerm):\n raise RuntimeError(\"added_loss_term must be a AddedLossTerm\")\n if name not in self._added_loss_terms.keys():\n raise RuntimeError(\"added_loss_term {} not registered\".format(name))\n self._added_loss_terms[name] = added_loss_term\n\n def variational_parameters(self):\n for _, param in self.named_variational_parameters():\n yield param\n\n def __getattr__(self, name):\n try:\n return super().__getattr__(name)\n except AttributeError as e:\n try:\n return super().__getattribute__(name)\n except AttributeError:\n raise e\n\n\ndef _validate_module_outputs(outputs):\n if isinstance(outputs, tuple):\n if not all(\n torch.is_tensor(output) or isinstance(output, Distribution) or isinstance(output, LazyTensor)\n for output in outputs\n ):\n raise RuntimeError(\n \"All outputs must be a Distribution, torch.Tensor, or LazyTensor. \"\n \"Got {}\".format([output.__class__.__name__ for output in outputs])\n )\n if len(outputs) == 1:\n outputs = outputs[0]\n return outputs\n elif torch.is_tensor(outputs) or isinstance(outputs, Distribution) or isinstance(outputs, LazyTensor):\n return outputs\n else:\n raise RuntimeError(\n \"Output must be a Distribution, torch.Tensor, or LazyTensor. Got {}\".format(outputs.__class__.__name__)\n )\n\n\ndef _extract_named_added_loss_terms(module, memo=None, prefix=\"\"):\n if memo is None:\n memo = set()\n if hasattr(module, \"_added_loss_terms\"):\n for name, strategy in module._added_loss_terms.items():\n if strategy is not None and strategy not in memo:\n memo.add(strategy)\n yield prefix + (\".\" if prefix else \"\") + name, strategy\n for mname, module_ in module.named_children():\n submodule_prefix = prefix + (\".\" if prefix else \"\") + mname\n for name, strategy in _extract_named_added_loss_terms(module=module_, memo=memo, prefix=submodule_prefix):\n yield name, strategy\n\n\ndef _extract_named_priors(module, memo=None, prefix=\"\"):\n if memo is None:\n memo = set()\n if hasattr(module, \"_priors\"):\n for name, (prior, closure, inv_closure) in module._priors.items():\n if prior is not None and prior not in memo:\n memo.add(prior)\n full_name = (\".\" if prefix else \"\").join([prefix, name])\n yield full_name, prior, closure, inv_closure\n for mname, module_ in module.named_children():\n submodule_prefix = prefix + (\".\" if prefix else \"\") + mname\n for name, prior, closure, inv_closure in _extract_named_priors(module_, memo=memo, prefix=submodule_prefix):\n yield name, prior, closure, inv_closure\n", "path": "gpytorch/module.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn\nfrom torch.distributions import Distribution\n\nfrom .lazy import LazyTensor\nfrom .utils.deprecation import DeprecationError\n\n\nclass Module(nn.Module):\n def __init__(self):\n super().__init__()\n self._added_loss_terms = OrderedDict()\n self._priors = OrderedDict()\n\n def __call__(self, *inputs, **kwargs):\n outputs = self.forward(*inputs, **kwargs)\n if isinstance(outputs, list):\n 
return [_validate_module_outputs(output) for output in outputs]\n return _validate_module_outputs(outputs)\n\n def _get_module_and_name(self, parameter_name):\n \"\"\"Get module and name from full parameter name.\"\"\"\n module, name = parameter_name.split(\".\", 1)\n if module in self._modules:\n return self.__getattr__(module), name\n else:\n raise AttributeError(\n \"Invalid parameter name {}. {} has no module {}\".format(parameter_name, type(self).__name__, module)\n )\n\n def added_loss_terms(self):\n for _, strategy in self.named_added_loss_terms():\n yield strategy\n\n def forward(self, *inputs, **kwargs):\n raise NotImplementedError\n\n def hyperparameters(self):\n for _, param in self.named_hyperparameters():\n yield param\n\n def initialize(self, **kwargs):\n \"\"\"\n Set a value for a parameter\n\n kwargs: (param_name, value) - parameter to initialize.\n Can also initialize recursively by passing in the full name of a\n parameter. For example if model has attribute model.likelihood,\n we can initialize the noise with either\n `model.initialize(**{'likelihood.noise': 0.1})`\n or\n `model.likelihood.initialize(noise=0.1)`.\n The former method would allow users to more easily store the\n initialization values as one object.\n\n Value can take the form of a tensor, a float, or an int\n \"\"\"\n\n for name, val in kwargs.items():\n if isinstance(val, int):\n val = float(val)\n if '.' in name:\n module, name = self._get_module_and_name(name)\n module.initialize(**{name: val})\n elif not hasattr(self, name):\n raise AttributeError(\"Unknown parameter {p} for {c}\".format(p=name, c=self.__class__.__name__))\n elif name not in self._parameters:\n setattr(self, name, val)\n elif torch.is_tensor(val):\n try:\n self.__getattr__(name).data.copy_(val.expand_as(self.__getattr__(name)))\n except RuntimeError:\n self.__getattr__(name).data.copy_(val.view_as(self.__getattr__(name)))\n\n elif isinstance(val, float):\n self.__getattr__(name).data.fill_(val)\n else:\n raise AttributeError(\"Type {t} not valid for initializing parameter {p}\".format(t=type(val), p=name))\n\n # Ensure value is contained in support of prior (if present)\n prior_name = \"_\".join([name, \"prior\"])\n if prior_name in self._priors:\n prior, closure, _ = self._priors[prior_name]\n try:\n prior._validate_sample(closure())\n except ValueError as e:\n raise ValueError(\"Invalid input value for prior {}. 
Error:\\n{}\".format(prior_name, e))\n\n return self\n\n def named_added_loss_terms(self):\n \"\"\"Returns an iterator over module variational strategies, yielding both\n the name of the variational strategy as well as the strategy itself.\n\n Yields:\n (string, VariationalStrategy): Tuple containing the name of the\n strategy and the strategy\n\n \"\"\"\n return _extract_named_added_loss_terms(module=self, memo=None, prefix=\"\")\n\n def named_hyperparameters(self):\n for name, param in self.named_parameters():\n if \"variational_\" not in name:\n yield name, param\n\n def named_priors(self, memo=None, prefix=\"\"):\n \"\"\"Returns an iterator over the module's priors, yielding the name of the prior,\n the prior, the associated parameter names, and the transformation callable.\n\n Yields:\n (string, Prior, tuple((Parameter, callable)), callable): Tuple containing:\n - the name of the prior\n - the prior\n - a tuple of tuples (param, transform), one for each of the parameters associated with the prior\n - the prior's transform to be called on the parameters\n \"\"\"\n return _extract_named_priors(module=self, memo=None, prefix=\"\")\n\n def named_variational_parameters(self):\n for name, param in self.named_parameters():\n if \"variational_\" in name:\n yield name, param\n\n def register_added_loss_term(self, name):\n self._added_loss_terms[name] = None\n\n def register_parameter(self, name, parameter, prior=None):\n r\"\"\"\n Adds a parameter to the module. The parameter can be accessed as an attribute using the given name.\n\n Args:\n :attr:`name` (str):\n The name of the parameter\n :attr:`parameter` (torch.nn.Parameter):\n The parameter\n \"\"\"\n if prior is not None:\n raise DeprecationError(\n \"Setting a prior upon registering a parameter is deprecated. Please use \"\n \".register_prior('{name}_prior', prior, '{name}') instead.\".format(name=name)\n )\n if \"_parameters\" not in self.__dict__:\n raise AttributeError(\"Cannot assign parameter before Module.__init__() call\")\n super().register_parameter(name, parameter)\n\n def register_prior(self, name, prior, param_or_closure, setting_closure=None):\n \"\"\"\n Adds a prior to the module. The prior can be accessed as an attribute using the given name.\n\n Args:\n :attr:`name` (str):\n The name of the prior\n :attr:`prior` (Prior):\n The prior to be registered`\n :attr:`param_or_closure` (string or callable):\n Either the name of the parameter, or a closure (which upon calling evalutes a function on\n one or more parameters):\n single parameter without a transform: `.register_prior(\"foo_prior\", foo_prior, \"foo_param\")`\n transform a single parameter (e.g. 
put a log-Normal prior on it):\n `.register_prior(\"foo_prior\", NormalPrior(0, 1), lambda: torch.log(self.foo_param))`\n function of multiple parameters:\n `.register_prior(\"foo2_prior\", foo2_prior, lambda: f(self.param1, self.param2)))`\n :attr:`setting_closure` (callable, optional):\n A function taking in a tensor in (transformed) parameter space and initializing the\n internal parameter representation to the proper value by applying the inverse transform.\n Enables setting parametres directly in the transformed space, as well as sampling\n parameter values from priors (see `sample_from_prior`)\n\n \"\"\"\n if isinstance(param_or_closure, str):\n if param_or_closure not in self._parameters:\n raise AttributeError(\n \"Unknown parameter {name} for {module}\".format(\n name=param_or_closure, module=self.__class__.__name__\n )\n + \"Make sure the parameter is registered before registering a prior.\"\n )\n\n def closure():\n return self._parameters[param_or_closure]\n\n if setting_closure is not None:\n raise RuntimeError(\"Must specify a closure instead of a parameter name when providing setting_closure\")\n\n def setting_closure(val):\n return self.initialize(**{param_or_closure: val})\n\n else:\n closure = param_or_closure\n self.add_module(name, prior)\n self._priors[name] = (prior, closure, setting_closure)\n\n def sample_from_prior(self, prior_name):\n \"\"\"Sample parameter values from prior. Modifies the module's parameters in-place.\"\"\"\n if prior_name not in self._priors:\n raise RuntimeError(\"Unknown prior name '{}'\".format(prior_name))\n prior, _, setting_closure = self._priors[prior_name]\n if setting_closure is None:\n raise RuntimeError(\"Must provide inverse transform to be able to sample from prior.\")\n setting_closure(prior.sample())\n\n def update_added_loss_term(self, name, added_loss_term):\n from .mlls import AddedLossTerm\n\n if not isinstance(added_loss_term, AddedLossTerm):\n raise RuntimeError(\"added_loss_term must be a AddedLossTerm\")\n if name not in self._added_loss_terms.keys():\n raise RuntimeError(\"added_loss_term {} not registered\".format(name))\n self._added_loss_terms[name] = added_loss_term\n\n def variational_parameters(self):\n for _, param in self.named_variational_parameters():\n yield param\n\n def __getattr__(self, name):\n try:\n return super().__getattr__(name)\n except AttributeError as e:\n try:\n return super().__getattribute__(name)\n except AttributeError:\n raise e\n\n\ndef _validate_module_outputs(outputs):\n if isinstance(outputs, tuple):\n if not all(\n torch.is_tensor(output) or isinstance(output, Distribution) or isinstance(output, LazyTensor)\n for output in outputs\n ):\n raise RuntimeError(\n \"All outputs must be a Distribution, torch.Tensor, or LazyTensor. \"\n \"Got {}\".format([output.__class__.__name__ for output in outputs])\n )\n if len(outputs) == 1:\n outputs = outputs[0]\n return outputs\n elif torch.is_tensor(outputs) or isinstance(outputs, Distribution) or isinstance(outputs, LazyTensor):\n return outputs\n else:\n raise RuntimeError(\n \"Output must be a Distribution, torch.Tensor, or LazyTensor. 
Got {}\".format(outputs.__class__.__name__)\n )\n\n\ndef _extract_named_added_loss_terms(module, memo=None, prefix=\"\"):\n if memo is None:\n memo = set()\n if hasattr(module, \"_added_loss_terms\"):\n for name, strategy in module._added_loss_terms.items():\n if strategy is not None and strategy not in memo:\n memo.add(strategy)\n yield prefix + (\".\" if prefix else \"\") + name, strategy\n for mname, module_ in module.named_children():\n submodule_prefix = prefix + (\".\" if prefix else \"\") + mname\n for name, strategy in _extract_named_added_loss_terms(module=module_, memo=memo, prefix=submodule_prefix):\n yield name, strategy\n\n\ndef _extract_named_priors(module, memo=None, prefix=\"\"):\n if memo is None:\n memo = set()\n if hasattr(module, \"_priors\"):\n for name, (prior, closure, inv_closure) in module._priors.items():\n if prior is not None and prior not in memo:\n memo.add(prior)\n full_name = (\".\" if prefix else \"\").join([prefix, name])\n yield full_name, prior, closure, inv_closure\n for mname, module_ in module.named_children():\n submodule_prefix = prefix + (\".\" if prefix else \"\") + mname\n for name, prior, closure, inv_closure in _extract_named_priors(module_, memo=memo, prefix=submodule_prefix):\n yield name, prior, closure, inv_closure\n", "path": "gpytorch/module.py"}]}
| 3,286 | 316 |
gh_patches_debug_27471
|
rasdani/github-patches
|
git_diff
|
wger-project__wger-235
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate weight entries in CSV import
It seems it's possible to trigger a uniqueness constraint error using the import CSV function for the weight entries. I could have sworn this was already fixed, but it looks like it isn't.
During import the view should make sure that duplicate entries are not saved.
--- END ISSUE ---
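Before looking at the files, the failure mode can be reproduced with plain Python: the original import loop checks uniqueness on the `(date, weight)` pair, so two CSV rows with the same date but different weights both survive the check and later collide on the per-user/date uniqueness constraint mentioned in the issue. The sample rows and date format below are made-up inputs used only to illustrate that check.

```python
import datetime
import decimal

rows = [("2016-01-01", "65.5"), ("2016-01-01", "66.0")]  # same date, different weights
distinct_weight_entries = []

for date_str, weight_str in rows:
    parsed_date = datetime.datetime.strptime(date_str, '%Y-%m-%d')
    parsed_weight = decimal.Decimal(weight_str)
    # Original check: the (date, weight) tuple is "unique" because the weights differ,
    # so nothing stops a second entry for the same date.
    if (parsed_date, parsed_weight) not in distinct_weight_entries:
        distinct_weight_entries.append((parsed_date, parsed_weight))

print(distinct_weight_entries)  # two tuples for 2016-01-01 -> two WeightEntry objects for one date
```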
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wger/weight/helpers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This file is part of wger Workout Manager.
4 #
5 # wger Workout Manager is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU Affero General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # wger Workout Manager is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU Affero General Public License
16
17 import logging
18 import six
19 import datetime
20 import decimal
21 import csv
22 import json
23 from collections import OrderedDict
24
25 from django.core.cache import cache
26
27 from wger.utils.helpers import DecimalJsonEncoder
28 from wger.utils.cache import cache_mapper
29 from wger.weight.models import WeightEntry
30 from wger.manager.models import WorkoutSession
31 from wger.manager.models import WorkoutLog
32
33 logger = logging.getLogger(__name__)
34
35
36 def parse_weight_csv(request, cleaned_data):
37
38 try:
39 dialect = csv.Sniffer().sniff(cleaned_data['csv_input'])
40 except csv.Error:
41 dialect = 'excel'
42
43 # csv.reader expects a file-like object, so use StringIO
44 parsed_csv = csv.reader(six.StringIO(cleaned_data['csv_input']),
45 dialect)
46 distinct_weight_entries = []
47 weight_list = []
48 error_list = []
49
50 # Process the CSV items first
51 for row in parsed_csv:
52 try:
53 parsed_date = datetime.datetime.strptime(row[0], cleaned_data['date_format'])
54 parsed_weight = decimal.Decimal(row[1].replace(',', '.'))
55 duplicate_date_in_db = WeightEntry.objects.filter(date=parsed_date,
56 user=request.user).exists()
57 # within the list there are no duplicates
58 unique_among_csv = (parsed_date, parsed_weight) not in distinct_weight_entries
59 # there is no existing weight entry in the database for that date
60 unique_in_db = not duplicate_date_in_db
61
62 if unique_among_csv and unique_in_db:
63 distinct_weight_entries.append((parsed_date, parsed_weight))
64 else:
65 error_list.append(row)
66
67 except (ValueError, IndexError, decimal.InvalidOperation):
68 error_list.append(row)
69
70 # Create the valid weight entries
71 for date, weight in distinct_weight_entries:
72 weight_list.append(WeightEntry(date=date,
73 weight=weight,
74 user=request.user))
75
76 return (weight_list, error_list)
77
78
79 def group_log_entries(user, year, month, day=None):
80 '''
81 Processes and regroups a list of log entries so they can be more easily
82 used in the different calendar pages
83
84 :param user: the user to filter the logs for
85 :param year: year
86 :param month: month
87 :param day: optional, day
88
89 :return: a dictionary with grouped logs by date and exercise
90 '''
91 if day:
92 log_hash = hash((user.pk, year, month, day))
93 else:
94 log_hash = hash((user.pk, year, month))
95
96 # There can be workout sessions without any associated log entries, so it is
97 # not enough so simply iterate through the logs
98 if day:
99 filter_date = datetime.date(year, month, day)
100 logs = WorkoutLog.objects.filter(user=user, date=filter_date)
101 sessions = WorkoutSession.objects.filter(user=user, date=filter_date)
102
103 else:
104 logs = WorkoutLog.objects.filter(user=user,
105 date__year=year,
106 date__month=month)
107
108 sessions = WorkoutSession.objects.filter(user=user,
109 date__year=year,
110 date__month=month)
111
112 logs = logs.order_by('date', 'id')
113 out = cache.get(cache_mapper.get_workout_log_list(log_hash))
114 # out = OrderedDict()
115
116 if not out:
117 out = OrderedDict()
118
119 # Logs
120 for entry in logs:
121 if not out.get(entry.date):
122 out[entry.date] = {'date': entry.date,
123 'workout': entry.workout,
124 'session': entry.get_workout_session(),
125 'logs': OrderedDict()}
126
127 if not out[entry.date]['logs'].get(entry.exercise):
128 out[entry.date]['logs'][entry.exercise] = []
129
130 out[entry.date]['logs'][entry.exercise].append(entry)
131
132 # Sessions
133 for entry in sessions:
134 if not out.get(entry.date):
135 out[entry.date] = {'date': entry.date,
136 'workout': entry.workout,
137 'session': entry,
138 'logs': {}}
139
140 cache.set(cache_mapper.get_workout_log_list(log_hash), out)
141 return out
142
143
144 def process_log_entries(logs):
145 '''
146 Processes and regroups a list of log entries so they can be rendered
147 and passed to the D3 library to render a chart
148 '''
149
150 reps = []
151 entry_log = OrderedDict()
152 chart_data = []
153 max_weight = {}
154
155 # Group by date
156 for entry in logs:
157 if entry.reps not in reps:
158 reps.append(entry.reps)
159
160 if not entry_log.get(entry.date):
161 entry_log[entry.date] = []
162 entry_log[entry.date].append(entry)
163
164 # Find the maximum weight per date per repetition.
165 # If on a day there are several entries with the same number of
166 # repetitions, but different weights, only the entry with the
167 # higher weight is shown in the chart
168 if not max_weight.get(entry.date):
169 max_weight[entry.date] = {entry.reps: entry.weight}
170
171 if not max_weight[entry.date].get(entry.reps):
172 max_weight[entry.date][entry.reps] = entry.weight
173
174 if entry.weight > max_weight[entry.date][entry.reps]:
175 max_weight[entry.date][entry.reps] = entry.weight
176
177 # Group by repetitions
178 reps_list = {}
179 for entry in logs:
180 temp = {'date': '%s' % entry.date,
181 'id': 'manager:workout:log-%s' % entry.id}
182
183 # Only unique date, rep and weight combinations
184 if reps_list.get((entry.date, entry.reps, entry.weight)):
185 continue
186 else:
187 reps_list[(entry.date, entry.reps, entry.weight)] = True
188
189 # Only add if weight is the maximum for the day
190 if entry.weight != max_weight[entry.date][entry.reps]:
191 continue
192
193 for rep in reps:
194 if entry.reps == rep:
195 temp[rep] = entry.weight
196 else:
197 # Mark entries without data, this is later filtered out by D3.
198 # We use the string 'n.a' instead of 0 to differentiate actual exercises
199 # where no weight was used.
200 temp[rep] = 'n.a'
201 chart_data.append(temp)
202
203 return entry_log, json.dumps(chart_data, cls=DecimalJsonEncoder)
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wger/weight/helpers.py b/wger/weight/helpers.py
--- a/wger/weight/helpers.py
+++ b/wger/weight/helpers.py
@@ -44,6 +44,7 @@
parsed_csv = csv.reader(six.StringIO(cleaned_data['csv_input']),
dialect)
distinct_weight_entries = []
+ entry_dates = set()
weight_list = []
error_list = []
@@ -54,13 +55,15 @@
parsed_weight = decimal.Decimal(row[1].replace(',', '.'))
duplicate_date_in_db = WeightEntry.objects.filter(date=parsed_date,
user=request.user).exists()
- # within the list there are no duplicates
- unique_among_csv = (parsed_date, parsed_weight) not in distinct_weight_entries
+ # within the list there are no duplicate dates
+ unique_among_csv = parsed_date not in entry_dates
+
# there is no existing weight entry in the database for that date
unique_in_db = not duplicate_date_in_db
if unique_among_csv and unique_in_db:
distinct_weight_entries.append((parsed_date, parsed_weight))
+ entry_dates.add(parsed_date)
else:
error_list.append(row)
|
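A standalone sketch of the fixed logic is shown below. Only the deduplication pattern comes from the patch (`entry_dates` as a set of already-seen dates); the CSV sample, the `;` delimiter, the date format, and the `existing_dates` set standing in for the `WeightEntry.objects.filter(...)` lookup are assumptions made for the example.

```python
import csv
import datetime
import decimal
import io

csv_input = "2016-01-01;65.5\n2016-01-01;66.0\n2016-01-02;65.8"
existing_dates = {datetime.datetime(2016, 1, 2)}  # stand-in for the database lookup

distinct_weight_entries = []
entry_dates = set()
error_list = []

for row in csv.reader(io.StringIO(csv_input), delimiter=';'):
    try:
        parsed_date = datetime.datetime.strptime(row[0], '%Y-%m-%d')
        parsed_weight = decimal.Decimal(row[1].replace(',', '.'))
        unique_among_csv = parsed_date not in entry_dates   # no duplicate dates within the CSV
        unique_in_db = parsed_date not in existing_dates    # no stored entry for that date
        if unique_among_csv and unique_in_db:
            distinct_weight_entries.append((parsed_date, parsed_weight))
            entry_dates.add(parsed_date)
        else:
            error_list.append(row)
    except (ValueError, IndexError, decimal.InvalidOperation):
        error_list.append(row)

print(distinct_weight_entries)  # only the 2016-01-01 / 65.5 row survives
print(error_list)               # the duplicate date and the already-stored date are reported as errors
```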
{"golden_diff": "diff --git a/wger/weight/helpers.py b/wger/weight/helpers.py\n--- a/wger/weight/helpers.py\n+++ b/wger/weight/helpers.py\n@@ -44,6 +44,7 @@\n parsed_csv = csv.reader(six.StringIO(cleaned_data['csv_input']),\n dialect)\n distinct_weight_entries = []\n+ entry_dates = set()\n weight_list = []\n error_list = []\n \n@@ -54,13 +55,15 @@\n parsed_weight = decimal.Decimal(row[1].replace(',', '.'))\n duplicate_date_in_db = WeightEntry.objects.filter(date=parsed_date,\n user=request.user).exists()\n- # within the list there are no duplicates\n- unique_among_csv = (parsed_date, parsed_weight) not in distinct_weight_entries\n+ # within the list there are no duplicate dates\n+ unique_among_csv = parsed_date not in entry_dates\n+\n # there is no existing weight entry in the database for that date\n unique_in_db = not duplicate_date_in_db\n \n if unique_among_csv and unique_in_db:\n distinct_weight_entries.append((parsed_date, parsed_weight))\n+ entry_dates.add(parsed_date)\n else:\n error_list.append(row)\n", "issue": "Duplicate weight entries in CSV import\nIt seems it's possible to trigger a uniqueness constraint error using the import CSV function for the weight entries. I could have sworn this was already fixed, but it looks it isn't.\n\nDuring import the view should make sure that duplicate entries are not saved.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This file is part of wger Workout Manager.\n#\n# wger Workout Manager is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# wger Workout Manager is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n\nimport logging\nimport six\nimport datetime\nimport decimal\nimport csv\nimport json\nfrom collections import OrderedDict\n\nfrom django.core.cache import cache\n\nfrom wger.utils.helpers import DecimalJsonEncoder\nfrom wger.utils.cache import cache_mapper\nfrom wger.weight.models import WeightEntry\nfrom wger.manager.models import WorkoutSession\nfrom wger.manager.models import WorkoutLog\n\nlogger = logging.getLogger(__name__)\n\n\ndef parse_weight_csv(request, cleaned_data):\n\n try:\n dialect = csv.Sniffer().sniff(cleaned_data['csv_input'])\n except csv.Error:\n dialect = 'excel'\n\n # csv.reader expects a file-like object, so use StringIO\n parsed_csv = csv.reader(six.StringIO(cleaned_data['csv_input']),\n dialect)\n distinct_weight_entries = []\n weight_list = []\n error_list = []\n\n # Process the CSV items first\n for row in parsed_csv:\n try:\n parsed_date = datetime.datetime.strptime(row[0], cleaned_data['date_format'])\n parsed_weight = decimal.Decimal(row[1].replace(',', '.'))\n duplicate_date_in_db = WeightEntry.objects.filter(date=parsed_date,\n user=request.user).exists()\n # within the list there are no duplicates\n unique_among_csv = (parsed_date, parsed_weight) not in distinct_weight_entries\n # there is no existing weight entry in the database for that date\n unique_in_db = not duplicate_date_in_db\n\n if unique_among_csv and unique_in_db:\n distinct_weight_entries.append((parsed_date, parsed_weight))\n else:\n error_list.append(row)\n\n except (ValueError, IndexError, decimal.InvalidOperation):\n error_list.append(row)\n\n # Create the valid weight entries\n for date, weight in distinct_weight_entries:\n weight_list.append(WeightEntry(date=date,\n weight=weight,\n user=request.user))\n\n return (weight_list, error_list)\n\n\ndef group_log_entries(user, year, month, day=None):\n '''\n Processes and regroups a list of log entries so they can be more easily\n used in the different calendar pages\n\n :param user: the user to filter the logs for\n :param year: year\n :param month: month\n :param day: optional, day\n\n :return: a dictionary with grouped logs by date and exercise\n '''\n if day:\n log_hash = hash((user.pk, year, month, day))\n else:\n log_hash = hash((user.pk, year, month))\n\n # There can be workout sessions without any associated log entries, so it is\n # not enough so simply iterate through the logs\n if day:\n filter_date = datetime.date(year, month, day)\n logs = WorkoutLog.objects.filter(user=user, date=filter_date)\n sessions = WorkoutSession.objects.filter(user=user, date=filter_date)\n\n else:\n logs = WorkoutLog.objects.filter(user=user,\n date__year=year,\n date__month=month)\n\n sessions = WorkoutSession.objects.filter(user=user,\n date__year=year,\n date__month=month)\n\n logs = logs.order_by('date', 'id')\n out = cache.get(cache_mapper.get_workout_log_list(log_hash))\n # out = OrderedDict()\n\n if not out:\n out = OrderedDict()\n\n # Logs\n for entry in logs:\n if not out.get(entry.date):\n out[entry.date] = {'date': entry.date,\n 'workout': entry.workout,\n 'session': entry.get_workout_session(),\n 'logs': OrderedDict()}\n\n if not out[entry.date]['logs'].get(entry.exercise):\n out[entry.date]['logs'][entry.exercise] = []\n\n out[entry.date]['logs'][entry.exercise].append(entry)\n\n # Sessions\n for entry in sessions:\n if not out.get(entry.date):\n out[entry.date] = {'date': entry.date,\n 'workout': 
entry.workout,\n 'session': entry,\n 'logs': {}}\n\n cache.set(cache_mapper.get_workout_log_list(log_hash), out)\n return out\n\n\ndef process_log_entries(logs):\n '''\n Processes and regroups a list of log entries so they can be rendered\n and passed to the D3 library to render a chart\n '''\n\n reps = []\n entry_log = OrderedDict()\n chart_data = []\n max_weight = {}\n\n # Group by date\n for entry in logs:\n if entry.reps not in reps:\n reps.append(entry.reps)\n\n if not entry_log.get(entry.date):\n entry_log[entry.date] = []\n entry_log[entry.date].append(entry)\n\n # Find the maximum weight per date per repetition.\n # If on a day there are several entries with the same number of\n # repetitions, but different weights, only the entry with the\n # higher weight is shown in the chart\n if not max_weight.get(entry.date):\n max_weight[entry.date] = {entry.reps: entry.weight}\n\n if not max_weight[entry.date].get(entry.reps):\n max_weight[entry.date][entry.reps] = entry.weight\n\n if entry.weight > max_weight[entry.date][entry.reps]:\n max_weight[entry.date][entry.reps] = entry.weight\n\n # Group by repetitions\n reps_list = {}\n for entry in logs:\n temp = {'date': '%s' % entry.date,\n 'id': 'manager:workout:log-%s' % entry.id}\n\n # Only unique date, rep and weight combinations\n if reps_list.get((entry.date, entry.reps, entry.weight)):\n continue\n else:\n reps_list[(entry.date, entry.reps, entry.weight)] = True\n\n # Only add if weight is the maximum for the day\n if entry.weight != max_weight[entry.date][entry.reps]:\n continue\n\n for rep in reps:\n if entry.reps == rep:\n temp[rep] = entry.weight\n else:\n # Mark entries without data, this is later filtered out by D3.\n # We use the string 'n.a' instead of 0 to differentiate actual exercises\n # where no weight was used.\n temp[rep] = 'n.a'\n chart_data.append(temp)\n\n return entry_log, json.dumps(chart_data, cls=DecimalJsonEncoder)\n", "path": "wger/weight/helpers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This file is part of wger Workout Manager.\n#\n# wger Workout Manager is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# wger Workout Manager is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n\nimport logging\nimport six\nimport datetime\nimport decimal\nimport csv\nimport json\nfrom collections import OrderedDict\n\nfrom django.core.cache import cache\n\nfrom wger.utils.helpers import DecimalJsonEncoder\nfrom wger.utils.cache import cache_mapper\nfrom wger.weight.models import WeightEntry\nfrom wger.manager.models import WorkoutSession\nfrom wger.manager.models import WorkoutLog\n\nlogger = logging.getLogger(__name__)\n\n\ndef parse_weight_csv(request, cleaned_data):\n\n try:\n dialect = csv.Sniffer().sniff(cleaned_data['csv_input'])\n except csv.Error:\n dialect = 'excel'\n\n # csv.reader expects a file-like object, so use StringIO\n parsed_csv = csv.reader(six.StringIO(cleaned_data['csv_input']),\n dialect)\n distinct_weight_entries = []\n entry_dates = set()\n weight_list = []\n error_list = []\n\n # Process the CSV items first\n for row in parsed_csv:\n try:\n parsed_date = datetime.datetime.strptime(row[0], cleaned_data['date_format'])\n parsed_weight = decimal.Decimal(row[1].replace(',', '.'))\n duplicate_date_in_db = WeightEntry.objects.filter(date=parsed_date,\n user=request.user).exists()\n # within the list there are no duplicate dates\n unique_among_csv = parsed_date not in entry_dates\n\n # there is no existing weight entry in the database for that date\n unique_in_db = not duplicate_date_in_db\n\n if unique_among_csv and unique_in_db:\n distinct_weight_entries.append((parsed_date, parsed_weight))\n entry_dates.add(parsed_date)\n else:\n error_list.append(row)\n\n except (ValueError, IndexError, decimal.InvalidOperation):\n error_list.append(row)\n\n # Create the valid weight entries\n for date, weight in distinct_weight_entries:\n weight_list.append(WeightEntry(date=date,\n weight=weight,\n user=request.user))\n\n return (weight_list, error_list)\n\n\ndef group_log_entries(user, year, month, day=None):\n '''\n Processes and regroups a list of log entries so they can be more easily\n used in the different calendar pages\n\n :param user: the user to filter the logs for\n :param year: year\n :param month: month\n :param day: optional, day\n\n :return: a dictionary with grouped logs by date and exercise\n '''\n if day:\n log_hash = hash((user.pk, year, month, day))\n else:\n log_hash = hash((user.pk, year, month))\n\n # There can be workout sessions without any associated log entries, so it is\n # not enough so simply iterate through the logs\n if day:\n filter_date = datetime.date(year, month, day)\n logs = WorkoutLog.objects.filter(user=user, date=filter_date)\n sessions = WorkoutSession.objects.filter(user=user, date=filter_date)\n\n else:\n logs = WorkoutLog.objects.filter(user=user,\n date__year=year,\n date__month=month)\n\n sessions = WorkoutSession.objects.filter(user=user,\n date__year=year,\n date__month=month)\n\n logs = logs.order_by('date', 'id')\n out = cache.get(cache_mapper.get_workout_log_list(log_hash))\n # out = OrderedDict()\n\n if not out:\n out = OrderedDict()\n\n # Logs\n for entry in logs:\n if not out.get(entry.date):\n out[entry.date] = {'date': entry.date,\n 'workout': entry.workout,\n 'session': entry.get_workout_session(),\n 'logs': OrderedDict()}\n\n if not out[entry.date]['logs'].get(entry.exercise):\n out[entry.date]['logs'][entry.exercise] = []\n\n out[entry.date]['logs'][entry.exercise].append(entry)\n\n # Sessions\n for entry in sessions:\n if not out.get(entry.date):\n out[entry.date] = {'date': 
entry.date,\n 'workout': entry.workout,\n 'session': entry,\n 'logs': {}}\n\n cache.set(cache_mapper.get_workout_log_list(log_hash), out)\n return out\n\n\ndef process_log_entries(logs):\n '''\n Processes and regroups a list of log entries so they can be rendered\n and passed to the D3 library to render a chart\n '''\n\n reps = []\n entry_log = OrderedDict()\n chart_data = []\n max_weight = {}\n\n # Group by date\n for entry in logs:\n if entry.reps not in reps:\n reps.append(entry.reps)\n\n if not entry_log.get(entry.date):\n entry_log[entry.date] = []\n entry_log[entry.date].append(entry)\n\n # Find the maximum weight per date per repetition.\n # If on a day there are several entries with the same number of\n # repetitions, but different weights, only the entry with the\n # higher weight is shown in the chart\n if not max_weight.get(entry.date):\n max_weight[entry.date] = {entry.reps: entry.weight}\n\n if not max_weight[entry.date].get(entry.reps):\n max_weight[entry.date][entry.reps] = entry.weight\n\n if entry.weight > max_weight[entry.date][entry.reps]:\n max_weight[entry.date][entry.reps] = entry.weight\n\n # Group by repetitions\n reps_list = {}\n for entry in logs:\n temp = {'date': '%s' % entry.date,\n 'id': 'manager:workout:log-%s' % entry.id}\n\n # Only unique date, rep and weight combinations\n if reps_list.get((entry.date, entry.reps, entry.weight)):\n continue\n else:\n reps_list[(entry.date, entry.reps, entry.weight)] = True\n\n # Only add if weight is the maximum for the day\n if entry.weight != max_weight[entry.date][entry.reps]:\n continue\n\n for rep in reps:\n if entry.reps == rep:\n temp[rep] = entry.weight\n else:\n # Mark entries without data, this is later filtered out by D3.\n # We use the string 'n.a' instead of 0 to differentiate actual exercises\n # where no weight was used.\n temp[rep] = 'n.a'\n chart_data.append(temp)\n\n return entry_log, json.dumps(chart_data, cls=DecimalJsonEncoder)\n", "path": "wger/weight/helpers.py"}]}
| 2,352 | 266 |
gh_patches_debug_14810
|
rasdani/github-patches
|
git_diff
|
pypa__setuptools-1720
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
setup_requires="string" not handled by PEP 517 backend
Does this need to be fixed in setuptools rather since the PEP says the return value needs to be a list of strings? https://www.python.org/dev/peps/pep-0517/#get-requires-for-build-wheel
It looks like here is the setuptools code: https://github.com/pypa/setuptools/blob/cdb5eeae678d8ccc90bf7d4348013a294f11be75/setuptools/build_meta.py#L138
_Originally posted by @cjerdonek in https://github.com/pypa/pip/issues/6255#issuecomment-462468517_
--- END ISSUE ---
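The type mismatch is easy to reproduce without running setuptools at all. In the backend shown below, `_get_build_requires` does `requirements += e.specifiers`; assuming the raw `setup_requires` string reaches `fetch_build_eggs` unchanged, that statement extends the list character by character. The requirement name here is an arbitrary example.

```python
# Pure-Python sketch of the bug: list += str iterates over the string's characters.
requirements = ['wheel']          # starting value in get_requires_for_build_wheel
specifiers = "setuptools_scm"     # setup_requires given as a plain string

requirements += specifiers        # same operation as `requirements += e.specifiers`
print(requirements)               # ['wheel', 's', 'e', 't', 'u', ...] rather than a list of requirement strings
```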
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/build_meta.py`
Content:
```
1 """A PEP 517 interface to setuptools
2
3 Previously, when a user or a command line tool (let's call it a "frontend")
4 needed to make a request of setuptools to take a certain action, for
5 example, generating a list of installation requirements, the frontend would
6 would call "setup.py egg_info" or "setup.py bdist_wheel" on the command line.
7
8 PEP 517 defines a different method of interfacing with setuptools. Rather
9 than calling "setup.py" directly, the frontend should:
10
11 1. Set the current directory to the directory with a setup.py file
12 2. Import this module into a safe python interpreter (one in which
13 setuptools can potentially set global variables or crash hard).
14 3. Call one of the functions defined in PEP 517.
15
16 What each function does is defined in PEP 517. However, here is a "casual"
17 definition of the functions (this definition should not be relied on for
18 bug reports or API stability):
19
20 - `build_wheel`: build a wheel in the folder and return the basename
21 - `get_requires_for_build_wheel`: get the `setup_requires` to build
22 - `prepare_metadata_for_build_wheel`: get the `install_requires`
23 - `build_sdist`: build an sdist in the folder and return the basename
24 - `get_requires_for_build_sdist`: get the `setup_requires` to build
25
26 Again, this is not a formal definition! Just a "taste" of the module.
27 """
28
29 import io
30 import os
31 import sys
32 import tokenize
33 import shutil
34 import contextlib
35
36 import setuptools
37 import distutils
38
39 __all__ = ['get_requires_for_build_sdist',
40 'get_requires_for_build_wheel',
41 'prepare_metadata_for_build_wheel',
42 'build_wheel',
43 'build_sdist',
44 '__legacy__',
45 'SetupRequirementsError']
46
47 class SetupRequirementsError(BaseException):
48 def __init__(self, specifiers):
49 self.specifiers = specifiers
50
51
52 class Distribution(setuptools.dist.Distribution):
53 def fetch_build_eggs(self, specifiers):
54 raise SetupRequirementsError(specifiers)
55
56 @classmethod
57 @contextlib.contextmanager
58 def patch(cls):
59 """
60 Replace
61 distutils.dist.Distribution with this class
62 for the duration of this context.
63 """
64 orig = distutils.core.Distribution
65 distutils.core.Distribution = cls
66 try:
67 yield
68 finally:
69 distutils.core.Distribution = orig
70
71
72 def _to_str(s):
73 """
74 Convert a filename to a string (on Python 2, explicitly
75 a byte string, not Unicode) as distutils checks for the
76 exact type str.
77 """
78 if sys.version_info[0] == 2 and not isinstance(s, str):
79 # Assume it's Unicode, as that's what the PEP says
80 # should be provided.
81 return s.encode(sys.getfilesystemencoding())
82 return s
83
84
85 def _get_immediate_subdirectories(a_dir):
86 return [name for name in os.listdir(a_dir)
87 if os.path.isdir(os.path.join(a_dir, name))]
88
89
90 def _file_with_extension(directory, extension):
91 matching = (
92 f for f in os.listdir(directory)
93 if f.endswith(extension)
94 )
95 file, = matching
96 return file
97
98
99 def _open_setup_script(setup_script):
100 if not os.path.exists(setup_script):
101 # Supply a default setup.py
102 return io.StringIO(u"from setuptools import setup; setup()")
103
104 return getattr(tokenize, 'open', open)(setup_script)
105
106
107 class _BuildMetaBackend(object):
108
109 def _fix_config(self, config_settings):
110 config_settings = config_settings or {}
111 config_settings.setdefault('--global-option', [])
112 return config_settings
113
114 def _get_build_requires(self, config_settings, requirements):
115 config_settings = self._fix_config(config_settings)
116
117 sys.argv = sys.argv[:1] + ['egg_info'] + \
118 config_settings["--global-option"]
119 try:
120 with Distribution.patch():
121 self.run_setup()
122 except SetupRequirementsError as e:
123 requirements += e.specifiers
124
125 return requirements
126
127 def run_setup(self, setup_script='setup.py'):
128 # Note that we can reuse our build directory between calls
129 # Correctness comes first, then optimization later
130 __file__ = setup_script
131 __name__ = '__main__'
132
133 with _open_setup_script(__file__) as f:
134 code = f.read().replace(r'\r\n', r'\n')
135
136 exec(compile(code, __file__, 'exec'), locals())
137
138 def get_requires_for_build_wheel(self, config_settings=None):
139 config_settings = self._fix_config(config_settings)
140 return self._get_build_requires(config_settings, requirements=['wheel'])
141
142 def get_requires_for_build_sdist(self, config_settings=None):
143 config_settings = self._fix_config(config_settings)
144 return self._get_build_requires(config_settings, requirements=[])
145
146 def prepare_metadata_for_build_wheel(self, metadata_directory,
147 config_settings=None):
148 sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',
149 _to_str(metadata_directory)]
150 self.run_setup()
151
152 dist_info_directory = metadata_directory
153 while True:
154 dist_infos = [f for f in os.listdir(dist_info_directory)
155 if f.endswith('.dist-info')]
156
157 if (len(dist_infos) == 0 and
158 len(_get_immediate_subdirectories(dist_info_directory)) == 1):
159
160 dist_info_directory = os.path.join(
161 dist_info_directory, os.listdir(dist_info_directory)[0])
162 continue
163
164 assert len(dist_infos) == 1
165 break
166
167 # PEP 517 requires that the .dist-info directory be placed in the
168 # metadata_directory. To comply, we MUST copy the directory to the root
169 if dist_info_directory != metadata_directory:
170 shutil.move(
171 os.path.join(dist_info_directory, dist_infos[0]),
172 metadata_directory)
173 shutil.rmtree(dist_info_directory, ignore_errors=True)
174
175 return dist_infos[0]
176
177 def build_wheel(self, wheel_directory, config_settings=None,
178 metadata_directory=None):
179 config_settings = self._fix_config(config_settings)
180 wheel_directory = os.path.abspath(wheel_directory)
181 sys.argv = sys.argv[:1] + ['bdist_wheel'] + \
182 config_settings["--global-option"]
183 self.run_setup()
184 if wheel_directory != 'dist':
185 shutil.rmtree(wheel_directory)
186 shutil.copytree('dist', wheel_directory)
187
188 return _file_with_extension(wheel_directory, '.whl')
189
190 def build_sdist(self, sdist_directory, config_settings=None):
191 config_settings = self._fix_config(config_settings)
192 sdist_directory = os.path.abspath(sdist_directory)
193 sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \
194 config_settings["--global-option"] + \
195 ["--dist-dir", sdist_directory]
196 self.run_setup()
197
198 return _file_with_extension(sdist_directory, '.tar.gz')
199
200
201 class _BuildMetaLegacyBackend(_BuildMetaBackend):
202 """Compatibility backend for setuptools
203
204 This is a version of setuptools.build_meta that endeavors to maintain backwards
205 compatibility with pre-PEP 517 modes of invocation. It exists as a temporary
206 bridge between the old packaging mechanism and the new packaging mechanism,
207 and will eventually be removed.
208 """
209 def run_setup(self, setup_script='setup.py'):
210 # In order to maintain compatibility with scripts assuming that
211 # the setup.py script is in a directory on the PYTHONPATH, inject
212 # '' into sys.path. (pypa/setuptools#1642)
213 sys_path = list(sys.path) # Save the original path
214
215 script_dir = os.path.dirname(os.path.abspath(setup_script))
216 if script_dir not in sys.path:
217 sys.path.insert(0, script_dir)
218
219 try:
220 super(_BuildMetaLegacyBackend,
221 self).run_setup(setup_script=setup_script)
222 finally:
223 # While PEP 517 frontends should be calling each hook in a fresh
224 # subprocess according to the standard (and thus it should not be
225 # strictly necessary to restore the old sys.path), we'll restore
226 # the original path so that the path manipulation does not persist
227 # within the hook after run_setup is called.
228 sys.path[:] = sys_path
229
230 # The primary backend
231 _BACKEND = _BuildMetaBackend()
232
233 get_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel
234 get_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist
235 prepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel
236 build_wheel = _BACKEND.build_wheel
237 build_sdist = _BACKEND.build_sdist
238
239
240 # The legacy backend
241 __legacy__ = _BuildMetaLegacyBackend()
242
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py
--- a/setuptools/build_meta.py
+++ b/setuptools/build_meta.py
@@ -36,6 +36,8 @@
import setuptools
import distutils
+from pkg_resources import parse_requirements
+
__all__ = ['get_requires_for_build_sdist',
'get_requires_for_build_wheel',
'prepare_metadata_for_build_wheel',
@@ -51,7 +53,9 @@
class Distribution(setuptools.dist.Distribution):
def fetch_build_eggs(self, specifiers):
- raise SetupRequirementsError(specifiers)
+ specifier_list = list(map(str, parse_requirements(specifiers)))
+
+ raise SetupRequirementsError(specifier_list)
@classmethod
@contextlib.contextmanager
|
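The conversion added in the patch can be exercised on its own. `pkg_resources.parse_requirements` accepts either a string (split on newlines) or an iterable of lines, so both legal forms of `setup_requires` normalize to a list of specifier strings, which is what PEP 517 requires the hook to return. The requirement names below are arbitrary examples.

```python
from pkg_resources import parse_requirements

# setup_requires as a single newline-separated string, the case from the issue.
specifiers = "wheel>=0.32\nsetuptools_scm"

# The same normalization the patched fetch_build_eggs applies before raising
# SetupRequirementsError, so the PEP 517 hooks end up returning a list of strings.
specifier_list = list(map(str, parse_requirements(specifiers)))
print(specifier_list)  # e.g. ['wheel>=0.32', 'setuptools_scm']

# A list value for setup_requires passes through the same code path unchanged.
print(list(map(str, parse_requirements(["wheel>=0.32", "setuptools_scm"]))))
```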
{"golden_diff": "diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py\n--- a/setuptools/build_meta.py\n+++ b/setuptools/build_meta.py\n@@ -36,6 +36,8 @@\n import setuptools\n import distutils\n \n+from pkg_resources import parse_requirements\n+\n __all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n@@ -51,7 +53,9 @@\n \n class Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n- raise SetupRequirementsError(specifiers)\n+ specifier_list = list(map(str, parse_requirements(specifiers)))\n+\n+ raise SetupRequirementsError(specifier_list)\n \n @classmethod\n @contextlib.contextmanager\n", "issue": "setup_requires=\"string\" not handled by PEP 517 backend\nDoes this need to be fixed in setuptools rather since the PEP says the return value needs to be a list of strings? https://www.python.org/dev/peps/pep-0517/#get-requires-for-build-wheel\r\n\r\nIt looks like here is the setuptools code: https://github.com/pypa/setuptools/blob/cdb5eeae678d8ccc90bf7d4348013a294f11be75/setuptools/build_meta.py#L138\r\n\r\n_Originally posted by @cjerdonek in https://github.com/pypa/pip/issues/6255#issuecomment-462468517_\n", "before_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! 
Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n raise SetupRequirementsError(specifiers)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with _open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n self.run_setup()\n\n dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (len(dist_infos) == 0 and\n 
len(_get_immediate_subdirectories(dist_info_directory)) == 1):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n config_settings = self._fix_config(config_settings)\n wheel_directory = os.path.abspath(wheel_directory)\n sys.argv = sys.argv[:1] + ['bdist_wheel'] + \\\n config_settings[\"--global-option\"]\n self.run_setup()\n if wheel_directory != 'dist':\n shutil.rmtree(wheel_directory)\n shutil.copytree('dist', wheel_directory)\n\n return _file_with_extension(wheel_directory, '.whl')\n\n def build_sdist(self, sdist_directory, config_settings=None):\n config_settings = self._fix_config(config_settings)\n sdist_directory = os.path.abspath(sdist_directory)\n sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \\\n config_settings[\"--global-option\"] + \\\n [\"--dist-dir\", sdist_directory]\n self.run_setup()\n\n return _file_with_extension(sdist_directory, '.tar.gz')\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It exists as a temporary\n bridge between the old packaging mechanism and the new packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. 
(pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}], "after_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! 
Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\n\nfrom pkg_resources import parse_requirements\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n specifier_list = list(map(str, parse_requirements(specifiers)))\n\n raise SetupRequirementsError(specifier_list)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with _open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n self.run_setup()\n\n dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in 
os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (len(dist_infos) == 0 and\n len(_get_immediate_subdirectories(dist_info_directory)) == 1):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n config_settings = self._fix_config(config_settings)\n wheel_directory = os.path.abspath(wheel_directory)\n sys.argv = sys.argv[:1] + ['bdist_wheel'] + \\\n config_settings[\"--global-option\"]\n self.run_setup()\n if wheel_directory != 'dist':\n shutil.rmtree(wheel_directory)\n shutil.copytree('dist', wheel_directory)\n\n return _file_with_extension(wheel_directory, '.whl')\n\n def build_sdist(self, sdist_directory, config_settings=None):\n config_settings = self._fix_config(config_settings)\n sdist_directory = os.path.abspath(sdist_directory)\n sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \\\n config_settings[\"--global-option\"] + \\\n [\"--dist-dir\", sdist_directory]\n self.run_setup()\n\n return _file_with_extension(sdist_directory, '.tar.gz')\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It exists as a temporary\n bridge between the old packaging mechanism and the new packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. (pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}]}
| 2,977 | 166 |
gh_patches_debug_19322
|
rasdani/github-patches
|
git_diff
|
psf__black-3282
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support formatting Jupyter Notebooks in GitHub Actions
**Is your feature request related to a problem? Please describe.**
I'm trying to set up a GitHub Action that runs Black on a project that includes *.py and *.ipynb files, but the default action does not include the Jupyter extra. I followed the integration described in [this piece of documentation](https://black.readthedocs.io/en/stable/integrations/github_actions.html), but the option to include the Jupyter extra (`black[jupyter]`) is not available.
**Describe the solution you'd like**
If the action included an argument to include the Jupyter extra, the GitHub Action would work as expected (when using `pip install black[jupyter]` locally).
**Describe alternatives you've considered**
I considered a custom GitHub Action and installing Black manually, but found out that modifying part of the action available in this repository is cleaner and would bring support to users with a similar need without affecting those that already use the GitHub Action.
**Additional context**
I was trying different things out and arrived at a solution that works as expected and can be included in this project without affecting users that already use the GitHub Action. **Add a new option to the GitHub Action to enable the Jupyter extra dependency**. I think that a boolean value might do the trick, and using `false` as the default maintains the current behavior.
``` diff
diff --git a/action.yml b/action.yml
index cfa6ef9..ed6c32e 100644
--- a/action.yml
+++ b/action.yml
@@ -8,6 +8,10 @@ inputs:
'--check --diff'"
required: false
default: "--check --diff"
+ jupyter:
+ description: "Include the required extra dependencies to format Jupyter Notebooks."
+ required: false
+ default: false
src:
description: "Source to run Black. Default: '.'"
required: false
@@ -38,6 +42,7 @@ runs:
# TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.
INPUT_OPTIONS: ${{ inputs.options }}
INPUT_SRC: ${{ inputs.src }}
+ INPUT_JUPYTER: ${{ inputs.jupyter }}
INPUT_BLACK_ARGS: ${{ inputs.black_args }}
INPUT_VERSION: ${{ inputs.version }}
pythonioencoding: utf-8
```
In this file, if the flag is enabled (if the `INPUT_JUPYTER` environment variable has a true value), then the `jupyter` extra is included in the installation step. Colorama is already included by default.
```diff
diff --git a/action/main.py b/action/main.py
index cd920f5..fbf6e73 100644
--- a/action/main.py
+++ b/action/main.py
@@ -10,11 +10,16 @@ ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
+JUPYTER = os.getenv("INPUT_JUPYTER")
VERSION = os.getenv("INPUT_VERSION", default="")
run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
-req = "black[colorama]"
+
+if JUPYTER:
+ req = "black[colorama,jupyter]"
+else:
+ req = "black[colorama]"
if VERSION:
req += f"=={VERSION}"
pip_proc = run(
```
The only difference would be visible in case I want to use the Jupyter extra, which can be enabled by passing the value explicitly:
```diff
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: psf/black@stable
+ jupyter: true
options: "--check --diff --verbose"
```
I forked this project to test the GitHub Action and it does work as expected (https://github.com/aaossa/black/commit/7af4287355003cd44e0febd8fe88e92f205db324). If you agree with this feature request, I can submit a PR with these changes and update the relevant documentation 👌
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `action/main.py`
Content:
```
1 import os
2 import shlex
3 import sys
4 from pathlib import Path
5 from subprocess import PIPE, STDOUT, run
6
7 ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
8 ENV_PATH = ACTION_PATH / ".black-env"
9 ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
10 OPTIONS = os.getenv("INPUT_OPTIONS", default="")
11 SRC = os.getenv("INPUT_SRC", default="")
12 BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
13 VERSION = os.getenv("INPUT_VERSION", default="")
14
15 run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
16
17 version_specifier = VERSION
18 if VERSION and VERSION[0] in "0123456789":
19 version_specifier = f"=={VERSION}"
20 req = f"black[colorama]{version_specifier}"
21 pip_proc = run(
22 [str(ENV_BIN / "python"), "-m", "pip", "install", req],
23 stdout=PIPE,
24 stderr=STDOUT,
25 encoding="utf-8",
26 )
27 if pip_proc.returncode:
28 print(pip_proc.stdout)
29 print("::error::Failed to install Black.", flush=True)
30 sys.exit(pip_proc.returncode)
31
32
33 base_cmd = [str(ENV_BIN / "black")]
34 if BLACK_ARGS:
35 # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
36 proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])
37 else:
38 proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])
39
40 sys.exit(proc.returncode)
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/action/main.py b/action/main.py
--- a/action/main.py
+++ b/action/main.py
@@ -9,6 +9,7 @@
ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
+JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
VERSION = os.getenv("INPUT_VERSION", default="")
@@ -17,7 +18,11 @@
version_specifier = VERSION
if VERSION and VERSION[0] in "0123456789":
version_specifier = f"=={VERSION}"
-req = f"black[colorama]{version_specifier}"
+if JUPYTER:
+ extra_deps = "[colorama,jupyter]"
+else:
+ extra_deps = "[colorama]"
+req = f"black{extra_deps}{version_specifier}"
pip_proc = run(
[str(ENV_BIN / "python"), "-m", "pip", "install", req],
stdout=PIPE,
|
{"golden_diff": "diff --git a/action/main.py b/action/main.py\n--- a/action/main.py\n+++ b/action/main.py\n@@ -9,6 +9,7 @@\n ENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\n OPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\n SRC = os.getenv(\"INPUT_SRC\", default=\"\")\n+JUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\n BLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\n VERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n \n@@ -17,7 +18,11 @@\n version_specifier = VERSION\n if VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\n-req = f\"black[colorama]{version_specifier}\"\n+if JUPYTER:\n+ extra_deps = \"[colorama,jupyter]\"\n+else:\n+ extra_deps = \"[colorama]\"\n+req = f\"black{extra_deps}{version_specifier}\"\n pip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n", "issue": "Support formatting Jupyter Notebooks in GitHub Actions\n**Is your feature request related to a problem? Please describe.**\r\n\r\nI'm trying to setup a GitHub Action that runs Black on a project that includes *.py and *.ipynb files, but the default action does not include the Jupyter extra. I followed the integration described in [this piece of documentation](https://black.readthedocs.io/en/stable/integrations/github_actions.html) but the option to include the Jupyter extra (`black[jupyter]`) is not available.\r\n\r\n**Describe the solution you'd like**\r\n\r\nIf the action included an argument to include the Jupyter extra, the GitHub Action would work in as expected (when using `pip install black[jupyter]` locally).\r\n\r\n**Describe alternatives you've considered**\r\n\r\nI considered a custom GitHub Action and installing Black manually, but found out that modifying part of the action available in this repository is cleaner and would bring support to users with a similar need without affecting those that already use the GitHub Action.\r\n\r\n**Additional context**\r\n\r\nI was trying different things out and arrived to a solution that works as expected and can be included in this project without affecting users that already use the GitHub Action. **Add a new option to the GitHub Action to enable the Jupyter extra dependency**. I think that a boolean value might do the trick and using `false` as default maintains the current behavior.\r\n\r\n``` diff\r\ndiff --git a/action.yml b/action.yml\r\nindex cfa6ef9..ed6c32e 100644\r\n--- a/action.yml\r\n+++ b/action.yml\r\n@@ -8,6 +8,10 @@ inputs:\r\n '--check --diff'\"\r\n required: false\r\n default: \"--check --diff\"\r\n+ jupyter:\r\n+ description: \"Include the required extra dependencies to format Jupyter Notebooks.\"\r\n+ required: false\r\n+ default: false\r\n src:\r\n description: \"Source to run Black. Default: '.'\"\r\n required: false\r\n@@ -38,6 +42,7 @@ runs:\r\n # TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.\r\n INPUT_OPTIONS: ${{ inputs.options }}\r\n INPUT_SRC: ${{ inputs.src }}\r\n+ INPUT_JUPYTER: ${{ inputs.jupyter }}\r\n INPUT_BLACK_ARGS: ${{ inputs.black_args }}\r\n INPUT_VERSION: ${{ inputs.version }}\r\n pythonioencoding: utf-8\r\n```\r\n\r\nIn this file, if the flag is enabled (if the `INPUT_JUPYTER` envar has a true value) then the `jupyter` extra is included in the installation step. Colorama is already included by default. 
\r\n\r\n```diff\r\ndiff --git a/action/main.py b/action/main.py\r\nindex cd920f5..fbf6e73 100644\r\n--- a/action/main.py\r\n+++ b/action/main.py\r\n@@ -10,11 +10,16 @@ ENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\r\n OPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\r\n SRC = os.getenv(\"INPUT_SRC\", default=\"\")\r\n BLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\r\n+JUPYTER = os.getenv(\"INPUT_JUPYTER\")\r\n VERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\r\n\r\n run([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\r\n\r\n-req = \"black[colorama]\"\r\n+\r\n+if JUPYTER:\r\n+ req = \"black[colorama,jupyter]\"\r\n+else:\r\n+ req = \"black[colorama]\"\r\n if VERSION:\r\n req += f\"=={VERSION}\"\r\n pip_proc = run(\r\n```\r\n\r\nThe only difference would be visible in case I want to use the Jupyter extra, which can be enabled by passing the value explicitly:\r\n\r\n```diff\r\njobs:\r\n lint:\r\n runs-on: ubuntu-latest\r\n steps:\r\n - uses: actions/checkout@v2\r\n - uses: psf/black@stable\r\n+ jupyter: true\r\n options: \"--check --diff --verbose\"\r\n\r\n```\r\n\r\nI forked this project to test the GitHub Action and it does work as expected (https://github.com/aaossa/black/commit/7af4287355003cd44e0febd8fe88e92f205db324). If you agree with this feature request, I can submit a PR with these changes and update the relevant documentation \ud83d\udc4c \r\n\r\n\n", "before_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\nreq = f\"black[colorama]{version_specifier}\"\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}], "after_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nJUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n 
version_specifier = f\"=={VERSION}\"\nif JUPYTER:\n extra_deps = \"[colorama,jupyter]\"\nelse:\n extra_deps = \"[colorama]\"\nreq = f\"black{extra_deps}{version_specifier}\"\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}]}
| 1,650 | 256 |
gh_patches_debug_27301
|
rasdani/github-patches
|
git_diff
|
redis__redis-py-684
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Next sentinel host is not contacted after socket timeout
If there is a socket timeout with a sentinel host, `redis.exceptions.TimeoutError` is returned and none of the other sentinel hosts are contacted.
If there is a connection timeout or connection refused, the next host is tried. It would be great if there were a way to try the next sentinel host in the same way for socket timeout errors.
(You can set `retry_on_timeout=True` to retry the same sentinel host once, but if you get a socket timeout a second time, `redis.exceptions.TimeoutError` is returned.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redis/sentinel.py`
Content:
```
1 import os
2 import random
3 import weakref
4
5 from redis.client import StrictRedis
6 from redis.connection import ConnectionPool, Connection
7 from redis.exceptions import ConnectionError, ResponseError, ReadOnlyError
8 from redis._compat import iteritems, nativestr, xrange
9
10
11 class MasterNotFoundError(ConnectionError):
12 pass
13
14
15 class SlaveNotFoundError(ConnectionError):
16 pass
17
18
19 class SentinelManagedConnection(Connection):
20 def __init__(self, **kwargs):
21 self.connection_pool = kwargs.pop('connection_pool')
22 super(SentinelManagedConnection, self).__init__(**kwargs)
23
24 def __repr__(self):
25 pool = self.connection_pool
26 s = '%s<service=%s%%s>' % (type(self).__name__, pool.service_name)
27 if self.host:
28 host_info = ',host=%s,port=%s' % (self.host, self.port)
29 s = s % host_info
30 return s
31
32 def connect_to(self, address):
33 self.host, self.port = address
34 super(SentinelManagedConnection, self).connect()
35 if self.connection_pool.check_connection:
36 self.send_command('PING')
37 if nativestr(self.read_response()) != 'PONG':
38 raise ConnectionError('PING failed')
39
40 def connect(self):
41 if self._sock:
42 return # already connected
43 if self.connection_pool.is_master:
44 self.connect_to(self.connection_pool.get_master_address())
45 else:
46 for slave in self.connection_pool.rotate_slaves():
47 try:
48 return self.connect_to(slave)
49 except ConnectionError:
50 continue
51 raise SlaveNotFoundError # Never be here
52
53 def read_response(self):
54 try:
55 return super(SentinelManagedConnection, self).read_response()
56 except ReadOnlyError:
57 if self.connection_pool.is_master:
58 # When talking to a master, a ReadOnlyError when likely
59 # indicates that the previous master that we're still connected
60 # to has been demoted to a slave and there's a new master.
61 # calling disconnect will force the connection to re-query
62 # sentinel during the next connect() attempt.
63 self.disconnect()
64 raise ConnectionError('The previous master is now a slave')
65 raise
66
67
68 class SentinelConnectionPool(ConnectionPool):
69 """
70 Sentinel backed connection pool.
71
72 If ``check_connection`` flag is set to True, SentinelManagedConnection
73 sends a PING command right after establishing the connection.
74 """
75
76 def __init__(self, service_name, sentinel_manager, **kwargs):
77 kwargs['connection_class'] = kwargs.get(
78 'connection_class', SentinelManagedConnection)
79 self.is_master = kwargs.pop('is_master', True)
80 self.check_connection = kwargs.pop('check_connection', False)
81 super(SentinelConnectionPool, self).__init__(**kwargs)
82 self.connection_kwargs['connection_pool'] = weakref.proxy(self)
83 self.service_name = service_name
84 self.sentinel_manager = sentinel_manager
85
86 def __repr__(self):
87 return "%s<service=%s(%s)" % (
88 type(self).__name__,
89 self.service_name,
90 self.is_master and 'master' or 'slave',
91 )
92
93 def reset(self):
94 super(SentinelConnectionPool, self).reset()
95 self.master_address = None
96 self.slave_rr_counter = None
97
98 def get_master_address(self):
99 master_address = self.sentinel_manager.discover_master(
100 self.service_name)
101 if self.is_master:
102 if self.master_address is None:
103 self.master_address = master_address
104 elif master_address != self.master_address:
105 # Master address changed, disconnect all clients in this pool
106 self.disconnect()
107 return master_address
108
109 def rotate_slaves(self):
110 "Round-robin slave balancer"
111 slaves = self.sentinel_manager.discover_slaves(self.service_name)
112 if slaves:
113 if self.slave_rr_counter is None:
114 self.slave_rr_counter = random.randint(0, len(slaves) - 1)
115 for _ in xrange(len(slaves)):
116 self.slave_rr_counter = (
117 self.slave_rr_counter + 1) % len(slaves)
118 slave = slaves[self.slave_rr_counter]
119 yield slave
120 # Fallback to the master connection
121 try:
122 yield self.get_master_address()
123 except MasterNotFoundError:
124 pass
125 raise SlaveNotFoundError('No slave found for %r' % (self.service_name))
126
127 def _checkpid(self):
128 if self.pid != os.getpid():
129 self.disconnect()
130 self.reset()
131 self.__init__(self.service_name, self.sentinel_manager,
132 is_master=self.is_master,
133 check_connection=self.check_connection,
134 connection_class=self.connection_class,
135 max_connections=self.max_connections,
136 **self.connection_kwargs)
137
138
139 class Sentinel(object):
140 """
141 Redis Sentinel cluster client
142
143 >>> from redis.sentinel import Sentinel
144 >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
145 >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
146 >>> master.set('foo', 'bar')
147 >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
148 >>> slave.get('foo')
149 'bar'
150
151 ``sentinels`` is a list of sentinel nodes. Each node is represented by
152 a pair (hostname, port).
153
154 ``min_other_sentinels`` defined a minimum number of peers for a sentinel.
155 When querying a sentinel, if it doesn't meet this threshold, responses
156 from that sentinel won't be considered valid.
157
158 ``sentinel_kwargs`` is a dictionary of connection arguments used when
159 connecting to sentinel instances. Any argument that can be passed to
160 a normal Redis connection can be specified here. If ``sentinel_kwargs`` is
161 not specified, any socket_timeout and socket_keepalive options specified
162 in ``connection_kwargs`` will be used.
163
164 ``connection_kwargs`` are keyword arguments that will be used when
165 establishing a connection to a Redis server.
166 """
167
168 def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None,
169 **connection_kwargs):
170 # if sentinel_kwargs isn't defined, use the socket_* options from
171 # connection_kwargs
172 if sentinel_kwargs is None:
173 sentinel_kwargs = dict([(k, v)
174 for k, v in iteritems(connection_kwargs)
175 if k.startswith('socket_')
176 ])
177 self.sentinel_kwargs = sentinel_kwargs
178
179 self.sentinels = [StrictRedis(hostname, port, **self.sentinel_kwargs)
180 for hostname, port in sentinels]
181 self.min_other_sentinels = min_other_sentinels
182 self.connection_kwargs = connection_kwargs
183
184 def __repr__(self):
185 sentinel_addresses = []
186 for sentinel in self.sentinels:
187 sentinel_addresses.append('%s:%s' % (
188 sentinel.connection_pool.connection_kwargs['host'],
189 sentinel.connection_pool.connection_kwargs['port'],
190 ))
191 return '%s<sentinels=[%s]>' % (
192 type(self).__name__,
193 ','.join(sentinel_addresses))
194
195 def check_master_state(self, state, service_name):
196 if not state['is_master'] or state['is_sdown'] or state['is_odown']:
197 return False
198 # Check if our sentinel doesn't see other nodes
199 if state['num-other-sentinels'] < self.min_other_sentinels:
200 return False
201 return True
202
203 def discover_master(self, service_name):
204 """
205 Asks sentinel servers for the Redis master's address corresponding
206 to the service labeled ``service_name``.
207
208 Returns a pair (address, port) or raises MasterNotFoundError if no
209 master is found.
210 """
211 for sentinel_no, sentinel in enumerate(self.sentinels):
212 try:
213 masters = sentinel.sentinel_masters()
214 except ConnectionError:
215 continue
216 state = masters.get(service_name)
217 if state and self.check_master_state(state, service_name):
218 # Put this sentinel at the top of the list
219 self.sentinels[0], self.sentinels[sentinel_no] = (
220 sentinel, self.sentinels[0])
221 return state['ip'], state['port']
222 raise MasterNotFoundError("No master found for %r" % (service_name,))
223
224 def filter_slaves(self, slaves):
225 "Remove slaves that are in an ODOWN or SDOWN state"
226 slaves_alive = []
227 for slave in slaves:
228 if slave['is_odown'] or slave['is_sdown']:
229 continue
230 slaves_alive.append((slave['ip'], slave['port']))
231 return slaves_alive
232
233 def discover_slaves(self, service_name):
234 "Returns a list of alive slaves for service ``service_name``"
235 for sentinel in self.sentinels:
236 try:
237 slaves = sentinel.sentinel_slaves(service_name)
238 except (ConnectionError, ResponseError):
239 continue
240 slaves = self.filter_slaves(slaves)
241 if slaves:
242 return slaves
243 return []
244
245 def master_for(self, service_name, redis_class=StrictRedis,
246 connection_pool_class=SentinelConnectionPool, **kwargs):
247 """
248 Returns a redis client instance for the ``service_name`` master.
249
250 A SentinelConnectionPool class is used to retrive the master's
251 address before establishing a new connection.
252
253 NOTE: If the master's address has changed, any cached connections to
254 the old master are closed.
255
256 By default clients will be a redis.StrictRedis instance. Specify a
257 different class to the ``redis_class`` argument if you desire
258 something different.
259
260 The ``connection_pool_class`` specifies the connection pool to use.
261 The SentinelConnectionPool will be used by default.
262
263 All other keyword arguments are merged with any connection_kwargs
264 passed to this class and passed to the connection pool as keyword
265 arguments to be used to initialize Redis connections.
266 """
267 kwargs['is_master'] = True
268 connection_kwargs = dict(self.connection_kwargs)
269 connection_kwargs.update(kwargs)
270 return redis_class(connection_pool=connection_pool_class(
271 service_name, self, **connection_kwargs))
272
273 def slave_for(self, service_name, redis_class=StrictRedis,
274 connection_pool_class=SentinelConnectionPool, **kwargs):
275 """
276 Returns redis client instance for the ``service_name`` slave(s).
277
278 A SentinelConnectionPool class is used to retrive the slave's
279 address before establishing a new connection.
280
281 By default clients will be a redis.StrictRedis instance. Specify a
282 different class to the ``redis_class`` argument if you desire
283 something different.
284
285 The ``connection_pool_class`` specifies the connection pool to use.
286 The SentinelConnectionPool will be used by default.
287
288 All other keyword arguments are merged with any connection_kwargs
289 passed to this class and passed to the connection pool as keyword
290 arguments to be used to initialize Redis connections.
291 """
292 kwargs['is_master'] = False
293 connection_kwargs = dict(self.connection_kwargs)
294 connection_kwargs.update(kwargs)
295 return redis_class(connection_pool=connection_pool_class(
296 service_name, self, **connection_kwargs))
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redis/sentinel.py b/redis/sentinel.py
--- a/redis/sentinel.py
+++ b/redis/sentinel.py
@@ -4,7 +4,8 @@
from redis.client import StrictRedis
from redis.connection import ConnectionPool, Connection
-from redis.exceptions import ConnectionError, ResponseError, ReadOnlyError
+from redis.exceptions import (ConnectionError, ResponseError, ReadOnlyError,
+ TimeoutError)
from redis._compat import iteritems, nativestr, xrange
@@ -211,7 +212,7 @@
for sentinel_no, sentinel in enumerate(self.sentinels):
try:
masters = sentinel.sentinel_masters()
- except ConnectionError:
+ except (ConnectionError, TimeoutError):
continue
state = masters.get(service_name)
if state and self.check_master_state(state, service_name):
@@ -235,7 +236,7 @@
for sentinel in self.sentinels:
try:
slaves = sentinel.sentinel_slaves(service_name)
- except (ConnectionError, ResponseError):
+ except (ConnectionError, ResponseError, TimeoutError):
continue
slaves = self.filter_slaves(slaves)
if slaves:
|
{"golden_diff": "diff --git a/redis/sentinel.py b/redis/sentinel.py\n--- a/redis/sentinel.py\n+++ b/redis/sentinel.py\n@@ -4,7 +4,8 @@\n \n from redis.client import StrictRedis\n from redis.connection import ConnectionPool, Connection\n-from redis.exceptions import ConnectionError, ResponseError, ReadOnlyError\n+from redis.exceptions import (ConnectionError, ResponseError, ReadOnlyError,\n+ TimeoutError)\n from redis._compat import iteritems, nativestr, xrange\n \n \n@@ -211,7 +212,7 @@\n for sentinel_no, sentinel in enumerate(self.sentinels):\n try:\n masters = sentinel.sentinel_masters()\n- except ConnectionError:\n+ except (ConnectionError, TimeoutError):\n continue\n state = masters.get(service_name)\n if state and self.check_master_state(state, service_name):\n@@ -235,7 +236,7 @@\n for sentinel in self.sentinels:\n try:\n slaves = sentinel.sentinel_slaves(service_name)\n- except (ConnectionError, ResponseError):\n+ except (ConnectionError, ResponseError, TimeoutError):\n continue\n slaves = self.filter_slaves(slaves)\n if slaves:\n", "issue": "Next sentinel host is not contacted after socket timeout\nIf a there is a socket timeout with a sentinel host, `redis.exceptions.TimeoutError` is returned and none of the other sentinel hosts are contacted.\n\nIf there is a connection timeout or connection refused, the next host is tried. It would great if there was way to try the next sentinel host in the same way for socket timeout errors.\n\n(You can `retry_on_timeout=True` to retry the same sentinel host once, but if you get a socket timeout a second time, `redis.exceptions.TimeoutError` is returned.)\n\n", "before_files": [{"content": "import os\nimport random\nimport weakref\n\nfrom redis.client import StrictRedis\nfrom redis.connection import ConnectionPool, Connection\nfrom redis.exceptions import ConnectionError, ResponseError, ReadOnlyError\nfrom redis._compat import iteritems, nativestr, xrange\n\n\nclass MasterNotFoundError(ConnectionError):\n pass\n\n\nclass SlaveNotFoundError(ConnectionError):\n pass\n\n\nclass SentinelManagedConnection(Connection):\n def __init__(self, **kwargs):\n self.connection_pool = kwargs.pop('connection_pool')\n super(SentinelManagedConnection, self).__init__(**kwargs)\n\n def __repr__(self):\n pool = self.connection_pool\n s = '%s<service=%s%%s>' % (type(self).__name__, pool.service_name)\n if self.host:\n host_info = ',host=%s,port=%s' % (self.host, self.port)\n s = s % host_info\n return s\n\n def connect_to(self, address):\n self.host, self.port = address\n super(SentinelManagedConnection, self).connect()\n if self.connection_pool.check_connection:\n self.send_command('PING')\n if nativestr(self.read_response()) != 'PONG':\n raise ConnectionError('PING failed')\n\n def connect(self):\n if self._sock:\n return # already connected\n if self.connection_pool.is_master:\n self.connect_to(self.connection_pool.get_master_address())\n else:\n for slave in self.connection_pool.rotate_slaves():\n try:\n return self.connect_to(slave)\n except ConnectionError:\n continue\n raise SlaveNotFoundError # Never be here\n\n def read_response(self):\n try:\n return super(SentinelManagedConnection, self).read_response()\n except ReadOnlyError:\n if self.connection_pool.is_master:\n # When talking to a master, a ReadOnlyError when likely\n # indicates that the previous master that we're still connected\n # to has been demoted to a slave and there's a new master.\n # calling disconnect will force the connection to re-query\n # sentinel during the next connect() attempt.\n 
self.disconnect()\n raise ConnectionError('The previous master is now a slave')\n raise\n\n\nclass SentinelConnectionPool(ConnectionPool):\n \"\"\"\n Sentinel backed connection pool.\n\n If ``check_connection`` flag is set to True, SentinelManagedConnection\n sends a PING command right after establishing the connection.\n \"\"\"\n\n def __init__(self, service_name, sentinel_manager, **kwargs):\n kwargs['connection_class'] = kwargs.get(\n 'connection_class', SentinelManagedConnection)\n self.is_master = kwargs.pop('is_master', True)\n self.check_connection = kwargs.pop('check_connection', False)\n super(SentinelConnectionPool, self).__init__(**kwargs)\n self.connection_kwargs['connection_pool'] = weakref.proxy(self)\n self.service_name = service_name\n self.sentinel_manager = sentinel_manager\n\n def __repr__(self):\n return \"%s<service=%s(%s)\" % (\n type(self).__name__,\n self.service_name,\n self.is_master and 'master' or 'slave',\n )\n\n def reset(self):\n super(SentinelConnectionPool, self).reset()\n self.master_address = None\n self.slave_rr_counter = None\n\n def get_master_address(self):\n master_address = self.sentinel_manager.discover_master(\n self.service_name)\n if self.is_master:\n if self.master_address is None:\n self.master_address = master_address\n elif master_address != self.master_address:\n # Master address changed, disconnect all clients in this pool\n self.disconnect()\n return master_address\n\n def rotate_slaves(self):\n \"Round-robin slave balancer\"\n slaves = self.sentinel_manager.discover_slaves(self.service_name)\n if slaves:\n if self.slave_rr_counter is None:\n self.slave_rr_counter = random.randint(0, len(slaves) - 1)\n for _ in xrange(len(slaves)):\n self.slave_rr_counter = (\n self.slave_rr_counter + 1) % len(slaves)\n slave = slaves[self.slave_rr_counter]\n yield slave\n # Fallback to the master connection\n try:\n yield self.get_master_address()\n except MasterNotFoundError:\n pass\n raise SlaveNotFoundError('No slave found for %r' % (self.service_name))\n\n def _checkpid(self):\n if self.pid != os.getpid():\n self.disconnect()\n self.reset()\n self.__init__(self.service_name, self.sentinel_manager,\n is_master=self.is_master,\n check_connection=self.check_connection,\n connection_class=self.connection_class,\n max_connections=self.max_connections,\n **self.connection_kwargs)\n\n\nclass Sentinel(object):\n \"\"\"\n Redis Sentinel cluster client\n\n >>> from redis.sentinel import Sentinel\n >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)\n >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)\n >>> master.set('foo', 'bar')\n >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)\n >>> slave.get('foo')\n 'bar'\n\n ``sentinels`` is a list of sentinel nodes. Each node is represented by\n a pair (hostname, port).\n\n ``min_other_sentinels`` defined a minimum number of peers for a sentinel.\n When querying a sentinel, if it doesn't meet this threshold, responses\n from that sentinel won't be considered valid.\n\n ``sentinel_kwargs`` is a dictionary of connection arguments used when\n connecting to sentinel instances. Any argument that can be passed to\n a normal Redis connection can be specified here. 
If ``sentinel_kwargs`` is\n not specified, any socket_timeout and socket_keepalive options specified\n in ``connection_kwargs`` will be used.\n\n ``connection_kwargs`` are keyword arguments that will be used when\n establishing a connection to a Redis server.\n \"\"\"\n\n def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None,\n **connection_kwargs):\n # if sentinel_kwargs isn't defined, use the socket_* options from\n # connection_kwargs\n if sentinel_kwargs is None:\n sentinel_kwargs = dict([(k, v)\n for k, v in iteritems(connection_kwargs)\n if k.startswith('socket_')\n ])\n self.sentinel_kwargs = sentinel_kwargs\n\n self.sentinels = [StrictRedis(hostname, port, **self.sentinel_kwargs)\n for hostname, port in sentinels]\n self.min_other_sentinels = min_other_sentinels\n self.connection_kwargs = connection_kwargs\n\n def __repr__(self):\n sentinel_addresses = []\n for sentinel in self.sentinels:\n sentinel_addresses.append('%s:%s' % (\n sentinel.connection_pool.connection_kwargs['host'],\n sentinel.connection_pool.connection_kwargs['port'],\n ))\n return '%s<sentinels=[%s]>' % (\n type(self).__name__,\n ','.join(sentinel_addresses))\n\n def check_master_state(self, state, service_name):\n if not state['is_master'] or state['is_sdown'] or state['is_odown']:\n return False\n # Check if our sentinel doesn't see other nodes\n if state['num-other-sentinels'] < self.min_other_sentinels:\n return False\n return True\n\n def discover_master(self, service_name):\n \"\"\"\n Asks sentinel servers for the Redis master's address corresponding\n to the service labeled ``service_name``.\n\n Returns a pair (address, port) or raises MasterNotFoundError if no\n master is found.\n \"\"\"\n for sentinel_no, sentinel in enumerate(self.sentinels):\n try:\n masters = sentinel.sentinel_masters()\n except ConnectionError:\n continue\n state = masters.get(service_name)\n if state and self.check_master_state(state, service_name):\n # Put this sentinel at the top of the list\n self.sentinels[0], self.sentinels[sentinel_no] = (\n sentinel, self.sentinels[0])\n return state['ip'], state['port']\n raise MasterNotFoundError(\"No master found for %r\" % (service_name,))\n\n def filter_slaves(self, slaves):\n \"Remove slaves that are in an ODOWN or SDOWN state\"\n slaves_alive = []\n for slave in slaves:\n if slave['is_odown'] or slave['is_sdown']:\n continue\n slaves_alive.append((slave['ip'], slave['port']))\n return slaves_alive\n\n def discover_slaves(self, service_name):\n \"Returns a list of alive slaves for service ``service_name``\"\n for sentinel in self.sentinels:\n try:\n slaves = sentinel.sentinel_slaves(service_name)\n except (ConnectionError, ResponseError):\n continue\n slaves = self.filter_slaves(slaves)\n if slaves:\n return slaves\n return []\n\n def master_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns a redis client instance for the ``service_name`` master.\n\n A SentinelConnectionPool class is used to retrive the master's\n address before establishing a new connection.\n\n NOTE: If the master's address has changed, any cached connections to\n the old master are closed.\n\n By default clients will be a redis.StrictRedis instance. 
Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other keyword arguments are merged with any connection_kwargs\n passed to this class and passed to the connection pool as keyword\n arguments to be used to initialize Redis connections.\n \"\"\"\n kwargs['is_master'] = True\n connection_kwargs = dict(self.connection_kwargs)\n connection_kwargs.update(kwargs)\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **connection_kwargs))\n\n def slave_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns redis client instance for the ``service_name`` slave(s).\n\n A SentinelConnectionPool class is used to retrive the slave's\n address before establishing a new connection.\n\n By default clients will be a redis.StrictRedis instance. Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other keyword arguments are merged with any connection_kwargs\n passed to this class and passed to the connection pool as keyword\n arguments to be used to initialize Redis connections.\n \"\"\"\n kwargs['is_master'] = False\n connection_kwargs = dict(self.connection_kwargs)\n connection_kwargs.update(kwargs)\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **connection_kwargs))\n", "path": "redis/sentinel.py"}], "after_files": [{"content": "import os\nimport random\nimport weakref\n\nfrom redis.client import StrictRedis\nfrom redis.connection import ConnectionPool, Connection\nfrom redis.exceptions import (ConnectionError, ResponseError, ReadOnlyError,\n TimeoutError)\nfrom redis._compat import iteritems, nativestr, xrange\n\n\nclass MasterNotFoundError(ConnectionError):\n pass\n\n\nclass SlaveNotFoundError(ConnectionError):\n pass\n\n\nclass SentinelManagedConnection(Connection):\n def __init__(self, **kwargs):\n self.connection_pool = kwargs.pop('connection_pool')\n super(SentinelManagedConnection, self).__init__(**kwargs)\n\n def __repr__(self):\n pool = self.connection_pool\n s = '%s<service=%s%%s>' % (type(self).__name__, pool.service_name)\n if self.host:\n host_info = ',host=%s,port=%s' % (self.host, self.port)\n s = s % host_info\n return s\n\n def connect_to(self, address):\n self.host, self.port = address\n super(SentinelManagedConnection, self).connect()\n if self.connection_pool.check_connection:\n self.send_command('PING')\n if nativestr(self.read_response()) != 'PONG':\n raise ConnectionError('PING failed')\n\n def connect(self):\n if self._sock:\n return # already connected\n if self.connection_pool.is_master:\n self.connect_to(self.connection_pool.get_master_address())\n else:\n for slave in self.connection_pool.rotate_slaves():\n try:\n return self.connect_to(slave)\n except ConnectionError:\n continue\n raise SlaveNotFoundError # Never be here\n\n def read_response(self):\n try:\n return super(SentinelManagedConnection, self).read_response()\n except ReadOnlyError:\n if self.connection_pool.is_master:\n # When talking to a master, a ReadOnlyError when likely\n # indicates that the previous master that we're still connected\n # to has been demoted to a slave and there's a new master.\n # calling disconnect will 
force the connection to re-query\n # sentinel during the next connect() attempt.\n self.disconnect()\n raise ConnectionError('The previous master is now a slave')\n raise\n\n\nclass SentinelConnectionPool(ConnectionPool):\n \"\"\"\n Sentinel backed connection pool.\n\n If ``check_connection`` flag is set to True, SentinelManagedConnection\n sends a PING command right after establishing the connection.\n \"\"\"\n\n def __init__(self, service_name, sentinel_manager, **kwargs):\n kwargs['connection_class'] = kwargs.get(\n 'connection_class', SentinelManagedConnection)\n self.is_master = kwargs.pop('is_master', True)\n self.check_connection = kwargs.pop('check_connection', False)\n super(SentinelConnectionPool, self).__init__(**kwargs)\n self.connection_kwargs['connection_pool'] = weakref.proxy(self)\n self.service_name = service_name\n self.sentinel_manager = sentinel_manager\n\n def __repr__(self):\n return \"%s<service=%s(%s)\" % (\n type(self).__name__,\n self.service_name,\n self.is_master and 'master' or 'slave',\n )\n\n def reset(self):\n super(SentinelConnectionPool, self).reset()\n self.master_address = None\n self.slave_rr_counter = None\n\n def get_master_address(self):\n master_address = self.sentinel_manager.discover_master(\n self.service_name)\n if self.is_master:\n if self.master_address is None:\n self.master_address = master_address\n elif master_address != self.master_address:\n # Master address changed, disconnect all clients in this pool\n self.disconnect()\n return master_address\n\n def rotate_slaves(self):\n \"Round-robin slave balancer\"\n slaves = self.sentinel_manager.discover_slaves(self.service_name)\n if slaves:\n if self.slave_rr_counter is None:\n self.slave_rr_counter = random.randint(0, len(slaves) - 1)\n for _ in xrange(len(slaves)):\n self.slave_rr_counter = (\n self.slave_rr_counter + 1) % len(slaves)\n slave = slaves[self.slave_rr_counter]\n yield slave\n # Fallback to the master connection\n try:\n yield self.get_master_address()\n except MasterNotFoundError:\n pass\n raise SlaveNotFoundError('No slave found for %r' % (self.service_name))\n\n def _checkpid(self):\n if self.pid != os.getpid():\n self.disconnect()\n self.reset()\n self.__init__(self.service_name, self.sentinel_manager,\n is_master=self.is_master,\n check_connection=self.check_connection,\n connection_class=self.connection_class,\n max_connections=self.max_connections,\n **self.connection_kwargs)\n\n\nclass Sentinel(object):\n \"\"\"\n Redis Sentinel cluster client\n\n >>> from redis.sentinel import Sentinel\n >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)\n >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)\n >>> master.set('foo', 'bar')\n >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)\n >>> slave.get('foo')\n 'bar'\n\n ``sentinels`` is a list of sentinel nodes. Each node is represented by\n a pair (hostname, port).\n\n ``min_other_sentinels`` defined a minimum number of peers for a sentinel.\n When querying a sentinel, if it doesn't meet this threshold, responses\n from that sentinel won't be considered valid.\n\n ``sentinel_kwargs`` is a dictionary of connection arguments used when\n connecting to sentinel instances. Any argument that can be passed to\n a normal Redis connection can be specified here. 
If ``sentinel_kwargs`` is\n not specified, any socket_timeout and socket_keepalive options specified\n in ``connection_kwargs`` will be used.\n\n ``connection_kwargs`` are keyword arguments that will be used when\n establishing a connection to a Redis server.\n \"\"\"\n\n def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None,\n **connection_kwargs):\n # if sentinel_kwargs isn't defined, use the socket_* options from\n # connection_kwargs\n if sentinel_kwargs is None:\n sentinel_kwargs = dict([(k, v)\n for k, v in iteritems(connection_kwargs)\n if k.startswith('socket_')\n ])\n self.sentinel_kwargs = sentinel_kwargs\n\n self.sentinels = [StrictRedis(hostname, port, **self.sentinel_kwargs)\n for hostname, port in sentinels]\n self.min_other_sentinels = min_other_sentinels\n self.connection_kwargs = connection_kwargs\n\n def __repr__(self):\n sentinel_addresses = []\n for sentinel in self.sentinels:\n sentinel_addresses.append('%s:%s' % (\n sentinel.connection_pool.connection_kwargs['host'],\n sentinel.connection_pool.connection_kwargs['port'],\n ))\n return '%s<sentinels=[%s]>' % (\n type(self).__name__,\n ','.join(sentinel_addresses))\n\n def check_master_state(self, state, service_name):\n if not state['is_master'] or state['is_sdown'] or state['is_odown']:\n return False\n # Check if our sentinel doesn't see other nodes\n if state['num-other-sentinels'] < self.min_other_sentinels:\n return False\n return True\n\n def discover_master(self, service_name):\n \"\"\"\n Asks sentinel servers for the Redis master's address corresponding\n to the service labeled ``service_name``.\n\n Returns a pair (address, port) or raises MasterNotFoundError if no\n master is found.\n \"\"\"\n for sentinel_no, sentinel in enumerate(self.sentinels):\n try:\n masters = sentinel.sentinel_masters()\n except (ConnectionError, TimeoutError):\n continue\n state = masters.get(service_name)\n if state and self.check_master_state(state, service_name):\n # Put this sentinel at the top of the list\n self.sentinels[0], self.sentinels[sentinel_no] = (\n sentinel, self.sentinels[0])\n return state['ip'], state['port']\n raise MasterNotFoundError(\"No master found for %r\" % (service_name,))\n\n def filter_slaves(self, slaves):\n \"Remove slaves that are in an ODOWN or SDOWN state\"\n slaves_alive = []\n for slave in slaves:\n if slave['is_odown'] or slave['is_sdown']:\n continue\n slaves_alive.append((slave['ip'], slave['port']))\n return slaves_alive\n\n def discover_slaves(self, service_name):\n \"Returns a list of alive slaves for service ``service_name``\"\n for sentinel in self.sentinels:\n try:\n slaves = sentinel.sentinel_slaves(service_name)\n except (ConnectionError, ResponseError, TimeoutError):\n continue\n slaves = self.filter_slaves(slaves)\n if slaves:\n return slaves\n return []\n\n def master_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns a redis client instance for the ``service_name`` master.\n\n A SentinelConnectionPool class is used to retrive the master's\n address before establishing a new connection.\n\n NOTE: If the master's address has changed, any cached connections to\n the old master are closed.\n\n By default clients will be a redis.StrictRedis instance. 
Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other keyword arguments are merged with any connection_kwargs\n passed to this class and passed to the connection pool as keyword\n arguments to be used to initialize Redis connections.\n \"\"\"\n kwargs['is_master'] = True\n connection_kwargs = dict(self.connection_kwargs)\n connection_kwargs.update(kwargs)\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **connection_kwargs))\n\n def slave_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns redis client instance for the ``service_name`` slave(s).\n\n A SentinelConnectionPool class is used to retrive the slave's\n address before establishing a new connection.\n\n By default clients will be a redis.StrictRedis instance. Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other keyword arguments are merged with any connection_kwargs\n passed to this class and passed to the connection pool as keyword\n arguments to be used to initialize Redis connections.\n \"\"\"\n kwargs['is_master'] = False\n connection_kwargs = dict(self.connection_kwargs)\n connection_kwargs.update(kwargs)\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **connection_kwargs))\n", "path": "redis/sentinel.py"}]}
| 3,537 | 267 |
gh_patches_debug_33798
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-43869
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
openvswitch_db: ovs-vsctl: "True" is not a valid boolean (use "true" or "false")
##### SUMMARY
`test/integration/targets/openvswitch_db/tests/basic.yaml:68`
```yaml
{
"changed": false,
"cmd": "/usr/bin/ovs-vsctl -t 5 set Bridge br-test stp_enable=True",
"msg": "ovs-vsctl: \"True\" is not a valid boolean (use \"true\" or \"false\")",
"rc": 1,
"stderr": "ovs-vsctl: \"True\" is not a valid boolean (use \"true\" or \"false\")\n",
"stderr_lines": [
"ovs-vsctl: \"True\" is not a valid boolean (use \"true\" or \"false\")"
],
"stdout": "",
"stdout_lines": []
}
```
Possibly caused by https://github.com/ansible/ansible/pull/42110
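For reference, the module builds its `ovs-vsctl` commands with `%`-style string interpolation (see `map_obj_to_commands` in the file below), so any value that reaches it as a Python boolean is rendered as `True`/`False` rather than the lowercase literals `ovs-vsctl` expects. A minimal plain-Python sketch of that rendering, with the parameter values assumed from the failing command above:

```python
# Plain-Python sketch, no Ansible required. The parameter values are taken
# from the failing command in the error output above; the template mirrors
# the one in map_obj_to_commands().
params = {
    "ovs-vsctl": "/usr/bin/ovs-vsctl",
    "timeout": 5,
    "table": "Bridge",
    "record": "br-test",
    "col": "stp_enable",
    "value": True,  # assumed here to arrive as a Python bool (YAML `true`)
}
cmd = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s %(col)s=%(value)s" % params
print(cmd)
# /usr/bin/ovs-vsctl -t 5 set Bridge br-test stp_enable=True
# ovs-vsctl only accepts the lowercase literals "true"/"false", so this command fails.
```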
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
openvswitch_db
##### ANSIBLE VERSION
```
2.7
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).-->
##### OS / ENVIRONMENT
<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.-->
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/modules/network/ovs/openvswitch_db.py`
Content:
```
1 #!/usr/bin/python
2 # coding: utf-8 -*-
3
4 #
5 # (c) 2015, Mark Hamilton <[email protected]>
6 # Portions copyright @ 2015 VMware, Inc.
7 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
8
9 from __future__ import absolute_import, division, print_function
10 __metaclass__ = type
11
12
13 ANSIBLE_METADATA = {'metadata_version': '1.1',
14 'status': ['preview'],
15 'supported_by': 'network'}
16
17
18 DOCUMENTATION = """
19 ---
20 module: openvswitch_db
21 author: "Mark Hamilton ([email protected])"
22 version_added: 2.0
23 short_description: Configure open vswitch database.
24 requirements: [ "ovs-vsctl >= 2.3.3" ]
25 description:
26 - Set column values in record in database table.
27 options:
28 state:
29 required: false
30 description:
31 - Configures the state of the key. When set
32 to I(present), the I(key) and I(value) pair will be set
33 on the I(record) and when set to I(absent) the I(key)
34 will not be set.
35 default: present
36 choices: ['present', 'absent']
37 version_added: "2.4"
38 table:
39 required: true
40 description:
41 - Identifies the table in the database.
42 record:
43 required: true
44 description:
45       - Identifies the record in the table.
46 col:
47 required: true
48 description:
49 - Identifies the column in the record.
50 key:
51 required: false
52 description:
53 - Identifies the key in the record column, when the column is a map
54 type.
55 value:
56 required: true
57 description:
58 - Expected value for the table, record, column and key.
59 timeout:
60 required: false
61 default: 5
62 description:
63 - How long to wait for ovs-vswitchd to respond
64 """
65
66 EXAMPLES = '''
67 # Increase the maximum idle time to 50 seconds before pruning unused kernel
68 # rules.
69 - openvswitch_db:
70 table: open_vswitch
71 record: .
72 col: other_config
73 key: max-idle
74 value: 50000
75
76 # Disable in band copy
77 - openvswitch_db:
78 table: Bridge
79 record: br-int
80 col: other_config
81 key: disable-in-band
82 value: true
83
84 # Remove in band key
85 - openvswitch_db:
86 state: present
87 table: Bridge
88 record: br-int
89 col: other_config
90 key: disable-in-band
91
92 # Mark port with tag 10
93 - openvswitch_db:
94 table: Port
95 record: port0
96 col: tag
97 value: 10
98 '''
99 import re
100
101 from ansible.module_utils.basic import AnsibleModule
102
103 # Regular expression for map type, must not be empty
104 NON_EMPTY_MAP_RE = re.compile(r'{.+}')
105 # Regular expression for a map column type
106 MAP_RE = re.compile(r'{.*}')
107
108
109 def map_obj_to_commands(want, have, module):
110 """ Define ovs-vsctl command to meet desired state """
111 commands = list()
112
113 if module.params['state'] == 'absent':
114 if 'key' in have.keys():
115 templatized_command = "%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s " \
116 "%(col)s %(key)s=%(value)s"
117 commands.append(templatized_command % module.params)
118 elif module.params['key'] is None:
119 templatized_command = "%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s " \
120 "%(col)s"
121 commands.append(templatized_command % module.params)
122 else:
123 if module.params['key'] is None:
124 templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
125 "%(col)s=%(value)s"
126 commands.append(templatized_command % module.params)
127 elif 'key' not in have.keys():
128 templatized_command = "%(ovs-vsctl)s -t %(timeout)s add %(table)s %(record)s " \
129 "%(col)s %(key)s=%(value)s"
130 commands.append(templatized_command % module.params)
131 elif want['value'] != have['value']:
132 templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
133 "%(col)s:%(key)s=%(value)s"
134 commands.append(templatized_command % module.params)
135
136 return commands
137
138
139 def map_config_to_obj(module):
140 templatized_command = "%(ovs-vsctl)s -t %(timeout)s list %(table)s %(record)s"
141 command = templatized_command % module.params
142 rc, out, err = module.run_command(command, check_rc=True)
143 if rc != 0:
144 module.fail_json(msg=err)
145
146 match = re.search(r'^' + module.params['col'] + r'(\s+):(\s+)(.*)$', out, re.M)
147
148 col_value = match.group(3)
149
150 # Map types require key argument
151 has_key = module.params['key'] is not None
152 is_map = MAP_RE.match(col_value)
153 if is_map and not has_key:
154 module.fail_json(
155 msg="missing required arguments: key for map type of column")
156
157 col_value_to_dict = {}
158 if NON_EMPTY_MAP_RE.match(col_value):
159 for kv in col_value[1:-1].split(', '):
160 k, v = kv.split('=')
161 col_value_to_dict[k.strip()] = v.strip()
162
163 obj = {
164 'table': module.params['table'],
165 'record': module.params['record'],
166 'col': module.params['col'],
167 }
168
169 if has_key and is_map:
170 if module.params['key'] in col_value_to_dict:
171 obj['key'] = module.params['key']
172 obj['value'] = col_value_to_dict[module.params['key']]
173 else:
174 obj['value'] = col_value.strip()
175
176 return obj
177
178
179 def map_params_to_obj(module):
180 obj = {
181 'table': module.params['table'],
182 'record': module.params['record'],
183 'col': module.params['col'],
184 'value': module.params['value']
185 }
186
187 key = module.params['key']
188 if key is not None:
189 obj['key'] = key
190
191 return obj
192
193
194 def main():
195 """ Entry point for ansible module. """
196 argument_spec = {
197 'state': {'default': 'present', 'choices': ['present', 'absent']},
198 'table': {'required': True},
199 'record': {'required': True},
200 'col': {'required': True},
201 'key': {'required': False},
202 'value': {'required': True},
203 'timeout': {'default': 5, 'type': 'int'},
204 }
205
206 module = AnsibleModule(argument_spec=argument_spec,
207 supports_check_mode=True)
208
209 result = {'changed': False}
210
211 # We add ovs-vsctl to module_params to later build up templatized commands
212 module.params["ovs-vsctl"] = module.get_bin_path("ovs-vsctl", True)
213
214 want = map_params_to_obj(module)
215 have = map_config_to_obj(module)
216
217 commands = map_obj_to_commands(want, have, module)
218 result['commands'] = commands
219
220 if commands:
221 if not module.check_mode:
222 for c in commands:
223 module.run_command(c, check_rc=True)
224 result['changed'] = True
225
226 module.exit_json(**result)
227
228
229 if __name__ == '__main__':
230 main()
231
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/ansible/modules/network/ovs/openvswitch_db.py b/lib/ansible/modules/network/ovs/openvswitch_db.py
--- a/lib/ansible/modules/network/ovs/openvswitch_db.py
+++ b/lib/ansible/modules/network/ovs/openvswitch_db.py
@@ -120,15 +120,14 @@
"%(col)s"
commands.append(templatized_command % module.params)
else:
+ if want == have:
+ # Nothing to commit
+ return commands
if module.params['key'] is None:
templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
"%(col)s=%(value)s"
commands.append(templatized_command % module.params)
- elif 'key' not in have.keys():
- templatized_command = "%(ovs-vsctl)s -t %(timeout)s add %(table)s %(record)s " \
- "%(col)s %(key)s=%(value)s"
- commands.append(templatized_command % module.params)
- elif want['value'] != have['value']:
+ else:
templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
"%(col)s:%(key)s=%(value)s"
commands.append(templatized_command % module.params)
@@ -171,7 +170,7 @@
obj['key'] = module.params['key']
obj['value'] = col_value_to_dict[module.params['key']]
else:
- obj['value'] = col_value.strip()
+ obj['value'] = str(col_value.strip())
return obj
@@ -199,7 +198,7 @@
'record': {'required': True},
'col': {'required': True},
'key': {'required': False},
- 'value': {'required': True},
+ 'value': {'required': True, 'type': 'str'},
'timeout': {'default': 5, 'type': 'int'},
}
|
{"golden_diff": "diff --git a/lib/ansible/modules/network/ovs/openvswitch_db.py b/lib/ansible/modules/network/ovs/openvswitch_db.py\n--- a/lib/ansible/modules/network/ovs/openvswitch_db.py\n+++ b/lib/ansible/modules/network/ovs/openvswitch_db.py\n@@ -120,15 +120,14 @@\n \"%(col)s\"\n commands.append(templatized_command % module.params)\n else:\n+ if want == have:\n+ # Nothing to commit\n+ return commands\n if module.params['key'] is None:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s \" \\\n \"%(col)s=%(value)s\"\n commands.append(templatized_command % module.params)\n- elif 'key' not in have.keys():\n- templatized_command = \"%(ovs-vsctl)s -t %(timeout)s add %(table)s %(record)s \" \\\n- \"%(col)s %(key)s=%(value)s\"\n- commands.append(templatized_command % module.params)\n- elif want['value'] != have['value']:\n+ else:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s \" \\\n \"%(col)s:%(key)s=%(value)s\"\n commands.append(templatized_command % module.params)\n@@ -171,7 +170,7 @@\n obj['key'] = module.params['key']\n obj['value'] = col_value_to_dict[module.params['key']]\n else:\n- obj['value'] = col_value.strip()\n+ obj['value'] = str(col_value.strip())\n \n return obj\n \n@@ -199,7 +198,7 @@\n 'record': {'required': True},\n 'col': {'required': True},\n 'key': {'required': False},\n- 'value': {'required': True},\n+ 'value': {'required': True, 'type': 'str'},\n 'timeout': {'default': 5, 'type': 'int'},\n }\n", "issue": "openvswitch_db: ovs-vsctl: \"True\" is not a valid boolean (use \"true\" or \"false\") \n##### SUMMARY\r\n`test/integration/targets/openvswitch_db/tests/basic.yaml:68`\r\n\r\n```yaml\r\n{\r\n\"changed\": false, \r\n\"cmd\": \"/usr/bin/ovs-vsctl -t 5 set Bridge br-test stp_enable=True\", \r\n\"msg\": \"ovs-vsctl: \\\"True\\\" is not a valid boolean (use \\\"true\\\" or \\\"false\\\")\", \r\n\"rc\": 1, \r\n\"stderr\": \"ovs-vsctl: \\\"True\\\" is not a valid boolean (use \\\"true\\\" or \\\"false\\\")\\n\", \r\n\"stderr_lines\": [\r\n\"ovs-vsctl: \\\"True\\\" is not a valid boolean (use \\\"true\\\" or \\\"false\\\")\"\r\n], \r\n\"stdout\": \"\", \r\n\"stdout_lines\": []\r\n}\r\n```\r\n\r\nPossibly caused by https://github.com/ansible/ansible/pull/42110\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nopenvswitch_db\r\n\r\n##### ANSIBLE VERSION\r\n```\r\n2.7\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).-->\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. if this is a network bug the version of firmware on the network device.-->\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? 
-->\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\n\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# coding: utf-8 -*-\n\n#\n# (c) 2015, Mark Hamilton <[email protected]>\n# Portions copyright @ 2015 VMware, Inc.\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'network'}\n\n\nDOCUMENTATION = \"\"\"\n---\nmodule: openvswitch_db\nauthor: \"Mark Hamilton ([email protected])\"\nversion_added: 2.0\nshort_description: Configure open vswitch database.\nrequirements: [ \"ovs-vsctl >= 2.3.3\" ]\ndescription:\n - Set column values in record in database table.\noptions:\n state:\n required: false\n description:\n - Configures the state of the key. When set\n to I(present), the I(key) and I(value) pair will be set\n on the I(record) and when set to I(absent) the I(key)\n will not be set.\n default: present\n choices: ['present', 'absent']\n version_added: \"2.4\"\n table:\n required: true\n description:\n - Identifies the table in the database.\n record:\n required: true\n description:\n - Identifies the recoard in the table.\n col:\n required: true\n description:\n - Identifies the column in the record.\n key:\n required: false\n description:\n - Identifies the key in the record column, when the column is a map\n type.\n value:\n required: true\n description:\n - Expected value for the table, record, column and key.\n timeout:\n required: false\n default: 5\n description:\n - How long to wait for ovs-vswitchd to respond\n\"\"\"\n\nEXAMPLES = '''\n# Increase the maximum idle time to 50 seconds before pruning unused kernel\n# rules.\n- openvswitch_db:\n table: open_vswitch\n record: .\n col: other_config\n key: max-idle\n value: 50000\n\n# Disable in band copy\n- openvswitch_db:\n table: Bridge\n record: br-int\n col: other_config\n key: disable-in-band\n value: true\n\n# Remove in band key\n- openvswitch_db:\n state: present\n table: Bridge\n record: br-int\n col: other_config\n key: disable-in-band\n\n# Mark port with tag 10\n- openvswitch_db:\n table: Port\n record: port0\n col: tag\n value: 10\n'''\nimport re\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n# Regular expression for map type, must not be empty\nNON_EMPTY_MAP_RE = re.compile(r'{.+}')\n# Regular expression for a map column type\nMAP_RE = re.compile(r'{.*}')\n\n\ndef map_obj_to_commands(want, have, module):\n \"\"\" Define ovs-vsctl command to meet desired state \"\"\"\n commands = list()\n\n if module.params['state'] == 'absent':\n if 'key' in have.keys():\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s \" \\\n \"%(col)s %(key)s=%(value)s\"\n commands.append(templatized_command % module.params)\n elif module.params['key'] is None:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s \" \\\n \"%(col)s\"\n commands.append(templatized_command % module.params)\n else:\n if module.params['key'] is None:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s \" \\\n \"%(col)s=%(value)s\"\n commands.append(templatized_command % module.params)\n elif 'key' not in have.keys():\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s add %(table)s %(record)s \" \\\n 
\"%(col)s %(key)s=%(value)s\"\n commands.append(templatized_command % module.params)\n elif want['value'] != have['value']:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s \" \\\n \"%(col)s:%(key)s=%(value)s\"\n commands.append(templatized_command % module.params)\n\n return commands\n\n\ndef map_config_to_obj(module):\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s list %(table)s %(record)s\"\n command = templatized_command % module.params\n rc, out, err = module.run_command(command, check_rc=True)\n if rc != 0:\n module.fail_json(msg=err)\n\n match = re.search(r'^' + module.params['col'] + r'(\\s+):(\\s+)(.*)$', out, re.M)\n\n col_value = match.group(3)\n\n # Map types require key argument\n has_key = module.params['key'] is not None\n is_map = MAP_RE.match(col_value)\n if is_map and not has_key:\n module.fail_json(\n msg=\"missing required arguments: key for map type of column\")\n\n col_value_to_dict = {}\n if NON_EMPTY_MAP_RE.match(col_value):\n for kv in col_value[1:-1].split(', '):\n k, v = kv.split('=')\n col_value_to_dict[k.strip()] = v.strip()\n\n obj = {\n 'table': module.params['table'],\n 'record': module.params['record'],\n 'col': module.params['col'],\n }\n\n if has_key and is_map:\n if module.params['key'] in col_value_to_dict:\n obj['key'] = module.params['key']\n obj['value'] = col_value_to_dict[module.params['key']]\n else:\n obj['value'] = col_value.strip()\n\n return obj\n\n\ndef map_params_to_obj(module):\n obj = {\n 'table': module.params['table'],\n 'record': module.params['record'],\n 'col': module.params['col'],\n 'value': module.params['value']\n }\n\n key = module.params['key']\n if key is not None:\n obj['key'] = key\n\n return obj\n\n\ndef main():\n \"\"\" Entry point for ansible module. 
\"\"\"\n argument_spec = {\n 'state': {'default': 'present', 'choices': ['present', 'absent']},\n 'table': {'required': True},\n 'record': {'required': True},\n 'col': {'required': True},\n 'key': {'required': False},\n 'value': {'required': True},\n 'timeout': {'default': 5, 'type': 'int'},\n }\n\n module = AnsibleModule(argument_spec=argument_spec,\n supports_check_mode=True)\n\n result = {'changed': False}\n\n # We add ovs-vsctl to module_params to later build up templatized commands\n module.params[\"ovs-vsctl\"] = module.get_bin_path(\"ovs-vsctl\", True)\n\n want = map_params_to_obj(module)\n have = map_config_to_obj(module)\n\n commands = map_obj_to_commands(want, have, module)\n result['commands'] = commands\n\n if commands:\n if not module.check_mode:\n for c in commands:\n module.run_command(c, check_rc=True)\n result['changed'] = True\n\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/network/ovs/openvswitch_db.py"}], "after_files": [{"content": "#!/usr/bin/python\n# coding: utf-8 -*-\n\n#\n# (c) 2015, Mark Hamilton <[email protected]>\n# Portions copyright @ 2015 VMware, Inc.\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'network'}\n\n\nDOCUMENTATION = \"\"\"\n---\nmodule: openvswitch_db\nauthor: \"Mark Hamilton ([email protected])\"\nversion_added: 2.0\nshort_description: Configure open vswitch database.\nrequirements: [ \"ovs-vsctl >= 2.3.3\" ]\ndescription:\n - Set column values in record in database table.\noptions:\n state:\n required: false\n description:\n - Configures the state of the key. 
When set\n to I(present), the I(key) and I(value) pair will be set\n on the I(record) and when set to I(absent) the I(key)\n will not be set.\n default: present\n choices: ['present', 'absent']\n version_added: \"2.4\"\n table:\n required: true\n description:\n - Identifies the table in the database.\n record:\n required: true\n description:\n - Identifies the recoard in the table.\n col:\n required: true\n description:\n - Identifies the column in the record.\n key:\n required: false\n description:\n - Identifies the key in the record column, when the column is a map\n type.\n value:\n required: true\n description:\n - Expected value for the table, record, column and key.\n timeout:\n required: false\n default: 5\n description:\n - How long to wait for ovs-vswitchd to respond\n\"\"\"\n\nEXAMPLES = '''\n# Increase the maximum idle time to 50 seconds before pruning unused kernel\n# rules.\n- openvswitch_db:\n table: open_vswitch\n record: .\n col: other_config\n key: max-idle\n value: 50000\n\n# Disable in band copy\n- openvswitch_db:\n table: Bridge\n record: br-int\n col: other_config\n key: disable-in-band\n value: true\n\n# Remove in band key\n- openvswitch_db:\n state: present\n table: Bridge\n record: br-int\n col: other_config\n key: disable-in-band\n\n# Mark port with tag 10\n- openvswitch_db:\n table: Port\n record: port0\n col: tag\n value: 10\n'''\nimport re\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n# Regular expression for map type, must not be empty\nNON_EMPTY_MAP_RE = re.compile(r'{.+}')\n# Regular expression for a map column type\nMAP_RE = re.compile(r'{.*}')\n\n\ndef map_obj_to_commands(want, have, module):\n \"\"\" Define ovs-vsctl command to meet desired state \"\"\"\n commands = list()\n\n if module.params['state'] == 'absent':\n if 'key' in have.keys():\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s \" \\\n \"%(col)s %(key)s=%(value)s\"\n commands.append(templatized_command % module.params)\n elif module.params['key'] is None:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s remove %(table)s %(record)s \" \\\n \"%(col)s\"\n commands.append(templatized_command % module.params)\n else:\n if want == have:\n # Nothing to commit\n return commands\n if module.params['key'] is None:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s \" \\\n \"%(col)s=%(value)s\"\n commands.append(templatized_command % module.params)\n else:\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s \" \\\n \"%(col)s:%(key)s=%(value)s\"\n commands.append(templatized_command % module.params)\n\n return commands\n\n\ndef map_config_to_obj(module):\n templatized_command = \"%(ovs-vsctl)s -t %(timeout)s list %(table)s %(record)s\"\n command = templatized_command % module.params\n rc, out, err = module.run_command(command, check_rc=True)\n if rc != 0:\n module.fail_json(msg=err)\n\n match = re.search(r'^' + module.params['col'] + r'(\\s+):(\\s+)(.*)$', out, re.M)\n\n col_value = match.group(3)\n\n # Map types require key argument\n has_key = module.params['key'] is not None\n is_map = MAP_RE.match(col_value)\n if is_map and not has_key:\n module.fail_json(\n msg=\"missing required arguments: key for map type of column\")\n\n col_value_to_dict = {}\n if NON_EMPTY_MAP_RE.match(col_value):\n for kv in col_value[1:-1].split(', '):\n k, v = kv.split('=')\n col_value_to_dict[k.strip()] = v.strip()\n\n obj = {\n 'table': module.params['table'],\n 'record': module.params['record'],\n 'col': 
module.params['col'],\n }\n\n if has_key and is_map:\n if module.params['key'] in col_value_to_dict:\n obj['key'] = module.params['key']\n obj['value'] = col_value_to_dict[module.params['key']]\n else:\n obj['value'] = str(col_value.strip())\n\n return obj\n\n\ndef map_params_to_obj(module):\n obj = {\n 'table': module.params['table'],\n 'record': module.params['record'],\n 'col': module.params['col'],\n 'value': module.params['value']\n }\n\n key = module.params['key']\n if key is not None:\n obj['key'] = key\n\n return obj\n\n\ndef main():\n \"\"\" Entry point for ansible module. \"\"\"\n argument_spec = {\n 'state': {'default': 'present', 'choices': ['present', 'absent']},\n 'table': {'required': True},\n 'record': {'required': True},\n 'col': {'required': True},\n 'key': {'required': False},\n 'value': {'required': True, 'type': 'str'},\n 'timeout': {'default': 5, 'type': 'int'},\n }\n\n module = AnsibleModule(argument_spec=argument_spec,\n supports_check_mode=True)\n\n result = {'changed': False}\n\n # We add ovs-vsctl to module_params to later build up templatized commands\n module.params[\"ovs-vsctl\"] = module.get_bin_path(\"ovs-vsctl\", True)\n\n want = map_params_to_obj(module)\n have = map_config_to_obj(module)\n\n commands = map_obj_to_commands(want, have, module)\n result['commands'] = commands\n\n if commands:\n if not module.check_mode:\n for c in commands:\n module.run_command(c, check_rc=True)\n result['changed'] = True\n\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/network/ovs/openvswitch_db.py"}]}
| 3,103 | 471 |
gh_patches_debug_21823
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-2259
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
slurm provider can lose track of blocks if squeue polling isn't fast enough, leading to parsl hang
**Describe the bug**
The slurm provider can lose track of slurm jobs/blocks in certain circumstances. It will not report any status change for those blocks, and so if a block has finished but the current recorded status is that the block is running, then the block will be reported as running forever.
This can result in parsl hanging forever, reporting one block available, as it believes it has the ability to make progress when it does not, like this:
```
2020-03-03 06:57:54.444 parsl.dataflow.strategy:199 [DEBUG] Executor worker-nodes has 109 active tasks, 1/0 running/pending blocks, and 0 connected workers
...
2020-03-03 08:48:29 parsl.dataflow.strategy:199 [DEBUG] Executor worker-nodes has 109 active tasks, 1/0 running/pending blocks, and 0 connected workers
```
There is a race condition between the provider polling for jobs with `squeue` and slurm moving jobs from `C` (completed) state to not being known/listed.
If that move from `C` to non-existent happens before `slurm._status()` has polled the job and seen the completion, every further `squeue` poll exits with a non-zero unix exit code, and `slurm.py` silently treats that as a slurm failure forever, rather than as a job completion.
It looks like this may have been introduced around 37250848b8bbf6bdbed3b3ec2f54d5482b79ba5f.
Here is an example of such a poll:
```
bxc@cori02:~> squeue --job 28509629
slurm_load_jobs error: Invalid job id specified
bxc@cori02:~> echo $?
1
```
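A short, self-contained sketch of why that non-zero exit code turns into a permanent hang: the early return in `_status()` means the cached block status is never updated again. The `JobState` enum here is only a stand-in for parsl's own, used to keep the example runnable:

```python
from enum import Enum

class JobState(Enum):   # stand-in for parsl.providers.provider_base.JobState
    RUNNING = 1
    COMPLETED = 2

# what SlurmProvider keeps in self.resources after submitting block 28509629
resources = {"28509629": {"job_id": "28509629", "status": JobState.RUNNING}}

def poll(squeue_retcode):
    """Mirrors the early return in SlurmProvider._status() shown below."""
    if squeue_retcode != 0:
        return  # poll "failed": nothing gets updated
    # the normal path would mark jobs missing from squeue output as COMPLETED

poll(1)  # slurm has already forgotten the completed job, so squeue exits 1
print(resources["28509629"]["status"])  # JobState.RUNNING, and it stays that way
```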
Jobs which are executing will die with `ManagerLost` and cause progress, but jobs which are queued for that executor but waiting to be executed will sit, waiting, forever.
**To Reproduce**
This needs a delay in polling long enough for the completed job to disappear from the slurm queue.
**Expected behavior**
parsl should never hang without making progress.
at the very least, failures in executing slurm commands should be logged as WARNINGs rather than silently ignored.
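One way that could look, sketched as a hypothetical standalone helper rather than real parsl API (the fix that was eventually adopted takes a different route and simply stops re-polling jobs already recorded as terminal): when `squeue` answers `Invalid job id specified` for a job we submitted, the job has left the queue and can safely stop being reported as running.

```python
from typing import Dict

def reconcile_failed_poll(retcode: int, stderr: str, resources: Dict[str, dict]) -> None:
    """Hypothetical helper (not parsl API): interpret a failed `squeue --job ...` poll.

    `resources` mirrors SlurmProvider.resources: {job_id: {'job_id': ..., 'status': ...}}.
    """
    if retcode == 0:
        return
    # at minimum, surface the failure instead of silently dropping it
    print("WARNING: squeue failed (rc=%d): %s" % (retcode, stderr.strip()))
    if "Invalid job id specified" in stderr:
        # slurm no longer knows the job at all, so it cannot still be running
        for job in resources.values():
            job["status"] = "COMPLETED"  # stand-in for JobStatus(JobState.COMPLETED)

# e.g. with the poll shown above:
jobs = {"28509629": {"job_id": "28509629", "status": "RUNNING"}}
reconcile_failed_poll(1, "slurm_load_jobs error: Invalid job id specified\n", jobs)
print(jobs["28509629"]["status"])  # COMPLETED
```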
**Environment**
cori. `lsst-dm-202002` branch of parsl.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/providers/slurm/slurm.py`
Content:
```
1 import os
2 import math
3 import time
4 import logging
5 import typeguard
6
7 from typing import Optional
8
9 from parsl.channels import LocalChannel
10 from parsl.channels.base import Channel
11 from parsl.launchers import SingleNodeLauncher
12 from parsl.launchers.launchers import Launcher
13 from parsl.providers.cluster_provider import ClusterProvider
14 from parsl.providers.provider_base import JobState, JobStatus
15 from parsl.providers.slurm.template import template_string
16 from parsl.utils import RepresentationMixin, wtime_to_minutes
17
18 logger = logging.getLogger(__name__)
19
20 translate_table = {
21 'PD': JobState.PENDING,
22 'R': JobState.RUNNING,
23 'CA': JobState.CANCELLED,
24 'CF': JobState.PENDING, # (configuring),
25 'CG': JobState.RUNNING, # (completing),
26 'CD': JobState.COMPLETED,
27 'F': JobState.FAILED, # (failed),
28 'TO': JobState.TIMEOUT, # (timeout),
29 'NF': JobState.FAILED, # (node failure),
30 'RV': JobState.FAILED, # (revoked) and
31 'SE': JobState.FAILED # (special exit state)
32 }
33
34
35 class SlurmProvider(ClusterProvider, RepresentationMixin):
36 """Slurm Execution Provider
37
38 This provider uses sbatch to submit, squeue for status and scancel to cancel
39 jobs. The sbatch script to be used is created from a template file in this
40 same module.
41
42 Parameters
43 ----------
44 partition : str
45 Slurm partition to request blocks from. If unspecified or ``None``, no partition slurm directive will be specified.
46 account : str
47 Slurm account to which to charge resources used by the job. If unspecified or ``None``, the job will use the
48 user's default account.
49 channel : Channel
50 Channel for accessing this provider. Possible channels include
51 :class:`~parsl.channels.LocalChannel` (the default),
52 :class:`~parsl.channels.SSHChannel`, or
53 :class:`~parsl.channels.SSHInteractiveLoginChannel`.
54 nodes_per_block : int
55 Nodes to provision per block.
56 cores_per_node : int
57 Specify the number of cores to provision per node. If set to None, executors
58 will assume all cores on the node are available for computation. Default is None.
59 mem_per_node : int
60 Specify the real memory to provision per node in GB. If set to None, no
61 explicit request to the scheduler will be made. Default is None.
62 min_blocks : int
63 Minimum number of blocks to maintain.
64 max_blocks : int
65 Maximum number of blocks to maintain.
66 parallelism : float
67 Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive
68 scaling where as many resources as possible are used; parallelism close to 0 represents
69 the opposite situation in which as few resources as possible (i.e., min_blocks) are used.
70 walltime : str
71 Walltime requested per block in HH:MM:SS.
72 scheduler_options : str
73 String to prepend to the #SBATCH blocks in the submit script to the scheduler.
74 worker_init : str
75 Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.
76 exclusive : bool (Default = True)
77 Requests nodes which are not shared with other running jobs.
78 launcher : Launcher
79 Launcher for this provider. Possible launchers include
80 :class:`~parsl.launchers.SingleNodeLauncher` (the default),
81 :class:`~parsl.launchers.SrunLauncher`, or
82 :class:`~parsl.launchers.AprunLauncher`
83 move_files : Optional[Bool]: should files be moved? by default, Parsl will try to move files.
84 """
85
86 @typeguard.typechecked
87 def __init__(self,
88 partition: Optional[str] = None,
89 account: Optional[str] = None,
90 channel: Channel = LocalChannel(),
91 nodes_per_block: int = 1,
92 cores_per_node: Optional[int] = None,
93 mem_per_node: Optional[int] = None,
94 init_blocks: int = 1,
95 min_blocks: int = 0,
96 max_blocks: int = 1,
97 parallelism: float = 1,
98 walltime: str = "00:10:00",
99 scheduler_options: str = '',
100 worker_init: str = '',
101 cmd_timeout: int = 10,
102 exclusive: bool = True,
103 move_files: bool = True,
104 launcher: Launcher = SingleNodeLauncher()):
105 label = 'slurm'
106 super().__init__(label,
107 channel,
108 nodes_per_block,
109 init_blocks,
110 min_blocks,
111 max_blocks,
112 parallelism,
113 walltime,
114 cmd_timeout=cmd_timeout,
115 launcher=launcher)
116
117 self.partition = partition
118 self.cores_per_node = cores_per_node
119 self.mem_per_node = mem_per_node
120 self.exclusive = exclusive
121 self.move_files = move_files
122 self.account = account
123 self.scheduler_options = scheduler_options + '\n'
124 if exclusive:
125 self.scheduler_options += "#SBATCH --exclusive\n"
126 if partition:
127 self.scheduler_options += "#SBATCH --partition={}\n".format(partition)
128 if account:
129 self.scheduler_options += "#SBATCH --account={}\n".format(account)
130 self.worker_init = worker_init + '\n'
131
132 def _status(self):
133 ''' Internal: Do not call. Returns the status list for a list of job_ids
134
135 Args:
136 self
137
138 Returns:
139 [status...] : Status list of all jobs
140 '''
141 job_id_list = ','.join(self.resources.keys())
142 cmd = "squeue --job {0}".format(job_id_list)
143 logger.debug("Executing sqeueue")
144 retcode, stdout, stderr = self.execute_wait(cmd)
145 logger.debug("sqeueue returned")
146
147 # Execute_wait failed. Do no update
148 if retcode != 0:
149 logger.warning("squeue failed with non-zero exit code {} - see https://github.com/Parsl/parsl/issues/1588".format(retcode))
150 return
151
152 jobs_missing = list(self.resources.keys())
153 for line in stdout.split('\n'):
154 parts = line.split()
155 if parts and parts[0] != 'JOBID':
156 job_id = parts[0]
157 status = translate_table.get(parts[4], JobState.UNKNOWN)
158 logger.debug("Updating job {} with slurm status {} to parsl status {}".format(job_id, parts[4], status))
159 self.resources[job_id]['status'] = JobStatus(status)
160 jobs_missing.remove(job_id)
161
162 # squeue does not report on jobs that are not running. So we are filling in the
163 # blanks for missing jobs, we might lose some information about why the jobs failed.
164 for missing_job in jobs_missing:
165 logger.debug("Updating missing job {} to completed status".format(missing_job))
166 self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)
167
168 def submit(self, command, tasks_per_node, job_name="parsl.slurm"):
169 """Submit the command as a slurm job.
170
171 Parameters
172 ----------
173 command : str
174 Command to be made on the remote side.
175 tasks_per_node : int
176 Command invocations to be launched per node
177 job_name : str
178 Name for the job
179 Returns
180 -------
181 None or str
182 If at capacity, returns None; otherwise, a string identifier for the job
183 """
184
185 scheduler_options = self.scheduler_options
186 worker_init = self.worker_init
187 if self.mem_per_node is not None:
188 scheduler_options += '#SBATCH --mem={}g\n'.format(self.mem_per_node)
189 worker_init += 'export PARSL_MEMORY_GB={}\n'.format(self.mem_per_node)
190 if self.cores_per_node is not None:
191 cpus_per_task = math.floor(self.cores_per_node / tasks_per_node)
192 scheduler_options += '#SBATCH --cpus-per-task={}'.format(cpus_per_task)
193 worker_init += 'export PARSL_CORES={}\n'.format(cpus_per_task)
194
195 job_name = "{0}.{1}".format(job_name, time.time())
196
197 script_path = "{0}/{1}.submit".format(self.script_dir, job_name)
198 script_path = os.path.abspath(script_path)
199
200 logger.debug("Requesting one block with {} nodes".format(self.nodes_per_block))
201
202 job_config = {}
203 job_config["submit_script_dir"] = self.channel.script_dir
204 job_config["nodes"] = self.nodes_per_block
205 job_config["tasks_per_node"] = tasks_per_node
206 job_config["walltime"] = wtime_to_minutes(self.walltime)
207 job_config["scheduler_options"] = scheduler_options
208 job_config["worker_init"] = worker_init
209 job_config["user_script"] = command
210
211 # Wrap the command
212 job_config["user_script"] = self.launcher(command,
213 tasks_per_node,
214 self.nodes_per_block)
215
216 logger.debug("Writing submit script")
217 self._write_submit_script(template_string, script_path, job_name, job_config)
218
219 if self.move_files:
220 logger.debug("moving files")
221 channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)
222 else:
223 logger.debug("not moving files")
224 channel_script_path = script_path
225
226 retcode, stdout, stderr = self.execute_wait("sbatch {0}".format(channel_script_path))
227
228 job_id = None
229 if retcode == 0:
230 for line in stdout.split('\n'):
231 if line.startswith("Submitted batch job"):
232 job_id = line.split("Submitted batch job")[1].strip()
233 self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}
234 else:
235 print("Submission of command to scale_out failed")
236 logger.error("Retcode:%s STDOUT:%s STDERR:%s", retcode, stdout.strip(), stderr.strip())
237 return job_id
238
239 def cancel(self, job_ids):
240 ''' Cancels the jobs specified by a list of job ids
241
242 Args:
243 job_ids : [<job_id> ...]
244
245 Returns :
246 [True/False...] : If the cancel operation fails the entire list will be False.
247 '''
248
249 job_id_list = ' '.join(job_ids)
250 retcode, stdout, stderr = self.execute_wait("scancel {0}".format(job_id_list))
251 rets = None
252 if retcode == 0:
253 for jid in job_ids:
254 self.resources[jid]['status'] = JobStatus(JobState.CANCELLED) # Setting state to cancelled
255 rets = [True for i in job_ids]
256 else:
257 rets = [False for i in job_ids]
258
259 return rets
260
261 @property
262 def status_polling_interval(self):
263 return 60
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsl/providers/slurm/slurm.py b/parsl/providers/slurm/slurm.py
--- a/parsl/providers/slurm/slurm.py
+++ b/parsl/providers/slurm/slurm.py
@@ -138,15 +138,21 @@
Returns:
[status...] : Status list of all jobs
'''
- job_id_list = ','.join(self.resources.keys())
+ job_id_list = ','.join(
+ [jid for jid, job in self.resources.keys() if not job['status'].terminal]
+ )
+ if not job_id_list:
+ logger.debug('No active jobs, skipping status update')
+ return
+
cmd = "squeue --job {0}".format(job_id_list)
- logger.debug("Executing sqeueue")
+ logger.debug("Executing %s", cmd)
retcode, stdout, stderr = self.execute_wait(cmd)
- logger.debug("sqeueue returned")
+ logger.debug("sqeueue returned %s %s", stdout, stderr)
# Execute_wait failed. Do no update
if retcode != 0:
- logger.warning("squeue failed with non-zero exit code {} - see https://github.com/Parsl/parsl/issues/1588".format(retcode))
+ logger.warning("squeue failed with non-zero exit code {}".format(retcode))
return
jobs_missing = list(self.resources.keys())
|
{"golden_diff": "diff --git a/parsl/providers/slurm/slurm.py b/parsl/providers/slurm/slurm.py\n--- a/parsl/providers/slurm/slurm.py\n+++ b/parsl/providers/slurm/slurm.py\n@@ -138,15 +138,21 @@\n Returns:\n [status...] : Status list of all jobs\n '''\n- job_id_list = ','.join(self.resources.keys())\n+ job_id_list = ','.join(\n+ [jid for jid, job in self.resources.keys() if not job['status'].terminal]\n+ )\n+ if not job_id_list:\n+ logger.debug('No active jobs, skipping status update')\n+ return\n+\n cmd = \"squeue --job {0}\".format(job_id_list)\n- logger.debug(\"Executing sqeueue\")\n+ logger.debug(\"Executing %s\", cmd)\n retcode, stdout, stderr = self.execute_wait(cmd)\n- logger.debug(\"sqeueue returned\")\n+ logger.debug(\"sqeueue returned %s %s\", stdout, stderr)\n \n # Execute_wait failed. Do no update\n if retcode != 0:\n- logger.warning(\"squeue failed with non-zero exit code {} - see https://github.com/Parsl/parsl/issues/1588\".format(retcode))\n+ logger.warning(\"squeue failed with non-zero exit code {}\".format(retcode))\n return\n \n jobs_missing = list(self.resources.keys())\n", "issue": "slurm provider can lose track of blocks if squeue polling isn't fast enough, leading to parsl hang\n**Describe the bug**\r\nThe slurm provider can lose track of slurm jobs/blocks in certain circumstances. It will not report any status change for those blocks, and so if a block has finished but the current recorded status is that the block is running, then the block will be reported as running forever.\r\nThis can result in parsl hanging forever, reporting one block available, as it believes it has the ability to make progress when it does not, like this:\r\n\r\n```\r\n2020-03-03 06:57:54.444 parsl.dataflow.strategy:199 [DEBUG] Executor worker-nodes has 109 active tasks, 1/0 running/pending blocks, and 0 connected workers\r\n...\r\n2020-03-03 08:48:29 parsl.dataflow.strategy:199 [DEBUG] Executor worker-nodes has 109 active tasks, 1/0 running/pending blocks, and 0 connected workers\r\n```\r\n\r\nThere is a race condition between the provider polling for jobs with `squeue` and slurm moving jobs from `C` (completed) state to not being known/listed.\r\n\r\nIf that move from `C` to non-existent happens before `slurm._status()` successfully polls that job to discover completion, then further polls with `squeue` will exit with non-zero unix exit code, and this will be ignored silently by `slurm.py` as a slurm failure, forever, rather than as a job completion.\r\n\r\nIt looks like this may have been introduced around 37250848b8bbf6bdbed3b3ec2f54d5482b79ba5f\r\n\r\nHere is an example of such a poll:\r\n```\r\nbxc@cori02:~> squeue --job 28509629\r\nslurm_load_jobs error: Invalid job id specified\r\nbxc@cori02:~> echo $?\r\n1\r\n```\r\n\r\nJobs which are executing will die with `ManagerLost` and cause progress, but jobs which are queued for that executor but waiting to be executed will sit, waiting, forever.\r\n\r\n**To Reproduce**\r\nThis needs a delay in polling long enough for the completed job to disappear from the slurm queue.\r\n\r\n**Expected behavior**\r\nparsl should never hang without making progress.\r\n\r\nat the very least, failures in executing slurm commands should be logged as WARNINGs rather than silently ignored.\r\n\r\n**Environment**\r\ncori. 
`lsst-dm-202002` branch of parsl.\n", "before_files": [{"content": "import os\nimport math\nimport time\nimport logging\nimport typeguard\n\nfrom typing import Optional\n\nfrom parsl.channels import LocalChannel\nfrom parsl.channels.base import Channel\nfrom parsl.launchers import SingleNodeLauncher\nfrom parsl.launchers.launchers import Launcher\nfrom parsl.providers.cluster_provider import ClusterProvider\nfrom parsl.providers.provider_base import JobState, JobStatus\nfrom parsl.providers.slurm.template import template_string\nfrom parsl.utils import RepresentationMixin, wtime_to_minutes\n\nlogger = logging.getLogger(__name__)\n\ntranslate_table = {\n 'PD': JobState.PENDING,\n 'R': JobState.RUNNING,\n 'CA': JobState.CANCELLED,\n 'CF': JobState.PENDING, # (configuring),\n 'CG': JobState.RUNNING, # (completing),\n 'CD': JobState.COMPLETED,\n 'F': JobState.FAILED, # (failed),\n 'TO': JobState.TIMEOUT, # (timeout),\n 'NF': JobState.FAILED, # (node failure),\n 'RV': JobState.FAILED, # (revoked) and\n 'SE': JobState.FAILED # (special exit state)\n}\n\n\nclass SlurmProvider(ClusterProvider, RepresentationMixin):\n \"\"\"Slurm Execution Provider\n\n This provider uses sbatch to submit, squeue for status and scancel to cancel\n jobs. The sbatch script to be used is created from a template file in this\n same module.\n\n Parameters\n ----------\n partition : str\n Slurm partition to request blocks from. If unspecified or ``None``, no partition slurm directive will be specified.\n account : str\n Slurm account to which to charge resources used by the job. If unspecified or ``None``, the job will use the\n user's default account.\n channel : Channel\n Channel for accessing this provider. Possible channels include\n :class:`~parsl.channels.LocalChannel` (the default),\n :class:`~parsl.channels.SSHChannel`, or\n :class:`~parsl.channels.SSHInteractiveLoginChannel`.\n nodes_per_block : int\n Nodes to provision per block.\n cores_per_node : int\n Specify the number of cores to provision per node. If set to None, executors\n will assume all cores on the node are available for computation. Default is None.\n mem_per_node : int\n Specify the real memory to provision per node in GB. If set to None, no\n explicit request to the scheduler will be made. Default is None.\n min_blocks : int\n Minimum number of blocks to maintain.\n max_blocks : int\n Maximum number of blocks to maintain.\n parallelism : float\n Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive\n scaling where as many resources as possible are used; parallelism close to 0 represents\n the opposite situation in which as few resources as possible (i.e., min_blocks) are used.\n walltime : str\n Walltime requested per block in HH:MM:SS.\n scheduler_options : str\n String to prepend to the #SBATCH blocks in the submit script to the scheduler.\n worker_init : str\n Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.\n exclusive : bool (Default = True)\n Requests nodes which are not shared with other running jobs.\n launcher : Launcher\n Launcher for this provider. Possible launchers include\n :class:`~parsl.launchers.SingleNodeLauncher` (the default),\n :class:`~parsl.launchers.SrunLauncher`, or\n :class:`~parsl.launchers.AprunLauncher`\n move_files : Optional[Bool]: should files be moved? 
by default, Parsl will try to move files.\n \"\"\"\n\n @typeguard.typechecked\n def __init__(self,\n partition: Optional[str] = None,\n account: Optional[str] = None,\n channel: Channel = LocalChannel(),\n nodes_per_block: int = 1,\n cores_per_node: Optional[int] = None,\n mem_per_node: Optional[int] = None,\n init_blocks: int = 1,\n min_blocks: int = 0,\n max_blocks: int = 1,\n parallelism: float = 1,\n walltime: str = \"00:10:00\",\n scheduler_options: str = '',\n worker_init: str = '',\n cmd_timeout: int = 10,\n exclusive: bool = True,\n move_files: bool = True,\n launcher: Launcher = SingleNodeLauncher()):\n label = 'slurm'\n super().__init__(label,\n channel,\n nodes_per_block,\n init_blocks,\n min_blocks,\n max_blocks,\n parallelism,\n walltime,\n cmd_timeout=cmd_timeout,\n launcher=launcher)\n\n self.partition = partition\n self.cores_per_node = cores_per_node\n self.mem_per_node = mem_per_node\n self.exclusive = exclusive\n self.move_files = move_files\n self.account = account\n self.scheduler_options = scheduler_options + '\\n'\n if exclusive:\n self.scheduler_options += \"#SBATCH --exclusive\\n\"\n if partition:\n self.scheduler_options += \"#SBATCH --partition={}\\n\".format(partition)\n if account:\n self.scheduler_options += \"#SBATCH --account={}\\n\".format(account)\n self.worker_init = worker_init + '\\n'\n\n def _status(self):\n ''' Internal: Do not call. Returns the status list for a list of job_ids\n\n Args:\n self\n\n Returns:\n [status...] : Status list of all jobs\n '''\n job_id_list = ','.join(self.resources.keys())\n cmd = \"squeue --job {0}\".format(job_id_list)\n logger.debug(\"Executing sqeueue\")\n retcode, stdout, stderr = self.execute_wait(cmd)\n logger.debug(\"sqeueue returned\")\n\n # Execute_wait failed. Do no update\n if retcode != 0:\n logger.warning(\"squeue failed with non-zero exit code {} - see https://github.com/Parsl/parsl/issues/1588\".format(retcode))\n return\n\n jobs_missing = list(self.resources.keys())\n for line in stdout.split('\\n'):\n parts = line.split()\n if parts and parts[0] != 'JOBID':\n job_id = parts[0]\n status = translate_table.get(parts[4], JobState.UNKNOWN)\n logger.debug(\"Updating job {} with slurm status {} to parsl status {}\".format(job_id, parts[4], status))\n self.resources[job_id]['status'] = JobStatus(status)\n jobs_missing.remove(job_id)\n\n # squeue does not report on jobs that are not running. 
So we are filling in the\n # blanks for missing jobs, we might lose some information about why the jobs failed.\n for missing_job in jobs_missing:\n logger.debug(\"Updating missing job {} to completed status\".format(missing_job))\n self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)\n\n def submit(self, command, tasks_per_node, job_name=\"parsl.slurm\"):\n \"\"\"Submit the command as a slurm job.\n\n Parameters\n ----------\n command : str\n Command to be made on the remote side.\n tasks_per_node : int\n Command invocations to be launched per node\n job_name : str\n Name for the job\n Returns\n -------\n None or str\n If at capacity, returns None; otherwise, a string identifier for the job\n \"\"\"\n\n scheduler_options = self.scheduler_options\n worker_init = self.worker_init\n if self.mem_per_node is not None:\n scheduler_options += '#SBATCH --mem={}g\\n'.format(self.mem_per_node)\n worker_init += 'export PARSL_MEMORY_GB={}\\n'.format(self.mem_per_node)\n if self.cores_per_node is not None:\n cpus_per_task = math.floor(self.cores_per_node / tasks_per_node)\n scheduler_options += '#SBATCH --cpus-per-task={}'.format(cpus_per_task)\n worker_init += 'export PARSL_CORES={}\\n'.format(cpus_per_task)\n\n job_name = \"{0}.{1}\".format(job_name, time.time())\n\n script_path = \"{0}/{1}.submit\".format(self.script_dir, job_name)\n script_path = os.path.abspath(script_path)\n\n logger.debug(\"Requesting one block with {} nodes\".format(self.nodes_per_block))\n\n job_config = {}\n job_config[\"submit_script_dir\"] = self.channel.script_dir\n job_config[\"nodes\"] = self.nodes_per_block\n job_config[\"tasks_per_node\"] = tasks_per_node\n job_config[\"walltime\"] = wtime_to_minutes(self.walltime)\n job_config[\"scheduler_options\"] = scheduler_options\n job_config[\"worker_init\"] = worker_init\n job_config[\"user_script\"] = command\n\n # Wrap the command\n job_config[\"user_script\"] = self.launcher(command,\n tasks_per_node,\n self.nodes_per_block)\n\n logger.debug(\"Writing submit script\")\n self._write_submit_script(template_string, script_path, job_name, job_config)\n\n if self.move_files:\n logger.debug(\"moving files\")\n channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)\n else:\n logger.debug(\"not moving files\")\n channel_script_path = script_path\n\n retcode, stdout, stderr = self.execute_wait(\"sbatch {0}\".format(channel_script_path))\n\n job_id = None\n if retcode == 0:\n for line in stdout.split('\\n'):\n if line.startswith(\"Submitted batch job\"):\n job_id = line.split(\"Submitted batch job\")[1].strip()\n self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}\n else:\n print(\"Submission of command to scale_out failed\")\n logger.error(\"Retcode:%s STDOUT:%s STDERR:%s\", retcode, stdout.strip(), stderr.strip())\n return job_id\n\n def cancel(self, job_ids):\n ''' Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] 
: If the cancel operation fails the entire list will be False.\n '''\n\n job_id_list = ' '.join(job_ids)\n retcode, stdout, stderr = self.execute_wait(\"scancel {0}\".format(job_id_list))\n rets = None\n if retcode == 0:\n for jid in job_ids:\n self.resources[jid]['status'] = JobStatus(JobState.CANCELLED) # Setting state to cancelled\n rets = [True for i in job_ids]\n else:\n rets = [False for i in job_ids]\n\n return rets\n\n @property\n def status_polling_interval(self):\n return 60\n", "path": "parsl/providers/slurm/slurm.py"}], "after_files": [{"content": "import os\nimport math\nimport time\nimport logging\nimport typeguard\n\nfrom typing import Optional\n\nfrom parsl.channels import LocalChannel\nfrom parsl.channels.base import Channel\nfrom parsl.launchers import SingleNodeLauncher\nfrom parsl.launchers.launchers import Launcher\nfrom parsl.providers.cluster_provider import ClusterProvider\nfrom parsl.providers.provider_base import JobState, JobStatus\nfrom parsl.providers.slurm.template import template_string\nfrom parsl.utils import RepresentationMixin, wtime_to_minutes\n\nlogger = logging.getLogger(__name__)\n\ntranslate_table = {\n 'PD': JobState.PENDING,\n 'R': JobState.RUNNING,\n 'CA': JobState.CANCELLED,\n 'CF': JobState.PENDING, # (configuring),\n 'CG': JobState.RUNNING, # (completing),\n 'CD': JobState.COMPLETED,\n 'F': JobState.FAILED, # (failed),\n 'TO': JobState.TIMEOUT, # (timeout),\n 'NF': JobState.FAILED, # (node failure),\n 'RV': JobState.FAILED, # (revoked) and\n 'SE': JobState.FAILED # (special exit state)\n}\n\n\nclass SlurmProvider(ClusterProvider, RepresentationMixin):\n \"\"\"Slurm Execution Provider\n\n This provider uses sbatch to submit, squeue for status and scancel to cancel\n jobs. The sbatch script to be used is created from a template file in this\n same module.\n\n Parameters\n ----------\n partition : str\n Slurm partition to request blocks from. If unspecified or ``None``, no partition slurm directive will be specified.\n account : str\n Slurm account to which to charge resources used by the job. If unspecified or ``None``, the job will use the\n user's default account.\n channel : Channel\n Channel for accessing this provider. Possible channels include\n :class:`~parsl.channels.LocalChannel` (the default),\n :class:`~parsl.channels.SSHChannel`, or\n :class:`~parsl.channels.SSHInteractiveLoginChannel`.\n nodes_per_block : int\n Nodes to provision per block.\n cores_per_node : int\n Specify the number of cores to provision per node. If set to None, executors\n will assume all cores on the node are available for computation. Default is None.\n mem_per_node : int\n Specify the real memory to provision per node in GB. If set to None, no\n explicit request to the scheduler will be made. Default is None.\n min_blocks : int\n Minimum number of blocks to maintain.\n max_blocks : int\n Maximum number of blocks to maintain.\n parallelism : float\n Ratio of provisioned task slots to active tasks. 
A parallelism value of 1 represents aggressive\n scaling where as many resources as possible are used; parallelism close to 0 represents\n the opposite situation in which as few resources as possible (i.e., min_blocks) are used.\n walltime : str\n Walltime requested per block in HH:MM:SS.\n scheduler_options : str\n String to prepend to the #SBATCH blocks in the submit script to the scheduler.\n worker_init : str\n Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.\n exclusive : bool (Default = True)\n Requests nodes which are not shared with other running jobs.\n launcher : Launcher\n Launcher for this provider. Possible launchers include\n :class:`~parsl.launchers.SingleNodeLauncher` (the default),\n :class:`~parsl.launchers.SrunLauncher`, or\n :class:`~parsl.launchers.AprunLauncher`\n move_files : Optional[Bool]: should files be moved? by default, Parsl will try to move files.\n \"\"\"\n\n @typeguard.typechecked\n def __init__(self,\n partition: Optional[str] = None,\n account: Optional[str] = None,\n channel: Channel = LocalChannel(),\n nodes_per_block: int = 1,\n cores_per_node: Optional[int] = None,\n mem_per_node: Optional[int] = None,\n init_blocks: int = 1,\n min_blocks: int = 0,\n max_blocks: int = 1,\n parallelism: float = 1,\n walltime: str = \"00:10:00\",\n scheduler_options: str = '',\n worker_init: str = '',\n cmd_timeout: int = 10,\n exclusive: bool = True,\n move_files: bool = True,\n launcher: Launcher = SingleNodeLauncher()):\n label = 'slurm'\n super().__init__(label,\n channel,\n nodes_per_block,\n init_blocks,\n min_blocks,\n max_blocks,\n parallelism,\n walltime,\n cmd_timeout=cmd_timeout,\n launcher=launcher)\n\n self.partition = partition\n self.cores_per_node = cores_per_node\n self.mem_per_node = mem_per_node\n self.exclusive = exclusive\n self.move_files = move_files\n self.account = account\n self.scheduler_options = scheduler_options + '\\n'\n if exclusive:\n self.scheduler_options += \"#SBATCH --exclusive\\n\"\n if partition:\n self.scheduler_options += \"#SBATCH --partition={}\\n\".format(partition)\n if account:\n self.scheduler_options += \"#SBATCH --account={}\\n\".format(account)\n self.worker_init = worker_init + '\\n'\n\n def _status(self):\n ''' Internal: Do not call. Returns the status list for a list of job_ids\n\n Args:\n self\n\n Returns:\n [status...] : Status list of all jobs\n '''\n job_id_list = ','.join(\n [jid for jid, job in self.resources.keys() if not job['status'].terminal]\n )\n if not job_id_list:\n logger.debug('No active jobs, skipping status update')\n return\n\n cmd = \"squeue --job {0}\".format(job_id_list)\n logger.debug(\"Executing %s\", cmd)\n retcode, stdout, stderr = self.execute_wait(cmd)\n logger.debug(\"sqeueue returned %s %s\", stdout, stderr)\n\n # Execute_wait failed. Do no update\n if retcode != 0:\n logger.warning(\"squeue failed with non-zero exit code {}\".format(retcode))\n return\n\n jobs_missing = list(self.resources.keys())\n for line in stdout.split('\\n'):\n parts = line.split()\n if parts and parts[0] != 'JOBID':\n job_id = parts[0]\n status = translate_table.get(parts[4], JobState.UNKNOWN)\n logger.debug(\"Updating job {} with slurm status {} to parsl status {}\".format(job_id, parts[4], status))\n self.resources[job_id]['status'] = JobStatus(status)\n jobs_missing.remove(job_id)\n\n # squeue does not report on jobs that are not running. 
So we are filling in the\n # blanks for missing jobs, we might lose some information about why the jobs failed.\n for missing_job in jobs_missing:\n logger.debug(\"Updating missing job {} to completed status\".format(missing_job))\n self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)\n\n def submit(self, command, tasks_per_node, job_name=\"parsl.slurm\"):\n \"\"\"Submit the command as a slurm job.\n\n Parameters\n ----------\n command : str\n Command to be made on the remote side.\n tasks_per_node : int\n Command invocations to be launched per node\n job_name : str\n Name for the job\n Returns\n -------\n None or str\n If at capacity, returns None; otherwise, a string identifier for the job\n \"\"\"\n\n scheduler_options = self.scheduler_options\n worker_init = self.worker_init\n if self.mem_per_node is not None:\n scheduler_options += '#SBATCH --mem={}g\\n'.format(self.mem_per_node)\n worker_init += 'export PARSL_MEMORY_GB={}\\n'.format(self.mem_per_node)\n if self.cores_per_node is not None:\n cpus_per_task = math.floor(self.cores_per_node / tasks_per_node)\n scheduler_options += '#SBATCH --cpus-per-task={}'.format(cpus_per_task)\n worker_init += 'export PARSL_CORES={}\\n'.format(cpus_per_task)\n\n job_name = \"{0}.{1}\".format(job_name, time.time())\n\n script_path = \"{0}/{1}.submit\".format(self.script_dir, job_name)\n script_path = os.path.abspath(script_path)\n\n logger.debug(\"Requesting one block with {} nodes\".format(self.nodes_per_block))\n\n job_config = {}\n job_config[\"submit_script_dir\"] = self.channel.script_dir\n job_config[\"nodes\"] = self.nodes_per_block\n job_config[\"tasks_per_node\"] = tasks_per_node\n job_config[\"walltime\"] = wtime_to_minutes(self.walltime)\n job_config[\"scheduler_options\"] = scheduler_options\n job_config[\"worker_init\"] = worker_init\n job_config[\"user_script\"] = command\n\n # Wrap the command\n job_config[\"user_script\"] = self.launcher(command,\n tasks_per_node,\n self.nodes_per_block)\n\n logger.debug(\"Writing submit script\")\n self._write_submit_script(template_string, script_path, job_name, job_config)\n\n if self.move_files:\n logger.debug(\"moving files\")\n channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)\n else:\n logger.debug(\"not moving files\")\n channel_script_path = script_path\n\n retcode, stdout, stderr = self.execute_wait(\"sbatch {0}\".format(channel_script_path))\n\n job_id = None\n if retcode == 0:\n for line in stdout.split('\\n'):\n if line.startswith(\"Submitted batch job\"):\n job_id = line.split(\"Submitted batch job\")[1].strip()\n self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}\n else:\n print(\"Submission of command to scale_out failed\")\n logger.error(\"Retcode:%s STDOUT:%s STDERR:%s\", retcode, stdout.strip(), stderr.strip())\n return job_id\n\n def cancel(self, job_ids):\n ''' Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] 
: If the cancel operation fails the entire list will be False.\n '''\n\n job_id_list = ' '.join(job_ids)\n retcode, stdout, stderr = self.execute_wait(\"scancel {0}\".format(job_id_list))\n rets = None\n if retcode == 0:\n for jid in job_ids:\n self.resources[jid]['status'] = JobStatus(JobState.CANCELLED) # Setting state to cancelled\n rets = [True for i in job_ids]\n else:\n rets = [False for i in job_ids]\n\n return rets\n\n @property\n def status_polling_interval(self):\n return 60\n", "path": "parsl/providers/slurm/slurm.py"}]}
| 3,902 | 316 |
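The record closed out above is a Slurm execution provider whose `submit` method writes a job script, runs `sbatch`, and recovers the job id by scanning stdout for the line that starts with `Submitted batch job`. That parsing step can be pulled out into a small helper; the sketch below assumes the standard sbatch output format and uses an illustrative function name rather than code taken from the dataset.

```python
from typing import Optional


def parse_sbatch_job_id(stdout: str) -> Optional[str]:
    """Extract the job id from sbatch output such as 'Submitted batch job 12345'."""
    for line in stdout.splitlines():
        if line.startswith("Submitted batch job"):
            # Everything after the fixed prefix is the numeric job id.
            return line.split("Submitted batch job")[1].strip()
    return None


if __name__ == "__main__":
    assert parse_sbatch_job_id("Submitted batch job 12345\n") == "12345"
    assert parse_sbatch_job_id("sbatch: error: invalid partition specified\n") is None
```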
gh_patches_debug_14767
|
rasdani/github-patches
|
git_diff
|
pymedusa__Medusa-6656
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Torrent file content is empty
**Describe the bug**
I have many log entries like this:
2019-04-25 21:01:35 WARNING SNATCHQUEUE-SNATCH-274431 :: [f46bfac] Torrent file content is empty: Gotham.S04E20.VOSTFR.WebDl.720p.x264.-.Chris44
**Expected behavior**
Torrents from yggtorrent don't get snatched.
**Screenshots**
<img width="1174" alt="Capture d’écran 2019-04-26 à 08 26 25" src="https://user-images.githubusercontent.com/14791276/56787518-0af92e00-67fd-11e9-8c6e-72063f929f3a.png">
**Medusa (please complete the following information):**
- OS: debian9
- Branch: master
 - Commit: f46bfacf8763204fbde4f26a5916095371d494d1
 - Version: 0.3.1
**Logs:**
2019-04-25 21:01:35 WARNING SNATCHQUEUE-SNATCH-274431 :: [f46bfac] Torrent file content is empty: Gotham.S04E20.VOSTFR.WebDl.720p.x264.-.Chris44
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `medusa/providers/torrent/torrent_provider.py`
Content:
```
1 # coding=utf-8
2
3 """Provider code for Generic Torrent Provider."""
4
5 from __future__ import unicode_literals
6
7 import logging
8 import os
9 import re
10 from base64 import b16encode, b32decode
11 from os.path import join
12 from random import shuffle
13
14 from bencode import BencodeDecodeError, bdecode
15
16 from feedparser.util import FeedParserDict
17
18 from medusa import app
19 from medusa.classes import TorrentSearchResult
20 from medusa.helper.common import sanitize_filename, try_int
21 from medusa.helpers import remove_file_failed
22 from medusa.logger.adapters.style import BraceAdapter
23 from medusa.providers.generic_provider import GenericProvider
24
25 log = BraceAdapter(logging.getLogger(__name__))
26 log.logger.addHandler(logging.NullHandler())
27
28
29 class TorrentProvider(GenericProvider):
30 """Generic Torrent provider."""
31
32 def __init__(self, name):
33 """Initialize the class."""
34 super(TorrentProvider, self).__init__(name)
35
36 self.ratio = None
37 self.provider_type = GenericProvider.TORRENT
38 self.minseed = 0
39 self.minleech = 0
40
41 def is_active(self):
42 """Check if provider is enabled."""
43 return bool(app.USE_TORRENTS) and self.is_enabled()
44
45 @property
46 def _custom_trackers(self):
47 """Check if provider has custom trackers."""
48 if not self.public or not app.TRACKERS_LIST:
49 return ''
50
51 return '&tr=' + '&tr='.join(x.strip() for x in app.TRACKERS_LIST if x.strip())
52
53 def _get_result(self, episodes):
54 """Return a provider result object."""
55 return TorrentSearchResult(episodes, provider=self)
56
57 def _get_size(self, item):
58 """Get result size."""
59 if isinstance(item, dict):
60 size = item.get('size', -1)
61 elif isinstance(item, (list, tuple)) and len(item) > 2:
62 size = item[2]
63 else:
64 size = -1
65
66 return try_int(size, -1)
67
68 def _get_storage_dir(self):
69 """Get torrent storage dir."""
70 return app.TORRENT_DIR
71
72 def _get_result_info(self, item):
73 """Return seeders and leechers from result."""
74 if isinstance(item, (dict, FeedParserDict)):
75 seeders = item.get('seeders', '-1')
76 leechers = item.get('leechers', '-1')
77
78 elif isinstance(item, (list, tuple)) and len(item) > 1:
79 seeders = item[3]
80 leechers = item[4]
81 else:
82 seeders = -1
83 leechers = -1
84
85 return seeders, leechers
86
87 def _get_title_and_url(self, item):
88 """Get title and url from result."""
89 if isinstance(item, (dict, FeedParserDict)):
90 download_url = item.get('url', '')
91 title = item.get('title', '')
92
93 if not download_url:
94 download_url = item.get('link', '')
95 elif isinstance(item, (list, tuple)) and len(item) > 1:
96 download_url = item[1]
97 title = item[0]
98 else:
99 download_url = ''
100 title = ''
101
102 if download_url:
103 download_url = download_url.replace('&', '&')
104
105 if title:
106 title = title.replace(' ', '.')
107
108 return title, download_url
109
110 def _verify_download(self, file_name=None):
111 """Validate torrent file."""
112 if not file_name or not os.path.isfile(file_name):
113 return False
114
115 try:
116 with open(file_name, 'rb') as f:
117 # `bencode.bdecode` is monkeypatched in `medusa.init`
118 meta_info = bdecode(f.read(), allow_extra_data=True)
119 return 'info' in meta_info and meta_info['info']
120 except BencodeDecodeError as error:
121 log.debug('Failed to validate torrent file: {name}. Error: {error}',
122 {'name': file_name, 'error': error})
123
124 remove_file_failed(file_name)
125 log.debug('{result} is not a valid torrent file',
126 {'result': file_name})
127
128 return False
129
130 def seed_ratio(self):
131 """Return seed ratio of provider."""
132 return self.ratio
133
134 def _get_pubdate(self, item):
135 """Return publish date of the item.
136
137 If provider doesnt have _get_pubdate function this will be used
138 """
139 if isinstance(item, dict):
140 pubdate = item.get('pubdate')
141 elif isinstance(item, (list, tuple)) and len(item) > 2:
142 pubdate = item[5]
143 else:
144 pubdate = None
145
146 return pubdate
147
148 def get_redirect_url(self, url):
149 """Get the address that the provided URL redirects to."""
150 log.debug('Retrieving redirect URL for {url}', {'url': url})
151
152 response = self.session.get(url, allow_redirects=False)
153 if response and response.headers.get('Location'):
154 return response.headers['Location']
155
156 log.debug('Unable to retrieve redirect URL for {url}', {'url': url})
157 return url
158
159 def _make_url(self, result):
160 """Return url if result is a magnet link."""
161 urls = []
162 filename = ''
163
164 if not result or not result.url:
165 return urls, filename
166
167 if result.url.startswith('magnet:'):
168 try:
169 info_hash = re.findall(r'urn:btih:([\w]{32,40})', result.url)[0].upper()
170
171 try:
172 torrent_name = re.findall('dn=([^&]+)', result.url)[0]
173 except Exception:
174 torrent_name = 'NO_DOWNLOAD_NAME'
175
176 if len(info_hash) == 32:
177 info_hash = b16encode(b32decode(info_hash)).upper()
178
179 if not info_hash:
180 log.error('Unable to extract torrent hash from magnet: {0}', result.url)
181 return urls, filename
182
183 urls = [x.format(info_hash=info_hash, torrent_name=torrent_name) for x in self.bt_cache_urls]
184 shuffle(urls)
185 except Exception:
186 log.error('Unable to extract torrent hash or name from magnet: {0}', result.url)
187 return urls, filename
188 else:
189 # Required for Jackett providers that use magnet redirects
190 # See: https://github.com/pymedusa/Medusa/issues/3435
191 if self.kind() == 'TorznabProvider':
192 redirect_url = self.get_redirect_url(result.url)
193 if redirect_url != result.url:
194 result.url = redirect_url
195 return self._make_url(result)
196
197 urls = [result.url]
198
199 result_name = sanitize_filename(result.name)
200 filename = join(self._get_storage_dir(), result_name + '.' + self.provider_type)
201
202 return urls, filename
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/medusa/providers/torrent/torrent_provider.py b/medusa/providers/torrent/torrent_provider.py
--- a/medusa/providers/torrent/torrent_provider.py
+++ b/medusa/providers/torrent/torrent_provider.py
@@ -146,12 +146,13 @@
return pubdate
def get_redirect_url(self, url):
- """Get the address that the provided URL redirects to."""
+ """Get the final address that the provided URL redirects to."""
log.debug('Retrieving redirect URL for {url}', {'url': url})
- response = self.session.get(url, allow_redirects=False)
- if response and response.headers.get('Location'):
- return response.headers['Location']
+ response = self.session.get(url, stream=True)
+ if response:
+ response.close()
+ return response.url
log.debug('Unable to retrieve redirect URL for {url}', {'url': url})
return url
|
{"golden_diff": "diff --git a/medusa/providers/torrent/torrent_provider.py b/medusa/providers/torrent/torrent_provider.py\n--- a/medusa/providers/torrent/torrent_provider.py\n+++ b/medusa/providers/torrent/torrent_provider.py\n@@ -146,12 +146,13 @@\n return pubdate\n \n def get_redirect_url(self, url):\n- \"\"\"Get the address that the provided URL redirects to.\"\"\"\n+ \"\"\"Get the final address that the provided URL redirects to.\"\"\"\n log.debug('Retrieving redirect URL for {url}', {'url': url})\n \n- response = self.session.get(url, allow_redirects=False)\n- if response and response.headers.get('Location'):\n- return response.headers['Location']\n+ response = self.session.get(url, stream=True)\n+ if response:\n+ response.close()\n+ return response.url\n \n log.debug('Unable to retrieve redirect URL for {url}', {'url': url})\n return url\n", "issue": "Torrent file content is empty\n**Describe the bug**\r\n\r\nI have many log like this:\r\n\r\n2019-04-25 21:01:35 WARNING SNATCHQUEUE-SNATCH-274431 :: [f46bfac] Torrent file content is empty: Gotham.S04E20.VOSTFR.WebDl.720p.x264.-.Chris44\r\n\r\n**Expected behavior**\r\ntorrent from yggtorrent dont snatched\r\n\r\n**Screenshots**\r\n\r\n<img width=\"1174\" alt=\"Capture d\u2019e\u0301cran 2019-04-26 a\u0300 08 26 25\" src=\"https://user-images.githubusercontent.com/14791276/56787518-0af92e00-67fd-11e9-8c6e-72063f929f3a.png\">\r\n\r\n\r\n**Medusa (please complete the following information):**\r\n - OS: debian9\r\n - Branch: master\r\n - Commit: Branch:master \r\n Commit: f46bfacf8763204fbde4f26a5916095371d494d1 \r\n Version: 0.3.1\r\n\r\n**Logs:**\r\n\r\n2019-04-25 21:01:35 WARNING SNATCHQUEUE-SNATCH-274431 :: [f46bfac] Torrent file content is empty: Gotham.S04E20.VOSTFR.WebDl.720p.x264.-.Chris44</details>\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for Generic Torrent Provider.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport re\nfrom base64 import b16encode, b32decode\nfrom os.path import join\nfrom random import shuffle\n\nfrom bencode import BencodeDecodeError, bdecode\n\nfrom feedparser.util import FeedParserDict\n\nfrom medusa import app\nfrom medusa.classes import TorrentSearchResult\nfrom medusa.helper.common import sanitize_filename, try_int\nfrom medusa.helpers import remove_file_failed\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.generic_provider import GenericProvider\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass TorrentProvider(GenericProvider):\n \"\"\"Generic Torrent provider.\"\"\"\n\n def __init__(self, name):\n \"\"\"Initialize the class.\"\"\"\n super(TorrentProvider, self).__init__(name)\n\n self.ratio = None\n self.provider_type = GenericProvider.TORRENT\n self.minseed = 0\n self.minleech = 0\n\n def is_active(self):\n \"\"\"Check if provider is enabled.\"\"\"\n return bool(app.USE_TORRENTS) and self.is_enabled()\n\n @property\n def _custom_trackers(self):\n \"\"\"Check if provider has custom trackers.\"\"\"\n if not self.public or not app.TRACKERS_LIST:\n return ''\n\n return '&tr=' + '&tr='.join(x.strip() for x in app.TRACKERS_LIST if x.strip())\n\n def _get_result(self, episodes):\n \"\"\"Return a provider result object.\"\"\"\n return TorrentSearchResult(episodes, provider=self)\n\n def _get_size(self, item):\n \"\"\"Get result size.\"\"\"\n if isinstance(item, dict):\n size 
= item.get('size', -1)\n elif isinstance(item, (list, tuple)) and len(item) > 2:\n size = item[2]\n else:\n size = -1\n\n return try_int(size, -1)\n\n def _get_storage_dir(self):\n \"\"\"Get torrent storage dir.\"\"\"\n return app.TORRENT_DIR\n\n def _get_result_info(self, item):\n \"\"\"Return seeders and leechers from result.\"\"\"\n if isinstance(item, (dict, FeedParserDict)):\n seeders = item.get('seeders', '-1')\n leechers = item.get('leechers', '-1')\n\n elif isinstance(item, (list, tuple)) and len(item) > 1:\n seeders = item[3]\n leechers = item[4]\n else:\n seeders = -1\n leechers = -1\n\n return seeders, leechers\n\n def _get_title_and_url(self, item):\n \"\"\"Get title and url from result.\"\"\"\n if isinstance(item, (dict, FeedParserDict)):\n download_url = item.get('url', '')\n title = item.get('title', '')\n\n if not download_url:\n download_url = item.get('link', '')\n elif isinstance(item, (list, tuple)) and len(item) > 1:\n download_url = item[1]\n title = item[0]\n else:\n download_url = ''\n title = ''\n\n if download_url:\n download_url = download_url.replace('&', '&')\n\n if title:\n title = title.replace(' ', '.')\n\n return title, download_url\n\n def _verify_download(self, file_name=None):\n \"\"\"Validate torrent file.\"\"\"\n if not file_name or not os.path.isfile(file_name):\n return False\n\n try:\n with open(file_name, 'rb') as f:\n # `bencode.bdecode` is monkeypatched in `medusa.init`\n meta_info = bdecode(f.read(), allow_extra_data=True)\n return 'info' in meta_info and meta_info['info']\n except BencodeDecodeError as error:\n log.debug('Failed to validate torrent file: {name}. Error: {error}',\n {'name': file_name, 'error': error})\n\n remove_file_failed(file_name)\n log.debug('{result} is not a valid torrent file',\n {'result': file_name})\n\n return False\n\n def seed_ratio(self):\n \"\"\"Return seed ratio of provider.\"\"\"\n return self.ratio\n\n def _get_pubdate(self, item):\n \"\"\"Return publish date of the item.\n\n If provider doesnt have _get_pubdate function this will be used\n \"\"\"\n if isinstance(item, dict):\n pubdate = item.get('pubdate')\n elif isinstance(item, (list, tuple)) and len(item) > 2:\n pubdate = item[5]\n else:\n pubdate = None\n\n return pubdate\n\n def get_redirect_url(self, url):\n \"\"\"Get the address that the provided URL redirects to.\"\"\"\n log.debug('Retrieving redirect URL for {url}', {'url': url})\n\n response = self.session.get(url, allow_redirects=False)\n if response and response.headers.get('Location'):\n return response.headers['Location']\n\n log.debug('Unable to retrieve redirect URL for {url}', {'url': url})\n return url\n\n def _make_url(self, result):\n \"\"\"Return url if result is a magnet link.\"\"\"\n urls = []\n filename = ''\n\n if not result or not result.url:\n return urls, filename\n\n if result.url.startswith('magnet:'):\n try:\n info_hash = re.findall(r'urn:btih:([\\w]{32,40})', result.url)[0].upper()\n\n try:\n torrent_name = re.findall('dn=([^&]+)', result.url)[0]\n except Exception:\n torrent_name = 'NO_DOWNLOAD_NAME'\n\n if len(info_hash) == 32:\n info_hash = b16encode(b32decode(info_hash)).upper()\n\n if not info_hash:\n log.error('Unable to extract torrent hash from magnet: {0}', result.url)\n return urls, filename\n\n urls = [x.format(info_hash=info_hash, torrent_name=torrent_name) for x in self.bt_cache_urls]\n shuffle(urls)\n except Exception:\n log.error('Unable to extract torrent hash or name from magnet: {0}', result.url)\n return urls, filename\n else:\n # Required for Jackett 
providers that use magnet redirects\n # See: https://github.com/pymedusa/Medusa/issues/3435\n if self.kind() == 'TorznabProvider':\n redirect_url = self.get_redirect_url(result.url)\n if redirect_url != result.url:\n result.url = redirect_url\n return self._make_url(result)\n\n urls = [result.url]\n\n result_name = sanitize_filename(result.name)\n filename = join(self._get_storage_dir(), result_name + '.' + self.provider_type)\n\n return urls, filename\n", "path": "medusa/providers/torrent/torrent_provider.py"}], "after_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for Generic Torrent Provider.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport re\nfrom base64 import b16encode, b32decode\nfrom os.path import join\nfrom random import shuffle\n\nfrom bencode import BencodeDecodeError, bdecode\n\nfrom feedparser.util import FeedParserDict\n\nfrom medusa import app\nfrom medusa.classes import TorrentSearchResult\nfrom medusa.helper.common import sanitize_filename, try_int\nfrom medusa.helpers import remove_file_failed\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.generic_provider import GenericProvider\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass TorrentProvider(GenericProvider):\n \"\"\"Generic Torrent provider.\"\"\"\n\n def __init__(self, name):\n \"\"\"Initialize the class.\"\"\"\n super(TorrentProvider, self).__init__(name)\n\n self.ratio = None\n self.provider_type = GenericProvider.TORRENT\n self.minseed = 0\n self.minleech = 0\n\n def is_active(self):\n \"\"\"Check if provider is enabled.\"\"\"\n return bool(app.USE_TORRENTS) and self.is_enabled()\n\n @property\n def _custom_trackers(self):\n \"\"\"Check if provider has custom trackers.\"\"\"\n if not self.public or not app.TRACKERS_LIST:\n return ''\n\n return '&tr=' + '&tr='.join(x.strip() for x in app.TRACKERS_LIST if x.strip())\n\n def _get_result(self, episodes):\n \"\"\"Return a provider result object.\"\"\"\n return TorrentSearchResult(episodes, provider=self)\n\n def _get_size(self, item):\n \"\"\"Get result size.\"\"\"\n if isinstance(item, dict):\n size = item.get('size', -1)\n elif isinstance(item, (list, tuple)) and len(item) > 2:\n size = item[2]\n else:\n size = -1\n\n return try_int(size, -1)\n\n def _get_storage_dir(self):\n \"\"\"Get torrent storage dir.\"\"\"\n return app.TORRENT_DIR\n\n def _get_result_info(self, item):\n \"\"\"Return seeders and leechers from result.\"\"\"\n if isinstance(item, (dict, FeedParserDict)):\n seeders = item.get('seeders', '-1')\n leechers = item.get('leechers', '-1')\n\n elif isinstance(item, (list, tuple)) and len(item) > 1:\n seeders = item[3]\n leechers = item[4]\n else:\n seeders = -1\n leechers = -1\n\n return seeders, leechers\n\n def _get_title_and_url(self, item):\n \"\"\"Get title and url from result.\"\"\"\n if isinstance(item, (dict, FeedParserDict)):\n download_url = item.get('url', '')\n title = item.get('title', '')\n\n if not download_url:\n download_url = item.get('link', '')\n elif isinstance(item, (list, tuple)) and len(item) > 1:\n download_url = item[1]\n title = item[0]\n else:\n download_url = ''\n title = ''\n\n if download_url:\n download_url = download_url.replace('&', '&')\n\n if title:\n title = title.replace(' ', '.')\n\n return title, download_url\n\n def _verify_download(self, file_name=None):\n \"\"\"Validate torrent file.\"\"\"\n if not file_name or not os.path.isfile(file_name):\n return False\n\n try:\n with 
open(file_name, 'rb') as f:\n # `bencode.bdecode` is monkeypatched in `medusa.init`\n meta_info = bdecode(f.read(), allow_extra_data=True)\n return 'info' in meta_info and meta_info['info']\n except BencodeDecodeError as error:\n log.debug('Failed to validate torrent file: {name}. Error: {error}',\n {'name': file_name, 'error': error})\n\n remove_file_failed(file_name)\n log.debug('{result} is not a valid torrent file',\n {'result': file_name})\n\n return False\n\n def seed_ratio(self):\n \"\"\"Return seed ratio of provider.\"\"\"\n return self.ratio\n\n def _get_pubdate(self, item):\n \"\"\"Return publish date of the item.\n\n If provider doesnt have _get_pubdate function this will be used\n \"\"\"\n if isinstance(item, dict):\n pubdate = item.get('pubdate')\n elif isinstance(item, (list, tuple)) and len(item) > 2:\n pubdate = item[5]\n else:\n pubdate = None\n\n return pubdate\n\n def get_redirect_url(self, url):\n \"\"\"Get the final address that the provided URL redirects to.\"\"\"\n log.debug('Retrieving redirect URL for {url}', {'url': url})\n\n response = self.session.get(url, stream=True)\n if response:\n response.close()\n return response.url\n\n log.debug('Unable to retrieve redirect URL for {url}', {'url': url})\n return url\n\n def _make_url(self, result):\n \"\"\"Return url if result is a magnet link.\"\"\"\n urls = []\n filename = ''\n\n if not result or not result.url:\n return urls, filename\n\n if result.url.startswith('magnet:'):\n try:\n info_hash = re.findall(r'urn:btih:([\\w]{32,40})', result.url)[0].upper()\n\n try:\n torrent_name = re.findall('dn=([^&]+)', result.url)[0]\n except Exception:\n torrent_name = 'NO_DOWNLOAD_NAME'\n\n if len(info_hash) == 32:\n info_hash = b16encode(b32decode(info_hash)).upper()\n\n if not info_hash:\n log.error('Unable to extract torrent hash from magnet: {0}', result.url)\n return urls, filename\n\n urls = [x.format(info_hash=info_hash, torrent_name=torrent_name) for x in self.bt_cache_urls]\n shuffle(urls)\n except Exception:\n log.error('Unable to extract torrent hash or name from magnet: {0}', result.url)\n return urls, filename\n else:\n # Required for Jackett providers that use magnet redirects\n # See: https://github.com/pymedusa/Medusa/issues/3435\n if self.kind() == 'TorznabProvider':\n redirect_url = self.get_redirect_url(result.url)\n if redirect_url != result.url:\n result.url = redirect_url\n return self._make_url(result)\n\n urls = [result.url]\n\n result_name = sanitize_filename(result.name)\n filename = join(self._get_storage_dir(), result_name + '.' + self.provider_type)\n\n return urls, filename\n", "path": "medusa/providers/torrent/torrent_provider.py"}]}
| 2,649 | 210 |
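The golden diff in the record above swaps a single-hop `Location` header lookup for a request that lets `requests` follow the whole redirect chain and then reads the final `response.url`. A minimal sketch of that pattern, assuming only the `requests` library (the function name is illustrative and not part of Medusa):

```python
import requests


def resolve_final_url(session: requests.Session, url: str) -> str:
    """Return the address a URL ultimately redirects to, without downloading the body."""
    # stream=True defers the body download; requests still follows the
    # redirect chain for GET, so response.url is the final location.
    response = session.get(url, stream=True)
    if response:
        response.close()
        return response.url
    return url


if __name__ == "__main__":
    print(resolve_final_url(requests.Session(), "https://httpbin.org/redirect/2"))
```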
gh_patches_debug_57398
|
rasdani/github-patches
|
git_diff
|
translate__pootle-5797
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pootle_fs not expiring cache_keys
When a project uses Pootle FS, stats are not updated. We have to run `pootle flush_cache --lru --django-cache` by hand to refresh them.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_revision/receivers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.db.models.signals import post_save, pre_delete
10 from django.dispatch import receiver
11
12 from pootle.core.delegate import revision_updater
13 from pootle_app.models import Directory
14 from pootle_data.models import StoreData
15 from pootle_store.models import Store
16
17
18 @receiver(post_save, sender=StoreData)
19 def handle_storedata_save(**kwargs):
20 revision_updater.get(Store)(
21 context=kwargs["instance"].store).update(keys=["stats", "checks"])
22
23
24 @receiver(post_save, sender=Directory)
25 def handle_directory_save(**kwargs):
26 if kwargs.get("created"):
27 return
28 revision_updater.get(Directory)(
29 context=kwargs["instance"]).update(keys=["stats", "checks"])
30
31
32 @receiver(pre_delete, sender=Directory)
33 def handle_directory_delete(**kwargs):
34 revision_updater.get(Directory)(
35 context=kwargs["instance"].parent).update(keys=["stats", "checks"])
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py
--- a/pootle/apps/pootle_revision/receivers.py
+++ b/pootle/apps/pootle_revision/receivers.py
@@ -23,10 +23,12 @@
@receiver(post_save, sender=Directory)
def handle_directory_save(**kwargs):
- if kwargs.get("created"):
- return
+ context = (
+ kwargs["instance"].parent
+ if kwargs.get("created")
+ else kwargs["instance"])
revision_updater.get(Directory)(
- context=kwargs["instance"]).update(keys=["stats", "checks"])
+ context=context).update(keys=["stats", "checks"])
@receiver(pre_delete, sender=Directory)
|
{"golden_diff": "diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py\n--- a/pootle/apps/pootle_revision/receivers.py\n+++ b/pootle/apps/pootle_revision/receivers.py\n@@ -23,10 +23,12 @@\n \n @receiver(post_save, sender=Directory)\n def handle_directory_save(**kwargs):\n- if kwargs.get(\"created\"):\n- return\n+ context = (\n+ kwargs[\"instance\"].parent\n+ if kwargs.get(\"created\")\n+ else kwargs[\"instance\"])\n revision_updater.get(Directory)(\n- context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n+ context=context).update(keys=[\"stats\", \"checks\"])\n \n \n @receiver(pre_delete, sender=Directory)\n", "issue": "pootle_fs not expiring cache_keys\nWhen a project uses pootle FS, stats are not updated. We have to manually call `pootle flush_cache --lru --django-cache` to update it manually.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n if kwargs.get(\"created\"):\n return\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n context = (\n kwargs[\"instance\"].parent\n if kwargs.get(\"created\")\n else kwargs[\"instance\"])\n revision_updater.get(Directory)(\n context=context).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}]}
| 643 | 178 |
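The pootle patch above changes `handle_directory_save` so that creating a directory no longer returns early; instead the stats/checks revision update is pointed at the parent directory, which is what actually holds the now-stale aggregated cache keys. The decision can be sketched without Django; `Node` below is a hypothetical stand-in for Pootle's `Directory`, not real Pootle code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """Illustrative stand-in for a Directory row that knows its parent."""
    name: str
    parent: Optional["Node"] = None


def revision_context(instance: Node, created: bool) -> Node:
    # A freshly created directory has no stale stats of its own, but its
    # parent's aggregated stats/checks are now out of date, so the parent's
    # cache keys are the ones to expire; updates to an existing directory
    # expire its own keys.
    return instance.parent if created and instance.parent else instance


if __name__ == "__main__":
    root = Node("projects")
    child = Node("new-project", parent=root)
    assert revision_context(child, created=True) is root
    assert revision_context(child, created=False) is child
```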
gh_patches_debug_26609
|
rasdani/github-patches
|
git_diff
|
pyscript__pyscript-915
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] print() doesn't output HTML tags.
**print() doesn't output HTML tags**
I was experimenting in PyScript and I tried to print an HTML table, but it didn't work. It seems to delete the tags and keep just the plain text.
This is the code that I tried, but it just printed "test" once:
```HTML
<py-script>
print("<table>")
for i in range (2):
print("<tr>")
for j in range (2):
print("<td>test</td>")
print("</tr>")
print("</table>")
</py-script>
```
And this is a screenshot I took of the output:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyscriptjs/src/python/pyscript.py`
Content:
```
1 import asyncio
2 import base64
3 import io
4 import time
5 from textwrap import dedent
6
7 import micropip # noqa: F401
8 from js import console, document
9
10 loop = asyncio.get_event_loop()
11
12 MIME_METHODS = {
13 "__repr__": "text/plain",
14 "_repr_html_": "text/html",
15 "_repr_markdown_": "text/markdown",
16 "_repr_svg_": "image/svg+xml",
17 "_repr_png_": "image/png",
18 "_repr_pdf_": "application/pdf",
19 "_repr_jpeg_": "image/jpeg",
20 "_repr_latex": "text/latex",
21 "_repr_json_": "application/json",
22 "_repr_javascript_": "application/javascript",
23 "savefig": "image/png",
24 }
25
26
27 def render_image(mime, value, meta):
28 data = f"data:{mime};charset=utf-8;base64,{value}"
29 attrs = " ".join(['{k}="{v}"' for k, v in meta.items()])
30 return f'<img src="{data}" {attrs}</img>'
31
32
33 def identity(value, meta):
34 return value
35
36
37 MIME_RENDERERS = {
38 "text/plain": identity,
39 "text/html": identity,
40 "image/png": lambda value, meta: render_image("image/png", value, meta),
41 "image/jpeg": lambda value, meta: render_image("image/jpeg", value, meta),
42 "image/svg+xml": identity,
43 "application/json": identity,
44 "application/javascript": lambda value, meta: f"<script>{value}</script>",
45 }
46
47
48 def eval_formatter(obj, print_method):
49 """
50 Evaluates a formatter method.
51 """
52 if print_method == "__repr__":
53 return repr(obj)
54 elif hasattr(obj, print_method):
55 if print_method == "savefig":
56 buf = io.BytesIO()
57 obj.savefig(buf, format="png")
58 buf.seek(0)
59 return base64.b64encode(buf.read()).decode("utf-8")
60 return getattr(obj, print_method)()
61 elif print_method == "_repr_mimebundle_":
62 return {}, {}
63 return None
64
65
66 def format_mime(obj):
67 """
68 Formats object using _repr_x_ methods.
69 """
70 if isinstance(obj, str):
71 return obj, "text/plain"
72
73 mimebundle = eval_formatter(obj, "_repr_mimebundle_")
74 if isinstance(mimebundle, tuple):
75 format_dict, _ = mimebundle
76 else:
77 format_dict = mimebundle
78
79 output, not_available = None, []
80 for method, mime_type in reversed(MIME_METHODS.items()):
81 if mime_type in format_dict:
82 output = format_dict[mime_type]
83 else:
84 output = eval_formatter(obj, method)
85
86 if output is None:
87 continue
88 elif mime_type not in MIME_RENDERERS:
89 not_available.append(mime_type)
90 continue
91 break
92 if output is None:
93 if not_available:
94 console.warn(
95 f"Rendered object requested unavailable MIME renderers: {not_available}"
96 )
97 output = repr(output)
98 mime_type = "text/plain"
99 elif isinstance(output, tuple):
100 output, meta = output
101 else:
102 meta = {}
103 return MIME_RENDERERS[mime_type](output, meta), mime_type
104
105
106 class PyScript:
107 loop = loop
108
109 @staticmethod
110 def run_until_complete(f):
111 _ = loop.run_until_complete(f)
112
113 @staticmethod
114 def write(element_id, value, append=False, exec_id=0):
115 """Writes value to the element with id "element_id"""
116 Element(element_id).write(value=value, append=append)
117 console.warn(
118 dedent(
119 """PyScript Deprecation Warning: PyScript.write is
120 marked as deprecated and will be removed sometime soon. Please, use
121 Element(<id>).write instead."""
122 )
123 )
124
125
126 def set_current_display_target(target_id):
127 get_current_display_target._id = target_id
128
129
130 def get_current_display_target():
131 return get_current_display_target._id
132
133
134 get_current_display_target._id = None
135
136
137 def display(*values, target=None, append=True):
138 default_target = get_current_display_target()
139
140 if default_target is None and target is None:
141 raise Exception(
142 "Implicit target not allowed here. Please use display(..., target=...)"
143 )
144
145 if target is not None:
146 for v in values:
147 Element(target).write(v, append=append)
148 else:
149 for v in values:
150 Element(default_target).write(v, append=append)
151
152
153 class Element:
154 def __init__(self, element_id, element=None):
155 self._id = element_id
156 self._element = element
157
158 @property
159 def id(self):
160 return self._id
161
162 @property
163 def element(self):
164 """Return the dom element"""
165 if not self._element:
166 self._element = document.querySelector(f"#{self._id}")
167 return self._element
168
169 @property
170 def value(self):
171 return self.element.value
172
173 @property
174 def innerHtml(self):
175 return self.element.innerHtml
176
177 def write(self, value, append=False):
178 out_element_id = self.id
179
180 html, mime_type = format_mime(value)
181 if html == "\n":
182 return
183
184 if append:
185 child = document.createElement("div")
186 exec_id = self.element.childElementCount + 1
187 out_element_id = child.id = f"{self.id}-{exec_id}"
188 self.element.appendChild(child)
189
190 out_element = document.querySelector(f"#{out_element_id}")
191
192 if mime_type in ("application/javascript", "text/html"):
193 script_element = document.createRange().createContextualFragment(html)
194 out_element.appendChild(script_element)
195 else:
196 out_element.innerHTML = html
197
198 def clear(self):
199 if hasattr(self.element, "value"):
200 self.element.value = ""
201 else:
202 self.write("", append=False)
203
204 def select(self, query, from_content=False):
205 el = self.element
206 if from_content:
207 el = el.content
208
209 _el = el.querySelector(query)
210 if _el:
211 return Element(_el.id, _el)
212 else:
213 console.warn(f"WARNING: can't find element matching query {query}")
214
215 def clone(self, new_id=None, to=None):
216 if new_id is None:
217 new_id = self.element.id
218
219 clone = self.element.cloneNode(True)
220 clone.id = new_id
221
222 if to:
223 to.element.appendChild(clone)
224
225 # Inject it into the DOM
226 self.element.after(clone)
227
228 return Element(clone.id, clone)
229
230 def remove_class(self, classname):
231 if isinstance(classname, list):
232 for cl in classname:
233 self.remove_class(cl)
234 else:
235 self.element.classList.remove(classname)
236
237 def add_class(self, classname):
238 self.element.classList.add(classname)
239
240
241 def add_classes(element, class_list):
242 for klass in class_list.split(" "):
243 element.classList.add(klass)
244
245
246 def create(what, id_=None, classes=""):
247 element = document.createElement(what)
248 if id_:
249 element.id = id_
250 add_classes(element, classes)
251 return Element(id_, element)
252
253
254 class PyWidgetTheme:
255 def __init__(self, main_style_classes):
256 self.main_style_classes = main_style_classes
257
258 def theme_it(self, widget):
259 for klass in self.main_style_classes.split(" "):
260 widget.classList.add(klass)
261
262
263 class PyItemTemplate(Element):
264 label_fields = None
265
266 def __init__(self, data, labels=None, state_key=None, parent=None):
267 self.data = data
268
269 self.register_parent(parent)
270
271 if not labels:
272 labels = list(self.data.keys())
273 self.labels = labels
274
275 self.state_key = state_key
276
277 super().__init__(self._id)
278
279 def register_parent(self, parent):
280 self._parent = parent
281 if parent:
282 self._id = f"{self._parent._id}-c-{len(self._parent._children)}"
283 self.data["id"] = self._id
284 else:
285 self._id = None
286
287 def create(self):
288 new_child = create("div", self._id, "py-li-element")
289 new_child._element.innerHTML = dedent(
290 f"""
291 <label id="{self._id}" for="flex items-center p-2 ">
292 <input class="mr-2" type="checkbox" class="task-check">
293 <p>{self.render_content()}</p>
294 </label>
295 """
296 )
297 return new_child
298
299 def on_click(self, evt):
300 pass
301
302 def pre_append(self):
303 pass
304
305 def post_append(self):
306 self.element.click = self.on_click
307 self.element.onclick = self.on_click
308
309 self._post_append()
310
311 def _post_append(self):
312 pass
313
314 def strike(self, value, extra=None):
315 if value:
316 self.add_class("line-through")
317 else:
318 self.remove_class("line-through")
319
320 def render_content(self):
321 return " - ".join([self.data[f] for f in self.labels])
322
323
324 class PyListTemplate:
325 theme = PyWidgetTheme("py-li-element")
326 item_class = PyItemTemplate
327
328 def __init__(self, parent):
329 self.parent = parent
330 self._children = []
331 self._id = self.parent.id
332
333 @property
334 def children(self):
335 return self._children
336
337 @property
338 def data(self):
339 return [c.data for c in self._children]
340
341 def render_children(self):
342 binds = {}
343 for i, c in enumerate(self._children):
344 txt = c.element.innerHTML
345 rnd = str(time.time()).replace(".", "")[-5:]
346 new_id = f"{c.element.id}-{i}-{rnd}"
347 binds[new_id] = c.element.id
348 txt = txt.replace(">", f" id='{new_id}'>")
349 print(txt)
350
351 def foo(evt):
352 evtEl = evt.srcElement
353 srcEl = Element(binds[evtEl.id])
354 srcEl.element.onclick()
355 evtEl.classList = srcEl.element.classList
356
357 for new_id in binds:
358 Element(new_id).element.onclick = foo
359
360 def connect(self):
361 self.md = main_div = document.createElement("div")
362 main_div.id = self._id + "-list-tasks-container"
363
364 if self.theme:
365 self.theme.theme_it(main_div)
366
367 self.parent.appendChild(main_div)
368
369 def add(self, *args, **kws):
370 if not isinstance(args[0], self.item_class):
371 child = self.item_class(*args, **kws)
372 else:
373 child = args[0]
374 child.register_parent(self)
375 return self._add(child)
376
377 def _add(self, child_elem):
378 self.pre_child_append(child_elem)
379 child_elem.pre_append()
380 self._children.append(child_elem)
381 self.md.appendChild(child_elem.create().element)
382 child_elem.post_append()
383 self.child_appended(child_elem)
384 return child_elem
385
386 def pre_child_append(self, child):
387 pass
388
389 def child_appended(self, child):
390 """Overwrite me to define logic"""
391 pass
392
393
394 pyscript = PyScript()
395
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyscriptjs/src/python/pyscript.py b/pyscriptjs/src/python/pyscript.py
--- a/pyscriptjs/src/python/pyscript.py
+++ b/pyscriptjs/src/python/pyscript.py
@@ -1,5 +1,6 @@
import asyncio
import base64
+import html
import io
import time
from textwrap import dedent
@@ -35,7 +36,7 @@
MIME_RENDERERS = {
- "text/plain": identity,
+ "text/plain": html.escape,
"text/html": identity,
"image/png": lambda value, meta: render_image("image/png", value, meta),
"image/jpeg": lambda value, meta: render_image("image/jpeg", value, meta),
@@ -45,6 +46,18 @@
}
+class HTML:
+ """
+ Wrap a string so that display() can render it as plain HTML
+ """
+
+ def __init__(self, html):
+ self._html = html
+
+ def _repr_html_(self):
+ return self._html
+
+
def eval_formatter(obj, print_method):
"""
Evaluates a formatter method.
@@ -68,7 +81,7 @@
Formats object using _repr_x_ methods.
"""
if isinstance(obj, str):
- return obj, "text/plain"
+ return html.escape(obj), "text/plain"
mimebundle = eval_formatter(obj, "_repr_mimebundle_")
if isinstance(mimebundle, tuple):
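The pyscript patch escapes plain strings with `html.escape` before they reach the DOM and adds an `HTML` wrapper type for callers that really do want raw markup, so printed strings like the issue's `<table>` markup are shown literally instead of being swallowed by the HTML parser. The standard-library sketch below mirrors that split; `RawHTML` and `render` are illustrative names, not PyScript's actual API:

```python
import html


class RawHTML:
    """Wrapper marking a string as trusted markup that may be rendered as-is."""

    def __init__(self, markup: str) -> None:
        self._markup = markup

    def _repr_html_(self) -> str:
        return self._markup


def render(value) -> str:
    # Only explicitly wrapped values are emitted as raw markup; plain strings
    # are escaped so their tags show up as literal text in the page.
    if hasattr(value, "_repr_html_"):
        return value._repr_html_()
    return html.escape(str(value))


if __name__ == "__main__":
    print(render("<td>test</td>"))           # &lt;td&gt;test&lt;/td&gt;
    print(render(RawHTML("<td>test</td>")))  # <td>test</td>
```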
|
{"golden_diff": "diff --git a/pyscriptjs/src/python/pyscript.py b/pyscriptjs/src/python/pyscript.py\n--- a/pyscriptjs/src/python/pyscript.py\n+++ b/pyscriptjs/src/python/pyscript.py\n@@ -1,5 +1,6 @@\n import asyncio\n import base64\n+import html\n import io\n import time\n from textwrap import dedent\n@@ -35,7 +36,7 @@\n \n \n MIME_RENDERERS = {\n- \"text/plain\": identity,\n+ \"text/plain\": html.escape,\n \"text/html\": identity,\n \"image/png\": lambda value, meta: render_image(\"image/png\", value, meta),\n \"image/jpeg\": lambda value, meta: render_image(\"image/jpeg\", value, meta),\n@@ -45,6 +46,18 @@\n }\n \n \n+class HTML:\n+ \"\"\"\n+ Wrap a string so that display() can render it as plain HTML\n+ \"\"\"\n+\n+ def __init__(self, html):\n+ self._html = html\n+\n+ def _repr_html_(self):\n+ return self._html\n+\n+\n def eval_formatter(obj, print_method):\n \"\"\"\n Evaluates a formatter method.\n@@ -68,7 +81,7 @@\n Formats object using _repr_x_ methods.\n \"\"\"\n if isinstance(obj, str):\n- return obj, \"text/plain\"\n+ return html.escape(obj), \"text/plain\"\n \n mimebundle = eval_formatter(obj, \"_repr_mimebundle_\")\n if isinstance(mimebundle, tuple):\n", "issue": "[BUG] print() doesn't output HTML tags.\n**print() doesn't output HTML tags**\r\nI was experimenting in Pyscript and I tried to print an HTML table, but it didn't work. It seems to delete the tags and mantain just the plain text.\r\n\r\nThis is the code that I tried, but it just printed \"test\" once:\r\n\r\n```HTML\r\n<py-script>\r\nprint(\"<table>\")\r\nfor i in range (2):\r\n print(\"<tr>\")\r\n for j in range (2):\r\n print(\"<td>test</td>\")\r\n print(\"</tr>\")\r\nprint(\"</table>\")\r\n</py-script>\r\n```\r\n\r\nAnd this is a screenshot I took of the output:\r\n\r\n\n", "before_files": [{"content": "import asyncio\nimport base64\nimport io\nimport time\nfrom textwrap import dedent\n\nimport micropip # noqa: F401\nfrom js import console, document\n\nloop = asyncio.get_event_loop()\n\nMIME_METHODS = {\n \"__repr__\": \"text/plain\",\n \"_repr_html_\": \"text/html\",\n \"_repr_markdown_\": \"text/markdown\",\n \"_repr_svg_\": \"image/svg+xml\",\n \"_repr_png_\": \"image/png\",\n \"_repr_pdf_\": \"application/pdf\",\n \"_repr_jpeg_\": \"image/jpeg\",\n \"_repr_latex\": \"text/latex\",\n \"_repr_json_\": \"application/json\",\n \"_repr_javascript_\": \"application/javascript\",\n \"savefig\": \"image/png\",\n}\n\n\ndef render_image(mime, value, meta):\n data = f\"data:{mime};charset=utf-8;base64,{value}\"\n attrs = \" \".join(['{k}=\"{v}\"' for k, v in meta.items()])\n return f'<img src=\"{data}\" {attrs}</img>'\n\n\ndef identity(value, meta):\n return value\n\n\nMIME_RENDERERS = {\n \"text/plain\": identity,\n \"text/html\": identity,\n \"image/png\": lambda value, meta: render_image(\"image/png\", value, meta),\n \"image/jpeg\": lambda value, meta: render_image(\"image/jpeg\", value, meta),\n \"image/svg+xml\": identity,\n \"application/json\": identity,\n \"application/javascript\": lambda value, meta: f\"<script>{value}</script>\",\n}\n\n\ndef eval_formatter(obj, print_method):\n \"\"\"\n Evaluates a formatter method.\n \"\"\"\n if print_method == \"__repr__\":\n return repr(obj)\n elif hasattr(obj, print_method):\n if print_method == \"savefig\":\n buf = io.BytesIO()\n obj.savefig(buf, format=\"png\")\n buf.seek(0)\n return base64.b64encode(buf.read()).decode(\"utf-8\")\n return getattr(obj, print_method)()\n elif print_method == \"_repr_mimebundle_\":\n return {}, {}\n return None\n\n\ndef format_mime(obj):\n 
\"\"\"\n Formats object using _repr_x_ methods.\n \"\"\"\n if isinstance(obj, str):\n return obj, \"text/plain\"\n\n mimebundle = eval_formatter(obj, \"_repr_mimebundle_\")\n if isinstance(mimebundle, tuple):\n format_dict, _ = mimebundle\n else:\n format_dict = mimebundle\n\n output, not_available = None, []\n for method, mime_type in reversed(MIME_METHODS.items()):\n if mime_type in format_dict:\n output = format_dict[mime_type]\n else:\n output = eval_formatter(obj, method)\n\n if output is None:\n continue\n elif mime_type not in MIME_RENDERERS:\n not_available.append(mime_type)\n continue\n break\n if output is None:\n if not_available:\n console.warn(\n f\"Rendered object requested unavailable MIME renderers: {not_available}\"\n )\n output = repr(output)\n mime_type = \"text/plain\"\n elif isinstance(output, tuple):\n output, meta = output\n else:\n meta = {}\n return MIME_RENDERERS[mime_type](output, meta), mime_type\n\n\nclass PyScript:\n loop = loop\n\n @staticmethod\n def run_until_complete(f):\n _ = loop.run_until_complete(f)\n\n @staticmethod\n def write(element_id, value, append=False, exec_id=0):\n \"\"\"Writes value to the element with id \"element_id\"\"\"\n Element(element_id).write(value=value, append=append)\n console.warn(\n dedent(\n \"\"\"PyScript Deprecation Warning: PyScript.write is\n marked as deprecated and will be removed sometime soon. Please, use\n Element(<id>).write instead.\"\"\"\n )\n )\n\n\ndef set_current_display_target(target_id):\n get_current_display_target._id = target_id\n\n\ndef get_current_display_target():\n return get_current_display_target._id\n\n\nget_current_display_target._id = None\n\n\ndef display(*values, target=None, append=True):\n default_target = get_current_display_target()\n\n if default_target is None and target is None:\n raise Exception(\n \"Implicit target not allowed here. 
Please use display(..., target=...)\"\n )\n\n if target is not None:\n for v in values:\n Element(target).write(v, append=append)\n else:\n for v in values:\n Element(default_target).write(v, append=append)\n\n\nclass Element:\n def __init__(self, element_id, element=None):\n self._id = element_id\n self._element = element\n\n @property\n def id(self):\n return self._id\n\n @property\n def element(self):\n \"\"\"Return the dom element\"\"\"\n if not self._element:\n self._element = document.querySelector(f\"#{self._id}\")\n return self._element\n\n @property\n def value(self):\n return self.element.value\n\n @property\n def innerHtml(self):\n return self.element.innerHtml\n\n def write(self, value, append=False):\n out_element_id = self.id\n\n html, mime_type = format_mime(value)\n if html == \"\\n\":\n return\n\n if append:\n child = document.createElement(\"div\")\n exec_id = self.element.childElementCount + 1\n out_element_id = child.id = f\"{self.id}-{exec_id}\"\n self.element.appendChild(child)\n\n out_element = document.querySelector(f\"#{out_element_id}\")\n\n if mime_type in (\"application/javascript\", \"text/html\"):\n script_element = document.createRange().createContextualFragment(html)\n out_element.appendChild(script_element)\n else:\n out_element.innerHTML = html\n\n def clear(self):\n if hasattr(self.element, \"value\"):\n self.element.value = \"\"\n else:\n self.write(\"\", append=False)\n\n def select(self, query, from_content=False):\n el = self.element\n if from_content:\n el = el.content\n\n _el = el.querySelector(query)\n if _el:\n return Element(_el.id, _el)\n else:\n console.warn(f\"WARNING: can't find element matching query {query}\")\n\n def clone(self, new_id=None, to=None):\n if new_id is None:\n new_id = self.element.id\n\n clone = self.element.cloneNode(True)\n clone.id = new_id\n\n if to:\n to.element.appendChild(clone)\n\n # Inject it into the DOM\n self.element.after(clone)\n\n return Element(clone.id, clone)\n\n def remove_class(self, classname):\n if isinstance(classname, list):\n for cl in classname:\n self.remove_class(cl)\n else:\n self.element.classList.remove(classname)\n\n def add_class(self, classname):\n self.element.classList.add(classname)\n\n\ndef add_classes(element, class_list):\n for klass in class_list.split(\" \"):\n element.classList.add(klass)\n\n\ndef create(what, id_=None, classes=\"\"):\n element = document.createElement(what)\n if id_:\n element.id = id_\n add_classes(element, classes)\n return Element(id_, element)\n\n\nclass PyWidgetTheme:\n def __init__(self, main_style_classes):\n self.main_style_classes = main_style_classes\n\n def theme_it(self, widget):\n for klass in self.main_style_classes.split(\" \"):\n widget.classList.add(klass)\n\n\nclass PyItemTemplate(Element):\n label_fields = None\n\n def __init__(self, data, labels=None, state_key=None, parent=None):\n self.data = data\n\n self.register_parent(parent)\n\n if not labels:\n labels = list(self.data.keys())\n self.labels = labels\n\n self.state_key = state_key\n\n super().__init__(self._id)\n\n def register_parent(self, parent):\n self._parent = parent\n if parent:\n self._id = f\"{self._parent._id}-c-{len(self._parent._children)}\"\n self.data[\"id\"] = self._id\n else:\n self._id = None\n\n def create(self):\n new_child = create(\"div\", self._id, \"py-li-element\")\n new_child._element.innerHTML = dedent(\n f\"\"\"\n <label id=\"{self._id}\" for=\"flex items-center p-2 \">\n <input class=\"mr-2\" type=\"checkbox\" class=\"task-check\">\n 
<p>{self.render_content()}</p>\n </label>\n \"\"\"\n )\n return new_child\n\n def on_click(self, evt):\n pass\n\n def pre_append(self):\n pass\n\n def post_append(self):\n self.element.click = self.on_click\n self.element.onclick = self.on_click\n\n self._post_append()\n\n def _post_append(self):\n pass\n\n def strike(self, value, extra=None):\n if value:\n self.add_class(\"line-through\")\n else:\n self.remove_class(\"line-through\")\n\n def render_content(self):\n return \" - \".join([self.data[f] for f in self.labels])\n\n\nclass PyListTemplate:\n theme = PyWidgetTheme(\"py-li-element\")\n item_class = PyItemTemplate\n\n def __init__(self, parent):\n self.parent = parent\n self._children = []\n self._id = self.parent.id\n\n @property\n def children(self):\n return self._children\n\n @property\n def data(self):\n return [c.data for c in self._children]\n\n def render_children(self):\n binds = {}\n for i, c in enumerate(self._children):\n txt = c.element.innerHTML\n rnd = str(time.time()).replace(\".\", \"\")[-5:]\n new_id = f\"{c.element.id}-{i}-{rnd}\"\n binds[new_id] = c.element.id\n txt = txt.replace(\">\", f\" id='{new_id}'>\")\n print(txt)\n\n def foo(evt):\n evtEl = evt.srcElement\n srcEl = Element(binds[evtEl.id])\n srcEl.element.onclick()\n evtEl.classList = srcEl.element.classList\n\n for new_id in binds:\n Element(new_id).element.onclick = foo\n\n def connect(self):\n self.md = main_div = document.createElement(\"div\")\n main_div.id = self._id + \"-list-tasks-container\"\n\n if self.theme:\n self.theme.theme_it(main_div)\n\n self.parent.appendChild(main_div)\n\n def add(self, *args, **kws):\n if not isinstance(args[0], self.item_class):\n child = self.item_class(*args, **kws)\n else:\n child = args[0]\n child.register_parent(self)\n return self._add(child)\n\n def _add(self, child_elem):\n self.pre_child_append(child_elem)\n child_elem.pre_append()\n self._children.append(child_elem)\n self.md.appendChild(child_elem.create().element)\n child_elem.post_append()\n self.child_appended(child_elem)\n return child_elem\n\n def pre_child_append(self, child):\n pass\n\n def child_appended(self, child):\n \"\"\"Overwrite me to define logic\"\"\"\n pass\n\n\npyscript = PyScript()\n", "path": "pyscriptjs/src/python/pyscript.py"}], "after_files": [{"content": "import asyncio\nimport base64\nimport html\nimport io\nimport time\nfrom textwrap import dedent\n\nimport micropip # noqa: F401\nfrom js import console, document\n\nloop = asyncio.get_event_loop()\n\nMIME_METHODS = {\n \"__repr__\": \"text/plain\",\n \"_repr_html_\": \"text/html\",\n \"_repr_markdown_\": \"text/markdown\",\n \"_repr_svg_\": \"image/svg+xml\",\n \"_repr_png_\": \"image/png\",\n \"_repr_pdf_\": \"application/pdf\",\n \"_repr_jpeg_\": \"image/jpeg\",\n \"_repr_latex\": \"text/latex\",\n \"_repr_json_\": \"application/json\",\n \"_repr_javascript_\": \"application/javascript\",\n \"savefig\": \"image/png\",\n}\n\n\ndef render_image(mime, value, meta):\n data = f\"data:{mime};charset=utf-8;base64,{value}\"\n attrs = \" \".join(['{k}=\"{v}\"' for k, v in meta.items()])\n return f'<img src=\"{data}\" {attrs}</img>'\n\n\ndef identity(value, meta):\n return value\n\n\nMIME_RENDERERS = {\n \"text/plain\": html.escape,\n \"text/html\": identity,\n \"image/png\": lambda value, meta: render_image(\"image/png\", value, meta),\n \"image/jpeg\": lambda value, meta: render_image(\"image/jpeg\", value, meta),\n \"image/svg+xml\": identity,\n \"application/json\": identity,\n \"application/javascript\": lambda value, meta: 
f\"<script>{value}</script>\",\n}\n\n\nclass HTML:\n \"\"\"\n Wrap a string so that display() can render it as plain HTML\n \"\"\"\n\n def __init__(self, html):\n self._html = html\n\n def _repr_html_(self):\n return self._html\n\n\ndef eval_formatter(obj, print_method):\n \"\"\"\n Evaluates a formatter method.\n \"\"\"\n if print_method == \"__repr__\":\n return repr(obj)\n elif hasattr(obj, print_method):\n if print_method == \"savefig\":\n buf = io.BytesIO()\n obj.savefig(buf, format=\"png\")\n buf.seek(0)\n return base64.b64encode(buf.read()).decode(\"utf-8\")\n return getattr(obj, print_method)()\n elif print_method == \"_repr_mimebundle_\":\n return {}, {}\n return None\n\n\ndef format_mime(obj):\n \"\"\"\n Formats object using _repr_x_ methods.\n \"\"\"\n if isinstance(obj, str):\n return html.escape(obj), \"text/plain\"\n\n mimebundle = eval_formatter(obj, \"_repr_mimebundle_\")\n if isinstance(mimebundle, tuple):\n format_dict, _ = mimebundle\n else:\n format_dict = mimebundle\n\n output, not_available = None, []\n for method, mime_type in reversed(MIME_METHODS.items()):\n if mime_type in format_dict:\n output = format_dict[mime_type]\n else:\n output = eval_formatter(obj, method)\n\n if output is None:\n continue\n elif mime_type not in MIME_RENDERERS:\n not_available.append(mime_type)\n continue\n break\n if output is None:\n if not_available:\n console.warn(\n f\"Rendered object requested unavailable MIME renderers: {not_available}\"\n )\n output = repr(output)\n mime_type = \"text/plain\"\n elif isinstance(output, tuple):\n output, meta = output\n else:\n meta = {}\n return MIME_RENDERERS[mime_type](output, meta), mime_type\n\n\nclass PyScript:\n loop = loop\n\n @staticmethod\n def run_until_complete(f):\n _ = loop.run_until_complete(f)\n\n @staticmethod\n def write(element_id, value, append=False, exec_id=0):\n \"\"\"Writes value to the element with id \"element_id\"\"\"\n Element(element_id).write(value=value, append=append)\n console.warn(\n dedent(\n \"\"\"PyScript Deprecation Warning: PyScript.write is\n marked as deprecated and will be removed sometime soon. Please, use\n Element(<id>).write instead.\"\"\"\n )\n )\n\n\ndef set_current_display_target(target_id):\n get_current_display_target._id = target_id\n\n\ndef get_current_display_target():\n return get_current_display_target._id\n\n\nget_current_display_target._id = None\n\n\ndef display(*values, target=None, append=True):\n default_target = get_current_display_target()\n\n if default_target is None and target is None:\n raise Exception(\n \"Implicit target not allowed here. 
Please use display(..., target=...)\"\n )\n\n if target is not None:\n for v in values:\n Element(target).write(v, append=append)\n else:\n for v in values:\n Element(default_target).write(v, append=append)\n\n\nclass Element:\n def __init__(self, element_id, element=None):\n self._id = element_id\n self._element = element\n\n @property\n def id(self):\n return self._id\n\n @property\n def element(self):\n \"\"\"Return the dom element\"\"\"\n if not self._element:\n self._element = document.querySelector(f\"#{self._id}\")\n return self._element\n\n @property\n def value(self):\n return self.element.value\n\n @property\n def innerHtml(self):\n return self.element.innerHtml\n\n def write(self, value, append=False):\n out_element_id = self.id\n\n html, mime_type = format_mime(value)\n if html == \"\\n\":\n return\n\n if append:\n child = document.createElement(\"div\")\n exec_id = self.element.childElementCount + 1\n out_element_id = child.id = f\"{self.id}-{exec_id}\"\n self.element.appendChild(child)\n\n out_element = document.querySelector(f\"#{out_element_id}\")\n\n if mime_type in (\"application/javascript\", \"text/html\"):\n script_element = document.createRange().createContextualFragment(html)\n out_element.appendChild(script_element)\n else:\n out_element.innerHTML = html\n\n def clear(self):\n if hasattr(self.element, \"value\"):\n self.element.value = \"\"\n else:\n self.write(\"\", append=False)\n\n def select(self, query, from_content=False):\n el = self.element\n if from_content:\n el = el.content\n\n _el = el.querySelector(query)\n if _el:\n return Element(_el.id, _el)\n else:\n console.warn(f\"WARNING: can't find element matching query {query}\")\n\n def clone(self, new_id=None, to=None):\n if new_id is None:\n new_id = self.element.id\n\n clone = self.element.cloneNode(True)\n clone.id = new_id\n\n if to:\n to.element.appendChild(clone)\n\n # Inject it into the DOM\n self.element.after(clone)\n\n return Element(clone.id, clone)\n\n def remove_class(self, classname):\n if isinstance(classname, list):\n for cl in classname:\n self.remove_class(cl)\n else:\n self.element.classList.remove(classname)\n\n def add_class(self, classname):\n self.element.classList.add(classname)\n\n\ndef add_classes(element, class_list):\n for klass in class_list.split(\" \"):\n element.classList.add(klass)\n\n\ndef create(what, id_=None, classes=\"\"):\n element = document.createElement(what)\n if id_:\n element.id = id_\n add_classes(element, classes)\n return Element(id_, element)\n\n\nclass PyWidgetTheme:\n def __init__(self, main_style_classes):\n self.main_style_classes = main_style_classes\n\n def theme_it(self, widget):\n for klass in self.main_style_classes.split(\" \"):\n widget.classList.add(klass)\n\n\nclass PyItemTemplate(Element):\n label_fields = None\n\n def __init__(self, data, labels=None, state_key=None, parent=None):\n self.data = data\n\n self.register_parent(parent)\n\n if not labels:\n labels = list(self.data.keys())\n self.labels = labels\n\n self.state_key = state_key\n\n super().__init__(self._id)\n\n def register_parent(self, parent):\n self._parent = parent\n if parent:\n self._id = f\"{self._parent._id}-c-{len(self._parent._children)}\"\n self.data[\"id\"] = self._id\n else:\n self._id = None\n\n def create(self):\n new_child = create(\"div\", self._id, \"py-li-element\")\n new_child._element.innerHTML = dedent(\n f\"\"\"\n <label id=\"{self._id}\" for=\"flex items-center p-2 \">\n <input class=\"mr-2\" type=\"checkbox\" class=\"task-check\">\n 
<p>{self.render_content()}</p>\n </label>\n \"\"\"\n )\n return new_child\n\n def on_click(self, evt):\n pass\n\n def pre_append(self):\n pass\n\n def post_append(self):\n self.element.click = self.on_click\n self.element.onclick = self.on_click\n\n self._post_append()\n\n def _post_append(self):\n pass\n\n def strike(self, value, extra=None):\n if value:\n self.add_class(\"line-through\")\n else:\n self.remove_class(\"line-through\")\n\n def render_content(self):\n return \" - \".join([self.data[f] for f in self.labels])\n\n\nclass PyListTemplate:\n theme = PyWidgetTheme(\"py-li-element\")\n item_class = PyItemTemplate\n\n def __init__(self, parent):\n self.parent = parent\n self._children = []\n self._id = self.parent.id\n\n @property\n def children(self):\n return self._children\n\n @property\n def data(self):\n return [c.data for c in self._children]\n\n def render_children(self):\n binds = {}\n for i, c in enumerate(self._children):\n txt = c.element.innerHTML\n rnd = str(time.time()).replace(\".\", \"\")[-5:]\n new_id = f\"{c.element.id}-{i}-{rnd}\"\n binds[new_id] = c.element.id\n txt = txt.replace(\">\", f\" id='{new_id}'>\")\n print(txt)\n\n def foo(evt):\n evtEl = evt.srcElement\n srcEl = Element(binds[evtEl.id])\n srcEl.element.onclick()\n evtEl.classList = srcEl.element.classList\n\n for new_id in binds:\n Element(new_id).element.onclick = foo\n\n def connect(self):\n self.md = main_div = document.createElement(\"div\")\n main_div.id = self._id + \"-list-tasks-container\"\n\n if self.theme:\n self.theme.theme_it(main_div)\n\n self.parent.appendChild(main_div)\n\n def add(self, *args, **kws):\n if not isinstance(args[0], self.item_class):\n child = self.item_class(*args, **kws)\n else:\n child = args[0]\n child.register_parent(self)\n return self._add(child)\n\n def _add(self, child_elem):\n self.pre_child_append(child_elem)\n child_elem.pre_append()\n self._children.append(child_elem)\n self.md.appendChild(child_elem.create().element)\n child_elem.post_append()\n self.child_appended(child_elem)\n return child_elem\n\n def pre_child_append(self, child):\n pass\n\n def child_appended(self, child):\n \"\"\"Overwrite me to define logic\"\"\"\n pass\n\n\npyscript = PyScript()\n", "path": "pyscriptjs/src/python/pyscript.py"}]}
| 4,016 | 340 |
gh_patches_debug_22625
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__PaddleDetection-391
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Is model pruning only supported for yolov3?
If I want to prune a faster-rcnn model, how should I do it?
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slim/prune/eval.py`
Content:
```
1 # Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 import os
20
21
22 def set_paddle_flags(**kwargs):
23 for key, value in kwargs.items():
24 if os.environ.get(key, None) is None:
25 os.environ[key] = str(value)
26
27
28 # NOTE(paddle-dev): All of these flags should be set before
29 # `import paddle`. Otherwise, it would not take any effect.
30 set_paddle_flags(
31 FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory
32 )
33
34 import paddle.fluid as fluid
35 from paddleslim.prune import Pruner
36 from paddleslim.analysis import flops
37
38 from ppdet.utils.eval_utils import parse_fetches, eval_run, eval_results, json_eval_results
39 import ppdet.utils.checkpoint as checkpoint
40 from ppdet.utils.check import check_gpu, check_version
41
42 from ppdet.data.reader import create_reader
43
44 from ppdet.core.workspace import load_config, merge_config, create
45 from ppdet.utils.cli import ArgsParser
46
47 import logging
48 FORMAT = '%(asctime)s-%(levelname)s: %(message)s'
49 logging.basicConfig(level=logging.INFO, format=FORMAT)
50 logger = logging.getLogger(__name__)
51
52
53 def main():
54 """
55 Main evaluate function
56 """
57 cfg = load_config(FLAGS.config)
58 if 'architecture' in cfg:
59 main_arch = cfg.architecture
60 else:
61 raise ValueError("'architecture' not specified in config file.")
62
63 merge_config(FLAGS.opt)
64 # check if set use_gpu=True in paddlepaddle cpu version
65 check_gpu(cfg.use_gpu)
66 # check if paddlepaddle version is satisfied
67 check_version()
68
69 multi_scale_test = getattr(cfg, 'MultiScaleTEST', None)
70
71 # define executor
72 place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()
73 exe = fluid.Executor(place)
74
75 # build program
76 model = create(main_arch)
77 startup_prog = fluid.Program()
78 eval_prog = fluid.Program()
79 with fluid.program_guard(eval_prog, startup_prog):
80 with fluid.unique_name.guard():
81 inputs_def = cfg['EvalReader']['inputs_def']
82 feed_vars, loader = model.build_inputs(**inputs_def)
83 if multi_scale_test is None:
84 fetches = model.eval(feed_vars)
85 else:
86 fetches = model.eval(feed_vars, multi_scale_test)
87 eval_prog = eval_prog.clone(True)
88
89 reader = create_reader(cfg.EvalReader)
90 loader.set_sample_list_generator(reader, place)
91
92 dataset = cfg['EvalReader']['dataset']
93
94 # eval already exists json file
95 if FLAGS.json_eval:
96 logger.info(
97 "In json_eval mode, PaddleDetection will evaluate json files in "
98 "output_eval directly. And proposal.json, bbox.json and mask.json "
99 "will be detected by default.")
100 json_eval_results(
101 cfg.metric, json_directory=FLAGS.output_eval, dataset=dataset)
102 return
103
104 pruned_params = FLAGS.pruned_params
105 assert (
106 FLAGS.pruned_params is not None
107 ), "FLAGS.pruned_params is empty!!! Please set it by '--pruned_params' option."
108 pruned_params = FLAGS.pruned_params.strip().split(",")
109 logger.info("pruned params: {}".format(pruned_params))
110 pruned_ratios = [float(n) for n in FLAGS.pruned_ratios.strip().split(",")]
111 logger.info("pruned ratios: {}".format(pruned_ratios))
112 assert (len(pruned_params) == len(pruned_ratios)
113 ), "The length of pruned params and pruned ratios should be equal."
114 assert (pruned_ratios > [0] * len(pruned_ratios) and
115 pruned_ratios < [1] * len(pruned_ratios)
116 ), "The elements of pruned ratios should be in range (0, 1)."
117
118 base_flops = flops(eval_prog)
119 pruner = Pruner()
120 eval_prog, _, _ = pruner.prune(
121 eval_prog,
122 fluid.global_scope(),
123 params=pruned_params,
124 ratios=pruned_ratios,
125 place=place,
126 only_graph=True)
127 pruned_flops = flops(eval_prog)
128 logger.info("pruned FLOPS: {}".format(
129 float(base_flops - pruned_flops) / base_flops))
130
131 compile_program = fluid.compiler.CompiledProgram(
132 eval_prog).with_data_parallel()
133
134 assert cfg.metric != 'OID', "eval process of OID dataset \
135 is not supported."
136
137 if cfg.metric == "WIDERFACE":
138 raise ValueError("metric type {} does not support in tools/eval.py, "
139 "please use tools/face_eval.py".format(cfg.metric))
140 assert cfg.metric in ['COCO', 'VOC'], \
141 "unknown metric type {}".format(cfg.metric)
142 extra_keys = []
143
144 if cfg.metric == 'COCO':
145 extra_keys = ['im_info', 'im_id', 'im_shape']
146 if cfg.metric == 'VOC':
147 extra_keys = ['gt_bbox', 'gt_class', 'is_difficult']
148
149 keys, values, cls = parse_fetches(fetches, eval_prog, extra_keys)
150
151 # whether output bbox is normalized in model output layer
152 is_bbox_normalized = False
153 if hasattr(model, 'is_bbox_normalized') and \
154 callable(model.is_bbox_normalized):
155 is_bbox_normalized = model.is_bbox_normalized()
156
157 sub_eval_prog = None
158 sub_keys = None
159 sub_values = None
160 # build sub-program
161 if 'Mask' in main_arch and multi_scale_test:
162 sub_eval_prog = fluid.Program()
163 with fluid.program_guard(sub_eval_prog, startup_prog):
164 with fluid.unique_name.guard():
165 inputs_def = cfg['EvalReader']['inputs_def']
166 inputs_def['mask_branch'] = True
167 feed_vars, eval_loader = model.build_inputs(**inputs_def)
168 sub_fetches = model.eval(
169 feed_vars, multi_scale_test, mask_branch=True)
170 assert cfg.metric == 'COCO'
171 extra_keys = ['im_id', 'im_shape']
172 sub_keys, sub_values, _ = parse_fetches(sub_fetches, sub_eval_prog,
173 extra_keys)
174 sub_eval_prog = sub_eval_prog.clone(True)
175
176 # load model
177 exe.run(startup_prog)
178 if 'weights' in cfg:
179 checkpoint.load_checkpoint(exe, eval_prog, cfg.weights)
180
181 results = eval_run(exe, compile_program, loader, keys, values, cls, cfg,
182 sub_eval_prog, sub_keys, sub_values)
183
184 # evaluation
185 resolution = None
186 if 'mask' in results[0]:
187 resolution = model.mask_head.resolution
188 # if map_type not set, use default 11point, only use in VOC eval
189 map_type = cfg.map_type if 'map_type' in cfg else '11point'
190 eval_results(
191 results,
192 cfg.metric,
193 cfg.num_classes,
194 resolution,
195 is_bbox_normalized,
196 FLAGS.output_eval,
197 map_type,
198 dataset=dataset)
199
200
201 if __name__ == '__main__':
202 parser = ArgsParser()
203 parser.add_argument(
204 "--json_eval",
205 action='store_true',
206 default=False,
207 help="Whether to re eval with already exists bbox.json or mask.json")
208 parser.add_argument(
209 "-f",
210 "--output_eval",
211 default=None,
212 type=str,
213 help="Evaluation file directory, default is current directory.")
214
215 parser.add_argument(
216 "-p",
217 "--pruned_params",
218 default=None,
219 type=str,
220 help="The parameters to be pruned when calculating sensitivities.")
221 parser.add_argument(
222 "--pruned_ratios",
223 default=None,
224 type=str,
225 help="The ratios pruned iteratively for each parameter when calculating sensitivities."
226 )
227
228 FLAGS = parser.parse_args()
229 main()
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/slim/prune/eval.py b/slim/prune/eval.py
--- a/slim/prune/eval.py
+++ b/slim/prune/eval.py
@@ -86,6 +86,7 @@
fetches = model.eval(feed_vars, multi_scale_test)
eval_prog = eval_prog.clone(True)
+ exe.run(startup_prog)
reader = create_reader(cfg.EvalReader)
loader.set_sample_list_generator(reader, place)
@@ -123,7 +124,7 @@
params=pruned_params,
ratios=pruned_ratios,
place=place,
- only_graph=True)
+ only_graph=False)
pruned_flops = flops(eval_prog)
logger.info("pruned FLOPS: {}".format(
float(base_flops - pruned_flops) / base_flops))
@@ -174,7 +175,6 @@
sub_eval_prog = sub_eval_prog.clone(True)
# load model
- exe.run(startup_prog)
if 'weights' in cfg:
checkpoint.load_checkpoint(exe, eval_prog, cfg.weights)
|
{"golden_diff": "diff --git a/slim/prune/eval.py b/slim/prune/eval.py\n--- a/slim/prune/eval.py\n+++ b/slim/prune/eval.py\n@@ -86,6 +86,7 @@\n fetches = model.eval(feed_vars, multi_scale_test)\n eval_prog = eval_prog.clone(True)\n \n+ exe.run(startup_prog)\n reader = create_reader(cfg.EvalReader)\n loader.set_sample_list_generator(reader, place)\n \n@@ -123,7 +124,7 @@\n params=pruned_params,\n ratios=pruned_ratios,\n place=place,\n- only_graph=True)\n+ only_graph=False)\n pruned_flops = flops(eval_prog)\n logger.info(\"pruned FLOPS: {}\".format(\n float(base_flops - pruned_flops) / base_flops))\n@@ -174,7 +175,6 @@\n sub_eval_prog = sub_eval_prog.clone(True)\n \n # load model\n- exe.run(startup_prog)\n if 'weights' in cfg:\n checkpoint.load_checkpoint(exe, eval_prog, cfg.weights)\n", "issue": "\u8bf7\u95ee\u6a21\u578b\u88c1\u526a\u53ea\u80fd\u9488\u5bf9yolov3\u4e48\uff1f\n\u5982\u679c\u60f3\u5bf9faster-rcnn\u7684\u6a21\u578b\u8fdb\u884c\u88c1\u526a\uff0c\u5e94\u8be5\u600e\u4e48\u505a\u5462\uff1f\r\n\r\n\u8c22\u8c22\uff01\n", "before_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\n\n\ndef set_paddle_flags(**kwargs):\n for key, value in kwargs.items():\n if os.environ.get(key, None) is None:\n os.environ[key] = str(value)\n\n\n# NOTE(paddle-dev): All of these flags should be set before\n# `import paddle`. 
Otherwise, it would not take any effect.\nset_paddle_flags(\n FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory\n)\n\nimport paddle.fluid as fluid\nfrom paddleslim.prune import Pruner\nfrom paddleslim.analysis import flops\n\nfrom ppdet.utils.eval_utils import parse_fetches, eval_run, eval_results, json_eval_results\nimport ppdet.utils.checkpoint as checkpoint\nfrom ppdet.utils.check import check_gpu, check_version\n\nfrom ppdet.data.reader import create_reader\n\nfrom ppdet.core.workspace import load_config, merge_config, create\nfrom ppdet.utils.cli import ArgsParser\n\nimport logging\nFORMAT = '%(asctime)s-%(levelname)s: %(message)s'\nlogging.basicConfig(level=logging.INFO, format=FORMAT)\nlogger = logging.getLogger(__name__)\n\n\ndef main():\n \"\"\"\n Main evaluate function\n \"\"\"\n cfg = load_config(FLAGS.config)\n if 'architecture' in cfg:\n main_arch = cfg.architecture\n else:\n raise ValueError(\"'architecture' not specified in config file.\")\n\n merge_config(FLAGS.opt)\n # check if set use_gpu=True in paddlepaddle cpu version\n check_gpu(cfg.use_gpu)\n # check if paddlepaddle version is satisfied\n check_version()\n\n multi_scale_test = getattr(cfg, 'MultiScaleTEST', None)\n\n # define executor\n place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()\n exe = fluid.Executor(place)\n\n # build program\n model = create(main_arch)\n startup_prog = fluid.Program()\n eval_prog = fluid.Program()\n with fluid.program_guard(eval_prog, startup_prog):\n with fluid.unique_name.guard():\n inputs_def = cfg['EvalReader']['inputs_def']\n feed_vars, loader = model.build_inputs(**inputs_def)\n if multi_scale_test is None:\n fetches = model.eval(feed_vars)\n else:\n fetches = model.eval(feed_vars, multi_scale_test)\n eval_prog = eval_prog.clone(True)\n\n reader = create_reader(cfg.EvalReader)\n loader.set_sample_list_generator(reader, place)\n\n dataset = cfg['EvalReader']['dataset']\n\n # eval already exists json file\n if FLAGS.json_eval:\n logger.info(\n \"In json_eval mode, PaddleDetection will evaluate json files in \"\n \"output_eval directly. And proposal.json, bbox.json and mask.json \"\n \"will be detected by default.\")\n json_eval_results(\n cfg.metric, json_directory=FLAGS.output_eval, dataset=dataset)\n return\n\n pruned_params = FLAGS.pruned_params\n assert (\n FLAGS.pruned_params is not None\n ), \"FLAGS.pruned_params is empty!!! 
Please set it by '--pruned_params' option.\"\n pruned_params = FLAGS.pruned_params.strip().split(\",\")\n logger.info(\"pruned params: {}\".format(pruned_params))\n pruned_ratios = [float(n) for n in FLAGS.pruned_ratios.strip().split(\",\")]\n logger.info(\"pruned ratios: {}\".format(pruned_ratios))\n assert (len(pruned_params) == len(pruned_ratios)\n ), \"The length of pruned params and pruned ratios should be equal.\"\n assert (pruned_ratios > [0] * len(pruned_ratios) and\n pruned_ratios < [1] * len(pruned_ratios)\n ), \"The elements of pruned ratios should be in range (0, 1).\"\n\n base_flops = flops(eval_prog)\n pruner = Pruner()\n eval_prog, _, _ = pruner.prune(\n eval_prog,\n fluid.global_scope(),\n params=pruned_params,\n ratios=pruned_ratios,\n place=place,\n only_graph=True)\n pruned_flops = flops(eval_prog)\n logger.info(\"pruned FLOPS: {}\".format(\n float(base_flops - pruned_flops) / base_flops))\n\n compile_program = fluid.compiler.CompiledProgram(\n eval_prog).with_data_parallel()\n\n assert cfg.metric != 'OID', \"eval process of OID dataset \\\n is not supported.\"\n\n if cfg.metric == \"WIDERFACE\":\n raise ValueError(\"metric type {} does not support in tools/eval.py, \"\n \"please use tools/face_eval.py\".format(cfg.metric))\n assert cfg.metric in ['COCO', 'VOC'], \\\n \"unknown metric type {}\".format(cfg.metric)\n extra_keys = []\n\n if cfg.metric == 'COCO':\n extra_keys = ['im_info', 'im_id', 'im_shape']\n if cfg.metric == 'VOC':\n extra_keys = ['gt_bbox', 'gt_class', 'is_difficult']\n\n keys, values, cls = parse_fetches(fetches, eval_prog, extra_keys)\n\n # whether output bbox is normalized in model output layer\n is_bbox_normalized = False\n if hasattr(model, 'is_bbox_normalized') and \\\n callable(model.is_bbox_normalized):\n is_bbox_normalized = model.is_bbox_normalized()\n\n sub_eval_prog = None\n sub_keys = None\n sub_values = None\n # build sub-program\n if 'Mask' in main_arch and multi_scale_test:\n sub_eval_prog = fluid.Program()\n with fluid.program_guard(sub_eval_prog, startup_prog):\n with fluid.unique_name.guard():\n inputs_def = cfg['EvalReader']['inputs_def']\n inputs_def['mask_branch'] = True\n feed_vars, eval_loader = model.build_inputs(**inputs_def)\n sub_fetches = model.eval(\n feed_vars, multi_scale_test, mask_branch=True)\n assert cfg.metric == 'COCO'\n extra_keys = ['im_id', 'im_shape']\n sub_keys, sub_values, _ = parse_fetches(sub_fetches, sub_eval_prog,\n extra_keys)\n sub_eval_prog = sub_eval_prog.clone(True)\n\n # load model\n exe.run(startup_prog)\n if 'weights' in cfg:\n checkpoint.load_checkpoint(exe, eval_prog, cfg.weights)\n\n results = eval_run(exe, compile_program, loader, keys, values, cls, cfg,\n sub_eval_prog, sub_keys, sub_values)\n\n # evaluation\n resolution = None\n if 'mask' in results[0]:\n resolution = model.mask_head.resolution\n # if map_type not set, use default 11point, only use in VOC eval\n map_type = cfg.map_type if 'map_type' in cfg else '11point'\n eval_results(\n results,\n cfg.metric,\n cfg.num_classes,\n resolution,\n is_bbox_normalized,\n FLAGS.output_eval,\n map_type,\n dataset=dataset)\n\n\nif __name__ == '__main__':\n parser = ArgsParser()\n parser.add_argument(\n \"--json_eval\",\n action='store_true',\n default=False,\n help=\"Whether to re eval with already exists bbox.json or mask.json\")\n parser.add_argument(\n \"-f\",\n \"--output_eval\",\n default=None,\n type=str,\n help=\"Evaluation file directory, default is current directory.\")\n\n parser.add_argument(\n \"-p\",\n \"--pruned_params\",\n 
default=None,\n type=str,\n help=\"The parameters to be pruned when calculating sensitivities.\")\n parser.add_argument(\n \"--pruned_ratios\",\n default=None,\n type=str,\n help=\"The ratios pruned iteratively for each parameter when calculating sensitivities.\"\n )\n\n FLAGS = parser.parse_args()\n main()\n", "path": "slim/prune/eval.py"}], "after_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\n\n\ndef set_paddle_flags(**kwargs):\n for key, value in kwargs.items():\n if os.environ.get(key, None) is None:\n os.environ[key] = str(value)\n\n\n# NOTE(paddle-dev): All of these flags should be set before\n# `import paddle`. Otherwise, it would not take any effect.\nset_paddle_flags(\n FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory\n)\n\nimport paddle.fluid as fluid\nfrom paddleslim.prune import Pruner\nfrom paddleslim.analysis import flops\n\nfrom ppdet.utils.eval_utils import parse_fetches, eval_run, eval_results, json_eval_results\nimport ppdet.utils.checkpoint as checkpoint\nfrom ppdet.utils.check import check_gpu, check_version\n\nfrom ppdet.data.reader import create_reader\n\nfrom ppdet.core.workspace import load_config, merge_config, create\nfrom ppdet.utils.cli import ArgsParser\n\nimport logging\nFORMAT = '%(asctime)s-%(levelname)s: %(message)s'\nlogging.basicConfig(level=logging.INFO, format=FORMAT)\nlogger = logging.getLogger(__name__)\n\n\ndef main():\n \"\"\"\n Main evaluate function\n \"\"\"\n cfg = load_config(FLAGS.config)\n if 'architecture' in cfg:\n main_arch = cfg.architecture\n else:\n raise ValueError(\"'architecture' not specified in config file.\")\n\n merge_config(FLAGS.opt)\n # check if set use_gpu=True in paddlepaddle cpu version\n check_gpu(cfg.use_gpu)\n # check if paddlepaddle version is satisfied\n check_version()\n\n multi_scale_test = getattr(cfg, 'MultiScaleTEST', None)\n\n # define executor\n place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()\n exe = fluid.Executor(place)\n\n # build program\n model = create(main_arch)\n startup_prog = fluid.Program()\n eval_prog = fluid.Program()\n with fluid.program_guard(eval_prog, startup_prog):\n with fluid.unique_name.guard():\n inputs_def = cfg['EvalReader']['inputs_def']\n feed_vars, loader = model.build_inputs(**inputs_def)\n if multi_scale_test is None:\n fetches = model.eval(feed_vars)\n else:\n fetches = model.eval(feed_vars, multi_scale_test)\n eval_prog = eval_prog.clone(True)\n\n exe.run(startup_prog)\n reader = create_reader(cfg.EvalReader)\n loader.set_sample_list_generator(reader, place)\n\n dataset = cfg['EvalReader']['dataset']\n\n # eval already exists json file\n if FLAGS.json_eval:\n logger.info(\n \"In json_eval mode, PaddleDetection will evaluate json files in \"\n \"output_eval directly. 
And proposal.json, bbox.json and mask.json \"\n \"will be detected by default.\")\n json_eval_results(\n cfg.metric, json_directory=FLAGS.output_eval, dataset=dataset)\n return\n\n pruned_params = FLAGS.pruned_params\n assert (\n FLAGS.pruned_params is not None\n ), \"FLAGS.pruned_params is empty!!! Please set it by '--pruned_params' option.\"\n pruned_params = FLAGS.pruned_params.strip().split(\",\")\n logger.info(\"pruned params: {}\".format(pruned_params))\n pruned_ratios = [float(n) for n in FLAGS.pruned_ratios.strip().split(\",\")]\n logger.info(\"pruned ratios: {}\".format(pruned_ratios))\n assert (len(pruned_params) == len(pruned_ratios)\n ), \"The length of pruned params and pruned ratios should be equal.\"\n assert (pruned_ratios > [0] * len(pruned_ratios) and\n pruned_ratios < [1] * len(pruned_ratios)\n ), \"The elements of pruned ratios should be in range (0, 1).\"\n\n base_flops = flops(eval_prog)\n pruner = Pruner()\n eval_prog, _, _ = pruner.prune(\n eval_prog,\n fluid.global_scope(),\n params=pruned_params,\n ratios=pruned_ratios,\n place=place,\n only_graph=False)\n pruned_flops = flops(eval_prog)\n logger.info(\"pruned FLOPS: {}\".format(\n float(base_flops - pruned_flops) / base_flops))\n\n compile_program = fluid.compiler.CompiledProgram(\n eval_prog).with_data_parallel()\n\n assert cfg.metric != 'OID', \"eval process of OID dataset \\\n is not supported.\"\n\n if cfg.metric == \"WIDERFACE\":\n raise ValueError(\"metric type {} does not support in tools/eval.py, \"\n \"please use tools/face_eval.py\".format(cfg.metric))\n assert cfg.metric in ['COCO', 'VOC'], \\\n \"unknown metric type {}\".format(cfg.metric)\n extra_keys = []\n\n if cfg.metric == 'COCO':\n extra_keys = ['im_info', 'im_id', 'im_shape']\n if cfg.metric == 'VOC':\n extra_keys = ['gt_bbox', 'gt_class', 'is_difficult']\n\n keys, values, cls = parse_fetches(fetches, eval_prog, extra_keys)\n\n # whether output bbox is normalized in model output layer\n is_bbox_normalized = False\n if hasattr(model, 'is_bbox_normalized') and \\\n callable(model.is_bbox_normalized):\n is_bbox_normalized = model.is_bbox_normalized()\n\n sub_eval_prog = None\n sub_keys = None\n sub_values = None\n # build sub-program\n if 'Mask' in main_arch and multi_scale_test:\n sub_eval_prog = fluid.Program()\n with fluid.program_guard(sub_eval_prog, startup_prog):\n with fluid.unique_name.guard():\n inputs_def = cfg['EvalReader']['inputs_def']\n inputs_def['mask_branch'] = True\n feed_vars, eval_loader = model.build_inputs(**inputs_def)\n sub_fetches = model.eval(\n feed_vars, multi_scale_test, mask_branch=True)\n assert cfg.metric == 'COCO'\n extra_keys = ['im_id', 'im_shape']\n sub_keys, sub_values, _ = parse_fetches(sub_fetches, sub_eval_prog,\n extra_keys)\n sub_eval_prog = sub_eval_prog.clone(True)\n\n # load model\n if 'weights' in cfg:\n checkpoint.load_checkpoint(exe, eval_prog, cfg.weights)\n\n results = eval_run(exe, compile_program, loader, keys, values, cls, cfg,\n sub_eval_prog, sub_keys, sub_values)\n\n # evaluation\n resolution = None\n if 'mask' in results[0]:\n resolution = model.mask_head.resolution\n # if map_type not set, use default 11point, only use in VOC eval\n map_type = cfg.map_type if 'map_type' in cfg else '11point'\n eval_results(\n results,\n cfg.metric,\n cfg.num_classes,\n resolution,\n is_bbox_normalized,\n FLAGS.output_eval,\n map_type,\n dataset=dataset)\n\n\nif __name__ == '__main__':\n parser = ArgsParser()\n parser.add_argument(\n \"--json_eval\",\n action='store_true',\n default=False,\n help=\"Whether 
to re eval with already exists bbox.json or mask.json\")\n parser.add_argument(\n \"-f\",\n \"--output_eval\",\n default=None,\n type=str,\n help=\"Evaluation file directory, default is current directory.\")\n\n parser.add_argument(\n \"-p\",\n \"--pruned_params\",\n default=None,\n type=str,\n help=\"The parameters to be pruned when calculating sensitivities.\")\n parser.add_argument(\n \"--pruned_ratios\",\n default=None,\n type=str,\n help=\"The ratios pruned iteratively for each parameter when calculating sensitivities.\"\n )\n\n FLAGS = parser.parse_args()\n main()\n", "path": "slim/prune/eval.py"}]}
| 2,718 | 250 |
gh_patches_debug_34855
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-5869
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.bloomberg: error: unmatched '{' in format spec
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.6.2
### Description
It's quite a strange error. Seems like there is a change to the JSON data return from Bloomberg, or it is corrupted.
### Debug log
```text
$ streamlink --loglevel=debug https://www.bloomberg.com/live/us
[session][debug] Loading plugin: bloomberg
[cli][debug] OS: macOS 10.16
[cli][debug] Python: 3.9.12
[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022
[cli][debug] Streamlink: 6.6.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2021.10.8
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.8.0
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.19.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.27.1
[cli][debug] trio: 0.22.2
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.1.1
[cli][debug] urllib3: 1.26.9
[cli][debug] websocket-client: 1.6.3
[cli][debug] Arguments:
[cli][debug] url=https://www.bloomberg.com/live/us
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin bloomberg for URL https://www.bloomberg.com/live/us
error: unmatched '{' in format spec
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/bloomberg.py`
Content:
```
1 """
2 $description America-based television network centred towards business and capital market programming.
3 $url bloomberg.com
4 $type live, vod
5 $metadata title
6 """
7
8 import logging
9 import re
10
11 from streamlink.plugin import Plugin, PluginError, pluginmatcher
12 from streamlink.plugin.api import validate
13 from streamlink.stream.hls import HLSStream
14
15
16 log = logging.getLogger(__name__)
17
18
19 @pluginmatcher(re.compile(r"""
20 https?://(?:www\.)?bloomberg\.com/
21 (?:
22 (?P<live>live)(?:/(?P<channel>[^/]+))?
23 |
24 news/videos/[^/]+/[^/]+
25 )
26 """, re.VERBOSE))
27 class Bloomberg(Plugin):
28 LIVE_API_URL = "https://cdn.gotraffic.net/projector/latest/assets/config/config.min.json?v=1"
29 VOD_API_URL = "https://www.bloomberg.com/api/embed?id={0}"
30 DEFAULT_CHANNEL = "us"
31
32 def _get_live_streams(self, data, channel):
33 schema_live_ids = validate.Schema(
34 {"live": {"channels": {"byChannelId": {
35 channel: validate.all(
36 {"liveId": str},
37 validate.get("liveId"),
38 ),
39 }}}},
40 validate.get(("live", "channels", "byChannelId", channel)),
41 )
42 try:
43 live_id = schema_live_ids.validate(data)
44 except PluginError:
45 log.error(f"Could not find liveId for channel '{channel}'")
46 return
47
48 log.debug(f"Found liveId: {live_id}")
49 return self.session.http.get(self.LIVE_API_URL, schema=validate.Schema(
50 validate.parse_json(),
51 {"livestreams": {
52 live_id: {
53 validate.optional("cdns"): validate.all(
54 [{"streams": [{
55 "url": validate.url(),
56 }]}],
57 validate.transform(lambda x: [urls["url"] for y in x for urls in y["streams"]]),
58 ),
59 },
60 }},
61 validate.get(("livestreams", live_id, "cdns")),
62 ))
63
64 def _get_vod_streams(self, data):
65 schema_vod_list = validate.Schema(
66 validate.any(
67 validate.all(
68 {"video": {"videoStory": dict}},
69 validate.get(("video", "videoStory")),
70 ),
71 validate.all(
72 {"quicktakeVideo": {"videoStory": dict}},
73 validate.get(("quicktakeVideo", "videoStory")),
74 ),
75 ),
76 {"video": {
77 "bmmrId": str,
78 }},
79 validate.get(("video", "bmmrId")),
80 )
81 schema_url = validate.all(
82 {"url": validate.url()},
83 validate.get("url"),
84 )
85
86 try:
87 video_id = schema_vod_list.validate(data)
88 except PluginError:
89 log.error("Could not find videoId")
90 return
91
92 log.debug(f"Found videoId: {video_id}")
93 vod_url = self.VOD_API_URL.format(video_id)
94 secureStreams, streams, self.title = self.session.http.get(vod_url, schema=validate.Schema(
95 validate.parse_json(),
96 {
97 validate.optional("secureStreams"): [schema_url],
98 validate.optional("streams"): [schema_url],
99 "title": str,
100 },
101 validate.union_get("secureStreams", "streams", "title"),
102 ))
103
104 return secureStreams or streams
105
106 def _get_streams(self):
107 del self.session.http.headers["Accept-Encoding"]
108
109 try:
110 data = self.session.http.get(self.url, schema=validate.Schema(
111 validate.parse_html(),
112 validate.xml_xpath_string(".//script[contains(text(),'window.__PRELOADED_STATE__')][1]/text()"),
113 str,
114 validate.regex(re.compile(r"^\s*window\.__PRELOADED_STATE__\s*=\s*({.+})\s*;?\s*$", re.DOTALL)),
115 validate.get(1),
116 validate.parse_json(),
117 ))
118 except PluginError:
119 log.error("Could not find JSON data. Invalid URL or bot protection...")
120 return
121
122 if self.match.group("live"):
123 streams = self._get_live_streams(data, self.match.group("channel") or self.DEFAULT_CHANNEL)
124 else:
125 streams = self._get_vod_streams(data)
126
127 if streams:
128 # just return the first stream
129 return HLSStream.parse_variant_playlist(self.session, streams[0])
130
131
132 __plugin__ = Bloomberg
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/bloomberg.py b/src/streamlink/plugins/bloomberg.py
--- a/src/streamlink/plugins/bloomberg.py
+++ b/src/streamlink/plugins/bloomberg.py
@@ -16,14 +16,14 @@
log = logging.getLogger(__name__)
-@pluginmatcher(re.compile(r"""
- https?://(?:www\.)?bloomberg\.com/
- (?:
- (?P<live>live)(?:/(?P<channel>[^/]+))?
- |
- news/videos/[^/]+/[^/]+
- )
-""", re.VERBOSE))
+@pluginmatcher(
+ name="live",
+ pattern=re.compile(r"https?://(?:www\.)?bloomberg\.com/live(?:/(?P<channel>[^/]+))?"),
+)
+@pluginmatcher(
+ name="vod",
+ pattern=re.compile(r"https?://(?:www\.)?bloomberg\.com/news/videos/[^/]+/[^/]+"),
+)
class Bloomberg(Plugin):
LIVE_API_URL = "https://cdn.gotraffic.net/projector/latest/assets/config/config.min.json?v=1"
VOD_API_URL = "https://www.bloomberg.com/api/embed?id={0}"
@@ -106,21 +106,23 @@
def _get_streams(self):
del self.session.http.headers["Accept-Encoding"]
- try:
- data = self.session.http.get(self.url, schema=validate.Schema(
- validate.parse_html(),
- validate.xml_xpath_string(".//script[contains(text(),'window.__PRELOADED_STATE__')][1]/text()"),
- str,
- validate.regex(re.compile(r"^\s*window\.__PRELOADED_STATE__\s*=\s*({.+})\s*;?\s*$", re.DOTALL)),
- validate.get(1),
- validate.parse_json(),
- ))
- except PluginError:
+ data = self.session.http.get(self.url, schema=validate.Schema(
+ validate.parse_html(),
+ validate.xml_xpath_string(".//script[contains(text(),'window.__PRELOADED_STATE__')][1]/text()"),
+ validate.none_or_all(
+ re.compile(r"\bwindow\.__PRELOADED_STATE__\s*=\s*(?P<json>{.+?})\s*;(?:\s|$)"),
+ validate.none_or_all(
+ validate.get("json"),
+ validate.parse_json(),
+ ),
+ ),
+ ))
+ if not data:
log.error("Could not find JSON data. Invalid URL or bot protection...")
return
- if self.match.group("live"):
- streams = self._get_live_streams(data, self.match.group("channel") or self.DEFAULT_CHANNEL)
+ if self.matches["live"]:
+ streams = self._get_live_streams(data, self.match["channel"] or self.DEFAULT_CHANNEL)
else:
streams = self._get_vod_streams(data)
|
{"golden_diff": "diff --git a/src/streamlink/plugins/bloomberg.py b/src/streamlink/plugins/bloomberg.py\n--- a/src/streamlink/plugins/bloomberg.py\n+++ b/src/streamlink/plugins/bloomberg.py\n@@ -16,14 +16,14 @@\n log = logging.getLogger(__name__)\n \n \n-@pluginmatcher(re.compile(r\"\"\"\n- https?://(?:www\\.)?bloomberg\\.com/\n- (?:\n- (?P<live>live)(?:/(?P<channel>[^/]+))?\n- |\n- news/videos/[^/]+/[^/]+\n- )\n-\"\"\", re.VERBOSE))\n+@pluginmatcher(\n+ name=\"live\",\n+ pattern=re.compile(r\"https?://(?:www\\.)?bloomberg\\.com/live(?:/(?P<channel>[^/]+))?\"),\n+)\n+@pluginmatcher(\n+ name=\"vod\",\n+ pattern=re.compile(r\"https?://(?:www\\.)?bloomberg\\.com/news/videos/[^/]+/[^/]+\"),\n+)\n class Bloomberg(Plugin):\n LIVE_API_URL = \"https://cdn.gotraffic.net/projector/latest/assets/config/config.min.json?v=1\"\n VOD_API_URL = \"https://www.bloomberg.com/api/embed?id={0}\"\n@@ -106,21 +106,23 @@\n def _get_streams(self):\n del self.session.http.headers[\"Accept-Encoding\"]\n \n- try:\n- data = self.session.http.get(self.url, schema=validate.Schema(\n- validate.parse_html(),\n- validate.xml_xpath_string(\".//script[contains(text(),'window.__PRELOADED_STATE__')][1]/text()\"),\n- str,\n- validate.regex(re.compile(r\"^\\s*window\\.__PRELOADED_STATE__\\s*=\\s*({.+})\\s*;?\\s*$\", re.DOTALL)),\n- validate.get(1),\n- validate.parse_json(),\n- ))\n- except PluginError:\n+ data = self.session.http.get(self.url, schema=validate.Schema(\n+ validate.parse_html(),\n+ validate.xml_xpath_string(\".//script[contains(text(),'window.__PRELOADED_STATE__')][1]/text()\"),\n+ validate.none_or_all(\n+ re.compile(r\"\\bwindow\\.__PRELOADED_STATE__\\s*=\\s*(?P<json>{.+?})\\s*;(?:\\s|$)\"),\n+ validate.none_or_all(\n+ validate.get(\"json\"),\n+ validate.parse_json(),\n+ ),\n+ ),\n+ ))\n+ if not data:\n log.error(\"Could not find JSON data. Invalid URL or bot protection...\")\n return\n \n- if self.match.group(\"live\"):\n- streams = self._get_live_streams(data, self.match.group(\"channel\") or self.DEFAULT_CHANNEL)\n+ if self.matches[\"live\"]:\n+ streams = self._get_live_streams(data, self.match[\"channel\"] or self.DEFAULT_CHANNEL)\n else:\n streams = self._get_vod_streams(data)\n", "issue": "plugins.bloomberg: error: unmatched '{' in format spec\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nstreamlink 6.6.2\n\n### Description\n\nIt's quite a strange error. 
Seems like there is a change to the JSON data return from Bloomberg, or it is corrupted.\n\n### Debug log\n\n```text\n$ streamlink --loglevel=debug https://www.bloomberg.com/live/us\r\n[session][debug] Loading plugin: bloomberg\r\n[cli][debug] OS: macOS 10.16\r\n[cli][debug] Python: 3.9.12\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022\r\n[cli][debug] Streamlink: 6.6.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2021.10.8\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.8.0\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.19.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.27.1\r\n[cli][debug] trio: 0.22.2\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.1.1\r\n[cli][debug] urllib3: 1.26.9\r\n[cli][debug] websocket-client: 1.6.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.bloomberg.com/live/us\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin bloomberg for URL https://www.bloomberg.com/live/us\r\nerror: unmatched '{' in format spec\n```\n\n", "before_files": [{"content": "\"\"\"\n$description America-based television network centred towards business and capital market programming.\n$url bloomberg.com\n$type live, vod\n$metadata title\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, PluginError, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?bloomberg\\.com/\n (?:\n (?P<live>live)(?:/(?P<channel>[^/]+))?\n |\n news/videos/[^/]+/[^/]+\n )\n\"\"\", re.VERBOSE))\nclass Bloomberg(Plugin):\n LIVE_API_URL = \"https://cdn.gotraffic.net/projector/latest/assets/config/config.min.json?v=1\"\n VOD_API_URL = \"https://www.bloomberg.com/api/embed?id={0}\"\n DEFAULT_CHANNEL = \"us\"\n\n def _get_live_streams(self, data, channel):\n schema_live_ids = validate.Schema(\n {\"live\": {\"channels\": {\"byChannelId\": {\n channel: validate.all(\n {\"liveId\": str},\n validate.get(\"liveId\"),\n ),\n }}}},\n validate.get((\"live\", \"channels\", \"byChannelId\", channel)),\n )\n try:\n live_id = schema_live_ids.validate(data)\n except PluginError:\n log.error(f\"Could not find liveId for channel '{channel}'\")\n return\n\n log.debug(f\"Found liveId: {live_id}\")\n return self.session.http.get(self.LIVE_API_URL, schema=validate.Schema(\n validate.parse_json(),\n {\"livestreams\": {\n live_id: {\n validate.optional(\"cdns\"): validate.all(\n [{\"streams\": [{\n \"url\": validate.url(),\n }]}],\n validate.transform(lambda x: [urls[\"url\"] for y in x for urls in y[\"streams\"]]),\n ),\n },\n }},\n validate.get((\"livestreams\", live_id, \"cdns\")),\n ))\n\n def _get_vod_streams(self, data):\n schema_vod_list = validate.Schema(\n validate.any(\n validate.all(\n {\"video\": {\"videoStory\": dict}},\n validate.get((\"video\", \"videoStory\")),\n ),\n validate.all(\n {\"quicktakeVideo\": {\"videoStory\": dict}},\n validate.get((\"quicktakeVideo\", \"videoStory\")),\n ),\n ),\n {\"video\": {\n \"bmmrId\": str,\n }},\n validate.get((\"video\", \"bmmrId\")),\n )\n schema_url = validate.all(\n {\"url\": validate.url()},\n validate.get(\"url\"),\n )\n\n try:\n video_id = schema_vod_list.validate(data)\n except PluginError:\n log.error(\"Could not find videoId\")\n return\n\n log.debug(f\"Found videoId: {video_id}\")\n vod_url = self.VOD_API_URL.format(video_id)\n secureStreams, streams, self.title = self.session.http.get(vod_url, 
schema=validate.Schema(\n validate.parse_json(),\n {\n validate.optional(\"secureStreams\"): [schema_url],\n validate.optional(\"streams\"): [schema_url],\n \"title\": str,\n },\n validate.union_get(\"secureStreams\", \"streams\", \"title\"),\n ))\n\n return secureStreams or streams\n\n def _get_streams(self):\n del self.session.http.headers[\"Accept-Encoding\"]\n\n try:\n data = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(text(),'window.__PRELOADED_STATE__')][1]/text()\"),\n str,\n validate.regex(re.compile(r\"^\\s*window\\.__PRELOADED_STATE__\\s*=\\s*({.+})\\s*;?\\s*$\", re.DOTALL)),\n validate.get(1),\n validate.parse_json(),\n ))\n except PluginError:\n log.error(\"Could not find JSON data. Invalid URL or bot protection...\")\n return\n\n if self.match.group(\"live\"):\n streams = self._get_live_streams(data, self.match.group(\"channel\") or self.DEFAULT_CHANNEL)\n else:\n streams = self._get_vod_streams(data)\n\n if streams:\n # just return the first stream\n return HLSStream.parse_variant_playlist(self.session, streams[0])\n\n\n__plugin__ = Bloomberg\n", "path": "src/streamlink/plugins/bloomberg.py"}], "after_files": [{"content": "\"\"\"\n$description America-based television network centred towards business and capital market programming.\n$url bloomberg.com\n$type live, vod\n$metadata title\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, PluginError, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(\n name=\"live\",\n pattern=re.compile(r\"https?://(?:www\\.)?bloomberg\\.com/live(?:/(?P<channel>[^/]+))?\"),\n)\n@pluginmatcher(\n name=\"vod\",\n pattern=re.compile(r\"https?://(?:www\\.)?bloomberg\\.com/news/videos/[^/]+/[^/]+\"),\n)\nclass Bloomberg(Plugin):\n LIVE_API_URL = \"https://cdn.gotraffic.net/projector/latest/assets/config/config.min.json?v=1\"\n VOD_API_URL = \"https://www.bloomberg.com/api/embed?id={0}\"\n DEFAULT_CHANNEL = \"us\"\n\n def _get_live_streams(self, data, channel):\n schema_live_ids = validate.Schema(\n {\"live\": {\"channels\": {\"byChannelId\": {\n channel: validate.all(\n {\"liveId\": str},\n validate.get(\"liveId\"),\n ),\n }}}},\n validate.get((\"live\", \"channels\", \"byChannelId\", channel)),\n )\n try:\n live_id = schema_live_ids.validate(data)\n except PluginError:\n log.error(f\"Could not find liveId for channel '{channel}'\")\n return\n\n log.debug(f\"Found liveId: {live_id}\")\n return self.session.http.get(self.LIVE_API_URL, schema=validate.Schema(\n validate.parse_json(),\n {\"livestreams\": {\n live_id: {\n validate.optional(\"cdns\"): validate.all(\n [{\"streams\": [{\n \"url\": validate.url(),\n }]}],\n validate.transform(lambda x: [urls[\"url\"] for y in x for urls in y[\"streams\"]]),\n ),\n },\n }},\n validate.get((\"livestreams\", live_id, \"cdns\")),\n ))\n\n def _get_vod_streams(self, data):\n schema_vod_list = validate.Schema(\n validate.any(\n validate.all(\n {\"video\": {\"videoStory\": dict}},\n validate.get((\"video\", \"videoStory\")),\n ),\n validate.all(\n {\"quicktakeVideo\": {\"videoStory\": dict}},\n validate.get((\"quicktakeVideo\", \"videoStory\")),\n ),\n ),\n {\"video\": {\n \"bmmrId\": str,\n }},\n validate.get((\"video\", \"bmmrId\")),\n )\n schema_url = validate.all(\n {\"url\": validate.url()},\n validate.get(\"url\"),\n )\n\n try:\n video_id = schema_vod_list.validate(data)\n except 
PluginError:\n log.error(\"Could not find videoId\")\n return\n\n log.debug(f\"Found videoId: {video_id}\")\n vod_url = self.VOD_API_URL.format(video_id)\n secureStreams, streams, self.title = self.session.http.get(vod_url, schema=validate.Schema(\n validate.parse_json(),\n {\n validate.optional(\"secureStreams\"): [schema_url],\n validate.optional(\"streams\"): [schema_url],\n \"title\": str,\n },\n validate.union_get(\"secureStreams\", \"streams\", \"title\"),\n ))\n\n return secureStreams or streams\n\n def _get_streams(self):\n del self.session.http.headers[\"Accept-Encoding\"]\n\n data = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(text(),'window.__PRELOADED_STATE__')][1]/text()\"),\n validate.none_or_all(\n re.compile(r\"\\bwindow\\.__PRELOADED_STATE__\\s*=\\s*(?P<json>{.+?})\\s*;(?:\\s|$)\"),\n validate.none_or_all(\n validate.get(\"json\"),\n validate.parse_json(),\n ),\n ),\n ))\n if not data:\n log.error(\"Could not find JSON data. Invalid URL or bot protection...\")\n return\n\n if self.matches[\"live\"]:\n streams = self._get_live_streams(data, self.match[\"channel\"] or self.DEFAULT_CHANNEL)\n else:\n streams = self._get_vod_streams(data)\n\n if streams:\n # just return the first stream\n return HLSStream.parse_variant_playlist(self.session, streams[0])\n\n\n__plugin__ = Bloomberg\n", "path": "src/streamlink/plugins/bloomberg.py"}]}
| 2,100 | 645 |
gh_patches_debug_17276
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2164
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow no-passphrase private keys
### Describe the bug
The Snowflake connector (at least) requires a passphrase in the profile file to open a private key connection.
### Steps To Reproduce
Create a dbt target like the following:
```
qa:
type: snowflake
account: my_account
user: my_user
role: ANALYST
# Keypair config
private_key_path: "path/to/my/no/passphrase/key"
private_key_passphrase: None
database: DB
warehouse: WH
schema: PUBLIC
threads: 1
client_session_keep_alive: False
```
Attempt to run against said DBT target. DBT will fail because no passphrase is provided. If, instead, a passphrase is provided, the connection will fail because the key is not encrypted.
### Expected behavior
Perhaps emit a warning in the output that unencrypted keys are not the norm and require an additional override field in the profile; if that override is set, go ahead with the unencrypted key.
### System information
**Which database are you using dbt with?**
- [ ] postgres
- [ ] redshift
- [ ] bigquery
- [x] snowflake
- [ ] other (specify: ____________)
**The output of `dbt --version`:**
```
0.14.2
```
**The operating system you're using:**
OSX
**The output of `python --version`:**
Python 3.7.3
--- END ISSUE ---
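For context on the expected behavior, here is a minimal sketch of how an optional passphrase can be handled; the wrapper function name is illustrative and not taken from the dbt codebase, but `cryptography`'s `load_pem_private_key` accepts `password=None` for unencrypted PEM keys:
```python
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization


def load_private_key_bytes(key_path, passphrase=None):
    # Encode the passphrase only if one was given; None means the key is unencrypted.
    password = passphrase.encode() if passphrase else None
    with open(key_path, 'rb') as key_file:
        p_key = serialization.load_pem_private_key(
            key_file.read(), password=password, backend=default_backend())
    # Return DER-encoded PKCS8 bytes, the form the Snowflake connector takes as `private_key`.
    return p_key.private_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption())
```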
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/snowflake/dbt/adapters/snowflake/connections.py`
Content:
```
1 import base64
2 import datetime
3 import pytz
4 import re
5 from contextlib import contextmanager
6 from dataclasses import dataclass
7 from io import StringIO
8 from typing import Optional
9
10 from cryptography.hazmat.backends import default_backend
11 from cryptography.hazmat.primitives import serialization
12 import requests
13 import snowflake.connector
14 import snowflake.connector.errors
15
16 from dbt.exceptions import (
17 InternalException, RuntimeException, FailedToConnectException,
18 DatabaseException, warn_or_error
19 )
20 from dbt.adapters.base import Credentials
21 from dbt.adapters.sql import SQLConnectionManager
22 from dbt.logger import GLOBAL_LOGGER as logger
23
24
25 _TOKEN_REQUEST_URL = 'https://{}.snowflakecomputing.com/oauth/token-request'
26
27
28 @dataclass
29 class SnowflakeCredentials(Credentials):
30 account: str
31 user: str
32 warehouse: Optional[str]
33 role: Optional[str]
34 password: Optional[str]
35 authenticator: Optional[str]
36 private_key_path: Optional[str]
37 private_key_passphrase: Optional[str]
38 token: Optional[str]
39 oauth_client_id: Optional[str]
40 oauth_client_secret: Optional[str]
41 client_session_keep_alive: bool = False
42
43 def __post_init__(self):
44 if (
45 self.authenticator != 'oauth' and
46 (self.oauth_client_secret or self.oauth_client_id or self.token)
47 ):
48 # the user probably forgot to set 'authenticator' like I keep doing
49 warn_or_error(
50 'Authenticator is not set to oauth, but an oauth-only '
51 'parameter is set! Did you mean to set authenticator: oauth?'
52 )
53
54 @property
55 def type(self):
56 return 'snowflake'
57
58 def _connection_keys(self):
59 return (
60 'account', 'user', 'database', 'schema', 'warehouse', 'role',
61 'client_session_keep_alive'
62 )
63
64 def auth_args(self):
65 # Pull all of the optional authentication args for the connector,
66 # let connector handle the actual arg validation
67 result = {}
68 if self.password:
69 result['password'] = self.password
70 if self.authenticator:
71 result['authenticator'] = self.authenticator
72 if self.authenticator == 'oauth':
73 token = self.token
74 # if we have a client ID/client secret, the token is a refresh
75 # token, not an access token
76 if self.oauth_client_id and self.oauth_client_secret:
77 token = self._get_access_token()
78 elif self.oauth_client_id:
79 warn_or_error(
80 'Invalid profile: got an oauth_client_id, but not an '
81 'oauth_client_secret!'
82 )
83 elif self.oauth_client_secret:
84 warn_or_error(
85 'Invalid profile: got an oauth_client_secret, but not '
86 'an oauth_client_id!'
87 )
88
89 result['token'] = token
90 result['private_key'] = self._get_private_key()
91 return result
92
93 def _get_access_token(self) -> str:
94 if self.authenticator != 'oauth':
95 raise InternalException('Can only get access tokens for oauth')
96 missing = any(
97 x is None for x in
98 (self.oauth_client_id, self.oauth_client_secret, self.token)
99 )
100 if missing:
101 raise InternalException(
102 'need a client ID a client secret, and a refresh token to get '
103 'an access token'
104 )
105 # should the full url be a config item?
106 token_url = _TOKEN_REQUEST_URL.format(self.account)
107 # I think this is only used to redirect on success, which we ignore
108 # (it does not have to match the integration's settings in snowflake)
109 redirect_uri = 'http://localhost:9999'
110 data = {
111 'grant_type': 'refresh_token',
112 'refresh_token': self.token,
113 'redirect_uri': redirect_uri
114 }
115
116 auth = base64.b64encode(
117 f'{self.oauth_client_id}:{self.oauth_client_secret}'
118 .encode('ascii')
119 ).decode('ascii')
120 headers = {
121 'Authorization': f'Basic {auth}',
122 'Content-type': 'application/x-www-form-urlencoded;charset=utf-8'
123 }
124 result = requests.post(token_url, headers=headers, data=data)
125 result_json = result.json()
126 if 'access_token' not in result_json:
127 raise DatabaseException(f'Did not get a token: {result_json}')
128 return result_json['access_token']
129
130 def _get_private_key(self):
131 """Get Snowflake private key by path or None."""
132 if not self.private_key_path or self.private_key_passphrase is None:
133 return None
134
135 with open(self.private_key_path, 'rb') as key:
136 p_key = serialization.load_pem_private_key(
137 key.read(),
138 password=self.private_key_passphrase.encode(),
139 backend=default_backend())
140
141 return p_key.private_bytes(
142 encoding=serialization.Encoding.DER,
143 format=serialization.PrivateFormat.PKCS8,
144 encryption_algorithm=serialization.NoEncryption())
145
146
147 class SnowflakeConnectionManager(SQLConnectionManager):
148 TYPE = 'snowflake'
149
150 @contextmanager
151 def exception_handler(self, sql):
152 try:
153 yield
154 except snowflake.connector.errors.ProgrammingError as e:
155 msg = str(e)
156
157 logger.debug('Snowflake error: {}'.format(msg))
158
159 if 'Empty SQL statement' in msg:
160 logger.debug("got empty sql statement, moving on")
161 elif 'This session does not have a current database' in msg:
162 self.release()
163 raise FailedToConnectException(
164 ('{}\n\nThis error sometimes occurs when invalid '
165 'credentials are provided, or when your default role '
166 'does not have access to use the specified database. '
167 'Please double check your profile and try again.')
168 .format(msg))
169 else:
170 self.release()
171 raise DatabaseException(msg)
172 except Exception as e:
173 logger.debug("Error running SQL: {}", sql)
174 logger.debug("Rolling back transaction.")
175 self.release()
176 if isinstance(e, RuntimeException):
177 # during a sql query, an internal to dbt exception was raised.
178 # this sounds a lot like a signal handler and probably has
179 # useful information, so raise it without modification.
180 raise
181 raise RuntimeException(str(e)) from e
182
183 @classmethod
184 def open(cls, connection):
185 if connection.state == 'open':
186 logger.debug('Connection is already open, skipping open.')
187 return connection
188
189 try:
190 creds = connection.credentials
191
192 handle = snowflake.connector.connect(
193 account=creds.account,
194 user=creds.user,
195 database=creds.database,
196 schema=creds.schema,
197 warehouse=creds.warehouse,
198 role=creds.role,
199 autocommit=False,
200 client_session_keep_alive=creds.client_session_keep_alive,
201 application='dbt',
202 **creds.auth_args()
203 )
204
205 connection.handle = handle
206 connection.state = 'open'
207 except snowflake.connector.errors.Error as e:
208 logger.debug("Got an error when attempting to open a snowflake "
209 "connection: '{}'"
210 .format(e))
211
212 connection.handle = None
213 connection.state = 'fail'
214
215 raise FailedToConnectException(str(e))
216
217 def cancel(self, connection):
218 handle = connection.handle
219 sid = handle.session_id
220
221 connection_name = connection.name
222
223 sql = 'select system$abort_session({})'.format(sid)
224
225 logger.debug("Cancelling query '{}' ({})".format(connection_name, sid))
226
227 _, cursor = self.add_query(sql)
228 res = cursor.fetchone()
229
230 logger.debug("Cancel query '{}': {}".format(connection_name, res))
231
232 @classmethod
233 def get_status(cls, cursor):
234 state = cursor.sqlstate
235
236 if state is None:
237 state = 'SUCCESS'
238
239 return "{} {}".format(state, cursor.rowcount)
240
241 @classmethod
242 def _split_queries(cls, sql):
243 "Splits sql statements at semicolons into discrete queries"
244
245 sql_s = str(sql)
246 sql_buf = StringIO(sql_s)
247 split_query = snowflake.connector.util_text.split_statements(sql_buf)
248 return [part[0] for part in split_query]
249
250 @classmethod
251 def process_results(cls, column_names, rows):
252 # Override for Snowflake. The datetime objects returned by
253 # snowflake-connector-python are not pickleable, so we need
254 # to replace them with sane timezones
255 fixed = []
256 for row in rows:
257 fixed_row = []
258 for col in row:
259 if isinstance(col, datetime.datetime) and col.tzinfo:
260 offset = col.utcoffset()
261 offset_seconds = offset.total_seconds()
262 new_timezone = pytz.FixedOffset(offset_seconds // 60)
263 col = col.astimezone(tz=new_timezone)
264 fixed_row.append(col)
265
266 fixed.append(fixed_row)
267
268 return super().process_results(column_names, fixed)
269
270 def add_query(self, sql, auto_begin=True,
271 bindings=None, abridge_sql_log=False):
272
273 connection = None
274 cursor = None
275
276 if bindings:
277 # The snowflake connector is more strict than, eg., psycopg2 -
278 # which allows any iterable thing to be passed as a binding.
279 bindings = tuple(bindings)
280
281 queries = self._split_queries(sql)
282
283 for individual_query in queries:
284 # hack -- after the last ';', remove comments and don't run
285 # empty queries. this avoids using exceptions as flow control,
286 # and also allows us to return the status of the last cursor
287 without_comments = re.sub(
288 re.compile('^.*(--.*)$', re.MULTILINE),
289 '', individual_query).strip()
290
291 if without_comments == "":
292 continue
293
294 connection, cursor = super().add_query(
295 individual_query, auto_begin,
296 bindings=bindings,
297 abridge_sql_log=abridge_sql_log
298 )
299
300 if cursor is None:
301 conn = self.get_thread_connection()
302 if conn is None or conn.name is None:
303 conn_name = '<None>'
304 else:
305 conn_name = conn.name
306
307 raise RuntimeException(
308 "Tried to run an empty query on model '{}'. If you are "
309 "conditionally running\nsql, eg. in a model hook, make "
310 "sure your `else` clause contains valid sql!\n\n"
311 "Provided SQL:\n{}"
312 .format(conn_name, sql)
313 )
314
315 return connection, cursor
316
317 @classmethod
318 def _rollback_handle(cls, connection):
319 """On snowflake, rolling back the handle of an aborted session raises
320 an exception.
321 """
322 logger.debug('initiating rollback')
323 try:
324 connection.handle.rollback()
325 except snowflake.connector.errors.ProgrammingError as e:
326 msg = str(e)
327 if 'Session no longer exists' not in msg:
328 raise
329
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/snowflake/dbt/adapters/snowflake/connections.py b/plugins/snowflake/dbt/adapters/snowflake/connections.py
--- a/plugins/snowflake/dbt/adapters/snowflake/connections.py
+++ b/plugins/snowflake/dbt/adapters/snowflake/connections.py
@@ -129,13 +129,18 @@
def _get_private_key(self):
"""Get Snowflake private key by path or None."""
- if not self.private_key_path or self.private_key_passphrase is None:
+ if not self.private_key_path:
return None
+ if self.private_key_passphrase:
+ encoded_passphrase = self.private_key_passphrase.encode()
+ else:
+ encoded_passphrase = None
+
with open(self.private_key_path, 'rb') as key:
p_key = serialization.load_pem_private_key(
key.read(),
- password=self.private_key_passphrase.encode(),
+ password=encoded_passphrase,
backend=default_backend())
return p_key.private_bytes(
|
{"golden_diff": "diff --git a/plugins/snowflake/dbt/adapters/snowflake/connections.py b/plugins/snowflake/dbt/adapters/snowflake/connections.py\n--- a/plugins/snowflake/dbt/adapters/snowflake/connections.py\n+++ b/plugins/snowflake/dbt/adapters/snowflake/connections.py\n@@ -129,13 +129,18 @@\n \n def _get_private_key(self):\n \"\"\"Get Snowflake private key by path or None.\"\"\"\n- if not self.private_key_path or self.private_key_passphrase is None:\n+ if not self.private_key_path:\n return None\n \n+ if self.private_key_passphrase:\n+ encoded_passphrase = self.private_key_passphrase.encode()\n+ else:\n+ encoded_passphrase = None\n+\n with open(self.private_key_path, 'rb') as key:\n p_key = serialization.load_pem_private_key(\n key.read(),\n- password=self.private_key_passphrase.encode(),\n+ password=encoded_passphrase,\n backend=default_backend())\n \n return p_key.private_bytes(\n", "issue": "Allow no-passphrase private keys\n### Describe the bug\r\nThe Snowflake connector (at least) requires a passphrase in the profile file to open a private key connection.\r\n\r\n### Steps To Reproduce\r\nCreate a dbt target like the following:\r\n```\r\n qa:\r\n type: snowflake\r\n account: my_account\r\n user: my_user\r\n role: ANALYST\r\n\r\n # Keypair config\r\n private_key_path: \"path/to/my/no/passphrase/key\"\r\n private_key_passphrase: None\r\n\r\n database: DB\r\n warehouse: WH\r\n schema: PUBLIC\r\n threads: 1\r\n client_session_keep_alive: False\r\n```\r\nAttempt to run against said DBT target. DBT will fail because no passphrase is provided. If, instead, a passphrase is provided, the connection will fail because the key is not encrypted.\r\n\r\n### Expected behavior\r\nPerhaps a warning in output that unencrypted keys are not the norm, requiring additional setting of override field in profile. 
If that's set, go ahead with the unencrypted key.\r\n\r\n### System information\r\n**Which database are you using dbt with?**\r\n- [ ] postgres\r\n- [ ] redshift\r\n- [ ] bigquery\r\n- [ x] snowflake\r\n- [ ] other (specify: ____________)\r\n\r\n\r\n**The output of `dbt --version`:**\r\n```\r\n0.14.2\r\n```\r\n\r\n**The operating system you're using:**\r\nOSX\r\n**The output of `python --version`:**\r\nPython 3.7.3\r\n\n", "before_files": [{"content": "import base64\nimport datetime\nimport pytz\nimport re\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\nfrom io import StringIO\nfrom typing import Optional\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import serialization\nimport requests\nimport snowflake.connector\nimport snowflake.connector.errors\n\nfrom dbt.exceptions import (\n InternalException, RuntimeException, FailedToConnectException,\n DatabaseException, warn_or_error\n)\nfrom dbt.adapters.base import Credentials\nfrom dbt.adapters.sql import SQLConnectionManager\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\n\n_TOKEN_REQUEST_URL = 'https://{}.snowflakecomputing.com/oauth/token-request'\n\n\n@dataclass\nclass SnowflakeCredentials(Credentials):\n account: str\n user: str\n warehouse: Optional[str]\n role: Optional[str]\n password: Optional[str]\n authenticator: Optional[str]\n private_key_path: Optional[str]\n private_key_passphrase: Optional[str]\n token: Optional[str]\n oauth_client_id: Optional[str]\n oauth_client_secret: Optional[str]\n client_session_keep_alive: bool = False\n\n def __post_init__(self):\n if (\n self.authenticator != 'oauth' and\n (self.oauth_client_secret or self.oauth_client_id or self.token)\n ):\n # the user probably forgot to set 'authenticator' like I keep doing\n warn_or_error(\n 'Authenticator is not set to oauth, but an oauth-only '\n 'parameter is set! 
Did you mean to set authenticator: oauth?'\n )\n\n @property\n def type(self):\n return 'snowflake'\n\n def _connection_keys(self):\n return (\n 'account', 'user', 'database', 'schema', 'warehouse', 'role',\n 'client_session_keep_alive'\n )\n\n def auth_args(self):\n # Pull all of the optional authentication args for the connector,\n # let connector handle the actual arg validation\n result = {}\n if self.password:\n result['password'] = self.password\n if self.authenticator:\n result['authenticator'] = self.authenticator\n if self.authenticator == 'oauth':\n token = self.token\n # if we have a client ID/client secret, the token is a refresh\n # token, not an access token\n if self.oauth_client_id and self.oauth_client_secret:\n token = self._get_access_token()\n elif self.oauth_client_id:\n warn_or_error(\n 'Invalid profile: got an oauth_client_id, but not an '\n 'oauth_client_secret!'\n )\n elif self.oauth_client_secret:\n warn_or_error(\n 'Invalid profile: got an oauth_client_secret, but not '\n 'an oauth_client_id!'\n )\n\n result['token'] = token\n result['private_key'] = self._get_private_key()\n return result\n\n def _get_access_token(self) -> str:\n if self.authenticator != 'oauth':\n raise InternalException('Can only get access tokens for oauth')\n missing = any(\n x is None for x in\n (self.oauth_client_id, self.oauth_client_secret, self.token)\n )\n if missing:\n raise InternalException(\n 'need a client ID a client secret, and a refresh token to get '\n 'an access token'\n )\n # should the full url be a config item?\n token_url = _TOKEN_REQUEST_URL.format(self.account)\n # I think this is only used to redirect on success, which we ignore\n # (it does not have to match the integration's settings in snowflake)\n redirect_uri = 'http://localhost:9999'\n data = {\n 'grant_type': 'refresh_token',\n 'refresh_token': self.token,\n 'redirect_uri': redirect_uri\n }\n\n auth = base64.b64encode(\n f'{self.oauth_client_id}:{self.oauth_client_secret}'\n .encode('ascii')\n ).decode('ascii')\n headers = {\n 'Authorization': f'Basic {auth}',\n 'Content-type': 'application/x-www-form-urlencoded;charset=utf-8'\n }\n result = requests.post(token_url, headers=headers, data=data)\n result_json = result.json()\n if 'access_token' not in result_json:\n raise DatabaseException(f'Did not get a token: {result_json}')\n return result_json['access_token']\n\n def _get_private_key(self):\n \"\"\"Get Snowflake private key by path or None.\"\"\"\n if not self.private_key_path or self.private_key_passphrase is None:\n return None\n\n with open(self.private_key_path, 'rb') as key:\n p_key = serialization.load_pem_private_key(\n key.read(),\n password=self.private_key_passphrase.encode(),\n backend=default_backend())\n\n return p_key.private_bytes(\n encoding=serialization.Encoding.DER,\n format=serialization.PrivateFormat.PKCS8,\n encryption_algorithm=serialization.NoEncryption())\n\n\nclass SnowflakeConnectionManager(SQLConnectionManager):\n TYPE = 'snowflake'\n\n @contextmanager\n def exception_handler(self, sql):\n try:\n yield\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n\n logger.debug('Snowflake error: {}'.format(msg))\n\n if 'Empty SQL statement' in msg:\n logger.debug(\"got empty sql statement, moving on\")\n elif 'This session does not have a current database' in msg:\n self.release()\n raise FailedToConnectException(\n ('{}\\n\\nThis error sometimes occurs when invalid '\n 'credentials are provided, or when your default role '\n 'does not have access to use the specified 
database. '\n 'Please double check your profile and try again.')\n .format(msg))\n else:\n self.release()\n raise DatabaseException(msg)\n except Exception as e:\n logger.debug(\"Error running SQL: {}\", sql)\n logger.debug(\"Rolling back transaction.\")\n self.release()\n if isinstance(e, RuntimeException):\n # during a sql query, an internal to dbt exception was raised.\n # this sounds a lot like a signal handler and probably has\n # useful information, so raise it without modification.\n raise\n raise RuntimeException(str(e)) from e\n\n @classmethod\n def open(cls, connection):\n if connection.state == 'open':\n logger.debug('Connection is already open, skipping open.')\n return connection\n\n try:\n creds = connection.credentials\n\n handle = snowflake.connector.connect(\n account=creds.account,\n user=creds.user,\n database=creds.database,\n schema=creds.schema,\n warehouse=creds.warehouse,\n role=creds.role,\n autocommit=False,\n client_session_keep_alive=creds.client_session_keep_alive,\n application='dbt',\n **creds.auth_args()\n )\n\n connection.handle = handle\n connection.state = 'open'\n except snowflake.connector.errors.Error as e:\n logger.debug(\"Got an error when attempting to open a snowflake \"\n \"connection: '{}'\"\n .format(e))\n\n connection.handle = None\n connection.state = 'fail'\n\n raise FailedToConnectException(str(e))\n\n def cancel(self, connection):\n handle = connection.handle\n sid = handle.session_id\n\n connection_name = connection.name\n\n sql = 'select system$abort_session({})'.format(sid)\n\n logger.debug(\"Cancelling query '{}' ({})\".format(connection_name, sid))\n\n _, cursor = self.add_query(sql)\n res = cursor.fetchone()\n\n logger.debug(\"Cancel query '{}': {}\".format(connection_name, res))\n\n @classmethod\n def get_status(cls, cursor):\n state = cursor.sqlstate\n\n if state is None:\n state = 'SUCCESS'\n\n return \"{} {}\".format(state, cursor.rowcount)\n\n @classmethod\n def _split_queries(cls, sql):\n \"Splits sql statements at semicolons into discrete queries\"\n\n sql_s = str(sql)\n sql_buf = StringIO(sql_s)\n split_query = snowflake.connector.util_text.split_statements(sql_buf)\n return [part[0] for part in split_query]\n\n @classmethod\n def process_results(cls, column_names, rows):\n # Override for Snowflake. The datetime objects returned by\n # snowflake-connector-python are not pickleable, so we need\n # to replace them with sane timezones\n fixed = []\n for row in rows:\n fixed_row = []\n for col in row:\n if isinstance(col, datetime.datetime) and col.tzinfo:\n offset = col.utcoffset()\n offset_seconds = offset.total_seconds()\n new_timezone = pytz.FixedOffset(offset_seconds // 60)\n col = col.astimezone(tz=new_timezone)\n fixed_row.append(col)\n\n fixed.append(fixed_row)\n\n return super().process_results(column_names, fixed)\n\n def add_query(self, sql, auto_begin=True,\n bindings=None, abridge_sql_log=False):\n\n connection = None\n cursor = None\n\n if bindings:\n # The snowflake connector is more strict than, eg., psycopg2 -\n # which allows any iterable thing to be passed as a binding.\n bindings = tuple(bindings)\n\n queries = self._split_queries(sql)\n\n for individual_query in queries:\n # hack -- after the last ';', remove comments and don't run\n # empty queries. 
this avoids using exceptions as flow control,\n # and also allows us to return the status of the last cursor\n without_comments = re.sub(\n re.compile('^.*(--.*)$', re.MULTILINE),\n '', individual_query).strip()\n\n if without_comments == \"\":\n continue\n\n connection, cursor = super().add_query(\n individual_query, auto_begin,\n bindings=bindings,\n abridge_sql_log=abridge_sql_log\n )\n\n if cursor is None:\n conn = self.get_thread_connection()\n if conn is None or conn.name is None:\n conn_name = '<None>'\n else:\n conn_name = conn.name\n\n raise RuntimeException(\n \"Tried to run an empty query on model '{}'. If you are \"\n \"conditionally running\\nsql, eg. in a model hook, make \"\n \"sure your `else` clause contains valid sql!\\n\\n\"\n \"Provided SQL:\\n{}\"\n .format(conn_name, sql)\n )\n\n return connection, cursor\n\n @classmethod\n def _rollback_handle(cls, connection):\n \"\"\"On snowflake, rolling back the handle of an aborted session raises\n an exception.\n \"\"\"\n logger.debug('initiating rollback')\n try:\n connection.handle.rollback()\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n if 'Session no longer exists' not in msg:\n raise\n", "path": "plugins/snowflake/dbt/adapters/snowflake/connections.py"}], "after_files": [{"content": "import base64\nimport datetime\nimport pytz\nimport re\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\nfrom io import StringIO\nfrom typing import Optional\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import serialization\nimport requests\nimport snowflake.connector\nimport snowflake.connector.errors\n\nfrom dbt.exceptions import (\n InternalException, RuntimeException, FailedToConnectException,\n DatabaseException, warn_or_error\n)\nfrom dbt.adapters.base import Credentials\nfrom dbt.adapters.sql import SQLConnectionManager\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\n\n_TOKEN_REQUEST_URL = 'https://{}.snowflakecomputing.com/oauth/token-request'\n\n\n@dataclass\nclass SnowflakeCredentials(Credentials):\n account: str\n user: str\n warehouse: Optional[str]\n role: Optional[str]\n password: Optional[str]\n authenticator: Optional[str]\n private_key_path: Optional[str]\n private_key_passphrase: Optional[str]\n token: Optional[str]\n oauth_client_id: Optional[str]\n oauth_client_secret: Optional[str]\n client_session_keep_alive: bool = False\n\n def __post_init__(self):\n if (\n self.authenticator != 'oauth' and\n (self.oauth_client_secret or self.oauth_client_id or self.token)\n ):\n # the user probably forgot to set 'authenticator' like I keep doing\n warn_or_error(\n 'Authenticator is not set to oauth, but an oauth-only '\n 'parameter is set! 
Did you mean to set authenticator: oauth?'\n )\n\n @property\n def type(self):\n return 'snowflake'\n\n def _connection_keys(self):\n return (\n 'account', 'user', 'database', 'schema', 'warehouse', 'role',\n 'client_session_keep_alive'\n )\n\n def auth_args(self):\n # Pull all of the optional authentication args for the connector,\n # let connector handle the actual arg validation\n result = {}\n if self.password:\n result['password'] = self.password\n if self.authenticator:\n result['authenticator'] = self.authenticator\n if self.authenticator == 'oauth':\n token = self.token\n # if we have a client ID/client secret, the token is a refresh\n # token, not an access token\n if self.oauth_client_id and self.oauth_client_secret:\n token = self._get_access_token()\n elif self.oauth_client_id:\n warn_or_error(\n 'Invalid profile: got an oauth_client_id, but not an '\n 'oauth_client_secret!'\n )\n elif self.oauth_client_secret:\n warn_or_error(\n 'Invalid profile: got an oauth_client_secret, but not '\n 'an oauth_client_id!'\n )\n\n result['token'] = token\n result['private_key'] = self._get_private_key()\n return result\n\n def _get_access_token(self) -> str:\n if self.authenticator != 'oauth':\n raise InternalException('Can only get access tokens for oauth')\n missing = any(\n x is None for x in\n (self.oauth_client_id, self.oauth_client_secret, self.token)\n )\n if missing:\n raise InternalException(\n 'need a client ID a client secret, and a refresh token to get '\n 'an access token'\n )\n # should the full url be a config item?\n token_url = _TOKEN_REQUEST_URL.format(self.account)\n # I think this is only used to redirect on success, which we ignore\n # (it does not have to match the integration's settings in snowflake)\n redirect_uri = 'http://localhost:9999'\n data = {\n 'grant_type': 'refresh_token',\n 'refresh_token': self.token,\n 'redirect_uri': redirect_uri\n }\n\n auth = base64.b64encode(\n f'{self.oauth_client_id}:{self.oauth_client_secret}'\n .encode('ascii')\n ).decode('ascii')\n headers = {\n 'Authorization': f'Basic {auth}',\n 'Content-type': 'application/x-www-form-urlencoded;charset=utf-8'\n }\n result = requests.post(token_url, headers=headers, data=data)\n result_json = result.json()\n if 'access_token' not in result_json:\n raise DatabaseException(f'Did not get a token: {result_json}')\n return result_json['access_token']\n\n def _get_private_key(self):\n \"\"\"Get Snowflake private key by path or None.\"\"\"\n if not self.private_key_path:\n return None\n\n if self.private_key_passphrase:\n encoded_passphrase = self.private_key_passphrase.encode()\n else:\n encoded_passphrase = None\n\n with open(self.private_key_path, 'rb') as key:\n p_key = serialization.load_pem_private_key(\n key.read(),\n password=encoded_passphrase,\n backend=default_backend())\n\n return p_key.private_bytes(\n encoding=serialization.Encoding.DER,\n format=serialization.PrivateFormat.PKCS8,\n encryption_algorithm=serialization.NoEncryption())\n\n\nclass SnowflakeConnectionManager(SQLConnectionManager):\n TYPE = 'snowflake'\n\n @contextmanager\n def exception_handler(self, sql):\n try:\n yield\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n\n logger.debug('Snowflake error: {}'.format(msg))\n\n if 'Empty SQL statement' in msg:\n logger.debug(\"got empty sql statement, moving on\")\n elif 'This session does not have a current database' in msg:\n self.release()\n raise FailedToConnectException(\n ('{}\\n\\nThis error sometimes occurs when invalid '\n 'credentials are 
provided, or when your default role '\n 'does not have access to use the specified database. '\n 'Please double check your profile and try again.')\n .format(msg))\n else:\n self.release()\n raise DatabaseException(msg)\n except Exception as e:\n logger.debug(\"Error running SQL: {}\", sql)\n logger.debug(\"Rolling back transaction.\")\n self.release()\n if isinstance(e, RuntimeException):\n # during a sql query, an internal to dbt exception was raised.\n # this sounds a lot like a signal handler and probably has\n # useful information, so raise it without modification.\n raise\n raise RuntimeException(str(e)) from e\n\n @classmethod\n def open(cls, connection):\n if connection.state == 'open':\n logger.debug('Connection is already open, skipping open.')\n return connection\n\n try:\n creds = connection.credentials\n\n handle = snowflake.connector.connect(\n account=creds.account,\n user=creds.user,\n database=creds.database,\n schema=creds.schema,\n warehouse=creds.warehouse,\n role=creds.role,\n autocommit=False,\n client_session_keep_alive=creds.client_session_keep_alive,\n application='dbt',\n **creds.auth_args()\n )\n\n connection.handle = handle\n connection.state = 'open'\n except snowflake.connector.errors.Error as e:\n logger.debug(\"Got an error when attempting to open a snowflake \"\n \"connection: '{}'\"\n .format(e))\n\n connection.handle = None\n connection.state = 'fail'\n\n raise FailedToConnectException(str(e))\n\n def cancel(self, connection):\n handle = connection.handle\n sid = handle.session_id\n\n connection_name = connection.name\n\n sql = 'select system$abort_session({})'.format(sid)\n\n logger.debug(\"Cancelling query '{}' ({})\".format(connection_name, sid))\n\n _, cursor = self.add_query(sql)\n res = cursor.fetchone()\n\n logger.debug(\"Cancel query '{}': {}\".format(connection_name, res))\n\n @classmethod\n def get_status(cls, cursor):\n state = cursor.sqlstate\n\n if state is None:\n state = 'SUCCESS'\n\n return \"{} {}\".format(state, cursor.rowcount)\n\n @classmethod\n def _split_queries(cls, sql):\n \"Splits sql statements at semicolons into discrete queries\"\n\n sql_s = str(sql)\n sql_buf = StringIO(sql_s)\n split_query = snowflake.connector.util_text.split_statements(sql_buf)\n return [part[0] for part in split_query]\n\n @classmethod\n def process_results(cls, column_names, rows):\n # Override for Snowflake. The datetime objects returned by\n # snowflake-connector-python are not pickleable, so we need\n # to replace them with sane timezones\n fixed = []\n for row in rows:\n fixed_row = []\n for col in row:\n if isinstance(col, datetime.datetime) and col.tzinfo:\n offset = col.utcoffset()\n offset_seconds = offset.total_seconds()\n new_timezone = pytz.FixedOffset(offset_seconds // 60)\n col = col.astimezone(tz=new_timezone)\n fixed_row.append(col)\n\n fixed.append(fixed_row)\n\n return super().process_results(column_names, fixed)\n\n def add_query(self, sql, auto_begin=True,\n bindings=None, abridge_sql_log=False):\n\n connection = None\n cursor = None\n\n if bindings:\n # The snowflake connector is more strict than, eg., psycopg2 -\n # which allows any iterable thing to be passed as a binding.\n bindings = tuple(bindings)\n\n queries = self._split_queries(sql)\n\n for individual_query in queries:\n # hack -- after the last ';', remove comments and don't run\n # empty queries. 
this avoids using exceptions as flow control,\n # and also allows us to return the status of the last cursor\n without_comments = re.sub(\n re.compile('^.*(--.*)$', re.MULTILINE),\n '', individual_query).strip()\n\n if without_comments == \"\":\n continue\n\n connection, cursor = super().add_query(\n individual_query, auto_begin,\n bindings=bindings,\n abridge_sql_log=abridge_sql_log\n )\n\n if cursor is None:\n conn = self.get_thread_connection()\n if conn is None or conn.name is None:\n conn_name = '<None>'\n else:\n conn_name = conn.name\n\n raise RuntimeException(\n \"Tried to run an empty query on model '{}'. If you are \"\n \"conditionally running\\nsql, eg. in a model hook, make \"\n \"sure your `else` clause contains valid sql!\\n\\n\"\n \"Provided SQL:\\n{}\"\n .format(conn_name, sql)\n )\n\n return connection, cursor\n\n @classmethod\n def _rollback_handle(cls, connection):\n \"\"\"On snowflake, rolling back the handle of an aborted session raises\n an exception.\n \"\"\"\n logger.debug('initiating rollback')\n try:\n connection.handle.rollback()\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n if 'Session no longer exists' not in msg:\n raise\n", "path": "plugins/snowflake/dbt/adapters/snowflake/connections.py"}]}
| 3,793 | 230 |
gh_patches_debug_5639
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-613
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]The parameter `open_browser` of `new_cluster` doesn't work
**Describe the bug**
In `new_cluster`, we use
```python
open_browser = open_browser or options.deploy.open_browser
```
to decide whether we should open the browser once the web worker is available. When `open_browser` is `False`, it will still fall back to `options.deploy.open_browser` and open the browser, because `False or default` evaluates to the default.
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: 3.7
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and protobuf
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
The web browser shouldn't be opened when `open_browser` is `False`.
--- END ISSUE ---
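As a point of reference, a small self-contained sketch of the difference between `or`-based and `None`-based defaulting; the helper and the `DEFAULT_OPEN_BROWSER` constant are illustrative stand-ins for `options.deploy.open_browser`, not code from the Mars repository:
```python
DEFAULT_OPEN_BROWSER = True  # stand-in for options.deploy.open_browser


def resolve_open_browser(open_browser=None):
    # `open_browser or DEFAULT_OPEN_BROWSER` would treat an explicit False
    # the same as "not provided"; checking for None keeps False meaningful.
    if open_browser is None:
        return DEFAULT_OPEN_BROWSER
    return open_browser


assert resolve_open_browser() is True         # no value given: fall back to the default
assert resolve_open_browser(False) is False   # explicit False is respected
```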
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/deploy/local/core.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from __future__ import print_function
18
19 import atexit
20 import multiprocessing
21 import os
22 import signal
23 import sys
24 import time
25
26 from ...actors import create_actor_pool
27 from ...compat import six, TimeoutError # pylint: disable=W0622
28 from ...config import options
29 from ...lib import gipc
30 from ...resource import cpu_count
31 from ...scheduler.service import SchedulerService
32 from ...session import new_session
33 from ...utils import get_next_port
34 from ...worker.service import WorkerService
35 from .distributor import gen_distributor
36
37 _local_cluster_clients = dict()
38 atexit.register(lambda: [v.stop() for v in list(_local_cluster_clients.values())])
39
40
41 class LocalDistributedCluster(object):
42
43 # at least 2 process are required by scheduler and worker
44 MIN_SCHEDULER_N_PROCESS = 2
45 MIN_WORKER_N_PROCESS = 2
46
47 def __init__(self, endpoint, n_process=None, scheduler_n_process=None,
48 worker_n_process=None, ignore_avail_mem=True, shared_memory=None):
49 self._endpoint = endpoint
50
51 self._started = False
52 self._stopped = False
53
54 self._pool = None
55 self._scheduler_service = SchedulerService()
56 self._worker_service = WorkerService(ignore_avail_mem=ignore_avail_mem,
57 cache_mem_limit=shared_memory)
58
59 self._scheduler_n_process, self._worker_n_process = \
60 self._calc_scheduler_worker_n_process(n_process,
61 scheduler_n_process,
62 worker_n_process)
63
64 @property
65 def pool(self):
66 return self._pool
67
68 @classmethod
69 def _calc_scheduler_worker_n_process(cls, n_process, scheduler_n_process, worker_n_process,
70 calc_cpu_count=cpu_count):
71 n_scheduler, n_worker = scheduler_n_process, worker_n_process
72
73 if n_scheduler is None and n_worker is None:
74 n_scheduler = cls.MIN_SCHEDULER_N_PROCESS
75 n_process = n_process if n_process is not None else calc_cpu_count() + n_scheduler
76 n_worker = max(n_process - n_scheduler, cls.MIN_WORKER_N_PROCESS)
77 elif n_scheduler is None or n_worker is None:
78 # one of scheduler and worker n_process provided
79 if n_scheduler is None:
80 n_process = n_process if n_process is not None else calc_cpu_count()
81 n_scheduler = max(n_process - n_worker, cls.MIN_SCHEDULER_N_PROCESS)
82 else:
83 assert n_worker is None
84 n_process = n_process if n_process is not None else calc_cpu_count() + n_scheduler
85 n_worker = max(n_process - n_scheduler, cls.MIN_WORKER_N_PROCESS)
86
87 return n_scheduler, n_worker
88
89 def _make_sure_scheduler_ready(self, timeout=120):
90 check_start_time = time.time()
91 while True:
92 workers_meta = self._scheduler_service._resource_ref.get_workers_meta()
93 if not workers_meta:
94 # wait for worker to report status
95 self._pool.sleep(.5)
96 if time.time() - check_start_time > timeout: # pragma: no cover
97 raise TimeoutError('Check worker ready timed out.')
98 else:
99 break
100
101 def start_service(self):
102 if self._started:
103 return
104 self._started = True
105
106 # start plasma
107 self._worker_service.start_plasma()
108
109 # start actor pool
110 n_process = self._scheduler_n_process + self._worker_n_process
111 distributor = gen_distributor(self._scheduler_n_process, self._worker_n_process)
112 self._pool = create_actor_pool(self._endpoint, n_process, distributor=distributor)
113
114 # start scheduler first
115 self._scheduler_service.start(self._endpoint, None, self._pool)
116
117 # start worker next
118 self._worker_service.start(self._endpoint, self._pool, distributed=False,
119 schedulers=[self._endpoint],
120 process_start_index=self._scheduler_n_process)
121
122 # make sure scheduler is ready
123 self._make_sure_scheduler_ready()
124
125 def stop_service(self):
126 if self._stopped:
127 return
128
129 self._stopped = True
130 try:
131 self._scheduler_service.stop(self._pool)
132 self._worker_service.stop()
133 finally:
134 self._pool.stop()
135
136 def serve_forever(self):
137 try:
138 self._pool.join()
139 finally:
140 self.stop_service()
141
142 def __enter__(self):
143 self.start_service()
144 return self
145
146 def __exit__(self, *_):
147 self.stop_service()
148
149
150 def gen_endpoint(address):
151 port = None
152 tries = 5 # retry for 5 times
153
154 for i in range(tries):
155 try:
156 port = get_next_port()
157 break
158 except SystemError:
159 if i < tries - 1:
160 continue
161 raise
162
163 return '{0}:{1}'.format(address, port)
164
165
166 def _start_cluster(endpoint, event, n_process=None, shared_memory=None, **kw):
167 cluster = LocalDistributedCluster(endpoint, n_process=n_process,
168 shared_memory=shared_memory, **kw)
169 cluster.start_service()
170 event.set()
171 try:
172 cluster.serve_forever()
173 finally:
174 cluster.stop_service()
175
176
177 def _start_cluster_process(endpoint, n_process, shared_memory, **kw):
178 event = multiprocessing.Event()
179
180 kw = kw.copy()
181 kw['n_process'] = n_process
182 kw['shared_memory'] = shared_memory or '20%'
183 process = gipc.start_process(_start_cluster, args=(endpoint, event), kwargs=kw)
184
185 while True:
186 event.wait(5)
187 if not event.is_set():
188 # service not started yet
189 continue
190 if not process.is_alive():
191 raise SystemError('New local cluster failed')
192 else:
193 break
194
195 return process
196
197
198 def _start_web(scheduler_address, ui_port, event):
199 import gevent.monkey
200 gevent.monkey.patch_all(thread=False)
201
202 from ...web import MarsWeb
203
204 web = MarsWeb(ui_port, scheduler_address)
205 try:
206 web.start(event=event, block=True)
207 finally:
208 web.stop()
209
210
211 def _start_web_process(scheduler_endpoint, web_endpoint):
212 web_event = multiprocessing.Event()
213 ui_port = int(web_endpoint.rsplit(':', 1)[1])
214 web_process = gipc.start_process(
215 _start_web, args=(scheduler_endpoint, ui_port, web_event), daemon=True)
216
217 while True:
218 web_event.wait(5)
219 if not web_event.is_set():
220 # web not started yet
221 continue
222 if not web_process.is_alive():
223 raise SystemError('New web interface failed')
224 else:
225 break
226
227 return web_process
228
229
230 class LocalDistributedClusterClient(object):
231 def __init__(self, endpoint, web_endpoint, cluster_process, web_process):
232 self._cluster_process = cluster_process
233 self._web_process = web_process
234 self._endpoint = endpoint
235 self._web_endpoint = web_endpoint
236 self._session = new_session(endpoint).as_default()
237
238 @property
239 def endpoint(self):
240 return self._endpoint
241
242 @property
243 def web_endpoint(self):
244 return self._web_endpoint
245
246 @property
247 def session(self):
248 return self._session
249
250 def __enter__(self):
251 return self
252
253 def __exit__(self, *_):
254 self.stop()
255
256 @staticmethod
257 def _ensure_process_finish(proc):
258 if proc is None or not proc.is_alive():
259 return
260 proc.join(3)
261
262 # in case the process does not finish
263 if proc.is_alive(): # pragma: no cover
264 try:
265 import psutil
266 for subproc in psutil.Process(proc.pid).children(recursive=True):
267 try:
268 subproc.kill()
269 except psutil.NoSuchProcess: # pragma: no cover
270 pass
271 except ImportError:
272 pass
273 finally:
274 proc.terminate()
275
276 def stop(self):
277 try:
278 del _local_cluster_clients[id(self)]
279 except KeyError: # pragma: no cover
280 pass
281
282 if self._cluster_process.is_alive():
283 os.kill(self._cluster_process.pid, signal.SIGINT)
284 if self._web_process is not None and self._web_process.is_alive():
285 os.kill(self._web_process.pid, signal.SIGINT)
286
287 self._ensure_process_finish(self._cluster_process)
288 self._ensure_process_finish(self._web_process)
289
290
291 def new_cluster(address='0.0.0.0', web=False, n_process=None, shared_memory=None,
292 open_browser=None, **kw):
293 open_browser = open_browser or options.deploy.open_browser
294 endpoint = gen_endpoint(address)
295 web_endpoint = None
296 if web is True:
297 web_endpoint = gen_endpoint('0.0.0.0')
298 elif isinstance(web, six.string_types):
299 if ':' in web:
300 web_endpoint = web
301 else:
302 web_endpoint = gen_endpoint(web)
303
304 process = _start_cluster_process(endpoint, n_process, shared_memory, **kw)
305
306 web_process = None
307 if web_endpoint:
308 web_process = _start_web_process(endpoint, web_endpoint)
309 print('Web endpoint started at http://%s' % web_endpoint, file=sys.stderr)
310 if open_browser:
311 import webbrowser
312 webbrowser.open_new_tab('http://%s' % web_endpoint)
313
314 client = LocalDistributedClusterClient(endpoint, web_endpoint, process, web_process)
315 _local_cluster_clients[id(client)] = client
316 return client
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mars/deploy/local/core.py b/mars/deploy/local/core.py
--- a/mars/deploy/local/core.py
+++ b/mars/deploy/local/core.py
@@ -290,7 +290,8 @@
def new_cluster(address='0.0.0.0', web=False, n_process=None, shared_memory=None,
open_browser=None, **kw):
- open_browser = open_browser or options.deploy.open_browser
+ if open_browser is None:
+ open_browser = options.deploy.open_browser
endpoint = gen_endpoint(address)
web_endpoint = None
if web is True:
|
{"golden_diff": "diff --git a/mars/deploy/local/core.py b/mars/deploy/local/core.py\n--- a/mars/deploy/local/core.py\n+++ b/mars/deploy/local/core.py\n@@ -290,7 +290,8 @@\n \n def new_cluster(address='0.0.0.0', web=False, n_process=None, shared_memory=None,\n open_browser=None, **kw):\n- open_browser = open_browser or options.deploy.open_browser\n+ if open_browser is None:\n+ open_browser = options.deploy.open_browser\n endpoint = gen_endpoint(address)\n web_endpoint = None\n if web is True:\n", "issue": "[BUG]The parameter `open_browser` of `new_cluster` doesn't work\n**Describe the bug**\r\n\r\nIn `new_cluster`, we use \r\n\r\n```python\r\nopen_browser = open_browser or options.deploy.open_browser\r\n```\r\n\r\nto decide if we should open the browser after web worker available. When `open_browser` is `False`, it will still fall back to `options.deploy.open_browser` and open the browser.\r\n\r\n**To Reproduce**\r\n\r\nTo help us reproducing this bug, please provide information below:\r\n1. Your Python version: 3.7\r\n2. The version of Mars you use: master\r\n3. Versions of crucial packages, such as numpy, scipy and protobuf\r\n4. Full stack of the error.\r\n5. Minimized code to reproduce the error.\r\n\r\n**Expected behavior**\r\n\r\nThe web browser shouldn't be opened when `open_browser` is `False`.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import print_function\n\nimport atexit\nimport multiprocessing\nimport os\nimport signal\nimport sys\nimport time\n\nfrom ...actors import create_actor_pool\nfrom ...compat import six, TimeoutError # pylint: disable=W0622\nfrom ...config import options\nfrom ...lib import gipc\nfrom ...resource import cpu_count\nfrom ...scheduler.service import SchedulerService\nfrom ...session import new_session\nfrom ...utils import get_next_port\nfrom ...worker.service import WorkerService\nfrom .distributor import gen_distributor\n\n_local_cluster_clients = dict()\natexit.register(lambda: [v.stop() for v in list(_local_cluster_clients.values())])\n\n\nclass LocalDistributedCluster(object):\n\n # at least 2 process are required by scheduler and worker\n MIN_SCHEDULER_N_PROCESS = 2\n MIN_WORKER_N_PROCESS = 2\n\n def __init__(self, endpoint, n_process=None, scheduler_n_process=None,\n worker_n_process=None, ignore_avail_mem=True, shared_memory=None):\n self._endpoint = endpoint\n\n self._started = False\n self._stopped = False\n\n self._pool = None\n self._scheduler_service = SchedulerService()\n self._worker_service = WorkerService(ignore_avail_mem=ignore_avail_mem,\n cache_mem_limit=shared_memory)\n\n self._scheduler_n_process, self._worker_n_process = \\\n self._calc_scheduler_worker_n_process(n_process,\n scheduler_n_process,\n worker_n_process)\n\n @property\n def pool(self):\n return self._pool\n\n @classmethod\n def _calc_scheduler_worker_n_process(cls, n_process, scheduler_n_process, worker_n_process,\n 
calc_cpu_count=cpu_count):\n n_scheduler, n_worker = scheduler_n_process, worker_n_process\n\n if n_scheduler is None and n_worker is None:\n n_scheduler = cls.MIN_SCHEDULER_N_PROCESS\n n_process = n_process if n_process is not None else calc_cpu_count() + n_scheduler\n n_worker = max(n_process - n_scheduler, cls.MIN_WORKER_N_PROCESS)\n elif n_scheduler is None or n_worker is None:\n # one of scheduler and worker n_process provided\n if n_scheduler is None:\n n_process = n_process if n_process is not None else calc_cpu_count()\n n_scheduler = max(n_process - n_worker, cls.MIN_SCHEDULER_N_PROCESS)\n else:\n assert n_worker is None\n n_process = n_process if n_process is not None else calc_cpu_count() + n_scheduler\n n_worker = max(n_process - n_scheduler, cls.MIN_WORKER_N_PROCESS)\n\n return n_scheduler, n_worker\n\n def _make_sure_scheduler_ready(self, timeout=120):\n check_start_time = time.time()\n while True:\n workers_meta = self._scheduler_service._resource_ref.get_workers_meta()\n if not workers_meta:\n # wait for worker to report status\n self._pool.sleep(.5)\n if time.time() - check_start_time > timeout: # pragma: no cover\n raise TimeoutError('Check worker ready timed out.')\n else:\n break\n\n def start_service(self):\n if self._started:\n return\n self._started = True\n\n # start plasma\n self._worker_service.start_plasma()\n\n # start actor pool\n n_process = self._scheduler_n_process + self._worker_n_process\n distributor = gen_distributor(self._scheduler_n_process, self._worker_n_process)\n self._pool = create_actor_pool(self._endpoint, n_process, distributor=distributor)\n\n # start scheduler first\n self._scheduler_service.start(self._endpoint, None, self._pool)\n\n # start worker next\n self._worker_service.start(self._endpoint, self._pool, distributed=False,\n schedulers=[self._endpoint],\n process_start_index=self._scheduler_n_process)\n\n # make sure scheduler is ready\n self._make_sure_scheduler_ready()\n\n def stop_service(self):\n if self._stopped:\n return\n\n self._stopped = True\n try:\n self._scheduler_service.stop(self._pool)\n self._worker_service.stop()\n finally:\n self._pool.stop()\n\n def serve_forever(self):\n try:\n self._pool.join()\n finally:\n self.stop_service()\n\n def __enter__(self):\n self.start_service()\n return self\n\n def __exit__(self, *_):\n self.stop_service()\n\n\ndef gen_endpoint(address):\n port = None\n tries = 5 # retry for 5 times\n\n for i in range(tries):\n try:\n port = get_next_port()\n break\n except SystemError:\n if i < tries - 1:\n continue\n raise\n\n return '{0}:{1}'.format(address, port)\n\n\ndef _start_cluster(endpoint, event, n_process=None, shared_memory=None, **kw):\n cluster = LocalDistributedCluster(endpoint, n_process=n_process,\n shared_memory=shared_memory, **kw)\n cluster.start_service()\n event.set()\n try:\n cluster.serve_forever()\n finally:\n cluster.stop_service()\n\n\ndef _start_cluster_process(endpoint, n_process, shared_memory, **kw):\n event = multiprocessing.Event()\n\n kw = kw.copy()\n kw['n_process'] = n_process\n kw['shared_memory'] = shared_memory or '20%'\n process = gipc.start_process(_start_cluster, args=(endpoint, event), kwargs=kw)\n\n while True:\n event.wait(5)\n if not event.is_set():\n # service not started yet\n continue\n if not process.is_alive():\n raise SystemError('New local cluster failed')\n else:\n break\n\n return process\n\n\ndef _start_web(scheduler_address, ui_port, event):\n import gevent.monkey\n gevent.monkey.patch_all(thread=False)\n\n from ...web import MarsWeb\n\n web = 
MarsWeb(ui_port, scheduler_address)\n try:\n web.start(event=event, block=True)\n finally:\n web.stop()\n\n\ndef _start_web_process(scheduler_endpoint, web_endpoint):\n web_event = multiprocessing.Event()\n ui_port = int(web_endpoint.rsplit(':', 1)[1])\n web_process = gipc.start_process(\n _start_web, args=(scheduler_endpoint, ui_port, web_event), daemon=True)\n\n while True:\n web_event.wait(5)\n if not web_event.is_set():\n # web not started yet\n continue\n if not web_process.is_alive():\n raise SystemError('New web interface failed')\n else:\n break\n\n return web_process\n\n\nclass LocalDistributedClusterClient(object):\n def __init__(self, endpoint, web_endpoint, cluster_process, web_process):\n self._cluster_process = cluster_process\n self._web_process = web_process\n self._endpoint = endpoint\n self._web_endpoint = web_endpoint\n self._session = new_session(endpoint).as_default()\n\n @property\n def endpoint(self):\n return self._endpoint\n\n @property\n def web_endpoint(self):\n return self._web_endpoint\n\n @property\n def session(self):\n return self._session\n\n def __enter__(self):\n return self\n\n def __exit__(self, *_):\n self.stop()\n\n @staticmethod\n def _ensure_process_finish(proc):\n if proc is None or not proc.is_alive():\n return\n proc.join(3)\n\n # in case the process does not finish\n if proc.is_alive(): # pragma: no cover\n try:\n import psutil\n for subproc in psutil.Process(proc.pid).children(recursive=True):\n try:\n subproc.kill()\n except psutil.NoSuchProcess: # pragma: no cover\n pass\n except ImportError:\n pass\n finally:\n proc.terminate()\n\n def stop(self):\n try:\n del _local_cluster_clients[id(self)]\n except KeyError: # pragma: no cover\n pass\n\n if self._cluster_process.is_alive():\n os.kill(self._cluster_process.pid, signal.SIGINT)\n if self._web_process is not None and self._web_process.is_alive():\n os.kill(self._web_process.pid, signal.SIGINT)\n\n self._ensure_process_finish(self._cluster_process)\n self._ensure_process_finish(self._web_process)\n\n\ndef new_cluster(address='0.0.0.0', web=False, n_process=None, shared_memory=None,\n open_browser=None, **kw):\n open_browser = open_browser or options.deploy.open_browser\n endpoint = gen_endpoint(address)\n web_endpoint = None\n if web is True:\n web_endpoint = gen_endpoint('0.0.0.0')\n elif isinstance(web, six.string_types):\n if ':' in web:\n web_endpoint = web\n else:\n web_endpoint = gen_endpoint(web)\n\n process = _start_cluster_process(endpoint, n_process, shared_memory, **kw)\n\n web_process = None\n if web_endpoint:\n web_process = _start_web_process(endpoint, web_endpoint)\n print('Web endpoint started at http://%s' % web_endpoint, file=sys.stderr)\n if open_browser:\n import webbrowser\n webbrowser.open_new_tab('http://%s' % web_endpoint)\n\n client = LocalDistributedClusterClient(endpoint, web_endpoint, process, web_process)\n _local_cluster_clients[id(client)] = client\n return client\n", "path": "mars/deploy/local/core.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express 
or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import print_function\n\nimport atexit\nimport multiprocessing\nimport os\nimport signal\nimport sys\nimport time\n\nfrom ...actors import create_actor_pool\nfrom ...compat import six, TimeoutError # pylint: disable=W0622\nfrom ...config import options\nfrom ...lib import gipc\nfrom ...resource import cpu_count\nfrom ...scheduler.service import SchedulerService\nfrom ...session import new_session\nfrom ...utils import get_next_port\nfrom ...worker.service import WorkerService\nfrom .distributor import gen_distributor\n\n_local_cluster_clients = dict()\natexit.register(lambda: [v.stop() for v in list(_local_cluster_clients.values())])\n\n\nclass LocalDistributedCluster(object):\n\n # at least 2 process are required by scheduler and worker\n MIN_SCHEDULER_N_PROCESS = 2\n MIN_WORKER_N_PROCESS = 2\n\n def __init__(self, endpoint, n_process=None, scheduler_n_process=None,\n worker_n_process=None, ignore_avail_mem=True, shared_memory=None):\n self._endpoint = endpoint\n\n self._started = False\n self._stopped = False\n\n self._pool = None\n self._scheduler_service = SchedulerService()\n self._worker_service = WorkerService(ignore_avail_mem=ignore_avail_mem,\n cache_mem_limit=shared_memory)\n\n self._scheduler_n_process, self._worker_n_process = \\\n self._calc_scheduler_worker_n_process(n_process,\n scheduler_n_process,\n worker_n_process)\n\n @property\n def pool(self):\n return self._pool\n\n @classmethod\n def _calc_scheduler_worker_n_process(cls, n_process, scheduler_n_process, worker_n_process,\n calc_cpu_count=cpu_count):\n n_scheduler, n_worker = scheduler_n_process, worker_n_process\n\n if n_scheduler is None and n_worker is None:\n n_scheduler = cls.MIN_SCHEDULER_N_PROCESS\n n_process = n_process if n_process is not None else calc_cpu_count() + n_scheduler\n n_worker = max(n_process - n_scheduler, cls.MIN_WORKER_N_PROCESS)\n elif n_scheduler is None or n_worker is None:\n # one of scheduler and worker n_process provided\n if n_scheduler is None:\n n_process = n_process if n_process is not None else calc_cpu_count()\n n_scheduler = max(n_process - n_worker, cls.MIN_SCHEDULER_N_PROCESS)\n else:\n assert n_worker is None\n n_process = n_process if n_process is not None else calc_cpu_count() + n_scheduler\n n_worker = max(n_process - n_scheduler, cls.MIN_WORKER_N_PROCESS)\n\n return n_scheduler, n_worker\n\n def _make_sure_scheduler_ready(self, timeout=120):\n check_start_time = time.time()\n while True:\n workers_meta = self._scheduler_service._resource_ref.get_workers_meta()\n if not workers_meta:\n # wait for worker to report status\n self._pool.sleep(.5)\n if time.time() - check_start_time > timeout: # pragma: no cover\n raise TimeoutError('Check worker ready timed out.')\n else:\n break\n\n def start_service(self):\n if self._started:\n return\n self._started = True\n\n # start plasma\n self._worker_service.start_plasma()\n\n # start actor pool\n n_process = self._scheduler_n_process + self._worker_n_process\n distributor = gen_distributor(self._scheduler_n_process, self._worker_n_process)\n self._pool = create_actor_pool(self._endpoint, n_process, distributor=distributor)\n\n # start scheduler first\n self._scheduler_service.start(self._endpoint, None, self._pool)\n\n # start worker next\n self._worker_service.start(self._endpoint, self._pool, distributed=False,\n schedulers=[self._endpoint],\n process_start_index=self._scheduler_n_process)\n\n # make 
sure scheduler is ready\n self._make_sure_scheduler_ready()\n\n def stop_service(self):\n if self._stopped:\n return\n\n self._stopped = True\n try:\n self._scheduler_service.stop(self._pool)\n self._worker_service.stop()\n finally:\n self._pool.stop()\n\n def serve_forever(self):\n try:\n self._pool.join()\n finally:\n self.stop_service()\n\n def __enter__(self):\n self.start_service()\n return self\n\n def __exit__(self, *_):\n self.stop_service()\n\n\ndef gen_endpoint(address):\n port = None\n tries = 5 # retry for 5 times\n\n for i in range(tries):\n try:\n port = get_next_port()\n break\n except SystemError:\n if i < tries - 1:\n continue\n raise\n\n return '{0}:{1}'.format(address, port)\n\n\ndef _start_cluster(endpoint, event, n_process=None, shared_memory=None, **kw):\n cluster = LocalDistributedCluster(endpoint, n_process=n_process,\n shared_memory=shared_memory, **kw)\n cluster.start_service()\n event.set()\n try:\n cluster.serve_forever()\n finally:\n cluster.stop_service()\n\n\ndef _start_cluster_process(endpoint, n_process, shared_memory, **kw):\n event = multiprocessing.Event()\n\n kw = kw.copy()\n kw['n_process'] = n_process\n kw['shared_memory'] = shared_memory or '20%'\n process = gipc.start_process(_start_cluster, args=(endpoint, event), kwargs=kw)\n\n while True:\n event.wait(5)\n if not event.is_set():\n # service not started yet\n continue\n if not process.is_alive():\n raise SystemError('New local cluster failed')\n else:\n break\n\n return process\n\n\ndef _start_web(scheduler_address, ui_port, event):\n import gevent.monkey\n gevent.monkey.patch_all(thread=False)\n\n from ...web import MarsWeb\n\n web = MarsWeb(ui_port, scheduler_address)\n try:\n web.start(event=event, block=True)\n finally:\n web.stop()\n\n\ndef _start_web_process(scheduler_endpoint, web_endpoint):\n web_event = multiprocessing.Event()\n ui_port = int(web_endpoint.rsplit(':', 1)[1])\n web_process = gipc.start_process(\n _start_web, args=(scheduler_endpoint, ui_port, web_event), daemon=True)\n\n while True:\n web_event.wait(5)\n if not web_event.is_set():\n # web not started yet\n continue\n if not web_process.is_alive():\n raise SystemError('New web interface failed')\n else:\n break\n\n return web_process\n\n\nclass LocalDistributedClusterClient(object):\n def __init__(self, endpoint, web_endpoint, cluster_process, web_process):\n self._cluster_process = cluster_process\n self._web_process = web_process\n self._endpoint = endpoint\n self._web_endpoint = web_endpoint\n self._session = new_session(endpoint).as_default()\n\n @property\n def endpoint(self):\n return self._endpoint\n\n @property\n def web_endpoint(self):\n return self._web_endpoint\n\n @property\n def session(self):\n return self._session\n\n def __enter__(self):\n return self\n\n def __exit__(self, *_):\n self.stop()\n\n @staticmethod\n def _ensure_process_finish(proc):\n if proc is None or not proc.is_alive():\n return\n proc.join(3)\n\n # in case the process does not finish\n if proc.is_alive(): # pragma: no cover\n try:\n import psutil\n for subproc in psutil.Process(proc.pid).children(recursive=True):\n try:\n subproc.kill()\n except psutil.NoSuchProcess: # pragma: no cover\n pass\n except ImportError:\n pass\n finally:\n proc.terminate()\n\n def stop(self):\n try:\n del _local_cluster_clients[id(self)]\n except KeyError: # pragma: no cover\n pass\n\n if self._cluster_process.is_alive():\n os.kill(self._cluster_process.pid, signal.SIGINT)\n if self._web_process is not None and self._web_process.is_alive():\n 
os.kill(self._web_process.pid, signal.SIGINT)\n\n self._ensure_process_finish(self._cluster_process)\n self._ensure_process_finish(self._web_process)\n\n\ndef new_cluster(address='0.0.0.0', web=False, n_process=None, shared_memory=None,\n open_browser=None, **kw):\n if open_browser is None:\n open_browser = options.deploy.open_browser\n endpoint = gen_endpoint(address)\n web_endpoint = None\n if web is True:\n web_endpoint = gen_endpoint('0.0.0.0')\n elif isinstance(web, six.string_types):\n if ':' in web:\n web_endpoint = web\n else:\n web_endpoint = gen_endpoint(web)\n\n process = _start_cluster_process(endpoint, n_process, shared_memory, **kw)\n\n web_process = None\n if web_endpoint:\n web_process = _start_web_process(endpoint, web_endpoint)\n print('Web endpoint started at http://%s' % web_endpoint, file=sys.stderr)\n if open_browser:\n import webbrowser\n webbrowser.open_new_tab('http://%s' % web_endpoint)\n\n client = LocalDistributedClusterClient(endpoint, web_endpoint, process, web_process)\n _local_cluster_clients[id(client)] = client\n return client\n", "path": "mars/deploy/local/core.py"}]}
| 3,496 | 137 |
gh_patches_debug_37997 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1037 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inaccurate transaction names for FastAPI sub-applications
**Description**
For requests to endpoints defined in FastAPI sub-applications, the mount path is chosen as the transaction name. I expected the full route of the endpoint.
**To Reproduce**
1. Run the following simple FastAPI app:
```python
import uvicorn
from elasticapm.contrib.starlette import ElasticAPM, make_apm_client
from fastapi import FastAPI
app = FastAPI()
sub = FastAPI()
app.mount("/sub", sub)
apm = make_apm_client(
{
"SERVICE_NAME": "sub-app-test",
}
)
app.add_middleware(ElasticAPM, client=apm)
@sub.get("/hi")
async def hi():
return "hi"
@sub.get("/bye")
async def bye():
return "bye"
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8888)
```
2.
- **Observed behavior**
The transactions of `/sub/hi` and `/sub/bye` are both named `/sub` and grouped.

- **Expected behavior**
The transactions of `/sub/hi` and `/sub/bye` are named according to the full route.
**Environment**
- OS:
- Client: Windows
- Server: Ubuntu
- Python version: 3.7.3
- Framework and version: `fastapi==0.61.2`
- APM Server version: docker image `elasticsearch/elasticsearch:7.10.2`
- Agent version: `elastic-apm==6.0.0`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/contrib/starlette/__init__.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 from __future__ import absolute_import
33
34 import starlette
35 from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
36 from starlette.requests import Request
37 from starlette.responses import Response
38 from starlette.routing import Match
39 from starlette.types import ASGIApp
40
41 import elasticapm
42 import elasticapm.instrumentation.control
43 from elasticapm.base import Client
44 from elasticapm.conf import constants
45 from elasticapm.contrib.asyncio.traces import set_context
46 from elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response
47 from elasticapm.utils.disttracing import TraceParent
48 from elasticapm.utils.logging import get_logger
49
50 logger = get_logger("elasticapm.errors.client")
51
52
53 def make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:
54 """Builds ElasticAPM client.
55
56 Args:
57 config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.
58 client_cls (Client): Must be Client or its child.
59 **defaults: Additional parameters for Client. See `elasticapm.base.Client`
60
61 Returns:
62 Client
63 """
64 if "framework_name" not in defaults:
65 defaults["framework_name"] = "starlette"
66 defaults["framework_version"] = starlette.__version__
67
68 return client_cls(config, **defaults)
69
70
71 class ElasticAPM(BaseHTTPMiddleware):
72 """
73 Starlette / FastAPI middleware for Elastic APM capturing.
74
75 >>> elasticapm = make_apm_client({
76 >>> 'SERVICE_NAME': 'myapp',
77 >>> 'DEBUG': True,
78 >>> 'SERVER_URL': 'http://localhost:8200',
79 >>> 'CAPTURE_HEADERS': True,
80 >>> 'CAPTURE_BODY': 'all'
81 >>> })
82
83 >>> app.add_middleware(ElasticAPM, client=elasticapm)
84
85 Pass an arbitrary APP_NAME and SECRET_TOKEN::
86
87 >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')
88
89 Pass an explicit client::
90
91 >>> elasticapm = ElasticAPM(app, client=client)
92
93 Automatically configure logging::
94
95 >>> elasticapm = ElasticAPM(app, logging=True)
96
97 Capture an exception::
98
99 >>> try:
100 >>> 1 / 0
101 >>> except ZeroDivisionError:
102 >>> elasticapm.capture_exception()
103
104 Capture a message::
105
106 >>> elasticapm.capture_message('hello, world!')
107 """
108
109 def __init__(self, app: ASGIApp, client: Client):
110 """
111
112 Args:
113 app (ASGIApp): Starlette app
114 client (Client): ElasticAPM Client
115 """
116 self.client = client
117
118 if self.client.config.instrument and self.client.config.enabled:
119 elasticapm.instrumentation.control.instrument()
120
121 super().__init__(app)
122
123 async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
124 """Processes the whole request APM capturing.
125
126 Args:
127 request (Request)
128 call_next (RequestResponseEndpoint): Next request process in Starlette.
129
130 Returns:
131 Response
132 """
133 await self._request_started(request)
134
135 try:
136 response = await call_next(request)
137 elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)
138 except Exception:
139 await self.capture_exception(
140 context={"request": await get_data_from_request(request, self.client.config, constants.ERROR)}
141 )
142 elasticapm.set_transaction_result("HTTP 5xx", override=False)
143 elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)
144 elasticapm.set_context({"status_code": 500}, "response")
145
146 raise
147 else:
148 await self._request_finished(response)
149 finally:
150 self.client.end_transaction()
151
152 return response
153
154 async def capture_exception(self, *args, **kwargs):
155 """Captures your exception.
156
157 Args:
158 *args:
159 **kwargs:
160 """
161 self.client.capture_exception(*args, **kwargs)
162
163 async def capture_message(self, *args, **kwargs):
164 """Captures your message.
165
166 Args:
167 *args: Whatever
168 **kwargs: Whatever
169 """
170 self.client.capture_message(*args, **kwargs)
171
172 async def _request_started(self, request: Request):
173 """Captures the begin of the request processing to APM.
174
175 Args:
176 request (Request)
177 """
178 # When we consume the body, we replace the streaming mechanism with
179 # a mocked version -- this workaround came from
180 # https://github.com/encode/starlette/issues/495#issuecomment-513138055
181 # and we call the workaround here to make sure that regardless of
182 # `capture_body` settings, we will have access to the body if we need it.
183 if self.client.config.capture_body != "off":
184 await get_body(request)
185
186 if not self.client.should_ignore_url(request.url.path):
187 trace_parent = TraceParent.from_headers(dict(request.headers))
188 self.client.begin_transaction("request", trace_parent=trace_parent)
189
190 await set_context(
191 lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), "request"
192 )
193 transaction_name = self.get_route_name(request) or request.url.path
194 elasticapm.set_transaction_name("{} {}".format(request.method, transaction_name), override=False)
195
196 async def _request_finished(self, response: Response):
197 """Captures the end of the request processing to APM.
198
199 Args:
200 response (Response)
201 """
202 await set_context(
203 lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), "response"
204 )
205
206 result = "HTTP {}xx".format(response.status_code // 100)
207 elasticapm.set_transaction_result(result, override=False)
208
209 def get_route_name(self, request: Request) -> str:
210 route_name = None
211 app = request.app
212 scope = request.scope
213 routes = app.routes
214
215 for route in routes:
216 match, _ = route.matches(scope)
217 if match == Match.FULL:
218 route_name = route.path
219 break
220 elif match == Match.PARTIAL and route_name is None:
221 route_name = route.path
222 # Starlette magically redirects requests if the path matches a route name with a trailing slash
223 # appended or removed. To not spam the transaction names list, we do the same here and put these
224 # redirects all in the same "redirect trailing slashes" transaction name
225 if not route_name and app.router.redirect_slashes and scope["path"] != "/":
226 redirect_scope = dict(scope)
227 if scope["path"].endswith("/"):
228 redirect_scope["path"] = scope["path"][:-1]
229 trim = True
230 else:
231 redirect_scope["path"] = scope["path"] + "/"
232 trim = False
233 for route in routes:
234 match, _ = route.matches(redirect_scope)
235 if match != Match.NONE:
236 route_name = route.path + "/" if trim else route.path[:-1]
237 break
238 return route_name
239
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py
--- a/elasticapm/contrib/starlette/__init__.py
+++ b/elasticapm/contrib/starlette/__init__.py
@@ -35,7 +35,7 @@
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response
-from starlette.routing import Match
+from starlette.routing import Match, Mount
from starlette.types import ASGIApp
import elasticapm
@@ -207,18 +207,11 @@
elasticapm.set_transaction_result(result, override=False)
def get_route_name(self, request: Request) -> str:
- route_name = None
app = request.app
scope = request.scope
routes = app.routes
+ route_name = self._get_route_name(scope, routes)
- for route in routes:
- match, _ = route.matches(scope)
- if match == Match.FULL:
- route_name = route.path
- break
- elif match == Match.PARTIAL and route_name is None:
- route_name = route.path
# Starlette magically redirects requests if the path matches a route name with a trailing slash
# appended or removed. To not spam the transaction names list, we do the same here and put these
# redirects all in the same "redirect trailing slashes" transaction name
@@ -230,9 +223,23 @@
else:
redirect_scope["path"] = scope["path"] + "/"
trim = False
- for route in routes:
- match, _ = route.matches(redirect_scope)
- if match != Match.NONE:
- route_name = route.path + "/" if trim else route.path[:-1]
- break
+
+ route_name = self._get_route_name(redirect_scope, routes)
+ route_name = route_name + "/" if trim else route_name[:-1]
return route_name
+
+ def _get_route_name(self, scope, routes, route_name=None):
+ for route in routes:
+ match, child_scope = route.matches(scope)
+ if match == Match.FULL:
+ route_name = route.path
+ child_scope = {**scope, **child_scope}
+ if isinstance(route, Mount):
+ child_route_name = self._get_route_name(child_scope, route.routes, route_name)
+ if child_route_name is None:
+ route_name = None
+ else:
+ route_name += child_route_name
+ return route_name
+ elif match == Match.PARTIAL and route_name is None:
+ route_name = route.path
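
Editorial note: the sketch below is a minimal, standalone illustration of the recursive lookup the patch above introduces, assuming Starlette's routing API as used in the diff (`Route.matches` returning a `(Match, child_scope)` pair, and `Mount.routes` exposing the sub-app's routes). The app, endpoint and scope objects are hypothetical and exist only to show the expected result; they are not part of the original report or fix.

```python
# Illustrative only: rebuild the full route template for a mounted sub-app.
from starlette.applications import Starlette
from starlette.routing import Match, Mount, Route


async def hi(request):
    ...  # endpoint body is irrelevant here; only route matching is exercised


sub = Starlette(routes=[Route("/hi", hi)])
app = Starlette(routes=[Mount("/sub", app=sub)])


def get_route_name(scope, routes, route_name=None):
    # Same recursion as the patch: when a Mount matches, descend into its
    # child routes with the child scope and concatenate the path templates.
    for route in routes:
        match, child_scope = route.matches(scope)
        if match == Match.FULL:
            route_name = route.path
            child_scope = {**scope, **child_scope}
            if isinstance(route, Mount):
                child_name = get_route_name(child_scope, route.routes, route_name)
                route_name = None if child_name is None else route_name + child_name
            return route_name
        elif match == Match.PARTIAL and route_name is None:
            route_name = route.path
    return route_name


scope = {"type": "http", "method": "GET", "path": "/sub/hi", "root_path": ""}
print(get_route_name(scope, app.routes))  # "/sub/hi" rather than just "/sub"
```

With the pre-patch loop, the outer `Mount("/sub", ...)` is the only route consulted, so the transaction name stops at `/sub`; the recursion is what appends `/hi`.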
|
{"golden_diff": "diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py\n--- a/elasticapm/contrib/starlette/__init__.py\n+++ b/elasticapm/contrib/starlette/__init__.py\n@@ -35,7 +35,7 @@\n from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\n from starlette.requests import Request\n from starlette.responses import Response\n-from starlette.routing import Match\n+from starlette.routing import Match, Mount\n from starlette.types import ASGIApp\n \n import elasticapm\n@@ -207,18 +207,11 @@\n elasticapm.set_transaction_result(result, override=False)\n \n def get_route_name(self, request: Request) -> str:\n- route_name = None\n app = request.app\n scope = request.scope\n routes = app.routes\n+ route_name = self._get_route_name(scope, routes)\n \n- for route in routes:\n- match, _ = route.matches(scope)\n- if match == Match.FULL:\n- route_name = route.path\n- break\n- elif match == Match.PARTIAL and route_name is None:\n- route_name = route.path\n # Starlette magically redirects requests if the path matches a route name with a trailing slash\n # appended or removed. To not spam the transaction names list, we do the same here and put these\n # redirects all in the same \"redirect trailing slashes\" transaction name\n@@ -230,9 +223,23 @@\n else:\n redirect_scope[\"path\"] = scope[\"path\"] + \"/\"\n trim = False\n- for route in routes:\n- match, _ = route.matches(redirect_scope)\n- if match != Match.NONE:\n- route_name = route.path + \"/\" if trim else route.path[:-1]\n- break\n+\n+ route_name = self._get_route_name(redirect_scope, routes)\n+ route_name = route_name + \"/\" if trim else route_name[:-1]\n return route_name\n+\n+ def _get_route_name(self, scope, routes, route_name=None):\n+ for route in routes:\n+ match, child_scope = route.matches(scope)\n+ if match == Match.FULL:\n+ route_name = route.path\n+ child_scope = {**scope, **child_scope}\n+ if isinstance(route, Mount):\n+ child_route_name = self._get_route_name(child_scope, route.routes, route_name)\n+ if child_route_name is None:\n+ route_name = None\n+ else:\n+ route_name += child_route_name\n+ return route_name\n+ elif match == Match.PARTIAL and route_name is None:\n+ route_name = route.path\n", "issue": "Inaccurate transaction names for FastAPI sub-applications\n**Description**\r\nFor requests to endpoints defined in FastAPI sub-applications, the mount path is chosen as the transaction name. I expected the full route of the endpoint.\r\n\r\n**To Reproduce**\r\n\r\n1. Run the following simple FastAPI app:\r\n\r\n```python\r\nimport uvicorn\r\nfrom elasticapm.contrib.starlette import ElasticAPM, make_apm_client\r\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\nsub = FastAPI()\r\napp.mount(\"/sub\", sub)\r\n\r\napm = make_apm_client(\r\n {\r\n \"SERVICE_NAME\": \"sub-app-test\",\r\n }\r\n)\r\n\r\napp.add_middleware(ElasticAPM, client=apm)\r\n\r\n\r\[email protected](\"/hi\")\r\nasync def hi():\r\n return \"hi\"\r\n\r\n\r\[email protected](\"/bye\")\r\nasync def bye():\r\n return \"bye\"\r\n\r\n\r\nif __name__ == \"__main__\":\r\n uvicorn.run(app, host=\"0.0.0.0\", port=8888)\r\n```\r\n\r\n2. 
\r\n- **Observed behavior**\r\nThe transactions of `/sub/hi` and `/sub/bye` are both named `/sub` and grouped.\r\n\r\n\r\n- **Expected behavior**\r\nThe transactions of `/sub/hi` and `/sub/bye` are named according to the full route.\r\n\r\n\r\n**Environment**\r\n- OS: \r\n - Client: Windows\r\n - Server: Ubuntu\r\n- Python version: 3.7.3\r\n- Framework and version: `fastapi==0.61.2`\r\n- APM Server version: docker image `elasticsearch/elasticsearch:7.10.2` \r\n- Agent version: `elastic-apm==6.0.0`\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nimport starlette\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.routing import Match\nfrom starlette.types import ASGIApp\n\nimport elasticapm\nimport elasticapm.instrumentation.control\nfrom elasticapm.base import Client\nfrom elasticapm.conf import constants\nfrom elasticapm.contrib.asyncio.traces import set_context\nfrom elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response\nfrom elasticapm.utils.disttracing import TraceParent\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.errors.client\")\n\n\ndef make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:\n \"\"\"Builds ElasticAPM client.\n\n Args:\n config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.\n client_cls (Client): Must be Client or its child.\n **defaults: Additional parameters for Client. 
See `elasticapm.base.Client`\n\n Returns:\n Client\n \"\"\"\n if \"framework_name\" not in defaults:\n defaults[\"framework_name\"] = \"starlette\"\n defaults[\"framework_version\"] = starlette.__version__\n\n return client_cls(config, **defaults)\n\n\nclass ElasticAPM(BaseHTTPMiddleware):\n \"\"\"\n Starlette / FastAPI middleware for Elastic APM capturing.\n\n >>> elasticapm = make_apm_client({\n >>> 'SERVICE_NAME': 'myapp',\n >>> 'DEBUG': True,\n >>> 'SERVER_URL': 'http://localhost:8200',\n >>> 'CAPTURE_HEADERS': True,\n >>> 'CAPTURE_BODY': 'all'\n >>> })\n\n >>> app.add_middleware(ElasticAPM, client=elasticapm)\n\n Pass an arbitrary APP_NAME and SECRET_TOKEN::\n\n >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')\n\n Pass an explicit client::\n\n >>> elasticapm = ElasticAPM(app, client=client)\n\n Automatically configure logging::\n\n >>> elasticapm = ElasticAPM(app, logging=True)\n\n Capture an exception::\n\n >>> try:\n >>> 1 / 0\n >>> except ZeroDivisionError:\n >>> elasticapm.capture_exception()\n\n Capture a message::\n\n >>> elasticapm.capture_message('hello, world!')\n \"\"\"\n\n def __init__(self, app: ASGIApp, client: Client):\n \"\"\"\n\n Args:\n app (ASGIApp): Starlette app\n client (Client): ElasticAPM Client\n \"\"\"\n self.client = client\n\n if self.client.config.instrument and self.client.config.enabled:\n elasticapm.instrumentation.control.instrument()\n\n super().__init__(app)\n\n async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:\n \"\"\"Processes the whole request APM capturing.\n\n Args:\n request (Request)\n call_next (RequestResponseEndpoint): Next request process in Starlette.\n\n Returns:\n Response\n \"\"\"\n await self._request_started(request)\n\n try:\n response = await call_next(request)\n elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)\n except Exception:\n await self.capture_exception(\n context={\"request\": await get_data_from_request(request, self.client.config, constants.ERROR)}\n )\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n raise\n else:\n await self._request_finished(response)\n finally:\n self.client.end_transaction()\n\n return response\n\n async def capture_exception(self, *args, **kwargs):\n \"\"\"Captures your exception.\n\n Args:\n *args:\n **kwargs:\n \"\"\"\n self.client.capture_exception(*args, **kwargs)\n\n async def capture_message(self, *args, **kwargs):\n \"\"\"Captures your message.\n\n Args:\n *args: Whatever\n **kwargs: Whatever\n \"\"\"\n self.client.capture_message(*args, **kwargs)\n\n async def _request_started(self, request: Request):\n \"\"\"Captures the begin of the request processing to APM.\n\n Args:\n request (Request)\n \"\"\"\n # When we consume the body, we replace the streaming mechanism with\n # a mocked version -- this workaround came from\n # https://github.com/encode/starlette/issues/495#issuecomment-513138055\n # and we call the workaround here to make sure that regardless of\n # `capture_body` settings, we will have access to the body if we need it.\n if self.client.config.capture_body != \"off\":\n await get_body(request)\n\n if not self.client.should_ignore_url(request.url.path):\n trace_parent = TraceParent.from_headers(dict(request.headers))\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n\n await set_context(\n lambda: 
get_data_from_request(request, self.client.config, constants.TRANSACTION), \"request\"\n )\n transaction_name = self.get_route_name(request) or request.url.path\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, transaction_name), override=False)\n\n async def _request_finished(self, response: Response):\n \"\"\"Captures the end of the request processing to APM.\n\n Args:\n response (Response)\n \"\"\"\n await set_context(\n lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), \"response\"\n )\n\n result = \"HTTP {}xx\".format(response.status_code // 100)\n elasticapm.set_transaction_result(result, override=False)\n\n def get_route_name(self, request: Request) -> str:\n route_name = None\n app = request.app\n scope = request.scope\n routes = app.routes\n\n for route in routes:\n match, _ = route.matches(scope)\n if match == Match.FULL:\n route_name = route.path\n break\n elif match == Match.PARTIAL and route_name is None:\n route_name = route.path\n # Starlette magically redirects requests if the path matches a route name with a trailing slash\n # appended or removed. To not spam the transaction names list, we do the same here and put these\n # redirects all in the same \"redirect trailing slashes\" transaction name\n if not route_name and app.router.redirect_slashes and scope[\"path\"] != \"/\":\n redirect_scope = dict(scope)\n if scope[\"path\"].endswith(\"/\"):\n redirect_scope[\"path\"] = scope[\"path\"][:-1]\n trim = True\n else:\n redirect_scope[\"path\"] = scope[\"path\"] + \"/\"\n trim = False\n for route in routes:\n match, _ = route.matches(redirect_scope)\n if match != Match.NONE:\n route_name = route.path + \"/\" if trim else route.path[:-1]\n break\n return route_name\n", "path": "elasticapm/contrib/starlette/__init__.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nimport starlette\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.routing import Match, Mount\nfrom starlette.types import ASGIApp\n\nimport elasticapm\nimport elasticapm.instrumentation.control\nfrom elasticapm.base import Client\nfrom elasticapm.conf import constants\nfrom elasticapm.contrib.asyncio.traces import set_context\nfrom elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response\nfrom elasticapm.utils.disttracing import TraceParent\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.errors.client\")\n\n\ndef make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:\n \"\"\"Builds ElasticAPM client.\n\n Args:\n config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.\n client_cls (Client): Must be Client or its child.\n **defaults: Additional parameters for Client. See `elasticapm.base.Client`\n\n Returns:\n Client\n \"\"\"\n if \"framework_name\" not in defaults:\n defaults[\"framework_name\"] = \"starlette\"\n defaults[\"framework_version\"] = starlette.__version__\n\n return client_cls(config, **defaults)\n\n\nclass ElasticAPM(BaseHTTPMiddleware):\n \"\"\"\n Starlette / FastAPI middleware for Elastic APM capturing.\n\n >>> elasticapm = make_apm_client({\n >>> 'SERVICE_NAME': 'myapp',\n >>> 'DEBUG': True,\n >>> 'SERVER_URL': 'http://localhost:8200',\n >>> 'CAPTURE_HEADERS': True,\n >>> 'CAPTURE_BODY': 'all'\n >>> })\n\n >>> app.add_middleware(ElasticAPM, client=elasticapm)\n\n Pass an arbitrary APP_NAME and SECRET_TOKEN::\n\n >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')\n\n Pass an explicit client::\n\n >>> elasticapm = ElasticAPM(app, client=client)\n\n Automatically configure logging::\n\n >>> elasticapm = ElasticAPM(app, logging=True)\n\n Capture an exception::\n\n >>> try:\n >>> 1 / 0\n >>> except ZeroDivisionError:\n >>> elasticapm.capture_exception()\n\n Capture a message::\n\n >>> elasticapm.capture_message('hello, world!')\n \"\"\"\n\n def __init__(self, app: ASGIApp, client: Client):\n \"\"\"\n\n Args:\n app (ASGIApp): Starlette app\n client (Client): ElasticAPM Client\n \"\"\"\n self.client = client\n\n if self.client.config.instrument and self.client.config.enabled:\n elasticapm.instrumentation.control.instrument()\n\n super().__init__(app)\n\n async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:\n \"\"\"Processes the whole request APM capturing.\n\n Args:\n request (Request)\n call_next (RequestResponseEndpoint): Next request process in Starlette.\n\n Returns:\n Response\n \"\"\"\n await self._request_started(request)\n\n try:\n response = await call_next(request)\n elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)\n except Exception:\n await self.capture_exception(\n context={\"request\": await 
get_data_from_request(request, self.client.config, constants.ERROR)}\n )\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n raise\n else:\n await self._request_finished(response)\n finally:\n self.client.end_transaction()\n\n return response\n\n async def capture_exception(self, *args, **kwargs):\n \"\"\"Captures your exception.\n\n Args:\n *args:\n **kwargs:\n \"\"\"\n self.client.capture_exception(*args, **kwargs)\n\n async def capture_message(self, *args, **kwargs):\n \"\"\"Captures your message.\n\n Args:\n *args: Whatever\n **kwargs: Whatever\n \"\"\"\n self.client.capture_message(*args, **kwargs)\n\n async def _request_started(self, request: Request):\n \"\"\"Captures the begin of the request processing to APM.\n\n Args:\n request (Request)\n \"\"\"\n # When we consume the body, we replace the streaming mechanism with\n # a mocked version -- this workaround came from\n # https://github.com/encode/starlette/issues/495#issuecomment-513138055\n # and we call the workaround here to make sure that regardless of\n # `capture_body` settings, we will have access to the body if we need it.\n if self.client.config.capture_body != \"off\":\n await get_body(request)\n\n if not self.client.should_ignore_url(request.url.path):\n trace_parent = TraceParent.from_headers(dict(request.headers))\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n\n await set_context(\n lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), \"request\"\n )\n transaction_name = self.get_route_name(request) or request.url.path\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, transaction_name), override=False)\n\n async def _request_finished(self, response: Response):\n \"\"\"Captures the end of the request processing to APM.\n\n Args:\n response (Response)\n \"\"\"\n await set_context(\n lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), \"response\"\n )\n\n result = \"HTTP {}xx\".format(response.status_code // 100)\n elasticapm.set_transaction_result(result, override=False)\n\n def get_route_name(self, request: Request) -> str:\n app = request.app\n scope = request.scope\n routes = app.routes\n route_name = self._get_route_name(scope, routes)\n\n # Starlette magically redirects requests if the path matches a route name with a trailing slash\n # appended or removed. 
To not spam the transaction names list, we do the same here and put these\n # redirects all in the same \"redirect trailing slashes\" transaction name\n if not route_name and app.router.redirect_slashes and scope[\"path\"] != \"/\":\n redirect_scope = dict(scope)\n if scope[\"path\"].endswith(\"/\"):\n redirect_scope[\"path\"] = scope[\"path\"][:-1]\n trim = True\n else:\n redirect_scope[\"path\"] = scope[\"path\"] + \"/\"\n trim = False\n\n route_name = self._get_route_name(redirect_scope, routes)\n route_name = route_name + \"/\" if trim else route_name[:-1]\n return route_name\n\n def _get_route_name(self, scope, routes, route_name=None):\n for route in routes:\n match, child_scope = route.matches(scope)\n if match == Match.FULL:\n route_name = route.path\n child_scope = {**scope, **child_scope}\n if isinstance(route, Mount):\n child_route_name = self._get_route_name(child_scope, route.routes, route_name)\n if child_route_name is None:\n route_name = None\n else:\n route_name += child_route_name\n return route_name\n elif match == Match.PARTIAL and route_name is None:\n route_name = route.path\n", "path": "elasticapm/contrib/starlette/__init__.py"}]}
| 3,191 | 607 |
gh_patches_debug_61634 | rasdani/github-patches | git_diff | pytorch__ignite-484 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Metrics] add indexing synthetic sugar
Idea is to improve the current implementation of `Metric` and to be able to do the following:
```
# A custom class ConfusionMatrix
cm = ConfusionMatrix(num_classes=3, output_transform=output_gt_predicted_classes_bg)
# Instead of below lines
# from ignite.metrics import MetricsLambda
# IoU = MetricsLambda(lambda res: res[1:], (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag())))
# We could have:
IoU = (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag()))[1:]
mIoU = IoU.mean()
```
cc @zasdfgbnm
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/metric.py`
Content:
```
1 from abc import ABCMeta, abstractmethod
2 from ignite._six import with_metaclass
3 from ignite.engine import Events
4 import torch
5
6
7 class Metric(with_metaclass(ABCMeta, object)):
8 """
9 Base class for all Metrics.
10
11 Args:
12 output_transform (callable, optional): a callable that is used to transform the
13 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
14 form expected by the metric. This can be useful if, for example, you have a multi-output model and
15 you want to compute the metric with respect to one of the outputs.
16
17 """
18
19 def __init__(self, output_transform=lambda x: x):
20 self._output_transform = output_transform
21 self.reset()
22
23 @abstractmethod
24 def reset(self):
25 """
26 Resets the metric to it's initial state.
27
28 This is called at the start of each epoch.
29 """
30 pass
31
32 @abstractmethod
33 def update(self, output):
34 """
35 Updates the metric's state using the passed batch output.
36
37 This is called once for each batch.
38
39 Args:
40 output: the is the output from the engine's process function.
41 """
42 pass
43
44 @abstractmethod
45 def compute(self):
46 """
47 Computes the metric based on it's accumulated state.
48
49 This is called at the end of each epoch.
50
51 Returns:
52 Any: the actual quantity of interest.
53
54 Raises:
55 NotComputableError: raised when the metric cannot be computed.
56 """
57 pass
58
59 def started(self, engine):
60 self.reset()
61
62 @torch.no_grad()
63 def iteration_completed(self, engine):
64 output = self._output_transform(engine.state.output)
65 self.update(output)
66
67 def completed(self, engine, name):
68 result = self.compute()
69 if torch.is_tensor(result) and len(result.shape) == 0:
70 result = result.item()
71 engine.state.metrics[name] = result
72
73 def attach(self, engine, name):
74 engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)
75 if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):
76 engine.add_event_handler(Events.EPOCH_STARTED, self.started)
77 if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
78 engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
79
80 def __add__(self, other):
81 from ignite.metrics import MetricsLambda
82 return MetricsLambda(lambda x, y: x + y, self, other)
83
84 def __radd__(self, other):
85 from ignite.metrics import MetricsLambda
86 return MetricsLambda(lambda x, y: x + y, other, self)
87
88 def __sub__(self, other):
89 from ignite.metrics import MetricsLambda
90 return MetricsLambda(lambda x, y: x - y, self, other)
91
92 def __rsub__(self, other):
93 from ignite.metrics import MetricsLambda
94 return MetricsLambda(lambda x, y: x - y, other, self)
95
96 def __mul__(self, other):
97 from ignite.metrics import MetricsLambda
98 return MetricsLambda(lambda x, y: x * y, self, other)
99
100 def __rmul__(self, other):
101 from ignite.metrics import MetricsLambda
102 return MetricsLambda(lambda x, y: x * y, other, self)
103
104 def __pow__(self, other):
105 from ignite.metrics import MetricsLambda
106 return MetricsLambda(lambda x, y: x ** y, self, other)
107
108 def __rpow__(self, other):
109 from ignite.metrics import MetricsLambda
110 return MetricsLambda(lambda x, y: x ** y, other, self)
111
112 def __mod__(self, other):
113 from ignite.metrics import MetricsLambda
114 return MetricsLambda(lambda x, y: x % y, self, other)
115
116 def __div__(self, other):
117 from ignite.metrics import MetricsLambda
118 return MetricsLambda(lambda x, y: x.__div__(y), self, other)
119
120 def __rdiv__(self, other):
121 from ignite.metrics import MetricsLambda
122 return MetricsLambda(lambda x, y: x.__div__(y), other, self)
123
124 def __truediv__(self, other):
125 from ignite.metrics import MetricsLambda
126 return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)
127
128 def __rtruediv__(self, other):
129 from ignite.metrics import MetricsLambda
130 return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)
131
132 def __floordiv__(self, other):
133 from ignite.metrics import MetricsLambda
134 return MetricsLambda(lambda x, y: x // y, self, other)
135
136 def __getattr__(self, attr):
137 from ignite.metrics import MetricsLambda
138
139 def fn(x, *args, **kwargs):
140 return getattr(x, attr)(*args, **kwargs)
141
142 def wrapper(*args, **kwargs):
143 return MetricsLambda(fn, self, *args, **kwargs)
144 return wrapper
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py
--- a/ignite/metrics/metric.py
+++ b/ignite/metrics/metric.py
@@ -142,3 +142,7 @@
def wrapper(*args, **kwargs):
return MetricsLambda(fn, self, *args, **kwargs)
return wrapper
+
+ def __getitem__(self, index):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x: x[index], self)
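
Editorial note: a short, hypothetical usage sketch of the `__getitem__` hook added above. The `Dummy` metric below is invented purely to show that indexing now composes a `MetricsLambda`; it is not part of the issue or the fix, and it assumes the patch is applied.

```python
# Illustrative only: slicing a metric after the __getitem__ patch.
import torch

from ignite.metrics.metric import Metric


class Dummy(Metric):
    """Minimal metric returning a fixed tensor, just to demonstrate indexing."""

    def reset(self):
        pass

    def update(self, output):
        pass

    def compute(self):
        return torch.tensor([0.1, 0.2, 0.3])


d = Dummy()
sliced = d[1:]             # now a MetricsLambda(lambda x: x[1:], d)
print(sliced.compute())    # tensor([0.2000, 0.3000])
tail_mean = d[1:].mean()   # chains with the existing __getattr__ sugar
```

This is what lets the `IoU = (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag()))[1:]` expression from the issue work without an explicit `MetricsLambda` wrapper.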
|
{"golden_diff": "diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py\n--- a/ignite/metrics/metric.py\n+++ b/ignite/metrics/metric.py\n@@ -142,3 +142,7 @@\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n return wrapper\n+\n+ def __getitem__(self, index):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x: x[index], self)\n", "issue": "[Metrics] add indexing synthetic sugar\nIdea is to improve the current implementation of `Metric` and to be able to do the following:\r\n```\r\n# A custom class ConfusionMatrix\r\ncm = ConfusionMatrix(num_classes=3, output_transform=output_gt_predicted_classes_bg)\r\n\r\n# Instead of below lines\r\n# from ignite.metrics import MetricsLambda\r\n# IoU = MetricsLambda(lambda res: res[1:], (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag())))\r\n# We could have: \r\nIoU = (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag()))[1:]\r\nmIoU = IoU.mean()\r\n```\r\n\r\ncc @zasdfgbnm \n", "before_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom ignite._six import with_metaclass\nfrom ignite.engine import Events\nimport torch\n\n\nclass Metric(with_metaclass(ABCMeta, object)):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x):\n self._output_transform = output_transform\n self.reset()\n\n @abstractmethod\n def reset(self):\n \"\"\"\n Resets the metric to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output):\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function.\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self):\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest.\n\n Raises:\n NotComputableError: raised when the metric cannot be computed.\n \"\"\"\n pass\n\n def started(self, engine):\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine):\n output = self._output_transform(engine.state.output)\n self.update(output)\n\n def completed(self, engine, name):\n result = self.compute()\n if torch.is_tensor(result) and len(result.shape) == 0:\n result = result.item()\n engine.state.metrics[name] = result\n\n def attach(self, engine, name):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __radd__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, other, self)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda 
x, y: x - y, self, other)\n\n def __rsub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, other, self)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __rmul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, other, self)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __rpow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, other, self)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __rdiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __rtruediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x // y, self, other)\n\n def __getattr__(self, attr):\n from ignite.metrics import MetricsLambda\n\n def fn(x, *args, **kwargs):\n return getattr(x, attr)(*args, **kwargs)\n\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n return wrapper\n", "path": "ignite/metrics/metric.py"}], "after_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom ignite._six import with_metaclass\nfrom ignite.engine import Events\nimport torch\n\n\nclass Metric(with_metaclass(ABCMeta, object)):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. 
This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x):\n self._output_transform = output_transform\n self.reset()\n\n @abstractmethod\n def reset(self):\n \"\"\"\n Resets the metric to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output):\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function.\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self):\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest.\n\n Raises:\n NotComputableError: raised when the metric cannot be computed.\n \"\"\"\n pass\n\n def started(self, engine):\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine):\n output = self._output_transform(engine.state.output)\n self.update(output)\n\n def completed(self, engine, name):\n result = self.compute()\n if torch.is_tensor(result) and len(result.shape) == 0:\n result = result.item()\n engine.state.metrics[name] = result\n\n def attach(self, engine, name):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __radd__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, other, self)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, self, other)\n\n def __rsub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, other, self)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __rmul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, other, self)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __rpow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, other, self)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __rdiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __rtruediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n 
return MetricsLambda(lambda x, y: x // y, self, other)\n\n def __getattr__(self, attr):\n from ignite.metrics import MetricsLambda\n\n def fn(x, *args, **kwargs):\n return getattr(x, attr)(*args, **kwargs)\n\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n return wrapper\n\n def __getitem__(self, index):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x: x[index], self)\n", "path": "ignite/metrics/metric.py"}]}
| 1,842 | 114 |
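A minimal usage sketch for the ignite record above: once the `Metric.__getitem__` from that golden diff is in place, per-class metrics can be indexed and combined arithmetically. The engine, the process function, and the Precision/Recall pairing are illustrative assumptions, not part of the patch.

```python
# Sketch only: assumes an ignite build that includes the Metric.__getitem__
# patch shown above. The process function and metric choice are placeholders.
from ignite.engine import Engine
from ignite.metrics import Precision, Recall

def process(engine, batch):
    y_pred, y = batch  # batches are assumed to already be (predictions, targets)
    return y_pred, y

engine = Engine(process)

precision = Precision(average=False)  # per-class values
recall = Recall(average=False)

# The arithmetic dunders shown in the record compose metrics lazily;
# indexing returns a MetricsLambda thanks to the new __getitem__.
f1_class0 = precision[0] * recall[0] * 2 / (precision[0] + recall[0] + 1e-20)
f1_class0.attach(engine, "f1_class0")
```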
gh_patches_debug_43844
|
rasdani/github-patches
|
git_diff
|
fidals__shopelectro-474
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
product_404.html:14-17: Create styles for 404 products...
The puzzle `444-2276c763` from #444 has to be resolved:
https://github.com/fidals/shopelectro/blob/4db19ac9abcba2ce9849c5fd1210ba4ff7b0b8d3/templates/catalog/product_404.html#L14-L17
The puzzle was created by duker33 on 02-Aug-18.
Estimate: 60 minutes,
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/views/catalog.py`
Content:
```
1 import typing
2 from functools import partial
3
4 from django import http
5 from django.conf import settings
6 from django.core.paginator import Paginator, InvalidPage
7 from django.shortcuts import render, get_object_or_404
8 from django.views.decorators.http import require_POST
9 from django_user_agents.utils import get_user_agent
10
11 from catalog.views import catalog
12 from images.models import Image
13 from pages import views as pages_views
14
15 from shopelectro import config
16 from shopelectro import models
17 from shopelectro.views.helpers import set_csrf_cookie
18
19 PRODUCTS_ON_PAGE_PC = 48
20 PRODUCTS_ON_PAGE_MOB = 12
21
22
23 def get_products_count(request):
24 """Calculate max products list size from request. List size depends on device type."""
25 mobile_view = get_user_agent(request).is_mobile
26 return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC
27
28
29 def get_paginated_page_or_404(objects, per_page, page_number):
30 try:
31 return Paginator(objects, per_page).page(page_number)
32 except InvalidPage:
33 raise http.Http404('Page does not exist')
34
35
36 # CATALOG VIEWS
37 class CategoryTree(catalog.CategoryTree):
38 category_model = models.Category
39
40
41 @set_csrf_cookie
42 class ProductPage(catalog.ProductPage):
43 pk_url_kwarg = None
44 slug_url_kwarg = 'product_vendor_code'
45 slug_field = 'vendor_code'
46
47 queryset = (
48 models.Product.objects
49 .filter(category__isnull=False, page__is_active=True)
50 .prefetch_related('product_feedbacks', 'page__images')
51 .select_related('page')
52 )
53
54 def get(self, request, *args, **kwargs):
55 try:
56 self.object = self.get_object()
57 except http.Http404 as error404:
58 response_404 = self.render_siblings_on_404(request, **kwargs)
59 if response_404:
60 return response_404
61 else:
62 raise error404
63
64 context = self.get_context_data(object=self.object)
65 return self.render_to_response(context)
66
67 def get_context_data(self, **kwargs):
68 context = super(ProductPage, self).get_context_data(**kwargs)
69 product = self.object
70 if not product.page.is_active:
71 # this context required to render 404 page
72 # with it's own logic
73 return context
74
75 group_tags_pairs = (
76 models.Tag.objects
77 .filter(products=self.object)
78 .get_group_tags_pairs()
79 )
80
81 return {
82 **context,
83 'price_bounds': config.PRICE_BOUNDS,
84 'group_tags_pairs': group_tags_pairs
85 }
86
87 def render_siblings_on_404(
88 self, request, **url_kwargs
89 ) -> typing.Union[http.Http404, None]:
90 """Try to render removed product's siblings on it's 404 page."""
91 inactive_product = models.Product.objects.filter(
92 **{self.slug_field: url_kwargs['product_vendor_code']},
93 category__isnull=False,
94 page__is_active=False
95 ).first()
96 if inactive_product:
97 related_products = models.Product.objects.filter(
98 category=inactive_product.category,
99 page__is_active=True
100 )[:10]
101 self.object = inactive_product
102 context = self.get_context_data(object=inactive_product, **url_kwargs)
103 context.update(related_products=related_products)
104 return render(request, 'catalog/product_404.html', context, status=404)
105
106
107 # SHOPELECTRO-SPECIFIC VIEWS
108 @set_csrf_cookie
109 class IndexPage(pages_views.CustomPageView):
110
111 def get_context_data(self, **kwargs):
112 """Extended method. Add product's images to context."""
113 context = super(IndexPage, self).get_context_data(**kwargs)
114 mobile_view = get_user_agent(self.request).is_mobile
115
116 top_products = (
117 models.Product.objects
118 .filter(id__in=settings.TOP_PRODUCTS, page__is_active=True)
119 .prefetch_related('category')
120 .select_related('page')
121 )
122
123 images = Image.objects.get_main_images_by_pages(
124 models.ProductPage.objects.filter(
125 shopelectro_product__in=top_products
126 )
127 )
128
129 categories = models.Category.objects.get_root_categories_by_products(
130 top_products)
131
132 prepared_top_products = []
133 if not mobile_view:
134 prepared_top_products = [
135 (product, images.get(product.page), categories.get(product))
136 for product in top_products
137 ]
138
139 return {
140 **context,
141 'category_tile': config.MAIN_PAGE_TILE,
142 'prepared_top_products': prepared_top_products,
143 }
144
145
146 def merge_products_and_images(products):
147 images = Image.objects.get_main_images_by_pages(
148 models.ProductPage.objects.filter(shopelectro_product__in=products)
149 )
150
151 return [
152 (product, images.get(product.page))
153 for product in products
154 ]
155
156
157 @set_csrf_cookie
158 class CategoryPage(catalog.CategoryPage):
159
160 def get_context_data(self, **kwargs):
161 """Add sorting options and view_types in context."""
162 context = super().get_context_data(**kwargs)
163 products_on_page = int(self.request.GET.get(
164 'step', get_products_count(self.request),
165 ))
166 page_number = int(self.request.GET.get('page', 1))
167 view_type = self.request.session.get('view_type', 'tile')
168 sorting = int(self.kwargs.get('sorting', 0))
169 sorting_option = config.category_sorting(sorting)
170 category = context['category']
171 if (
172 page_number < 1 or
173 products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS
174 ):
175 raise http.Http404('Page does not exist.')
176
177 all_products = (
178 models.Product.objects
179 .prefetch_related('page__images')
180 .select_related('page')
181 .get_by_category(category, ordering=(sorting_option, ))
182 )
183
184 group_tags_pairs = (
185 models.Tag.objects
186 .filter(products__in=all_products)
187 .get_group_tags_pairs()
188 )
189
190 tags = self.kwargs.get('tags')
191
192 tag_titles = ''
193 if tags:
194 slugs = models.Tag.parse_url_tags(tags)
195 tags = models.Tag.objects.filter(slug__in=slugs)
196
197 all_products = (
198 all_products
199 .filter(tags__in=tags)
200 # Use distinct because filtering by QuerySet tags,
201 # that related with products by many-to-many relation.
202 .distinct(sorting_option.lstrip('-'))
203 )
204
205 tag_titles = models.serialize_tags_to_title(tags)
206
207 def template_context(page, tag_titles, tags):
208 return {
209 'page': page,
210 'tag_titles': tag_titles,
211 'tags': tags,
212 }
213
214 page = context['page']
215 page.get_template_render_context = partial(
216 template_context, page, tag_titles, tags)
217
218 paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)
219 total_products = all_products.count()
220 products = paginated_page.object_list
221 if not products:
222 raise http.Http404('Page without products does not exist.')
223
224 return {
225 **context,
226 'product_image_pairs': merge_products_and_images(products),
227 'group_tags_pairs': group_tags_pairs,
228 'total_products': total_products,
229 'products_count': (page_number - 1) * products_on_page + products.count(),
230 'paginated_page': paginated_page,
231 'sorting_options': config.category_sorting(),
232 'limits': settings.CATEGORY_STEP_MULTIPLIERS,
233 'sort': sorting,
234 'tags': tags,
235 'view_type': view_type,
236 'skip_canonical': bool(tags),
237 }
238
239
240 def load_more(request, category_slug, offset=0, limit=0, sorting=0, tags=None):
241 """
242 Load more products of a given category.
243
244 :param sorting: preferred sorting index from CATEGORY_SORTING tuple
245 :param request: HttpRequest object
246 :param category_slug: Slug for a given category
247 :param offset: used for slicing QuerySet.
248 :return: products list in html format
249 """
250 products_on_page = limit or get_products_count(request)
251 offset = int(offset)
252 if offset < 0:
253 return http.HttpResponseBadRequest(
254 'The offset is wrong. An offset should be greater than or equal to 0.'
255 )
256 if products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS:
257 return http.HttpResponseBadRequest(
258 'The limit number is wrong. List of available numbers:'
259 f' {", ".join(map(str, settings.CATEGORY_STEP_MULTIPLIERS))}'
260 )
261 # increment page number because:
262 # 11 // 12 = 0, 0 // 12 = 0 but it should be the first page
263 # 12 // 12 = 1, 23 // 12 = 1, but it should be the second page
264 page_number = (offset // products_on_page) + 1
265 category = get_object_or_404(models.CategoryPage, slug=category_slug).model
266 sorting_option = config.category_sorting(int(sorting))
267
268 all_products = (
269 models.Product.objects
270 .prefetch_related('page__images')
271 .select_related('page')
272 .get_by_category(category, ordering=(sorting_option,))
273 )
274
275 if tags:
276 tag_entities = models.Tag.objects.filter(
277 slug__in=models.Tag.parse_url_tags(tags)
278 )
279
280 all_products = (
281 all_products
282 .filter(tags__in=tag_entities)
283 # Use distinct because filtering by QuerySet tags,
284 # that related with products by many-to-many relation.
285 .distinct(sorting_option.lstrip('-'))
286 )
287
288 paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)
289 products = paginated_page.object_list
290 view = request.session.get('view_type', 'tile')
291
292 return render(request, 'catalog/category_products.html', {
293 'product_image_pairs': merge_products_and_images(products),
294 'paginated_page': paginated_page,
295 'view_type': view,
296 'prods': products_on_page,
297 })
298
299
300 @require_POST
301 def save_feedback(request):
302 def get_keys_from_post(*args):
303 return {arg: request.POST.get(arg, '') for arg in args}
304
305 product_id = request.POST.get('id')
306 product = models.Product.objects.filter(id=product_id, page__is_active=True).first()
307 if not (product_id and product):
308 return http.HttpResponse(status=422)
309
310 fields = ['rating', 'name', 'dignities', 'limitations', 'general']
311 feedback_data = get_keys_from_post(*fields)
312
313 models.ProductFeedback.objects.create(product=product, **feedback_data)
314 return http.HttpResponse('ok')
315
316
317 @require_POST
318 def delete_feedback(request):
319 if not request.user.is_authenticated:
320 return http.HttpResponseForbidden('Not today, sly guy...')
321
322 feedback_id = request.POST.get('id')
323 feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()
324 if not (feedback_id and feedback):
325 return http.HttpResponse(status=422)
326
327 feedback.delete()
328 return http.HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))
329
330
331 class ProductsWithoutImages(catalog.ProductsWithoutImages):
332 model = models.Product
333
334
335 class ProductsWithoutText(catalog.ProductsWithoutText):
336 model = models.Product
337
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/shopelectro/views/catalog.py b/shopelectro/views/catalog.py
--- a/shopelectro/views/catalog.py
+++ b/shopelectro/views/catalog.py
@@ -38,6 +38,21 @@
category_model = models.Category
+def prepare_tile_products(products):
+ images = Image.objects.get_main_images_by_pages(
+ models.ProductPage.objects.filter(
+ shopelectro_product__in=products
+ )
+ )
+ categories = models.Category.objects.get_root_categories_by_products(
+ products
+ )
+ return [
+ (product, images.get(product.page), categories.get(product))
+ for product in products
+ ]
+
+
@set_csrf_cookie
class ProductPage(catalog.ProductPage):
pk_url_kwarg = None
@@ -94,13 +109,22 @@
page__is_active=False
).first()
if inactive_product:
- related_products = models.Product.objects.filter(
- category=inactive_product.category,
- page__is_active=True
- )[:10]
+ related_products = (
+ models.Product.objects
+ .filter(
+ category=inactive_product.category,
+ page__is_active=True,
+ )
+ .prefetch_related('category')
+ .select_related('page')[:10]
+ )
self.object = inactive_product
- context = self.get_context_data(object=inactive_product, **url_kwargs)
- context.update(related_products=related_products)
+ context = self.get_context_data(
+ object=inactive_product,
+ tile_products=prepare_tile_products(related_products),
+ tile_title='Возможно вас заинтересуют похожие товары:',
+ **url_kwargs,
+ )
return render(request, 'catalog/product_404.html', context, status=404)
@@ -113,33 +137,21 @@
context = super(IndexPage, self).get_context_data(**kwargs)
mobile_view = get_user_agent(self.request).is_mobile
- top_products = (
- models.Product.objects
- .filter(id__in=settings.TOP_PRODUCTS, page__is_active=True)
- .prefetch_related('category')
- .select_related('page')
- )
-
- images = Image.objects.get_main_images_by_pages(
- models.ProductPage.objects.filter(
- shopelectro_product__in=top_products
- )
- )
-
- categories = models.Category.objects.get_root_categories_by_products(
- top_products)
-
- prepared_top_products = []
+ tile_products = []
if not mobile_view:
- prepared_top_products = [
- (product, images.get(product.page), categories.get(product))
- for product in top_products
- ]
+ top_products = (
+ models.Product.objects
+ .filter(id__in=settings.TOP_PRODUCTS, page__is_active=True)
+ .prefetch_related('category')
+ .select_related('page')
+ )
+ tile_products = prepare_tile_products(top_products)
return {
**context,
+ 'tile_title': 'ТОП 10 ТОВАРОВ',
'category_tile': config.MAIN_PAGE_TILE,
- 'prepared_top_products': prepared_top_products,
+ 'tile_products': tile_products,
}
@@ -172,10 +184,14 @@
page_number < 1 or
products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS
):
- raise http.Http404('Page does not exist.')
+ raise http.Http404('Page does not exist.') # Ignore CPDBear
+
+ # @todo #470:15m Implement a new method for a Product's manager to get all_products
+ # as below.
all_products = (
models.Product.objects
+ .filter(page__is_active=True)
.prefetch_related('page__images')
.select_related('page')
.get_by_category(category, ordering=(sorting_option, ))
@@ -267,6 +283,7 @@
all_products = (
models.Product.objects
+ .filter(page__is_active=True)
.prefetch_related('page__images')
.select_related('page')
.get_by_category(category, ordering=(sorting_option,))
|
{"golden_diff": "diff --git a/shopelectro/views/catalog.py b/shopelectro/views/catalog.py\n--- a/shopelectro/views/catalog.py\n+++ b/shopelectro/views/catalog.py\n@@ -38,6 +38,21 @@\n category_model = models.Category\n \n \n+def prepare_tile_products(products):\n+ images = Image.objects.get_main_images_by_pages(\n+ models.ProductPage.objects.filter(\n+ shopelectro_product__in=products\n+ )\n+ )\n+ categories = models.Category.objects.get_root_categories_by_products(\n+ products\n+ )\n+ return [\n+ (product, images.get(product.page), categories.get(product))\n+ for product in products\n+ ]\n+\n+\n @set_csrf_cookie\n class ProductPage(catalog.ProductPage):\n pk_url_kwarg = None\n@@ -94,13 +109,22 @@\n page__is_active=False\n ).first()\n if inactive_product:\n- related_products = models.Product.objects.filter(\n- category=inactive_product.category,\n- page__is_active=True\n- )[:10]\n+ related_products = (\n+ models.Product.objects\n+ .filter(\n+ category=inactive_product.category,\n+ page__is_active=True,\n+ )\n+ .prefetch_related('category')\n+ .select_related('page')[:10]\n+ )\n self.object = inactive_product\n- context = self.get_context_data(object=inactive_product, **url_kwargs)\n- context.update(related_products=related_products)\n+ context = self.get_context_data(\n+ object=inactive_product,\n+ tile_products=prepare_tile_products(related_products),\n+ tile_title='\u0412\u043e\u0437\u043c\u043e\u0436\u043d\u043e \u0432\u0430\u0441 \u0437\u0430\u0438\u043d\u0442\u0435\u0440\u0435\u0441\u0443\u044e\u0442 \u043f\u043e\u0445\u043e\u0436\u0438\u0435 \u0442\u043e\u0432\u0430\u0440\u044b:',\n+ **url_kwargs,\n+ )\n return render(request, 'catalog/product_404.html', context, status=404)\n \n \n@@ -113,33 +137,21 @@\n context = super(IndexPage, self).get_context_data(**kwargs)\n mobile_view = get_user_agent(self.request).is_mobile\n \n- top_products = (\n- models.Product.objects\n- .filter(id__in=settings.TOP_PRODUCTS, page__is_active=True)\n- .prefetch_related('category')\n- .select_related('page')\n- )\n-\n- images = Image.objects.get_main_images_by_pages(\n- models.ProductPage.objects.filter(\n- shopelectro_product__in=top_products\n- )\n- )\n-\n- categories = models.Category.objects.get_root_categories_by_products(\n- top_products)\n-\n- prepared_top_products = []\n+ tile_products = []\n if not mobile_view:\n- prepared_top_products = [\n- (product, images.get(product.page), categories.get(product))\n- for product in top_products\n- ]\n+ top_products = (\n+ models.Product.objects\n+ .filter(id__in=settings.TOP_PRODUCTS, page__is_active=True)\n+ .prefetch_related('category')\n+ .select_related('page')\n+ )\n+ tile_products = prepare_tile_products(top_products)\n \n return {\n **context,\n+ 'tile_title': '\u0422\u041e\u041f 10 \u0422\u041e\u0412\u0410\u0420\u041e\u0412',\n 'category_tile': config.MAIN_PAGE_TILE,\n- 'prepared_top_products': prepared_top_products,\n+ 'tile_products': tile_products,\n }\n \n \n@@ -172,10 +184,14 @@\n page_number < 1 or\n products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS\n ):\n- raise http.Http404('Page does not exist.')\n+ raise http.Http404('Page does not exist.') # Ignore CPDBear\n+\n+ # @todo #470:15m Implement a new method for a Product's manager to get all_products\n+ # as below.\n \n all_products = (\n models.Product.objects\n+ .filter(page__is_active=True)\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option, ))\n@@ -267,6 +283,7 @@\n \n all_products = (\n models.Product.objects\n+ 
.filter(page__is_active=True)\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option,))\n", "issue": "product_404.html:14-17: Create styles for 404 products...\nThe puzzle `444-2276c763` from #444 has to be resolved:\n\nhttps://github.com/fidals/shopelectro/blob/4db19ac9abcba2ce9849c5fd1210ba4ff7b0b8d3/templates/catalog/product_404.html#L14-L17\n\nThe puzzle was created by duker33 on 02-Aug-18. \n\nEstimate: 60 minutes, \n\nIf you have any technical questions, don't ask me, submit new tickets instead. The task will be \"done\" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).\n", "before_files": [{"content": "import typing\nfrom functools import partial\n\nfrom django import http\nfrom django.conf import settings\nfrom django.core.paginator import Paginator, InvalidPage\nfrom django.shortcuts import render, get_object_or_404\nfrom django.views.decorators.http import require_POST\nfrom django_user_agents.utils import get_user_agent\n\nfrom catalog.views import catalog\nfrom images.models import Image\nfrom pages import views as pages_views\n\nfrom shopelectro import config\nfrom shopelectro import models\nfrom shopelectro.views.helpers import set_csrf_cookie\n\nPRODUCTS_ON_PAGE_PC = 48\nPRODUCTS_ON_PAGE_MOB = 12\n\n\ndef get_products_count(request):\n \"\"\"Calculate max products list size from request. List size depends on device type.\"\"\"\n mobile_view = get_user_agent(request).is_mobile\n return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC\n\n\ndef get_paginated_page_or_404(objects, per_page, page_number):\n try:\n return Paginator(objects, per_page).page(page_number)\n except InvalidPage:\n raise http.Http404('Page does not exist')\n\n\n# CATALOG VIEWS\nclass CategoryTree(catalog.CategoryTree):\n category_model = models.Category\n\n\n@set_csrf_cookie\nclass ProductPage(catalog.ProductPage):\n pk_url_kwarg = None\n slug_url_kwarg = 'product_vendor_code'\n slug_field = 'vendor_code'\n\n queryset = (\n models.Product.objects\n .filter(category__isnull=False, page__is_active=True)\n .prefetch_related('product_feedbacks', 'page__images')\n .select_related('page')\n )\n\n def get(self, request, *args, **kwargs):\n try:\n self.object = self.get_object()\n except http.Http404 as error404:\n response_404 = self.render_siblings_on_404(request, **kwargs)\n if response_404:\n return response_404\n else:\n raise error404\n\n context = self.get_context_data(object=self.object)\n return self.render_to_response(context)\n\n def get_context_data(self, **kwargs):\n context = super(ProductPage, self).get_context_data(**kwargs)\n product = self.object\n if not product.page.is_active:\n # this context required to render 404 page\n # with it's own logic\n return context\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products=self.object)\n .get_group_tags_pairs()\n )\n\n return {\n **context,\n 'price_bounds': config.PRICE_BOUNDS,\n 'group_tags_pairs': group_tags_pairs\n }\n\n def render_siblings_on_404(\n self, request, **url_kwargs\n ) -> typing.Union[http.Http404, None]:\n \"\"\"Try to render removed product's siblings on it's 404 page.\"\"\"\n inactive_product = models.Product.objects.filter(\n **{self.slug_field: url_kwargs['product_vendor_code']},\n category__isnull=False,\n page__is_active=False\n ).first()\n if inactive_product:\n 
related_products = models.Product.objects.filter(\n category=inactive_product.category,\n page__is_active=True\n )[:10]\n self.object = inactive_product\n context = self.get_context_data(object=inactive_product, **url_kwargs)\n context.update(related_products=related_products)\n return render(request, 'catalog/product_404.html', context, status=404)\n\n\n# SHOPELECTRO-SPECIFIC VIEWS\n@set_csrf_cookie\nclass IndexPage(pages_views.CustomPageView):\n\n def get_context_data(self, **kwargs):\n \"\"\"Extended method. Add product's images to context.\"\"\"\n context = super(IndexPage, self).get_context_data(**kwargs)\n mobile_view = get_user_agent(self.request).is_mobile\n\n top_products = (\n models.Product.objects\n .filter(id__in=settings.TOP_PRODUCTS, page__is_active=True)\n .prefetch_related('category')\n .select_related('page')\n )\n\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(\n shopelectro_product__in=top_products\n )\n )\n\n categories = models.Category.objects.get_root_categories_by_products(\n top_products)\n\n prepared_top_products = []\n if not mobile_view:\n prepared_top_products = [\n (product, images.get(product.page), categories.get(product))\n for product in top_products\n ]\n\n return {\n **context,\n 'category_tile': config.MAIN_PAGE_TILE,\n 'prepared_top_products': prepared_top_products,\n }\n\n\ndef merge_products_and_images(products):\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(shopelectro_product__in=products)\n )\n\n return [\n (product, images.get(product.page))\n for product in products\n ]\n\n\n@set_csrf_cookie\nclass CategoryPage(catalog.CategoryPage):\n\n def get_context_data(self, **kwargs):\n \"\"\"Add sorting options and view_types in context.\"\"\"\n context = super().get_context_data(**kwargs)\n products_on_page = int(self.request.GET.get(\n 'step', get_products_count(self.request),\n ))\n page_number = int(self.request.GET.get('page', 1))\n view_type = self.request.session.get('view_type', 'tile')\n sorting = int(self.kwargs.get('sorting', 0))\n sorting_option = config.category_sorting(sorting)\n category = context['category']\n if (\n page_number < 1 or\n products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS\n ):\n raise http.Http404('Page does not exist.')\n\n all_products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option, ))\n )\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products__in=all_products)\n .get_group_tags_pairs()\n )\n\n tags = self.kwargs.get('tags')\n\n tag_titles = ''\n if tags:\n slugs = models.Tag.parse_url_tags(tags)\n tags = models.Tag.objects.filter(slug__in=slugs)\n\n all_products = (\n all_products\n .filter(tags__in=tags)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n tag_titles = models.serialize_tags_to_title(tags)\n\n def template_context(page, tag_titles, tags):\n return {\n 'page': page,\n 'tag_titles': tag_titles,\n 'tags': tags,\n }\n\n page = context['page']\n page.get_template_render_context = partial(\n template_context, page, tag_titles, tags)\n\n paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)\n total_products = all_products.count()\n products = paginated_page.object_list\n if not products:\n raise http.Http404('Page without products does not exist.')\n\n return {\n **context,\n 
'product_image_pairs': merge_products_and_images(products),\n 'group_tags_pairs': group_tags_pairs,\n 'total_products': total_products,\n 'products_count': (page_number - 1) * products_on_page + products.count(),\n 'paginated_page': paginated_page,\n 'sorting_options': config.category_sorting(),\n 'limits': settings.CATEGORY_STEP_MULTIPLIERS,\n 'sort': sorting,\n 'tags': tags,\n 'view_type': view_type,\n 'skip_canonical': bool(tags),\n }\n\n\ndef load_more(request, category_slug, offset=0, limit=0, sorting=0, tags=None):\n \"\"\"\n Load more products of a given category.\n\n :param sorting: preferred sorting index from CATEGORY_SORTING tuple\n :param request: HttpRequest object\n :param category_slug: Slug for a given category\n :param offset: used for slicing QuerySet.\n :return: products list in html format\n \"\"\"\n products_on_page = limit or get_products_count(request)\n offset = int(offset)\n if offset < 0:\n return http.HttpResponseBadRequest(\n 'The offset is wrong. An offset should be greater than or equal to 0.'\n )\n if products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS:\n return http.HttpResponseBadRequest(\n 'The limit number is wrong. List of available numbers:'\n f' {\", \".join(map(str, settings.CATEGORY_STEP_MULTIPLIERS))}'\n )\n # increment page number because:\n # 11 // 12 = 0, 0 // 12 = 0 but it should be the first page\n # 12 // 12 = 1, 23 // 12 = 1, but it should be the second page\n page_number = (offset // products_on_page) + 1\n category = get_object_or_404(models.CategoryPage, slug=category_slug).model\n sorting_option = config.category_sorting(int(sorting))\n\n all_products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option,))\n )\n\n if tags:\n tag_entities = models.Tag.objects.filter(\n slug__in=models.Tag.parse_url_tags(tags)\n )\n\n all_products = (\n all_products\n .filter(tags__in=tag_entities)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)\n products = paginated_page.object_list\n view = request.session.get('view_type', 'tile')\n\n return render(request, 'catalog/category_products.html', {\n 'product_image_pairs': merge_products_and_images(products),\n 'paginated_page': paginated_page,\n 'view_type': view,\n 'prods': products_on_page,\n })\n\n\n@require_POST\ndef save_feedback(request):\n def get_keys_from_post(*args):\n return {arg: request.POST.get(arg, '') for arg in args}\n\n product_id = request.POST.get('id')\n product = models.Product.objects.filter(id=product_id, page__is_active=True).first()\n if not (product_id and product):\n return http.HttpResponse(status=422)\n\n fields = ['rating', 'name', 'dignities', 'limitations', 'general']\n feedback_data = get_keys_from_post(*fields)\n\n models.ProductFeedback.objects.create(product=product, **feedback_data)\n return http.HttpResponse('ok')\n\n\n@require_POST\ndef delete_feedback(request):\n if not request.user.is_authenticated:\n return http.HttpResponseForbidden('Not today, sly guy...')\n\n feedback_id = request.POST.get('id')\n feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()\n if not (feedback_id and feedback):\n return http.HttpResponse(status=422)\n\n feedback.delete()\n return http.HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))\n\n\nclass 
ProductsWithoutImages(catalog.ProductsWithoutImages):\n model = models.Product\n\n\nclass ProductsWithoutText(catalog.ProductsWithoutText):\n model = models.Product\n", "path": "shopelectro/views/catalog.py"}], "after_files": [{"content": "import typing\nfrom functools import partial\n\nfrom django import http\nfrom django.conf import settings\nfrom django.core.paginator import Paginator, InvalidPage\nfrom django.shortcuts import render, get_object_or_404\nfrom django.views.decorators.http import require_POST\nfrom django_user_agents.utils import get_user_agent\n\nfrom catalog.views import catalog\nfrom images.models import Image\nfrom pages import views as pages_views\n\nfrom shopelectro import config\nfrom shopelectro import models\nfrom shopelectro.views.helpers import set_csrf_cookie\n\nPRODUCTS_ON_PAGE_PC = 48\nPRODUCTS_ON_PAGE_MOB = 12\n\n\ndef get_products_count(request):\n \"\"\"Calculate max products list size from request. List size depends on device type.\"\"\"\n mobile_view = get_user_agent(request).is_mobile\n return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC\n\n\ndef get_paginated_page_or_404(objects, per_page, page_number):\n try:\n return Paginator(objects, per_page).page(page_number)\n except InvalidPage:\n raise http.Http404('Page does not exist')\n\n\n# CATALOG VIEWS\nclass CategoryTree(catalog.CategoryTree):\n category_model = models.Category\n\n\ndef prepare_tile_products(products):\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(\n shopelectro_product__in=products\n )\n )\n categories = models.Category.objects.get_root_categories_by_products(\n products\n )\n return [\n (product, images.get(product.page), categories.get(product))\n for product in products\n ]\n\n\n@set_csrf_cookie\nclass ProductPage(catalog.ProductPage):\n pk_url_kwarg = None\n slug_url_kwarg = 'product_vendor_code'\n slug_field = 'vendor_code'\n\n queryset = (\n models.Product.objects\n .filter(category__isnull=False, page__is_active=True)\n .prefetch_related('product_feedbacks', 'page__images')\n .select_related('page')\n )\n\n def get(self, request, *args, **kwargs):\n try:\n self.object = self.get_object()\n except http.Http404 as error404:\n response_404 = self.render_siblings_on_404(request, **kwargs)\n if response_404:\n return response_404\n else:\n raise error404\n\n context = self.get_context_data(object=self.object)\n return self.render_to_response(context)\n\n def get_context_data(self, **kwargs):\n context = super(ProductPage, self).get_context_data(**kwargs)\n product = self.object\n if not product.page.is_active:\n # this context required to render 404 page\n # with it's own logic\n return context\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products=self.object)\n .get_group_tags_pairs()\n )\n\n return {\n **context,\n 'price_bounds': config.PRICE_BOUNDS,\n 'group_tags_pairs': group_tags_pairs\n }\n\n def render_siblings_on_404(\n self, request, **url_kwargs\n ) -> typing.Union[http.Http404, None]:\n \"\"\"Try to render removed product's siblings on it's 404 page.\"\"\"\n inactive_product = models.Product.objects.filter(\n **{self.slug_field: url_kwargs['product_vendor_code']},\n category__isnull=False,\n page__is_active=False\n ).first()\n if inactive_product:\n related_products = (\n models.Product.objects\n .filter(\n category=inactive_product.category,\n page__is_active=True,\n )\n .prefetch_related('category')\n .select_related('page')[:10]\n )\n self.object = inactive_product\n context = self.get_context_data(\n 
object=inactive_product,\n tile_products=prepare_tile_products(related_products),\n tile_title='\u0412\u043e\u0437\u043c\u043e\u0436\u043d\u043e \u0432\u0430\u0441 \u0437\u0430\u0438\u043d\u0442\u0435\u0440\u0435\u0441\u0443\u044e\u0442 \u043f\u043e\u0445\u043e\u0436\u0438\u0435 \u0442\u043e\u0432\u0430\u0440\u044b:',\n **url_kwargs,\n )\n return render(request, 'catalog/product_404.html', context, status=404)\n\n\n# SHOPELECTRO-SPECIFIC VIEWS\n@set_csrf_cookie\nclass IndexPage(pages_views.CustomPageView):\n\n def get_context_data(self, **kwargs):\n \"\"\"Extended method. Add product's images to context.\"\"\"\n context = super(IndexPage, self).get_context_data(**kwargs)\n mobile_view = get_user_agent(self.request).is_mobile\n\n tile_products = []\n if not mobile_view:\n top_products = (\n models.Product.objects\n .filter(id__in=settings.TOP_PRODUCTS, page__is_active=True)\n .prefetch_related('category')\n .select_related('page')\n )\n tile_products = prepare_tile_products(top_products)\n\n return {\n **context,\n 'tile_title': '\u0422\u041e\u041f 10 \u0422\u041e\u0412\u0410\u0420\u041e\u0412',\n 'category_tile': config.MAIN_PAGE_TILE,\n 'tile_products': tile_products,\n }\n\n\ndef merge_products_and_images(products):\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(shopelectro_product__in=products)\n )\n\n return [\n (product, images.get(product.page))\n for product in products\n ]\n\n\n@set_csrf_cookie\nclass CategoryPage(catalog.CategoryPage):\n\n def get_context_data(self, **kwargs):\n \"\"\"Add sorting options and view_types in context.\"\"\"\n context = super().get_context_data(**kwargs)\n products_on_page = int(self.request.GET.get(\n 'step', get_products_count(self.request),\n ))\n page_number = int(self.request.GET.get('page', 1))\n view_type = self.request.session.get('view_type', 'tile')\n sorting = int(self.kwargs.get('sorting', 0))\n sorting_option = config.category_sorting(sorting)\n category = context['category']\n if (\n page_number < 1 or\n products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS\n ):\n raise http.Http404('Page does not exist.') # Ignore CPDBear\n\n # @todo #470:15m Implement a new method for a Product's manager to get all_products\n # as below.\n\n all_products = (\n models.Product.objects\n .filter(page__is_active=True)\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option, ))\n )\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products__in=all_products)\n .get_group_tags_pairs()\n )\n\n tags = self.kwargs.get('tags')\n\n tag_titles = ''\n if tags:\n slugs = models.Tag.parse_url_tags(tags)\n tags = models.Tag.objects.filter(slug__in=slugs)\n\n all_products = (\n all_products\n .filter(tags__in=tags)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n tag_titles = models.serialize_tags_to_title(tags)\n\n def template_context(page, tag_titles, tags):\n return {\n 'page': page,\n 'tag_titles': tag_titles,\n 'tags': tags,\n }\n\n page = context['page']\n page.get_template_render_context = partial(\n template_context, page, tag_titles, tags)\n\n paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)\n total_products = all_products.count()\n products = paginated_page.object_list\n if not products:\n raise http.Http404('Page without products does not exist.')\n\n return {\n **context,\n 'product_image_pairs': 
merge_products_and_images(products),\n 'group_tags_pairs': group_tags_pairs,\n 'total_products': total_products,\n 'products_count': (page_number - 1) * products_on_page + products.count(),\n 'paginated_page': paginated_page,\n 'sorting_options': config.category_sorting(),\n 'limits': settings.CATEGORY_STEP_MULTIPLIERS,\n 'sort': sorting,\n 'tags': tags,\n 'view_type': view_type,\n 'skip_canonical': bool(tags),\n }\n\n\ndef load_more(request, category_slug, offset=0, limit=0, sorting=0, tags=None):\n \"\"\"\n Load more products of a given category.\n\n :param sorting: preferred sorting index from CATEGORY_SORTING tuple\n :param request: HttpRequest object\n :param category_slug: Slug for a given category\n :param offset: used for slicing QuerySet.\n :return: products list in html format\n \"\"\"\n products_on_page = limit or get_products_count(request)\n offset = int(offset)\n if offset < 0:\n return http.HttpResponseBadRequest(\n 'The offset is wrong. An offset should be greater than or equal to 0.'\n )\n if products_on_page not in settings.CATEGORY_STEP_MULTIPLIERS:\n return http.HttpResponseBadRequest(\n 'The limit number is wrong. List of available numbers:'\n f' {\", \".join(map(str, settings.CATEGORY_STEP_MULTIPLIERS))}'\n )\n # increment page number because:\n # 11 // 12 = 0, 0 // 12 = 0 but it should be the first page\n # 12 // 12 = 1, 23 // 12 = 1, but it should be the second page\n page_number = (offset // products_on_page) + 1\n category = get_object_or_404(models.CategoryPage, slug=category_slug).model\n sorting_option = config.category_sorting(int(sorting))\n\n all_products = (\n models.Product.objects\n .filter(page__is_active=True)\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option,))\n )\n\n if tags:\n tag_entities = models.Tag.objects.filter(\n slug__in=models.Tag.parse_url_tags(tags)\n )\n\n all_products = (\n all_products\n .filter(tags__in=tag_entities)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n paginated_page = get_paginated_page_or_404(all_products, products_on_page, page_number)\n products = paginated_page.object_list\n view = request.session.get('view_type', 'tile')\n\n return render(request, 'catalog/category_products.html', {\n 'product_image_pairs': merge_products_and_images(products),\n 'paginated_page': paginated_page,\n 'view_type': view,\n 'prods': products_on_page,\n })\n\n\n@require_POST\ndef save_feedback(request):\n def get_keys_from_post(*args):\n return {arg: request.POST.get(arg, '') for arg in args}\n\n product_id = request.POST.get('id')\n product = models.Product.objects.filter(id=product_id, page__is_active=True).first()\n if not (product_id and product):\n return http.HttpResponse(status=422)\n\n fields = ['rating', 'name', 'dignities', 'limitations', 'general']\n feedback_data = get_keys_from_post(*fields)\n\n models.ProductFeedback.objects.create(product=product, **feedback_data)\n return http.HttpResponse('ok')\n\n\n@require_POST\ndef delete_feedback(request):\n if not request.user.is_authenticated:\n return http.HttpResponseForbidden('Not today, sly guy...')\n\n feedback_id = request.POST.get('id')\n feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()\n if not (feedback_id and feedback):\n return http.HttpResponse(status=422)\n\n feedback.delete()\n return http.HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))\n\n\nclass 
ProductsWithoutImages(catalog.ProductsWithoutImages):\n model = models.Product\n\n\nclass ProductsWithoutText(catalog.ProductsWithoutText):\n model = models.Product\n", "path": "shopelectro/views/catalog.py"}]}
| 3,916 | 964 |
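A framework-free sketch of the `prepare_tile_products` helper that the shopelectro golden diff above factors out. The namedtuple stand-ins and the two lookup dictionaries replace the Django ORM and image queries the real helper performs, so everything here except the (product, image, category) tuple shape is an assumption for illustration.

```python
# Runnable stand-in for the tile-product tuples built in the golden diff above.
from collections import namedtuple

Page = namedtuple("Page", "slug")
Product = namedtuple("Product", "name page")

def prepare_tile_products(products, images_by_page, categories_by_product):
    # Mirrors the diff: one (product, main image, root category) tuple per product.
    return [
        (product, images_by_page.get(product.page), categories_by_product.get(product))
        for product in products
    ]

page = Page(slug="drill-123")
product = Product(name="Drill", page=page)
tiles = prepare_tile_products(
    [product],
    images_by_page={page: "drill-main.jpg"},
    categories_by_product={product: "Power tools"},
)
print(tiles)  # [(Product(name='Drill', ...), 'drill-main.jpg', 'Power tools')]
```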
gh_patches_debug_27040
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-1005
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Imported ratings added as reviews
During a goodreads import, star ratings seem to be added as Reviews, rather than ReviewRatings
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/importers/importer.py`
Content:
```
1 """ handle reading a csv from an external service, defaults are from GoodReads """
2 import csv
3 import logging
4
5 from bookwyrm import models
6 from bookwyrm.models import ImportJob, ImportItem
7 from bookwyrm.tasks import app
8
9 logger = logging.getLogger(__name__)
10
11
12 class Importer:
13 """Generic class for csv data import from an outside service"""
14
15 service = "Unknown"
16 delimiter = ","
17 encoding = "UTF-8"
18 mandatory_fields = ["Title", "Author"]
19
20 def create_job(self, user, csv_file, include_reviews, privacy):
21 """check over a csv and creates a database entry for the job"""
22 job = ImportJob.objects.create(
23 user=user, include_reviews=include_reviews, privacy=privacy
24 )
25 for index, entry in enumerate(
26 list(csv.DictReader(csv_file, delimiter=self.delimiter))
27 ):
28 if not all(x in entry for x in self.mandatory_fields):
29 raise ValueError("Author and title must be in data.")
30 entry = self.parse_fields(entry)
31 self.save_item(job, index, entry)
32 return job
33
34 def save_item(self, job, index, data): # pylint: disable=no-self-use
35 """creates and saves an import item"""
36 ImportItem(job=job, index=index, data=data).save()
37
38 def parse_fields(self, entry):
39 """updates csv data with additional info"""
40 entry.update({"import_source": self.service})
41 return entry
42
43 def create_retry_job(self, user, original_job, items):
44 """retry items that didn't import"""
45 job = ImportJob.objects.create(
46 user=user,
47 include_reviews=original_job.include_reviews,
48 privacy=original_job.privacy,
49 retry=True,
50 )
51 for item in items:
52 self.save_item(job, item.index, item.data)
53 return job
54
55 def start_import(self, job):
56 """initalizes a csv import job"""
57 result = import_data.delay(self.service, job.id)
58 job.task_id = result.id
59 job.save()
60
61
62 @app.task
63 def import_data(source, job_id):
64 """does the actual lookup work in a celery task"""
65 job = ImportJob.objects.get(id=job_id)
66 try:
67 for item in job.items.all():
68 try:
69 item.resolve()
70 except Exception as e: # pylint: disable=broad-except
71 logger.exception(e)
72 item.fail_reason = "Error loading book"
73 item.save()
74 continue
75
76 if item.book:
77 item.save()
78
79 # shelves book and handles reviews
80 handle_imported_book(
81 source, job.user, item, job.include_reviews, job.privacy
82 )
83 else:
84 item.fail_reason = "Could not find a match for book"
85 item.save()
86 finally:
87 job.complete = True
88 job.save()
89
90
91 def handle_imported_book(source, user, item, include_reviews, privacy):
92 """process a csv and then post about it"""
93 if isinstance(item.book, models.Work):
94 item.book = item.book.default_edition
95 if not item.book:
96 return
97
98 existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()
99
100 # shelve the book if it hasn't been shelved already
101 if item.shelf and not existing_shelf:
102 desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)
103 models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)
104
105 for read in item.reads:
106 # check for an existing readthrough with the same dates
107 if models.ReadThrough.objects.filter(
108 user=user,
109 book=item.book,
110 start_date=read.start_date,
111 finish_date=read.finish_date,
112 ).exists():
113 continue
114 read.book = item.book
115 read.user = user
116 read.save()
117
118 if include_reviews and (item.rating or item.review):
119 review_title = (
120 "Review of {!r} on {!r}".format(
121 item.book.title,
122 source,
123 )
124 if item.review
125 else ""
126 )
127
128 # we don't know the publication date of the review,
129 # but "now" is a bad guess
130 published_date_guess = item.date_read or item.date_added
131 models.Review.objects.create(
132 user=user,
133 book=item.book,
134 name=review_title,
135 content=item.review,
136 rating=item.rating,
137 published_date=published_date_guess,
138 privacy=privacy,
139 )
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/importers/importer.py b/bookwyrm/importers/importer.py
--- a/bookwyrm/importers/importer.py
+++ b/bookwyrm/importers/importer.py
@@ -116,24 +116,33 @@
read.save()
if include_reviews and (item.rating or item.review):
- review_title = (
- "Review of {!r} on {!r}".format(
- item.book.title,
- source,
- )
- if item.review
- else ""
- )
-
# we don't know the publication date of the review,
# but "now" is a bad guess
published_date_guess = item.date_read or item.date_added
- models.Review.objects.create(
- user=user,
- book=item.book,
- name=review_title,
- content=item.review,
- rating=item.rating,
- published_date=published_date_guess,
- privacy=privacy,
- )
+ if item.review:
+ review_title = (
+ "Review of {!r} on {!r}".format(
+ item.book.title,
+ source,
+ )
+ if item.review
+ else ""
+ )
+ models.Review.objects.create(
+ user=user,
+ book=item.book,
+ name=review_title,
+ content=item.review,
+ rating=item.rating,
+ published_date=published_date_guess,
+ privacy=privacy,
+ )
+ else:
+ # just a rating
+ models.ReviewRating.objects.create(
+ user=user,
+ book=item.book,
+ rating=item.rating,
+ published_date=published_date_guess,
+ privacy=privacy,
+ )
|
{"golden_diff": "diff --git a/bookwyrm/importers/importer.py b/bookwyrm/importers/importer.py\n--- a/bookwyrm/importers/importer.py\n+++ b/bookwyrm/importers/importer.py\n@@ -116,24 +116,33 @@\n read.save()\n \n if include_reviews and (item.rating or item.review):\n- review_title = (\n- \"Review of {!r} on {!r}\".format(\n- item.book.title,\n- source,\n- )\n- if item.review\n- else \"\"\n- )\n-\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n- models.Review.objects.create(\n- user=user,\n- book=item.book,\n- name=review_title,\n- content=item.review,\n- rating=item.rating,\n- published_date=published_date_guess,\n- privacy=privacy,\n- )\n+ if item.review:\n+ review_title = (\n+ \"Review of {!r} on {!r}\".format(\n+ item.book.title,\n+ source,\n+ )\n+ if item.review\n+ else \"\"\n+ )\n+ models.Review.objects.create(\n+ user=user,\n+ book=item.book,\n+ name=review_title,\n+ content=item.review,\n+ rating=item.rating,\n+ published_date=published_date_guess,\n+ privacy=privacy,\n+ )\n+ else:\n+ # just a rating\n+ models.ReviewRating.objects.create(\n+ user=user,\n+ book=item.book,\n+ rating=item.rating,\n+ published_date=published_date_guess,\n+ privacy=privacy,\n+ )\n", "issue": "Imported ratings added as reviews\nDuring a goodreads import, star ratings seem to be added as Reviews, rather than ReviewRatings\n", "before_files": [{"content": "\"\"\" handle reading a csv from an external service, defaults are from GoodReads \"\"\"\nimport csv\nimport logging\n\nfrom bookwyrm import models\nfrom bookwyrm.models import ImportJob, ImportItem\nfrom bookwyrm.tasks import app\n\nlogger = logging.getLogger(__name__)\n\n\nclass Importer:\n \"\"\"Generic class for csv data import from an outside service\"\"\"\n\n service = \"Unknown\"\n delimiter = \",\"\n encoding = \"UTF-8\"\n mandatory_fields = [\"Title\", \"Author\"]\n\n def create_job(self, user, csv_file, include_reviews, privacy):\n \"\"\"check over a csv and creates a database entry for the job\"\"\"\n job = ImportJob.objects.create(\n user=user, include_reviews=include_reviews, privacy=privacy\n )\n for index, entry in enumerate(\n list(csv.DictReader(csv_file, delimiter=self.delimiter))\n ):\n if not all(x in entry for x in self.mandatory_fields):\n raise ValueError(\"Author and title must be in data.\")\n entry = self.parse_fields(entry)\n self.save_item(job, index, entry)\n return job\n\n def save_item(self, job, index, data): # pylint: disable=no-self-use\n \"\"\"creates and saves an import item\"\"\"\n ImportItem(job=job, index=index, data=data).save()\n\n def parse_fields(self, entry):\n \"\"\"updates csv data with additional info\"\"\"\n entry.update({\"import_source\": self.service})\n return entry\n\n def create_retry_job(self, user, original_job, items):\n \"\"\"retry items that didn't import\"\"\"\n job = ImportJob.objects.create(\n user=user,\n include_reviews=original_job.include_reviews,\n privacy=original_job.privacy,\n retry=True,\n )\n for item in items:\n self.save_item(job, item.index, item.data)\n return job\n\n def start_import(self, job):\n \"\"\"initalizes a csv import job\"\"\"\n result = import_data.delay(self.service, job.id)\n job.task_id = result.id\n job.save()\n\n\[email protected]\ndef import_data(source, job_id):\n \"\"\"does the actual lookup work in a celery task\"\"\"\n job = ImportJob.objects.get(id=job_id)\n try:\n for item in job.items.all():\n try:\n item.resolve()\n except Exception as e: # pylint: 
disable=broad-except\n logger.exception(e)\n item.fail_reason = \"Error loading book\"\n item.save()\n continue\n\n if item.book:\n item.save()\n\n # shelves book and handles reviews\n handle_imported_book(\n source, job.user, item, job.include_reviews, job.privacy\n )\n else:\n item.fail_reason = \"Could not find a match for book\"\n item.save()\n finally:\n job.complete = True\n job.save()\n\n\ndef handle_imported_book(source, user, item, include_reviews, privacy):\n \"\"\"process a csv and then post about it\"\"\"\n if isinstance(item.book, models.Work):\n item.book = item.book.default_edition\n if not item.book:\n return\n\n existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()\n\n # shelve the book if it hasn't been shelved already\n if item.shelf and not existing_shelf:\n desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)\n models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)\n\n for read in item.reads:\n # check for an existing readthrough with the same dates\n if models.ReadThrough.objects.filter(\n user=user,\n book=item.book,\n start_date=read.start_date,\n finish_date=read.finish_date,\n ).exists():\n continue\n read.book = item.book\n read.user = user\n read.save()\n\n if include_reviews and (item.rating or item.review):\n review_title = (\n \"Review of {!r} on {!r}\".format(\n item.book.title,\n source,\n )\n if item.review\n else \"\"\n )\n\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n models.Review.objects.create(\n user=user,\n book=item.book,\n name=review_title,\n content=item.review,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n", "path": "bookwyrm/importers/importer.py"}], "after_files": [{"content": "\"\"\" handle reading a csv from an external service, defaults are from GoodReads \"\"\"\nimport csv\nimport logging\n\nfrom bookwyrm import models\nfrom bookwyrm.models import ImportJob, ImportItem\nfrom bookwyrm.tasks import app\n\nlogger = logging.getLogger(__name__)\n\n\nclass Importer:\n \"\"\"Generic class for csv data import from an outside service\"\"\"\n\n service = \"Unknown\"\n delimiter = \",\"\n encoding = \"UTF-8\"\n mandatory_fields = [\"Title\", \"Author\"]\n\n def create_job(self, user, csv_file, include_reviews, privacy):\n \"\"\"check over a csv and creates a database entry for the job\"\"\"\n job = ImportJob.objects.create(\n user=user, include_reviews=include_reviews, privacy=privacy\n )\n for index, entry in enumerate(\n list(csv.DictReader(csv_file, delimiter=self.delimiter))\n ):\n if not all(x in entry for x in self.mandatory_fields):\n raise ValueError(\"Author and title must be in data.\")\n entry = self.parse_fields(entry)\n self.save_item(job, index, entry)\n return job\n\n def save_item(self, job, index, data): # pylint: disable=no-self-use\n \"\"\"creates and saves an import item\"\"\"\n ImportItem(job=job, index=index, data=data).save()\n\n def parse_fields(self, entry):\n \"\"\"updates csv data with additional info\"\"\"\n entry.update({\"import_source\": self.service})\n return entry\n\n def create_retry_job(self, user, original_job, items):\n \"\"\"retry items that didn't import\"\"\"\n job = ImportJob.objects.create(\n user=user,\n include_reviews=original_job.include_reviews,\n privacy=original_job.privacy,\n retry=True,\n )\n for item in items:\n self.save_item(job, item.index, item.data)\n return job\n\n def 
start_import(self, job):\n \"\"\"initalizes a csv import job\"\"\"\n result = import_data.delay(self.service, job.id)\n job.task_id = result.id\n job.save()\n\n\[email protected]\ndef import_data(source, job_id):\n \"\"\"does the actual lookup work in a celery task\"\"\"\n job = ImportJob.objects.get(id=job_id)\n try:\n for item in job.items.all():\n try:\n item.resolve()\n except Exception as e: # pylint: disable=broad-except\n logger.exception(e)\n item.fail_reason = \"Error loading book\"\n item.save()\n continue\n\n if item.book:\n item.save()\n\n # shelves book and handles reviews\n handle_imported_book(\n source, job.user, item, job.include_reviews, job.privacy\n )\n else:\n item.fail_reason = \"Could not find a match for book\"\n item.save()\n finally:\n job.complete = True\n job.save()\n\n\ndef handle_imported_book(source, user, item, include_reviews, privacy):\n \"\"\"process a csv and then post about it\"\"\"\n if isinstance(item.book, models.Work):\n item.book = item.book.default_edition\n if not item.book:\n return\n\n existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()\n\n # shelve the book if it hasn't been shelved already\n if item.shelf and not existing_shelf:\n desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)\n models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)\n\n for read in item.reads:\n # check for an existing readthrough with the same dates\n if models.ReadThrough.objects.filter(\n user=user,\n book=item.book,\n start_date=read.start_date,\n finish_date=read.finish_date,\n ).exists():\n continue\n read.book = item.book\n read.user = user\n read.save()\n\n if include_reviews and (item.rating or item.review):\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n if item.review:\n review_title = (\n \"Review of {!r} on {!r}\".format(\n item.book.title,\n source,\n )\n if item.review\n else \"\"\n )\n models.Review.objects.create(\n user=user,\n book=item.book,\n name=review_title,\n content=item.review,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n else:\n # just a rating\n models.ReviewRating.objects.create(\n user=user,\n book=item.book,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n", "path": "bookwyrm/importers/importer.py"}]}
| 1,566 | 380 |
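A short aside on the record above: the accepted patch distinguishes a full `Review` (which has body text) from a bare `ReviewRating` (stars only). The branching it introduces can be illustrated in isolation; the helper below is hypothetical and leaves out the actual Django ORM calls:

```python
# Hypothetical illustration of the branching the BookWyrm patch introduces:
# text present -> Review, rating only -> ReviewRating, neither -> nothing.
def choose_review_model(review_text: str, rating: int) -> str:
    if review_text:
        return "Review"        # carries content plus an optional rating
    if rating:
        return "ReviewRating"  # star rating only, no body text
    return "nothing to post"

print(choose_review_model("Loved it", 5))  # Review
print(choose_review_model("", 4))          # ReviewRating
```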
gh_patches_debug_22233
|
rasdani/github-patches
|
git_diff
|
statsmodels__statsmodels-4999
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[MAINT/CLN] remove function explicitly marked as duplicate
In the function docstring:
`duplicate: Skipper added sm.tools.drop_missing`
<b>update</b> The relevant function is not used outside this module; nor is the other function in this module.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/tools/wrappers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Convenience Wrappers
3
4 Created on Sat Oct 30 14:56:35 2010
5
6 Author: josef-pktd
7 License: BSD
8 """
9
10 import numpy as np
11 import statsmodels.api as sm
12 from statsmodels import GLS, WLS, OLS
13
14 def remove_nanrows(y, x):
15 '''remove common rows in [y,x] that contain at least one nan
16
17 TODO: this should be made more flexible,
18 arbitrary number of arrays and 1d or 2d arrays
19
20 duplicate: Skipper added sm.tools.drop_missing
21
22 '''
23 mask = ~np.isnan(y)
24 mask *= ~(np.isnan(x).any(-1)) #* or &
25 y = y[mask]
26 x = x[mask]
27 return y, x
28
29
30 def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,
31 **kwds):
32 '''get linear model with extra options for entry
33
34 dispatches to regular model class and does not wrap the output
35
36 If several options are exclusive, for example sigma and weights, then the
37 chosen class depends on the implementation sequence.
38 '''
39
40 if filter_missing:
41 y, x = remove_nanrows(y, x)
42 #do the same for masked arrays
43
44 if add_const:
45 x = sm.add_constant(x, prepend=True)
46
47 if not sigma is None:
48 return GLS(y, x, sigma=sigma, **kwds)
49 elif not weights is None:
50 return WLS(y, x, weights=weights, **kwds)
51 else:
52 return OLS(y, x, **kwds)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/statsmodels/tools/wrappers.py b/statsmodels/tools/wrappers.py
deleted file mode 100644
--- a/statsmodels/tools/wrappers.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Convenience Wrappers
-
-Created on Sat Oct 30 14:56:35 2010
-
-Author: josef-pktd
-License: BSD
-"""
-
-import numpy as np
-import statsmodels.api as sm
-from statsmodels import GLS, WLS, OLS
-
-def remove_nanrows(y, x):
- '''remove common rows in [y,x] that contain at least one nan
-
- TODO: this should be made more flexible,
- arbitrary number of arrays and 1d or 2d arrays
-
- duplicate: Skipper added sm.tools.drop_missing
-
- '''
- mask = ~np.isnan(y)
- mask *= ~(np.isnan(x).any(-1)) #* or &
- y = y[mask]
- x = x[mask]
- return y, x
-
-
-def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,
- **kwds):
- '''get linear model with extra options for entry
-
- dispatches to regular model class and does not wrap the output
-
- If several options are exclusive, for example sigma and weights, then the
- chosen class depends on the implementation sequence.
- '''
-
- if filter_missing:
- y, x = remove_nanrows(y, x)
- #do the same for masked arrays
-
- if add_const:
- x = sm.add_constant(x, prepend=True)
-
- if not sigma is None:
- return GLS(y, x, sigma=sigma, **kwds)
- elif not weights is None:
- return WLS(y, x, weights=weights, **kwds)
- else:
- return OLS(y, x, **kwds)
|
{"golden_diff": "diff --git a/statsmodels/tools/wrappers.py b/statsmodels/tools/wrappers.py\ndeleted file mode 100644\n--- a/statsmodels/tools/wrappers.py\n+++ /dev/null\n@@ -1,52 +0,0 @@\n-# -*- coding: utf-8 -*-\n-\"\"\"Convenience Wrappers\n-\n-Created on Sat Oct 30 14:56:35 2010\n-\n-Author: josef-pktd\n-License: BSD\n-\"\"\"\n-\n-import numpy as np\n-import statsmodels.api as sm\n-from statsmodels import GLS, WLS, OLS\n-\n-def remove_nanrows(y, x):\n- '''remove common rows in [y,x] that contain at least one nan\n-\n- TODO: this should be made more flexible,\n- arbitrary number of arrays and 1d or 2d arrays\n-\n- duplicate: Skipper added sm.tools.drop_missing\n-\n- '''\n- mask = ~np.isnan(y)\n- mask *= ~(np.isnan(x).any(-1)) #* or &\n- y = y[mask]\n- x = x[mask]\n- return y, x\n-\n-\n-def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,\n- **kwds):\n- '''get linear model with extra options for entry\n-\n- dispatches to regular model class and does not wrap the output\n-\n- If several options are exclusive, for example sigma and weights, then the\n- chosen class depends on the implementation sequence.\n- '''\n-\n- if filter_missing:\n- y, x = remove_nanrows(y, x)\n- #do the same for masked arrays\n-\n- if add_const:\n- x = sm.add_constant(x, prepend=True)\n-\n- if not sigma is None:\n- return GLS(y, x, sigma=sigma, **kwds)\n- elif not weights is None:\n- return WLS(y, x, weights=weights, **kwds)\n- else:\n- return OLS(y, x, **kwds)\n", "issue": "[MAINT/CLN] remove function explicitly marked as duplicate\nIn the function docstring:\r\n`duplicate: Skipper added sm.tools.drop_missing`\r\n\r\n<b>update</b> The relevant function is not used outside this module; nor is the other function in this module.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Convenience Wrappers\n\nCreated on Sat Oct 30 14:56:35 2010\n\nAuthor: josef-pktd\nLicense: BSD\n\"\"\"\n\nimport numpy as np\nimport statsmodels.api as sm\nfrom statsmodels import GLS, WLS, OLS\n\ndef remove_nanrows(y, x):\n '''remove common rows in [y,x] that contain at least one nan\n\n TODO: this should be made more flexible,\n arbitrary number of arrays and 1d or 2d arrays\n\n duplicate: Skipper added sm.tools.drop_missing\n\n '''\n mask = ~np.isnan(y)\n mask *= ~(np.isnan(x).any(-1)) #* or &\n y = y[mask]\n x = x[mask]\n return y, x\n\n\ndef linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,\n **kwds):\n '''get linear model with extra options for entry\n\n dispatches to regular model class and does not wrap the output\n\n If several options are exclusive, for example sigma and weights, then the\n chosen class depends on the implementation sequence.\n '''\n\n if filter_missing:\n y, x = remove_nanrows(y, x)\n #do the same for masked arrays\n\n if add_const:\n x = sm.add_constant(x, prepend=True)\n\n if not sigma is None:\n return GLS(y, x, sigma=sigma, **kwds)\n elif not weights is None:\n return WLS(y, x, weights=weights, **kwds)\n else:\n return OLS(y, x, **kwds)\n", "path": "statsmodels/tools/wrappers.py"}], "after_files": [{"content": null, "path": "statsmodels/tools/wrappers.py"}]}
| 790 | 464 |
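For readers wondering what replaces the deleted helpers: statsmodels' model classes already drop rows with missing values, so `remove_nanrows`/`linmod` add nothing. A minimal sketch, assuming the current public API where models accept `missing='drop'`:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
y = x @ np.array([1.0, -2.0]) + rng.normal(size=50)
y[3] = np.nan                          # introduce a missing row

X = sm.add_constant(x, prepend=True)   # same add_const behaviour as linmod()
result = sm.OLS(y, X, missing="drop").fit()  # drops the NaN row, like remove_nanrows()
print(result.params)
```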
gh_patches_debug_4669
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1441
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change in Python 3.9.5 (and 3.8.10) causes Faker's list_module() to fail
* Faker version: 8.1.2
* OS: macOS 11.3.1
A [regression in Python](https://bugs.python.org/issue44061) breaks Faker, specifically [this line of code in Faker](https://github.com/joke2k/faker/blob/master/faker/utils/loading.py#L35) that calls `pkgutil.iter_modules([path])`.
It's not clear to me from the discussion in that python bug report exactly how they intend to resolve the issue, but I thought I'd flag this here.
### Steps to reproduce
1. Install python 3.9.5 or 3.8.10
1. Install faker
1. `import faker`
### Expected behavior
`import faker` should succeed
### Actual behavior
`import faker` raises an exception
```shell
>>> import faker
>>> import faker
Traceback (most recent call last):
File "/python/3.9/lib/python3.9/pkgutil.py", line 416, in get_importer
importer = sys.path_importer_cache[path_item]
KeyError: PosixPath('/venv/lib/python3.9/site-packages/faker/providers')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/venv/lib/python3.9/site-packages/faker/__init__.py", line 1, in <module>
from faker.factory import Factory
File "/venv/lib/python3.9/site-packages/faker/factory.py", line 7, in <module>
from faker.config import AVAILABLE_LOCALES, DEFAULT_LOCALE, PROVIDERS
File "/venv/lib/python3.9/site-packages/faker/config.py", line 11, in <module>
PROVIDERS = find_available_providers(
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 57, in find_available_providers
for mod in list_module(providers_mod) if mod != '__pycache__'
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 35, in list_module
return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 35, in <listcomp>
return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
File "/python/3.9/lib/python3.9/pkgutil.py", line 130, in iter_modules
for i in importers:
File "/python/3.9/lib/python3.9/pkgutil.py", line 420, in get_importer
importer = path_hook(path_item)
File "<frozen importlib._bootstrap_external>", line 1601, in path_hook_for_FileFinder
File "<frozen importlib._bootstrap_external>", line 1476, in __init__
File "<frozen importlib._bootstrap_external>", line 177, in _path_isabs
AttributeError: 'PosixPath' object has no attribute 'startswith'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/utils/loading.py`
Content:
```
1 import pkgutil
2 import sys
3
4 from importlib import import_module
5 from pathlib import Path
6 from types import ModuleType
7 from typing import List, Set
8
9
10 def get_path(module: ModuleType) -> str:
11 if getattr(sys, 'frozen', False):
12 # frozen
13
14 if getattr(sys, '_MEIPASS', False):
15 # PyInstaller
16 lib_dir = Path(getattr(sys, '_MEIPASS'))
17 else:
18 # others
19 lib_dir = Path(sys.executable).parent / 'lib'
20
21 path = lib_dir.joinpath(*module.__package__.split("."))
22 else:
23 # unfrozen
24 path = Path(module.__file__).parent
25 return str(path)
26
27
28 def list_module(module: ModuleType) -> List[str]:
29 path = get_path(module)
30
31 if getattr(sys, '_MEIPASS', False):
32 # PyInstaller
33 return [file.parent.name for file in Path(path).glob('*/__init__.py')]
34 else:
35 return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
36
37
38 def find_available_locales(providers: List[str]) -> List[str]:
39 available_locales: Set[str] = set()
40
41 for provider_path in providers:
42
43 provider_module = import_module(provider_path)
44 if getattr(provider_module, 'localized', False):
45 langs = list_module(provider_module)
46 available_locales.update(langs)
47 available_locales: List[str] = sorted(available_locales)
48 return available_locales
49
50
51 def find_available_providers(modules: List[ModuleType]) -> List[str]:
52 available_providers = set()
53 for providers_mod in modules:
54 if providers_mod.__package__:
55 providers = [
56 '.'.join([providers_mod.__package__, mod])
57 for mod in list_module(providers_mod) if mod != '__pycache__'
58 ]
59 available_providers.update(providers)
60 return sorted(available_providers)
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/faker/utils/loading.py b/faker/utils/loading.py
--- a/faker/utils/loading.py
+++ b/faker/utils/loading.py
@@ -32,7 +32,7 @@
# PyInstaller
return [file.parent.name for file in Path(path).glob('*/__init__.py')]
else:
- return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
+ return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]
def find_available_locales(providers: List[str]) -> List[str]:
|
{"golden_diff": "diff --git a/faker/utils/loading.py b/faker/utils/loading.py\n--- a/faker/utils/loading.py\n+++ b/faker/utils/loading.py\n@@ -32,7 +32,7 @@\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n- return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n+ return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]\n \n \n def find_available_locales(providers: List[str]) -> List[str]:\n", "issue": "Change in Python 3.9.5 (and 3.8.10) causes Faker's list_module() to fail\n* Faker version: 8.1.2\r\n* OS: macOS 11.3.1\r\n\r\nA [regression in Python](https://bugs.python.org/issue44061) breaks Faker, specifically [this line of code in Faker](https://github.com/joke2k/faker/blob/master/faker/utils/loading.py#L35) that calls `pkgutil.iter_modules([path])`.\r\n\r\nIt's not clear to me from the discussion in that python bug report exactly how they intend to resolve the issue, but I thought I'd flag this here.\r\n\r\n### Steps to reproduce\r\n\r\n1. Install python 3.9.5 or 3.8.10\r\n1. Install faker\r\n1. `import faker`\r\n\r\n### Expected behavior\r\n\r\n`import faker` should succeed\r\n\r\n### Actual behavior\r\n\r\n`import faker` raises an exception\r\n\r\n```shell\r\n>>> import faker\r\n>>> import faker\r\nTraceback (most recent call last):\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 416, in get_importer\r\n importer = sys.path_importer_cache[path_item]\r\nKeyError: PosixPath('/venv/lib/python3.9/site-packages/faker/providers')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/venv/lib/python3.9/site-packages/faker/__init__.py\", line 1, in <module>\r\n from faker.factory import Factory\r\n File \"/venv/lib/python3.9/site-packages/faker/factory.py\", line 7, in <module>\r\n from faker.config import AVAILABLE_LOCALES, DEFAULT_LOCALE, PROVIDERS\r\n File \"/venv/lib/python3.9/site-packages/faker/config.py\", line 11, in <module>\r\n PROVIDERS = find_available_providers(\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 57, in find_available_providers\r\n for mod in list_module(providers_mod) if mod != '__pycache__'\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 35, in list_module\r\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 35, in <listcomp>\r\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 130, in iter_modules\r\n for i in importers:\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 420, in get_importer\r\n importer = path_hook(path_item)\r\n File \"<frozen importlib._bootstrap_external>\", line 1601, in path_hook_for_FileFinder\r\n File \"<frozen importlib._bootstrap_external>\", line 1476, in __init__\r\n File \"<frozen importlib._bootstrap_external>\", line 177, in _path_isabs\r\nAttributeError: 'PosixPath' object has no attribute 'startswith'\r\n```\n", "before_files": [{"content": "import pkgutil\nimport sys\n\nfrom importlib import import_module\nfrom pathlib import Path\nfrom types import ModuleType\nfrom typing import List, Set\n\n\ndef get_path(module: ModuleType) -> str:\n if getattr(sys, 'frozen', False):\n # frozen\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n lib_dir = Path(getattr(sys, 
'_MEIPASS'))\n else:\n # others\n lib_dir = Path(sys.executable).parent / 'lib'\n\n path = lib_dir.joinpath(*module.__package__.split(\".\"))\n else:\n # unfrozen\n path = Path(module.__file__).parent\n return str(path)\n\n\ndef list_module(module: ModuleType) -> List[str]:\n path = get_path(module)\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n\n\ndef find_available_locales(providers: List[str]) -> List[str]:\n available_locales: Set[str] = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n available_locales: List[str] = sorted(available_locales)\n return available_locales\n\n\ndef find_available_providers(modules: List[ModuleType]) -> List[str]:\n available_providers = set()\n for providers_mod in modules:\n if providers_mod.__package__:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}], "after_files": [{"content": "import pkgutil\nimport sys\n\nfrom importlib import import_module\nfrom pathlib import Path\nfrom types import ModuleType\nfrom typing import List, Set\n\n\ndef get_path(module: ModuleType) -> str:\n if getattr(sys, 'frozen', False):\n # frozen\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n lib_dir = Path(getattr(sys, '_MEIPASS'))\n else:\n # others\n lib_dir = Path(sys.executable).parent / 'lib'\n\n path = lib_dir.joinpath(*module.__package__.split(\".\"))\n else:\n # unfrozen\n path = Path(module.__file__).parent\n return str(path)\n\n\ndef list_module(module: ModuleType) -> List[str]:\n path = get_path(module)\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]\n\n\ndef find_available_locales(providers: List[str]) -> List[str]:\n available_locales: Set[str] = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n available_locales: List[str] = sorted(available_locales)\n return available_locales\n\n\ndef find_available_providers(modules: List[ModuleType]) -> List[str]:\n available_providers = set()\n for providers_mod in modules:\n if providers_mod.__package__:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}]}
| 1,528 | 135 |
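To see why the one-line patch works: in the affected CPython releases, `pkgutil.iter_modules` trips over `pathlib.Path` entries in its path list, while plain strings are fine. A standalone reproduction/workaround sketch (using a stdlib package rather than Faker's providers):

```python
import pkgutil
from pathlib import Path

import email  # any stdlib package with submodules works for the demo

pkg_dir = Path(email.__file__).parent

# Passing Path objects hits the _path_isabs regression on 3.8.10 / 3.9.5;
# converting to str first works on every version.
names = [name for _, name, is_pkg in pkgutil.iter_modules([str(pkg_dir)])]
print(sorted(names)[:5])
```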
gh_patches_debug_14375
|
rasdani/github-patches
|
git_diff
|
mabel-dev__opteryx-1467
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🪲 Column Names not Aliased
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
Example from user
~~~sql
SELECT *
FROM $planets AS P
INNER JOIN $satellites AS S
ON P.id = S.id
~~~
Simplified example
~~~sql
SELECT *
FROM $planets
INNER JOIN $satellites
ON $planets.id = $satellites.id
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opteryx/operators/exit_node.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 """
14 Exit Node
15
16 This is a SQL Query Execution Plan Node.
17
18 This does the final preparation before returning results to users.
19
20 This does two things that the projection node doesn't do:
21 - renames columns from the internal names
22 - removes all columns not being returned to the user
23
24 This node doesn't do any calculations, it is a pure Projection.
25 """
26 import time
27 from typing import Generator
28
29 from opteryx.exceptions import AmbiguousIdentifierError
30 from opteryx.exceptions import InvalidInternalStateError
31 from opteryx.models import QueryProperties
32 from opteryx.operators import BasePlanNode
33
34
35 class ExitNode(BasePlanNode):
36 def __init__(self, properties: QueryProperties, **config):
37 super().__init__(properties=properties)
38 self.columns = config.get("columns", [])
39
40 @property
41 def config(self): # pragma: no cover
42 return None
43
44 @property
45 def name(self): # pragma: no cover
46 return "Exit"
47
48 def execute(self) -> Generator:
49 start = time.monotonic_ns()
50 morsels = self._producers[0] # type:ignore
51
52 final_columns = []
53 final_names = []
54 for column in self.columns:
55 final_columns.append(column.schema_column.identity)
56 final_names.append(column.current_name)
57
58 if len(final_columns) != len(set(final_columns)): # pragma: no cover
59 from collections import Counter
60
61 duplicates = [column for column, count in Counter(final_columns).items() if count > 1]
62 matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}
63 raise AmbiguousIdentifierError(
64 message=f"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`"
65 )
66
67 self.statistics.time_exiting += time.monotonic_ns() - start
68 for morsel in morsels.execute():
69 start = time.monotonic_ns()
70 if not set(final_columns).issubset(morsel.column_names): # pragma: no cover
71 mapping = {name: int_name for name, int_name in zip(final_columns, final_names)}
72 missing_references = {
73 mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names
74 }
75
76 raise InvalidInternalStateError(
77 f"The following fields were not in the resultset - {', '.join(missing_references.keys())}"
78 )
79
80 morsel = morsel.select(final_columns)
81 morsel = morsel.rename_columns(final_names)
82
83 self.statistics.time_exiting += time.monotonic_ns() - start
84 yield morsel
85 start = time.monotonic_ns()
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opteryx/operators/exit_node.py b/opteryx/operators/exit_node.py
--- a/opteryx/operators/exit_node.py
+++ b/opteryx/operators/exit_node.py
@@ -64,6 +64,14 @@
message=f"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`"
)
+ if len(set(final_names)) != len(final_names): # we have duplicate names
+ final_names = []
+ for column in self.columns:
+ if column.schema_column.origin:
+ final_names.append(f"{column.schema_column.origin[0]}.{column.current_name}")
+ else:
+ final_names.append(column.qualified_name)
+
self.statistics.time_exiting += time.monotonic_ns() - start
for morsel in morsels.execute():
start = time.monotonic_ns()
|
{"golden_diff": "diff --git a/opteryx/operators/exit_node.py b/opteryx/operators/exit_node.py\n--- a/opteryx/operators/exit_node.py\n+++ b/opteryx/operators/exit_node.py\n@@ -64,6 +64,14 @@\n message=f\"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`\"\n )\n \n+ if len(set(final_names)) != len(final_names): # we have duplicate names\n+ final_names = []\n+ for column in self.columns:\n+ if column.schema_column.origin:\n+ final_names.append(f\"{column.schema_column.origin[0]}.{column.current_name}\")\n+ else:\n+ final_names.append(column.qualified_name)\n+\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n", "issue": "\ud83e\udeb2 Column Names not Aliased\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\nExample from user\r\n~~~sql\r\nSELECT *\r\n FROM $planets AS P\r\n INNER JOIN $satellites AS S\r\n ON P.id = S.id\r\n~~~\r\n\r\nSimplified example\r\n~~~sql\r\nSELECT *\r\n FROM $planets\r\n INNER JOIN $satellites\r\n ON $planets.id = $satellites.id\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExit Node\n\nThis is a SQL Query Execution Plan Node.\n\nThis does the final preparation before returning results to users.\n\nThis does two things that the projection node doesn't do:\n - renames columns from the internal names\n - removes all columns not being returned to the user\n\nThis node doesn't do any calculations, it is a pure Projection.\n\"\"\"\nimport time\nfrom typing import Generator\n\nfrom opteryx.exceptions import AmbiguousIdentifierError\nfrom opteryx.exceptions import InvalidInternalStateError\nfrom opteryx.models import QueryProperties\nfrom opteryx.operators import BasePlanNode\n\n\nclass ExitNode(BasePlanNode):\n def __init__(self, properties: QueryProperties, **config):\n super().__init__(properties=properties)\n self.columns = config.get(\"columns\", [])\n\n @property\n def config(self): # pragma: no cover\n return None\n\n @property\n def name(self): # pragma: no cover\n return \"Exit\"\n\n def execute(self) -> Generator:\n start = time.monotonic_ns()\n morsels = self._producers[0] # type:ignore\n\n final_columns = []\n final_names = []\n for column in self.columns:\n final_columns.append(column.schema_column.identity)\n final_names.append(column.current_name)\n\n if len(final_columns) != len(set(final_columns)): # pragma: no cover\n from collections import Counter\n\n duplicates = [column for column, count in Counter(final_columns).items() if count > 1]\n matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}\n raise AmbiguousIdentifierError(\n message=f\"Query result contains multiple instances of the same column(s) - 
`{'`, `'.join(matches)}`\"\n )\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n if not set(final_columns).issubset(morsel.column_names): # pragma: no cover\n mapping = {name: int_name for name, int_name in zip(final_columns, final_names)}\n missing_references = {\n mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names\n }\n\n raise InvalidInternalStateError(\n f\"The following fields were not in the resultset - {', '.join(missing_references.keys())}\"\n )\n\n morsel = morsel.select(final_columns)\n morsel = morsel.rename_columns(final_names)\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n yield morsel\n start = time.monotonic_ns()\n", "path": "opteryx/operators/exit_node.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExit Node\n\nThis is a SQL Query Execution Plan Node.\n\nThis does the final preparation before returning results to users.\n\nThis does two things that the projection node doesn't do:\n - renames columns from the internal names\n - removes all columns not being returned to the user\n\nThis node doesn't do any calculations, it is a pure Projection.\n\"\"\"\nimport time\nfrom typing import Generator\n\nfrom opteryx.exceptions import AmbiguousIdentifierError\nfrom opteryx.exceptions import InvalidInternalStateError\nfrom opteryx.models import QueryProperties\nfrom opteryx.operators import BasePlanNode\n\n\nclass ExitNode(BasePlanNode):\n def __init__(self, properties: QueryProperties, **config):\n super().__init__(properties=properties)\n self.columns = config.get(\"columns\", [])\n\n @property\n def config(self): # pragma: no cover\n return None\n\n @property\n def name(self): # pragma: no cover\n return \"Exit\"\n\n def execute(self) -> Generator:\n start = time.monotonic_ns()\n morsels = self._producers[0] # type:ignore\n\n final_columns = []\n final_names = []\n for column in self.columns:\n final_columns.append(column.schema_column.identity)\n final_names.append(column.current_name)\n\n if len(final_columns) != len(set(final_columns)): # pragma: no cover\n from collections import Counter\n\n duplicates = [column for column, count in Counter(final_columns).items() if count > 1]\n matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}\n raise AmbiguousIdentifierError(\n message=f\"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`\"\n )\n\n if len(set(final_names)) != len(final_names): # we have duplicate names\n final_names = []\n for column in self.columns:\n if column.schema_column.origin:\n final_names.append(f\"{column.schema_column.origin[0]}.{column.current_name}\")\n else:\n final_names.append(column.qualified_name)\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n if not set(final_columns).issubset(morsel.column_names): # pragma: no cover\n mapping = {name: 
int_name for name, int_name in zip(final_columns, final_names)}\n missing_references = {\n mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names\n }\n\n raise InvalidInternalStateError(\n f\"The following fields were not in the resultset - {', '.join(missing_references.keys())}\"\n )\n\n morsel = morsel.select(final_columns)\n morsel = morsel.rename_columns(final_names)\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n yield morsel\n start = time.monotonic_ns()\n", "path": "opteryx/operators/exit_node.py"}]}
| 1,281 | 198 |
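The fix above resolves duplicate output names by prefixing them with the relation they come from. The technique itself is easy to show outside Opteryx; the function and data below are illustrative only, not Opteryx internals:

```python
from collections import Counter

def disambiguate(columns):
    """columns is a list of (relation, name) pairs in projection order."""
    counts = Counter(name for _, name in columns)
    return [
        f"{relation}.{name}" if counts[name] > 1 else name
        for relation, name in columns
    ]

cols = [("P", "id"), ("P", "name"), ("S", "id"), ("S", "name"), ("S", "gm")]
print(disambiguate(cols))  # ['P.id', 'P.name', 'S.id', 'S.name', 'gm']
```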
gh_patches_debug_16442
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-2613
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Warn on OpenSSL 0.9.8?
Starting in 3.5 weeks, OpenSSL 0.9.8 will officially be unsupported by the upstream team. It's unclear what this will mean for various downstreams (notably RHEL, CentOS, and OS X), but in practice it means there's likely to be a significantly decreased level of attention, research, and patching that goes into it.
I'd like to suggest that, starting with whatever release comes after January 1st, 2016, we emit a warning if users are linked against OpenSSL 0.9.8, suggesting they upgrade to a newer OpenSSL (or OS I guess?).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/bindings/openssl/binding.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import collections
8 import os
9 import threading
10 import types
11
12 from cryptography.exceptions import InternalError
13 from cryptography.hazmat.bindings._openssl import ffi, lib
14 from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES
15
16
17 _OpenSSLError = collections.namedtuple("_OpenSSLError",
18 ["code", "lib", "func", "reason"])
19
20
21 def _consume_errors(lib):
22 errors = []
23 while True:
24 code = lib.ERR_get_error()
25 if code == 0:
26 break
27
28 err_lib = lib.ERR_GET_LIB(code)
29 err_func = lib.ERR_GET_FUNC(code)
30 err_reason = lib.ERR_GET_REASON(code)
31
32 errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))
33 return errors
34
35
36 def _openssl_assert(lib, ok):
37 if not ok:
38 errors = _consume_errors(lib)
39 raise InternalError(
40 "Unknown OpenSSL error. Please file an issue at https://github.com"
41 "/pyca/cryptography/issues with information on how to reproduce "
42 "this. ({0!r})".format(errors),
43 errors
44 )
45
46
47 @ffi.callback("int (*)(unsigned char *, int)", error=-1)
48 def _osrandom_rand_bytes(buf, size):
49 signed = ffi.cast("char *", buf)
50 result = os.urandom(size)
51 signed[0:size] = result
52 return 1
53
54
55 @ffi.callback("int (*)(void)")
56 def _osrandom_rand_status():
57 return 1
58
59
60 def build_conditional_library(lib, conditional_names):
61 conditional_lib = types.ModuleType("lib")
62 excluded_names = set()
63 for condition, names in conditional_names.items():
64 if not getattr(lib, condition):
65 excluded_names |= set(names)
66
67 for attr in dir(lib):
68 if attr not in excluded_names:
69 setattr(conditional_lib, attr, getattr(lib, attr))
70
71 return conditional_lib
72
73
74 class Binding(object):
75 """
76 OpenSSL API wrapper.
77 """
78 lib = None
79 ffi = ffi
80 _lib_loaded = False
81 _locks = None
82 _lock_cb_handle = None
83 _init_lock = threading.Lock()
84 _lock_init_lock = threading.Lock()
85
86 _osrandom_engine_id = ffi.new("const char[]", b"osrandom")
87 _osrandom_engine_name = ffi.new("const char[]", b"osrandom_engine")
88 _osrandom_method = ffi.new(
89 "RAND_METHOD *",
90 dict(bytes=_osrandom_rand_bytes, pseudorand=_osrandom_rand_bytes,
91 status=_osrandom_rand_status)
92 )
93
94 def __init__(self):
95 self._ensure_ffi_initialized()
96
97 @classmethod
98 def _register_osrandom_engine(cls):
99 _openssl_assert(cls.lib, cls.lib.ERR_peek_error() == 0)
100
101 engine = cls.lib.ENGINE_new()
102 _openssl_assert(cls.lib, engine != cls.ffi.NULL)
103 try:
104 result = cls.lib.ENGINE_set_id(engine, cls._osrandom_engine_id)
105 _openssl_assert(cls.lib, result == 1)
106 result = cls.lib.ENGINE_set_name(engine, cls._osrandom_engine_name)
107 _openssl_assert(cls.lib, result == 1)
108 result = cls.lib.ENGINE_set_RAND(engine, cls._osrandom_method)
109 _openssl_assert(cls.lib, result == 1)
110 result = cls.lib.ENGINE_add(engine)
111 if result != 1:
112 errors = _consume_errors(cls.lib)
113 _openssl_assert(
114 cls.lib,
115 errors[0].reason == cls.lib.ENGINE_R_CONFLICTING_ENGINE_ID
116 )
117
118 finally:
119 result = cls.lib.ENGINE_free(engine)
120 _openssl_assert(cls.lib, result == 1)
121
122 @classmethod
123 def _ensure_ffi_initialized(cls):
124 with cls._init_lock:
125 if not cls._lib_loaded:
126 cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)
127 cls._lib_loaded = True
128 # initialize the SSL library
129 cls.lib.SSL_library_init()
130 # adds all ciphers/digests for EVP
131 cls.lib.OpenSSL_add_all_algorithms()
132 # loads error strings for libcrypto and libssl functions
133 cls.lib.SSL_load_error_strings()
134 cls._register_osrandom_engine()
135
136 @classmethod
137 def init_static_locks(cls):
138 with cls._lock_init_lock:
139 cls._ensure_ffi_initialized()
140
141 if not cls._lock_cb_handle:
142 cls._lock_cb_handle = cls.ffi.callback(
143 "void(int, int, const char *, int)",
144 cls._lock_cb
145 )
146
147 # Use Python's implementation if available, importing _ssl triggers
148 # the setup for this.
149 __import__("_ssl")
150
151 if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:
152 return
153
154 # If nothing else has setup a locking callback already, we set up
155 # our own
156 num_locks = cls.lib.CRYPTO_num_locks()
157 cls._locks = [threading.Lock() for n in range(num_locks)]
158
159 cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)
160
161 @classmethod
162 def _lock_cb(cls, mode, n, file, line):
163 lock = cls._locks[n]
164
165 if mode & cls.lib.CRYPTO_LOCK:
166 lock.acquire()
167 elif mode & cls.lib.CRYPTO_UNLOCK:
168 lock.release()
169 else:
170 raise RuntimeError(
171 "Unknown lock mode {0}: lock={1}, file={2}, line={3}.".format(
172 mode, n, file, line
173 )
174 )
175
176
177 # OpenSSL is not thread safe until the locks are initialized. We call this
178 # method in module scope so that it executes with the import lock. On
179 # Pythons < 3.4 this import lock is a global lock, which can prevent a race
180 # condition registering the OpenSSL locks. On Python 3.4+ the import lock
181 # is per module so this approach will not work.
182 Binding.init_static_locks()
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py
--- a/src/cryptography/hazmat/bindings/openssl/binding.py
+++ b/src/cryptography/hazmat/bindings/openssl/binding.py
@@ -8,6 +8,7 @@
import os
import threading
import types
+import warnings
from cryptography.exceptions import InternalError
from cryptography.hazmat.bindings._openssl import ffi, lib
@@ -180,3 +181,11 @@
# condition registering the OpenSSL locks. On Python 3.4+ the import lock
# is per module so this approach will not work.
Binding.init_static_locks()
+
+if Binding.lib.SSLeay() < 0x10001000:
+ warnings.warn(
+ "OpenSSL versions less than 1.0.1 are no longer supported by the "
+ "OpenSSL project, please upgrade. A future version of cryptography "
+ "will drop support for these versions.",
+ DeprecationWarning
+ )
|
{"golden_diff": "diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py\n--- a/src/cryptography/hazmat/bindings/openssl/binding.py\n+++ b/src/cryptography/hazmat/bindings/openssl/binding.py\n@@ -8,6 +8,7 @@\n import os\n import threading\n import types\n+import warnings\n \n from cryptography.exceptions import InternalError\n from cryptography.hazmat.bindings._openssl import ffi, lib\n@@ -180,3 +181,11 @@\n # condition registering the OpenSSL locks. On Python 3.4+ the import lock\n # is per module so this approach will not work.\n Binding.init_static_locks()\n+\n+if Binding.lib.SSLeay() < 0x10001000:\n+ warnings.warn(\n+ \"OpenSSL versions less than 1.0.1 are no longer supported by the \"\n+ \"OpenSSL project, please upgrade. A future version of cryptography \"\n+ \"will drop support for these versions.\",\n+ DeprecationWarning\n+ )\n", "issue": "Warn on OpenSSL 0.9.8?\nStarting in 3.5 weeks OpenSSL 0.9.8 will officially be unsupported by the upstream team. It's unclear what this will mean for various downstreams (notable RHEL, CentOS, and OS X), but in practice it means there's likely to be a significantly decreased level of attention, research, and patching that goes into it.\n\nI'd like to suggest that, starting with whatever release comes after January 1st, 2016, we emit a warning if users are linked against OpenSSL 0.9.8, suggesting they upgrade to a newer OpenSSL (or OS I guess?).\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport collections\nimport os\nimport threading\nimport types\n\nfrom cryptography.exceptions import InternalError\nfrom cryptography.hazmat.bindings._openssl import ffi, lib\nfrom cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES\n\n\n_OpenSSLError = collections.namedtuple(\"_OpenSSLError\",\n [\"code\", \"lib\", \"func\", \"reason\"])\n\n\ndef _consume_errors(lib):\n errors = []\n while True:\n code = lib.ERR_get_error()\n if code == 0:\n break\n\n err_lib = lib.ERR_GET_LIB(code)\n err_func = lib.ERR_GET_FUNC(code)\n err_reason = lib.ERR_GET_REASON(code)\n\n errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))\n return errors\n\n\ndef _openssl_assert(lib, ok):\n if not ok:\n errors = _consume_errors(lib)\n raise InternalError(\n \"Unknown OpenSSL error. Please file an issue at https://github.com\"\n \"/pyca/cryptography/issues with information on how to reproduce \"\n \"this. 
({0!r})\".format(errors),\n errors\n )\n\n\[email protected](\"int (*)(unsigned char *, int)\", error=-1)\ndef _osrandom_rand_bytes(buf, size):\n signed = ffi.cast(\"char *\", buf)\n result = os.urandom(size)\n signed[0:size] = result\n return 1\n\n\[email protected](\"int (*)(void)\")\ndef _osrandom_rand_status():\n return 1\n\n\ndef build_conditional_library(lib, conditional_names):\n conditional_lib = types.ModuleType(\"lib\")\n excluded_names = set()\n for condition, names in conditional_names.items():\n if not getattr(lib, condition):\n excluded_names |= set(names)\n\n for attr in dir(lib):\n if attr not in excluded_names:\n setattr(conditional_lib, attr, getattr(lib, attr))\n\n return conditional_lib\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n lib = None\n ffi = ffi\n _lib_loaded = False\n _locks = None\n _lock_cb_handle = None\n _init_lock = threading.Lock()\n _lock_init_lock = threading.Lock()\n\n _osrandom_engine_id = ffi.new(\"const char[]\", b\"osrandom\")\n _osrandom_engine_name = ffi.new(\"const char[]\", b\"osrandom_engine\")\n _osrandom_method = ffi.new(\n \"RAND_METHOD *\",\n dict(bytes=_osrandom_rand_bytes, pseudorand=_osrandom_rand_bytes,\n status=_osrandom_rand_status)\n )\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _register_osrandom_engine(cls):\n _openssl_assert(cls.lib, cls.lib.ERR_peek_error() == 0)\n\n engine = cls.lib.ENGINE_new()\n _openssl_assert(cls.lib, engine != cls.ffi.NULL)\n try:\n result = cls.lib.ENGINE_set_id(engine, cls._osrandom_engine_id)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_name(engine, cls._osrandom_engine_name)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_RAND(engine, cls._osrandom_method)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_add(engine)\n if result != 1:\n errors = _consume_errors(cls.lib)\n _openssl_assert(\n cls.lib,\n errors[0].reason == cls.lib.ENGINE_R_CONFLICTING_ENGINE_ID\n )\n\n finally:\n result = cls.lib.ENGINE_free(engine)\n _openssl_assert(cls.lib, result == 1)\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n with cls._init_lock:\n if not cls._lib_loaded:\n cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)\n cls._lib_loaded = True\n # initialize the SSL library\n cls.lib.SSL_library_init()\n # adds all ciphers/digests for EVP\n cls.lib.OpenSSL_add_all_algorithms()\n # loads error strings for libcrypto and libssl functions\n cls.lib.SSL_load_error_strings()\n cls._register_osrandom_engine()\n\n @classmethod\n def init_static_locks(cls):\n with cls._lock_init_lock:\n cls._ensure_ffi_initialized()\n\n if not cls._lock_cb_handle:\n cls._lock_cb_handle = cls.ffi.callback(\n \"void(int, int, const char *, int)\",\n cls._lock_cb\n )\n\n # Use Python's implementation if available, importing _ssl triggers\n # the setup for this.\n __import__(\"_ssl\")\n\n if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:\n return\n\n # If nothing else has setup a locking callback already, we set up\n # our own\n num_locks = cls.lib.CRYPTO_num_locks()\n cls._locks = [threading.Lock() for n in range(num_locks)]\n\n cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)\n\n @classmethod\n def _lock_cb(cls, mode, n, file, line):\n lock = cls._locks[n]\n\n if mode & cls.lib.CRYPTO_LOCK:\n lock.acquire()\n elif mode & cls.lib.CRYPTO_UNLOCK:\n lock.release()\n else:\n raise RuntimeError(\n \"Unknown lock mode {0}: lock={1}, file={2}, line={3}.\".format(\n mode, n, file, line\n )\n )\n\n\n# 
OpenSSL is not thread safe until the locks are initialized. We call this\n# method in module scope so that it executes with the import lock. On\n# Pythons < 3.4 this import lock is a global lock, which can prevent a race\n# condition registering the OpenSSL locks. On Python 3.4+ the import lock\n# is per module so this approach will not work.\nBinding.init_static_locks()\n", "path": "src/cryptography/hazmat/bindings/openssl/binding.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport collections\nimport os\nimport threading\nimport types\nimport warnings\n\nfrom cryptography.exceptions import InternalError\nfrom cryptography.hazmat.bindings._openssl import ffi, lib\nfrom cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES\n\n\n_OpenSSLError = collections.namedtuple(\"_OpenSSLError\",\n [\"code\", \"lib\", \"func\", \"reason\"])\n\n\ndef _consume_errors(lib):\n errors = []\n while True:\n code = lib.ERR_get_error()\n if code == 0:\n break\n\n err_lib = lib.ERR_GET_LIB(code)\n err_func = lib.ERR_GET_FUNC(code)\n err_reason = lib.ERR_GET_REASON(code)\n\n errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))\n return errors\n\n\ndef _openssl_assert(lib, ok):\n if not ok:\n errors = _consume_errors(lib)\n raise InternalError(\n \"Unknown OpenSSL error. Please file an issue at https://github.com\"\n \"/pyca/cryptography/issues with information on how to reproduce \"\n \"this. ({0!r})\".format(errors),\n errors\n )\n\n\[email protected](\"int (*)(unsigned char *, int)\", error=-1)\ndef _osrandom_rand_bytes(buf, size):\n signed = ffi.cast(\"char *\", buf)\n result = os.urandom(size)\n signed[0:size] = result\n return 1\n\n\[email protected](\"int (*)(void)\")\ndef _osrandom_rand_status():\n return 1\n\n\ndef build_conditional_library(lib, conditional_names):\n conditional_lib = types.ModuleType(\"lib\")\n excluded_names = set()\n for condition, names in conditional_names.items():\n if not getattr(lib, condition):\n excluded_names |= set(names)\n\n for attr in dir(lib):\n if attr not in excluded_names:\n setattr(conditional_lib, attr, getattr(lib, attr))\n\n return conditional_lib\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n lib = None\n ffi = ffi\n _lib_loaded = False\n _locks = None\n _lock_cb_handle = None\n _init_lock = threading.Lock()\n _lock_init_lock = threading.Lock()\n\n _osrandom_engine_id = ffi.new(\"const char[]\", b\"osrandom\")\n _osrandom_engine_name = ffi.new(\"const char[]\", b\"osrandom_engine\")\n _osrandom_method = ffi.new(\n \"RAND_METHOD *\",\n dict(bytes=_osrandom_rand_bytes, pseudorand=_osrandom_rand_bytes,\n status=_osrandom_rand_status)\n )\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _register_osrandom_engine(cls):\n _openssl_assert(cls.lib, cls.lib.ERR_peek_error() == 0)\n\n engine = cls.lib.ENGINE_new()\n _openssl_assert(cls.lib, engine != cls.ffi.NULL)\n try:\n result = cls.lib.ENGINE_set_id(engine, cls._osrandom_engine_id)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_name(engine, cls._osrandom_engine_name)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_RAND(engine, cls._osrandom_method)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_add(engine)\n if result != 1:\n 
errors = _consume_errors(cls.lib)\n _openssl_assert(\n cls.lib,\n errors[0].reason == cls.lib.ENGINE_R_CONFLICTING_ENGINE_ID\n )\n\n finally:\n result = cls.lib.ENGINE_free(engine)\n _openssl_assert(cls.lib, result == 1)\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n with cls._init_lock:\n if not cls._lib_loaded:\n cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)\n cls._lib_loaded = True\n # initialize the SSL library\n cls.lib.SSL_library_init()\n # adds all ciphers/digests for EVP\n cls.lib.OpenSSL_add_all_algorithms()\n # loads error strings for libcrypto and libssl functions\n cls.lib.SSL_load_error_strings()\n cls._register_osrandom_engine()\n\n @classmethod\n def init_static_locks(cls):\n with cls._lock_init_lock:\n cls._ensure_ffi_initialized()\n\n if not cls._lock_cb_handle:\n cls._lock_cb_handle = cls.ffi.callback(\n \"void(int, int, const char *, int)\",\n cls._lock_cb\n )\n\n # Use Python's implementation if available, importing _ssl triggers\n # the setup for this.\n __import__(\"_ssl\")\n\n if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:\n return\n\n # If nothing else has setup a locking callback already, we set up\n # our own\n num_locks = cls.lib.CRYPTO_num_locks()\n cls._locks = [threading.Lock() for n in range(num_locks)]\n\n cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)\n\n @classmethod\n def _lock_cb(cls, mode, n, file, line):\n lock = cls._locks[n]\n\n if mode & cls.lib.CRYPTO_LOCK:\n lock.acquire()\n elif mode & cls.lib.CRYPTO_UNLOCK:\n lock.release()\n else:\n raise RuntimeError(\n \"Unknown lock mode {0}: lock={1}, file={2}, line={3}.\".format(\n mode, n, file, line\n )\n )\n\n\n# OpenSSL is not thread safe until the locks are initialized. We call this\n# method in module scope so that it executes with the import lock. On\n# Pythons < 3.4 this import lock is a global lock, which can prevent a race\n# condition registering the OpenSSL locks. On Python 3.4+ the import lock\n# is per module so this approach will not work.\nBinding.init_static_locks()\n\nif Binding.lib.SSLeay() < 0x10001000:\n warnings.warn(\n \"OpenSSL versions less than 1.0.1 are no longer supported by the \"\n \"OpenSSL project, please upgrade. A future version of cryptography \"\n \"will drop support for these versions.\",\n DeprecationWarning\n )\n", "path": "src/cryptography/hazmat/bindings/openssl/binding.py"}]}
| 2,206 | 243 |
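The gate added above compares `SSLeay()` with `0x10001000`, the packed version number for OpenSSL 1.0.1. A quick way to check what your own interpreter links against is the stdlib `ssl` module; note it reports the OpenSSL used by `_ssl`, which is not necessarily the copy cryptography's bindings load, so treat it as a rough check only:

```python
import ssl

print(ssl.OPENSSL_VERSION)               # e.g. "OpenSSL 0.9.8zh 14 Jan 2016"
print(hex(ssl.OPENSSL_VERSION_NUMBER))   # packed form, 0xMNNFFPPS
# Anything below 0x10001000 is older than 1.0.1 and would trigger the new warning.
print(ssl.OPENSSL_VERSION_NUMBER < 0x10001000)
```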
gh_patches_debug_31113
|
rasdani/github-patches
|
git_diff
|
pymeasure__pymeasure-867
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`VISAAdapter` still terminating on default term character in `read_bytes(-1)`
Pretty odd and specific issue, not sure if this belongs here or on PyVISA.
When I try to read the complete buffer in a serial connection using the `VISAAdapter`, it still breaks on the byte corresponding to `\n`:
```
def __init__(self, adapter, name="Velleman K8090", timeout=1000, **kwargs):
super().__init__(
adapter,
name=name,
asrl={"baud_rate": 19200},
write_termination="",
read_termination=chr(0x0F),
timeout=timeout,
**kwargs,
)
# ...
def read(self):
response = self.read_bytes(-1)
# `response` will end with "\n", even though there are more bytes in the buffer!
```
Encountered in #859.
It seems the issue is twofold: in this code any termchar should be ignored, and it's even responding to the wrong termchar.
This is with the `pyvisa-py` backend.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymeasure/adapters/visa.py`
Content:
```
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2023 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 import logging
26 from warnings import warn
27
28 import pyvisa
29 import numpy as np
30
31 from .adapter import Adapter
32 from .protocol import ProtocolAdapter
33
34 log = logging.getLogger(__name__)
35 log.addHandler(logging.NullHandler())
36
37
38 # noinspection PyPep8Naming,PyUnresolvedReferences
39 class VISAAdapter(Adapter):
40 """ Adapter class for the VISA library, using PyVISA to communicate with instruments.
41
42 The workhorse of our library, used by most instruments.
43
44 :param resource_name: A
45 `VISA resource string <https://pyvisa.readthedocs.io/en/latest/introduction/names.html>`__
46 or GPIB address integer that identifies the target of the connection
47 :param visa_library: PyVISA VisaLibrary Instance, path of the VISA library or VisaLibrary spec
48 string (``@py`` or ``@ivi``). If not given, the default for the platform will be used.
49 :param preprocess_reply: An optional callable used to preprocess strings
50 received from the instrument. The callable returns the processed string.
51
52 .. deprecated:: 0.11
53 Implement it in the instrument's `read` method instead.
54
55 :param float query_delay: Time in s to wait after writing and before reading.
56
57 .. deprecated:: 0.11
58 Implement it in the instrument's `wait_for` method instead.
59
60 :param log: Parent logger of the 'Adapter' logger.
61 :param \\**kwargs: Keyword arguments for configuring the PyVISA connection.
62
63 :Kwargs:
64 Keyword arguments are used to configure the connection created by PyVISA. This is
65 complicated by the fact that *which* arguments are valid depends on the interface (e.g.
66 serial, GPIB, TCPI/IP, USB) determined by the current ``resource_name``.
67
68 A flexible process is used to easily define reasonable *default values* for
69 different instrument interfaces, but also enable the instrument user to *override any
70 setting* if their situation demands it.
71
72 A kwarg that names a pyVISA interface type (most commonly ``asrl``, ``gpib``, ``tcpip``, or
73 ``usb``) is a dictionary with keyword arguments defining defaults specific to that
74 interface. Example: ``asrl={'baud_rate': 4200}``.
75
76 All other kwargs are either generally valid (e.g. ``timeout=500``) or override any default
77 settings from the interface-specific entries above. For example, passing
78 ``baud_rate=115200`` when connecting via a resource name ``ASRL1`` would override a
79 default of 4200 defined as above.
80
81 See :ref:`connection_settings` for how to tweak settings when *connecting* to an instrument.
82 See :ref:`default_connection_settings` for how to best define default settings when
83 *implementing an instrument*.
84 """
85
86 def __init__(self, resource_name, visa_library='', preprocess_reply=None,
87 query_delay=0, log=None, **kwargs):
88 super().__init__(preprocess_reply=preprocess_reply, log=log)
89 if query_delay:
90 warn(("Parameter `query_delay` is deprecated. "
91 "Implement in Instrument's `wait_for` instead."),
92 FutureWarning)
93 kwargs.setdefault("query_delay", query_delay)
94 self.query_delay = query_delay
95 if isinstance(resource_name, ProtocolAdapter):
96 self.connection = resource_name
97 self.connection.write_raw = self.connection.write_bytes
98 self.read_bytes = self.connection.read_bytes
99 return
100 elif isinstance(resource_name, VISAAdapter):
101 # Allow to reuse the connection.
102 self.resource_name = getattr(resource_name, "resource_name", None)
103 self.connection = resource_name.connection
104 self.manager = resource_name.manager
105 self.query_delay = resource_name.query_delay
106 return
107 elif isinstance(resource_name, int):
108 resource_name = "GPIB0::%d::INSTR" % resource_name
109
110 self.resource_name = resource_name
111 self.manager = pyvisa.ResourceManager(visa_library)
112
113 # Clean up kwargs considering the interface type matching resource_name
114 if_type = self.manager.resource_info(self.resource_name).interface_type
115 for key in list(kwargs.keys()): # iterate over a copy of the keys as we modify kwargs
116 # Remove all interface-specific kwargs:
117 if key in pyvisa.constants.InterfaceType.__members__:
118 if getattr(pyvisa.constants.InterfaceType, key) is if_type:
119 # For the present interface, dump contents into kwargs first if they are not
120 # present already. This way, it is possible to override default values with
121 # kwargs passed to Instrument.__init__()
122 for k, v in kwargs[key].items():
123 kwargs.setdefault(k, v)
124 del kwargs[key]
125
126 self.connection = self.manager.open_resource(
127 resource_name,
128 **kwargs
129 )
130
131 def close(self):
132 """Close the connection.
133
134 .. note::
135
136 This closes the connection to the resource for all adapters using
137 it currently (e.g. different adapters using the same GPIB line).
138 """
139 super().close()
140 try:
141 self.manager.close()
142 except AttributeError:
143 pass # Closed from another adapter using the same connection.
144
145 def _write(self, command, **kwargs):
146 """Write a string command to the instrument appending `write_termination`.
147
148 :param str command: Command string to be sent to the instrument
149 (without termination).
150 :param \\**kwargs: Keyword arguments for the connection itself.
151 """
152 self.connection.write(command, **kwargs)
153
154 def _write_bytes(self, content, **kwargs):
155 """Write the bytes `content` to the instrument.
156
157 :param bytes content: The bytes to write to the instrument.
158 :param \\**kwargs: Keyword arguments for the connection itself.
159 """
160 self.connection.write_raw(content, **kwargs)
161
162 def _read(self, **kwargs):
163 """Read up to (excluding) `read_termination` or the whole read buffer.
164
165 :param \\**kwargs: Keyword arguments for the connection itself.
166 :returns str: ASCII response of the instrument (excluding read_termination).
167 """
168 return self.connection.read(**kwargs)
169
170 def _read_bytes(self, count, break_on_termchar=False, **kwargs):
171 """Read a certain number of bytes from the instrument.
172
173 :param int count: Number of bytes to read. A value of -1 indicates to
174 read from the whole read buffer.
175 :param bool break_on_termchar: Stop reading at a termination character.
176 :param \\**kwargs: Keyword arguments for the connection itself.
177 :returns bytes: Bytes response of the instrument (including termination).
178 """
179 if count >= 0:
180 return self.connection.read_bytes(count, break_on_termchar=break_on_termchar, **kwargs)
181 elif break_on_termchar:
182 return self.connection.read_raw(None, **kwargs)
183 else:
184 read_termination = self.connection.read_termination
185 self.connection.read_termination = None
186 # Try except allows to set the read_termination even after an error.
187 try:
188 return self.connection.read_raw(**kwargs)
189 finally:
190 self.connection.read_termination = read_termination
191
192 def ask(self, command):
193 """ Writes the command to the instrument and returns the resulting
194 ASCII response
195
196 .. deprecated:: 0.11
197 Call `Instrument.ask` instead.
198
199 :param command: SCPI command string to be sent to the instrument
200 :returns: String ASCII response of the instrument
201 """
202 warn("`Adapter.ask` is deprecated, call `Instrument.ask` instead.", FutureWarning)
203 return self.connection.query(command)
204
205 def ask_values(self, command, **kwargs):
206 """ Writes a command to the instrument and returns a list of formatted
207 values from the result. This leverages the `query_ascii_values` method
208 in PyVISA.
209
210 .. deprecated:: 0.11
211 Call `Instrument.values` instead.
212
213 :param command: SCPI command to be sent to the instrument
214 :param \\**kwargs: Key-word arguments to pass onto `query_ascii_values`
215 :returns: Formatted response of the instrument.
216 """
217 warn("`Adapter.ask_values` is deprecated, call `Instrument.values` instead.",
218 FutureWarning)
219
220 return self.connection.query_ascii_values(command, **kwargs)
221
222 def binary_values(self, command, header_bytes=0, dtype=np.float32):
223 """ Returns a numpy array from a query for binary data
224
225 .. deprecated:: 0.11
226 Call `Instrument.binary_values` instead.
227
228 :param command: SCPI command to be sent to the instrument
229 :param header_bytes: Integer number of bytes to ignore in header
230 :param dtype: The NumPy data type to format the values with
231 :returns: NumPy array of values
232 """
233 warn("`Adapter.binary_values` is deprecated, call `Instrument.binary_values` instead.",
234 FutureWarning)
235 self.connection.write(command)
236 binary = self.connection.read_raw()
237 # header = binary[:header_bytes]
238 data = binary[header_bytes:]
239 return np.fromstring(data, dtype=dtype)
240
241 def wait_for_srq(self, timeout=25, delay=0.1):
242 """ Block until a SRQ, and leave the bit high
243
244 :param timeout: Timeout duration in seconds
245 :param delay: Time delay between checking SRQ in seconds
246 """
247 self.connection.wait_for_srq(timeout * 1000)
248
249 def flush_read_buffer(self):
250 """ Flush and discard the input buffer
251
252 As detailed by pyvisa, discard the read buffer contents and if data was present
253 in the read buffer and no END-indicator was present, read from the device until
254 encountering an END indicator (which causes loss of data).
255 """
256 try:
257 self.connection.flush(pyvisa.constants.BufferOperation.discard_read_buffer)
258 except NotImplementedError:
259 # NotImplementedError is raised when using resource types other than `asrl`
260 # in conjunction with pyvisa-py.
261 # Upstream issue: https://github.com/pyvisa/pyvisa-py/issues/348
262 # fake discarding the read buffer by reading all available messages.
263 timeout = self.connection.timeout
264 self.connection.timeout = 0
265 try:
266 self.read_bytes(-1)
267 except pyvisa.errors.VisaIOError:
268 pass
269 finally:
270 self.connection.timeout = timeout
271
272 def __repr__(self):
273 return "<VISAAdapter(resource='%s')>" % self.connection.resource_name
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pymeasure/adapters/visa.py b/pymeasure/adapters/visa.py
--- a/pymeasure/adapters/visa.py
+++ b/pymeasure/adapters/visa.py
@@ -171,7 +171,7 @@
"""Read a certain number of bytes from the instrument.
:param int count: Number of bytes to read. A value of -1 indicates to
- read from the whole read buffer.
+ read from the whole read buffer until timeout.
:param bool break_on_termchar: Stop reading at a termination character.
:param \\**kwargs: Keyword arguments for the connection itself.
:returns bytes: Bytes response of the instrument (including termination).
@@ -181,13 +181,17 @@
elif break_on_termchar:
return self.connection.read_raw(None, **kwargs)
else:
- read_termination = self.connection.read_termination
- self.connection.read_termination = None
- # Try except allows to set the read_termination even after an error.
- try:
- return self.connection.read_raw(**kwargs)
- finally:
- self.connection.read_termination = read_termination
+ # pyvisa's `read_raw` reads until newline, if no termination_character defined
+ # and if not configured to stop at a termination lane etc.
+ # see https://github.com/pyvisa/pyvisa/issues/728
+ result = bytearray()
+ while True:
+ try:
+ result.extend(self.connection.read_bytes(1))
+ except pyvisa.errors.VisaIOError as exc:
+ if exc.error_code == pyvisa.constants.StatusCode.error_timeout:
+ return bytes(result)
+ raise
def ask(self, command):
""" Writes the command to the instrument and returns the resulting
|
{"golden_diff": "diff --git a/pymeasure/adapters/visa.py b/pymeasure/adapters/visa.py\n--- a/pymeasure/adapters/visa.py\n+++ b/pymeasure/adapters/visa.py\n@@ -171,7 +171,7 @@\n \"\"\"Read a certain number of bytes from the instrument.\n \n :param int count: Number of bytes to read. A value of -1 indicates to\n- read from the whole read buffer.\n+ read from the whole read buffer until timeout.\n :param bool break_on_termchar: Stop reading at a termination character.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns bytes: Bytes response of the instrument (including termination).\n@@ -181,13 +181,17 @@\n elif break_on_termchar:\n return self.connection.read_raw(None, **kwargs)\n else:\n- read_termination = self.connection.read_termination\n- self.connection.read_termination = None\n- # Try except allows to set the read_termination even after an error.\n- try:\n- return self.connection.read_raw(**kwargs)\n- finally:\n- self.connection.read_termination = read_termination\n+ # pyvisa's `read_raw` reads until newline, if no termination_character defined\n+ # and if not configured to stop at a termination lane etc.\n+ # see https://github.com/pyvisa/pyvisa/issues/728\n+ result = bytearray()\n+ while True:\n+ try:\n+ result.extend(self.connection.read_bytes(1))\n+ except pyvisa.errors.VisaIOError as exc:\n+ if exc.error_code == pyvisa.constants.StatusCode.error_timeout:\n+ return bytes(result)\n+ raise\n \n def ask(self, command):\n \"\"\" Writes the command to the instrument and returns the resulting\n", "issue": "`VISAAdapter` still terminating on default term character in `read_bytes(-1)`\nPretty odd and specific issue, not sure if this belong here or on PyVisa.\r\n\r\nWhen I try to read the complete buffer in a serial connection using the `VISAAdapter`, it still breaks on the byte corresponding to `\\n`:\r\n\r\n```\r\n def __init__(self, adapter, name=\"Velleman K8090\", timeout=1000, **kwargs):\r\n super().__init__(\r\n adapter,\r\n name=name,\r\n asrl={\"baud_rate\": 19200},\r\n write_termination=\"\",\r\n read_termination=chr(0x0F),\r\n timeout=timeout,\r\n **kwargs,\r\n )\r\n \r\n # ...\r\n\r\n def read(self):\r\n response = self.read_bytes(-1)\r\n\r\n # `response` will end with \"\\n\", even though there are more bytes in the buffer!\r\n```\r\n\r\nEncountered in #859 .\r\n\r\nIt seems the issue is two fold: in this code any termchar should be ignored and it's even responding to the wrong termchar.\r\n\r\nThis is with the `pyvisa-py` backend.\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport logging\nfrom warnings import warn\n\nimport pyvisa\nimport numpy as np\n\nfrom .adapter import Adapter\nfrom .protocol import ProtocolAdapter\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\n# noinspection PyPep8Naming,PyUnresolvedReferences\nclass VISAAdapter(Adapter):\n \"\"\" Adapter class for the VISA library, using PyVISA to communicate with instruments.\n\n The workhorse of our library, used by most instruments.\n\n :param resource_name: A\n `VISA resource string <https://pyvisa.readthedocs.io/en/latest/introduction/names.html>`__\n or GPIB address integer that identifies the target of the connection\n :param visa_library: PyVISA VisaLibrary Instance, path of the VISA library or VisaLibrary spec\n string (``@py`` or ``@ivi``). If not given, the default for the platform will be used.\n :param preprocess_reply: An optional callable used to preprocess strings\n received from the instrument. The callable returns the processed string.\n\n .. deprecated:: 0.11\n Implement it in the instrument's `read` method instead.\n\n :param float query_delay: Time in s to wait after writing and before reading.\n\n .. deprecated:: 0.11\n Implement it in the instrument's `wait_for` method instead.\n\n :param log: Parent logger of the 'Adapter' logger.\n :param \\\\**kwargs: Keyword arguments for configuring the PyVISA connection.\n\n :Kwargs:\n Keyword arguments are used to configure the connection created by PyVISA. This is\n complicated by the fact that *which* arguments are valid depends on the interface (e.g.\n serial, GPIB, TCPI/IP, USB) determined by the current ``resource_name``.\n\n A flexible process is used to easily define reasonable *default values* for\n different instrument interfaces, but also enable the instrument user to *override any\n setting* if their situation demands it.\n\n A kwarg that names a pyVISA interface type (most commonly ``asrl``, ``gpib``, ``tcpip``, or\n ``usb``) is a dictionary with keyword arguments defining defaults specific to that\n interface. Example: ``asrl={'baud_rate': 4200}``.\n\n All other kwargs are either generally valid (e.g. ``timeout=500``) or override any default\n settings from the interface-specific entries above. For example, passing\n ``baud_rate=115200`` when connecting via a resource name ``ASRL1`` would override a\n default of 4200 defined as above.\n\n See :ref:`connection_settings` for how to tweak settings when *connecting* to an instrument.\n See :ref:`default_connection_settings` for how to best define default settings when\n *implementing an instrument*.\n \"\"\"\n\n def __init__(self, resource_name, visa_library='', preprocess_reply=None,\n query_delay=0, log=None, **kwargs):\n super().__init__(preprocess_reply=preprocess_reply, log=log)\n if query_delay:\n warn((\"Parameter `query_delay` is deprecated. 
\"\n \"Implement in Instrument's `wait_for` instead.\"),\n FutureWarning)\n kwargs.setdefault(\"query_delay\", query_delay)\n self.query_delay = query_delay\n if isinstance(resource_name, ProtocolAdapter):\n self.connection = resource_name\n self.connection.write_raw = self.connection.write_bytes\n self.read_bytes = self.connection.read_bytes\n return\n elif isinstance(resource_name, VISAAdapter):\n # Allow to reuse the connection.\n self.resource_name = getattr(resource_name, \"resource_name\", None)\n self.connection = resource_name.connection\n self.manager = resource_name.manager\n self.query_delay = resource_name.query_delay\n return\n elif isinstance(resource_name, int):\n resource_name = \"GPIB0::%d::INSTR\" % resource_name\n\n self.resource_name = resource_name\n self.manager = pyvisa.ResourceManager(visa_library)\n\n # Clean up kwargs considering the interface type matching resource_name\n if_type = self.manager.resource_info(self.resource_name).interface_type\n for key in list(kwargs.keys()): # iterate over a copy of the keys as we modify kwargs\n # Remove all interface-specific kwargs:\n if key in pyvisa.constants.InterfaceType.__members__:\n if getattr(pyvisa.constants.InterfaceType, key) is if_type:\n # For the present interface, dump contents into kwargs first if they are not\n # present already. This way, it is possible to override default values with\n # kwargs passed to Instrument.__init__()\n for k, v in kwargs[key].items():\n kwargs.setdefault(k, v)\n del kwargs[key]\n\n self.connection = self.manager.open_resource(\n resource_name,\n **kwargs\n )\n\n def close(self):\n \"\"\"Close the connection.\n\n .. note::\n\n This closes the connection to the resource for all adapters using\n it currently (e.g. different adapters using the same GPIB line).\n \"\"\"\n super().close()\n try:\n self.manager.close()\n except AttributeError:\n pass # Closed from another adapter using the same connection.\n\n def _write(self, command, **kwargs):\n \"\"\"Write a string command to the instrument appending `write_termination`.\n\n :param str command: Command string to be sent to the instrument\n (without termination).\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n self.connection.write(command, **kwargs)\n\n def _write_bytes(self, content, **kwargs):\n \"\"\"Write the bytes `content` to the instrument.\n\n :param bytes content: The bytes to write to the instrument.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n self.connection.write_raw(content, **kwargs)\n\n def _read(self, **kwargs):\n \"\"\"Read up to (excluding) `read_termination` or the whole read buffer.\n\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns str: ASCII response of the instrument (excluding read_termination).\n \"\"\"\n return self.connection.read(**kwargs)\n\n def _read_bytes(self, count, break_on_termchar=False, **kwargs):\n \"\"\"Read a certain number of bytes from the instrument.\n\n :param int count: Number of bytes to read. 
A value of -1 indicates to\n read from the whole read buffer.\n :param bool break_on_termchar: Stop reading at a termination character.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns bytes: Bytes response of the instrument (including termination).\n \"\"\"\n if count >= 0:\n return self.connection.read_bytes(count, break_on_termchar=break_on_termchar, **kwargs)\n elif break_on_termchar:\n return self.connection.read_raw(None, **kwargs)\n else:\n read_termination = self.connection.read_termination\n self.connection.read_termination = None\n # Try except allows to set the read_termination even after an error.\n try:\n return self.connection.read_raw(**kwargs)\n finally:\n self.connection.read_termination = read_termination\n\n def ask(self, command):\n \"\"\" Writes the command to the instrument and returns the resulting\n ASCII response\n\n .. deprecated:: 0.11\n Call `Instrument.ask` instead.\n\n :param command: SCPI command string to be sent to the instrument\n :returns: String ASCII response of the instrument\n \"\"\"\n warn(\"`Adapter.ask` is deprecated, call `Instrument.ask` instead.\", FutureWarning)\n return self.connection.query(command)\n\n def ask_values(self, command, **kwargs):\n \"\"\" Writes a command to the instrument and returns a list of formatted\n values from the result. This leverages the `query_ascii_values` method\n in PyVISA.\n\n .. deprecated:: 0.11\n Call `Instrument.values` instead.\n\n :param command: SCPI command to be sent to the instrument\n :param \\\\**kwargs: Key-word arguments to pass onto `query_ascii_values`\n :returns: Formatted response of the instrument.\n \"\"\"\n warn(\"`Adapter.ask_values` is deprecated, call `Instrument.values` instead.\",\n FutureWarning)\n\n return self.connection.query_ascii_values(command, **kwargs)\n\n def binary_values(self, command, header_bytes=0, dtype=np.float32):\n \"\"\" Returns a numpy array from a query for binary data\n\n .. 
deprecated:: 0.11\n Call `Instrument.binary_values` instead.\n\n :param command: SCPI command to be sent to the instrument\n :param header_bytes: Integer number of bytes to ignore in header\n :param dtype: The NumPy data type to format the values with\n :returns: NumPy array of values\n \"\"\"\n warn(\"`Adapter.binary_values` is deprecated, call `Instrument.binary_values` instead.\",\n FutureWarning)\n self.connection.write(command)\n binary = self.connection.read_raw()\n # header = binary[:header_bytes]\n data = binary[header_bytes:]\n return np.fromstring(data, dtype=dtype)\n\n def wait_for_srq(self, timeout=25, delay=0.1):\n \"\"\" Block until a SRQ, and leave the bit high\n\n :param timeout: Timeout duration in seconds\n :param delay: Time delay between checking SRQ in seconds\n \"\"\"\n self.connection.wait_for_srq(timeout * 1000)\n\n def flush_read_buffer(self):\n \"\"\" Flush and discard the input buffer\n\n As detailed by pyvisa, discard the read buffer contents and if data was present\n in the read buffer and no END-indicator was present, read from the device until\n encountering an END indicator (which causes loss of data).\n \"\"\"\n try:\n self.connection.flush(pyvisa.constants.BufferOperation.discard_read_buffer)\n except NotImplementedError:\n # NotImplementedError is raised when using resource types other than `asrl`\n # in conjunction with pyvisa-py.\n # Upstream issue: https://github.com/pyvisa/pyvisa-py/issues/348\n # fake discarding the read buffer by reading all available messages.\n timeout = self.connection.timeout\n self.connection.timeout = 0\n try:\n self.read_bytes(-1)\n except pyvisa.errors.VisaIOError:\n pass\n finally:\n self.connection.timeout = timeout\n\n def __repr__(self):\n return \"<VISAAdapter(resource='%s')>\" % self.connection.resource_name\n", "path": "pymeasure/adapters/visa.py"}], "after_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport logging\nfrom warnings import warn\n\nimport pyvisa\nimport numpy as np\n\nfrom .adapter import Adapter\nfrom .protocol import ProtocolAdapter\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\n# noinspection PyPep8Naming,PyUnresolvedReferences\nclass VISAAdapter(Adapter):\n \"\"\" Adapter class for the VISA library, using PyVISA to communicate with instruments.\n\n The workhorse of our library, used by most instruments.\n\n :param resource_name: A\n `VISA resource string <https://pyvisa.readthedocs.io/en/latest/introduction/names.html>`__\n or GPIB address integer that identifies the target of the connection\n :param visa_library: PyVISA VisaLibrary Instance, path of the VISA library or VisaLibrary spec\n string (``@py`` or ``@ivi``). If not given, the default for the platform will be used.\n :param preprocess_reply: An optional callable used to preprocess strings\n received from the instrument. The callable returns the processed string.\n\n .. deprecated:: 0.11\n Implement it in the instrument's `read` method instead.\n\n :param float query_delay: Time in s to wait after writing and before reading.\n\n .. deprecated:: 0.11\n Implement it in the instrument's `wait_for` method instead.\n\n :param log: Parent logger of the 'Adapter' logger.\n :param \\\\**kwargs: Keyword arguments for configuring the PyVISA connection.\n\n :Kwargs:\n Keyword arguments are used to configure the connection created by PyVISA. This is\n complicated by the fact that *which* arguments are valid depends on the interface (e.g.\n serial, GPIB, TCPI/IP, USB) determined by the current ``resource_name``.\n\n A flexible process is used to easily define reasonable *default values* for\n different instrument interfaces, but also enable the instrument user to *override any\n setting* if their situation demands it.\n\n A kwarg that names a pyVISA interface type (most commonly ``asrl``, ``gpib``, ``tcpip``, or\n ``usb``) is a dictionary with keyword arguments defining defaults specific to that\n interface. Example: ``asrl={'baud_rate': 4200}``.\n\n All other kwargs are either generally valid (e.g. ``timeout=500``) or override any default\n settings from the interface-specific entries above. For example, passing\n ``baud_rate=115200`` when connecting via a resource name ``ASRL1`` would override a\n default of 4200 defined as above.\n\n See :ref:`connection_settings` for how to tweak settings when *connecting* to an instrument.\n See :ref:`default_connection_settings` for how to best define default settings when\n *implementing an instrument*.\n \"\"\"\n\n def __init__(self, resource_name, visa_library='', preprocess_reply=None,\n query_delay=0, log=None, **kwargs):\n super().__init__(preprocess_reply=preprocess_reply, log=log)\n if query_delay:\n warn((\"Parameter `query_delay` is deprecated. 
\"\n \"Implement in Instrument's `wait_for` instead.\"),\n FutureWarning)\n kwargs.setdefault(\"query_delay\", query_delay)\n self.query_delay = query_delay\n if isinstance(resource_name, ProtocolAdapter):\n self.connection = resource_name\n self.connection.write_raw = self.connection.write_bytes\n self.read_bytes = self.connection.read_bytes\n return\n elif isinstance(resource_name, VISAAdapter):\n # Allow to reuse the connection.\n self.resource_name = getattr(resource_name, \"resource_name\", None)\n self.connection = resource_name.connection\n self.manager = resource_name.manager\n self.query_delay = resource_name.query_delay\n return\n elif isinstance(resource_name, int):\n resource_name = \"GPIB0::%d::INSTR\" % resource_name\n\n self.resource_name = resource_name\n self.manager = pyvisa.ResourceManager(visa_library)\n\n # Clean up kwargs considering the interface type matching resource_name\n if_type = self.manager.resource_info(self.resource_name).interface_type\n for key in list(kwargs.keys()): # iterate over a copy of the keys as we modify kwargs\n # Remove all interface-specific kwargs:\n if key in pyvisa.constants.InterfaceType.__members__:\n if getattr(pyvisa.constants.InterfaceType, key) is if_type:\n # For the present interface, dump contents into kwargs first if they are not\n # present already. This way, it is possible to override default values with\n # kwargs passed to Instrument.__init__()\n for k, v in kwargs[key].items():\n kwargs.setdefault(k, v)\n del kwargs[key]\n\n self.connection = self.manager.open_resource(\n resource_name,\n **kwargs\n )\n\n def close(self):\n \"\"\"Close the connection.\n\n .. note::\n\n This closes the connection to the resource for all adapters using\n it currently (e.g. different adapters using the same GPIB line).\n \"\"\"\n super().close()\n try:\n self.manager.close()\n except AttributeError:\n pass # Closed from another adapter using the same connection.\n\n def _write(self, command, **kwargs):\n \"\"\"Write a string command to the instrument appending `write_termination`.\n\n :param str command: Command string to be sent to the instrument\n (without termination).\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n self.connection.write(command, **kwargs)\n\n def _write_bytes(self, content, **kwargs):\n \"\"\"Write the bytes `content` to the instrument.\n\n :param bytes content: The bytes to write to the instrument.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n self.connection.write_raw(content, **kwargs)\n\n def _read(self, **kwargs):\n \"\"\"Read up to (excluding) `read_termination` or the whole read buffer.\n\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns str: ASCII response of the instrument (excluding read_termination).\n \"\"\"\n return self.connection.read(**kwargs)\n\n def _read_bytes(self, count, break_on_termchar=False, **kwargs):\n \"\"\"Read a certain number of bytes from the instrument.\n\n :param int count: Number of bytes to read. 
A value of -1 indicates to\n read from the whole read buffer until timeout.\n :param bool break_on_termchar: Stop reading at a termination character.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns bytes: Bytes response of the instrument (including termination).\n \"\"\"\n if count >= 0:\n return self.connection.read_bytes(count, break_on_termchar=break_on_termchar, **kwargs)\n elif break_on_termchar:\n return self.connection.read_raw(None, **kwargs)\n else:\n # pyvisa's `read_raw` reads until newline, if no termination_character defined\n # and if not configured to stop at a termination lane etc.\n # see https://github.com/pyvisa/pyvisa/issues/728\n result = bytearray()\n while True:\n try:\n result.extend(self.connection.read_bytes(1))\n except pyvisa.errors.VisaIOError as exc:\n if exc.error_code == pyvisa.constants.StatusCode.error_timeout:\n return bytes(result)\n raise\n\n def ask(self, command):\n \"\"\" Writes the command to the instrument and returns the resulting\n ASCII response\n\n .. deprecated:: 0.11\n Call `Instrument.ask` instead.\n\n :param command: SCPI command string to be sent to the instrument\n :returns: String ASCII response of the instrument\n \"\"\"\n warn(\"`Adapter.ask` is deprecated, call `Instrument.ask` instead.\", FutureWarning)\n return self.connection.query(command)\n\n def ask_values(self, command, **kwargs):\n \"\"\" Writes a command to the instrument and returns a list of formatted\n values from the result. This leverages the `query_ascii_values` method\n in PyVISA.\n\n .. deprecated:: 0.11\n Call `Instrument.values` instead.\n\n :param command: SCPI command to be sent to the instrument\n :param \\\\**kwargs: Key-word arguments to pass onto `query_ascii_values`\n :returns: Formatted response of the instrument.\n \"\"\"\n warn(\"`Adapter.ask_values` is deprecated, call `Instrument.values` instead.\",\n FutureWarning)\n\n return self.connection.query_ascii_values(command, **kwargs)\n\n def binary_values(self, command, header_bytes=0, dtype=np.float32):\n \"\"\" Returns a numpy array from a query for binary data\n\n .. 
deprecated:: 0.11\n Call `Instrument.binary_values` instead.\n\n :param command: SCPI command to be sent to the instrument\n :param header_bytes: Integer number of bytes to ignore in header\n :param dtype: The NumPy data type to format the values with\n :returns: NumPy array of values\n \"\"\"\n warn(\"`Adapter.binary_values` is deprecated, call `Instrument.binary_values` instead.\",\n FutureWarning)\n self.connection.write(command)\n binary = self.connection.read_raw()\n # header = binary[:header_bytes]\n data = binary[header_bytes:]\n return np.fromstring(data, dtype=dtype)\n\n def wait_for_srq(self, timeout=25, delay=0.1):\n \"\"\" Block until a SRQ, and leave the bit high\n\n :param timeout: Timeout duration in seconds\n :param delay: Time delay between checking SRQ in seconds\n \"\"\"\n self.connection.wait_for_srq(timeout * 1000)\n\n def flush_read_buffer(self):\n \"\"\" Flush and discard the input buffer\n\n As detailed by pyvisa, discard the read buffer contents and if data was present\n in the read buffer and no END-indicator was present, read from the device until\n encountering an END indicator (which causes loss of data).\n \"\"\"\n try:\n self.connection.flush(pyvisa.constants.BufferOperation.discard_read_buffer)\n except NotImplementedError:\n # NotImplementedError is raised when using resource types other than `asrl`\n # in conjunction with pyvisa-py.\n # Upstream issue: https://github.com/pyvisa/pyvisa-py/issues/348\n # fake discarding the read buffer by reading all available messages.\n timeout = self.connection.timeout\n self.connection.timeout = 0\n try:\n self.read_bytes(-1)\n except pyvisa.errors.VisaIOError:\n pass\n finally:\n self.connection.timeout = timeout\n\n def __repr__(self):\n return \"<VISAAdapter(resource='%s')>\" % self.connection.resource_name\n", "path": "pymeasure/adapters/visa.py"}]}
| 3,760 | 398 |
gh_patches_debug_487
|
rasdani/github-patches
|
git_diff
|
hylang__hy-343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Translate foo? -> is_foo
Andddd discuss
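Roughly the behaviour being proposed, as a sketch (the helper name and its placement are made up, just to make the mapping concrete):
```python
def mangle_predicate(name):
    # "foo?" -> "is_foo"; a bare "?" is left untouched.
    if name.endswith("?") and name != "?":
        return "is_%s" % name[:-1]
    return name

assert mangle_predicate("foo?") == "is_foo"
assert mangle_predicate("?") == "?"
```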
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/lex/parser.py`
Content:
```
1 # Copyright (c) 2013 Nicolas Dandrimont <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the "Software"),
5 # to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 # and/or sell copies of the Software, and to permit persons to whom the
8 # Software is furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import sys
22 from functools import wraps
23
24 from rply import ParserGenerator
25
26 from hy.models.complex import HyComplex
27 from hy.models.dict import HyDict
28 from hy.models.expression import HyExpression
29 from hy.models.float import HyFloat
30 from hy.models.integer import HyInteger
31 from hy.models.keyword import HyKeyword
32 from hy.models.lambdalist import HyLambdaListKeyword
33 from hy.models.list import HyList
34 from hy.models.string import HyString
35 from hy.models.symbol import HySymbol
36
37 from .lexer import lexer
38 from .exceptions import LexException, PrematureEndOfInput
39
40
41 pg = ParserGenerator(
42 [rule.name for rule in lexer.rules] + ['$end'],
43 cache_id="hy_parser"
44 )
45
46
47 def set_boundaries(fun):
48 @wraps(fun)
49 def wrapped(p):
50 start = p[0].source_pos
51 end = p[-1].source_pos
52 ret = fun(p)
53 ret.start_line = start.lineno
54 ret.start_column = start.colno
55 if start is not end:
56 ret.end_line = end.lineno
57 ret.end_column = end.colno
58 else:
59 ret.end_line = start.lineno
60 ret.end_column = start.colno + len(p[0].value)
61 return ret
62 return wrapped
63
64
65 def set_quote_boundaries(fun):
66 @wraps(fun)
67 def wrapped(p):
68 start = p[0].source_pos
69 ret = fun(p)
70 ret.start_line = start.lineno
71 ret.start_column = start.colno
72 ret.end_line = p[-1].end_line
73 ret.end_column = p[-1].end_column
74 return ret
75 return wrapped
76
77
78 @pg.production("main : HASHBANG real_main")
79 def main_hashbang(p):
80 return p[1]
81
82
83 @pg.production("main : real_main")
84 def main(p):
85 return p[0]
86
87
88 @pg.production("real_main : list_contents")
89 def real_main(p):
90 return p[0]
91
92
93 @pg.production("real_main : $end")
94 def real_main_empty(p):
95 return []
96
97
98 @pg.production("paren : LPAREN list_contents RPAREN")
99 @set_boundaries
100 def paren(p):
101 return HyExpression(p[1])
102
103
104 @pg.production("paren : LPAREN RPAREN")
105 @set_boundaries
106 def empty_paren(p):
107 return HyExpression([])
108
109
110 @pg.production("list_contents : term list_contents")
111 def list_contents(p):
112 return [p[0]] + p[1]
113
114
115 @pg.production("list_contents : term")
116 def list_contents_single(p):
117 return [p[0]]
118
119
120 @pg.production("term : identifier")
121 @pg.production("term : paren")
122 @pg.production("term : dict")
123 @pg.production("term : list")
124 @pg.production("term : string")
125 def term(p):
126 return p[0]
127
128
129 @pg.production("term : QUOTE term")
130 @set_quote_boundaries
131 def term_quote(p):
132 return HyExpression([HySymbol("quote"), p[1]])
133
134
135 @pg.production("term : QUASIQUOTE term")
136 @set_quote_boundaries
137 def term_quasiquote(p):
138 return HyExpression([HySymbol("quasiquote"), p[1]])
139
140
141 @pg.production("term : UNQUOTE term")
142 @set_quote_boundaries
143 def term_unquote(p):
144 return HyExpression([HySymbol("unquote"), p[1]])
145
146
147 @pg.production("term : UNQUOTESPLICE term")
148 @set_quote_boundaries
149 def term_unquote_splice(p):
150 return HyExpression([HySymbol("unquote_splice"), p[1]])
151
152
153 @pg.production("dict : LCURLY list_contents RCURLY")
154 @set_boundaries
155 def t_dict(p):
156 return HyDict(p[1])
157
158
159 @pg.production("dict : LCURLY RCURLY")
160 @set_boundaries
161 def empty_dict(p):
162 return HyDict([])
163
164
165 @pg.production("list : LBRACKET list_contents RBRACKET")
166 @set_boundaries
167 def t_list(p):
168 return HyList(p[1])
169
170
171 @pg.production("list : LBRACKET RBRACKET")
172 @set_boundaries
173 def t_empty_list(p):
174 return HyList([])
175
176
177 if sys.version_info[0] >= 3:
178 def uni_hystring(s):
179 return HyString(eval(s))
180 else:
181 def uni_hystring(s):
182 return HyString(eval('u'+s))
183
184
185 @pg.production("string : STRING")
186 @set_boundaries
187 def t_string(p):
188 # remove trailing quote
189 s = p[0].value[:-1]
190 # get the header
191 header, s = s.split('"', 1)
192 # remove unicode marker
193 header = header.replace("u", "")
194 # build python string
195 s = header + '"""' + s + '"""'
196 return uni_hystring(s)
197
198
199 @pg.production("identifier : IDENTIFIER")
200 @set_boundaries
201 def t_identifier(p):
202 obj = p[0].value
203
204 try:
205 return HyInteger(obj)
206 except ValueError:
207 pass
208
209 try:
210 return HyFloat(obj)
211 except ValueError:
212 pass
213
214 if obj != 'j':
215 try:
216 return HyComplex(obj)
217 except ValueError:
218 pass
219
220 table = {
221 "true": "True",
222 "false": "False",
223 "null": "None",
224 }
225
226 if obj in table:
227 return HySymbol(table[obj])
228
229 if obj.startswith(":"):
230 return HyKeyword(obj)
231
232 if obj.startswith("&"):
233 return HyLambdaListKeyword(obj)
234
235 if obj.startswith("*") and obj.endswith("*") and obj not in ("*", "**"):
236 obj = obj[1:-1].upper()
237
238 if "-" in obj and obj != "-":
239 obj = obj.replace("-", "_")
240
241 return HySymbol(obj)
242
243
244 @pg.error
245 def error_handler(token):
246 tokentype = token.gettokentype()
247 if tokentype == '$end':
248 raise PrematureEndOfInput
249 else:
250 raise LexException(
251 "Ran into a %s where it wasn't expected at line %s, column %s" %
252 (tokentype, token.source_pos.lineno, token.source_pos.colno)
253 )
254
255
256 parser = pg.build()
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hy/lex/parser.py b/hy/lex/parser.py
--- a/hy/lex/parser.py
+++ b/hy/lex/parser.py
@@ -238,6 +238,9 @@
if "-" in obj and obj != "-":
obj = obj.replace("-", "_")
+ if obj.endswith("?") and obj != "?":
+ obj = "is_%s" % (obj[:-1])
+
return HySymbol(obj)
|
{"golden_diff": "diff --git a/hy/lex/parser.py b/hy/lex/parser.py\n--- a/hy/lex/parser.py\n+++ b/hy/lex/parser.py\n@@ -238,6 +238,9 @@\n if \"-\" in obj and obj != \"-\":\n obj = obj.replace(\"-\", \"_\")\n \n+ if obj.endswith(\"?\") and obj != \"?\":\n+ obj = \"is_%s\" % (obj[:-1])\n+\n return HySymbol(obj)\n", "issue": "Translate foo? -> is_foo \nAndddd discuss \n\n", "before_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nfrom functools import wraps\n\nfrom rply import ParserGenerator\n\nfrom hy.models.complex import HyComplex\nfrom hy.models.dict import HyDict\nfrom hy.models.expression import HyExpression\nfrom hy.models.float import HyFloat\nfrom hy.models.integer import HyInteger\nfrom hy.models.keyword import HyKeyword\nfrom hy.models.lambdalist import HyLambdaListKeyword\nfrom hy.models.list import HyList\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\n\nfrom .lexer import lexer\nfrom .exceptions import LexException, PrematureEndOfInput\n\n\npg = ParserGenerator(\n [rule.name for rule in lexer.rules] + ['$end'],\n cache_id=\"hy_parser\"\n)\n\n\ndef set_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n end = p[-1].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n if start is not end:\n ret.end_line = end.lineno\n ret.end_column = end.colno\n else:\n ret.end_line = start.lineno\n ret.end_column = start.colno + len(p[0].value)\n return ret\n return wrapped\n\n\ndef set_quote_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n ret.end_line = p[-1].end_line\n ret.end_column = p[-1].end_column\n return ret\n return wrapped\n\n\[email protected](\"main : HASHBANG real_main\")\ndef main_hashbang(p):\n return p[1]\n\n\[email protected](\"main : real_main\")\ndef main(p):\n return p[0]\n\n\[email protected](\"real_main : list_contents\")\ndef real_main(p):\n return p[0]\n\n\[email protected](\"real_main : $end\")\ndef real_main_empty(p):\n return []\n\n\[email protected](\"paren : LPAREN list_contents RPAREN\")\n@set_boundaries\ndef paren(p):\n return HyExpression(p[1])\n\n\[email protected](\"paren : LPAREN RPAREN\")\n@set_boundaries\ndef empty_paren(p):\n return HyExpression([])\n\n\[email protected](\"list_contents : term list_contents\")\ndef list_contents(p):\n return [p[0]] + 
p[1]\n\n\[email protected](\"list_contents : term\")\ndef list_contents_single(p):\n return [p[0]]\n\n\[email protected](\"term : identifier\")\[email protected](\"term : paren\")\[email protected](\"term : dict\")\[email protected](\"term : list\")\[email protected](\"term : string\")\ndef term(p):\n return p[0]\n\n\[email protected](\"term : QUOTE term\")\n@set_quote_boundaries\ndef term_quote(p):\n return HyExpression([HySymbol(\"quote\"), p[1]])\n\n\[email protected](\"term : QUASIQUOTE term\")\n@set_quote_boundaries\ndef term_quasiquote(p):\n return HyExpression([HySymbol(\"quasiquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTE term\")\n@set_quote_boundaries\ndef term_unquote(p):\n return HyExpression([HySymbol(\"unquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTESPLICE term\")\n@set_quote_boundaries\ndef term_unquote_splice(p):\n return HyExpression([HySymbol(\"unquote_splice\"), p[1]])\n\n\[email protected](\"dict : LCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_dict(p):\n return HyDict(p[1])\n\n\[email protected](\"dict : LCURLY RCURLY\")\n@set_boundaries\ndef empty_dict(p):\n return HyDict([])\n\n\[email protected](\"list : LBRACKET list_contents RBRACKET\")\n@set_boundaries\ndef t_list(p):\n return HyList(p[1])\n\n\[email protected](\"list : LBRACKET RBRACKET\")\n@set_boundaries\ndef t_empty_list(p):\n return HyList([])\n\n\nif sys.version_info[0] >= 3:\n def uni_hystring(s):\n return HyString(eval(s))\nelse:\n def uni_hystring(s):\n return HyString(eval('u'+s))\n\n\[email protected](\"string : STRING\")\n@set_boundaries\ndef t_string(p):\n # remove trailing quote\n s = p[0].value[:-1]\n # get the header\n header, s = s.split('\"', 1)\n # remove unicode marker\n header = header.replace(\"u\", \"\")\n # build python string\n s = header + '\"\"\"' + s + '\"\"\"'\n return uni_hystring(s)\n\n\[email protected](\"identifier : IDENTIFIER\")\n@set_boundaries\ndef t_identifier(p):\n obj = p[0].value\n\n try:\n return HyInteger(obj)\n except ValueError:\n pass\n\n try:\n return HyFloat(obj)\n except ValueError:\n pass\n\n if obj != 'j':\n try:\n return HyComplex(obj)\n except ValueError:\n pass\n\n table = {\n \"true\": \"True\",\n \"false\": \"False\",\n \"null\": \"None\",\n }\n\n if obj in table:\n return HySymbol(table[obj])\n\n if obj.startswith(\":\"):\n return HyKeyword(obj)\n\n if obj.startswith(\"&\"):\n return HyLambdaListKeyword(obj)\n\n if obj.startswith(\"*\") and obj.endswith(\"*\") and obj not in (\"*\", \"**\"):\n obj = obj[1:-1].upper()\n\n if \"-\" in obj and obj != \"-\":\n obj = obj.replace(\"-\", \"_\")\n\n return HySymbol(obj)\n\n\[email protected]\ndef error_handler(token):\n tokentype = token.gettokentype()\n if tokentype == '$end':\n raise PrematureEndOfInput\n else:\n raise LexException(\n \"Ran into a %s where it wasn't expected at line %s, column %s\" %\n (tokentype, token.source_pos.lineno, token.source_pos.colno)\n )\n\n\nparser = pg.build()\n", "path": "hy/lex/parser.py"}], "after_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright 
notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nfrom functools import wraps\n\nfrom rply import ParserGenerator\n\nfrom hy.models.complex import HyComplex\nfrom hy.models.dict import HyDict\nfrom hy.models.expression import HyExpression\nfrom hy.models.float import HyFloat\nfrom hy.models.integer import HyInteger\nfrom hy.models.keyword import HyKeyword\nfrom hy.models.lambdalist import HyLambdaListKeyword\nfrom hy.models.list import HyList\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\n\nfrom .lexer import lexer\nfrom .exceptions import LexException, PrematureEndOfInput\n\n\npg = ParserGenerator(\n [rule.name for rule in lexer.rules] + ['$end'],\n cache_id=\"hy_parser\"\n)\n\n\ndef set_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n end = p[-1].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n if start is not end:\n ret.end_line = end.lineno\n ret.end_column = end.colno\n else:\n ret.end_line = start.lineno\n ret.end_column = start.colno + len(p[0].value)\n return ret\n return wrapped\n\n\ndef set_quote_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n ret.end_line = p[-1].end_line\n ret.end_column = p[-1].end_column\n return ret\n return wrapped\n\n\[email protected](\"main : HASHBANG real_main\")\ndef main_hashbang(p):\n return p[1]\n\n\[email protected](\"main : real_main\")\ndef main(p):\n return p[0]\n\n\[email protected](\"real_main : list_contents\")\ndef real_main(p):\n return p[0]\n\n\[email protected](\"real_main : $end\")\ndef real_main_empty(p):\n return []\n\n\[email protected](\"paren : LPAREN list_contents RPAREN\")\n@set_boundaries\ndef paren(p):\n return HyExpression(p[1])\n\n\[email protected](\"paren : LPAREN RPAREN\")\n@set_boundaries\ndef empty_paren(p):\n return HyExpression([])\n\n\[email protected](\"list_contents : term list_contents\")\ndef list_contents(p):\n return [p[0]] + p[1]\n\n\[email protected](\"list_contents : term\")\ndef list_contents_single(p):\n return [p[0]]\n\n\[email protected](\"term : identifier\")\[email protected](\"term : paren\")\[email protected](\"term : dict\")\[email protected](\"term : list\")\[email protected](\"term : string\")\ndef term(p):\n return p[0]\n\n\[email protected](\"term : QUOTE term\")\n@set_quote_boundaries\ndef term_quote(p):\n return HyExpression([HySymbol(\"quote\"), p[1]])\n\n\[email protected](\"term : QUASIQUOTE term\")\n@set_quote_boundaries\ndef term_quasiquote(p):\n return HyExpression([HySymbol(\"quasiquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTE term\")\n@set_quote_boundaries\ndef term_unquote(p):\n return HyExpression([HySymbol(\"unquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTESPLICE term\")\n@set_quote_boundaries\ndef term_unquote_splice(p):\n return HyExpression([HySymbol(\"unquote_splice\"), p[1]])\n\n\[email 
protected](\"dict : LCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_dict(p):\n return HyDict(p[1])\n\n\[email protected](\"dict : LCURLY RCURLY\")\n@set_boundaries\ndef empty_dict(p):\n return HyDict([])\n\n\[email protected](\"list : LBRACKET list_contents RBRACKET\")\n@set_boundaries\ndef t_list(p):\n return HyList(p[1])\n\n\[email protected](\"list : LBRACKET RBRACKET\")\n@set_boundaries\ndef t_empty_list(p):\n return HyList([])\n\n\nif sys.version_info[0] >= 3:\n def uni_hystring(s):\n return HyString(eval(s))\nelse:\n def uni_hystring(s):\n return HyString(eval('u'+s))\n\n\[email protected](\"string : STRING\")\n@set_boundaries\ndef t_string(p):\n # remove trailing quote\n s = p[0].value[:-1]\n # get the header\n header, s = s.split('\"', 1)\n # remove unicode marker\n header = header.replace(\"u\", \"\")\n # build python string\n s = header + '\"\"\"' + s + '\"\"\"'\n return uni_hystring(s)\n\n\[email protected](\"identifier : IDENTIFIER\")\n@set_boundaries\ndef t_identifier(p):\n obj = p[0].value\n\n try:\n return HyInteger(obj)\n except ValueError:\n pass\n\n try:\n return HyFloat(obj)\n except ValueError:\n pass\n\n if obj != 'j':\n try:\n return HyComplex(obj)\n except ValueError:\n pass\n\n table = {\n \"true\": \"True\",\n \"false\": \"False\",\n \"null\": \"None\",\n }\n\n if obj in table:\n return HySymbol(table[obj])\n\n if obj.startswith(\":\"):\n return HyKeyword(obj)\n\n if obj.startswith(\"&\"):\n return HyLambdaListKeyword(obj)\n\n if obj.startswith(\"*\") and obj.endswith(\"*\") and obj not in (\"*\", \"**\"):\n obj = obj[1:-1].upper()\n\n if \"-\" in obj and obj != \"-\":\n obj = obj.replace(\"-\", \"_\")\n\n if obj.endswith(\"?\") and obj != \"?\":\n obj = \"is_%s\" % (obj[:-1])\n\n return HySymbol(obj)\n\n\[email protected]\ndef error_handler(token):\n tokentype = token.gettokentype()\n if tokentype == '$end':\n raise PrematureEndOfInput\n else:\n raise LexException(\n \"Ran into a %s where it wasn't expected at line %s, column %s\" %\n (tokentype, token.source_pos.lineno, token.source_pos.colno)\n )\n\n\nparser = pg.build()\n", "path": "hy/lex/parser.py"}]}
| 2,555 | 103 |
gh_patches_debug_10936
|
rasdani/github-patches
|
git_diff
|
pypa__setuptools-2934
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Docs] GitHub reference in changelog rendered as email
### Summary
In [the changelog for v59.0.0](https://setuptools.pypa.io/en/latest/history.html#v59-0-0), a `distutils` commit is referenced using `pypa/distutils@f1b0a2b` in the RST source. Although it should link to pypa/distutils@f1b0a2b (`https://github.com/pypa/distutils/commit/f1b0a2b`) as GitHub automatically does, it instead renders the URL as `mailto:pypa/distutils@f1b0a2b`, which is incorrect.
### OS / Environment
N/a. This is in the source code.
### Additional Information
I would solve this myself, but I don't know the best solution. I can see that some things like GitHub issues are automatically linked, but I'm not sure if it should be added to the linking code in [`docs/conf.py`](https://github.com/pypa/setuptools/blob/4b980ef4072a817aae0da3643d0fa70c30fcb6cf/docs/conf.py) or just manually made a link.
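If extending the linking code is the way to go, I imagine an extra entry in the `replace` list of `docs/conf.py` along these lines could do it (the pattern and group name below are only a guess, modeled on the existing entries shown further down, not the actual fix):
```python
import re

# Guessed rst.linker entry in the style of the existing `replace` items:
entry = dict(
    pattern=r'pypa/distutils@(?P<distutils_commit>[\da-f]+)',
    url='{GH}/pypa/distutils/commit/{distutils_commit}',
)

# Sanity check that the pattern picks up the changelog reference:
match = re.search(entry['pattern'], 'merge pypa/distutils@f1b0a2b')
print(match.group('distutils_commit'))  # -> f1b0a2b
```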
### Code of Conduct
- [X] I agree to follow the PSF Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 import os
2 import sys
3
4 extensions = ['sphinx.ext.autodoc', 'jaraco.packaging.sphinx', 'rst.linker']
5
6 master_doc = "index"
7
8 link_files = {
9 '../CHANGES.rst': dict(
10 using=dict(
11 BB='https://bitbucket.org',
12 GH='https://github.com',
13 ),
14 replace=[
15 dict(
16 pattern=r'(Issue )?#(?P<issue>\d+)',
17 url='{package_url}/issues/{issue}',
18 ),
19 dict(
20 pattern=r'BB Pull Request ?#(?P<bb_pull_request>\d+)',
21 url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',
22 ),
23 dict(
24 pattern=r'Distribute #(?P<distribute>\d+)',
25 url='{BB}/tarek/distribute/issue/{distribute}',
26 ),
27 dict(
28 pattern=r'Buildout #(?P<buildout>\d+)',
29 url='{GH}/buildout/buildout/issues/{buildout}',
30 ),
31 dict(
32 pattern=r'Old Setuptools #(?P<old_setuptools>\d+)',
33 url='http://bugs.python.org/setuptools/issue{old_setuptools}',
34 ),
35 dict(
36 pattern=r'Jython #(?P<jython>\d+)',
37 url='http://bugs.jython.org/issue{jython}',
38 ),
39 dict(
40 pattern=r'(Python #|bpo-)(?P<python>\d+)',
41 url='http://bugs.python.org/issue{python}',
42 ),
43 dict(
44 pattern=r'Interop #(?P<interop>\d+)',
45 url='{GH}/pypa/interoperability-peps/issues/{interop}',
46 ),
47 dict(
48 pattern=r'Pip #(?P<pip>\d+)',
49 url='{GH}/pypa/pip/issues/{pip}',
50 ),
51 dict(
52 pattern=r'Packaging #(?P<packaging>\d+)',
53 url='{GH}/pypa/packaging/issues/{packaging}',
54 ),
55 dict(
56 pattern=r'[Pp]ackaging (?P<packaging_ver>\d+(\.\d+)+)',
57 url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',
58 ),
59 dict(
60 pattern=r'PEP[- ](?P<pep_number>\d+)',
61 url='https://www.python.org/dev/peps/pep-{pep_number:0>4}/',
62 ),
63 dict(
64 pattern=r'setuptools_svn #(?P<setuptools_svn>\d+)',
65 url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',
66 ),
67 dict(
68 pattern=r'pypa/distutils#(?P<distutils>\d+)',
69 url='{GH}/pypa/distutils/issues/{distutils}',
70 ),
71 dict(
72 pattern=r'^(?m)((?P<scm_version>v?\d+(\.\d+){1,2}))\n[-=]+\n',
73 with_scm='{text}\n{rev[timestamp]:%d %b %Y}\n',
74 ),
75 ],
76 ),
77 }
78
79 # Be strict about any broken references:
80 nitpicky = True
81
82 # Include Python intersphinx mapping to prevent failures
83 # jaraco/skeleton#51
84 extensions += ['sphinx.ext.intersphinx']
85 intersphinx_mapping = {
86 'python': ('https://docs.python.org/3', None),
87 }
88
89 intersphinx_mapping.update({
90 'pypa-build': ('https://pypa-build.readthedocs.io/en/latest/', None)
91 })
92
93 # Add support for linking usernames
94 github_url = 'https://github.com'
95 github_sponsors_url = f'{github_url}/sponsors'
96 extlinks = {
97 'user': (f'{github_sponsors_url}/%s', '@'), # noqa: WPS323
98 }
99 extensions += ['sphinx.ext.extlinks']
100
101 # Ref: https://github.com/python-attrs/attrs/pull/571/files\
102 # #diff-85987f48f1258d9ee486e3191495582dR82
103 default_role = 'any'
104
105 # HTML theme
106 html_theme = 'furo'
107 html_logo = "images/logo.svg"
108
109 html_theme_options = {
110 "sidebar_hide_name": True,
111 "light_css_variables": {
112 "color-brand-primary": "#336790", # "blue"
113 "color-brand-content": "#336790",
114 },
115 "dark_css_variables": {
116 "color-brand-primary": "#E5B62F", # "yellow"
117 "color-brand-content": "#E5B62F",
118 },
119 }
120
121 # Add support for inline tabs
122 extensions += ['sphinx_inline_tabs']
123
124 # Support for distutils
125
126 # Ref: https://stackoverflow.com/a/30624034/595220
127 nitpick_ignore = [
128 ('c:func', 'SHGetSpecialFolderPath'), # ref to MS docs
129 ('envvar', 'DISTUTILS_DEBUG'), # undocumented
130 ('envvar', 'HOME'), # undocumented
131 ('envvar', 'PLAT'), # undocumented
132 ('py:attr', 'CCompiler.language_map'), # undocumented
133 ('py:attr', 'CCompiler.language_order'), # undocumented
134 ('py:class', 'distutils.dist.Distribution'), # undocumented
135 ('py:class', 'distutils.extension.Extension'), # undocumented
136 ('py:class', 'BorlandCCompiler'), # undocumented
137 ('py:class', 'CCompiler'), # undocumented
138 ('py:class', 'CygwinCCompiler'), # undocumented
139 ('py:class', 'distutils.dist.DistributionMetadata'), # undocumented
140 ('py:class', 'FileList'), # undocumented
141 ('py:class', 'IShellLink'), # ref to MS docs
142 ('py:class', 'MSVCCompiler'), # undocumented
143 ('py:class', 'OptionDummy'), # undocumented
144 ('py:class', 'UnixCCompiler'), # undocumented
145 ('py:exc', 'CompileError'), # undocumented
146 ('py:exc', 'DistutilsExecError'), # undocumented
147 ('py:exc', 'DistutilsFileError'), # undocumented
148 ('py:exc', 'LibError'), # undocumented
149 ('py:exc', 'LinkError'), # undocumented
150 ('py:exc', 'PreprocessError'), # undocumented
151 ('py:func', 'distutils.CCompiler.new_compiler'), # undocumented
152 # undocumented:
153 ('py:func', 'distutils.dist.DistributionMetadata.read_pkg_file'),
154 ('py:func', 'distutils.file_util._copy_file_contents'), # undocumented
155 ('py:func', 'distutils.log.debug'), # undocumented
156 ('py:func', 'distutils.spawn.find_executable'), # undocumented
157 ('py:func', 'distutils.spawn.spawn'), # undocumented
158 # TODO: check https://docutils.rtfd.io in the future
159 ('py:mod', 'docutils'), # there's no Sphinx site documenting this
160 ]
161
162 # Allow linking objects on other Sphinx sites seamlessly:
163 intersphinx_mapping.update(
164 python=('https://docs.python.org/3', None),
165 python2=('https://docs.python.org/2', None),
166 )
167
168 # Add support for the unreleased "next-version" change notes
169 extensions += ['sphinxcontrib.towncrier']
170 # Extension needs a path from here to the towncrier config.
171 towncrier_draft_working_directory = '..'
172 # Avoid an empty section for unpublished changes.
173 towncrier_draft_include_empty = False
174
175 extensions += ['jaraco.tidelift']
176
177 # Add icons (aka "favicons") to documentation
178 sys.path.append(os.path.join(os.path.dirname(__file__), '_ext'))
179 extensions += ['_custom_icons']
180
181 # List of dicts with <link> HTML attributes
182 # as defined in https://developer.mozilla.org/en-US/docs/Web/HTML/Element/link
183 # except that ``file`` gets replaced with the correct ``href``
184 icons = [
185 { # "Catch-all" goes first, otherwise some browsers will overwrite
186 "rel": "icon",
187 "type": "image/svg+xml",
188 "file": "images/logo-symbol-only.svg",
189 "sizes": "any"
190 },
191 { # Version with thicker strokes for better visibility at smaller sizes
192 "rel": "icon",
193 "type": "image/svg+xml",
194 "file": "images/favicon.svg",
195 "sizes": "16x16 24x24 32x32 48x48"
196 },
197 # rel="apple-touch-icon" does not support SVG yet
198 ]
199
200 intersphinx_mapping['pip'] = 'https://pip.pypa.io/en/latest', None
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -68,6 +68,10 @@
pattern=r'pypa/distutils#(?P<distutils>\d+)',
url='{GH}/pypa/distutils/issues/{distutils}',
),
+ dict(
+ pattern=r'pypa/distutils@(?P<distutils_commit>[\da-f]+)',
+ url='{GH}/pypa/distutils/commit/{distutils_commit}',
+ ),
dict(
pattern=r'^(?m)((?P<scm_version>v?\d+(\.\d+){1,2}))\n[-=]+\n',
with_scm='{text}\n{rev[timestamp]:%d %b %Y}\n',
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -68,6 +68,10 @@\n pattern=r'pypa/distutils#(?P<distutils>\\d+)',\n url='{GH}/pypa/distutils/issues/{distutils}',\n ),\n+ dict(\n+ pattern=r'pypa/distutils@(?P<distutils_commit>[\\da-f]+)',\n+ url='{GH}/pypa/distutils/commit/{distutils_commit}',\n+ ),\n dict(\n pattern=r'^(?m)((?P<scm_version>v?\\d+(\\.\\d+){1,2}))\\n[-=]+\\n',\n with_scm='{text}\\n{rev[timestamp]:%d %b %Y}\\n',\n", "issue": "[Docs] GitHub reference in changelog rendered as email\n### Summary\n\nIn [the changelog for v59.0.0](https://setuptools.pypa.io/en/latest/history.html#v59-0-0), a `distutils` commit is referenced using `pypa/distutils@f1b0a2b` in the RST source. Although it should link to pypa/distutils@f1b0a2b (`https://github.com/pypa/distutils/commit/f1b0a2b`) as GitHub automatically does, it instead renders the URL as `mailto:pypa/distutils@f1b0a2b`, which is incorrect.\n\n### OS / Environment\n\nN/a. This is in the source code.\n\n### Additional Information\n\nI would solve this myself, but I don't know the best solution. I can see that some things like GitHub issues are automatically linked, but I'm not sure if it should be added to the linking code in [`docs/conf.py`](https://github.com/pypa/setuptools/blob/4b980ef4072a817aae0da3643d0fa70c30fcb6cf/docs/conf.py) or just manually made a link.\n\n### Code of Conduct\n\n- [X] I agree to follow the PSF Code of Conduct\n", "before_files": [{"content": "import os\nimport sys\n\nextensions = ['sphinx.ext.autodoc', 'jaraco.packaging.sphinx', 'rst.linker']\n\nmaster_doc = \"index\"\n\nlink_files = {\n '../CHANGES.rst': dict(\n using=dict(\n BB='https://bitbucket.org',\n GH='https://github.com',\n ),\n replace=[\n dict(\n pattern=r'(Issue )?#(?P<issue>\\d+)',\n url='{package_url}/issues/{issue}',\n ),\n dict(\n pattern=r'BB Pull Request ?#(?P<bb_pull_request>\\d+)',\n url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',\n ),\n dict(\n pattern=r'Distribute #(?P<distribute>\\d+)',\n url='{BB}/tarek/distribute/issue/{distribute}',\n ),\n dict(\n pattern=r'Buildout #(?P<buildout>\\d+)',\n url='{GH}/buildout/buildout/issues/{buildout}',\n ),\n dict(\n pattern=r'Old Setuptools #(?P<old_setuptools>\\d+)',\n url='http://bugs.python.org/setuptools/issue{old_setuptools}',\n ),\n dict(\n pattern=r'Jython #(?P<jython>\\d+)',\n url='http://bugs.jython.org/issue{jython}',\n ),\n dict(\n pattern=r'(Python #|bpo-)(?P<python>\\d+)',\n url='http://bugs.python.org/issue{python}',\n ),\n dict(\n pattern=r'Interop #(?P<interop>\\d+)',\n url='{GH}/pypa/interoperability-peps/issues/{interop}',\n ),\n dict(\n pattern=r'Pip #(?P<pip>\\d+)',\n url='{GH}/pypa/pip/issues/{pip}',\n ),\n dict(\n pattern=r'Packaging #(?P<packaging>\\d+)',\n url='{GH}/pypa/packaging/issues/{packaging}',\n ),\n dict(\n pattern=r'[Pp]ackaging (?P<packaging_ver>\\d+(\\.\\d+)+)',\n url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',\n ),\n dict(\n pattern=r'PEP[- ](?P<pep_number>\\d+)',\n url='https://www.python.org/dev/peps/pep-{pep_number:0>4}/',\n ),\n dict(\n pattern=r'setuptools_svn #(?P<setuptools_svn>\\d+)',\n url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',\n ),\n dict(\n pattern=r'pypa/distutils#(?P<distutils>\\d+)',\n url='{GH}/pypa/distutils/issues/{distutils}',\n ),\n dict(\n pattern=r'^(?m)((?P<scm_version>v?\\d+(\\.\\d+){1,2}))\\n[-=]+\\n',\n with_scm='{text}\\n{rev[timestamp]:%d %b %Y}\\n',\n ),\n ],\n ),\n}\n\n# Be strict about any broken references:\nnitpicky = True\n\n# Include 
Python intersphinx mapping to prevent failures\n# jaraco/skeleton#51\nextensions += ['sphinx.ext.intersphinx']\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n}\n\nintersphinx_mapping.update({\n 'pypa-build': ('https://pypa-build.readthedocs.io/en/latest/', None)\n})\n\n# Add support for linking usernames\ngithub_url = 'https://github.com'\ngithub_sponsors_url = f'{github_url}/sponsors'\nextlinks = {\n 'user': (f'{github_sponsors_url}/%s', '@'), # noqa: WPS323\n}\nextensions += ['sphinx.ext.extlinks']\n\n# Ref: https://github.com/python-attrs/attrs/pull/571/files\\\n# #diff-85987f48f1258d9ee486e3191495582dR82\ndefault_role = 'any'\n\n# HTML theme\nhtml_theme = 'furo'\nhtml_logo = \"images/logo.svg\"\n\nhtml_theme_options = {\n \"sidebar_hide_name\": True,\n \"light_css_variables\": {\n \"color-brand-primary\": \"#336790\", # \"blue\"\n \"color-brand-content\": \"#336790\",\n },\n \"dark_css_variables\": {\n \"color-brand-primary\": \"#E5B62F\", # \"yellow\"\n \"color-brand-content\": \"#E5B62F\",\n },\n}\n\n# Add support for inline tabs\nextensions += ['sphinx_inline_tabs']\n\n# Support for distutils\n\n# Ref: https://stackoverflow.com/a/30624034/595220\nnitpick_ignore = [\n ('c:func', 'SHGetSpecialFolderPath'), # ref to MS docs\n ('envvar', 'DISTUTILS_DEBUG'), # undocumented\n ('envvar', 'HOME'), # undocumented\n ('envvar', 'PLAT'), # undocumented\n ('py:attr', 'CCompiler.language_map'), # undocumented\n ('py:attr', 'CCompiler.language_order'), # undocumented\n ('py:class', 'distutils.dist.Distribution'), # undocumented\n ('py:class', 'distutils.extension.Extension'), # undocumented\n ('py:class', 'BorlandCCompiler'), # undocumented\n ('py:class', 'CCompiler'), # undocumented\n ('py:class', 'CygwinCCompiler'), # undocumented\n ('py:class', 'distutils.dist.DistributionMetadata'), # undocumented\n ('py:class', 'FileList'), # undocumented\n ('py:class', 'IShellLink'), # ref to MS docs\n ('py:class', 'MSVCCompiler'), # undocumented\n ('py:class', 'OptionDummy'), # undocumented\n ('py:class', 'UnixCCompiler'), # undocumented\n ('py:exc', 'CompileError'), # undocumented\n ('py:exc', 'DistutilsExecError'), # undocumented\n ('py:exc', 'DistutilsFileError'), # undocumented\n ('py:exc', 'LibError'), # undocumented\n ('py:exc', 'LinkError'), # undocumented\n ('py:exc', 'PreprocessError'), # undocumented\n ('py:func', 'distutils.CCompiler.new_compiler'), # undocumented\n # undocumented:\n ('py:func', 'distutils.dist.DistributionMetadata.read_pkg_file'),\n ('py:func', 'distutils.file_util._copy_file_contents'), # undocumented\n ('py:func', 'distutils.log.debug'), # undocumented\n ('py:func', 'distutils.spawn.find_executable'), # undocumented\n ('py:func', 'distutils.spawn.spawn'), # undocumented\n # TODO: check https://docutils.rtfd.io in the future\n ('py:mod', 'docutils'), # there's no Sphinx site documenting this\n]\n\n# Allow linking objects on other Sphinx sites seamlessly:\nintersphinx_mapping.update(\n python=('https://docs.python.org/3', None),\n python2=('https://docs.python.org/2', None),\n)\n\n# Add support for the unreleased \"next-version\" change notes\nextensions += ['sphinxcontrib.towncrier']\n# Extension needs a path from here to the towncrier config.\ntowncrier_draft_working_directory = '..'\n# Avoid an empty section for unpublished changes.\ntowncrier_draft_include_empty = False\n\nextensions += ['jaraco.tidelift']\n\n# Add icons (aka \"favicons\") to documentation\nsys.path.append(os.path.join(os.path.dirname(__file__), '_ext'))\nextensions += 
['_custom_icons']\n\n# List of dicts with <link> HTML attributes\n# as defined in https://developer.mozilla.org/en-US/docs/Web/HTML/Element/link\n# except that ``file`` gets replaced with the correct ``href``\nicons = [\n { # \"Catch-all\" goes first, otherwise some browsers will overwrite\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"file\": \"images/logo-symbol-only.svg\",\n \"sizes\": \"any\"\n },\n { # Version with thicker strokes for better visibility at smaller sizes\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"file\": \"images/favicon.svg\",\n \"sizes\": \"16x16 24x24 32x32 48x48\"\n },\n # rel=\"apple-touch-icon\" does not support SVG yet\n]\n\nintersphinx_mapping['pip'] = 'https://pip.pypa.io/en/latest', None\n", "path": "docs/conf.py"}], "after_files": [{"content": "import os\nimport sys\n\nextensions = ['sphinx.ext.autodoc', 'jaraco.packaging.sphinx', 'rst.linker']\n\nmaster_doc = \"index\"\n\nlink_files = {\n '../CHANGES.rst': dict(\n using=dict(\n BB='https://bitbucket.org',\n GH='https://github.com',\n ),\n replace=[\n dict(\n pattern=r'(Issue )?#(?P<issue>\\d+)',\n url='{package_url}/issues/{issue}',\n ),\n dict(\n pattern=r'BB Pull Request ?#(?P<bb_pull_request>\\d+)',\n url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',\n ),\n dict(\n pattern=r'Distribute #(?P<distribute>\\d+)',\n url='{BB}/tarek/distribute/issue/{distribute}',\n ),\n dict(\n pattern=r'Buildout #(?P<buildout>\\d+)',\n url='{GH}/buildout/buildout/issues/{buildout}',\n ),\n dict(\n pattern=r'Old Setuptools #(?P<old_setuptools>\\d+)',\n url='http://bugs.python.org/setuptools/issue{old_setuptools}',\n ),\n dict(\n pattern=r'Jython #(?P<jython>\\d+)',\n url='http://bugs.jython.org/issue{jython}',\n ),\n dict(\n pattern=r'(Python #|bpo-)(?P<python>\\d+)',\n url='http://bugs.python.org/issue{python}',\n ),\n dict(\n pattern=r'Interop #(?P<interop>\\d+)',\n url='{GH}/pypa/interoperability-peps/issues/{interop}',\n ),\n dict(\n pattern=r'Pip #(?P<pip>\\d+)',\n url='{GH}/pypa/pip/issues/{pip}',\n ),\n dict(\n pattern=r'Packaging #(?P<packaging>\\d+)',\n url='{GH}/pypa/packaging/issues/{packaging}',\n ),\n dict(\n pattern=r'[Pp]ackaging (?P<packaging_ver>\\d+(\\.\\d+)+)',\n url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',\n ),\n dict(\n pattern=r'PEP[- ](?P<pep_number>\\d+)',\n url='https://www.python.org/dev/peps/pep-{pep_number:0>4}/',\n ),\n dict(\n pattern=r'setuptools_svn #(?P<setuptools_svn>\\d+)',\n url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',\n ),\n dict(\n pattern=r'pypa/distutils#(?P<distutils>\\d+)',\n url='{GH}/pypa/distutils/issues/{distutils}',\n ),\n dict(\n pattern=r'pypa/distutils@(?P<distutils_commit>[\\da-f]+)',\n url='{GH}/pypa/distutils/commit/{distutils_commit}',\n ),\n dict(\n pattern=r'^(?m)((?P<scm_version>v?\\d+(\\.\\d+){1,2}))\\n[-=]+\\n',\n with_scm='{text}\\n{rev[timestamp]:%d %b %Y}\\n',\n ),\n ],\n ),\n}\n\n# Be strict about any broken references:\nnitpicky = True\n\n# Include Python intersphinx mapping to prevent failures\n# jaraco/skeleton#51\nextensions += ['sphinx.ext.intersphinx']\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n}\n\nintersphinx_mapping.update({\n 'pypa-build': ('https://pypa-build.readthedocs.io/en/latest/', None)\n})\n\n# Add support for linking usernames\ngithub_url = 'https://github.com'\ngithub_sponsors_url = f'{github_url}/sponsors'\nextlinks = {\n 'user': (f'{github_sponsors_url}/%s', '@'), # noqa: WPS323\n}\nextensions += ['sphinx.ext.extlinks']\n\n# Ref: 
https://github.com/python-attrs/attrs/pull/571/files\\\n# #diff-85987f48f1258d9ee486e3191495582dR82\ndefault_role = 'any'\n\n# HTML theme\nhtml_theme = 'furo'\nhtml_logo = \"images/logo.svg\"\n\nhtml_theme_options = {\n \"sidebar_hide_name\": True,\n \"light_css_variables\": {\n \"color-brand-primary\": \"#336790\", # \"blue\"\n \"color-brand-content\": \"#336790\",\n },\n \"dark_css_variables\": {\n \"color-brand-primary\": \"#E5B62F\", # \"yellow\"\n \"color-brand-content\": \"#E5B62F\",\n },\n}\n\n# Add support for inline tabs\nextensions += ['sphinx_inline_tabs']\n\n# Support for distutils\n\n# Ref: https://stackoverflow.com/a/30624034/595220\nnitpick_ignore = [\n ('c:func', 'SHGetSpecialFolderPath'), # ref to MS docs\n ('envvar', 'DISTUTILS_DEBUG'), # undocumented\n ('envvar', 'HOME'), # undocumented\n ('envvar', 'PLAT'), # undocumented\n ('py:attr', 'CCompiler.language_map'), # undocumented\n ('py:attr', 'CCompiler.language_order'), # undocumented\n ('py:class', 'distutils.dist.Distribution'), # undocumented\n ('py:class', 'distutils.extension.Extension'), # undocumented\n ('py:class', 'BorlandCCompiler'), # undocumented\n ('py:class', 'CCompiler'), # undocumented\n ('py:class', 'CygwinCCompiler'), # undocumented\n ('py:class', 'distutils.dist.DistributionMetadata'), # undocumented\n ('py:class', 'FileList'), # undocumented\n ('py:class', 'IShellLink'), # ref to MS docs\n ('py:class', 'MSVCCompiler'), # undocumented\n ('py:class', 'OptionDummy'), # undocumented\n ('py:class', 'UnixCCompiler'), # undocumented\n ('py:exc', 'CompileError'), # undocumented\n ('py:exc', 'DistutilsExecError'), # undocumented\n ('py:exc', 'DistutilsFileError'), # undocumented\n ('py:exc', 'LibError'), # undocumented\n ('py:exc', 'LinkError'), # undocumented\n ('py:exc', 'PreprocessError'), # undocumented\n ('py:func', 'distutils.CCompiler.new_compiler'), # undocumented\n # undocumented:\n ('py:func', 'distutils.dist.DistributionMetadata.read_pkg_file'),\n ('py:func', 'distutils.file_util._copy_file_contents'), # undocumented\n ('py:func', 'distutils.log.debug'), # undocumented\n ('py:func', 'distutils.spawn.find_executable'), # undocumented\n ('py:func', 'distutils.spawn.spawn'), # undocumented\n # TODO: check https://docutils.rtfd.io in the future\n ('py:mod', 'docutils'), # there's no Sphinx site documenting this\n]\n\n# Allow linking objects on other Sphinx sites seamlessly:\nintersphinx_mapping.update(\n python=('https://docs.python.org/3', None),\n python2=('https://docs.python.org/2', None),\n)\n\n# Add support for the unreleased \"next-version\" change notes\nextensions += ['sphinxcontrib.towncrier']\n# Extension needs a path from here to the towncrier config.\ntowncrier_draft_working_directory = '..'\n# Avoid an empty section for unpublished changes.\ntowncrier_draft_include_empty = False\n\nextensions += ['jaraco.tidelift']\n\n# Add icons (aka \"favicons\") to documentation\nsys.path.append(os.path.join(os.path.dirname(__file__), '_ext'))\nextensions += ['_custom_icons']\n\n# List of dicts with <link> HTML attributes\n# as defined in https://developer.mozilla.org/en-US/docs/Web/HTML/Element/link\n# except that ``file`` gets replaced with the correct ``href``\nicons = [\n { # \"Catch-all\" goes first, otherwise some browsers will overwrite\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"file\": \"images/logo-symbol-only.svg\",\n \"sizes\": \"any\"\n },\n { # Version with thicker strokes for better visibility at smaller sizes\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"file\": 
\"images/favicon.svg\",\n \"sizes\": \"16x16 24x24 32x32 48x48\"\n },\n # rel=\"apple-touch-icon\" does not support SVG yet\n]\n\nintersphinx_mapping['pip'] = 'https://pip.pypa.io/en/latest', None\n", "path": "docs/conf.py"}]}
| 2,989 | 179 |
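A standalone sketch (not part of the setuptools repository) of what the rst.linker entry added in the patch above does: the same named-group regex and URL template, applied with plain `re`. The real substitution is performed by rst.linker when Sphinx builds the changelog; the sample changelog text below is invented.

```py
# Mimic the pattern/url pair added to docs/conf.py in the diff above.
import re

GH = 'https://github.com'
pattern = re.compile(r'pypa/distutils@(?P<distutils_commit>[\da-f]+)')
url_template = GH + '/pypa/distutils/commit/{distutils_commit}'

text = 'Pulled in pypa/distutils@f1b0a2b to fix several warnings.'  # invented changelog line
match = pattern.search(text)
if match:
    # rst.linker substitutes the match with this expanded URL at build time
    print(url_template.format(**match.groupdict()))
    # -> https://github.com/pypa/distutils/commit/f1b0a2b
```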
gh_patches_debug_26167
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-969
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Indent JSON data while exporting it as Python code
I was testing out a web API and used the "Export flow as Python code" feature for the first time as a user, and noticed a possible improvement.
Currently we just export the `flow.request.body` as is (independent of its content type), but mitmproxy's interface is smart and renders different bodies differently (e.g. it indents JSON)
I think we could add this indent behaviour while exporting things as code too.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/flow_export.py`
Content:
```
1 import urllib
2 import netlib.http
3 from textwrap import dedent
4
5
6 def curl_command(flow):
7 data = "curl "
8
9 for k, v in flow.request.headers.fields:
10 data += "-H '%s:%s' " % (k, v)
11
12 if flow.request.method != "GET":
13 data += "-X %s " % flow.request.method
14
15 full_url = flow.request.scheme + "://" + flow.request.host + flow.request.path
16 data += "'%s'" % full_url
17
18 if flow.request.content:
19 data += " --data-binary '%s'" % flow.request.content
20
21 return data
22
23
24 def python_code(flow):
25 code = dedent("""
26 import requests
27
28 url = '{url}'
29 {headers}{params}{data}
30 response = requests.request(
31 method='{method}',
32 url=url,{args}
33 )
34
35 print(response.text)
36 """).strip()
37
38 components = map(lambda x: urllib.quote(x, safe=""), flow.request.path_components)
39 url = flow.request.scheme + "://" + flow.request.host + "/" + "/".join(components)
40
41 args = ""
42 headers = ""
43 if flow.request.headers:
44 lines = [" '%s': '%s',\n" % (k, v) for k, v in flow.request.headers.fields]
45 headers += "\nheaders = {\n%s}\n" % "".join(lines)
46 args += "\n headers=headers,"
47
48 params = ""
49 if flow.request.query:
50 lines = [" '%s': '%s',\n" % (k, v) for k, v in flow.request.query]
51 params = "\nparams = {\n%s}\n" % "".join(lines)
52 args += "\n params=params,"
53
54 data = ""
55 if flow.request.body:
56 data = "\ndata = '''%s'''\n" % flow.request.body
57 args += "\n data=data,"
58
59 code = code.format(
60 url=url,
61 headers=headers,
62 params=params,
63 data=data,
64 method=flow.request.method,
65 args=args,
66 )
67
68 return code
69
70
71 def raw_request(flow):
72 data = netlib.http.http1.assemble_request(flow.request)
73 return data
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/flow_export.py b/mitmproxy/flow_export.py
--- a/mitmproxy/flow_export.py
+++ b/mitmproxy/flow_export.py
@@ -1,7 +1,10 @@
+import json
import urllib
-import netlib.http
from textwrap import dedent
+import netlib.http
+from netlib.utils import parse_content_type
+
def curl_command(flow):
data = "curl "
@@ -53,8 +56,16 @@
data = ""
if flow.request.body:
- data = "\ndata = '''%s'''\n" % flow.request.body
- args += "\n data=data,"
+ json_obj = is_json(flow.request.headers, flow.request.body)
+ if json_obj:
+ # Without the separators field json.dumps() produces
+ # trailing white spaces: https://bugs.python.org/issue16333
+ data = json.dumps(json_obj, indent=4, separators=(',', ': '))
+ data = "\njson = %s\n" % data
+ args += "\n json=json,"
+ else:
+ data = "\ndata = '''%s'''\n" % flow.request.body
+ args += "\n data=data,"
code = code.format(
url=url,
@@ -71,3 +82,14 @@
def raw_request(flow):
data = netlib.http.http1.assemble_request(flow.request)
return data
+
+
+def is_json(headers, content):
+ if headers:
+ ct = parse_content_type(headers.get("content-type", ""))
+ if ct and "%s/%s" % (ct[0], ct[1]) == "application/json":
+ try:
+ return json.loads(content)
+ except ValueError:
+ return False
+ return False
|
{"golden_diff": "diff --git a/mitmproxy/flow_export.py b/mitmproxy/flow_export.py\n--- a/mitmproxy/flow_export.py\n+++ b/mitmproxy/flow_export.py\n@@ -1,7 +1,10 @@\n+import json\n import urllib\n-import netlib.http\n from textwrap import dedent\n \n+import netlib.http\n+from netlib.utils import parse_content_type\n+\n \n def curl_command(flow):\n data = \"curl \"\n@@ -53,8 +56,16 @@\n \n data = \"\"\n if flow.request.body:\n- data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n- args += \"\\n data=data,\"\n+ json_obj = is_json(flow.request.headers, flow.request.body)\n+ if json_obj:\n+ # Without the separators field json.dumps() produces\n+ # trailing white spaces: https://bugs.python.org/issue16333\n+ data = json.dumps(json_obj, indent=4, separators=(',', ': '))\n+ data = \"\\njson = %s\\n\" % data\n+ args += \"\\n json=json,\"\n+ else:\n+ data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n+ args += \"\\n data=data,\"\n \n code = code.format(\n url=url,\n@@ -71,3 +82,14 @@\n def raw_request(flow):\n data = netlib.http.http1.assemble_request(flow.request)\n return data\n+\n+\n+def is_json(headers, content):\n+ if headers:\n+ ct = parse_content_type(headers.get(\"content-type\", \"\"))\n+ if ct and \"%s/%s\" % (ct[0], ct[1]) == \"application/json\":\n+ try:\n+ return json.loads(content)\n+ except ValueError:\n+ return False\n+ return False\n", "issue": "Indent JSON data while exporting it as Python code\nI was testing out a web API and used the \"Export flow as Python code\" feature for the first time as user, and noticed an improvement.\n\nCurrently we just export the `flow.request.body` as is (independent of it's content type) but mitmproxy's interface is smart and renders different bodies differently (for eg. it indents JSON)\n\nI think we could add this indent behaviour while exporting things as code too.\n\n", "before_files": [{"content": "import urllib\nimport netlib.http\nfrom textwrap import dedent\n\n\ndef curl_command(flow):\n data = \"curl \"\n\n for k, v in flow.request.headers.fields:\n data += \"-H '%s:%s' \" % (k, v)\n\n if flow.request.method != \"GET\":\n data += \"-X %s \" % flow.request.method\n\n full_url = flow.request.scheme + \"://\" + flow.request.host + flow.request.path\n data += \"'%s'\" % full_url\n\n if flow.request.content:\n data += \" --data-binary '%s'\" % flow.request.content\n\n return data\n\n\ndef python_code(flow):\n code = dedent(\"\"\"\n import requests\n\n url = '{url}'\n {headers}{params}{data}\n response = requests.request(\n method='{method}',\n url=url,{args}\n )\n\n print(response.text)\n \"\"\").strip()\n\n components = map(lambda x: urllib.quote(x, safe=\"\"), flow.request.path_components)\n url = flow.request.scheme + \"://\" + flow.request.host + \"/\" + \"/\".join(components)\n\n args = \"\"\n headers = \"\"\n if flow.request.headers:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.headers.fields]\n headers += \"\\nheaders = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n headers=headers,\"\n\n params = \"\"\n if flow.request.query:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.query]\n params = \"\\nparams = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n params=params,\"\n\n data = \"\"\n if flow.request.body:\n data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n args += \"\\n data=data,\"\n\n code = code.format(\n url=url,\n headers=headers,\n params=params,\n data=data,\n method=flow.request.method,\n args=args,\n )\n\n return code\n\n\ndef raw_request(flow):\n data = 
netlib.http.http1.assemble_request(flow.request)\n return data\n", "path": "mitmproxy/flow_export.py"}], "after_files": [{"content": "import json\nimport urllib\nfrom textwrap import dedent\n\nimport netlib.http\nfrom netlib.utils import parse_content_type\n\n\ndef curl_command(flow):\n data = \"curl \"\n\n for k, v in flow.request.headers.fields:\n data += \"-H '%s:%s' \" % (k, v)\n\n if flow.request.method != \"GET\":\n data += \"-X %s \" % flow.request.method\n\n full_url = flow.request.scheme + \"://\" + flow.request.host + flow.request.path\n data += \"'%s'\" % full_url\n\n if flow.request.content:\n data += \" --data-binary '%s'\" % flow.request.content\n\n return data\n\n\ndef python_code(flow):\n code = dedent(\"\"\"\n import requests\n\n url = '{url}'\n {headers}{params}{data}\n response = requests.request(\n method='{method}',\n url=url,{args}\n )\n\n print(response.text)\n \"\"\").strip()\n\n components = map(lambda x: urllib.quote(x, safe=\"\"), flow.request.path_components)\n url = flow.request.scheme + \"://\" + flow.request.host + \"/\" + \"/\".join(components)\n\n args = \"\"\n headers = \"\"\n if flow.request.headers:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.headers.fields]\n headers += \"\\nheaders = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n headers=headers,\"\n\n params = \"\"\n if flow.request.query:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.query]\n params = \"\\nparams = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n params=params,\"\n\n data = \"\"\n if flow.request.body:\n json_obj = is_json(flow.request.headers, flow.request.body)\n if json_obj:\n # Without the separators field json.dumps() produces\n # trailing white spaces: https://bugs.python.org/issue16333\n data = json.dumps(json_obj, indent=4, separators=(',', ': '))\n data = \"\\njson = %s\\n\" % data\n args += \"\\n json=json,\"\n else:\n data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n args += \"\\n data=data,\"\n\n code = code.format(\n url=url,\n headers=headers,\n params=params,\n data=data,\n method=flow.request.method,\n args=args,\n )\n\n return code\n\n\ndef raw_request(flow):\n data = netlib.http.http1.assemble_request(flow.request)\n return data\n\n\ndef is_json(headers, content):\n if headers:\n ct = parse_content_type(headers.get(\"content-type\", \"\"))\n if ct and \"%s/%s\" % (ct[0], ct[1]) == \"application/json\":\n try:\n return json.loads(content)\n except ValueError:\n return False\n return False\n", "path": "mitmproxy/flow_export.py"}]}
| 982 | 408 |
gh_patches_debug_12170
|
rasdani/github-patches
|
git_diff
|
obspy__obspy-3209
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pop check_compression from argument list for readers?
I wrote a small io plugin for ObsPy events based on zipped files, see https://github.com/trichter/obspycsv. Because ObsPy automatically unpacks zip files, I had some difficulties getting it working.
I found the check_compression argument in the uncompress_file decorator, with which it works fine. I think, however, that it should be popped from the argument list [here](https://github.com/obspy/obspy/blob/master/obspy/core/util/decorator.py#L139). Otherwise:
```py
In [1]: ev = read_events()
In [2]: ev.write('test.xml', 'QUAKEML')
In [3]: read_events('test.xml', check_compression=False)
TypeError: _read_quakeml() got an unexpected keyword argument 'check_compression'
```
Ideally, a plugin could decide on its own whether the compression check should be skipped, e.g. by setting an additional entry point. I see, however, that this feature would need quite some refactoring of the reader code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `obspy/core/util/decorator.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Decorator used in ObsPy.
4
5 :copyright:
6 The ObsPy Development Team ([email protected])
7 :license:
8 GNU Lesser General Public License, Version 3
9 (https://www.gnu.org/copyleft/lesser.html)
10 """
11 import functools
12 import inspect
13 from pathlib import Path
14 import re
15 import socket
16 import tarfile
17 import unittest
18 import warnings
19 import zipfile
20
21 import numpy as np
22 from decorator import decorator
23
24 from obspy.core.util import get_example_file
25 from obspy.core.util.base import NamedTemporaryFile
26 from obspy.core.util.deprecation_helpers import ObsPyDeprecationWarning
27
28
29 def deprecated(warning_msg=None):
30 """
31 This is a decorator which can be used to mark functions as deprecated.
32
33 .. note::
34 Actually, this is not a decorator itself but a decorator factory,
35 returning the correct decorator for the specified options. It can be
36 used just like a decorator.
37
38 It will result in a warning being emitted when the function is used.
39 """
40 @decorator
41 def _deprecated(func, *args, **kwargs):
42 if 'deprecated' in str(func.__doc__).lower():
43 msg = func.__doc__
44 elif warning_msg:
45 msg = warning_msg
46 func.__doc__ = warning_msg
47 else:
48 msg = "Call to deprecated function %s." % func.__name__
49 warnings.warn(msg, category=ObsPyDeprecationWarning, stacklevel=3)
50 return func(*args, **kwargs)
51 return _deprecated
52
53
54 def deprecated_keywords(keywords):
55 """
56 Decorator for marking keywords as deprecated.
57
58 .. note::
59 Actually, this is not a decorator itself but a decorator factory,
60 returning the correct decorator for the specified options. It can be
61 used just like a decorator.
62
63 :type keywords: dict
64 :param keywords: old/new keyword names as key/value pairs.
65 """
66 def fdec(func):
67 fname = func.__name__
68 msg = "Deprecated keyword %s in %s() call - please use %s instead."
69 msg2 = "Deprecated keyword %s in %s() call - ignoring."
70 msg3 = ("Conflicting deprecated keywords (%s) in %s() call"
71 " - please use new '%s' keyword instead.")
72
73 @functools.wraps(func)
74 def echo_func(*args, **kwargs):
75 # check if multiple deprecated keywords get mapped to the same new
76 # keyword
77 new_keyword_appearance_counts = dict.fromkeys(keywords.values(), 0)
78 for key, new_key in keywords.items():
79 if key in kwargs:
80 new_keyword_appearance_counts[new_key] += 1
81 for key_ in keywords.values():
82 # ignore `None` as new value, it means that no mapping is
83 # happening..
84 if key_ is None:
85 continue
86 if new_keyword_appearance_counts[key_] > 1:
87 conflicting_keys = ", ".join(
88 [old_key for old_key, new_key in keywords.items()
89 if new_key == key_])
90 raise Exception(msg3 % (conflicting_keys, fname, new_key))
91 # map deprecated keywords to new keywords
92 for kw in list(kwargs):
93 if kw in keywords:
94 nkw = keywords[kw]
95 if nkw is None:
96 warnings.warn(msg2 % (kw, fname),
97 category=ObsPyDeprecationWarning,
98 stacklevel=3)
99 else:
100 warnings.warn(msg % (kw, fname, nkw),
101 category=ObsPyDeprecationWarning,
102 stacklevel=3)
103 kwargs[nkw] = kwargs[kw]
104 del kwargs[kw]
105 return func(*args, **kwargs)
106 return echo_func
107
108 return fdec
109
110
111 @decorator
112 def skip_on_network_error(func, *args, **kwargs):
113 """
114 Decorator for unittest to mark test routines that fail with certain network
115 errors (e.g. timeouts) as "skipped" rather than "Error".
116 """
117 try:
118 return func(*args, **kwargs)
119 ###################################################
120 # add more except clauses like this to add other
121 # network errors that should be skipped
122 except socket.timeout as e:
123 if str(e) == "timed out":
124 raise unittest.SkipTest(str(e))
125 ###################################################
126 except socket.error as e:
127 if str(e) == "[Errno 110] Connection timed out":
128 raise unittest.SkipTest(str(e))
129 # general except to be able to generally reraise
130 except Exception:
131 raise
132
133
134 @decorator
135 def uncompress_file(func, filename, *args, **kwargs):
136 """
137 Decorator used for temporary uncompressing file if .gz or .bz2 archive.
138 """
139 if not kwargs.pop('check_compression', True):
140 return func(filename, *args, **kwargs)
141 if not isinstance(filename, str):
142 return func(filename, *args, **kwargs)
143 elif not Path(filename).exists():
144 msg = "File not found '%s'" % (filename)
145 raise IOError(msg)
146 # check if we got a compressed file or archive
147 obj_list = []
148 if tarfile.is_tarfile(filename):
149 try:
150 # reading with transparent compression
151 with tarfile.open(filename, 'r|*') as tar:
152 for tarinfo in tar:
153 # only handle regular files
154 if not tarinfo.isfile():
155 continue
156 data = tar.extractfile(tarinfo).read()
157 # Skip empty files - we don't need them no matter what
158 # and it guards against rare cases where waveforms files
159 # are also slightly valid tar-files.
160 if not data:
161 continue
162 obj_list.append(data)
163 except Exception:
164 pass
165 elif zipfile.is_zipfile(filename):
166 try:
167 zip = zipfile.ZipFile(filename)
168 obj_list = [zip.read(name) for name in zip.namelist()]
169 except Exception:
170 pass
171 elif filename.endswith('.bz2'):
172 # bz2 module
173 try:
174 import bz2
175 with open(filename, 'rb') as fp:
176 obj_list.append(bz2.decompress(fp.read()))
177 except Exception:
178 pass
179 elif filename.endswith('.gz'):
180 # gzip module
181 try:
182 import gzip
183 with gzip.open(filename, 'rb') as fp:
184 obj_list.append(fp.read())
185 except Exception:
186 pass
187 # handle results
188 if obj_list:
189 # write results to temporary files
190 result = None
191 for obj in obj_list:
192 with NamedTemporaryFile() as tempfile:
193 tempfile._fileobj.write(obj)
194 stream = func(tempfile.name, *args, **kwargs)
195 # just add other stream objects to first stream
196 if result is None:
197 result = stream
198 else:
199 result += stream
200 else:
201 # no compressions
202 result = func(filename, *args, **kwargs)
203 return result
204
205
206 @decorator
207 def raise_if_masked(func, *args, **kwargs):
208 """
209 Raises if the first argument (self in case of methods) is a Trace with
210 masked values or a Stream containing a Trace with masked values.
211 """
212 arrays = []
213 # first arg seems to be a Stream
214 if hasattr(args[0], "traces"):
215 arrays = [tr.data for tr in args[0]]
216 # first arg seems to be a Trace
217 if hasattr(args[0], "data") and isinstance(args[0].data, np.ndarray):
218 arrays = [args[0].data]
219 for arr in arrays:
220 if np.ma.is_masked(arr):
221 msg = "Trace with masked values found. This is not " + \
222 "supported for this operation. Try the split() " + \
223 "method on Trace/Stream to produce a Stream with " + \
224 "unmasked Traces."
225 raise NotImplementedError(msg)
226 return func(*args, **kwargs)
227
228
229 @decorator
230 def skip_if_no_data(func, *args, **kwargs):
231 """
232 Does nothing if the first argument (self in case of methods) is a Trace
233 with no data in it.
234 """
235 if not args[0]:
236 return
237 return func(*args, **kwargs)
238
239
240 def map_example_filename(arg_kwarg_name):
241 """
242 Decorator that replaces "/path/to/filename" patterns in the arg or kwarg
243 of the specified name with the correct file path. If the pattern is not
244 encountered nothing is done.
245
246 .. note::
247 Actually, this is not a decorator itself but a decorator factory,
248 returning the correct decorator for the specified options. It can be
249 used just like a decorator.
250
251 :type arg_kwarg_name: str
252 :param arg_kwarg_name: name of the arg/kwarg that should be (tried) to map
253 """
254 @decorator
255 def _map_example_filename(func, *args, **kwargs):
256 prefix = '/path/to/'
257 # check kwargs
258 if arg_kwarg_name in kwargs:
259 if isinstance(kwargs[arg_kwarg_name], str):
260 if re.match(prefix, kwargs[arg_kwarg_name]):
261 try:
262 kwargs[arg_kwarg_name] = \
263 get_example_file(kwargs[arg_kwarg_name][9:])
264 # file not found by get_example_file:
265 except IOError:
266 pass
267 # check args
268 else:
269 try:
270 inspected_args = [
271 p.name
272 for p in inspect.signature(func).parameters.values()
273 ]
274 except AttributeError:
275 inspected_args = inspect.getargspec(func).args
276 try:
277 ind = inspected_args.index(arg_kwarg_name)
278 except ValueError:
279 pass
280 else:
281 if ind < len(args) and isinstance(args[ind], str):
282 # need to check length of args from inspect
283 if re.match(prefix, args[ind]):
284 try:
285 args = list(args)
286 args[ind] = get_example_file(args[ind][9:])
287 args = tuple(args)
288 # file not found by get_example_file:
289 except IOError:
290 pass
291 return func(*args, **kwargs)
292 return _map_example_filename
293
294
295 if __name__ == '__main__':
296 import doctest
297 doctest.testmod(exclude_empty=True)
298
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/obspy/core/util/decorator.py b/obspy/core/util/decorator.py
--- a/obspy/core/util/decorator.py
+++ b/obspy/core/util/decorator.py
@@ -164,8 +164,14 @@
pass
elif zipfile.is_zipfile(filename):
try:
- zip = zipfile.ZipFile(filename)
- obj_list = [zip.read(name) for name in zip.namelist()]
+ with zipfile.ZipFile(filename) as zip:
+ if b'obspy_no_uncompress' in zip.comment:
+ # be nice to plugins based on zip format
+ # do not uncompress the file if tag is present
+ # see issue #3192
+ obj_list = None
+ else:
+ obj_list = [zip.read(name) for name in zip.namelist()]
except Exception:
pass
elif filename.endswith('.bz2'):
|
{"golden_diff": "diff --git a/obspy/core/util/decorator.py b/obspy/core/util/decorator.py\n--- a/obspy/core/util/decorator.py\n+++ b/obspy/core/util/decorator.py\n@@ -164,8 +164,14 @@\n pass\n elif zipfile.is_zipfile(filename):\n try:\n- zip = zipfile.ZipFile(filename)\n- obj_list = [zip.read(name) for name in zip.namelist()]\n+ with zipfile.ZipFile(filename) as zip:\n+ if b'obspy_no_uncompress' in zip.comment:\n+ # be nice to plugins based on zip format\n+ # do not uncompress the file if tag is present\n+ # see issue #3192\n+ obj_list = None\n+ else:\n+ obj_list = [zip.read(name) for name in zip.namelist()]\n except Exception:\n pass\n elif filename.endswith('.bz2'):\n", "issue": "Pop check_compression from argument list for readers?\nI wrote a small io plugin for ObsPy events based on zipped files, see https://github.com/trichter/obspycsv. Because ObsPy automatically unpacks zip files, I had some difficulties to get it working.\r\nI found the check_compression argument in the uncompress_file decorator with which its working fine. I think, however, that it should be popped from the argument list [here](https://github.com/obspy/obspy/blob/master/obspy/core/util/decorator.py#L139). Otherwise:\r\n\r\n```py\r\nIn [1]: ev = read_events()\r\nIn [2]: ev.write('test.xml', 'QUAKEML')\r\nIn [3]: read_events('test.xml', check_compression=False)\r\nTypeError: _read_quakeml() got an unexpected keyword argument 'check_compression'\r\n```\r\n\r\nIdeally, a plugin could define on its own if the compression check should be skipped, e.g. by setting an additional entry point. I see, however, that this feature needs quite some refactoring of the reader code.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDecorator used in ObsPy.\n\n:copyright:\n The ObsPy Development Team ([email protected])\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nimport functools\nimport inspect\nfrom pathlib import Path\nimport re\nimport socket\nimport tarfile\nimport unittest\nimport warnings\nimport zipfile\n\nimport numpy as np\nfrom decorator import decorator\n\nfrom obspy.core.util import get_example_file\nfrom obspy.core.util.base import NamedTemporaryFile\nfrom obspy.core.util.deprecation_helpers import ObsPyDeprecationWarning\n\n\ndef deprecated(warning_msg=None):\n \"\"\"\n This is a decorator which can be used to mark functions as deprecated.\n\n .. note::\n Actually, this is not a decorator itself but a decorator factory,\n returning the correct decorator for the specified options. It can be\n used just like a decorator.\n\n It will result in a warning being emitted when the function is used.\n \"\"\"\n @decorator\n def _deprecated(func, *args, **kwargs):\n if 'deprecated' in str(func.__doc__).lower():\n msg = func.__doc__\n elif warning_msg:\n msg = warning_msg\n func.__doc__ = warning_msg\n else:\n msg = \"Call to deprecated function %s.\" % func.__name__\n warnings.warn(msg, category=ObsPyDeprecationWarning, stacklevel=3)\n return func(*args, **kwargs)\n return _deprecated\n\n\ndef deprecated_keywords(keywords):\n \"\"\"\n Decorator for marking keywords as deprecated.\n\n .. note::\n Actually, this is not a decorator itself but a decorator factory,\n returning the correct decorator for the specified options. 
It can be\n used just like a decorator.\n\n :type keywords: dict\n :param keywords: old/new keyword names as key/value pairs.\n \"\"\"\n def fdec(func):\n fname = func.__name__\n msg = \"Deprecated keyword %s in %s() call - please use %s instead.\"\n msg2 = \"Deprecated keyword %s in %s() call - ignoring.\"\n msg3 = (\"Conflicting deprecated keywords (%s) in %s() call\"\n \" - please use new '%s' keyword instead.\")\n\n @functools.wraps(func)\n def echo_func(*args, **kwargs):\n # check if multiple deprecated keywords get mapped to the same new\n # keyword\n new_keyword_appearance_counts = dict.fromkeys(keywords.values(), 0)\n for key, new_key in keywords.items():\n if key in kwargs:\n new_keyword_appearance_counts[new_key] += 1\n for key_ in keywords.values():\n # ignore `None` as new value, it means that no mapping is\n # happening..\n if key_ is None:\n continue\n if new_keyword_appearance_counts[key_] > 1:\n conflicting_keys = \", \".join(\n [old_key for old_key, new_key in keywords.items()\n if new_key == key_])\n raise Exception(msg3 % (conflicting_keys, fname, new_key))\n # map deprecated keywords to new keywords\n for kw in list(kwargs):\n if kw in keywords:\n nkw = keywords[kw]\n if nkw is None:\n warnings.warn(msg2 % (kw, fname),\n category=ObsPyDeprecationWarning,\n stacklevel=3)\n else:\n warnings.warn(msg % (kw, fname, nkw),\n category=ObsPyDeprecationWarning,\n stacklevel=3)\n kwargs[nkw] = kwargs[kw]\n del kwargs[kw]\n return func(*args, **kwargs)\n return echo_func\n\n return fdec\n\n\n@decorator\ndef skip_on_network_error(func, *args, **kwargs):\n \"\"\"\n Decorator for unittest to mark test routines that fail with certain network\n errors (e.g. timeouts) as \"skipped\" rather than \"Error\".\n \"\"\"\n try:\n return func(*args, **kwargs)\n ###################################################\n # add more except clauses like this to add other\n # network errors that should be skipped\n except socket.timeout as e:\n if str(e) == \"timed out\":\n raise unittest.SkipTest(str(e))\n ###################################################\n except socket.error as e:\n if str(e) == \"[Errno 110] Connection timed out\":\n raise unittest.SkipTest(str(e))\n # general except to be able to generally reraise\n except Exception:\n raise\n\n\n@decorator\ndef uncompress_file(func, filename, *args, **kwargs):\n \"\"\"\n Decorator used for temporary uncompressing file if .gz or .bz2 archive.\n \"\"\"\n if not kwargs.pop('check_compression', True):\n return func(filename, *args, **kwargs)\n if not isinstance(filename, str):\n return func(filename, *args, **kwargs)\n elif not Path(filename).exists():\n msg = \"File not found '%s'\" % (filename)\n raise IOError(msg)\n # check if we got a compressed file or archive\n obj_list = []\n if tarfile.is_tarfile(filename):\n try:\n # reading with transparent compression\n with tarfile.open(filename, 'r|*') as tar:\n for tarinfo in tar:\n # only handle regular files\n if not tarinfo.isfile():\n continue\n data = tar.extractfile(tarinfo).read()\n # Skip empty files - we don't need them no matter what\n # and it guards against rare cases where waveforms files\n # are also slightly valid tar-files.\n if not data:\n continue\n obj_list.append(data)\n except Exception:\n pass\n elif zipfile.is_zipfile(filename):\n try:\n zip = zipfile.ZipFile(filename)\n obj_list = [zip.read(name) for name in zip.namelist()]\n except Exception:\n pass\n elif filename.endswith('.bz2'):\n # bz2 module\n try:\n import bz2\n with open(filename, 'rb') as fp:\n 
obj_list.append(bz2.decompress(fp.read()))\n except Exception:\n pass\n elif filename.endswith('.gz'):\n # gzip module\n try:\n import gzip\n with gzip.open(filename, 'rb') as fp:\n obj_list.append(fp.read())\n except Exception:\n pass\n # handle results\n if obj_list:\n # write results to temporary files\n result = None\n for obj in obj_list:\n with NamedTemporaryFile() as tempfile:\n tempfile._fileobj.write(obj)\n stream = func(tempfile.name, *args, **kwargs)\n # just add other stream objects to first stream\n if result is None:\n result = stream\n else:\n result += stream\n else:\n # no compressions\n result = func(filename, *args, **kwargs)\n return result\n\n\n@decorator\ndef raise_if_masked(func, *args, **kwargs):\n \"\"\"\n Raises if the first argument (self in case of methods) is a Trace with\n masked values or a Stream containing a Trace with masked values.\n \"\"\"\n arrays = []\n # first arg seems to be a Stream\n if hasattr(args[0], \"traces\"):\n arrays = [tr.data for tr in args[0]]\n # first arg seems to be a Trace\n if hasattr(args[0], \"data\") and isinstance(args[0].data, np.ndarray):\n arrays = [args[0].data]\n for arr in arrays:\n if np.ma.is_masked(arr):\n msg = \"Trace with masked values found. This is not \" + \\\n \"supported for this operation. Try the split() \" + \\\n \"method on Trace/Stream to produce a Stream with \" + \\\n \"unmasked Traces.\"\n raise NotImplementedError(msg)\n return func(*args, **kwargs)\n\n\n@decorator\ndef skip_if_no_data(func, *args, **kwargs):\n \"\"\"\n Does nothing if the first argument (self in case of methods) is a Trace\n with no data in it.\n \"\"\"\n if not args[0]:\n return\n return func(*args, **kwargs)\n\n\ndef map_example_filename(arg_kwarg_name):\n \"\"\"\n Decorator that replaces \"/path/to/filename\" patterns in the arg or kwarg\n of the specified name with the correct file path. If the pattern is not\n encountered nothing is done.\n\n .. note::\n Actually, this is not a decorator itself but a decorator factory,\n returning the correct decorator for the specified options. 
It can be\n used just like a decorator.\n\n :type arg_kwarg_name: str\n :param arg_kwarg_name: name of the arg/kwarg that should be (tried) to map\n \"\"\"\n @decorator\n def _map_example_filename(func, *args, **kwargs):\n prefix = '/path/to/'\n # check kwargs\n if arg_kwarg_name in kwargs:\n if isinstance(kwargs[arg_kwarg_name], str):\n if re.match(prefix, kwargs[arg_kwarg_name]):\n try:\n kwargs[arg_kwarg_name] = \\\n get_example_file(kwargs[arg_kwarg_name][9:])\n # file not found by get_example_file:\n except IOError:\n pass\n # check args\n else:\n try:\n inspected_args = [\n p.name\n for p in inspect.signature(func).parameters.values()\n ]\n except AttributeError:\n inspected_args = inspect.getargspec(func).args\n try:\n ind = inspected_args.index(arg_kwarg_name)\n except ValueError:\n pass\n else:\n if ind < len(args) and isinstance(args[ind], str):\n # need to check length of args from inspect\n if re.match(prefix, args[ind]):\n try:\n args = list(args)\n args[ind] = get_example_file(args[ind][9:])\n args = tuple(args)\n # file not found by get_example_file:\n except IOError:\n pass\n return func(*args, **kwargs)\n return _map_example_filename\n\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod(exclude_empty=True)\n", "path": "obspy/core/util/decorator.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDecorator used in ObsPy.\n\n:copyright:\n The ObsPy Development Team ([email protected])\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nimport functools\nimport inspect\nfrom pathlib import Path\nimport re\nimport socket\nimport tarfile\nimport unittest\nimport warnings\nimport zipfile\n\nimport numpy as np\nfrom decorator import decorator\n\nfrom obspy.core.util import get_example_file\nfrom obspy.core.util.base import NamedTemporaryFile\nfrom obspy.core.util.deprecation_helpers import ObsPyDeprecationWarning\n\n\ndef deprecated(warning_msg=None):\n \"\"\"\n This is a decorator which can be used to mark functions as deprecated.\n\n .. note::\n Actually, this is not a decorator itself but a decorator factory,\n returning the correct decorator for the specified options. It can be\n used just like a decorator.\n\n It will result in a warning being emitted when the function is used.\n \"\"\"\n @decorator\n def _deprecated(func, *args, **kwargs):\n if 'deprecated' in str(func.__doc__).lower():\n msg = func.__doc__\n elif warning_msg:\n msg = warning_msg\n func.__doc__ = warning_msg\n else:\n msg = \"Call to deprecated function %s.\" % func.__name__\n warnings.warn(msg, category=ObsPyDeprecationWarning, stacklevel=3)\n return func(*args, **kwargs)\n return _deprecated\n\n\ndef deprecated_keywords(keywords):\n \"\"\"\n Decorator for marking keywords as deprecated.\n\n .. note::\n Actually, this is not a decorator itself but a decorator factory,\n returning the correct decorator for the specified options. 
It can be\n used just like a decorator.\n\n :type keywords: dict\n :param keywords: old/new keyword names as key/value pairs.\n \"\"\"\n def fdec(func):\n fname = func.__name__\n msg = \"Deprecated keyword %s in %s() call - please use %s instead.\"\n msg2 = \"Deprecated keyword %s in %s() call - ignoring.\"\n msg3 = (\"Conflicting deprecated keywords (%s) in %s() call\"\n \" - please use new '%s' keyword instead.\")\n\n @functools.wraps(func)\n def echo_func(*args, **kwargs):\n # check if multiple deprecated keywords get mapped to the same new\n # keyword\n new_keyword_appearance_counts = dict.fromkeys(keywords.values(), 0)\n for key, new_key in keywords.items():\n if key in kwargs:\n new_keyword_appearance_counts[new_key] += 1\n for key_ in keywords.values():\n # ignore `None` as new value, it means that no mapping is\n # happening..\n if key_ is None:\n continue\n if new_keyword_appearance_counts[key_] > 1:\n conflicting_keys = \", \".join(\n [old_key for old_key, new_key in keywords.items()\n if new_key == key_])\n raise Exception(msg3 % (conflicting_keys, fname, new_key))\n # map deprecated keywords to new keywords\n for kw in list(kwargs):\n if kw in keywords:\n nkw = keywords[kw]\n if nkw is None:\n warnings.warn(msg2 % (kw, fname),\n category=ObsPyDeprecationWarning,\n stacklevel=3)\n else:\n warnings.warn(msg % (kw, fname, nkw),\n category=ObsPyDeprecationWarning,\n stacklevel=3)\n kwargs[nkw] = kwargs[kw]\n del kwargs[kw]\n return func(*args, **kwargs)\n return echo_func\n\n return fdec\n\n\n@decorator\ndef skip_on_network_error(func, *args, **kwargs):\n \"\"\"\n Decorator for unittest to mark test routines that fail with certain network\n errors (e.g. timeouts) as \"skipped\" rather than \"Error\".\n \"\"\"\n try:\n return func(*args, **kwargs)\n ###################################################\n # add more except clauses like this to add other\n # network errors that should be skipped\n except socket.timeout as e:\n if str(e) == \"timed out\":\n raise unittest.SkipTest(str(e))\n ###################################################\n except socket.error as e:\n if str(e) == \"[Errno 110] Connection timed out\":\n raise unittest.SkipTest(str(e))\n # general except to be able to generally reraise\n except Exception:\n raise\n\n\n@decorator\ndef uncompress_file(func, filename, *args, **kwargs):\n \"\"\"\n Decorator used for temporary uncompressing file if .gz or .bz2 archive.\n \"\"\"\n if not kwargs.pop('check_compression', True):\n return func(filename, *args, **kwargs)\n if not isinstance(filename, str):\n return func(filename, *args, **kwargs)\n elif not Path(filename).exists():\n msg = \"File not found '%s'\" % (filename)\n raise IOError(msg)\n # check if we got a compressed file or archive\n obj_list = []\n if tarfile.is_tarfile(filename):\n try:\n # reading with transparent compression\n with tarfile.open(filename, 'r|*') as tar:\n for tarinfo in tar:\n # only handle regular files\n if not tarinfo.isfile():\n continue\n data = tar.extractfile(tarinfo).read()\n # Skip empty files - we don't need them no matter what\n # and it guards against rare cases where waveforms files\n # are also slightly valid tar-files.\n if not data:\n continue\n obj_list.append(data)\n except Exception:\n pass\n elif zipfile.is_zipfile(filename):\n try:\n with zipfile.ZipFile(filename) as zip:\n if b'obspy_no_uncompress' in zip.comment:\n # be nice to plugins based on zip format\n # do not uncompress the file if tag is present\n # see issue #3192\n obj_list = None\n else:\n obj_list = 
[zip.read(name) for name in zip.namelist()]\n except Exception:\n pass\n elif filename.endswith('.bz2'):\n # bz2 module\n try:\n import bz2\n with open(filename, 'rb') as fp:\n obj_list.append(bz2.decompress(fp.read()))\n except Exception:\n pass\n elif filename.endswith('.gz'):\n # gzip module\n try:\n import gzip\n with gzip.open(filename, 'rb') as fp:\n obj_list.append(fp.read())\n except Exception:\n pass\n # handle results\n if obj_list:\n # write results to temporary files\n result = None\n for obj in obj_list:\n with NamedTemporaryFile() as tempfile:\n tempfile._fileobj.write(obj)\n stream = func(tempfile.name, *args, **kwargs)\n # just add other stream objects to first stream\n if result is None:\n result = stream\n else:\n result += stream\n else:\n # no compressions\n result = func(filename, *args, **kwargs)\n return result\n\n\n@decorator\ndef raise_if_masked(func, *args, **kwargs):\n \"\"\"\n Raises if the first argument (self in case of methods) is a Trace with\n masked values or a Stream containing a Trace with masked values.\n \"\"\"\n arrays = []\n # first arg seems to be a Stream\n if hasattr(args[0], \"traces\"):\n arrays = [tr.data for tr in args[0]]\n # first arg seems to be a Trace\n if hasattr(args[0], \"data\") and isinstance(args[0].data, np.ndarray):\n arrays = [args[0].data]\n for arr in arrays:\n if np.ma.is_masked(arr):\n msg = \"Trace with masked values found. This is not \" + \\\n \"supported for this operation. Try the split() \" + \\\n \"method on Trace/Stream to produce a Stream with \" + \\\n \"unmasked Traces.\"\n raise NotImplementedError(msg)\n return func(*args, **kwargs)\n\n\n@decorator\ndef skip_if_no_data(func, *args, **kwargs):\n \"\"\"\n Does nothing if the first argument (self in case of methods) is a Trace\n with no data in it.\n \"\"\"\n if not args[0]:\n return\n return func(*args, **kwargs)\n\n\ndef map_example_filename(arg_kwarg_name):\n \"\"\"\n Decorator that replaces \"/path/to/filename\" patterns in the arg or kwarg\n of the specified name with the correct file path. If the pattern is not\n encountered nothing is done.\n\n .. note::\n Actually, this is not a decorator itself but a decorator factory,\n returning the correct decorator for the specified options. It can be\n used just like a decorator.\n\n :type arg_kwarg_name: str\n :param arg_kwarg_name: name of the arg/kwarg that should be (tried) to map\n \"\"\"\n @decorator\n def _map_example_filename(func, *args, **kwargs):\n prefix = '/path/to/'\n # check kwargs\n if arg_kwarg_name in kwargs:\n if isinstance(kwargs[arg_kwarg_name], str):\n if re.match(prefix, kwargs[arg_kwarg_name]):\n try:\n kwargs[arg_kwarg_name] = \\\n get_example_file(kwargs[arg_kwarg_name][9:])\n # file not found by get_example_file:\n except IOError:\n pass\n # check args\n else:\n try:\n inspected_args = [\n p.name\n for p in inspect.signature(func).parameters.values()\n ]\n except AttributeError:\n inspected_args = inspect.getargspec(func).args\n try:\n ind = inspected_args.index(arg_kwarg_name)\n except ValueError:\n pass\n else:\n if ind < len(args) and isinstance(args[ind], str):\n # need to check length of args from inspect\n if re.match(prefix, args[ind]):\n try:\n args = list(args)\n args[ind] = get_example_file(args[ind][9:])\n args = tuple(args)\n # file not found by get_example_file:\n except IOError:\n pass\n return func(*args, **kwargs)\n return _map_example_filename\n\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod(exclude_empty=True)\n", "path": "obspy/core/util/decorator.py"}]}
| 3,489 | 211 |
gh_patches_debug_39291
|
rasdani/github-patches
|
git_diff
|
mabel-dev__opteryx-1375
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
✨ GCS improvements
Create the client object once and reuse
List blobs should only return the name of the blob and not any other details
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opteryx/connectors/gcp_cloudstorage_connector.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import os
14 from typing import Dict
15 from typing import List
16
17 import pyarrow
18 from orso.schema import FlatColumn
19 from orso.schema import RelationSchema
20 from orso.tools import single_item_cache
21 from orso.types import OrsoTypes
22
23 from opteryx.connectors.base.base_connector import BaseConnector
24 from opteryx.connectors.capabilities import Cacheable
25 from opteryx.connectors.capabilities import Partitionable
26 from opteryx.connectors.capabilities import PredicatePushable
27 from opteryx.exceptions import DatasetNotFoundError
28 from opteryx.exceptions import MissingDependencyError
29 from opteryx.exceptions import UnsupportedFileTypeError
30 from opteryx.utils import paths
31 from opteryx.utils.file_decoders import VALID_EXTENSIONS
32 from opteryx.utils.file_decoders import get_decoder
33
34
35 class GcpCloudStorageConnector(BaseConnector, Cacheable, Partitionable, PredicatePushable):
36 __mode__ = "Blob"
37
38 PUSHABLE_OPS: Dict[str, bool] = {
39 "Eq": True,
40 "NotEq": True,
41 "Gt": True,
42 "GtEq": True,
43 "Lt": True,
44 "LtEq": True,
45 }
46
47 PUSHABLE_TYPES = {OrsoTypes.BOOLEAN, OrsoTypes.DOUBLE, OrsoTypes.INTEGER, OrsoTypes.VARCHAR}
48
49 def __init__(self, credentials=None, **kwargs):
50 try:
51 from google.auth.credentials import AnonymousCredentials
52 from google.cloud import storage
53 except ImportError as err:
54 raise MissingDependencyError(err.name) from err
55
56 BaseConnector.__init__(self, **kwargs)
57 Partitionable.__init__(self, **kwargs)
58 Cacheable.__init__(self, **kwargs)
59 PredicatePushable.__init__(self, **kwargs)
60
61 self.dataset = self.dataset.replace(".", "/")
62 self.credentials = credentials
63
64 # we're going to cache the first blob as the schema and dataset reader
65 # sometimes both start here
66 self.cached_first_blob = None
67
68 def _get_storage_client(self):
69 from google.cloud import storage
70
71 if os.environ.get("STORAGE_EMULATOR_HOST"):
72 from google.auth.credentials import AnonymousCredentials
73
74 return storage.Client(credentials=AnonymousCredentials())
75 else: # pragma: no cover
76 return storage.Client()
77
78 def _get_blob(self, bucket: str, blob_name: str):
79 client = self._get_storage_client()
80
81 gcs_bucket = client.get_bucket(bucket)
82 blob = gcs_bucket.get_blob(blob_name)
83 return blob
84
85 def read_blob(self, *, blob_name, **kwargs):
86 bucket, object_path, name, extension = paths.get_parts(blob_name)
87
88 bucket = bucket.replace("va_data", "va-data")
89 bucket = bucket.replace("data_", "data-")
90
91 blob = self._get_blob(
92 bucket=bucket,
93 blob_name=object_path + "/" + name + extension,
94 )
95 return blob.download_as_bytes()
96
97 @single_item_cache
98 def get_list_of_blob_names(self, *, prefix: str) -> List[str]:
99 bucket, object_path, _, _ = paths.get_parts(prefix)
100 bucket = bucket.replace("va_data", "va-data")
101 bucket = bucket.replace("data_", "data-")
102
103 client = self._get_storage_client()
104
105 gcs_bucket = client.get_bucket(bucket)
106 blobs = client.list_blobs(bucket_or_name=gcs_bucket, prefix=object_path)
107 blobs = (bucket + "/" + blob.name for blob in blobs if not blob.name.endswith("/"))
108 return [blob for blob in blobs if ("." + blob.split(".")[-1].lower()) in VALID_EXTENSIONS]
109
110 def read_dataset(
111 self, columns: list = None, predicates: list = None, **kwargs
112 ) -> pyarrow.Table:
113 blob_names = self.partition_scheme.get_blobs_in_partition(
114 start_date=self.start_date,
115 end_date=self.end_date,
116 blob_list_getter=self.get_list_of_blob_names,
117 prefix=self.dataset,
118 )
119
120 # Check if the first blob was cached earlier
121 # if self.cached_first_blob is not None:
122 # yield self.cached_first_blob # Use cached blob
123 # blob_names = blob_names[1:] # Skip first blob
124 # self.cached_first_blob = None
125
126 for blob_name in blob_names:
127 try:
128 decoder = get_decoder(blob_name)
129 blob_bytes = self.read_blob(blob_name=blob_name, statistics=self.statistics)
130 yield decoder(blob_bytes, projection=columns, selection=predicates)
131 except UnsupportedFileTypeError:
132 pass
133
134 def get_dataset_schema(self) -> RelationSchema:
135 # Try to read the schema from the metastore
136 self.schema = self.read_schema_from_metastore()
137 if self.schema:
138 return self.schema
139
140 # Read first blob for schema inference and cache it
141 record = next(self.read_dataset(), None)
142 self.cached_first_blob = record
143
144 if record is None:
145 raise DatasetNotFoundError(dataset=self.dataset)
146
147 arrow_schema = record.schema
148
149 self.schema = RelationSchema(
150 name=self.dataset,
151 columns=[FlatColumn.from_arrow(field) for field in arrow_schema],
152 )
153
154 return self.schema
155
```
Path: `opteryx/__version__.py`
Content:
```
1 __build__ = 189
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 12
31 _revision = 2
32 _status = VersionStatus.RELEASE
33
34 __version__ = f"{_major}.{_minor}.{_revision}" + (
35 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
36 )
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 189
+__build__ = 193
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -28,8 +28,8 @@
_major = 0
_minor = 12
-_revision = 2
-_status = VersionStatus.RELEASE
+_revision = 3
+_status = VersionStatus.BETA
__version__ = f"{_major}.{_minor}.{_revision}" + (
f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
diff --git a/opteryx/connectors/gcp_cloudstorage_connector.py b/opteryx/connectors/gcp_cloudstorage_connector.py
--- a/opteryx/connectors/gcp_cloudstorage_connector.py
+++ b/opteryx/connectors/gcp_cloudstorage_connector.py
@@ -64,6 +64,7 @@
# we're going to cache the first blob as the schema and dataset reader
# sometimes both start here
self.cached_first_blob = None
+ self.client = self._get_storage_client()
def _get_storage_client(self):
from google.cloud import storage
@@ -76,9 +77,7 @@
return storage.Client()
def _get_blob(self, bucket: str, blob_name: str):
- client = self._get_storage_client()
-
- gcs_bucket = client.get_bucket(bucket)
+ gcs_bucket = self.client.get_bucket(bucket)
blob = gcs_bucket.get_blob(blob_name)
return blob
@@ -100,10 +99,8 @@
bucket = bucket.replace("va_data", "va-data")
bucket = bucket.replace("data_", "data-")
- client = self._get_storage_client()
-
- gcs_bucket = client.get_bucket(bucket)
- blobs = client.list_blobs(bucket_or_name=gcs_bucket, prefix=object_path)
+ gcs_bucket = self.client.get_bucket(bucket)
+ blobs = self.client.list_blobs(bucket_or_name=gcs_bucket, prefix=object_path, fields="items(name)")
blobs = (bucket + "/" + blob.name for blob in blobs if not blob.name.endswith("/"))
return [blob for blob in blobs if ("." + blob.split(".")[-1].lower()) in VALID_EXTENSIONS]
@@ -117,12 +114,6 @@
prefix=self.dataset,
)
- # Check if the first blob was cached earlier
- # if self.cached_first_blob is not None:
- # yield self.cached_first_blob # Use cached blob
- # blob_names = blob_names[1:] # Skip first blob
- # self.cached_first_blob = None
-
for blob_name in blob_names:
try:
decoder = get_decoder(blob_name)
|
{"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 189\n+__build__ = 193\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -28,8 +28,8 @@\n \n _major = 0\n _minor = 12\n-_revision = 2\n-_status = VersionStatus.RELEASE\n+_revision = 3\n+_status = VersionStatus.BETA\n \n __version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\ndiff --git a/opteryx/connectors/gcp_cloudstorage_connector.py b/opteryx/connectors/gcp_cloudstorage_connector.py\n--- a/opteryx/connectors/gcp_cloudstorage_connector.py\n+++ b/opteryx/connectors/gcp_cloudstorage_connector.py\n@@ -64,6 +64,7 @@\n # we're going to cache the first blob as the schema and dataset reader\n # sometimes both start here\n self.cached_first_blob = None\n+ self.client = self._get_storage_client()\n \n def _get_storage_client(self):\n from google.cloud import storage\n@@ -76,9 +77,7 @@\n return storage.Client()\n \n def _get_blob(self, bucket: str, blob_name: str):\n- client = self._get_storage_client()\n-\n- gcs_bucket = client.get_bucket(bucket)\n+ gcs_bucket = self.client.get_bucket(bucket)\n blob = gcs_bucket.get_blob(blob_name)\n return blob\n \n@@ -100,10 +99,8 @@\n bucket = bucket.replace(\"va_data\", \"va-data\")\n bucket = bucket.replace(\"data_\", \"data-\")\n \n- client = self._get_storage_client()\n-\n- gcs_bucket = client.get_bucket(bucket)\n- blobs = client.list_blobs(bucket_or_name=gcs_bucket, prefix=object_path)\n+ gcs_bucket = self.client.get_bucket(bucket)\n+ blobs = self.client.list_blobs(bucket_or_name=gcs_bucket, prefix=object_path, fields=\"items(name)\")\n blobs = (bucket + \"/\" + blob.name for blob in blobs if not blob.name.endswith(\"/\"))\n return [blob for blob in blobs if (\".\" + blob.split(\".\")[-1].lower()) in VALID_EXTENSIONS]\n \n@@ -117,12 +114,6 @@\n prefix=self.dataset,\n )\n \n- # Check if the first blob was cached earlier\n- # if self.cached_first_blob is not None:\n- # yield self.cached_first_blob # Use cached blob\n- # blob_names = blob_names[1:] # Skip first blob\n- # self.cached_first_blob = None\n-\n for blob_name in blob_names:\n try:\n decoder = get_decoder(blob_name)\n", "issue": "\u2728 GCS improvements\nCreate the client object once and reuse\n\nList blobs should only return the name of the blob and not any other details \n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nfrom typing import Dict\nfrom typing import List\n\nimport pyarrow\nfrom orso.schema import FlatColumn\nfrom orso.schema import RelationSchema\nfrom orso.tools import single_item_cache\nfrom orso.types import OrsoTypes\n\nfrom opteryx.connectors.base.base_connector import BaseConnector\nfrom opteryx.connectors.capabilities import Cacheable\nfrom opteryx.connectors.capabilities import Partitionable\nfrom 
opteryx.connectors.capabilities import PredicatePushable\nfrom opteryx.exceptions import DatasetNotFoundError\nfrom opteryx.exceptions import MissingDependencyError\nfrom opteryx.exceptions import UnsupportedFileTypeError\nfrom opteryx.utils import paths\nfrom opteryx.utils.file_decoders import VALID_EXTENSIONS\nfrom opteryx.utils.file_decoders import get_decoder\n\n\nclass GcpCloudStorageConnector(BaseConnector, Cacheable, Partitionable, PredicatePushable):\n __mode__ = \"Blob\"\n\n PUSHABLE_OPS: Dict[str, bool] = {\n \"Eq\": True,\n \"NotEq\": True,\n \"Gt\": True,\n \"GtEq\": True,\n \"Lt\": True,\n \"LtEq\": True,\n }\n\n PUSHABLE_TYPES = {OrsoTypes.BOOLEAN, OrsoTypes.DOUBLE, OrsoTypes.INTEGER, OrsoTypes.VARCHAR}\n\n def __init__(self, credentials=None, **kwargs):\n try:\n from google.auth.credentials import AnonymousCredentials\n from google.cloud import storage\n except ImportError as err:\n raise MissingDependencyError(err.name) from err\n\n BaseConnector.__init__(self, **kwargs)\n Partitionable.__init__(self, **kwargs)\n Cacheable.__init__(self, **kwargs)\n PredicatePushable.__init__(self, **kwargs)\n\n self.dataset = self.dataset.replace(\".\", \"/\")\n self.credentials = credentials\n\n # we're going to cache the first blob as the schema and dataset reader\n # sometimes both start here\n self.cached_first_blob = None\n\n def _get_storage_client(self):\n from google.cloud import storage\n\n if os.environ.get(\"STORAGE_EMULATOR_HOST\"):\n from google.auth.credentials import AnonymousCredentials\n\n return storage.Client(credentials=AnonymousCredentials())\n else: # pragma: no cover\n return storage.Client()\n\n def _get_blob(self, bucket: str, blob_name: str):\n client = self._get_storage_client()\n\n gcs_bucket = client.get_bucket(bucket)\n blob = gcs_bucket.get_blob(blob_name)\n return blob\n\n def read_blob(self, *, blob_name, **kwargs):\n bucket, object_path, name, extension = paths.get_parts(blob_name)\n\n bucket = bucket.replace(\"va_data\", \"va-data\")\n bucket = bucket.replace(\"data_\", \"data-\")\n\n blob = self._get_blob(\n bucket=bucket,\n blob_name=object_path + \"/\" + name + extension,\n )\n return blob.download_as_bytes()\n\n @single_item_cache\n def get_list_of_blob_names(self, *, prefix: str) -> List[str]:\n bucket, object_path, _, _ = paths.get_parts(prefix)\n bucket = bucket.replace(\"va_data\", \"va-data\")\n bucket = bucket.replace(\"data_\", \"data-\")\n\n client = self._get_storage_client()\n\n gcs_bucket = client.get_bucket(bucket)\n blobs = client.list_blobs(bucket_or_name=gcs_bucket, prefix=object_path)\n blobs = (bucket + \"/\" + blob.name for blob in blobs if not blob.name.endswith(\"/\"))\n return [blob for blob in blobs if (\".\" + blob.split(\".\")[-1].lower()) in VALID_EXTENSIONS]\n\n def read_dataset(\n self, columns: list = None, predicates: list = None, **kwargs\n ) -> pyarrow.Table:\n blob_names = self.partition_scheme.get_blobs_in_partition(\n start_date=self.start_date,\n end_date=self.end_date,\n blob_list_getter=self.get_list_of_blob_names,\n prefix=self.dataset,\n )\n\n # Check if the first blob was cached earlier\n # if self.cached_first_blob is not None:\n # yield self.cached_first_blob # Use cached blob\n # blob_names = blob_names[1:] # Skip first blob\n # self.cached_first_blob = None\n\n for blob_name in blob_names:\n try:\n decoder = get_decoder(blob_name)\n blob_bytes = self.read_blob(blob_name=blob_name, statistics=self.statistics)\n yield decoder(blob_bytes, projection=columns, selection=predicates)\n except 
UnsupportedFileTypeError:\n pass\n\n def get_dataset_schema(self) -> RelationSchema:\n # Try to read the schema from the metastore\n self.schema = self.read_schema_from_metastore()\n if self.schema:\n return self.schema\n\n # Read first blob for schema inference and cache it\n record = next(self.read_dataset(), None)\n self.cached_first_blob = record\n\n if record is None:\n raise DatasetNotFoundError(dataset=self.dataset)\n\n arrow_schema = record.schema\n\n self.schema = RelationSchema(\n name=self.dataset,\n columns=[FlatColumn.from_arrow(field) for field in arrow_schema],\n )\n\n return self.schema\n", "path": "opteryx/connectors/gcp_cloudstorage_connector.py"}, {"content": "__build__ = 189\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 12\n_revision = 2\n_status = VersionStatus.RELEASE\n\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nfrom typing import Dict\nfrom typing import List\n\nimport pyarrow\nfrom orso.schema import FlatColumn\nfrom orso.schema import RelationSchema\nfrom orso.tools import single_item_cache\nfrom orso.types import OrsoTypes\n\nfrom opteryx.connectors.base.base_connector import BaseConnector\nfrom opteryx.connectors.capabilities import Cacheable\nfrom opteryx.connectors.capabilities import Partitionable\nfrom opteryx.connectors.capabilities import PredicatePushable\nfrom opteryx.exceptions import DatasetNotFoundError\nfrom opteryx.exceptions import MissingDependencyError\nfrom opteryx.exceptions import UnsupportedFileTypeError\nfrom opteryx.utils import paths\nfrom opteryx.utils.file_decoders import VALID_EXTENSIONS\nfrom opteryx.utils.file_decoders import get_decoder\n\n\nclass GcpCloudStorageConnector(BaseConnector, Cacheable, Partitionable, PredicatePushable):\n __mode__ = \"Blob\"\n\n PUSHABLE_OPS: Dict[str, bool] = {\n \"Eq\": True,\n \"NotEq\": True,\n \"Gt\": True,\n \"GtEq\": True,\n \"Lt\": True,\n \"LtEq\": True,\n }\n\n PUSHABLE_TYPES = {OrsoTypes.BOOLEAN, OrsoTypes.DOUBLE, OrsoTypes.INTEGER, OrsoTypes.VARCHAR}\n\n def 
__init__(self, credentials=None, **kwargs):\n try:\n from google.auth.credentials import AnonymousCredentials\n from google.cloud import storage\n except ImportError as err:\n raise MissingDependencyError(err.name) from err\n\n BaseConnector.__init__(self, **kwargs)\n Partitionable.__init__(self, **kwargs)\n Cacheable.__init__(self, **kwargs)\n PredicatePushable.__init__(self, **kwargs)\n\n self.dataset = self.dataset.replace(\".\", \"/\")\n self.credentials = credentials\n\n # we're going to cache the first blob as the schema and dataset reader\n # sometimes both start here\n self.cached_first_blob = None\n self.client = self._get_storage_client()\n\n def _get_storage_client(self):\n from google.cloud import storage\n\n if os.environ.get(\"STORAGE_EMULATOR_HOST\"):\n from google.auth.credentials import AnonymousCredentials\n\n return storage.Client(credentials=AnonymousCredentials())\n else: # pragma: no cover\n return storage.Client()\n\n def _get_blob(self, bucket: str, blob_name: str):\n gcs_bucket = self.client.get_bucket(bucket)\n blob = gcs_bucket.get_blob(blob_name)\n return blob\n\n def read_blob(self, *, blob_name, **kwargs):\n bucket, object_path, name, extension = paths.get_parts(blob_name)\n\n bucket = bucket.replace(\"va_data\", \"va-data\")\n bucket = bucket.replace(\"data_\", \"data-\")\n\n blob = self._get_blob(\n bucket=bucket,\n blob_name=object_path + \"/\" + name + extension,\n )\n return blob.download_as_bytes()\n\n @single_item_cache\n def get_list_of_blob_names(self, *, prefix: str) -> List[str]:\n bucket, object_path, _, _ = paths.get_parts(prefix)\n bucket = bucket.replace(\"va_data\", \"va-data\")\n bucket = bucket.replace(\"data_\", \"data-\")\n\n gcs_bucket = self.client.get_bucket(bucket)\n blobs = self.client.list_blobs(bucket_or_name=gcs_bucket, prefix=object_path, fields=\"items(name)\")\n blobs = (bucket + \"/\" + blob.name for blob in blobs if not blob.name.endswith(\"/\"))\n return [blob for blob in blobs if (\".\" + blob.split(\".\")[-1].lower()) in VALID_EXTENSIONS]\n\n def read_dataset(\n self, columns: list = None, predicates: list = None, **kwargs\n ) -> pyarrow.Table:\n blob_names = self.partition_scheme.get_blobs_in_partition(\n start_date=self.start_date,\n end_date=self.end_date,\n blob_list_getter=self.get_list_of_blob_names,\n prefix=self.dataset,\n )\n\n for blob_name in blob_names:\n try:\n decoder = get_decoder(blob_name)\n blob_bytes = self.read_blob(blob_name=blob_name, statistics=self.statistics)\n yield decoder(blob_bytes, projection=columns, selection=predicates)\n except UnsupportedFileTypeError:\n pass\n\n def get_dataset_schema(self) -> RelationSchema:\n # Try to read the schema from the metastore\n self.schema = self.read_schema_from_metastore()\n if self.schema:\n return self.schema\n\n # Read first blob for schema inference and cache it\n record = next(self.read_dataset(), None)\n self.cached_first_blob = record\n\n if record is None:\n raise DatasetNotFoundError(dataset=self.dataset)\n\n arrow_schema = record.schema\n\n self.schema = RelationSchema(\n name=self.dataset,\n columns=[FlatColumn.from_arrow(field) for field in arrow_schema],\n )\n\n return self.schema\n", "path": "opteryx/connectors/gcp_cloudstorage_connector.py"}, {"content": "__build__ = 193\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or 
agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 12\n_revision = 3\n_status = VersionStatus.BETA\n\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]}
| 2,256 | 685 |
gh_patches_debug_39046
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-1393
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automatically generated toctree for methods and classes
## 🚀 Feature
Idea is to replace our manually created toctree for [metrics](https://github.com/pytorch/ignite/blob/master/docs/source/metrics.rst#complete-list-of-metrics), [handlers](https://github.com/pytorch/ignite/blob/master/docs/source/handlers.rst#complete-list-of-handlers), [regression metrics](https://github.com/pytorch/ignite/blob/master/docs/source/contrib/metrics.rst#regression-metrics) etc.
How to do that :
- check `.. autosummary:: ` tag in Sphinx
- add it and configure for each listed above .rst file : metrics.rst, handlers.rst etc
Example of usage:
- https://numpy.org/devdocs/reference/arrays.ndarray.html#id1
- https://github.com/numpy/numpy/blob/master/doc/source/reference/arrays.rst (edited)
This issue maybe or maybe not blocked by #1272
For Hacktoberfest contributors, feel free to ask questions for details if any and say that you would like to tackle the issue.
Please, take a look at [CONTRIBUTING guide](https://github.com/pytorch/ignite/blob/master/CONTRIBUTING.md).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17
18 sys.path.insert(0, os.path.abspath("../.."))
19 import ignite
20 import pytorch_sphinx_theme
21
22 # -- Project information -----------------------------------------------------
23
24 project = "ignite"
25 copyright = "2020, PyTorch-Ignite Contributors"
26 author = "PyTorch-Ignite Contributors"
27
28 # The short X.Y version
29 try:
30 version = os.environ["code_version"]
31 if "master" in version:
32 version = "master (" + ignite.__version__ + ")"
33 else:
34 version = version.replace("v", "")
35 except KeyError:
36 version = ignite.__version__
37
38 # The full version, including alpha/beta/rc tags
39 release = "master"
40
41
42 # -- General configuration ---------------------------------------------------
43
44 # If your documentation needs a minimal Sphinx version, state it here.
45 #
46 # needs_sphinx = '1.0'
47
48 # Add any Sphinx extension module names here, as strings. They can be
49 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
50 # ones.
51 extensions = [
52 "sphinx.ext.autosummary",
53 "sphinx.ext.doctest",
54 "sphinx.ext.intersphinx",
55 "sphinx.ext.todo",
56 "sphinx.ext.coverage",
57 "sphinx.ext.mathjax",
58 "sphinx.ext.napoleon",
59 "sphinx.ext.viewcode",
60 "sphinx.ext.autosectionlabel",
61 ]
62
63 # Add any paths that contain templates here, relative to this directory.
64 templates_path = ["_templates"]
65
66 # The suffix(es) of source filenames.
67 # You can specify multiple suffix as a list of string:
68 #
69 # source_suffix = ['.rst', '.md']
70 source_suffix = ".rst"
71
72 # The master toctree document.
73 master_doc = "index"
74
75 # The language for content autogenerated by Sphinx. Refer to documentation
76 # for a list of supported languages.
77 #
78 # This is also used if you do content translation via gettext catalogs.
79 # Usually you set "language" from the command line for these cases.
80 language = None
81
82 # List of patterns, relative to source directory, that match files and
83 # directories to ignore when looking for source files.
84 # This pattern also affects html_static_path and html_extra_path .
85 exclude_patterns = []
86
87 # The name of the Pygments (syntax highlighting) style to use.
88 pygments_style = "sphinx"
89
90
91 # -- Options for HTML output -------------------------------------------------
92
93 # The theme to use for HTML and HTML Help pages. See the documentation for
94 # a list of builtin themes.
95 #
96 html_theme = "pytorch_sphinx_theme"
97 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
98
99 html_theme_options = {
100 "canonical_url": "https://pytorch.org/ignite/index.html",
101 "collapse_navigation": False,
102 "display_version": True,
103 "logo_only": True,
104 }
105
106 html_logo = "_static/img/ignite_logo.svg"
107
108 # Theme options are theme-specific and customize the look and feel of a theme
109 # further. For a list of options available for each theme, see the
110 # documentation.
111 #
112 # html_theme_options = {}
113
114 # Add any paths that contain custom static files (such as style sheets) here,
115 # relative to this directory. They are copied after the builtin static files,
116 # so a file named "default.css" will overwrite the builtin "default.css".
117 html_static_path = ["_static", "_templates/_static"]
118
119 html_context = {
120 "css_files": [
121 # 'https://fonts.googleapis.com/css?family=Lato',
122 # '_static/css/pytorch_theme.css'
123 "_static/css/ignite_theme.css"
124 ],
125 }
126
127
128 # -- Options for HTMLHelp output ---------------------------------------------
129
130 # Output file base name for HTML help builder.
131 htmlhelp_basename = "ignitedoc"
132
133
134 # -- Options for LaTeX output ------------------------------------------------
135
136 latex_elements = {
137 # The paper size ('letterpaper' or 'a4paper').
138 #
139 # 'papersize': 'letterpaper',
140 # The font size ('10pt', '11pt' or '12pt').
141 #
142 # 'pointsize': '10pt',
143 # Additional stuff for the LaTeX preamble.
144 #
145 # 'preamble': '',
146 # Latex figure (float) alignment
147 #
148 # 'figure_align': 'htbp',
149 }
150
151 # Grouping the document tree into LaTeX files. List of tuples
152 # (source start file, target name, title,
153 # author, documentclass [howto, manual, or own class]).
154 latex_documents = [
155 (master_doc, "ignite.tex", "ignite Documentation", "Torch Contributors", "manual"),
156 ]
157
158
159 # -- Options for manual page output ------------------------------------------
160
161 # One entry per manual page. List of tuples
162 # (source start file, name, description, authors, manual section).
163 man_pages = [(master_doc, "ignite", "ignite Documentation", [author], 1)]
164
165
166 # -- Options for Texinfo output ----------------------------------------------
167
168 # Grouping the document tree into Texinfo files. List of tuples
169 # (source start file, target name, title, author,
170 # dir menu entry, description, category)
171 texinfo_documents = [
172 (
173 master_doc,
174 "ignite",
175 "ignite Documentation",
176 author,
177 "ignite",
178 "One line description of project.",
179 "Miscellaneous",
180 ),
181 ]
182
183
184 # -- Extension configuration -------------------------------------------------
185
186 # -- Options for intersphinx extension ---------------------------------------
187
188 # Example configuration for intersphinx: refer to the Python standard library.
189 intersphinx_mapping = {"https://docs.python.org/": None}
190
191 # -- Options for todo extension ----------------------------------------------
192
193 # If true, `todo` and `todoList` produce output, else they produce nothing.
194 todo_include_todos = True
195
196 # -- Type hints configs ------------------------------------------------------
197
198 autodoc_typehints = "signature"
199
200 # -- A patch that turns-off cross refs for type annotations ------------------
201
202 import sphinx.domains.python
203 from docutils import nodes
204 from sphinx import addnodes
205
206 # replaces pending_xref node with desc_type for type annotations
207 sphinx.domains.python.type_to_xref = lambda t, e=None: addnodes.desc_type("", nodes.Text(t))
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -205,3 +205,98 @@
# replaces pending_xref node with desc_type for type annotations
sphinx.domains.python.type_to_xref = lambda t, e=None: addnodes.desc_type("", nodes.Text(t))
+
+# -- Autosummary patch to get list of a classes, funcs automatically ----------
+
+from importlib import import_module
+from inspect import getmembers, isclass, isfunction
+import sphinx.ext.autosummary
+from sphinx.ext.autosummary import Autosummary
+from docutils.parsers.rst import directives
+from docutils.statemachine import StringList
+
+
+class BetterAutosummary(Autosummary):
+ """Autosummary with autolisting for modules.
+
+ By default it tries to import all public names (__all__),
+ otherwise import all classes and/or functions in a module.
+
+ Options:
+ - :autolist: option to get list of classes and functions from currentmodule.
+ - :autolist-classes: option to get list of classes from currentmodule.
+ - :autolist-functions: option to get list of functions from currentmodule.
+
+ Example Usage:
+
+ .. currentmodule:: ignite.metrics
+
+ .. autosummary::
+ :nosignatures:
+ :autolist:
+ """
+
+ # Add new option
+ _option_spec = Autosummary.option_spec.copy()
+ _option_spec.update(
+ {
+ "autolist": directives.unchanged,
+ "autolist-classes": directives.unchanged,
+ "autolist-functions": directives.unchanged,
+ }
+ )
+ option_spec = _option_spec
+
+ def run(self):
+ for auto in ("autolist", "autolist-classes", "autolist-functions"):
+ if auto in self.options:
+ # Get current module name
+ module_name = self.env.ref_context.get("py:module")
+ # Import module
+ module = import_module(module_name)
+
+ # Get public names (if possible)
+ try:
+ names = getattr(module, "__all__")
+ except AttributeError:
+ # Get classes defined in the module
+ cls_names = [
+ name[0]
+ for name in getmembers(module, isclass)
+ if name[-1].__module__ == module_name and not (name[0].startswith("_"))
+ ]
+ # Get functions defined in the module
+ fn_names = [
+ name[0]
+ for name in getmembers(module, isfunction)
+ if (name[-1].__module__ == module_name) and not (name[0].startswith("_"))
+ ]
+ names = cls_names + fn_names
+ # It may happen that module doesn't have any defined class or func
+ if not names:
+ names = [name[0] for name in getmembers(module)]
+
+ if auto == "autolist":
+ # Get list of all classes and functions inside module
+ names = [
+ name for name in names if (isclass(getattr(module, name)) or isfunction(getattr(module, name)))
+ ]
+ else:
+ if auto == "autolist-classes":
+ # Get only classes
+ check = isclass
+ elif auto == "autolist-functions":
+ # Get only functions
+ check = isfunction
+ else:
+ raise NotImplementedError
+
+ names = [name for name in names if check(getattr(module, name))]
+
+ # Update content
+ self.content = StringList(names)
+ return super().run()
+
+
+# Patch original Autosummary
+sphinx.ext.autosummary.Autosummary = BetterAutosummary
|
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -205,3 +205,98 @@\n \n # replaces pending_xref node with desc_type for type annotations\n sphinx.domains.python.type_to_xref = lambda t, e=None: addnodes.desc_type(\"\", nodes.Text(t))\n+\n+# -- Autosummary patch to get list of a classes, funcs automatically ----------\n+\n+from importlib import import_module\n+from inspect import getmembers, isclass, isfunction\n+import sphinx.ext.autosummary\n+from sphinx.ext.autosummary import Autosummary\n+from docutils.parsers.rst import directives\n+from docutils.statemachine import StringList\n+\n+\n+class BetterAutosummary(Autosummary):\n+ \"\"\"Autosummary with autolisting for modules.\n+\n+ By default it tries to import all public names (__all__),\n+ otherwise import all classes and/or functions in a module.\n+\n+ Options:\n+ - :autolist: option to get list of classes and functions from currentmodule.\n+ - :autolist-classes: option to get list of classes from currentmodule.\n+ - :autolist-functions: option to get list of functions from currentmodule.\n+\n+ Example Usage:\n+\n+ .. currentmodule:: ignite.metrics\n+\n+ .. autosummary::\n+ :nosignatures:\n+ :autolist:\n+ \"\"\"\n+\n+ # Add new option\n+ _option_spec = Autosummary.option_spec.copy()\n+ _option_spec.update(\n+ {\n+ \"autolist\": directives.unchanged,\n+ \"autolist-classes\": directives.unchanged,\n+ \"autolist-functions\": directives.unchanged,\n+ }\n+ )\n+ option_spec = _option_spec\n+\n+ def run(self):\n+ for auto in (\"autolist\", \"autolist-classes\", \"autolist-functions\"):\n+ if auto in self.options:\n+ # Get current module name\n+ module_name = self.env.ref_context.get(\"py:module\")\n+ # Import module\n+ module = import_module(module_name)\n+\n+ # Get public names (if possible)\n+ try:\n+ names = getattr(module, \"__all__\")\n+ except AttributeError:\n+ # Get classes defined in the module\n+ cls_names = [\n+ name[0]\n+ for name in getmembers(module, isclass)\n+ if name[-1].__module__ == module_name and not (name[0].startswith(\"_\"))\n+ ]\n+ # Get functions defined in the module\n+ fn_names = [\n+ name[0]\n+ for name in getmembers(module, isfunction)\n+ if (name[-1].__module__ == module_name) and not (name[0].startswith(\"_\"))\n+ ]\n+ names = cls_names + fn_names\n+ # It may happen that module doesn't have any defined class or func\n+ if not names:\n+ names = [name[0] for name in getmembers(module)]\n+\n+ if auto == \"autolist\":\n+ # Get list of all classes and functions inside module\n+ names = [\n+ name for name in names if (isclass(getattr(module, name)) or isfunction(getattr(module, name)))\n+ ]\n+ else:\n+ if auto == \"autolist-classes\":\n+ # Get only classes\n+ check = isclass\n+ elif auto == \"autolist-functions\":\n+ # Get only functions\n+ check = isfunction\n+ else:\n+ raise NotImplementedError\n+\n+ names = [name for name in names if check(getattr(module, name))]\n+\n+ # Update content\n+ self.content = StringList(names)\n+ return super().run()\n+\n+\n+# Patch original Autosummary\n+sphinx.ext.autosummary.Autosummary = BetterAutosummary\n", "issue": "Automatically generated toctree for methods and classes\n## \ud83d\ude80 Feature\r\n\r\nIdea is to replace our manually created toctree for [metrics](https://github.com/pytorch/ignite/blob/master/docs/source/metrics.rst#complete-list-of-metrics), [handlers](https://github.com/pytorch/ignite/blob/master/docs/source/handlers.rst#complete-list-of-handlers), [regression 
metrics](https://github.com/pytorch/ignite/blob/master/docs/source/contrib/metrics.rst#regression-metrics) etc.\r\n\r\nHow to do that : \r\n- check `.. autosummary:: ` tag in Sphinx\r\n- add it and configure for each listed above .rst file : metrics.rst, handlers.rst etc\r\n\r\nExample of usage:\r\n- https://numpy.org/devdocs/reference/arrays.ndarray.html#id1\r\n- https://github.com/numpy/numpy/blob/master/doc/source/reference/arrays.rst (edited) \r\n\r\nThis issue maybe or maybe not blocked by #1272 \r\n\r\n\r\nFor Hacktoberfest contributors, feel free to ask questions for details if any and say that you would like to tackle the issue.\r\nPlease, take a look at [CONTRIBUTING guide](https://github.com/pytorch/ignite/blob/master/CONTRIBUTING.md).\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"../..\"))\nimport ignite\nimport pytorch_sphinx_theme\n\n# -- Project information -----------------------------------------------------\n\nproject = \"ignite\"\ncopyright = \"2020, PyTorch-Ignite Contributors\"\nauthor = \"PyTorch-Ignite Contributors\"\n\n# The short X.Y version\ntry:\n version = os.environ[\"code_version\"]\n if \"master\" in version:\n version = \"master (\" + ignite.__version__ + \")\"\n else:\n version = version.replace(\"v\", \"\")\nexcept KeyError:\n version = ignite.__version__\n\n# The full version, including alpha/beta/rc tags\nrelease = \"master\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\nhtml_theme_options = {\n \"canonical_url\": \"https://pytorch.org/ignite/index.html\",\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n}\n\nhtml_logo = \"_static/img/ignite_logo.svg\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\", \"_templates/_static\"]\n\nhtml_context = {\n \"css_files\": [\n # 'https://fonts.googleapis.com/css?family=Lato',\n # '_static/css/pytorch_theme.css'\n \"_static/css/ignite_theme.css\"\n ],\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"ignitedoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"ignite.tex\", \"ignite Documentation\", \"Torch Contributors\", \"manual\"),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"ignite\", \"ignite Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"ignite\",\n \"ignite Documentation\",\n author,\n \"ignite\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/\": None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Type hints configs ------------------------------------------------------\n\nautodoc_typehints = \"signature\"\n\n# -- A patch that turns-off cross refs for type annotations ------------------\n\nimport sphinx.domains.python\nfrom docutils import nodes\nfrom sphinx import addnodes\n\n# replaces pending_xref node with desc_type for type annotations\nsphinx.domains.python.type_to_xref = lambda t, e=None: addnodes.desc_type(\"\", nodes.Text(t))\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"../..\"))\nimport ignite\nimport pytorch_sphinx_theme\n\n# -- Project information -----------------------------------------------------\n\nproject = \"ignite\"\ncopyright = \"2020, PyTorch-Ignite Contributors\"\nauthor = \"PyTorch-Ignite Contributors\"\n\n# The short X.Y version\ntry:\n version = os.environ[\"code_version\"]\n if \"master\" in version:\n version = \"master (\" + ignite.__version__ + \")\"\n else:\n version = version.replace(\"v\", \"\")\nexcept KeyError:\n version = ignite.__version__\n\n# The full version, including alpha/beta/rc tags\nrelease = \"master\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\nhtml_theme_options = {\n \"canonical_url\": \"https://pytorch.org/ignite/index.html\",\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n}\n\nhtml_logo = \"_static/img/ignite_logo.svg\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\", \"_templates/_static\"]\n\nhtml_context = {\n \"css_files\": [\n # 'https://fonts.googleapis.com/css?family=Lato',\n # '_static/css/pytorch_theme.css'\n \"_static/css/ignite_theme.css\"\n ],\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"ignitedoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"ignite.tex\", \"ignite Documentation\", \"Torch Contributors\", \"manual\"),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"ignite\", \"ignite Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"ignite\",\n \"ignite Documentation\",\n author,\n \"ignite\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/\": None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Type hints configs ------------------------------------------------------\n\nautodoc_typehints = \"signature\"\n\n# -- A patch that turns-off cross refs for type annotations ------------------\n\nimport sphinx.domains.python\nfrom docutils import nodes\nfrom sphinx import addnodes\n\n# replaces pending_xref node with desc_type for type annotations\nsphinx.domains.python.type_to_xref = lambda t, e=None: addnodes.desc_type(\"\", nodes.Text(t))\n\n# -- Autosummary patch to get list of a classes, funcs automatically ----------\n\nfrom importlib import import_module\nfrom inspect import getmembers, isclass, isfunction\nimport sphinx.ext.autosummary\nfrom sphinx.ext.autosummary import Autosummary\nfrom docutils.parsers.rst import directives\nfrom docutils.statemachine import StringList\n\n\nclass BetterAutosummary(Autosummary):\n \"\"\"Autosummary with autolisting for modules.\n\n By default it tries to import all public names (__all__),\n otherwise import all classes and/or functions in a module.\n\n Options:\n - :autolist: option to get list of classes and functions from currentmodule.\n - :autolist-classes: option to get list of classes from currentmodule.\n - :autolist-functions: option to get list of functions from currentmodule.\n\n Example Usage:\n\n .. currentmodule:: ignite.metrics\n\n .. 
autosummary::\n :nosignatures:\n :autolist:\n \"\"\"\n\n # Add new option\n _option_spec = Autosummary.option_spec.copy()\n _option_spec.update(\n {\n \"autolist\": directives.unchanged,\n \"autolist-classes\": directives.unchanged,\n \"autolist-functions\": directives.unchanged,\n }\n )\n option_spec = _option_spec\n\n def run(self):\n for auto in (\"autolist\", \"autolist-classes\", \"autolist-functions\"):\n if auto in self.options:\n # Get current module name\n module_name = self.env.ref_context.get(\"py:module\")\n # Import module\n module = import_module(module_name)\n\n # Get public names (if possible)\n try:\n names = getattr(module, \"__all__\")\n except AttributeError:\n # Get classes defined in the module\n cls_names = [\n name[0]\n for name in getmembers(module, isclass)\n if name[-1].__module__ == module_name and not (name[0].startswith(\"_\"))\n ]\n # Get functions defined in the module\n fn_names = [\n name[0]\n for name in getmembers(module, isfunction)\n if (name[-1].__module__ == module_name) and not (name[0].startswith(\"_\"))\n ]\n names = cls_names + fn_names\n # It may happen that module doesn't have any defined class or func\n if not names:\n names = [name[0] for name in getmembers(module)]\n\n if auto == \"autolist\":\n # Get list of all classes and functions inside module\n names = [\n name for name in names if (isclass(getattr(module, name)) or isfunction(getattr(module, name)))\n ]\n else:\n if auto == \"autolist-classes\":\n # Get only classes\n check = isclass\n elif auto == \"autolist-functions\":\n # Get only functions\n check = isfunction\n else:\n raise NotImplementedError\n\n names = [name for name in names if check(getattr(module, name))]\n\n # Update content\n self.content = StringList(names)\n return super().run()\n\n\n# Patch original Autosummary\nsphinx.ext.autosummary.Autosummary = BetterAutosummary\n", "path": "docs/source/conf.py"}]}
| 2,473 | 868 |
gh_patches_debug_8960 | rasdani/github-patches | git_diff | scverse__scanpy-1691 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
importlib_metadata >= 2.0 breaks scanpy.logging.print_versions()
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of scanpy.
- [x] (optional) I have confirmed this bug exists on the master branch of scanpy.
---
When scanpy gets installed with the latest version of `importlib_metadata` (2.0), the
command `sc.logging.print_versions()` fails with the following error:
```pytb
WARNING: If you miss a compact list, please try `print_header`!
Traceback (most recent call last):
File "/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/sinfo/main.py", line 195, in sinfo
mod_version = _find_version(mod.__version__)
AttributeError: module 'importlib_metadata' has no attribute '__version__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/scanpy/logging.py", line 161, in print_versions
sinfo(dependencies=True)
File "/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/sinfo/main.py", line 198, in sinfo
mod_version = _find_version(mod.version)
File "/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/sinfo/main.py", line 42, in _find_version
return mod_version_attr()
TypeError: version() missing 1 required positional argument: 'distribution_name'
```
According to the `importlib_metadata` changelog, the `__version__` attribute has been removed from the package:
```
=========================
importlib_metadata NEWS
=========================
v2.0.0
======
* ``importlib_metadata`` no longer presents a
``__version__`` attribute. Consumers wishing to
resolve the version of the package should query it
directly with
``importlib_metadata.version('importlib-metadata')``.
Closes #71.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scanpy/logging.py`
Content:
```
1 """Logging and Profiling
2 """
3 import io
4 import logging
5 import sys
6 from functools import update_wrapper, partial
7 from logging import CRITICAL, ERROR, WARNING, INFO, DEBUG, NOTSET
8 from datetime import datetime, timedelta, timezone
9 from typing import Optional
10
11 import anndata.logging
12 from sinfo import sinfo
13
14
15 HINT = (INFO + DEBUG) // 2
16 logging.addLevelName(HINT, 'HINT')
17
18
19 class _RootLogger(logging.RootLogger):
20 def __init__(self, level):
21 super().__init__(level)
22 self.propagate = False
23 _RootLogger.manager = logging.Manager(self)
24
25 def log(
26 self,
27 level: int,
28 msg: str,
29 *,
30 extra: Optional[dict] = None,
31 time: datetime = None,
32 deep: Optional[str] = None,
33 ) -> datetime:
34 from . import settings
35
36 now = datetime.now(timezone.utc)
37 time_passed: timedelta = None if time is None else now - time
38 extra = {
39 **(extra or {}),
40 'deep': deep if settings.verbosity.level < level else None,
41 'time_passed': time_passed,
42 }
43 super().log(level, msg, extra=extra)
44 return now
45
46 def critical(self, msg, *, time=None, deep=None, extra=None) -> datetime:
47 return self.log(CRITICAL, msg, time=time, deep=deep, extra=extra)
48
49 def error(self, msg, *, time=None, deep=None, extra=None) -> datetime:
50 return self.log(ERROR, msg, time=time, deep=deep, extra=extra)
51
52 def warning(self, msg, *, time=None, deep=None, extra=None) -> datetime:
53 return self.log(WARNING, msg, time=time, deep=deep, extra=extra)
54
55 def info(self, msg, *, time=None, deep=None, extra=None) -> datetime:
56 return self.log(INFO, msg, time=time, deep=deep, extra=extra)
57
58 def hint(self, msg, *, time=None, deep=None, extra=None) -> datetime:
59 return self.log(HINT, msg, time=time, deep=deep, extra=extra)
60
61 def debug(self, msg, *, time=None, deep=None, extra=None) -> datetime:
62 return self.log(DEBUG, msg, time=time, deep=deep, extra=extra)
63
64
65 def _set_log_file(settings):
66 file = settings.logfile
67 name = settings.logpath
68 root = settings._root_logger
69 h = logging.StreamHandler(file) if name is None else logging.FileHandler(name)
70 h.setFormatter(_LogFormatter())
71 h.setLevel(root.level)
72 if len(root.handlers) == 1:
73 root.removeHandler(root.handlers[0])
74 elif len(root.handlers) > 1:
75 raise RuntimeError('Scanpy’s root logger somehow got more than one handler')
76 root.addHandler(h)
77
78
79 def _set_log_level(settings, level: int):
80 root = settings._root_logger
81 root.setLevel(level)
82 (h,) = root.handlers # may only be 1
83 h.setLevel(level)
84
85
86 class _LogFormatter(logging.Formatter):
87 def __init__(
88 self, fmt='{levelname}: {message}', datefmt='%Y-%m-%d %H:%M', style='{'
89 ):
90 super().__init__(fmt, datefmt, style)
91
92 def format(self, record: logging.LogRecord):
93 format_orig = self._style._fmt
94 if record.levelno == INFO:
95 self._style._fmt = '{message}'
96 elif record.levelno == HINT:
97 self._style._fmt = '--> {message}'
98 elif record.levelno == DEBUG:
99 self._style._fmt = ' {message}'
100 if record.time_passed:
101 # strip microseconds
102 if record.time_passed.microseconds:
103 record.time_passed = timedelta(
104 seconds=int(record.time_passed.total_seconds())
105 )
106 if '{time_passed}' in record.msg:
107 record.msg = record.msg.replace(
108 '{time_passed}', str(record.time_passed)
109 )
110 else:
111 self._style._fmt += ' ({time_passed})'
112 if record.deep:
113 record.msg = f'{record.msg}: {record.deep}'
114 result = logging.Formatter.format(self, record)
115 self._style._fmt = format_orig
116 return result
117
118
119 print_memory_usage = anndata.logging.print_memory_usage
120 get_memory_usage = anndata.logging.get_memory_usage
121
122
123 _DEPENDENCIES_NUMERICS = [
124 'anndata', # anndata actually shouldn't, but as long as it's in development
125 'umap',
126 'numpy',
127 'scipy',
128 'pandas',
129 ('sklearn', 'scikit-learn'),
130 'statsmodels',
131 ('igraph', 'python-igraph'),
132 'louvain',
133 'leidenalg',
134 ]
135
136
137 def _versions_dependencies(dependencies):
138 # this is not the same as the requirements!
139 for mod in dependencies:
140 mod_name, dist_name = mod if isinstance(mod, tuple) else (mod, mod)
141 try:
142 imp = __import__(mod_name)
143 yield dist_name, imp.__version__
144 except (ImportError, AttributeError):
145 pass
146
147
148 def print_header(*, file=None):
149 """\
150 Versions that might influence the numerical results.
151 Matplotlib and Seaborn are excluded from this.
152 """
153
154 modules = ['scanpy'] + _DEPENDENCIES_NUMERICS
155 print(
156 ' '.join(f'{mod}=={ver}' for mod, ver in _versions_dependencies(modules)),
157 file=file or sys.stdout,
158 )
159
160
161 def print_versions(*, file=None):
162 """Print print versions of imported packages"""
163 if file is None: # Inform people about the behavior change
164 warning('If you miss a compact list, please try `print_header`!')
165 stdout = sys.stdout
166 try:
167 buf = sys.stdout = io.StringIO()
168 sinfo(dependencies=True)
169 finally:
170 sys.stdout = stdout
171 output = buf.getvalue()
172 print(output, file=file)
173
174
175 def print_version_and_date(*, file=None):
176 """\
177 Useful for starting a notebook so you see when you started working.
178 """
179 from . import __version__
180
181 if file is None:
182 file = sys.stdout
183 print(
184 f'Running Scanpy {__version__}, ' f'on {datetime.now():%Y-%m-%d %H:%M}.',
185 file=file,
186 )
187
188
189 def _copy_docs_and_signature(fn):
190 return partial(update_wrapper, wrapped=fn, assigned=['__doc__', '__annotations__'])
191
192
193 def error(
194 msg: str,
195 *,
196 time: datetime = None,
197 deep: Optional[str] = None,
198 extra: Optional[dict] = None,
199 ) -> datetime:
200 """\
201 Log message with specific level and return current time.
202
203 Parameters
204 ==========
205 msg
206 Message to display.
207 time
208 A time in the past. If this is passed, the time difference from then
209 to now is appended to `msg` as ` (HH:MM:SS)`.
210 If `msg` contains `{time_passed}`, the time difference is instead
211 inserted at that position.
212 deep
213 If the current verbosity is higher than the log function’s level,
214 this gets displayed as well
215 extra
216 Additional values you can specify in `msg` like `{time_passed}`.
217 """
218 from ._settings import settings
219
220 return settings._root_logger.error(msg, time=time, deep=deep, extra=extra)
221
222
223 @_copy_docs_and_signature(error)
224 def warning(msg, *, time=None, deep=None, extra=None) -> datetime:
225 from ._settings import settings
226
227 return settings._root_logger.warning(msg, time=time, deep=deep, extra=extra)
228
229
230 @_copy_docs_and_signature(error)
231 def info(msg, *, time=None, deep=None, extra=None) -> datetime:
232 from ._settings import settings
233
234 return settings._root_logger.info(msg, time=time, deep=deep, extra=extra)
235
236
237 @_copy_docs_and_signature(error)
238 def hint(msg, *, time=None, deep=None, extra=None) -> datetime:
239 from ._settings import settings
240
241 return settings._root_logger.hint(msg, time=time, deep=deep, extra=extra)
242
243
244 @_copy_docs_and_signature(error)
245 def debug(msg, *, time=None, deep=None, extra=None) -> datetime:
246 from ._settings import settings
247
248 return settings._root_logger.debug(msg, time=time, deep=deep, extra=extra)
249
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scanpy/logging.py b/scanpy/logging.py
--- a/scanpy/logging.py
+++ b/scanpy/logging.py
@@ -165,7 +165,17 @@
stdout = sys.stdout
try:
buf = sys.stdout = io.StringIO()
- sinfo(dependencies=True)
+ sinfo(
+ dependencies=True,
+ excludes=[
+ 'builtins',
+ 'stdlib_list',
+ 'importlib_metadata',
+ # Special module present if test coverage being calculated
+ # https://gitlab.com/joelostblom/sinfo/-/issues/10
+ "$coverage",
+ ],
+ )
finally:
sys.stdout = stdout
output = buf.getvalue()
|
{"golden_diff": "diff --git a/scanpy/logging.py b/scanpy/logging.py\n--- a/scanpy/logging.py\n+++ b/scanpy/logging.py\n@@ -165,7 +165,17 @@\n stdout = sys.stdout\n try:\n buf = sys.stdout = io.StringIO()\n- sinfo(dependencies=True)\n+ sinfo(\n+ dependencies=True,\n+ excludes=[\n+ 'builtins',\n+ 'stdlib_list',\n+ 'importlib_metadata',\n+ # Special module present if test coverage being calculated\n+ # https://gitlab.com/joelostblom/sinfo/-/issues/10\n+ \"$coverage\",\n+ ],\n+ )\n finally:\n sys.stdout = stdout\n output = buf.getvalue()\n", "issue": "importlib_metadata >= 2.0 breaks scanpy.logging.print_versions()\n- [x] I have checked that this issue has not already been reported.\r\n- [x] I have confirmed this bug exists on the latest version of scanpy.\r\n- [x] (optional) I have confirmed this bug exists on the master branch of scanpy.\r\n\r\n---\r\n\r\nWhen scanpy gets installed with the latest version of `importlib_metadata` (2.0), the \r\ncommand `sc.logging.print_versions()` fails with the following error: \r\n\r\n```pytb\r\nWARNING: If you miss a compact list, please try `print_header`!\r\nTraceback (most recent call last):\r\n File \"/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/sinfo/main.py\", line 195, in sinfo\r\n mod_version = _find_version(mod.__version__)\r\nAttributeError: module 'importlib_metadata' has no attribute '__version__'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/scanpy/logging.py\", line 161, in print_versions\r\n sinfo(dependencies=True)\r\n File \"/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/sinfo/main.py\", line 198, in sinfo\r\n mod_version = _find_version(mod.version)\r\n File \"/home/sturm/anaconda3/envs/scanpy_test/lib/python3.7/site-packages/sinfo/main.py\", line 42, in _find_version\r\n return mod_version_attr()\r\nTypeError: version() missing 1 required positional argument: 'distribution_name'\r\n```\r\n\r\nAccording to the `importlib_metadata` changelog, the `__version__` attribute has been removed from the package: \r\n\r\n```\r\n=========================\r\n importlib_metadata NEWS\r\n=========================\r\n\r\nv2.0.0\r\n======\r\n\r\n* ``importlib_metadata`` no longer presents a\r\n ``__version__`` attribute. Consumers wishing to\r\n resolve the version of the package should query it\r\n directly with\r\n ``importlib_metadata.version('importlib-metadata')``.\r\n Closes #71.\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Logging and Profiling\n\"\"\"\nimport io\nimport logging\nimport sys\nfrom functools import update_wrapper, partial\nfrom logging import CRITICAL, ERROR, WARNING, INFO, DEBUG, NOTSET\nfrom datetime import datetime, timedelta, timezone\nfrom typing import Optional\n\nimport anndata.logging\nfrom sinfo import sinfo\n\n\nHINT = (INFO + DEBUG) // 2\nlogging.addLevelName(HINT, 'HINT')\n\n\nclass _RootLogger(logging.RootLogger):\n def __init__(self, level):\n super().__init__(level)\n self.propagate = False\n _RootLogger.manager = logging.Manager(self)\n\n def log(\n self,\n level: int,\n msg: str,\n *,\n extra: Optional[dict] = None,\n time: datetime = None,\n deep: Optional[str] = None,\n ) -> datetime:\n from . 
import settings\n\n now = datetime.now(timezone.utc)\n time_passed: timedelta = None if time is None else now - time\n extra = {\n **(extra or {}),\n 'deep': deep if settings.verbosity.level < level else None,\n 'time_passed': time_passed,\n }\n super().log(level, msg, extra=extra)\n return now\n\n def critical(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(CRITICAL, msg, time=time, deep=deep, extra=extra)\n\n def error(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(ERROR, msg, time=time, deep=deep, extra=extra)\n\n def warning(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(WARNING, msg, time=time, deep=deep, extra=extra)\n\n def info(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(INFO, msg, time=time, deep=deep, extra=extra)\n\n def hint(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(HINT, msg, time=time, deep=deep, extra=extra)\n\n def debug(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(DEBUG, msg, time=time, deep=deep, extra=extra)\n\n\ndef _set_log_file(settings):\n file = settings.logfile\n name = settings.logpath\n root = settings._root_logger\n h = logging.StreamHandler(file) if name is None else logging.FileHandler(name)\n h.setFormatter(_LogFormatter())\n h.setLevel(root.level)\n if len(root.handlers) == 1:\n root.removeHandler(root.handlers[0])\n elif len(root.handlers) > 1:\n raise RuntimeError('Scanpy\u2019s root logger somehow got more than one handler')\n root.addHandler(h)\n\n\ndef _set_log_level(settings, level: int):\n root = settings._root_logger\n root.setLevel(level)\n (h,) = root.handlers # may only be 1\n h.setLevel(level)\n\n\nclass _LogFormatter(logging.Formatter):\n def __init__(\n self, fmt='{levelname}: {message}', datefmt='%Y-%m-%d %H:%M', style='{'\n ):\n super().__init__(fmt, datefmt, style)\n\n def format(self, record: logging.LogRecord):\n format_orig = self._style._fmt\n if record.levelno == INFO:\n self._style._fmt = '{message}'\n elif record.levelno == HINT:\n self._style._fmt = '--> {message}'\n elif record.levelno == DEBUG:\n self._style._fmt = ' {message}'\n if record.time_passed:\n # strip microseconds\n if record.time_passed.microseconds:\n record.time_passed = timedelta(\n seconds=int(record.time_passed.total_seconds())\n )\n if '{time_passed}' in record.msg:\n record.msg = record.msg.replace(\n '{time_passed}', str(record.time_passed)\n )\n else:\n self._style._fmt += ' ({time_passed})'\n if record.deep:\n record.msg = f'{record.msg}: {record.deep}'\n result = logging.Formatter.format(self, record)\n self._style._fmt = format_orig\n return result\n\n\nprint_memory_usage = anndata.logging.print_memory_usage\nget_memory_usage = anndata.logging.get_memory_usage\n\n\n_DEPENDENCIES_NUMERICS = [\n 'anndata', # anndata actually shouldn't, but as long as it's in development\n 'umap',\n 'numpy',\n 'scipy',\n 'pandas',\n ('sklearn', 'scikit-learn'),\n 'statsmodels',\n ('igraph', 'python-igraph'),\n 'louvain',\n 'leidenalg',\n]\n\n\ndef _versions_dependencies(dependencies):\n # this is not the same as the requirements!\n for mod in dependencies:\n mod_name, dist_name = mod if isinstance(mod, tuple) else (mod, mod)\n try:\n imp = __import__(mod_name)\n yield dist_name, imp.__version__\n except (ImportError, AttributeError):\n pass\n\n\ndef print_header(*, file=None):\n \"\"\"\\\n Versions that might influence the numerical results.\n Matplotlib and Seaborn are excluded 
from this.\n \"\"\"\n\n modules = ['scanpy'] + _DEPENDENCIES_NUMERICS\n print(\n ' '.join(f'{mod}=={ver}' for mod, ver in _versions_dependencies(modules)),\n file=file or sys.stdout,\n )\n\n\ndef print_versions(*, file=None):\n \"\"\"Print print versions of imported packages\"\"\"\n if file is None: # Inform people about the behavior change\n warning('If you miss a compact list, please try `print_header`!')\n stdout = sys.stdout\n try:\n buf = sys.stdout = io.StringIO()\n sinfo(dependencies=True)\n finally:\n sys.stdout = stdout\n output = buf.getvalue()\n print(output, file=file)\n\n\ndef print_version_and_date(*, file=None):\n \"\"\"\\\n Useful for starting a notebook so you see when you started working.\n \"\"\"\n from . import __version__\n\n if file is None:\n file = sys.stdout\n print(\n f'Running Scanpy {__version__}, ' f'on {datetime.now():%Y-%m-%d %H:%M}.',\n file=file,\n )\n\n\ndef _copy_docs_and_signature(fn):\n return partial(update_wrapper, wrapped=fn, assigned=['__doc__', '__annotations__'])\n\n\ndef error(\n msg: str,\n *,\n time: datetime = None,\n deep: Optional[str] = None,\n extra: Optional[dict] = None,\n) -> datetime:\n \"\"\"\\\n Log message with specific level and return current time.\n\n Parameters\n ==========\n msg\n Message to display.\n time\n A time in the past. If this is passed, the time difference from then\n to now is appended to `msg` as ` (HH:MM:SS)`.\n If `msg` contains `{time_passed}`, the time difference is instead\n inserted at that position.\n deep\n If the current verbosity is higher than the log function\u2019s level,\n this gets displayed as well\n extra\n Additional values you can specify in `msg` like `{time_passed}`.\n \"\"\"\n from ._settings import settings\n\n return settings._root_logger.error(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef warning(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.warning(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef info(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.info(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef hint(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.hint(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef debug(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.debug(msg, time=time, deep=deep, extra=extra)\n", "path": "scanpy/logging.py"}], "after_files": [{"content": "\"\"\"Logging and Profiling\n\"\"\"\nimport io\nimport logging\nimport sys\nfrom functools import update_wrapper, partial\nfrom logging import CRITICAL, ERROR, WARNING, INFO, DEBUG, NOTSET\nfrom datetime import datetime, timedelta, timezone\nfrom typing import Optional\n\nimport anndata.logging\nfrom sinfo import sinfo\n\n\nHINT = (INFO + DEBUG) // 2\nlogging.addLevelName(HINT, 'HINT')\n\n\nclass _RootLogger(logging.RootLogger):\n def __init__(self, level):\n super().__init__(level)\n self.propagate = False\n _RootLogger.manager = logging.Manager(self)\n\n def log(\n self,\n level: int,\n msg: str,\n *,\n extra: Optional[dict] = None,\n time: datetime = None,\n deep: Optional[str] = None,\n ) -> datetime:\n from . 
import settings\n\n now = datetime.now(timezone.utc)\n time_passed: timedelta = None if time is None else now - time\n extra = {\n **(extra or {}),\n 'deep': deep if settings.verbosity.level < level else None,\n 'time_passed': time_passed,\n }\n super().log(level, msg, extra=extra)\n return now\n\n def critical(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(CRITICAL, msg, time=time, deep=deep, extra=extra)\n\n def error(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(ERROR, msg, time=time, deep=deep, extra=extra)\n\n def warning(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(WARNING, msg, time=time, deep=deep, extra=extra)\n\n def info(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(INFO, msg, time=time, deep=deep, extra=extra)\n\n def hint(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(HINT, msg, time=time, deep=deep, extra=extra)\n\n def debug(self, msg, *, time=None, deep=None, extra=None) -> datetime:\n return self.log(DEBUG, msg, time=time, deep=deep, extra=extra)\n\n\ndef _set_log_file(settings):\n file = settings.logfile\n name = settings.logpath\n root = settings._root_logger\n h = logging.StreamHandler(file) if name is None else logging.FileHandler(name)\n h.setFormatter(_LogFormatter())\n h.setLevel(root.level)\n if len(root.handlers) == 1:\n root.removeHandler(root.handlers[0])\n elif len(root.handlers) > 1:\n raise RuntimeError('Scanpy\u2019s root logger somehow got more than one handler')\n root.addHandler(h)\n\n\ndef _set_log_level(settings, level: int):\n root = settings._root_logger\n root.setLevel(level)\n (h,) = root.handlers # may only be 1\n h.setLevel(level)\n\n\nclass _LogFormatter(logging.Formatter):\n def __init__(\n self, fmt='{levelname}: {message}', datefmt='%Y-%m-%d %H:%M', style='{'\n ):\n super().__init__(fmt, datefmt, style)\n\n def format(self, record: logging.LogRecord):\n format_orig = self._style._fmt\n if record.levelno == INFO:\n self._style._fmt = '{message}'\n elif record.levelno == HINT:\n self._style._fmt = '--> {message}'\n elif record.levelno == DEBUG:\n self._style._fmt = ' {message}'\n if record.time_passed:\n # strip microseconds\n if record.time_passed.microseconds:\n record.time_passed = timedelta(\n seconds=int(record.time_passed.total_seconds())\n )\n if '{time_passed}' in record.msg:\n record.msg = record.msg.replace(\n '{time_passed}', str(record.time_passed)\n )\n else:\n self._style._fmt += ' ({time_passed})'\n if record.deep:\n record.msg = f'{record.msg}: {record.deep}'\n result = logging.Formatter.format(self, record)\n self._style._fmt = format_orig\n return result\n\n\nprint_memory_usage = anndata.logging.print_memory_usage\nget_memory_usage = anndata.logging.get_memory_usage\n\n\n_DEPENDENCIES_NUMERICS = [\n 'anndata', # anndata actually shouldn't, but as long as it's in development\n 'umap',\n 'numpy',\n 'scipy',\n 'pandas',\n ('sklearn', 'scikit-learn'),\n 'statsmodels',\n ('igraph', 'python-igraph'),\n 'louvain',\n 'leidenalg',\n]\n\n\ndef _versions_dependencies(dependencies):\n # this is not the same as the requirements!\n for mod in dependencies:\n mod_name, dist_name = mod if isinstance(mod, tuple) else (mod, mod)\n try:\n imp = __import__(mod_name)\n yield dist_name, imp.__version__\n except (ImportError, AttributeError):\n pass\n\n\ndef print_header(*, file=None):\n \"\"\"\\\n Versions that might influence the numerical results.\n Matplotlib and Seaborn are excluded 
from this.\n \"\"\"\n\n modules = ['scanpy'] + _DEPENDENCIES_NUMERICS\n print(\n ' '.join(f'{mod}=={ver}' for mod, ver in _versions_dependencies(modules)),\n file=file or sys.stdout,\n )\n\n\ndef print_versions(*, file=None):\n \"\"\"Print print versions of imported packages\"\"\"\n if file is None: # Inform people about the behavior change\n warning('If you miss a compact list, please try `print_header`!')\n stdout = sys.stdout\n try:\n buf = sys.stdout = io.StringIO()\n sinfo(\n dependencies=True,\n excludes=[\n 'builtins',\n 'stdlib_list',\n 'importlib_metadata',\n # Special module present if test coverage being calculated\n # https://gitlab.com/joelostblom/sinfo/-/issues/10\n \"$coverage\",\n ],\n )\n finally:\n sys.stdout = stdout\n output = buf.getvalue()\n print(output, file=file)\n\n\ndef print_version_and_date(*, file=None):\n \"\"\"\\\n Useful for starting a notebook so you see when you started working.\n \"\"\"\n from . import __version__\n\n if file is None:\n file = sys.stdout\n print(\n f'Running Scanpy {__version__}, ' f'on {datetime.now():%Y-%m-%d %H:%M}.',\n file=file,\n )\n\n\ndef _copy_docs_and_signature(fn):\n return partial(update_wrapper, wrapped=fn, assigned=['__doc__', '__annotations__'])\n\n\ndef error(\n msg: str,\n *,\n time: datetime = None,\n deep: Optional[str] = None,\n extra: Optional[dict] = None,\n) -> datetime:\n \"\"\"\\\n Log message with specific level and return current time.\n\n Parameters\n ==========\n msg\n Message to display.\n time\n A time in the past. If this is passed, the time difference from then\n to now is appended to `msg` as ` (HH:MM:SS)`.\n If `msg` contains `{time_passed}`, the time difference is instead\n inserted at that position.\n deep\n If the current verbosity is higher than the log function\u2019s level,\n this gets displayed as well\n extra\n Additional values you can specify in `msg` like `{time_passed}`.\n \"\"\"\n from ._settings import settings\n\n return settings._root_logger.error(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef warning(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.warning(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef info(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.info(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef hint(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.hint(msg, time=time, deep=deep, extra=extra)\n\n\n@_copy_docs_and_signature(error)\ndef debug(msg, *, time=None, deep=None, extra=None) -> datetime:\n from ._settings import settings\n\n return settings._root_logger.debug(msg, time=time, deep=deep, extra=extra)\n", "path": "scanpy/logging.py"}]}
| 3,247 | 168 |
gh_patches_debug_11146 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-7568 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-2440] `dbt show` throws `Database Error` for models with `sql_header` required for valid query
If a model is configured with a `sql_header` that is necessary to successfully run the query, `dbt show` currently fails because the [`compiled_node.compiled_code` does not include the sql_header SQL](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/task/show.py#L21).
Reproduction case (run against BQ, but not a BQ-specific issue)
```
-- models/my_model.sql
{% call set_sql_header(config) %}
CREATE TEMPORARY FUNCTION yes_no_to_boolean(answer STRING)
RETURNS BOOLEAN AS (
CASE
WHEN LOWER(answer) = 'yes' THEN True
WHEN LOWER(answer) = 'no' THEN False
ELSE NULL
END
);
{%- endcall %}
select yes_no_to_boolean("yes") as column
```
```
dbt show --select my_model --project-dir
19:00:05 Found 1 model, 0 tests, 0 snapshots, 0 analyses, 551 macros, 0 operations, 0 seed files, 0 sources, 0 exposures, 0 metrics, 0 groups
19:00:05
19:00:06 Concurrency: 1 threads (target='dev')
19:00:06
19:00:08 BigQuery adapter: https://console.cloud.google.com/bigquery?project=dbt-test-env&j=bq:US:9802c6ea-f771-4d46-9da3-bf6f521bd1da&page=queryresults
19:00:08 Encountered an error:
Runtime Error
Database Error in model dummydep (models2/dummydep.sql)
Function not found: yes_no_to_boolean at [8:8]
```
**Acceptance criteria:**
Instead of directly executing `compiled_node.compiled_code`, template it into a multi-statement query that includes the `sql_header` (similar approach to the one proposed for https://github.com/dbt-labs/dbt-core/issues/7390)
[CT-2440] `dbt show` throws `Database Error` for models with `sql_header` required for valid query
If a model is configured with a `sql_header` that is necessary to successfully run the query, `dbt show` currently fails because the [`compiled_node.compiled_code` does not include the sql_header SQL](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/task/show.py#L21).
Reproduction case (run against BQ, but not a BQ-specific issue)
```
-- models/my_model.sql
{% call set_sql_header(config) %}
CREATE TEMPORARY FUNCTION yes_no_to_boolean(answer STRING)
RETURNS BOOLEAN AS (
CASE
WHEN LOWER(answer) = 'yes' THEN True
WHEN LOWER(answer) = 'no' THEN False
ELSE NULL
END
);
{%- endcall %}
select yes_no_to_boolean("yes") as column
```
```
dbt show --select my_model --project-dir
19:00:05 Found 1 model, 0 tests, 0 snapshots, 0 analyses, 551 macros, 0 operations, 0 seed files, 0 sources, 0 exposures, 0 metrics, 0 groups
19:00:05
19:00:06 Concurrency: 1 threads (target='dev')
19:00:06
19:00:08 BigQuery adapter: https://console.cloud.google.com/bigquery?project=dbt-test-env&j=bq:US:9802c6ea-f771-4d46-9da3-bf6f521bd1da&page=queryresults
19:00:08 Encountered an error:
Runtime Error
Database Error in model dummydep (models2/dummydep.sql)
Function not found: yes_no_to_boolean at [8:8]
```
**Acceptance criteria:**
Instead of directly executing `compiled_node.compiled_code`, template it into a multi-statement query that includes the `sql_header` (similar approach to the one proposed for https://github.com/dbt-labs/dbt-core/issues/7390)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/task/show.py`
Content:
```
1 import io
2 import threading
3 import time
4
5 from dbt.contracts.graph.nodes import SeedNode
6 from dbt.contracts.results import RunResult, RunStatus
7 from dbt.events.base_types import EventLevel
8 from dbt.events.functions import fire_event
9 from dbt.events.types import ShowNode, Note
10 from dbt.exceptions import DbtRuntimeError
11 from dbt.task.compile import CompileTask, CompileRunner
12 from dbt.task.seed import SeedRunner
13
14
15 class ShowRunner(CompileRunner):
16 def __init__(self, config, adapter, node, node_index, num_nodes):
17 super().__init__(config, adapter, node, node_index, num_nodes)
18 self.run_ephemeral_models = True
19
20 def execute(self, compiled_node, manifest):
21 start_time = time.time()
22
23 # Allow passing in -1 (or any negative number) to get all rows
24 limit = None if self.config.args.limit < 0 else self.config.args.limit
25
26 adapter_response, execute_result = self.adapter.execute(
27 compiled_node.compiled_code, fetch=True, limit=limit
28 )
29 end_time = time.time()
30
31 return RunResult(
32 node=compiled_node,
33 status=RunStatus.Success,
34 timing=[],
35 thread_id=threading.current_thread().name,
36 execution_time=end_time - start_time,
37 message=None,
38 adapter_response=adapter_response.to_dict(),
39 agate_table=execute_result,
40 failures=None,
41 )
42
43
44 class ShowTask(CompileTask):
45 def _runtime_initialize(self):
46 if not (self.args.select or getattr(self.args, "inline", None)):
47 raise DbtRuntimeError("Either --select or --inline must be passed to show")
48 super()._runtime_initialize()
49
50 def get_runner_type(self, node):
51 if isinstance(node, SeedNode):
52 return SeedRunner
53 else:
54 return ShowRunner
55
56 def task_end_messages(self, results):
57 is_inline = bool(getattr(self.args, "inline", None))
58
59 if is_inline:
60 matched_results = [result for result in results if result.node.name == "inline_query"]
61 else:
62 matched_results = []
63 for result in results:
64 if result.node.name in self.selection_arg[0]:
65 matched_results.append(result)
66 else:
67 fire_event(
68 Note(msg=f"Excluded node '{result.node.name}' from results"),
69 EventLevel.DEBUG,
70 )
71
72 for result in matched_results:
73 table = result.agate_table
74
75 # Hack to get Agate table output as string
76 output = io.StringIO()
77 if self.args.output == "json":
78 table.to_json(path=output)
79 else:
80 table.print_table(output=output, max_rows=None)
81
82 node_name = result.node.name
83
84 if hasattr(result.node, "version") and result.node.version:
85 node_name += f".v{result.node.version}"
86
87 fire_event(
88 ShowNode(
89 node_name=node_name,
90 preview=output.getvalue(),
91 is_inline=is_inline,
92 output_format=self.args.output,
93 unique_id=result.node.unique_id,
94 )
95 )
96
97 def _handle_result(self, result):
98 super()._handle_result(result)
99
100 if (
101 result.node.is_ephemeral_model
102 and type(self) is ShowTask
103 and (self.args.select or getattr(self.args, "inline", None))
104 ):
105 self.node_results.append(result)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dbt/task/show.py b/core/dbt/task/show.py
--- a/core/dbt/task/show.py
+++ b/core/dbt/task/show.py
@@ -23,6 +23,11 @@
# Allow passing in -1 (or any negative number) to get all rows
limit = None if self.config.args.limit < 0 else self.config.args.limit
+ if "sql_header" in compiled_node.unrendered_config:
+ compiled_node.compiled_code = (
+ compiled_node.unrendered_config["sql_header"] + compiled_node.compiled_code
+ )
+
adapter_response, execute_result = self.adapter.execute(
compiled_node.compiled_code, fetch=True, limit=limit
)
|
{"golden_diff": "diff --git a/core/dbt/task/show.py b/core/dbt/task/show.py\n--- a/core/dbt/task/show.py\n+++ b/core/dbt/task/show.py\n@@ -23,6 +23,11 @@\n # Allow passing in -1 (or any negative number) to get all rows\n limit = None if self.config.args.limit < 0 else self.config.args.limit\n \n+ if \"sql_header\" in compiled_node.unrendered_config:\n+ compiled_node.compiled_code = (\n+ compiled_node.unrendered_config[\"sql_header\"] + compiled_node.compiled_code\n+ )\n+\n adapter_response, execute_result = self.adapter.execute(\n compiled_node.compiled_code, fetch=True, limit=limit\n )\n", "issue": "[CT-2440] `dbt show` throws `Database Error` for models with `sql_header` required for valid query \nIf a model is configured with a `sql_header` that is necessary to successfully run the query, `dbt show` currently fails because the [`compiled_node.compiled_code` does not include the sql_header SQL](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/task/show.py#L21).\r\n\r\nReproduction case (run against BQ, but not a BQ-specific issue)\r\n\r\n```\r\n-- models/my_model.sql\r\n{% call set_sql_header(config) %}\r\n CREATE TEMPORARY FUNCTION yes_no_to_boolean(answer STRING)\r\n RETURNS BOOLEAN AS (\r\n CASE\r\n WHEN LOWER(answer) = 'yes' THEN True\r\n WHEN LOWER(answer) = 'no' THEN False\r\n ELSE NULL\r\n END\r\n );\r\n{%- endcall %}\r\n\r\nselect yes_no_to_boolean(\"yes\") as column\r\n```\r\n\r\n```\r\ndbt show --select my_model --project-dir\r\n19:00:05 Found 1 model, 0 tests, 0 snapshots, 0 analyses, 551 macros, 0 operations, 0 seed files, 0 sources, 0 exposures, 0 metrics, 0 groups\r\n19:00:05 \r\n19:00:06 Concurrency: 1 threads (target='dev')\r\n19:00:06 \r\n19:00:08 BigQuery adapter: https://console.cloud.google.com/bigquery?project=dbt-test-env&j=bq:US:9802c6ea-f771-4d46-9da3-bf6f521bd1da&page=queryresults\r\n19:00:08 Encountered an error:\r\nRuntime Error\r\n Database Error in model dummydep (models2/dummydep.sql)\r\n Function not found: yes_no_to_boolean at [8:8]\r\n```\r\n\r\n**Acceptance criteria:** \r\nInstead of directly executing `compiled_node.compiled_code`, template it into a multi-statement query that includes the `sql_header` (similar approach to the one proposed for https://github.com/dbt-labs/dbt-core/issues/7390)\r\n\n[CT-2440] `dbt show` throws `Database Error` for models with `sql_header` required for valid query \nIf a model is configured with a `sql_header` that is necessary to successfully run the query, `dbt show` currently fails because the [`compiled_node.compiled_code` does not include the sql_header SQL](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/task/show.py#L21).\r\n\r\nReproduction case (run against BQ, but not a BQ-specific issue)\r\n\r\n```\r\n-- models/my_model.sql\r\n{% call set_sql_header(config) %}\r\n CREATE TEMPORARY FUNCTION yes_no_to_boolean(answer STRING)\r\n RETURNS BOOLEAN AS (\r\n CASE\r\n WHEN LOWER(answer) = 'yes' THEN True\r\n WHEN LOWER(answer) = 'no' THEN False\r\n ELSE NULL\r\n END\r\n );\r\n{%- endcall %}\r\n\r\nselect yes_no_to_boolean(\"yes\") as column\r\n```\r\n\r\n```\r\ndbt show --select my_model --project-dir\r\n19:00:05 Found 1 model, 0 tests, 0 snapshots, 0 analyses, 551 macros, 0 operations, 0 seed files, 0 sources, 0 exposures, 0 metrics, 0 groups\r\n19:00:05 \r\n19:00:06 Concurrency: 1 threads (target='dev')\r\n19:00:06 \r\n19:00:08 BigQuery adapter: https://console.cloud.google.com/bigquery?project=dbt-test-env&j=bq:US:9802c6ea-f771-4d46-9da3-bf6f521bd1da&page=queryresults\r\n19:00:08 Encountered an 
error:\r\nRuntime Error\r\n Database Error in model dummydep (models2/dummydep.sql)\r\n Function not found: yes_no_to_boolean at [8:8]\r\n```\r\n\r\n**Acceptance criteria:** \r\nInstead of directly executing `compiled_node.compiled_code`, template it into a multi-statement query that includes the `sql_header` (similar approach to the one proposed for https://github.com/dbt-labs/dbt-core/issues/7390)\r\n\n", "before_files": [{"content": "import io\nimport threading\nimport time\n\nfrom dbt.contracts.graph.nodes import SeedNode\nfrom dbt.contracts.results import RunResult, RunStatus\nfrom dbt.events.base_types import EventLevel\nfrom dbt.events.functions import fire_event\nfrom dbt.events.types import ShowNode, Note\nfrom dbt.exceptions import DbtRuntimeError\nfrom dbt.task.compile import CompileTask, CompileRunner\nfrom dbt.task.seed import SeedRunner\n\n\nclass ShowRunner(CompileRunner):\n def __init__(self, config, adapter, node, node_index, num_nodes):\n super().__init__(config, adapter, node, node_index, num_nodes)\n self.run_ephemeral_models = True\n\n def execute(self, compiled_node, manifest):\n start_time = time.time()\n\n # Allow passing in -1 (or any negative number) to get all rows\n limit = None if self.config.args.limit < 0 else self.config.args.limit\n\n adapter_response, execute_result = self.adapter.execute(\n compiled_node.compiled_code, fetch=True, limit=limit\n )\n end_time = time.time()\n\n return RunResult(\n node=compiled_node,\n status=RunStatus.Success,\n timing=[],\n thread_id=threading.current_thread().name,\n execution_time=end_time - start_time,\n message=None,\n adapter_response=adapter_response.to_dict(),\n agate_table=execute_result,\n failures=None,\n )\n\n\nclass ShowTask(CompileTask):\n def _runtime_initialize(self):\n if not (self.args.select or getattr(self.args, \"inline\", None)):\n raise DbtRuntimeError(\"Either --select or --inline must be passed to show\")\n super()._runtime_initialize()\n\n def get_runner_type(self, node):\n if isinstance(node, SeedNode):\n return SeedRunner\n else:\n return ShowRunner\n\n def task_end_messages(self, results):\n is_inline = bool(getattr(self.args, \"inline\", None))\n\n if is_inline:\n matched_results = [result for result in results if result.node.name == \"inline_query\"]\n else:\n matched_results = []\n for result in results:\n if result.node.name in self.selection_arg[0]:\n matched_results.append(result)\n else:\n fire_event(\n Note(msg=f\"Excluded node '{result.node.name}' from results\"),\n EventLevel.DEBUG,\n )\n\n for result in matched_results:\n table = result.agate_table\n\n # Hack to get Agate table output as string\n output = io.StringIO()\n if self.args.output == \"json\":\n table.to_json(path=output)\n else:\n table.print_table(output=output, max_rows=None)\n\n node_name = result.node.name\n\n if hasattr(result.node, \"version\") and result.node.version:\n node_name += f\".v{result.node.version}\"\n\n fire_event(\n ShowNode(\n node_name=node_name,\n preview=output.getvalue(),\n is_inline=is_inline,\n output_format=self.args.output,\n unique_id=result.node.unique_id,\n )\n )\n\n def _handle_result(self, result):\n super()._handle_result(result)\n\n if (\n result.node.is_ephemeral_model\n and type(self) is ShowTask\n and (self.args.select or getattr(self.args, \"inline\", None))\n ):\n self.node_results.append(result)\n", "path": "core/dbt/task/show.py"}], "after_files": [{"content": "import io\nimport threading\nimport time\n\nfrom dbt.contracts.graph.nodes import SeedNode\nfrom dbt.contracts.results 
import RunResult, RunStatus\nfrom dbt.events.base_types import EventLevel\nfrom dbt.events.functions import fire_event\nfrom dbt.events.types import ShowNode, Note\nfrom dbt.exceptions import DbtRuntimeError\nfrom dbt.task.compile import CompileTask, CompileRunner\nfrom dbt.task.seed import SeedRunner\n\n\nclass ShowRunner(CompileRunner):\n def __init__(self, config, adapter, node, node_index, num_nodes):\n super().__init__(config, adapter, node, node_index, num_nodes)\n self.run_ephemeral_models = True\n\n def execute(self, compiled_node, manifest):\n start_time = time.time()\n\n # Allow passing in -1 (or any negative number) to get all rows\n limit = None if self.config.args.limit < 0 else self.config.args.limit\n\n if \"sql_header\" in compiled_node.unrendered_config:\n compiled_node.compiled_code = (\n compiled_node.unrendered_config[\"sql_header\"] + compiled_node.compiled_code\n )\n\n adapter_response, execute_result = self.adapter.execute(\n compiled_node.compiled_code, fetch=True, limit=limit\n )\n end_time = time.time()\n\n return RunResult(\n node=compiled_node,\n status=RunStatus.Success,\n timing=[],\n thread_id=threading.current_thread().name,\n execution_time=end_time - start_time,\n message=None,\n adapter_response=adapter_response.to_dict(),\n agate_table=execute_result,\n failures=None,\n )\n\n\nclass ShowTask(CompileTask):\n def _runtime_initialize(self):\n if not (self.args.select or getattr(self.args, \"inline\", None)):\n raise DbtRuntimeError(\"Either --select or --inline must be passed to show\")\n super()._runtime_initialize()\n\n def get_runner_type(self, node):\n if isinstance(node, SeedNode):\n return SeedRunner\n else:\n return ShowRunner\n\n def task_end_messages(self, results):\n is_inline = bool(getattr(self.args, \"inline\", None))\n\n if is_inline:\n matched_results = [result for result in results if result.node.name == \"inline_query\"]\n else:\n matched_results = []\n for result in results:\n if result.node.name in self.selection_arg[0]:\n matched_results.append(result)\n else:\n fire_event(\n Note(msg=f\"Excluded node '{result.node.name}' from results\"),\n EventLevel.DEBUG,\n )\n\n for result in matched_results:\n table = result.agate_table\n\n # Hack to get Agate table output as string\n output = io.StringIO()\n if self.args.output == \"json\":\n table.to_json(path=output)\n else:\n table.print_table(output=output, max_rows=None)\n\n node_name = result.node.name\n\n if hasattr(result.node, \"version\") and result.node.version:\n node_name += f\".v{result.node.version}\"\n\n fire_event(\n ShowNode(\n node_name=node_name,\n preview=output.getvalue(),\n is_inline=is_inline,\n output_format=self.args.output,\n unique_id=result.node.unique_id,\n )\n )\n\n def _handle_result(self, result):\n super()._handle_result(result)\n\n if (\n result.node.is_ephemeral_model\n and type(self) is ShowTask\n and (self.args.select or getattr(self.args, \"inline\", None))\n ):\n self.node_results.append(result)\n", "path": "core/dbt/task/show.py"}]}
| 2,175 | 159 |
gh_patches_debug_13045 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2891 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E2520 false positive for CloudWatch Alarm with expression
### CloudFormation Lint Version
0.80.3
### What operating system are you using?
MacOS
### Describe the bug
A valid CloudWatch alarm that uses a metrics expression is resulting in an E2520 false positive. The alarm was defined in the CloudWatch console and exported via the "View Source | CloudFormation YAML" capability, so it's definitionally a valid CloudWatch alarm. To confirm that the bug isn't in the console, created a copy of the alarm using the generated definition and neither CloudFormation nor CloudWatch have any complaints.
### Expected behavior
E2520 should not be raised when `Dimensions` is present under `MetricStat.Metric`.
### Reproduction template
```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: AXIS ALB alarms
Parameters:
pLoadBalancerId:
Type: String
Default: app/private-api-proxy/ced2a65499b104e7
pAlarmPrefix:
Type: String
Default: MySampleApp
Resources:
rAlb5xxPercentage:
Type: AWS::CloudWatch::Alarm
Properties:
AlarmName: !Sub "${pAlarmPrefix}-ALB-5XX-Percentage"
AlarmDescription: >-
This alarm fires when the ALB is returning HTTP 5XX errors. It is
usually due to a misconfiguration of the ALB or not having any
associated targets.
See [runbook](https://google.com) for more details.
ActionsEnabled: true
OKActions: []
AlarmActions: []
InsufficientDataActions: []
Dimensions: []
EvaluationPeriods: 15
DatapointsToAlarm: 3
Threshold: 5
ComparisonOperator: GreaterThanOrEqualToThreshold
TreatMissingData: notBreaching
Metrics:
- Id: e1
Label: ALB 5XX Percentage
ReturnData: true
Expression: (m2/(m1+m2+m3+0.001))*100
- Id: m1
ReturnData: false
MetricStat:
Metric:
Namespace: AWS/ApplicationELB
MetricName: RequestCount
Dimensions:
- Name: LoadBalancer
Value: !Ref pLoadBalancerId
Period: 60
Stat: Sum
- Id: m2
ReturnData: false
MetricStat:
Metric:
Namespace: AWS/ApplicationELB
MetricName: HTTPCode_ELB_5XX_Count
Dimensions:
- Name: LoadBalancer
Value: !Ref pLoadBalancerId
Period: 60
Stat: Sum
- Id: m3
ReturnData: false
MetricStat:
Metric:
Namespace: AWS/ApplicationELB
MetricName: HTTPCode_ELB_4XX_Count
Dimensions:
- Name: LoadBalancer
Value: !Ref pLoadBalancerId
Period: 60
Stat: Sum
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/properties/Exclusive.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import cfnlint.helpers
6 from cfnlint.data import AdditionalSpecs
7 from cfnlint.rules import CloudFormationLintRule, RuleMatch
8
9
10 class Exclusive(CloudFormationLintRule):
11 """Check Properties Resource Configuration"""
12
13 id = "E2520"
14 shortdesc = "Check Properties that are mutually exclusive"
15 description = (
16 "Making sure CloudFormation properties that are exclusive are not defined"
17 )
18 source_url = "https://github.com/aws-cloudformation/cfn-python-lint"
19 tags = ["resources"]
20
21 def __init__(self):
22 """Init"""
23 super().__init__()
24 exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, "Exclusive.json")
25 self.resource_types_specs = exclusivespec["ResourceTypes"]
26 self.property_types_specs = exclusivespec["PropertyTypes"]
27 for resource_type_spec in self.resource_types_specs:
28 self.resource_property_types.append(resource_type_spec)
29 for property_type_spec in self.property_types_specs:
30 self.resource_sub_property_types.append(property_type_spec)
31
32 def check(self, properties, exclusions, path, cfn):
33 """Check itself"""
34 matches = []
35 for p_value, p_path in properties.items_safe(path[:]):
36 for k, v in exclusions.items():
37 property_sets = cfn.get_object_without_conditions(p_value, [k] + v)
38 for property_set in property_sets:
39 obj = property_set["Object"].clean()
40 for prop in obj:
41 if prop == k:
42 for excl_property in exclusions[prop]:
43 if excl_property in obj:
44 if property_set["Scenario"] is None:
45 message = "Property {0} should NOT exist with {1} for {2}"
46 matches.append(
47 RuleMatch(
48 p_path + [prop],
49 message.format(
50 excl_property,
51 prop,
52 "/".join(map(str, p_path)),
53 ),
54 )
55 )
56 else:
57 scenario_text = " and ".join(
58 [
59 f'when condition "{k}" is {v}'
60 for (k, v) in property_set[
61 "Scenario"
62 ].items()
63 ]
64 )
65 message = "Property {0} should NOT exist with {1} {2} for {3}"
66 matches.append(
67 RuleMatch(
68 p_path + [prop],
69 message.format(
70 excl_property,
71 prop,
72 scenario_text,
73 "/".join(map(str, p_path)),
74 ),
75 )
76 )
77
78 return matches
79
80 def match_resource_sub_properties(self, properties, property_type, path, cfn):
81 """Match for sub properties"""
82 matches = []
83
84 exclusions = self.property_types_specs.get(property_type, {})
85 matches.extend(self.check(properties, exclusions, path, cfn))
86
87 return matches
88
89 def match_resource_properties(self, properties, resource_type, path, cfn):
90 """Check CloudFormation Properties"""
91 matches = []
92
93 exclusions = self.resource_types_specs.get(resource_type, {})
94 matches.extend(self.check(properties, exclusions, path, cfn))
95
96 return matches
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py
--- a/src/cfnlint/rules/resources/properties/Exclusive.py
+++ b/src/cfnlint/rules/resources/properties/Exclusive.py
@@ -40,7 +40,7 @@
for prop in obj:
if prop == k:
for excl_property in exclusions[prop]:
- if excl_property in obj:
+ if obj.get(excl_property):
if property_set["Scenario"] is None:
message = "Property {0} should NOT exist with {1} for {2}"
matches.append(
|
{"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py\n--- a/src/cfnlint/rules/resources/properties/Exclusive.py\n+++ b/src/cfnlint/rules/resources/properties/Exclusive.py\n@@ -40,7 +40,7 @@\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n- if excl_property in obj:\n+ if obj.get(excl_property):\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n", "issue": "E2520 false positive for CloudWatch Alarm with expression\n### CloudFormation Lint Version\r\n\r\n0.80.3\r\n\r\n### What operating system are you using?\r\n\r\nMacOS\r\n\r\n### Describe the bug\r\n\r\nA valid CloudWatch alarm that uses a metrics expression is resulting in an E2520 false positive. The alarm was defined in the CloudWatch console and exported via the \"View Source | CloudFormation YAML\" capability, so it's definitionally a valid CloudWatch alarm. To confirm that the bug isn't in the console, created a copy of the alarm using the generated definition and neither CloudFormation nor CloudWatch have any complaints.\r\n\r\n### Expected behavior\r\n\r\nE2520 should not be raised when `Dimensions` is present under `MetricStat.Metric`.\r\n\r\n### Reproduction template\r\n\r\n```yaml\r\nAWSTemplateFormatVersion: \"2010-09-09\"\r\n\r\nDescription: AXIS ALB alarms\r\n\r\nParameters:\r\n pLoadBalancerId:\r\n Type: String\r\n Default: app/private-api-proxy/ced2a65499b104e7\r\n\r\n pAlarmPrefix:\r\n Type: String\r\n Default: MySampleApp\r\n\r\nResources:\r\n rAlb5xxPercentage:\r\n Type: AWS::CloudWatch::Alarm\r\n Properties:\r\n AlarmName: !Sub \"${pAlarmPrefix}-ALB-5XX-Percentage\"\r\n AlarmDescription: >-\r\n This alarm fires when the ALB is returning HTTP 5XX errors. It is\r\n usually due to a misconfiguration of the ALB or not having any\r\n associated targets.\r\n\r\n\r\n See [runbook](https://google.com) for more details.\r\n ActionsEnabled: true\r\n OKActions: []\r\n AlarmActions: []\r\n InsufficientDataActions: []\r\n Dimensions: []\r\n EvaluationPeriods: 15\r\n DatapointsToAlarm: 3\r\n Threshold: 5\r\n ComparisonOperator: GreaterThanOrEqualToThreshold\r\n TreatMissingData: notBreaching\r\n Metrics:\r\n - Id: e1\r\n Label: ALB 5XX Percentage\r\n ReturnData: true\r\n Expression: (m2/(m1+m2+m3+0.001))*100\r\n - Id: m1\r\n ReturnData: false\r\n MetricStat:\r\n Metric:\r\n Namespace: AWS/ApplicationELB\r\n MetricName: RequestCount\r\n Dimensions:\r\n - Name: LoadBalancer\r\n Value: !Ref pLoadBalancerId\r\n Period: 60\r\n Stat: Sum\r\n - Id: m2\r\n ReturnData: false\r\n MetricStat:\r\n Metric:\r\n Namespace: AWS/ApplicationELB\r\n MetricName: HTTPCode_ELB_5XX_Count\r\n Dimensions:\r\n - Name: LoadBalancer\r\n Value: !Ref pLoadBalancerId\r\n Period: 60\r\n Stat: Sum\r\n - Id: m3\r\n ReturnData: false\r\n MetricStat:\r\n Metric:\r\n Namespace: AWS/ApplicationELB\r\n MetricName: HTTPCode_ELB_4XX_Count\r\n Dimensions:\r\n - Name: LoadBalancer\r\n Value: !Ref pLoadBalancerId\r\n Period: 60\r\n Stat: Sum\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n if obj.get(excl_property):\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}]}
| 1,814 | 139 |
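A minimal sketch of why the original membership test in the `Exclusive.py` change above misfires on the reproduction template, assuming (as the template suggests) that the Exclusive spec pairs `Metrics` with `Dimensions` for `AWS::CloudWatch::Alarm`; the dict below is illustrative, not cfn-lint's internal representation:

```python
# Trimmed from the reproduction template: the console export emits an empty
# top-level Dimensions list alongside Metrics.
properties = {
    "Metrics": [{"Id": "e1", "Expression": "(m2/(m1+m2+m3+0.001))*100"}],
    "Dimensions": [],
}

print("Dimensions" in properties)          # True  -> pre-patch check reports E2520
print(bool(properties.get("Dimensions")))  # False -> patched truthiness check stays quiet
```

The empty list satisfies the `in` membership test but is falsy under `.get()`, which is exactly the distinction the one-line patch relies on.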
gh_patches_debug_44040 | rasdani/github-patches | git_diff | meltano__meltano-7179 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add is_codespaces to telemetry environment context
Just like [we set `is_ci_environment` when the `CI` env var is set](https://github.com/meltano/meltano/blob/main/src/meltano/core/tracking/contexts/environment.py#L57), we should set `is_codespaces` (or something to that effect) when `CODESPACES` is set (see [docs](https://docs.github.com/en/codespaces/developing-in-codespaces/default-environment-variables-for-your-codespace)).
@tayloramurphy It'd be interesting to compare how far people get into the funnel with codespaces vs having to install locally. On the one hand, the barrier is lower so some people that click the button may be less motivated to make it to the end, but on the other hand, it should be easier to just quickly follow the steps and get to "wow". We may run into the issue that we currently consider any usage of less than 5min a bot, and that these codespaces projects may be treated as one-offs instead of being reused to form the company's official Meltano projects, so they'll never turn active. It'll be good to have the option of treating new codespaces projects differently from new local projects in our reporting.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/core/tracking/contexts/environment.py`
Content:
```
1 """Environment context for the Snowplow tracker."""
2
3 from __future__ import annotations
4
5 import os
6 import platform
7 import uuid
8 from collections import defaultdict
9 from contextlib import suppress
10 from datetime import datetime
11 from pathlib import Path
12 from typing import Any
13 from warnings import warn
14
15 import psutil
16 from cached_property import cached_property
17 from snowplow_tracker import SelfDescribingJson
18 from structlog.stdlib import get_logger
19
20 import meltano
21 from meltano.core.tracking.schemas import EnvironmentContextSchema
22 from meltano.core.utils import hash_sha256, safe_hasattr
23
24 logger = get_logger(__name__)
25
26 # This file is only ever created in CI when building a release
27 release_marker_path = Path(__file__).parent / ".release_marker"
28
29
30 def _get_parent_context_uuid_str() -> str | None:
31 with suppress(KeyError):
32 uuid_str = os.environ["MELTANO_PARENT_CONTEXT_UUID"]
33 try:
34 return str(uuid.UUID(uuid_str))
35 except ValueError:
36 warn(
37 f"Invalid telemetry parent environment context UUID {uuid_str!r} "
38 "from $MELTANO_PARENT_CONTEXT_UUID - Meltano will continue as if "
39 "$MELTANO_PARENT_CONTEXT_UUID had not been set"
40 )
41 return None
42
43
44 class EnvironmentContext(SelfDescribingJson):
45 """Environment context for the Snowplow tracker."""
46
47 def __init__(self):
48 """Initialize the environment context."""
49 ci_markers = ("GITHUB_ACTIONS", "CI")
50 super().__init__(
51 EnvironmentContextSchema.url,
52 {
53 "context_uuid": str(uuid.uuid4()),
54 "parent_context_uuid": _get_parent_context_uuid_str(),
55 "meltano_version": meltano.__version__,
56 "is_dev_build": not release_marker_path.exists(),
57 "is_ci_environment": any(
58 # True if 'true', 'TRUE', 'True', or '1'
59 os.environ.get(marker, "").lower()[:1] in {"1", "t"}
60 for marker in ci_markers
61 ),
62 "python_version": platform.python_version(),
63 "python_implementation": platform.python_implementation(),
64 **self.system_info,
65 **self.process_info,
66 },
67 )
68
69 @cached_property
70 def system_info(self) -> dict[str, Any]:
71 """Get system information.
72
73 Returns:
74 A dictionary containing system information.
75 """
76 try:
77 freedesktop_data = platform.freedesktop_os_release()
78 except Exception:
79 freedesktop_data = defaultdict(type(None))
80
81 return {
82 "system_name": platform.system() or None,
83 "system_release": platform.release() or None,
84 "system_version": platform.version() or None,
85 "machine": platform.machine() or None,
86 "windows_edition": platform.win32_edition()
87 if safe_hasattr(platform, "win32_edition")
88 else None,
89 "freedesktop_id": freedesktop_data["ID"],
90 "freedesktop_id_like": freedesktop_data.get("ID_LIKE", None),
91 "freedesktop_version_id": freedesktop_data.get("VERSION_ID", None),
92 }
93
94 @staticmethod
95 def get_process_timestamp(process: psutil.Process) -> str:
96 """Obtain the creation time of a process as a ISO 8601 timestamp.
97
98 Args:
99 process: The process to obtain the creation time from.
100
101 Returns:
102 A ISO 8601 timestamp formatted string.
103 """
104 return f"{datetime.utcfromtimestamp(process.create_time()).isoformat()}Z"
105
106 @cached_property
107 def process_info(self) -> dict[str, Any]:
108 """Obtain the process information for the current process.
109
110 Returns:
111 A dictionary containing the process information. Such as the hashed process name, pid, core counts, etc
112 """
113 process = psutil.Process()
114 with process.oneshot():
115 return {
116 "num_cpu_cores": psutil.cpu_count(),
117 "num_cpu_cores_available": self.num_available_cores,
118 "process_hierarchy": [
119 {
120 "process_name_hash": hash_sha256(proc.name()),
121 "process_creation_timestamp": self.get_process_timestamp(proc),
122 }
123 for proc in (process, *process.parents())
124 ],
125 }
126
127 @cached_property
128 def num_available_cores(self) -> int:
129 """Obtain the number of available CPU cores.
130
131 Uses sched_getaffinity where available, otherwise falls back to cpu_count().
132
133 Returns:
134 int: The number of available CPU cores.
135 """
136 if safe_hasattr(os, "sched_getaffinity"):
137 return len(os.sched_getaffinity(0))
138 return os.cpu_count()
139
140
141 environment_context = EnvironmentContext()
142
```
Path: `src/meltano/core/tracking/schemas.py`
Content:
```
1 """Meltano Iglu schemas metadata & utilities."""
2
3 from __future__ import annotations
4
5 from dataclasses import dataclass
6
7 DEFAULT_VENDOR = "com.meltano"
8
9
10 @dataclass
11 class IgluSchema:
12 """Dataclass to store the name, version, vendor, and URL for an Iglu schema."""
13
14 name: str
15 version: str
16 vendor: str = DEFAULT_VENDOR
17
18 @property
19 def url(self) -> str:
20 """Construct an iglu schema URL.
21
22 Returns:
23 The URL to the schema.
24 """
25 return f"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}"
26
27
28 CliContextSchema = IgluSchema("cli_context", "1-1-0")
29 CliEventSchema = IgluSchema("cli_event", "1-0-1")
30 BlockEventSchema = IgluSchema("block_event", "1-0-0")
31 EnvironmentContextSchema = IgluSchema("environment_context", "1-1-0")
32 ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
33 ExitEventSchema = IgluSchema("exit_event", "1-0-1")
34 PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
35 ProjectContextSchema = IgluSchema("project_context", "1-1-0")
36 TelemetryStateChangeEventSchema = IgluSchema("telemetry_state_change_event", "1-0-0")
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/meltano/core/tracking/contexts/environment.py b/src/meltano/core/tracking/contexts/environment.py
--- a/src/meltano/core/tracking/contexts/environment.py
+++ b/src/meltano/core/tracking/contexts/environment.py
@@ -9,7 +9,7 @@
from contextlib import suppress
from datetime import datetime
from pathlib import Path
-from typing import Any
+from typing import Any, Iterable
from warnings import warn
import psutil
@@ -19,7 +19,7 @@
import meltano
from meltano.core.tracking.schemas import EnvironmentContextSchema
-from meltano.core.utils import hash_sha256, safe_hasattr
+from meltano.core.utils import get_boolean_env_var, hash_sha256, safe_hasattr, strtobool
logger = get_logger(__name__)
@@ -44,9 +44,21 @@
class EnvironmentContext(SelfDescribingJson):
"""Environment context for the Snowplow tracker."""
+ ci_markers = {"GITHUB_ACTIONS", "CI"}
+ notable_flag_env_vars = {"CODESPACES", *ci_markers}
+
+ @classmethod
+ def _notable_flag_env_vars(cls) -> Iterable[str]:
+ for env_var_name in cls.notable_flag_env_vars:
+ with suppress(KeyError): # Skip unset env vars
+ env_var_value = os.environ[env_var_name]
+ try:
+ yield env_var_name, strtobool(env_var_value)
+ except ValueError:
+ yield env_var_name, None
+
def __init__(self):
"""Initialize the environment context."""
- ci_markers = ("GITHUB_ACTIONS", "CI")
super().__init__(
EnvironmentContextSchema.url,
{
@@ -55,10 +67,9 @@
"meltano_version": meltano.__version__,
"is_dev_build": not release_marker_path.exists(),
"is_ci_environment": any(
- # True if 'true', 'TRUE', 'True', or '1'
- os.environ.get(marker, "").lower()[:1] in {"1", "t"}
- for marker in ci_markers
+ get_boolean_env_var(marker) for marker in self.ci_markers
),
+ "notable_flag_env_vars": dict(self._notable_flag_env_vars()),
"python_version": platform.python_version(),
"python_implementation": platform.python_implementation(),
**self.system_info,
@@ -108,7 +119,8 @@
"""Obtain the process information for the current process.
Returns:
- A dictionary containing the process information. Such as the hashed process name, pid, core counts, etc
+ A dictionary containing the process information. Such as the hashed
+ process name, pid, core counts, etc
"""
process = psutil.Process()
with process.oneshot():
@@ -128,10 +140,11 @@
def num_available_cores(self) -> int:
"""Obtain the number of available CPU cores.
- Uses sched_getaffinity where available, otherwise falls back to cpu_count().
+ Uses `sched_getaffinity` where available, otherwise falls back to
+ `cpu_count`.
Returns:
- int: The number of available CPU cores.
+ The number of available CPU cores.
"""
if safe_hasattr(os, "sched_getaffinity"):
return len(os.sched_getaffinity(0))
diff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py
--- a/src/meltano/core/tracking/schemas.py
+++ b/src/meltano/core/tracking/schemas.py
@@ -28,7 +28,7 @@
CliContextSchema = IgluSchema("cli_context", "1-1-0")
CliEventSchema = IgluSchema("cli_event", "1-0-1")
BlockEventSchema = IgluSchema("block_event", "1-0-0")
-EnvironmentContextSchema = IgluSchema("environment_context", "1-1-0")
+EnvironmentContextSchema = IgluSchema("environment_context", "1-2-0")
ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
ExitEventSchema = IgluSchema("exit_event", "1-0-1")
PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
|
{"golden_diff": "diff --git a/src/meltano/core/tracking/contexts/environment.py b/src/meltano/core/tracking/contexts/environment.py\n--- a/src/meltano/core/tracking/contexts/environment.py\n+++ b/src/meltano/core/tracking/contexts/environment.py\n@@ -9,7 +9,7 @@\n from contextlib import suppress\n from datetime import datetime\n from pathlib import Path\n-from typing import Any\n+from typing import Any, Iterable\n from warnings import warn\n \n import psutil\n@@ -19,7 +19,7 @@\n \n import meltano\n from meltano.core.tracking.schemas import EnvironmentContextSchema\n-from meltano.core.utils import hash_sha256, safe_hasattr\n+from meltano.core.utils import get_boolean_env_var, hash_sha256, safe_hasattr, strtobool\n \n logger = get_logger(__name__)\n \n@@ -44,9 +44,21 @@\n class EnvironmentContext(SelfDescribingJson):\n \"\"\"Environment context for the Snowplow tracker.\"\"\"\n \n+ ci_markers = {\"GITHUB_ACTIONS\", \"CI\"}\n+ notable_flag_env_vars = {\"CODESPACES\", *ci_markers}\n+\n+ @classmethod\n+ def _notable_flag_env_vars(cls) -> Iterable[str]:\n+ for env_var_name in cls.notable_flag_env_vars:\n+ with suppress(KeyError): # Skip unset env vars\n+ env_var_value = os.environ[env_var_name]\n+ try:\n+ yield env_var_name, strtobool(env_var_value)\n+ except ValueError:\n+ yield env_var_name, None\n+\n def __init__(self):\n \"\"\"Initialize the environment context.\"\"\"\n- ci_markers = (\"GITHUB_ACTIONS\", \"CI\")\n super().__init__(\n EnvironmentContextSchema.url,\n {\n@@ -55,10 +67,9 @@\n \"meltano_version\": meltano.__version__,\n \"is_dev_build\": not release_marker_path.exists(),\n \"is_ci_environment\": any(\n- # True if 'true', 'TRUE', 'True', or '1'\n- os.environ.get(marker, \"\").lower()[:1] in {\"1\", \"t\"}\n- for marker in ci_markers\n+ get_boolean_env_var(marker) for marker in self.ci_markers\n ),\n+ \"notable_flag_env_vars\": dict(self._notable_flag_env_vars()),\n \"python_version\": platform.python_version(),\n \"python_implementation\": platform.python_implementation(),\n **self.system_info,\n@@ -108,7 +119,8 @@\n \"\"\"Obtain the process information for the current process.\n \n Returns:\n- A dictionary containing the process information. Such as the hashed process name, pid, core counts, etc\n+ A dictionary containing the process information. 
Such as the hashed\n+ process name, pid, core counts, etc\n \"\"\"\n process = psutil.Process()\n with process.oneshot():\n@@ -128,10 +140,11 @@\n def num_available_cores(self) -> int:\n \"\"\"Obtain the number of available CPU cores.\n \n- Uses sched_getaffinity where available, otherwise falls back to cpu_count().\n+ Uses `sched_getaffinity` where available, otherwise falls back to\n+ `cpu_count`.\n \n Returns:\n- int: The number of available CPU cores.\n+ The number of available CPU cores.\n \"\"\"\n if safe_hasattr(os, \"sched_getaffinity\"):\n return len(os.sched_getaffinity(0))\ndiff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py\n--- a/src/meltano/core/tracking/schemas.py\n+++ b/src/meltano/core/tracking/schemas.py\n@@ -28,7 +28,7 @@\n CliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\n CliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\n BlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\n-EnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-1-0\")\n+EnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-2-0\")\n ExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\n ExitEventSchema = IgluSchema(\"exit_event\", \"1-0-1\")\n PluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\n", "issue": "Add is_codespaces to telemetry environment context\nJust like [we set `is_ci_environment` when the `CI` env var is set](https://github.com/meltano/meltano/blob/main/src/meltano/core/tracking/contexts/environment.py#L57), we should set `is_codespaces` (or something to that effect) when `CODESPACES` is set (see [docs](https://docs.github.com/en/codespaces/developing-in-codespaces/default-environment-variables-for-your-codespace)).\r\n\r\n@tayloramurphy It'd be interesting to compare how far people get into the funnel with codespaces vs having to install locally. On the one hand, the barrier is lower so some people that click the button may be less motivated to make it to the end, but on the other hand, it should be easier to just quickly follow the steps and get to \"wow\". We may run into the issue that we currently consider any usage of less than 5min a bot, and that these codespaces projects may be treated as one-offs instead of being reused to form the company's official Meltano projects, so they'll never turn active. 
It'll be good to have the option of treating new codespaces projects differently from new local projects in our reporting.\n", "before_files": [{"content": "\"\"\"Environment context for the Snowplow tracker.\"\"\"\n\nfrom __future__ import annotations\n\nimport os\nimport platform\nimport uuid\nfrom collections import defaultdict\nfrom contextlib import suppress\nfrom datetime import datetime\nfrom pathlib import Path\nfrom typing import Any\nfrom warnings import warn\n\nimport psutil\nfrom cached_property import cached_property\nfrom snowplow_tracker import SelfDescribingJson\nfrom structlog.stdlib import get_logger\n\nimport meltano\nfrom meltano.core.tracking.schemas import EnvironmentContextSchema\nfrom meltano.core.utils import hash_sha256, safe_hasattr\n\nlogger = get_logger(__name__)\n\n# This file is only ever created in CI when building a release\nrelease_marker_path = Path(__file__).parent / \".release_marker\"\n\n\ndef _get_parent_context_uuid_str() -> str | None:\n with suppress(KeyError):\n uuid_str = os.environ[\"MELTANO_PARENT_CONTEXT_UUID\"]\n try:\n return str(uuid.UUID(uuid_str))\n except ValueError:\n warn(\n f\"Invalid telemetry parent environment context UUID {uuid_str!r} \"\n \"from $MELTANO_PARENT_CONTEXT_UUID - Meltano will continue as if \"\n \"$MELTANO_PARENT_CONTEXT_UUID had not been set\"\n )\n return None\n\n\nclass EnvironmentContext(SelfDescribingJson):\n \"\"\"Environment context for the Snowplow tracker.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the environment context.\"\"\"\n ci_markers = (\"GITHUB_ACTIONS\", \"CI\")\n super().__init__(\n EnvironmentContextSchema.url,\n {\n \"context_uuid\": str(uuid.uuid4()),\n \"parent_context_uuid\": _get_parent_context_uuid_str(),\n \"meltano_version\": meltano.__version__,\n \"is_dev_build\": not release_marker_path.exists(),\n \"is_ci_environment\": any(\n # True if 'true', 'TRUE', 'True', or '1'\n os.environ.get(marker, \"\").lower()[:1] in {\"1\", \"t\"}\n for marker in ci_markers\n ),\n \"python_version\": platform.python_version(),\n \"python_implementation\": platform.python_implementation(),\n **self.system_info,\n **self.process_info,\n },\n )\n\n @cached_property\n def system_info(self) -> dict[str, Any]:\n \"\"\"Get system information.\n\n Returns:\n A dictionary containing system information.\n \"\"\"\n try:\n freedesktop_data = platform.freedesktop_os_release()\n except Exception:\n freedesktop_data = defaultdict(type(None))\n\n return {\n \"system_name\": platform.system() or None,\n \"system_release\": platform.release() or None,\n \"system_version\": platform.version() or None,\n \"machine\": platform.machine() or None,\n \"windows_edition\": platform.win32_edition()\n if safe_hasattr(platform, \"win32_edition\")\n else None,\n \"freedesktop_id\": freedesktop_data[\"ID\"],\n \"freedesktop_id_like\": freedesktop_data.get(\"ID_LIKE\", None),\n \"freedesktop_version_id\": freedesktop_data.get(\"VERSION_ID\", None),\n }\n\n @staticmethod\n def get_process_timestamp(process: psutil.Process) -> str:\n \"\"\"Obtain the creation time of a process as a ISO 8601 timestamp.\n\n Args:\n process: The process to obtain the creation time from.\n\n Returns:\n A ISO 8601 timestamp formatted string.\n \"\"\"\n return f\"{datetime.utcfromtimestamp(process.create_time()).isoformat()}Z\"\n\n @cached_property\n def process_info(self) -> dict[str, Any]:\n \"\"\"Obtain the process information for the current process.\n\n Returns:\n A dictionary containing the process information. 
Such as the hashed process name, pid, core counts, etc\n \"\"\"\n process = psutil.Process()\n with process.oneshot():\n return {\n \"num_cpu_cores\": psutil.cpu_count(),\n \"num_cpu_cores_available\": self.num_available_cores,\n \"process_hierarchy\": [\n {\n \"process_name_hash\": hash_sha256(proc.name()),\n \"process_creation_timestamp\": self.get_process_timestamp(proc),\n }\n for proc in (process, *process.parents())\n ],\n }\n\n @cached_property\n def num_available_cores(self) -> int:\n \"\"\"Obtain the number of available CPU cores.\n\n Uses sched_getaffinity where available, otherwise falls back to cpu_count().\n\n Returns:\n int: The number of available CPU cores.\n \"\"\"\n if safe_hasattr(os, \"sched_getaffinity\"):\n return len(os.sched_getaffinity(0))\n return os.cpu_count()\n\n\nenvironment_context = EnvironmentContext()\n", "path": "src/meltano/core/tracking/contexts/environment.py"}, {"content": "\"\"\"Meltano Iglu schemas metadata & utilities.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nDEFAULT_VENDOR = \"com.meltano\"\n\n\n@dataclass\nclass IgluSchema:\n \"\"\"Dataclass to store the name, version, vendor, and URL for an Iglu schema.\"\"\"\n\n name: str\n version: str\n vendor: str = DEFAULT_VENDOR\n\n @property\n def url(self) -> str:\n \"\"\"Construct an iglu schema URL.\n\n Returns:\n The URL to the schema.\n \"\"\"\n return f\"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}\"\n\n\nCliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\nCliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\nBlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\nEnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-1-0\")\nExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\nExitEventSchema = IgluSchema(\"exit_event\", \"1-0-1\")\nPluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\nProjectContextSchema = IgluSchema(\"project_context\", \"1-1-0\")\nTelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "path": "src/meltano/core/tracking/schemas.py"}], "after_files": [{"content": "\"\"\"Environment context for the Snowplow tracker.\"\"\"\n\nfrom __future__ import annotations\n\nimport os\nimport platform\nimport uuid\nfrom collections import defaultdict\nfrom contextlib import suppress\nfrom datetime import datetime\nfrom pathlib import Path\nfrom typing import Any, Iterable\nfrom warnings import warn\n\nimport psutil\nfrom cached_property import cached_property\nfrom snowplow_tracker import SelfDescribingJson\nfrom structlog.stdlib import get_logger\n\nimport meltano\nfrom meltano.core.tracking.schemas import EnvironmentContextSchema\nfrom meltano.core.utils import get_boolean_env_var, hash_sha256, safe_hasattr, strtobool\n\nlogger = get_logger(__name__)\n\n# This file is only ever created in CI when building a release\nrelease_marker_path = Path(__file__).parent / \".release_marker\"\n\n\ndef _get_parent_context_uuid_str() -> str | None:\n with suppress(KeyError):\n uuid_str = os.environ[\"MELTANO_PARENT_CONTEXT_UUID\"]\n try:\n return str(uuid.UUID(uuid_str))\n except ValueError:\n warn(\n f\"Invalid telemetry parent environment context UUID {uuid_str!r} \"\n \"from $MELTANO_PARENT_CONTEXT_UUID - Meltano will continue as if \"\n \"$MELTANO_PARENT_CONTEXT_UUID had not been set\"\n )\n return None\n\n\nclass EnvironmentContext(SelfDescribingJson):\n \"\"\"Environment context for the Snowplow tracker.\"\"\"\n\n ci_markers = {\"GITHUB_ACTIONS\", 
\"CI\"}\n notable_flag_env_vars = {\"CODESPACES\", *ci_markers}\n\n @classmethod\n def _notable_flag_env_vars(cls) -> Iterable[str]:\n for env_var_name in cls.notable_flag_env_vars:\n with suppress(KeyError): # Skip unset env vars\n env_var_value = os.environ[env_var_name]\n try:\n yield env_var_name, strtobool(env_var_value)\n except ValueError:\n yield env_var_name, None\n\n def __init__(self):\n \"\"\"Initialize the environment context.\"\"\"\n super().__init__(\n EnvironmentContextSchema.url,\n {\n \"context_uuid\": str(uuid.uuid4()),\n \"parent_context_uuid\": _get_parent_context_uuid_str(),\n \"meltano_version\": meltano.__version__,\n \"is_dev_build\": not release_marker_path.exists(),\n \"is_ci_environment\": any(\n get_boolean_env_var(marker) for marker in self.ci_markers\n ),\n \"notable_flag_env_vars\": dict(self._notable_flag_env_vars()),\n \"python_version\": platform.python_version(),\n \"python_implementation\": platform.python_implementation(),\n **self.system_info,\n **self.process_info,\n },\n )\n\n @cached_property\n def system_info(self) -> dict[str, Any]:\n \"\"\"Get system information.\n\n Returns:\n A dictionary containing system information.\n \"\"\"\n try:\n freedesktop_data = platform.freedesktop_os_release()\n except Exception:\n freedesktop_data = defaultdict(type(None))\n\n return {\n \"system_name\": platform.system() or None,\n \"system_release\": platform.release() or None,\n \"system_version\": platform.version() or None,\n \"machine\": platform.machine() or None,\n \"windows_edition\": platform.win32_edition()\n if safe_hasattr(platform, \"win32_edition\")\n else None,\n \"freedesktop_id\": freedesktop_data[\"ID\"],\n \"freedesktop_id_like\": freedesktop_data.get(\"ID_LIKE\", None),\n \"freedesktop_version_id\": freedesktop_data.get(\"VERSION_ID\", None),\n }\n\n @staticmethod\n def get_process_timestamp(process: psutil.Process) -> str:\n \"\"\"Obtain the creation time of a process as a ISO 8601 timestamp.\n\n Args:\n process: The process to obtain the creation time from.\n\n Returns:\n A ISO 8601 timestamp formatted string.\n \"\"\"\n return f\"{datetime.utcfromtimestamp(process.create_time()).isoformat()}Z\"\n\n @cached_property\n def process_info(self) -> dict[str, Any]:\n \"\"\"Obtain the process information for the current process.\n\n Returns:\n A dictionary containing the process information. 
Such as the hashed\n process name, pid, core counts, etc\n \"\"\"\n process = psutil.Process()\n with process.oneshot():\n return {\n \"num_cpu_cores\": psutil.cpu_count(),\n \"num_cpu_cores_available\": self.num_available_cores,\n \"process_hierarchy\": [\n {\n \"process_name_hash\": hash_sha256(proc.name()),\n \"process_creation_timestamp\": self.get_process_timestamp(proc),\n }\n for proc in (process, *process.parents())\n ],\n }\n\n @cached_property\n def num_available_cores(self) -> int:\n \"\"\"Obtain the number of available CPU cores.\n\n Uses `sched_getaffinity` where available, otherwise falls back to\n `cpu_count`.\n\n Returns:\n The number of available CPU cores.\n \"\"\"\n if safe_hasattr(os, \"sched_getaffinity\"):\n return len(os.sched_getaffinity(0))\n return os.cpu_count()\n\n\nenvironment_context = EnvironmentContext()\n", "path": "src/meltano/core/tracking/contexts/environment.py"}, {"content": "\"\"\"Meltano Iglu schemas metadata & utilities.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nDEFAULT_VENDOR = \"com.meltano\"\n\n\n@dataclass\nclass IgluSchema:\n \"\"\"Dataclass to store the name, version, vendor, and URL for an Iglu schema.\"\"\"\n\n name: str\n version: str\n vendor: str = DEFAULT_VENDOR\n\n @property\n def url(self) -> str:\n \"\"\"Construct an iglu schema URL.\n\n Returns:\n The URL to the schema.\n \"\"\"\n return f\"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}\"\n\n\nCliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\nCliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\nBlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\nEnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-2-0\")\nExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\nExitEventSchema = IgluSchema(\"exit_event\", \"1-0-1\")\nPluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\nProjectContextSchema = IgluSchema(\"project_context\", \"1-1-0\")\nTelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "path": "src/meltano/core/tracking/schemas.py"}]}
| 2,261 | 974 |
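A standalone sketch of the kind of env-var flag detection the row above adds; the helper name is made up for illustration, while the actual patch routes this through `meltano.core.utils.strtobool` and `get_boolean_env_var` as the diff shows:

```python
import os
from typing import Optional


def flag_from_env(name: str) -> Optional[bool]:
    """Illustrative only: parse a boolean-ish env var, None when unset or unrecognised."""
    raw = os.environ.get(name)
    if raw is None:
        return None
    lowered = raw.strip().lower()
    if lowered in {"1", "t", "true", "y", "yes", "on"}:
        return True
    if lowered in {"0", "f", "false", "n", "no", "off"}:
        return False
    return None


# GitHub Codespaces sets CODESPACES=true in its default environment, so the flag
# surfaces alongside the existing CI markers in the telemetry context.
print({name: flag_from_env(name) for name in ("CI", "GITHUB_ACTIONS", "CODESPACES")})
```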
gh_patches_debug_13303 | rasdani/github-patches | git_diff | tornadoweb__tornado-2395 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError when closing IOLoop
This started showing up in Dask's test suite recently:
```python-traceback
distributed/utils_test.py:144: in pristine_loop
loop.close(all_fds=True)
../../Software/anaconda/envs/test-environment/lib/python3.6/site-packages/tornado/platform/asyncio.py:223: in close
super(AsyncIOLoop, self).close(all_fds=all_fds)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tornado.platform.asyncio.AsyncIOLoop object at 0x7f5751d46eb8>, all_fds = True
def close(self, all_fds=False):
self.closing = True
for fd in list(self.handlers):
fileobj, handler_func = self.handlers[fd]
self.remove_handler(fd)
if all_fds:
self.close_fd(fileobj)
self.asyncio_loop.close()
> del IOLoop._ioloop_for_asyncio[self.asyncio_loop]
E KeyError: <_UnixSelectorEventLoop running=False closed=True debug=False>
```
This is likely due to some change in upstream dependencies. It looks like Tornado hasn't had a release during the time when this arose, so it's likely something else. Still, I thought I'd raise the issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tornado/platform/asyncio.py`
Content:
```
1 """Bridges between the `asyncio` module and Tornado IOLoop.
2
3 .. versionadded:: 3.2
4
5 This module integrates Tornado with the ``asyncio`` module introduced
6 in Python 3.4. This makes it possible to combine the two libraries on
7 the same event loop.
8
9 .. deprecated:: 5.0
10
11 While the code in this module is still used, it is now enabled
12 automatically when `asyncio` is available, so applications should
13 no longer need to refer to this module directly.
14
15 .. note::
16
17 Tornado requires the `~asyncio.AbstractEventLoop.add_reader` family of
18 methods, so it is not compatible with the `~asyncio.ProactorEventLoop` on
19 Windows. Use the `~asyncio.SelectorEventLoop` instead.
20 """
21
22 from __future__ import absolute_import, division, print_function
23 import functools
24
25 from tornado.gen import convert_yielded
26 from tornado.ioloop import IOLoop
27 from tornado import stack_context
28
29 import asyncio
30
31
32 class BaseAsyncIOLoop(IOLoop):
33 def initialize(self, asyncio_loop, **kwargs):
34 self.asyncio_loop = asyncio_loop
35 # Maps fd to (fileobj, handler function) pair (as in IOLoop.add_handler)
36 self.handlers = {}
37 # Set of fds listening for reads/writes
38 self.readers = set()
39 self.writers = set()
40 self.closing = False
41 # If an asyncio loop was closed through an asyncio interface
42 # instead of IOLoop.close(), we'd never hear about it and may
43 # have left a dangling reference in our map. In case an
44 # application (or, more likely, a test suite) creates and
45 # destroys a lot of event loops in this way, check here to
46 # ensure that we don't have a lot of dead loops building up in
47 # the map.
48 #
49 # TODO(bdarnell): consider making self.asyncio_loop a weakref
50 # for AsyncIOMainLoop and make _ioloop_for_asyncio a
51 # WeakKeyDictionary.
52 for loop in list(IOLoop._ioloop_for_asyncio):
53 if loop.is_closed():
54 del IOLoop._ioloop_for_asyncio[loop]
55 IOLoop._ioloop_for_asyncio[asyncio_loop] = self
56 super(BaseAsyncIOLoop, self).initialize(**kwargs)
57
58 def close(self, all_fds=False):
59 self.closing = True
60 for fd in list(self.handlers):
61 fileobj, handler_func = self.handlers[fd]
62 self.remove_handler(fd)
63 if all_fds:
64 self.close_fd(fileobj)
65 self.asyncio_loop.close()
66 del IOLoop._ioloop_for_asyncio[self.asyncio_loop]
67
68 def add_handler(self, fd, handler, events):
69 fd, fileobj = self.split_fd(fd)
70 if fd in self.handlers:
71 raise ValueError("fd %s added twice" % fd)
72 self.handlers[fd] = (fileobj, stack_context.wrap(handler))
73 if events & IOLoop.READ:
74 self.asyncio_loop.add_reader(
75 fd, self._handle_events, fd, IOLoop.READ)
76 self.readers.add(fd)
77 if events & IOLoop.WRITE:
78 self.asyncio_loop.add_writer(
79 fd, self._handle_events, fd, IOLoop.WRITE)
80 self.writers.add(fd)
81
82 def update_handler(self, fd, events):
83 fd, fileobj = self.split_fd(fd)
84 if events & IOLoop.READ:
85 if fd not in self.readers:
86 self.asyncio_loop.add_reader(
87 fd, self._handle_events, fd, IOLoop.READ)
88 self.readers.add(fd)
89 else:
90 if fd in self.readers:
91 self.asyncio_loop.remove_reader(fd)
92 self.readers.remove(fd)
93 if events & IOLoop.WRITE:
94 if fd not in self.writers:
95 self.asyncio_loop.add_writer(
96 fd, self._handle_events, fd, IOLoop.WRITE)
97 self.writers.add(fd)
98 else:
99 if fd in self.writers:
100 self.asyncio_loop.remove_writer(fd)
101 self.writers.remove(fd)
102
103 def remove_handler(self, fd):
104 fd, fileobj = self.split_fd(fd)
105 if fd not in self.handlers:
106 return
107 if fd in self.readers:
108 self.asyncio_loop.remove_reader(fd)
109 self.readers.remove(fd)
110 if fd in self.writers:
111 self.asyncio_loop.remove_writer(fd)
112 self.writers.remove(fd)
113 del self.handlers[fd]
114
115 def _handle_events(self, fd, events):
116 fileobj, handler_func = self.handlers[fd]
117 handler_func(fileobj, events)
118
119 def start(self):
120 try:
121 old_loop = asyncio.get_event_loop()
122 except (RuntimeError, AssertionError):
123 old_loop = None
124 try:
125 self._setup_logging()
126 asyncio.set_event_loop(self.asyncio_loop)
127 self.asyncio_loop.run_forever()
128 finally:
129 asyncio.set_event_loop(old_loop)
130
131 def stop(self):
132 self.asyncio_loop.stop()
133
134 def call_at(self, when, callback, *args, **kwargs):
135 # asyncio.call_at supports *args but not **kwargs, so bind them here.
136 # We do not synchronize self.time and asyncio_loop.time, so
137 # convert from absolute to relative.
138 return self.asyncio_loop.call_later(
139 max(0, when - self.time()), self._run_callback,
140 functools.partial(stack_context.wrap(callback), *args, **kwargs))
141
142 def remove_timeout(self, timeout):
143 timeout.cancel()
144
145 def add_callback(self, callback, *args, **kwargs):
146 try:
147 self.asyncio_loop.call_soon_threadsafe(
148 self._run_callback,
149 functools.partial(stack_context.wrap(callback), *args, **kwargs))
150 except RuntimeError:
151 # "Event loop is closed". Swallow the exception for
152 # consistency with PollIOLoop (and logical consistency
153 # with the fact that we can't guarantee that an
154 # add_callback that completes without error will
155 # eventually execute).
156 pass
157
158 add_callback_from_signal = add_callback
159
160 def run_in_executor(self, executor, func, *args):
161 return self.asyncio_loop.run_in_executor(executor, func, *args)
162
163 def set_default_executor(self, executor):
164 return self.asyncio_loop.set_default_executor(executor)
165
166
167 class AsyncIOMainLoop(BaseAsyncIOLoop):
168 """``AsyncIOMainLoop`` creates an `.IOLoop` that corresponds to the
169 current ``asyncio`` event loop (i.e. the one returned by
170 ``asyncio.get_event_loop()``).
171
172 .. deprecated:: 5.0
173
174 Now used automatically when appropriate; it is no longer necessary
175 to refer to this class directly.
176
177 .. versionchanged:: 5.0
178
179 Closing an `AsyncIOMainLoop` now closes the underlying asyncio loop.
180 """
181 def initialize(self, **kwargs):
182 super(AsyncIOMainLoop, self).initialize(asyncio.get_event_loop(), **kwargs)
183
184 def make_current(self):
185 # AsyncIOMainLoop already refers to the current asyncio loop so
186 # nothing to do here.
187 pass
188
189
190 class AsyncIOLoop(BaseAsyncIOLoop):
191 """``AsyncIOLoop`` is an `.IOLoop` that runs on an ``asyncio`` event loop.
192 This class follows the usual Tornado semantics for creating new
193 ``IOLoops``; these loops are not necessarily related to the
194 ``asyncio`` default event loop.
195
196 Each ``AsyncIOLoop`` creates a new ``asyncio.EventLoop``; this object
197 can be accessed with the ``asyncio_loop`` attribute.
198
199 .. versionchanged:: 5.0
200
201 When an ``AsyncIOLoop`` becomes the current `.IOLoop`, it also sets
202 the current `asyncio` event loop.
203
204 .. deprecated:: 5.0
205
206 Now used automatically when appropriate; it is no longer necessary
207 to refer to this class directly.
208 """
209 def initialize(self, **kwargs):
210 self.is_current = False
211 loop = asyncio.new_event_loop()
212 try:
213 super(AsyncIOLoop, self).initialize(loop, **kwargs)
214 except Exception:
215 # If initialize() does not succeed (taking ownership of the loop),
216 # we have to close it.
217 loop.close()
218 raise
219
220 def close(self, all_fds=False):
221 if self.is_current:
222 self.clear_current()
223 super(AsyncIOLoop, self).close(all_fds=all_fds)
224
225 def make_current(self):
226 if not self.is_current:
227 try:
228 self.old_asyncio = asyncio.get_event_loop()
229 except (RuntimeError, AssertionError):
230 self.old_asyncio = None
231 self.is_current = True
232 asyncio.set_event_loop(self.asyncio_loop)
233
234 def _clear_current_hook(self):
235 if self.is_current:
236 asyncio.set_event_loop(self.old_asyncio)
237 self.is_current = False
238
239
240 def to_tornado_future(asyncio_future):
241 """Convert an `asyncio.Future` to a `tornado.concurrent.Future`.
242
243 .. versionadded:: 4.1
244
245 .. deprecated:: 5.0
246 Tornado ``Futures`` have been merged with `asyncio.Future`,
247 so this method is now a no-op.
248 """
249 return asyncio_future
250
251
252 def to_asyncio_future(tornado_future):
253 """Convert a Tornado yieldable object to an `asyncio.Future`.
254
255 .. versionadded:: 4.1
256
257 .. versionchanged:: 4.3
258 Now accepts any yieldable object, not just
259 `tornado.concurrent.Future`.
260
261 .. deprecated:: 5.0
262 Tornado ``Futures`` have been merged with `asyncio.Future`,
263 so this method is now equivalent to `tornado.gen.convert_yielded`.
264 """
265 return convert_yielded(tornado_future)
266
267
268 class AnyThreadEventLoopPolicy(asyncio.DefaultEventLoopPolicy):
269 """Event loop policy that allows loop creation on any thread.
270
271 The default `asyncio` event loop policy only automatically creates
272 event loops in the main threads. Other threads must create event
273 loops explicitly or `asyncio.get_event_loop` (and therefore
274 `.IOLoop.current`) will fail. Installing this policy allows event
275 loops to be created automatically on any thread, matching the
276 behavior of Tornado versions prior to 5.0 (or 5.0 on Python 2).
277
278 Usage::
279
280 asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy())
281
282 .. versionadded:: 5.0
283
284 """
285 def get_event_loop(self):
286 try:
287 return super().get_event_loop()
288 except (RuntimeError, AssertionError):
289 # This was an AssertionError in python 3.4.2 (which ships with debian jessie)
290 # and changed to a RuntimeError in 3.4.3.
291 # "There is no current event loop in thread %r"
292 loop = self.new_event_loop()
293 self.set_event_loop(loop)
294 return loop
295
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tornado/platform/asyncio.py b/tornado/platform/asyncio.py
--- a/tornado/platform/asyncio.py
+++ b/tornado/platform/asyncio.py
@@ -62,8 +62,13 @@
self.remove_handler(fd)
if all_fds:
self.close_fd(fileobj)
- self.asyncio_loop.close()
+ # Remove the mapping before closing the asyncio loop. If this
+ # happened in the other order, we could race against another
+ # initialize() call which would see the closed asyncio loop,
+ # assume it was closed from the asyncio side, and do this
+ # cleanup for us, leading to a KeyError.
del IOLoop._ioloop_for_asyncio[self.asyncio_loop]
+ self.asyncio_loop.close()
def add_handler(self, fd, handler, events):
fd, fileobj = self.split_fd(fd)
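A minimal sketch of the interleaving the added comment describes, with a plain dict and a stand-in loop class in place of `IOLoop._ioloop_for_asyncio` and the asyncio loop (both names below are illustrative):

```python
_ioloop_for_asyncio = {}


class FakeLoop:
    """Stand-in for an asyncio event loop; only tracks closed state."""

    def __init__(self):
        self._closed = False

    def is_closed(self):
        return self._closed

    def close(self):
        self._closed = True


def initialize(new_loop):
    # Mirrors the purge in BaseAsyncIOLoop.initialize(): drop entries whose
    # asyncio loop is already closed, then register the new loop.
    for loop in list(_ioloop_for_asyncio):
        if loop.is_closed():
            del _ioloop_for_asyncio[loop]
    _ioloop_for_asyncio[new_loop] = object()


loop_a, loop_b = FakeLoop(), FakeLoop()
initialize(loop_a)

loop_a.close()        # old close() ordering: the asyncio loop is closed first
initialize(loop_b)    # another IOLoop comes up and purges loop_a's "dead" entry

try:
    del _ioloop_for_asyncio[loop_a]   # second half of the old close()
except KeyError:
    print("KeyError: entry already purged")  # the failure in the reported traceback
```

Deleting the mapping before closing the loop, as the patch does, removes the window in which another `initialize()` can observe a closed loop that is still registered.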
|
{"golden_diff": "diff --git a/tornado/platform/asyncio.py b/tornado/platform/asyncio.py\n--- a/tornado/platform/asyncio.py\n+++ b/tornado/platform/asyncio.py\n@@ -62,8 +62,13 @@\n self.remove_handler(fd)\n if all_fds:\n self.close_fd(fileobj)\n- self.asyncio_loop.close()\n+ # Remove the mapping before closing the asyncio loop. If this\n+ # happened in the other order, we could race against another\n+ # initialize() call which would see the closed asyncio loop,\n+ # assume it was closed from the asyncio side, and do this\n+ # cleanup for us, leading to a KeyError.\n del IOLoop._ioloop_for_asyncio[self.asyncio_loop]\n+ self.asyncio_loop.close()\n \n def add_handler(self, fd, handler, events):\n fd, fileobj = self.split_fd(fd)\n", "issue": "KeyError when closing IOLoop\nThis started showing up in Dask's test suite recently:\r\n\r\n```python-traceback\r\ndistributed/utils_test.py:144: in pristine_loop\r\n loop.close(all_fds=True)\r\n../../Software/anaconda/envs/test-environment/lib/python3.6/site-packages/tornado/platform/asyncio.py:223: in close\r\n super(AsyncIOLoop, self).close(all_fds=all_fds)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <tornado.platform.asyncio.AsyncIOLoop object at 0x7f5751d46eb8>, all_fds = True\r\n\r\n def close(self, all_fds=False):\r\n self.closing = True\r\n for fd in list(self.handlers):\r\n fileobj, handler_func = self.handlers[fd]\r\n self.remove_handler(fd)\r\n if all_fds:\r\n self.close_fd(fileobj)\r\n self.asyncio_loop.close()\r\n> del IOLoop._ioloop_for_asyncio[self.asyncio_loop]\r\nE KeyError: <_UnixSelectorEventLoop running=False closed=True debug=False>\r\n```\r\n\r\nThis is likely due to some change in upstream dependencies. It looks like Tornado hasn't had a release during the time when this arose, so it's likely something else. Still, I thought I'd raise the issue.\n", "before_files": [{"content": "\"\"\"Bridges between the `asyncio` module and Tornado IOLoop.\n\n.. versionadded:: 3.2\n\nThis module integrates Tornado with the ``asyncio`` module introduced\nin Python 3.4. This makes it possible to combine the two libraries on\nthe same event loop.\n\n.. deprecated:: 5.0\n\n While the code in this module is still used, it is now enabled\n automatically when `asyncio` is available, so applications should\n no longer need to refer to this module directly.\n\n.. note::\n\n Tornado requires the `~asyncio.AbstractEventLoop.add_reader` family of\n methods, so it is not compatible with the `~asyncio.ProactorEventLoop` on\n Windows. Use the `~asyncio.SelectorEventLoop` instead.\n\"\"\"\n\nfrom __future__ import absolute_import, division, print_function\nimport functools\n\nfrom tornado.gen import convert_yielded\nfrom tornado.ioloop import IOLoop\nfrom tornado import stack_context\n\nimport asyncio\n\n\nclass BaseAsyncIOLoop(IOLoop):\n def initialize(self, asyncio_loop, **kwargs):\n self.asyncio_loop = asyncio_loop\n # Maps fd to (fileobj, handler function) pair (as in IOLoop.add_handler)\n self.handlers = {}\n # Set of fds listening for reads/writes\n self.readers = set()\n self.writers = set()\n self.closing = False\n # If an asyncio loop was closed through an asyncio interface\n # instead of IOLoop.close(), we'd never hear about it and may\n # have left a dangling reference in our map. 
In case an\n # application (or, more likely, a test suite) creates and\n # destroys a lot of event loops in this way, check here to\n # ensure that we don't have a lot of dead loops building up in\n # the map.\n #\n # TODO(bdarnell): consider making self.asyncio_loop a weakref\n # for AsyncIOMainLoop and make _ioloop_for_asyncio a\n # WeakKeyDictionary.\n for loop in list(IOLoop._ioloop_for_asyncio):\n if loop.is_closed():\n del IOLoop._ioloop_for_asyncio[loop]\n IOLoop._ioloop_for_asyncio[asyncio_loop] = self\n super(BaseAsyncIOLoop, self).initialize(**kwargs)\n\n def close(self, all_fds=False):\n self.closing = True\n for fd in list(self.handlers):\n fileobj, handler_func = self.handlers[fd]\n self.remove_handler(fd)\n if all_fds:\n self.close_fd(fileobj)\n self.asyncio_loop.close()\n del IOLoop._ioloop_for_asyncio[self.asyncio_loop]\n\n def add_handler(self, fd, handler, events):\n fd, fileobj = self.split_fd(fd)\n if fd in self.handlers:\n raise ValueError(\"fd %s added twice\" % fd)\n self.handlers[fd] = (fileobj, stack_context.wrap(handler))\n if events & IOLoop.READ:\n self.asyncio_loop.add_reader(\n fd, self._handle_events, fd, IOLoop.READ)\n self.readers.add(fd)\n if events & IOLoop.WRITE:\n self.asyncio_loop.add_writer(\n fd, self._handle_events, fd, IOLoop.WRITE)\n self.writers.add(fd)\n\n def update_handler(self, fd, events):\n fd, fileobj = self.split_fd(fd)\n if events & IOLoop.READ:\n if fd not in self.readers:\n self.asyncio_loop.add_reader(\n fd, self._handle_events, fd, IOLoop.READ)\n self.readers.add(fd)\n else:\n if fd in self.readers:\n self.asyncio_loop.remove_reader(fd)\n self.readers.remove(fd)\n if events & IOLoop.WRITE:\n if fd not in self.writers:\n self.asyncio_loop.add_writer(\n fd, self._handle_events, fd, IOLoop.WRITE)\n self.writers.add(fd)\n else:\n if fd in self.writers:\n self.asyncio_loop.remove_writer(fd)\n self.writers.remove(fd)\n\n def remove_handler(self, fd):\n fd, fileobj = self.split_fd(fd)\n if fd not in self.handlers:\n return\n if fd in self.readers:\n self.asyncio_loop.remove_reader(fd)\n self.readers.remove(fd)\n if fd in self.writers:\n self.asyncio_loop.remove_writer(fd)\n self.writers.remove(fd)\n del self.handlers[fd]\n\n def _handle_events(self, fd, events):\n fileobj, handler_func = self.handlers[fd]\n handler_func(fileobj, events)\n\n def start(self):\n try:\n old_loop = asyncio.get_event_loop()\n except (RuntimeError, AssertionError):\n old_loop = None\n try:\n self._setup_logging()\n asyncio.set_event_loop(self.asyncio_loop)\n self.asyncio_loop.run_forever()\n finally:\n asyncio.set_event_loop(old_loop)\n\n def stop(self):\n self.asyncio_loop.stop()\n\n def call_at(self, when, callback, *args, **kwargs):\n # asyncio.call_at supports *args but not **kwargs, so bind them here.\n # We do not synchronize self.time and asyncio_loop.time, so\n # convert from absolute to relative.\n return self.asyncio_loop.call_later(\n max(0, when - self.time()), self._run_callback,\n functools.partial(stack_context.wrap(callback), *args, **kwargs))\n\n def remove_timeout(self, timeout):\n timeout.cancel()\n\n def add_callback(self, callback, *args, **kwargs):\n try:\n self.asyncio_loop.call_soon_threadsafe(\n self._run_callback,\n functools.partial(stack_context.wrap(callback), *args, **kwargs))\n except RuntimeError:\n # \"Event loop is closed\". 
Swallow the exception for\n # consistency with PollIOLoop (and logical consistency\n # with the fact that we can't guarantee that an\n # add_callback that completes without error will\n # eventually execute).\n pass\n\n add_callback_from_signal = add_callback\n\n def run_in_executor(self, executor, func, *args):\n return self.asyncio_loop.run_in_executor(executor, func, *args)\n\n def set_default_executor(self, executor):\n return self.asyncio_loop.set_default_executor(executor)\n\n\nclass AsyncIOMainLoop(BaseAsyncIOLoop):\n \"\"\"``AsyncIOMainLoop`` creates an `.IOLoop` that corresponds to the\n current ``asyncio`` event loop (i.e. the one returned by\n ``asyncio.get_event_loop()``).\n\n .. deprecated:: 5.0\n\n Now used automatically when appropriate; it is no longer necessary\n to refer to this class directly.\n\n .. versionchanged:: 5.0\n\n Closing an `AsyncIOMainLoop` now closes the underlying asyncio loop.\n \"\"\"\n def initialize(self, **kwargs):\n super(AsyncIOMainLoop, self).initialize(asyncio.get_event_loop(), **kwargs)\n\n def make_current(self):\n # AsyncIOMainLoop already refers to the current asyncio loop so\n # nothing to do here.\n pass\n\n\nclass AsyncIOLoop(BaseAsyncIOLoop):\n \"\"\"``AsyncIOLoop`` is an `.IOLoop` that runs on an ``asyncio`` event loop.\n This class follows the usual Tornado semantics for creating new\n ``IOLoops``; these loops are not necessarily related to the\n ``asyncio`` default event loop.\n\n Each ``AsyncIOLoop`` creates a new ``asyncio.EventLoop``; this object\n can be accessed with the ``asyncio_loop`` attribute.\n\n .. versionchanged:: 5.0\n\n When an ``AsyncIOLoop`` becomes the current `.IOLoop`, it also sets\n the current `asyncio` event loop.\n\n .. deprecated:: 5.0\n\n Now used automatically when appropriate; it is no longer necessary\n to refer to this class directly.\n \"\"\"\n def initialize(self, **kwargs):\n self.is_current = False\n loop = asyncio.new_event_loop()\n try:\n super(AsyncIOLoop, self).initialize(loop, **kwargs)\n except Exception:\n # If initialize() does not succeed (taking ownership of the loop),\n # we have to close it.\n loop.close()\n raise\n\n def close(self, all_fds=False):\n if self.is_current:\n self.clear_current()\n super(AsyncIOLoop, self).close(all_fds=all_fds)\n\n def make_current(self):\n if not self.is_current:\n try:\n self.old_asyncio = asyncio.get_event_loop()\n except (RuntimeError, AssertionError):\n self.old_asyncio = None\n self.is_current = True\n asyncio.set_event_loop(self.asyncio_loop)\n\n def _clear_current_hook(self):\n if self.is_current:\n asyncio.set_event_loop(self.old_asyncio)\n self.is_current = False\n\n\ndef to_tornado_future(asyncio_future):\n \"\"\"Convert an `asyncio.Future` to a `tornado.concurrent.Future`.\n\n .. versionadded:: 4.1\n\n .. deprecated:: 5.0\n Tornado ``Futures`` have been merged with `asyncio.Future`,\n so this method is now a no-op.\n \"\"\"\n return asyncio_future\n\n\ndef to_asyncio_future(tornado_future):\n \"\"\"Convert a Tornado yieldable object to an `asyncio.Future`.\n\n .. versionadded:: 4.1\n\n .. versionchanged:: 4.3\n Now accepts any yieldable object, not just\n `tornado.concurrent.Future`.\n\n .. 
deprecated:: 5.0\n Tornado ``Futures`` have been merged with `asyncio.Future`,\n so this method is now equivalent to `tornado.gen.convert_yielded`.\n \"\"\"\n return convert_yielded(tornado_future)\n\n\nclass AnyThreadEventLoopPolicy(asyncio.DefaultEventLoopPolicy):\n \"\"\"Event loop policy that allows loop creation on any thread.\n\n The default `asyncio` event loop policy only automatically creates\n event loops in the main threads. Other threads must create event\n loops explicitly or `asyncio.get_event_loop` (and therefore\n `.IOLoop.current`) will fail. Installing this policy allows event\n loops to be created automatically on any thread, matching the\n behavior of Tornado versions prior to 5.0 (or 5.0 on Python 2).\n\n Usage::\n\n asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy())\n\n .. versionadded:: 5.0\n\n \"\"\"\n def get_event_loop(self):\n try:\n return super().get_event_loop()\n except (RuntimeError, AssertionError):\n # This was an AssertionError in python 3.4.2 (which ships with debian jessie)\n # and changed to a RuntimeError in 3.4.3.\n # \"There is no current event loop in thread %r\"\n loop = self.new_event_loop()\n self.set_event_loop(loop)\n return loop\n", "path": "tornado/platform/asyncio.py"}], "after_files": [{"content": "\"\"\"Bridges between the `asyncio` module and Tornado IOLoop.\n\n.. versionadded:: 3.2\n\nThis module integrates Tornado with the ``asyncio`` module introduced\nin Python 3.4. This makes it possible to combine the two libraries on\nthe same event loop.\n\n.. deprecated:: 5.0\n\n While the code in this module is still used, it is now enabled\n automatically when `asyncio` is available, so applications should\n no longer need to refer to this module directly.\n\n.. note::\n\n Tornado requires the `~asyncio.AbstractEventLoop.add_reader` family of\n methods, so it is not compatible with the `~asyncio.ProactorEventLoop` on\n Windows. Use the `~asyncio.SelectorEventLoop` instead.\n\"\"\"\n\nfrom __future__ import absolute_import, division, print_function\nimport functools\n\nfrom tornado.gen import convert_yielded\nfrom tornado.ioloop import IOLoop\nfrom tornado import stack_context\n\nimport asyncio\n\n\nclass BaseAsyncIOLoop(IOLoop):\n def initialize(self, asyncio_loop, **kwargs):\n self.asyncio_loop = asyncio_loop\n # Maps fd to (fileobj, handler function) pair (as in IOLoop.add_handler)\n self.handlers = {}\n # Set of fds listening for reads/writes\n self.readers = set()\n self.writers = set()\n self.closing = False\n # If an asyncio loop was closed through an asyncio interface\n # instead of IOLoop.close(), we'd never hear about it and may\n # have left a dangling reference in our map. In case an\n # application (or, more likely, a test suite) creates and\n # destroys a lot of event loops in this way, check here to\n # ensure that we don't have a lot of dead loops building up in\n # the map.\n #\n # TODO(bdarnell): consider making self.asyncio_loop a weakref\n # for AsyncIOMainLoop and make _ioloop_for_asyncio a\n # WeakKeyDictionary.\n for loop in list(IOLoop._ioloop_for_asyncio):\n if loop.is_closed():\n del IOLoop._ioloop_for_asyncio[loop]\n IOLoop._ioloop_for_asyncio[asyncio_loop] = self\n super(BaseAsyncIOLoop, self).initialize(**kwargs)\n\n def close(self, all_fds=False):\n self.closing = True\n for fd in list(self.handlers):\n fileobj, handler_func = self.handlers[fd]\n self.remove_handler(fd)\n if all_fds:\n self.close_fd(fileobj)\n # Remove the mapping before closing the asyncio loop. 
If this\n # happened in the other order, we could race against another\n # initialize() call which would see the closed asyncio loop,\n # assume it was closed from the asyncio side, and do this\n # cleanup for us, leading to a KeyError.\n del IOLoop._ioloop_for_asyncio[self.asyncio_loop]\n self.asyncio_loop.close()\n\n def add_handler(self, fd, handler, events):\n fd, fileobj = self.split_fd(fd)\n if fd in self.handlers:\n raise ValueError(\"fd %s added twice\" % fd)\n self.handlers[fd] = (fileobj, stack_context.wrap(handler))\n if events & IOLoop.READ:\n self.asyncio_loop.add_reader(\n fd, self._handle_events, fd, IOLoop.READ)\n self.readers.add(fd)\n if events & IOLoop.WRITE:\n self.asyncio_loop.add_writer(\n fd, self._handle_events, fd, IOLoop.WRITE)\n self.writers.add(fd)\n\n def update_handler(self, fd, events):\n fd, fileobj = self.split_fd(fd)\n if events & IOLoop.READ:\n if fd not in self.readers:\n self.asyncio_loop.add_reader(\n fd, self._handle_events, fd, IOLoop.READ)\n self.readers.add(fd)\n else:\n if fd in self.readers:\n self.asyncio_loop.remove_reader(fd)\n self.readers.remove(fd)\n if events & IOLoop.WRITE:\n if fd not in self.writers:\n self.asyncio_loop.add_writer(\n fd, self._handle_events, fd, IOLoop.WRITE)\n self.writers.add(fd)\n else:\n if fd in self.writers:\n self.asyncio_loop.remove_writer(fd)\n self.writers.remove(fd)\n\n def remove_handler(self, fd):\n fd, fileobj = self.split_fd(fd)\n if fd not in self.handlers:\n return\n if fd in self.readers:\n self.asyncio_loop.remove_reader(fd)\n self.readers.remove(fd)\n if fd in self.writers:\n self.asyncio_loop.remove_writer(fd)\n self.writers.remove(fd)\n del self.handlers[fd]\n\n def _handle_events(self, fd, events):\n fileobj, handler_func = self.handlers[fd]\n handler_func(fileobj, events)\n\n def start(self):\n try:\n old_loop = asyncio.get_event_loop()\n except (RuntimeError, AssertionError):\n old_loop = None\n try:\n self._setup_logging()\n asyncio.set_event_loop(self.asyncio_loop)\n self.asyncio_loop.run_forever()\n finally:\n asyncio.set_event_loop(old_loop)\n\n def stop(self):\n self.asyncio_loop.stop()\n\n def call_at(self, when, callback, *args, **kwargs):\n # asyncio.call_at supports *args but not **kwargs, so bind them here.\n # We do not synchronize self.time and asyncio_loop.time, so\n # convert from absolute to relative.\n return self.asyncio_loop.call_later(\n max(0, when - self.time()), self._run_callback,\n functools.partial(stack_context.wrap(callback), *args, **kwargs))\n\n def remove_timeout(self, timeout):\n timeout.cancel()\n\n def add_callback(self, callback, *args, **kwargs):\n try:\n self.asyncio_loop.call_soon_threadsafe(\n self._run_callback,\n functools.partial(stack_context.wrap(callback), *args, **kwargs))\n except RuntimeError:\n # \"Event loop is closed\". Swallow the exception for\n # consistency with PollIOLoop (and logical consistency\n # with the fact that we can't guarantee that an\n # add_callback that completes without error will\n # eventually execute).\n pass\n\n add_callback_from_signal = add_callback\n\n def run_in_executor(self, executor, func, *args):\n return self.asyncio_loop.run_in_executor(executor, func, *args)\n\n def set_default_executor(self, executor):\n return self.asyncio_loop.set_default_executor(executor)\n\n\nclass AsyncIOMainLoop(BaseAsyncIOLoop):\n \"\"\"``AsyncIOMainLoop`` creates an `.IOLoop` that corresponds to the\n current ``asyncio`` event loop (i.e. the one returned by\n ``asyncio.get_event_loop()``).\n\n .. 
deprecated:: 5.0\n\n Now used automatically when appropriate; it is no longer necessary\n to refer to this class directly.\n\n .. versionchanged:: 5.0\n\n Closing an `AsyncIOMainLoop` now closes the underlying asyncio loop.\n \"\"\"\n def initialize(self, **kwargs):\n super(AsyncIOMainLoop, self).initialize(asyncio.get_event_loop(), **kwargs)\n\n def make_current(self):\n # AsyncIOMainLoop already refers to the current asyncio loop so\n # nothing to do here.\n pass\n\n\nclass AsyncIOLoop(BaseAsyncIOLoop):\n \"\"\"``AsyncIOLoop`` is an `.IOLoop` that runs on an ``asyncio`` event loop.\n This class follows the usual Tornado semantics for creating new\n ``IOLoops``; these loops are not necessarily related to the\n ``asyncio`` default event loop.\n\n Each ``AsyncIOLoop`` creates a new ``asyncio.EventLoop``; this object\n can be accessed with the ``asyncio_loop`` attribute.\n\n .. versionchanged:: 5.0\n\n When an ``AsyncIOLoop`` becomes the current `.IOLoop`, it also sets\n the current `asyncio` event loop.\n\n .. deprecated:: 5.0\n\n Now used automatically when appropriate; it is no longer necessary\n to refer to this class directly.\n \"\"\"\n def initialize(self, **kwargs):\n self.is_current = False\n loop = asyncio.new_event_loop()\n try:\n super(AsyncIOLoop, self).initialize(loop, **kwargs)\n except Exception:\n # If initialize() does not succeed (taking ownership of the loop),\n # we have to close it.\n loop.close()\n raise\n\n def close(self, all_fds=False):\n if self.is_current:\n self.clear_current()\n super(AsyncIOLoop, self).close(all_fds=all_fds)\n\n def make_current(self):\n if not self.is_current:\n try:\n self.old_asyncio = asyncio.get_event_loop()\n except (RuntimeError, AssertionError):\n self.old_asyncio = None\n self.is_current = True\n asyncio.set_event_loop(self.asyncio_loop)\n\n def _clear_current_hook(self):\n if self.is_current:\n asyncio.set_event_loop(self.old_asyncio)\n self.is_current = False\n\n\ndef to_tornado_future(asyncio_future):\n \"\"\"Convert an `asyncio.Future` to a `tornado.concurrent.Future`.\n\n .. versionadded:: 4.1\n\n .. deprecated:: 5.0\n Tornado ``Futures`` have been merged with `asyncio.Future`,\n so this method is now a no-op.\n \"\"\"\n return asyncio_future\n\n\ndef to_asyncio_future(tornado_future):\n \"\"\"Convert a Tornado yieldable object to an `asyncio.Future`.\n\n .. versionadded:: 4.1\n\n .. versionchanged:: 4.3\n Now accepts any yieldable object, not just\n `tornado.concurrent.Future`.\n\n .. deprecated:: 5.0\n Tornado ``Futures`` have been merged with `asyncio.Future`,\n so this method is now equivalent to `tornado.gen.convert_yielded`.\n \"\"\"\n return convert_yielded(tornado_future)\n\n\nclass AnyThreadEventLoopPolicy(asyncio.DefaultEventLoopPolicy):\n \"\"\"Event loop policy that allows loop creation on any thread.\n\n The default `asyncio` event loop policy only automatically creates\n event loops in the main threads. Other threads must create event\n loops explicitly or `asyncio.get_event_loop` (and therefore\n `.IOLoop.current`) will fail. Installing this policy allows event\n loops to be created automatically on any thread, matching the\n behavior of Tornado versions prior to 5.0 (or 5.0 on Python 2).\n\n Usage::\n\n asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy())\n\n .. 
versionadded:: 5.0\n\n \"\"\"\n def get_event_loop(self):\n try:\n return super().get_event_loop()\n except (RuntimeError, AssertionError):\n # This was an AssertionError in python 3.4.2 (which ships with debian jessie)\n # and changed to a RuntimeError in 3.4.3.\n # \"There is no current event loop in thread %r\"\n loop = self.new_event_loop()\n self.set_event_loop(loop)\n return loop\n", "path": "tornado/platform/asyncio.py"}]}
| 3,813 | 200 |
gh_patches_debug_252 | rasdani/github-patches | git_diff | google-deepmind__dm-haiku-48 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jax version upgrade (AttributeError: CallPrimitive)
Using the current version of master 66f9c69 of Haiku, I am getting the following error on Colab
```
AttributeError Traceback (most recent call last)
<ipython-input-3-3a9e6adbfff5> in <module>()
----> 1 import haiku as hk
/usr/local/lib/python3.6/dist-packages/haiku/__init__.py in <module>()
17
18 from haiku import data_structures
---> 19 from haiku import experimental
20 from haiku import initializers
21 from haiku import nets
/usr/local/lib/python3.6/dist-packages/haiku/experimental.py in <module>()
22 from haiku._src.base import custom_getter
23 from haiku._src.base import ParamContext
---> 24 from haiku._src.dot import to_dot
25 from haiku._src.lift import lift
26 from haiku._src.module import profiler_name_scopes
/usr/local/lib/python3.6/dist-packages/haiku/_src/dot.py in <module>()
23
24 from haiku._src import data_structures
---> 25 from haiku._src import module
26 from haiku._src import utils
27 import jax
/usr/local/lib/python3.6/dist-packages/haiku/_src/module.py in <module>()
26 from haiku._src import base
27 from haiku._src import data_structures
---> 28 from haiku._src import named_call
29 from haiku._src import utils
30 import jax.numpy as jnp
/usr/local/lib/python3.6/dist-packages/haiku/_src/named_call.py in <module>()
29
30 # Registering named call as a primitive
---> 31 named_call_p = core.CallPrimitive('named_call')
32 # named_call is implemented as a plain core.call and only diverges
33 # under compilation (see named_call_translation_rule)
AttributeError: module 'jax.core' has no attribute 'CallPrimitive'
```
I believe that's because Haiku now requires `jax>=0.1.71`, while the version by default on Colab is `jax==0.1.69`. `CallPrimitive` was introduced in jax 0.1.71.
https://github.com/google/jax/blob/1545a29e6d69a7b3c7fdf9a49b38004759a9fbfa/jax/core.py#L1106-L1115
To reproduce (inside a Colab):
```python
import jax
print(jax.__version__) # 0.1.69
!pip install -q git+https://github.com/deepmind/dm-haiku
import haiku as hk
```
Run `!pip install -q --upgrade jax jaxlib` first in your Colab to fix this issue.
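If you want the failure to surface with a clearer message before upgrading, a small guard in front of the import works too. This is only an illustrative sketch (the `0.1.71` threshold is the release that added `core.CallPrimitive`, as linked above), not something Haiku ships:

```python
# Illustrative guard, not part of Haiku: refuse to import haiku when the
# installed jax predates core.CallPrimitive (added in jax 0.1.71).
import re

import jax


def _version_tuple(version):
    # "0.1.69" -> (0, 1, 69); a piece like "71rc1" contributes its leading digits.
    pieces = []
    for piece in version.split("."):
        match = re.match(r"\d+", piece)
        pieces.append(int(match.group()) if match else 0)
    return tuple(pieces)


if _version_tuple(jax.__version__) < (0, 1, 71):
    raise RuntimeError(
        "haiku needs jax>=0.1.71, found %s; run "
        "`pip install -q --upgrade jax jaxlib` and restart the runtime."
        % jax.__version__
    )

import haiku as hk  # safe to import once the version check passes
```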
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Lint as: python3
2 # Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 # ==============================================================================
16 """Setup for pip package."""
17
18 from setuptools import find_namespace_packages
19 from setuptools import setup
20
21
22 def _get_version():
23 with open('haiku/__init__.py') as fp:
24 for line in fp:
25 if line.startswith('__version__'):
26 g = {}
27 exec(line, g) # pylint: disable=exec-used
28 return g['__version__']
29 raise ValueError('`__version__` not defined in `haiku/__init__.py`')
30
31
32 def _parse_requirements(requirements_txt_path):
33 with open(requirements_txt_path) as fp:
34 return fp.read().splitlines()
35
36
37 _VERSION = _get_version()
38
39 EXTRA_PACKAGES = {
40 'jax': ['jax>=0.1.55'],
41 'jaxlib': ['jaxlib>=0.1.37'],
42 }
43
44 setup(
45 name='dm-haiku',
46 version=_VERSION,
47 url='https://github.com/deepmind/dm-haiku',
48 license='Apache 2.0',
49 author='DeepMind',
50 description='Haiku is a library for building neural networks in JAX.',
51 long_description=open('README.md').read(),
52 long_description_content_type='text/markdown',
53 author_email='[email protected]',
54 # Contained modules and scripts.
55 packages=find_namespace_packages(exclude=['*_test.py']),
56 install_requires=_parse_requirements('requirements.txt'),
57 extras_require=EXTRA_PACKAGES,
58 tests_require=_parse_requirements('requirements-test.txt'),
59 requires_python='>=3.6',
60 include_package_data=True,
61 zip_safe=False,
62 # PyPI package information.
63 classifiers=[
64 'Development Status :: 4 - Beta',
65 'Intended Audience :: Developers',
66 'Intended Audience :: Education',
67 'Intended Audience :: Science/Research',
68 'License :: OSI Approved :: Apache Software License',
69 'Programming Language :: Python :: 3',
70 'Programming Language :: Python :: 3.6',
71 'Programming Language :: Python :: 3.7',
72 'Topic :: Scientific/Engineering :: Mathematics',
73 'Topic :: Software Development :: Libraries :: Python Modules',
74 'Topic :: Software Development :: Libraries',
75 ],
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -37,8 +37,8 @@
_VERSION = _get_version()
EXTRA_PACKAGES = {
- 'jax': ['jax>=0.1.55'],
- 'jaxlib': ['jaxlib>=0.1.37'],
+ 'jax': ['jax>=0.1.71'],
+ 'jaxlib': ['jaxlib>=0.1.49'],
}
setup(
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -37,8 +37,8 @@\n _VERSION = _get_version()\n \n EXTRA_PACKAGES = {\n- 'jax': ['jax>=0.1.55'],\n- 'jaxlib': ['jaxlib>=0.1.37'],\n+ 'jax': ['jax>=0.1.71'],\n+ 'jaxlib': ['jaxlib>=0.1.49'],\n }\n \n setup(\n", "issue": "Jax version upgrade (AttributeError: CallPrimitive)\nUsing the current version of master 66f9c69 of Haiku, I am getting the following error on Colab\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-3-3a9e6adbfff5> in <module>()\r\n----> 1 import haiku as hk\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/__init__.py in <module>()\r\n 17 \r\n 18 from haiku import data_structures\r\n---> 19 from haiku import experimental\r\n 20 from haiku import initializers\r\n 21 from haiku import nets\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/experimental.py in <module>()\r\n 22 from haiku._src.base import custom_getter\r\n 23 from haiku._src.base import ParamContext\r\n---> 24 from haiku._src.dot import to_dot\r\n 25 from haiku._src.lift import lift\r\n 26 from haiku._src.module import profiler_name_scopes\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/dot.py in <module>()\r\n 23 \r\n 24 from haiku._src import data_structures\r\n---> 25 from haiku._src import module\r\n 26 from haiku._src import utils\r\n 27 import jax\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/module.py in <module>()\r\n 26 from haiku._src import base\r\n 27 from haiku._src import data_structures\r\n---> 28 from haiku._src import named_call\r\n 29 from haiku._src import utils\r\n 30 import jax.numpy as jnp\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/named_call.py in <module>()\r\n 29 \r\n 30 # Registering named call as a primitive\r\n---> 31 named_call_p = core.CallPrimitive('named_call')\r\n 32 # named_call is implemented as a plain core.call and only diverges\r\n 33 # under compilation (see named_call_translation_rule)\r\n\r\nAttributeError: module 'jax.core' has no attribute 'CallPrimitive'\r\n```\r\n\r\nI believe that's because Haiku now requires `jax>=0.1.71`, while the version by default on Colab is `jax==0.1.69`. `CallPrimitive` was introduced in jax 0.1.71.\r\nhttps://github.com/google/jax/blob/1545a29e6d69a7b3c7fdf9a49b38004759a9fbfa/jax/core.py#L1106-L1115\r\n\r\nTo reproduce (inside a Colab):\r\n```python\r\nimport jax\r\nprint(jax.__version__) # 0.1.69\r\n\r\n!pip install -q git+https://github.com/deepmind/dm-haiku\r\nimport haiku as hk\r\n```\r\n\r\nRun `!pip install -q --upgrade jax jaxlib` first in your Colab to fix this issue.\n", "before_files": [{"content": "# Lint as: python3\n# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Setup for pip package.\"\"\"\n\nfrom setuptools import find_namespace_packages\nfrom setuptools import setup\n\n\ndef _get_version():\n with open('haiku/__init__.py') as fp:\n for line in fp:\n if line.startswith('__version__'):\n g = {}\n exec(line, g) # pylint: disable=exec-used\n return g['__version__']\n raise ValueError('`__version__` not defined in `haiku/__init__.py`')\n\n\ndef _parse_requirements(requirements_txt_path):\n with open(requirements_txt_path) as fp:\n return fp.read().splitlines()\n\n\n_VERSION = _get_version()\n\nEXTRA_PACKAGES = {\n 'jax': ['jax>=0.1.55'],\n 'jaxlib': ['jaxlib>=0.1.37'],\n}\n\nsetup(\n name='dm-haiku',\n version=_VERSION,\n url='https://github.com/deepmind/dm-haiku',\n license='Apache 2.0',\n author='DeepMind',\n description='Haiku is a library for building neural networks in JAX.',\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n author_email='[email protected]',\n # Contained modules and scripts.\n packages=find_namespace_packages(exclude=['*_test.py']),\n install_requires=_parse_requirements('requirements.txt'),\n extras_require=EXTRA_PACKAGES,\n tests_require=_parse_requirements('requirements-test.txt'),\n requires_python='>=3.6',\n include_package_data=True,\n zip_safe=False,\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Lint as: python3\n# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Setup for pip package.\"\"\"\n\nfrom setuptools import find_namespace_packages\nfrom setuptools import setup\n\n\ndef _get_version():\n with open('haiku/__init__.py') as fp:\n for line in fp:\n if line.startswith('__version__'):\n g = {}\n exec(line, g) # pylint: disable=exec-used\n return g['__version__']\n raise ValueError('`__version__` not defined in `haiku/__init__.py`')\n\n\ndef _parse_requirements(requirements_txt_path):\n with open(requirements_txt_path) as fp:\n return fp.read().splitlines()\n\n\n_VERSION = _get_version()\n\nEXTRA_PACKAGES = {\n 'jax': ['jax>=0.1.71'],\n 'jaxlib': ['jaxlib>=0.1.49'],\n}\n\nsetup(\n name='dm-haiku',\n version=_VERSION,\n url='https://github.com/deepmind/dm-haiku',\n license='Apache 2.0',\n author='DeepMind',\n description='Haiku is a library for building neural networks in JAX.',\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n author_email='[email protected]',\n # Contained modules and scripts.\n packages=find_namespace_packages(exclude=['*_test.py']),\n install_requires=_parse_requirements('requirements.txt'),\n extras_require=EXTRA_PACKAGES,\n tests_require=_parse_requirements('requirements-test.txt'),\n requires_python='>=3.6',\n include_package_data=True,\n zip_safe=False,\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}]}
| 1,727 | 113 |
gh_patches_debug_3065 | rasdani/github-patches | git_diff | coala__coala-3348 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong doc string syntax in coalib.bearlib.aspects.Root
The doc string of the `Root` aspectclass has a formatting issue at https://github.com/coala/coala/blob/master/coalib/bearlib/aspects/__init__.py#L61
You can see the wrongly rendered result at https://api.coala.io/en/latest/coalib.bearlib.aspects.html#module-coalib.bearlib.aspects
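The rendering breaks because that introductory sentence ends with a stray `>` instead of a colon and is not separated from the `>>>` example by a blank line, so docutils folds the doctest into the surrounding paragraph. The relevant part of the docstring only needs to read roughly like this (excerpt for illustration only):

```
If no settings are given, the defaults will be taken:

>>> LineLength('Python').tastes
{'max_line_length': 80}
```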
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/bearlib/aspects/__init__.py`
Content:
```
1 from .base import aspectbase
2 from .meta import aspectclass
3 from .taste import Taste, TasteError
4
5 __all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']
6
7
8 class Root(aspectbase, metaclass=aspectclass):
9 """
10 The root aspectclass.
11
12 Define sub-aspectclasses with class-bound ``.subaspect`` decorator.
13 Definition string is taken from doc-string of decorated class.
14 Remaining docs are taken from a nested ``docs`` class.
15 Tastes are defined as class attributes that are instances of
16 :class:`coalib.bearlib.aspectclasses.Taste`.
17
18 >>> @Root.subaspect
19 ... class Formatting:
20 ... \"""
21 ... A parent aspect for code formatting aspects...
22 ... \"""
23
24 We can now create subaspects like this:
25
26 >>> @Formatting.subaspect
27 ... class LineLength:
28 ... \"""
29 ... This aspect controls the length of a line...
30 ... \"""
31 ... class docs:
32 ... example = "..."
33 ... example_language = "..."
34 ... importance_reason = "..."
35 ... fix_suggestions = "..."
36 ...
37 ... max_line_length = Taste[int](
38 ... "Maximum length allowed for a line.",
39 ... (80, 90, 120), default=80)
40
41 The representation will show the full "path" to the leaf of the tree:
42
43 >>> Root.Formatting.LineLength
44 <aspectclass 'Root.Formatting.LineLength'>
45
46 We can see, which settings are availables:
47
48 >>> Formatting.tastes
49 {}
50 >>> LineLength.tastes
51 {'max_line_length': <....Taste[int] object at ...>}
52
53 And instantiate the aspect with the values, they will be automatically
54 converted:
55
56 >>> Formatting('Python')
57 <coalib.bearlib.aspects.Root.Formatting object at 0x...>
58 >>> LineLength('Python', max_line_length="100").tastes
59 {'max_line_length': 100}
60
61 If no settings are given, the defaults will be taken>
62 >>> LineLength('Python').tastes
63 {'max_line_length': 80}
64
65 Tastes can also be made available for only specific languages:
66
67 >>> from coalib.bearlib.languages import Language
68 >>> @Language
69 ... class GreaterTrumpScript:
70 ... pass
71
72 >>> @Formatting.subaspect
73 ... class Greatness:
74 ... \"""
75 ... This aspect controls the greatness of a file...
76 ... \"""
77 ...
78 ... min_greatness = Taste[int](
79 ... "Minimum greatness factor needed for a TrumpScript file. "
80 ... "This is fact.",
81 ... (1000000, 1000000000, 1000000000000), default=1000000,
82 ... languages=('GreaterTrumpScript' ,))
83
84 >>> Greatness.tastes
85 {'min_greatness': <....Taste[int] object at ...>}
86 >>> Greatness('GreaterTrumpScript').tastes
87 {'min_greatness': 1000000}
88 >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes
89 {'min_greatness': 1000000000000}
90
91 >>> Greatness('Python').tastes
92 {}
93
94 >>> Greatness('Python', min_greatness=1000000000)
95 ... # doctest: +NORMALIZE_WHITESPACE
96 Traceback (most recent call last):
97 ...
98 coalib.bearlib.aspects.taste.TasteError:
99 Root.Formatting.Greatness.min_greatness is not available ...
100
101 >>> Greatness('Python').min_greatness
102 ... # doctest: +NORMALIZE_WHITESPACE
103 Traceback (most recent call last):
104 ...
105 coalib.bearlib.aspects.taste.TasteError:
106 Root.Formatting.Greatness.min_greatness is not available ...
107 """
108 parent = None
109
110 _tastes = {}
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/coalib/bearlib/aspects/__init__.py b/coalib/bearlib/aspects/__init__.py
--- a/coalib/bearlib/aspects/__init__.py
+++ b/coalib/bearlib/aspects/__init__.py
@@ -58,7 +58,8 @@
>>> LineLength('Python', max_line_length="100").tastes
{'max_line_length': 100}
- If no settings are given, the defaults will be taken>
+ If no settings are given, the defaults will be taken:
+
>>> LineLength('Python').tastes
{'max_line_length': 80}
|
{"golden_diff": "diff --git a/coalib/bearlib/aspects/__init__.py b/coalib/bearlib/aspects/__init__.py\n--- a/coalib/bearlib/aspects/__init__.py\n+++ b/coalib/bearlib/aspects/__init__.py\n@@ -58,7 +58,8 @@\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n \n- If no settings are given, the defaults will be taken>\n+ If no settings are given, the defaults will be taken:\n+\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n", "issue": "Wrong doc string syntax in coalib.bearlib.aspects.Root\nThe doc string of the `Root` aspectclass has a formatting issue at https://github.com/coala/coala/blob/master/coalib/bearlib/aspects/__init__.py#L61\r\n\r\nYou can see the wrongly rendered result at https://api.coala.io/en/latest/coalib.bearlib.aspects.html#module-coalib.bearlib.aspects\n", "before_files": [{"content": "from .base import aspectbase\nfrom .meta import aspectclass\nfrom .taste import Taste, TasteError\n\n__all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']\n\n\nclass Root(aspectbase, metaclass=aspectclass):\n \"\"\"\n The root aspectclass.\n\n Define sub-aspectclasses with class-bound ``.subaspect`` decorator.\n Definition string is taken from doc-string of decorated class.\n Remaining docs are taken from a nested ``docs`` class.\n Tastes are defined as class attributes that are instances of\n :class:`coalib.bearlib.aspectclasses.Taste`.\n\n >>> @Root.subaspect\n ... class Formatting:\n ... \\\"\"\"\n ... A parent aspect for code formatting aspects...\n ... \\\"\"\"\n\n We can now create subaspects like this:\n\n >>> @Formatting.subaspect\n ... class LineLength:\n ... \\\"\"\"\n ... This aspect controls the length of a line...\n ... \\\"\"\"\n ... class docs:\n ... example = \"...\"\n ... example_language = \"...\"\n ... importance_reason = \"...\"\n ... fix_suggestions = \"...\"\n ...\n ... max_line_length = Taste[int](\n ... \"Maximum length allowed for a line.\",\n ... (80, 90, 120), default=80)\n\n The representation will show the full \"path\" to the leaf of the tree:\n\n >>> Root.Formatting.LineLength\n <aspectclass 'Root.Formatting.LineLength'>\n\n We can see, which settings are availables:\n\n >>> Formatting.tastes\n {}\n >>> LineLength.tastes\n {'max_line_length': <....Taste[int] object at ...>}\n\n And instantiate the aspect with the values, they will be automatically\n converted:\n\n >>> Formatting('Python')\n <coalib.bearlib.aspects.Root.Formatting object at 0x...>\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n\n If no settings are given, the defaults will be taken>\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n\n Tastes can also be made available for only specific languages:\n\n >>> from coalib.bearlib.languages import Language\n >>> @Language\n ... class GreaterTrumpScript:\n ... pass\n\n >>> @Formatting.subaspect\n ... class Greatness:\n ... \\\"\"\"\n ... This aspect controls the greatness of a file...\n ... \\\"\"\"\n ...\n ... min_greatness = Taste[int](\n ... \"Minimum greatness factor needed for a TrumpScript file. \"\n ... \"This is fact.\",\n ... (1000000, 1000000000, 1000000000000), default=1000000,\n ... 
languages=('GreaterTrumpScript' ,))\n\n >>> Greatness.tastes\n {'min_greatness': <....Taste[int] object at ...>}\n >>> Greatness('GreaterTrumpScript').tastes\n {'min_greatness': 1000000}\n >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes\n {'min_greatness': 1000000000000}\n\n >>> Greatness('Python').tastes\n {}\n\n >>> Greatness('Python', min_greatness=1000000000)\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n\n >>> Greatness('Python').min_greatness\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n \"\"\"\n parent = None\n\n _tastes = {}\n", "path": "coalib/bearlib/aspects/__init__.py"}], "after_files": [{"content": "from .base import aspectbase\nfrom .meta import aspectclass\nfrom .taste import Taste, TasteError\n\n__all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']\n\n\nclass Root(aspectbase, metaclass=aspectclass):\n \"\"\"\n The root aspectclass.\n\n Define sub-aspectclasses with class-bound ``.subaspect`` decorator.\n Definition string is taken from doc-string of decorated class.\n Remaining docs are taken from a nested ``docs`` class.\n Tastes are defined as class attributes that are instances of\n :class:`coalib.bearlib.aspectclasses.Taste`.\n\n >>> @Root.subaspect\n ... class Formatting:\n ... \\\"\"\"\n ... A parent aspect for code formatting aspects...\n ... \\\"\"\"\n\n We can now create subaspects like this:\n\n >>> @Formatting.subaspect\n ... class LineLength:\n ... \\\"\"\"\n ... This aspect controls the length of a line...\n ... \\\"\"\"\n ... class docs:\n ... example = \"...\"\n ... example_language = \"...\"\n ... importance_reason = \"...\"\n ... fix_suggestions = \"...\"\n ...\n ... max_line_length = Taste[int](\n ... \"Maximum length allowed for a line.\",\n ... (80, 90, 120), default=80)\n\n The representation will show the full \"path\" to the leaf of the tree:\n\n >>> Root.Formatting.LineLength\n <aspectclass 'Root.Formatting.LineLength'>\n\n We can see, which settings are availables:\n\n >>> Formatting.tastes\n {}\n >>> LineLength.tastes\n {'max_line_length': <....Taste[int] object at ...>}\n\n And instantiate the aspect with the values, they will be automatically\n converted:\n\n >>> Formatting('Python')\n <coalib.bearlib.aspects.Root.Formatting object at 0x...>\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n\n If no settings are given, the defaults will be taken:\n\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n\n Tastes can also be made available for only specific languages:\n\n >>> from coalib.bearlib.languages import Language\n >>> @Language\n ... class GreaterTrumpScript:\n ... pass\n\n >>> @Formatting.subaspect\n ... class Greatness:\n ... \\\"\"\"\n ... This aspect controls the greatness of a file...\n ... \\\"\"\"\n ...\n ... min_greatness = Taste[int](\n ... \"Minimum greatness factor needed for a TrumpScript file. \"\n ... \"This is fact.\",\n ... (1000000, 1000000000, 1000000000000), default=1000000,\n ... 
languages=('GreaterTrumpScript' ,))\n\n >>> Greatness.tastes\n {'min_greatness': <....Taste[int] object at ...>}\n >>> Greatness('GreaterTrumpScript').tastes\n {'min_greatness': 1000000}\n >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes\n {'min_greatness': 1000000000000}\n\n >>> Greatness('Python').tastes\n {}\n\n >>> Greatness('Python', min_greatness=1000000000)\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n\n >>> Greatness('Python').min_greatness\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n \"\"\"\n parent = None\n\n _tastes = {}\n", "path": "coalib/bearlib/aspects/__init__.py"}]}
| 1,518 | 151 |
gh_patches_debug_2693 | rasdani/github-patches | git_diff | ray-project__ray-7665 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Python] jsonschema included twice in setup.py requires list.
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
`jsonschema` is included twice in the Python package [setup.py `requires` list](https://github.com/ray-project/ray/blob/master/python/setup.py#L176-L183). This is causing the usage of the Ray Python library within Bazel to fail during the analysis phase due to label duplication in the generated `py_library` target's `'deps'`:
```
ERROR: .../external/requirements_py3_pypi__ray_0_9_0_dev0/BUILD:6:1: Label '@requirements_py3_pypi__jsonschema_3_2_0//:pkg' is duplicated in the 'deps' attribute of rule 'pkg'
```
This bug was introduced in the [cluster json schema validator PR](https://github.com/ray-project/ray/pull/7261/files#diff-8cf6167d58ce775a08acafcfe6f40966).
*Ray version and other system information (Python version, TensorFlow version, OS):*
Ray master commit 90b553ed058a546e036374cd0919e00604892514 (most recent commit as of this issue filing)
### Reproduction (REQUIRED)
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html).
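
A quick way to confirm the duplication without going through Bazel is to count the entries of the `requires` list. This is only an illustrative check run against an abridged copy of the list from `python/setup.py`, not code that ships with Ray:

```python
# Illustrative check (not part of Ray): report any requirement name listed
# more than once in setup.py's `requires` list.
from collections import Counter

requires = [  # abridged copy of python/setup.py's list
    "numpy >= 1.16", "filelock", "jsonschema", "funcsigs", "click",
    "colorama", "packaging", "pytest", "pyyaml", "jsonschema",
    "redis>=3.3.2",
]

duplicates = [entry for entry, count in Counter(requires).items() if count > 1]
print(duplicates)  # -> ['jsonschema']
```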
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/setup.py`
Content:
```
1 from itertools import chain
2 import os
3 import re
4 import shutil
5 import subprocess
6 import sys
7
8 from setuptools import setup, find_packages, Distribution
9 import setuptools.command.build_ext as _build_ext
10
11 # Ideally, we could include these files by putting them in a
12 # MANIFEST.in or using the package_data argument to setup, but the
13 # MANIFEST.in gets applied at the very beginning when setup.py runs
14 # before these files have been created, so we have to move the files
15 # manually.
16
17 # NOTE: The lists below must be kept in sync with ray/BUILD.bazel.
18 ray_files = [
19 "ray/core/src/ray/thirdparty/redis/src/redis-server",
20 "ray/core/src/ray/gcs/redis_module/libray_redis_module.so",
21 "ray/core/src/plasma/plasma_store_server",
22 "ray/_raylet.so",
23 "ray/core/src/ray/raylet/raylet_monitor",
24 "ray/core/src/ray/gcs/gcs_server",
25 "ray/core/src/ray/raylet/raylet",
26 "ray/dashboard/dashboard.py",
27 "ray/streaming/_streaming.so",
28 ]
29
30 build_java = os.getenv("RAY_INSTALL_JAVA") == "1"
31 if build_java:
32 ray_files.append("ray/jars/ray_dist.jar")
33
34 # These are the directories where automatically generated Python protobuf
35 # bindings are created.
36 generated_python_directories = [
37 "ray/core/generated",
38 "ray/streaming/generated",
39 ]
40
41 optional_ray_files = []
42
43 ray_autoscaler_files = [
44 "ray/autoscaler/aws/example-full.yaml",
45 "ray/autoscaler/azure/example-full.yaml",
46 "ray/autoscaler/gcp/example-full.yaml",
47 "ray/autoscaler/local/example-full.yaml",
48 "ray/autoscaler/kubernetes/example-full.yaml",
49 "ray/autoscaler/kubernetes/kubectl-rsync.sh",
50 "ray/autoscaler/ray-schema.json"
51 ]
52
53 ray_project_files = [
54 "ray/projects/schema.json", "ray/projects/templates/cluster_template.yaml",
55 "ray/projects/templates/project_template.yaml",
56 "ray/projects/templates/requirements.txt"
57 ]
58
59 ray_dashboard_files = [
60 os.path.join(dirpath, filename)
61 for dirpath, dirnames, filenames in os.walk("ray/dashboard/client/build")
62 for filename in filenames
63 ]
64
65 optional_ray_files += ray_autoscaler_files
66 optional_ray_files += ray_project_files
67 optional_ray_files += ray_dashboard_files
68
69 if "RAY_USE_NEW_GCS" in os.environ and os.environ["RAY_USE_NEW_GCS"] == "on":
70 ray_files += [
71 "ray/core/src/credis/build/src/libmember.so",
72 "ray/core/src/credis/build/src/libmaster.so",
73 "ray/core/src/credis/redis/src/redis-server"
74 ]
75
76 extras = {
77 "debug": [],
78 "dashboard": [],
79 "serve": ["uvicorn", "pygments", "werkzeug", "flask", "pandas", "blist"],
80 "tune": ["tabulate", "tensorboardX"]
81 }
82
83 extras["rllib"] = extras["tune"] + [
84 "atari_py",
85 "dm_tree",
86 "gym[atari]",
87 "lz4",
88 "opencv-python-headless",
89 "pyyaml",
90 "scipy",
91 ]
92
93 extras["streaming"] = ["msgpack >= 0.6.2"]
94
95 extras["all"] = list(set(chain.from_iterable(extras.values())))
96
97
98 class build_ext(_build_ext.build_ext):
99 def run(self):
100 # Note: We are passing in sys.executable so that we use the same
101 # version of Python to build packages inside the build.sh script. Note
102 # that certain flags will not be passed along such as --user or sudo.
103 # TODO(rkn): Fix this.
104 command = ["../build.sh", "-p", sys.executable]
105 if build_java:
106 # Also build binaries for Java if the above env variable exists.
107 command += ["-l", "python,java"]
108 subprocess.check_call(command)
109
110 # We also need to install pickle5 along with Ray, so make sure that the
111 # relevant non-Python pickle5 files get copied.
112 pickle5_files = self.walk_directory("./ray/pickle5_files/pickle5")
113
114 thirdparty_files = self.walk_directory("./ray/thirdparty_files")
115
116 files_to_include = ray_files + pickle5_files + thirdparty_files
117
118 # Copy over the autogenerated protobuf Python bindings.
119 for directory in generated_python_directories:
120 for filename in os.listdir(directory):
121 if filename[-3:] == ".py":
122 files_to_include.append(os.path.join(directory, filename))
123
124 for filename in files_to_include:
125 self.move_file(filename)
126
127 # Try to copy over the optional files.
128 for filename in optional_ray_files:
129 try:
130 self.move_file(filename)
131 except Exception:
132 print("Failed to copy optional file {}. This is ok."
133 .format(filename))
134
135 def walk_directory(self, directory):
136 file_list = []
137 for (root, dirs, filenames) in os.walk(directory):
138 for name in filenames:
139 file_list.append(os.path.join(root, name))
140 return file_list
141
142 def move_file(self, filename):
143 # TODO(rkn): This feels very brittle. It may not handle all cases. See
144 # https://github.com/apache/arrow/blob/master/python/setup.py for an
145 # example.
146 source = filename
147 destination = os.path.join(self.build_lib, filename)
148 # Create the target directory if it doesn't already exist.
149 parent_directory = os.path.dirname(destination)
150 if not os.path.exists(parent_directory):
151 os.makedirs(parent_directory)
152 if not os.path.exists(destination):
153 print("Copying {} to {}.".format(source, destination))
154 shutil.copy(source, destination, follow_symlinks=True)
155
156
157 class BinaryDistribution(Distribution):
158 def has_ext_modules(self):
159 return True
160
161
162 def find_version(*filepath):
163 # Extract version information from filepath
164 here = os.path.abspath(os.path.dirname(__file__))
165 with open(os.path.join(here, *filepath)) as fp:
166 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
167 fp.read(), re.M)
168 if version_match:
169 return version_match.group(1)
170 raise RuntimeError("Unable to find version string.")
171
172
173 requires = [
174 "numpy >= 1.16",
175 "filelock",
176 "jsonschema",
177 "funcsigs",
178 "click",
179 "colorama",
180 "packaging",
181 "pytest",
182 "pyyaml",
183 "jsonschema",
184 "redis>=3.3.2",
185 # NOTE: Don't upgrade the version of six! Doing so causes installation
186 # problems. See https://github.com/ray-project/ray/issues/4169.
187 "six >= 1.0.0",
188 "faulthandler;python_version<'3.3'",
189 "protobuf >= 3.8.0",
190 "cloudpickle",
191 "py-spy >= 0.2.0",
192 "aiohttp",
193 "google",
194 "grpcio"
195 ]
196
197 setup(
198 name="ray",
199 version=find_version("ray", "__init__.py"),
200 author="Ray Team",
201 author_email="[email protected]",
202 description=("A system for parallel and distributed Python that unifies "
203 "the ML ecosystem."),
204 long_description=open("../README.rst").read(),
205 url="https://github.com/ray-project/ray",
206 keywords=("ray distributed parallel machine-learning "
207 "reinforcement-learning deep-learning python"),
208 packages=find_packages(),
209 cmdclass={"build_ext": build_ext},
210 # The BinaryDistribution argument triggers build_ext.
211 distclass=BinaryDistribution,
212 install_requires=requires,
213 setup_requires=["cython >= 0.29"],
214 extras_require=extras,
215 entry_points={
216 "console_scripts": [
217 "ray=ray.scripts.scripts:main",
218 "rllib=ray.rllib.scripts:cli [rllib]", "tune=ray.tune.scripts:cli"
219 ]
220 },
221 include_package_data=True,
222 zip_safe=False,
223 license="Apache 2.0")
224
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/setup.py b/python/setup.py
--- a/python/setup.py
+++ b/python/setup.py
@@ -180,7 +180,6 @@
"packaging",
"pytest",
"pyyaml",
- "jsonschema",
"redis>=3.3.2",
# NOTE: Don't upgrade the version of six! Doing so causes installation
# problems. See https://github.com/ray-project/ray/issues/4169.
|
{"golden_diff": "diff --git a/python/setup.py b/python/setup.py\n--- a/python/setup.py\n+++ b/python/setup.py\n@@ -180,7 +180,6 @@\n \"packaging\",\n \"pytest\",\n \"pyyaml\",\n- \"jsonschema\",\n \"redis>=3.3.2\",\n # NOTE: Don't upgrade the version of six! Doing so causes installation\n # problems. See https://github.com/ray-project/ray/issues/4169.\n", "issue": "[Python] jsonschema included twice in setup.py requires list.\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\n`jsonschema` is included twice in the Python package [setup.py `requires` list](https://github.com/ray-project/ray/blob/master/python/setup.py#L176-L183). This is causing the usage of the Ray Python library within Bazel to fail during the analysis phase due to label duplication in the generated `py_library` target's `'deps'`:\r\n\r\n```\r\nERROR: .../external/requirements_py3_pypi__ray_0_9_0_dev0/BUILD:6:1: Label '@requirements_py3_pypi__jsonschema_3_2_0//:pkg' is duplicated in the 'deps' attribute of rule 'pkg'\r\n```\r\n\r\nThis bug was introduced in the [cluster json schema validator PR](https://github.com/ray-project/ray/pull/7261/files#diff-8cf6167d58ce775a08acafcfe6f40966).\r\n\r\n*Ray version and other system information (Python version, TensorFlow version, OS):*\r\n\r\nRay master commit 90b553ed058a546e036374cd0919e00604892514 (most recent commit as of this issue filing)\r\n\r\n\r\n### Reproduction (REQUIRED)\r\n\r\n\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "from itertools import chain\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\n\n# NOTE: The lists below must be kept in sync with ray/BUILD.bazel.\nray_files = [\n \"ray/core/src/ray/thirdparty/redis/src/redis-server\",\n \"ray/core/src/ray/gcs/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store_server\",\n \"ray/_raylet.so\",\n \"ray/core/src/ray/raylet/raylet_monitor\",\n \"ray/core/src/ray/gcs/gcs_server\",\n \"ray/core/src/ray/raylet/raylet\",\n \"ray/dashboard/dashboard.py\",\n \"ray/streaming/_streaming.so\",\n]\n\nbuild_java = os.getenv(\"RAY_INSTALL_JAVA\") == \"1\"\nif build_java:\n ray_files.append(\"ray/jars/ray_dist.jar\")\n\n# These are the directories where automatically generated Python protobuf\n# bindings are created.\ngenerated_python_directories = [\n \"ray/core/generated\",\n \"ray/streaming/generated\",\n]\n\noptional_ray_files = []\n\nray_autoscaler_files = [\n \"ray/autoscaler/aws/example-full.yaml\",\n \"ray/autoscaler/azure/example-full.yaml\",\n \"ray/autoscaler/gcp/example-full.yaml\",\n \"ray/autoscaler/local/example-full.yaml\",\n \"ray/autoscaler/kubernetes/example-full.yaml\",\n \"ray/autoscaler/kubernetes/kubectl-rsync.sh\",\n \"ray/autoscaler/ray-schema.json\"\n]\n\nray_project_files = [\n \"ray/projects/schema.json\", \"ray/projects/templates/cluster_template.yaml\",\n 
\"ray/projects/templates/project_template.yaml\",\n \"ray/projects/templates/requirements.txt\"\n]\n\nray_dashboard_files = [\n os.path.join(dirpath, filename)\n for dirpath, dirnames, filenames in os.walk(\"ray/dashboard/client/build\")\n for filename in filenames\n]\n\noptional_ray_files += ray_autoscaler_files\noptional_ray_files += ray_project_files\noptional_ray_files += ray_dashboard_files\n\nif \"RAY_USE_NEW_GCS\" in os.environ and os.environ[\"RAY_USE_NEW_GCS\"] == \"on\":\n ray_files += [\n \"ray/core/src/credis/build/src/libmember.so\",\n \"ray/core/src/credis/build/src/libmaster.so\",\n \"ray/core/src/credis/redis/src/redis-server\"\n ]\n\nextras = {\n \"debug\": [],\n \"dashboard\": [],\n \"serve\": [\"uvicorn\", \"pygments\", \"werkzeug\", \"flask\", \"pandas\", \"blist\"],\n \"tune\": [\"tabulate\", \"tensorboardX\"]\n}\n\nextras[\"rllib\"] = extras[\"tune\"] + [\n \"atari_py\",\n \"dm_tree\",\n \"gym[atari]\",\n \"lz4\",\n \"opencv-python-headless\",\n \"pyyaml\",\n \"scipy\",\n]\n\nextras[\"streaming\"] = [\"msgpack >= 0.6.2\"]\n\nextras[\"all\"] = list(set(chain.from_iterable(extras.values())))\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build packages inside the build.sh script. Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n command = [\"../build.sh\", \"-p\", sys.executable]\n if build_java:\n # Also build binaries for Java if the above env variable exists.\n command += [\"-l\", \"python,java\"]\n subprocess.check_call(command)\n\n # We also need to install pickle5 along with Ray, so make sure that the\n # relevant non-Python pickle5 files get copied.\n pickle5_files = self.walk_directory(\"./ray/pickle5_files/pickle5\")\n\n thirdparty_files = self.walk_directory(\"./ray/thirdparty_files\")\n\n files_to_include = ray_files + pickle5_files + thirdparty_files\n\n # Copy over the autogenerated protobuf Python bindings.\n for directory in generated_python_directories:\n for filename in os.listdir(directory):\n if filename[-3:] == \".py\":\n files_to_include.append(os.path.join(directory, filename))\n\n for filename in files_to_include:\n self.move_file(filename)\n\n # Try to copy over the optional files.\n for filename in optional_ray_files:\n try:\n self.move_file(filename)\n except Exception:\n print(\"Failed to copy optional file {}. This is ok.\"\n .format(filename))\n\n def walk_directory(self, directory):\n file_list = []\n for (root, dirs, filenames) in os.walk(directory):\n for name in filenames:\n file_list.append(os.path.join(root, name))\n return file_list\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. 
See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n if not os.path.exists(destination):\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination, follow_symlinks=True)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\ndef find_version(*filepath):\n # Extract version information from filepath\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *filepath)) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n \"numpy >= 1.16\",\n \"filelock\",\n \"jsonschema\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"packaging\",\n \"pytest\",\n \"pyyaml\",\n \"jsonschema\",\n \"redis>=3.3.2\",\n # NOTE: Don't upgrade the version of six! Doing so causes installation\n # problems. See https://github.com/ray-project/ray/issues/4169.\n \"six >= 1.0.0\",\n \"faulthandler;python_version<'3.3'\",\n \"protobuf >= 3.8.0\",\n \"cloudpickle\",\n \"py-spy >= 0.2.0\",\n \"aiohttp\",\n \"google\",\n \"grpcio\"\n]\n\nsetup(\n name=\"ray\",\n version=find_version(\"ray\", \"__init__.py\"),\n author=\"Ray Team\",\n author_email=\"[email protected]\",\n description=(\"A system for parallel and distributed Python that unifies \"\n \"the ML ecosystem.\"),\n long_description=open(\"../README.rst\").read(),\n url=\"https://github.com/ray-project/ray\",\n keywords=(\"ray distributed parallel machine-learning \"\n \"reinforcement-learning deep-learning python\"),\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=requires,\n setup_requires=[\"cython >= 0.29\"],\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"ray=ray.scripts.scripts:main\",\n \"rllib=ray.rllib.scripts:cli [rllib]\", \"tune=ray.tune.scripts:cli\"\n ]\n },\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n", "path": "python/setup.py"}], "after_files": [{"content": "from itertools import chain\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\n\n# NOTE: The lists below must be kept in sync with ray/BUILD.bazel.\nray_files = [\n \"ray/core/src/ray/thirdparty/redis/src/redis-server\",\n \"ray/core/src/ray/gcs/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store_server\",\n \"ray/_raylet.so\",\n \"ray/core/src/ray/raylet/raylet_monitor\",\n \"ray/core/src/ray/gcs/gcs_server\",\n \"ray/core/src/ray/raylet/raylet\",\n \"ray/dashboard/dashboard.py\",\n \"ray/streaming/_streaming.so\",\n]\n\nbuild_java = os.getenv(\"RAY_INSTALL_JAVA\") == \"1\"\nif build_java:\n 
ray_files.append(\"ray/jars/ray_dist.jar\")\n\n# These are the directories where automatically generated Python protobuf\n# bindings are created.\ngenerated_python_directories = [\n \"ray/core/generated\",\n \"ray/streaming/generated\",\n]\n\noptional_ray_files = []\n\nray_autoscaler_files = [\n \"ray/autoscaler/aws/example-full.yaml\",\n \"ray/autoscaler/azure/example-full.yaml\",\n \"ray/autoscaler/gcp/example-full.yaml\",\n \"ray/autoscaler/local/example-full.yaml\",\n \"ray/autoscaler/kubernetes/example-full.yaml\",\n \"ray/autoscaler/kubernetes/kubectl-rsync.sh\",\n \"ray/autoscaler/ray-schema.json\"\n]\n\nray_project_files = [\n \"ray/projects/schema.json\", \"ray/projects/templates/cluster_template.yaml\",\n \"ray/projects/templates/project_template.yaml\",\n \"ray/projects/templates/requirements.txt\"\n]\n\nray_dashboard_files = [\n os.path.join(dirpath, filename)\n for dirpath, dirnames, filenames in os.walk(\"ray/dashboard/client/build\")\n for filename in filenames\n]\n\noptional_ray_files += ray_autoscaler_files\noptional_ray_files += ray_project_files\noptional_ray_files += ray_dashboard_files\n\nif \"RAY_USE_NEW_GCS\" in os.environ and os.environ[\"RAY_USE_NEW_GCS\"] == \"on\":\n ray_files += [\n \"ray/core/src/credis/build/src/libmember.so\",\n \"ray/core/src/credis/build/src/libmaster.so\",\n \"ray/core/src/credis/redis/src/redis-server\"\n ]\n\nextras = {\n \"debug\": [],\n \"dashboard\": [],\n \"serve\": [\"uvicorn\", \"pygments\", \"werkzeug\", \"flask\", \"pandas\", \"blist\"],\n \"tune\": [\"tabulate\", \"tensorboardX\"]\n}\n\nextras[\"rllib\"] = extras[\"tune\"] + [\n \"atari_py\",\n \"dm_tree\",\n \"gym[atari]\",\n \"lz4\",\n \"opencv-python-headless\",\n \"pyyaml\",\n \"scipy\",\n]\n\nextras[\"streaming\"] = [\"msgpack >= 0.6.2\"]\n\nextras[\"all\"] = list(set(chain.from_iterable(extras.values())))\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build packages inside the build.sh script. Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n command = [\"../build.sh\", \"-p\", sys.executable]\n if build_java:\n # Also build binaries for Java if the above env variable exists.\n command += [\"-l\", \"python,java\"]\n subprocess.check_call(command)\n\n # We also need to install pickle5 along with Ray, so make sure that the\n # relevant non-Python pickle5 files get copied.\n pickle5_files = self.walk_directory(\"./ray/pickle5_files/pickle5\")\n\n thirdparty_files = self.walk_directory(\"./ray/thirdparty_files\")\n\n files_to_include = ray_files + pickle5_files + thirdparty_files\n\n # Copy over the autogenerated protobuf Python bindings.\n for directory in generated_python_directories:\n for filename in os.listdir(directory):\n if filename[-3:] == \".py\":\n files_to_include.append(os.path.join(directory, filename))\n\n for filename in files_to_include:\n self.move_file(filename)\n\n # Try to copy over the optional files.\n for filename in optional_ray_files:\n try:\n self.move_file(filename)\n except Exception:\n print(\"Failed to copy optional file {}. This is ok.\"\n .format(filename))\n\n def walk_directory(self, directory):\n file_list = []\n for (root, dirs, filenames) in os.walk(directory):\n for name in filenames:\n file_list.append(os.path.join(root, name))\n return file_list\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. 
See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n if not os.path.exists(destination):\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination, follow_symlinks=True)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\ndef find_version(*filepath):\n # Extract version information from filepath\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *filepath)) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n \"numpy >= 1.16\",\n \"filelock\",\n \"jsonschema\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"packaging\",\n \"pytest\",\n \"pyyaml\",\n \"redis>=3.3.2\",\n # NOTE: Don't upgrade the version of six! Doing so causes installation\n # problems. See https://github.com/ray-project/ray/issues/4169.\n \"six >= 1.0.0\",\n \"faulthandler;python_version<'3.3'\",\n \"protobuf >= 3.8.0\",\n \"cloudpickle\",\n \"py-spy >= 0.2.0\",\n \"aiohttp\",\n \"google\",\n \"grpcio\"\n]\n\nsetup(\n name=\"ray\",\n version=find_version(\"ray\", \"__init__.py\"),\n author=\"Ray Team\",\n author_email=\"[email protected]\",\n description=(\"A system for parallel and distributed Python that unifies \"\n \"the ML ecosystem.\"),\n long_description=open(\"../README.rst\").read(),\n url=\"https://github.com/ray-project/ray\",\n keywords=(\"ray distributed parallel machine-learning \"\n \"reinforcement-learning deep-learning python\"),\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=requires,\n setup_requires=[\"cython >= 0.29\"],\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"ray=ray.scripts.scripts:main\",\n \"rllib=ray.rllib.scripts:cli [rllib]\", \"tune=ray.tune.scripts:cli\"\n ]\n },\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n", "path": "python/setup.py"}]}
| 2,976 | 106 |
gh_patches_debug_27939
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-5556
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KR production parser down
## Description
This is an automatic error report generated for South Korea (KR).
Issues:
- No recent data found for `consumption` parser
- No recent data found for `price` parser
- No recent data found for `production` parser
## Suggestions
- Try running the parser locally using the command `poetry run test_parser KR production`
- <a href="https://storage.googleapis.com/electricitymap-parser-logs/KR.html">Explore the runtime logs</a>
You can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/KR.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import json
4 import pprint
5 import re
6 from datetime import datetime, timedelta
7 from logging import Logger, getLogger
8 from typing import List, Optional
9
10 import arrow
11 import pandas as pd
12 from bs4 import BeautifulSoup
13 from requests import Session
14
15 from parsers.lib.config import refetch_frequency
16
17 TIMEZONE = "Asia/Seoul"
18 REAL_TIME_URL = "https://new.kpx.or.kr/powerinfoSubmain.es?mid=a10606030000"
19 PRICE_URL = "https://new.kpx.or.kr/smpInland.es?mid=a10606080100&device=pc"
20 LONG_TERM_PRODUCTION_URL = (
21 "https://new.kpx.or.kr/powerSource.es?mid=a10606030000&device=chart"
22 )
23
24 pp = pprint.PrettyPrinter(indent=4)
25
26 #### Classification of New & Renewable Energy Sources ####
27 # Source: https://cms.khnp.co.kr/eng/content/563/main.do?mnCd=EN040101
28 # New energy: Hydrogen, Fuel Cell, Coal liquefied or gasified energy, and vacuum residue gasified energy, etc.
29 # Renewable: Solar, Wind power, Water power, ocean energy, Geothermal, Bio energy, etc.
30
31 # src: https://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object
32 def time_floor(time, delta, epoch=None):
33 if epoch is None:
34 epoch = datetime(1970, 1, 1, tzinfo=time.tzinfo)
35 mod = (time - epoch) % delta
36 return time - mod
37
38
39 def extract_chart_data(html):
40 """
41 Extracts generation breakdown chart data from the source code of the page.
42 """
43 # Extract object with data
44 data_source = re.search(r"var ictArr = (\[\{.+\}\]);", html).group(1)
45 # Un-quoted keys ({key:"value"}) are valid JavaScript but not valid JSON (which requires {"key":"value"}).
46 # Will break if other keys than these are introduced. Alternatively, use a JSON5 library (JSON5 allows un-quoted keys)
47 data_source = re.sub(
48 r'"(localCoal|newRenewable|oil|once|gas|nuclearPower|coal|regDate|raisingWater|waterPower|seq)"',
49 r'"\1"',
50 data_source,
51 )
52 json_obj = json.loads(data_source)
53
54 timed_data = {}
55
56 for item in json_obj:
57 if item["regDate"] == "0":
58 break
59
60 date = datetime.strptime(item["regDate"], "%Y-%m-%d %H:%M")
61 date = arrow.get(date, TIMEZONE).datetime
62
63 timed_data[date] = {
64 "coal": round(float(item["coal"]) + float(item["localCoal"]), 5),
65 "gas": round(float(item["gas"]), 5),
66 "hydro": round(float(item["waterPower"]), 5),
67 "nuclear": round(float(item["nuclearPower"]), 5),
68 "oil": round(float(item["oil"]), 5),
69 "renewable": round(float(item["newRenewable"]), 5),
70 "pumpedHydro": round(float(item["raisingWater"]), 5),
71 }
72
73 return timed_data
74
75
76 @refetch_frequency(timedelta(minutes=5))
77 def fetch_consumption(
78 zone_key: str = "KR",
79 session: Optional[Session] = None,
80 target_datetime: Optional[datetime] = None,
81 logger: Logger = getLogger(__name__),
82 ) -> dict:
83 """
84 Fetches consumption.
85 """
86
87 if target_datetime:
88 raise NotImplementedError("This parser is not yet able to parse past dates")
89
90 r = session or Session()
91 url = REAL_TIME_URL
92
93 response = r.get(url)
94 assert response.status_code == 200
95
96 soup = BeautifulSoup(response.text, "html.parser")
97 consumption_title = soup.find("th", string=re.compile(r"\s*현재부하\s*"))
98 consumption_val = float(
99 consumption_title.find_next_sibling().text.split()[0].replace(",", "")
100 )
101
102 consumption_date_list = soup.find("p", {"class": "info_top"}).text.split(" ")[:2]
103 consumption_date_list[0] = consumption_date_list[0].replace(".", "-").split("(")[0]
104 consumption_date = datetime.strptime(
105 " ".join(consumption_date_list), "%Y-%m-%d %H:%M"
106 )
107 consumption_date = arrow.get(consumption_date, TIMEZONE).datetime
108
109 data = {
110 "consumption": consumption_val,
111 "datetime": consumption_date,
112 "source": url,
113 "zoneKey": zone_key,
114 }
115
116 return data
117
118
119 @refetch_frequency(timedelta(hours=1))
120 def fetch_price(
121 zone_key: str = "KR",
122 session: Optional[Session] = None,
123 target_datetime: Optional[datetime] = None,
124 logger: Logger = getLogger(__name__),
125 ):
126
127 first_available_date = time_floor(
128 arrow.now(TIMEZONE).shift(days=-6), timedelta(days=1)
129 ).shift(hours=1)
130
131 if target_datetime is not None and target_datetime < first_available_date:
132 raise NotImplementedError(
133 "This parser is not able to parse dates more than one week in the past."
134 )
135
136 if target_datetime is None:
137 target_datetime = arrow.now(TIMEZONE).datetime
138
139 r = session or Session()
140 url = PRICE_URL
141
142 response = r.get(url)
143 assert response.status_code == 200
144
145 all_data = []
146 table_prices = pd.read_html(response.text, header=0)[0]
147
148 for col_idx in range(1, table_prices.shape[1]):
149 for row_idx in range(24):
150
151 day = col_idx
152 hour = row_idx + 1
153
154 if hour == 24:
155 hour = 0
156 day += 1
157
158 arw_day = (
159 arrow.now(TIMEZONE)
160 .shift(days=-1 * (7 - day))
161 .replace(hour=hour, minute=0, second=0, microsecond=0)
162 )
163 price_value = (
164 table_prices.iloc[row_idx, col_idx] * 1000
165 ) # Convert from Won/kWh to Won/MWh
166
167 data = {
168 "zoneKey": zone_key,
169 "datetime": arw_day.datetime,
170 "currency": "KRW",
171 "price": price_value,
172 "source": "new.kpx.or.kr",
173 }
174
175 all_data.append(data)
176
177 return all_data
178
179
180 def get_long_term_prod_data(
181 session: Optional[Session] = None, target_datetime: Optional[datetime] = None
182 ) -> List[dict]:
183 target_datetime_formatted_daily = target_datetime.strftime("%Y-%m-%d")
184
185 r = session or Session()
186
187 # CSRF token is needed to access the production data
188 r.get(LONG_TERM_PRODUCTION_URL)
189 cookies_dict = r.cookies.get_dict()
190
191 payload = {
192 "mid": "a10606030000",
193 "device": "chart",
194 "view_sdate": target_datetime_formatted_daily,
195 "view_edate": target_datetime_formatted_daily,
196 "_csrf": cookies_dict["XSRF-TOKEN"],
197 }
198
199 res = r.post(LONG_TERM_PRODUCTION_URL, payload)
200
201 assert res.status_code == 200
202
203 all_data = []
204
205 soup = BeautifulSoup(res.text, "html.parser")
206 table_rows = soup.find_all("tr")[1:]
207
208 for row in table_rows:
209
210 sanitized_date = [value[:-1] for value in row.find_all("td")[0].text.split(" ")]
211 curr_prod_datetime_string = (
212 "-".join(sanitized_date[:3]) + "T" + ":".join(sanitized_date[3:]) + ":00"
213 )
214 arw_datetime = arrow.get(
215 curr_prod_datetime_string, "YYYY-MM-DDTHH:mm:ss", tzinfo=TIMEZONE
216 ).datetime
217
218 data = {
219 "zoneKey": "KR",
220 "datetime": arw_datetime,
221 "capacity": {},
222 "production": {},
223 "storage": {},
224 "source": "https://new.kpx.or.kr",
225 }
226
227 row_values = row.find_all("td")
228 production_values = [
229 int("".join(value.text.split(","))) for value in row_values[1:]
230 ]
231
232 # order of production_values
233 # 0. other, 1. gas, 2. renewable, 3. coal, 4. nuclear
234 # other can be negative as well as positive due to pumped hydro
235
236 data["datetime"] = arw_datetime
237 data["production"]["unknown"] = production_values[0] + production_values[2]
238 data["production"]["gas"] = production_values[1]
239 data["production"]["coal"] = production_values[3]
240 data["production"]["nuclear"] = production_values[4]
241
242 all_data.append(data)
243
244 return all_data
245
246
247 def get_granular_real_time_prod_data(session: Optional[Session] = None) -> dict:
248 r0 = session or Session()
249 res_0 = r0.get(REAL_TIME_URL)
250 chart_data = extract_chart_data(res_0.text)
251
252 return chart_data
253
254
255 @refetch_frequency(timedelta(minutes=5))
256 def fetch_production(
257 zone_key: str = "KR",
258 session: Optional[Session] = None,
259 target_datetime: Optional[datetime] = None,
260 logger: Logger = getLogger(__name__),
261 ) -> List[dict]:
262
263 if target_datetime is not None and target_datetime < arrow.get(
264 2021, 12, 22, 0, 0, 0, tzinfo=TIMEZONE
265 ):
266 raise NotImplementedError(
267 "This parser is not able to parse dates before 2021-12-22."
268 )
269
270 if target_datetime is None:
271 target_datetime = arrow.now(TIMEZONE).datetime
272
273 all_data = []
274
275 if target_datetime.date() == arrow.now(TIMEZONE).date():
276 chart_data = get_granular_real_time_prod_data(session=session)
277
278 for datetime_key, chart_data_values in chart_data.items():
279 data = {
280 "zoneKey": "KR",
281 "datetime": datetime_key,
282 "capacity": {},
283 "production": {},
284 "storage": {},
285 "source": "https://new.kpx.or.kr",
286 }
287
288 data["storage"]["hydro"] = chart_data_values["pumpedHydro"]
289
290 data["production"]["coal"] = chart_data_values["coal"]
291 data["production"]["gas"] = chart_data_values["gas"]
292 data["production"]["nuclear"] = chart_data_values["nuclear"]
293 data["production"]["oil"] = chart_data_values["oil"]
294 data["production"]["hydro"] = chart_data_values["hydro"]
295 data["production"]["unknown"] = chart_data_values["renewable"]
296
297 all_data.append(data)
298
299 else:
300 all_data = get_long_term_prod_data(
301 session=session, target_datetime=target_datetime
302 )
303
304 return all_data
305
306
307 if __name__ == "__main__":
308 # Testing datetime on specific date
309 target_datetime = arrow.get(2022, 2, 7, 16, 35, 0, tzinfo=TIMEZONE).datetime
310
311 print("fetch_production() ->")
312 # pp.pprint(fetch_production(target_datetime=target_datetime))
313 pp.pprint(fetch_production())
314
315 print("fetch_price() -> ")
316 # pp.pprint(fetch_price(target_datetime=target_datetime))
317 pp.pprint(fetch_price())
318
319 print("fetch_consumption() -> ")
320 pp.pprint(fetch_consumption())
321
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/KR.py b/parsers/KR.py
--- a/parsers/KR.py
+++ b/parsers/KR.py
@@ -90,7 +90,7 @@
r = session or Session()
url = REAL_TIME_URL
- response = r.get(url)
+ response = r.get(url, verify=False)
assert response.status_code == 200
soup = BeautifulSoup(response.text, "html.parser")
@@ -139,7 +139,7 @@
r = session or Session()
url = PRICE_URL
- response = r.get(url)
+ response = r.get(url, verify=False)
assert response.status_code == 200
all_data = []
@@ -246,7 +246,7 @@
def get_granular_real_time_prod_data(session: Optional[Session] = None) -> dict:
r0 = session or Session()
- res_0 = r0.get(REAL_TIME_URL)
+ res_0 = r0.get(REAL_TIME_URL, verify=False)
chart_data = extract_chart_data(res_0.text)
return chart_data
@@ -285,7 +285,7 @@
"source": "https://new.kpx.or.kr",
}
- data["storage"]["hydro"] = chart_data_values["pumpedHydro"]
+ data["storage"]["hydro"] = -chart_data_values["pumpedHydro"]
data["production"]["coal"] = chart_data_values["coal"]
data["production"]["gas"] = chart_data_values["gas"]
|
{"golden_diff": "diff --git a/parsers/KR.py b/parsers/KR.py\n--- a/parsers/KR.py\n+++ b/parsers/KR.py\n@@ -90,7 +90,7 @@\n r = session or Session()\n url = REAL_TIME_URL\n \n- response = r.get(url)\n+ response = r.get(url, verify=False)\n assert response.status_code == 200\n \n soup = BeautifulSoup(response.text, \"html.parser\")\n@@ -139,7 +139,7 @@\n r = session or Session()\n url = PRICE_URL\n \n- response = r.get(url)\n+ response = r.get(url, verify=False)\n assert response.status_code == 200\n \n all_data = []\n@@ -246,7 +246,7 @@\n \n def get_granular_real_time_prod_data(session: Optional[Session] = None) -> dict:\n r0 = session or Session()\n- res_0 = r0.get(REAL_TIME_URL)\n+ res_0 = r0.get(REAL_TIME_URL, verify=False)\n chart_data = extract_chart_data(res_0.text)\n \n return chart_data\n@@ -285,7 +285,7 @@\n \"source\": \"https://new.kpx.or.kr\",\n }\n \n- data[\"storage\"][\"hydro\"] = chart_data_values[\"pumpedHydro\"]\n+ data[\"storage\"][\"hydro\"] = -chart_data_values[\"pumpedHydro\"]\n \n data[\"production\"][\"coal\"] = chart_data_values[\"coal\"]\n data[\"production\"][\"gas\"] = chart_data_values[\"gas\"]\n", "issue": "KR production parser down\n## Description\n\nThis is an automatic error report generated for South Korea (KR).\n\nIssues:\n- No recent data found for `consumption` parser\n- No recent data found for `price` parser\n- No recent data found for `production` parser\n\n## Suggestions\n- Try running the parser locally using the command `poetry run test_parser KR production`\n- <a href=\"https://storage.googleapis.com/electricitymap-parser-logs/KR.html\">Explore the runtime logs</a>\n\nYou can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport json\nimport pprint\nimport re\nfrom datetime import datetime, timedelta\nfrom logging import Logger, getLogger\nfrom typing import List, Optional\n\nimport arrow\nimport pandas as pd\nfrom bs4 import BeautifulSoup\nfrom requests import Session\n\nfrom parsers.lib.config import refetch_frequency\n\nTIMEZONE = \"Asia/Seoul\"\nREAL_TIME_URL = \"https://new.kpx.or.kr/powerinfoSubmain.es?mid=a10606030000\"\nPRICE_URL = \"https://new.kpx.or.kr/smpInland.es?mid=a10606080100&device=pc\"\nLONG_TERM_PRODUCTION_URL = (\n \"https://new.kpx.or.kr/powerSource.es?mid=a10606030000&device=chart\"\n)\n\npp = pprint.PrettyPrinter(indent=4)\n\n#### Classification of New & Renewable Energy Sources ####\n# Source: https://cms.khnp.co.kr/eng/content/563/main.do?mnCd=EN040101\n# New energy: Hydrogen, Fuel Cell, Coal liquefied or gasified energy, and vacuum residue gasified energy, etc.\n# Renewable: Solar, Wind power, Water power, ocean energy, Geothermal, Bio energy, etc.\n\n# src: https://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object\ndef time_floor(time, delta, epoch=None):\n if epoch is None:\n epoch = datetime(1970, 1, 1, tzinfo=time.tzinfo)\n mod = (time - epoch) % delta\n return time - mod\n\n\ndef extract_chart_data(html):\n \"\"\"\n Extracts generation breakdown chart data from the source code of the page.\n \"\"\"\n # Extract object with data\n data_source = re.search(r\"var ictArr = (\\[\\{.+\\}\\]);\", html).group(1)\n # Un-quoted keys ({key:\"value\"}) are valid JavaScript but not valid JSON (which requires {\"key\":\"value\"}).\n # Will break if other keys than these are introduced. 
Alternatively, use a JSON5 library (JSON5 allows un-quoted keys)\n data_source = re.sub(\n r'\"(localCoal|newRenewable|oil|once|gas|nuclearPower|coal|regDate|raisingWater|waterPower|seq)\"',\n r'\"\\1\"',\n data_source,\n )\n json_obj = json.loads(data_source)\n\n timed_data = {}\n\n for item in json_obj:\n if item[\"regDate\"] == \"0\":\n break\n\n date = datetime.strptime(item[\"regDate\"], \"%Y-%m-%d %H:%M\")\n date = arrow.get(date, TIMEZONE).datetime\n\n timed_data[date] = {\n \"coal\": round(float(item[\"coal\"]) + float(item[\"localCoal\"]), 5),\n \"gas\": round(float(item[\"gas\"]), 5),\n \"hydro\": round(float(item[\"waterPower\"]), 5),\n \"nuclear\": round(float(item[\"nuclearPower\"]), 5),\n \"oil\": round(float(item[\"oil\"]), 5),\n \"renewable\": round(float(item[\"newRenewable\"]), 5),\n \"pumpedHydro\": round(float(item[\"raisingWater\"]), 5),\n }\n\n return timed_data\n\n\n@refetch_frequency(timedelta(minutes=5))\ndef fetch_consumption(\n zone_key: str = \"KR\",\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n) -> dict:\n \"\"\"\n Fetches consumption.\n \"\"\"\n\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates\")\n\n r = session or Session()\n url = REAL_TIME_URL\n\n response = r.get(url)\n assert response.status_code == 200\n\n soup = BeautifulSoup(response.text, \"html.parser\")\n consumption_title = soup.find(\"th\", string=re.compile(r\"\\s*\ud604\uc7ac\ubd80\ud558\\s*\"))\n consumption_val = float(\n consumption_title.find_next_sibling().text.split()[0].replace(\",\", \"\")\n )\n\n consumption_date_list = soup.find(\"p\", {\"class\": \"info_top\"}).text.split(\" \")[:2]\n consumption_date_list[0] = consumption_date_list[0].replace(\".\", \"-\").split(\"(\")[0]\n consumption_date = datetime.strptime(\n \" \".join(consumption_date_list), \"%Y-%m-%d %H:%M\"\n )\n consumption_date = arrow.get(consumption_date, TIMEZONE).datetime\n\n data = {\n \"consumption\": consumption_val,\n \"datetime\": consumption_date,\n \"source\": url,\n \"zoneKey\": zone_key,\n }\n\n return data\n\n\n@refetch_frequency(timedelta(hours=1))\ndef fetch_price(\n zone_key: str = \"KR\",\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n):\n\n first_available_date = time_floor(\n arrow.now(TIMEZONE).shift(days=-6), timedelta(days=1)\n ).shift(hours=1)\n\n if target_datetime is not None and target_datetime < first_available_date:\n raise NotImplementedError(\n \"This parser is not able to parse dates more than one week in the past.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n r = session or Session()\n url = PRICE_URL\n\n response = r.get(url)\n assert response.status_code == 200\n\n all_data = []\n table_prices = pd.read_html(response.text, header=0)[0]\n\n for col_idx in range(1, table_prices.shape[1]):\n for row_idx in range(24):\n\n day = col_idx\n hour = row_idx + 1\n\n if hour == 24:\n hour = 0\n day += 1\n\n arw_day = (\n arrow.now(TIMEZONE)\n .shift(days=-1 * (7 - day))\n .replace(hour=hour, minute=0, second=0, microsecond=0)\n )\n price_value = (\n table_prices.iloc[row_idx, col_idx] * 1000\n ) # Convert from Won/kWh to Won/MWh\n\n data = {\n \"zoneKey\": zone_key,\n \"datetime\": arw_day.datetime,\n \"currency\": \"KRW\",\n \"price\": price_value,\n \"source\": \"new.kpx.or.kr\",\n }\n\n all_data.append(data)\n\n return all_data\n\n\ndef 
get_long_term_prod_data(\n session: Optional[Session] = None, target_datetime: Optional[datetime] = None\n) -> List[dict]:\n target_datetime_formatted_daily = target_datetime.strftime(\"%Y-%m-%d\")\n\n r = session or Session()\n\n # CSRF token is needed to access the production data\n r.get(LONG_TERM_PRODUCTION_URL)\n cookies_dict = r.cookies.get_dict()\n\n payload = {\n \"mid\": \"a10606030000\",\n \"device\": \"chart\",\n \"view_sdate\": target_datetime_formatted_daily,\n \"view_edate\": target_datetime_formatted_daily,\n \"_csrf\": cookies_dict[\"XSRF-TOKEN\"],\n }\n\n res = r.post(LONG_TERM_PRODUCTION_URL, payload)\n\n assert res.status_code == 200\n\n all_data = []\n\n soup = BeautifulSoup(res.text, \"html.parser\")\n table_rows = soup.find_all(\"tr\")[1:]\n\n for row in table_rows:\n\n sanitized_date = [value[:-1] for value in row.find_all(\"td\")[0].text.split(\" \")]\n curr_prod_datetime_string = (\n \"-\".join(sanitized_date[:3]) + \"T\" + \":\".join(sanitized_date[3:]) + \":00\"\n )\n arw_datetime = arrow.get(\n curr_prod_datetime_string, \"YYYY-MM-DDTHH:mm:ss\", tzinfo=TIMEZONE\n ).datetime\n\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": arw_datetime,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n row_values = row.find_all(\"td\")\n production_values = [\n int(\"\".join(value.text.split(\",\"))) for value in row_values[1:]\n ]\n\n # order of production_values\n # 0. other, 1. gas, 2. renewable, 3. coal, 4. nuclear\n # other can be negative as well as positive due to pumped hydro\n\n data[\"datetime\"] = arw_datetime\n data[\"production\"][\"unknown\"] = production_values[0] + production_values[2]\n data[\"production\"][\"gas\"] = production_values[1]\n data[\"production\"][\"coal\"] = production_values[3]\n data[\"production\"][\"nuclear\"] = production_values[4]\n\n all_data.append(data)\n\n return all_data\n\n\ndef get_granular_real_time_prod_data(session: Optional[Session] = None) -> dict:\n r0 = session or Session()\n res_0 = r0.get(REAL_TIME_URL)\n chart_data = extract_chart_data(res_0.text)\n\n return chart_data\n\n\n@refetch_frequency(timedelta(minutes=5))\ndef fetch_production(\n zone_key: str = \"KR\",\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n) -> List[dict]:\n\n if target_datetime is not None and target_datetime < arrow.get(\n 2021, 12, 22, 0, 0, 0, tzinfo=TIMEZONE\n ):\n raise NotImplementedError(\n \"This parser is not able to parse dates before 2021-12-22.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n all_data = []\n\n if target_datetime.date() == arrow.now(TIMEZONE).date():\n chart_data = get_granular_real_time_prod_data(session=session)\n\n for datetime_key, chart_data_values in chart_data.items():\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": datetime_key,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n data[\"storage\"][\"hydro\"] = chart_data_values[\"pumpedHydro\"]\n\n data[\"production\"][\"coal\"] = chart_data_values[\"coal\"]\n data[\"production\"][\"gas\"] = chart_data_values[\"gas\"]\n data[\"production\"][\"nuclear\"] = chart_data_values[\"nuclear\"]\n data[\"production\"][\"oil\"] = chart_data_values[\"oil\"]\n data[\"production\"][\"hydro\"] = chart_data_values[\"hydro\"]\n data[\"production\"][\"unknown\"] = chart_data_values[\"renewable\"]\n\n all_data.append(data)\n\n else:\n all_data = 
get_long_term_prod_data(\n session=session, target_datetime=target_datetime\n )\n\n return all_data\n\n\nif __name__ == \"__main__\":\n # Testing datetime on specific date\n target_datetime = arrow.get(2022, 2, 7, 16, 35, 0, tzinfo=TIMEZONE).datetime\n\n print(\"fetch_production() ->\")\n # pp.pprint(fetch_production(target_datetime=target_datetime))\n pp.pprint(fetch_production())\n\n print(\"fetch_price() -> \")\n # pp.pprint(fetch_price(target_datetime=target_datetime))\n pp.pprint(fetch_price())\n\n print(\"fetch_consumption() -> \")\n pp.pprint(fetch_consumption())\n", "path": "parsers/KR.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport json\nimport pprint\nimport re\nfrom datetime import datetime, timedelta\nfrom logging import Logger, getLogger\nfrom typing import List, Optional\n\nimport arrow\nimport pandas as pd\nfrom bs4 import BeautifulSoup\nfrom requests import Session\n\nfrom parsers.lib.config import refetch_frequency\n\nTIMEZONE = \"Asia/Seoul\"\nREAL_TIME_URL = \"https://new.kpx.or.kr/powerinfoSubmain.es?mid=a10606030000\"\nPRICE_URL = \"https://new.kpx.or.kr/smpInland.es?mid=a10606080100&device=pc\"\nLONG_TERM_PRODUCTION_URL = (\n \"https://new.kpx.or.kr/powerSource.es?mid=a10606030000&device=chart\"\n)\n\npp = pprint.PrettyPrinter(indent=4)\n\n#### Classification of New & Renewable Energy Sources ####\n# Source: https://cms.khnp.co.kr/eng/content/563/main.do?mnCd=EN040101\n# New energy: Hydrogen, Fuel Cell, Coal liquefied or gasified energy, and vacuum residue gasified energy, etc.\n# Renewable: Solar, Wind power, Water power, ocean energy, Geothermal, Bio energy, etc.\n\n# src: https://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object\ndef time_floor(time, delta, epoch=None):\n if epoch is None:\n epoch = datetime(1970, 1, 1, tzinfo=time.tzinfo)\n mod = (time - epoch) % delta\n return time - mod\n\n\ndef extract_chart_data(html):\n \"\"\"\n Extracts generation breakdown chart data from the source code of the page.\n \"\"\"\n # Extract object with data\n data_source = re.search(r\"var ictArr = (\\[\\{.+\\}\\]);\", html).group(1)\n # Un-quoted keys ({key:\"value\"}) are valid JavaScript but not valid JSON (which requires {\"key\":\"value\"}).\n # Will break if other keys than these are introduced. 
Alternatively, use a JSON5 library (JSON5 allows un-quoted keys)\n data_source = re.sub(\n r'\"(localCoal|newRenewable|oil|once|gas|nuclearPower|coal|regDate|raisingWater|waterPower|seq)\"',\n r'\"\\1\"',\n data_source,\n )\n json_obj = json.loads(data_source)\n\n timed_data = {}\n\n for item in json_obj:\n if item[\"regDate\"] == \"0\":\n break\n\n date = datetime.strptime(item[\"regDate\"], \"%Y-%m-%d %H:%M\")\n date = arrow.get(date, TIMEZONE).datetime\n\n timed_data[date] = {\n \"coal\": round(float(item[\"coal\"]) + float(item[\"localCoal\"]), 5),\n \"gas\": round(float(item[\"gas\"]), 5),\n \"hydro\": round(float(item[\"waterPower\"]), 5),\n \"nuclear\": round(float(item[\"nuclearPower\"]), 5),\n \"oil\": round(float(item[\"oil\"]), 5),\n \"renewable\": round(float(item[\"newRenewable\"]), 5),\n \"pumpedHydro\": round(float(item[\"raisingWater\"]), 5),\n }\n\n return timed_data\n\n\n@refetch_frequency(timedelta(minutes=5))\ndef fetch_consumption(\n zone_key: str = \"KR\",\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n) -> dict:\n \"\"\"\n Fetches consumption.\n \"\"\"\n\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates\")\n\n r = session or Session()\n url = REAL_TIME_URL\n\n response = r.get(url, verify=False)\n assert response.status_code == 200\n\n soup = BeautifulSoup(response.text, \"html.parser\")\n consumption_title = soup.find(\"th\", string=re.compile(r\"\\s*\ud604\uc7ac\ubd80\ud558\\s*\"))\n consumption_val = float(\n consumption_title.find_next_sibling().text.split()[0].replace(\",\", \"\")\n )\n\n consumption_date_list = soup.find(\"p\", {\"class\": \"info_top\"}).text.split(\" \")[:2]\n consumption_date_list[0] = consumption_date_list[0].replace(\".\", \"-\").split(\"(\")[0]\n consumption_date = datetime.strptime(\n \" \".join(consumption_date_list), \"%Y-%m-%d %H:%M\"\n )\n consumption_date = arrow.get(consumption_date, TIMEZONE).datetime\n\n data = {\n \"consumption\": consumption_val,\n \"datetime\": consumption_date,\n \"source\": url,\n \"zoneKey\": zone_key,\n }\n\n return data\n\n\n@refetch_frequency(timedelta(hours=1))\ndef fetch_price(\n zone_key: str = \"KR\",\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n):\n\n first_available_date = time_floor(\n arrow.now(TIMEZONE).shift(days=-6), timedelta(days=1)\n ).shift(hours=1)\n\n if target_datetime is not None and target_datetime < first_available_date:\n raise NotImplementedError(\n \"This parser is not able to parse dates more than one week in the past.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n r = session or Session()\n url = PRICE_URL\n\n response = r.get(url, verify=False)\n assert response.status_code == 200\n\n all_data = []\n table_prices = pd.read_html(response.text, header=0)[0]\n\n for col_idx in range(1, table_prices.shape[1]):\n for row_idx in range(24):\n\n day = col_idx\n hour = row_idx + 1\n\n if hour == 24:\n hour = 0\n day += 1\n\n arw_day = (\n arrow.now(TIMEZONE)\n .shift(days=-1 * (7 - day))\n .replace(hour=hour, minute=0, second=0, microsecond=0)\n )\n price_value = (\n table_prices.iloc[row_idx, col_idx] * 1000\n ) # Convert from Won/kWh to Won/MWh\n\n data = {\n \"zoneKey\": zone_key,\n \"datetime\": arw_day.datetime,\n \"currency\": \"KRW\",\n \"price\": price_value,\n \"source\": \"new.kpx.or.kr\",\n }\n\n all_data.append(data)\n\n return 
all_data\n\n\ndef get_long_term_prod_data(\n session: Optional[Session] = None, target_datetime: Optional[datetime] = None\n) -> List[dict]:\n target_datetime_formatted_daily = target_datetime.strftime(\"%Y-%m-%d\")\n\n r = session or Session()\n\n # CSRF token is needed to access the production data\n r.get(LONG_TERM_PRODUCTION_URL)\n cookies_dict = r.cookies.get_dict()\n\n payload = {\n \"mid\": \"a10606030000\",\n \"device\": \"chart\",\n \"view_sdate\": target_datetime_formatted_daily,\n \"view_edate\": target_datetime_formatted_daily,\n \"_csrf\": cookies_dict[\"XSRF-TOKEN\"],\n }\n\n res = r.post(LONG_TERM_PRODUCTION_URL, payload)\n\n assert res.status_code == 200\n\n all_data = []\n\n soup = BeautifulSoup(res.text, \"html.parser\")\n table_rows = soup.find_all(\"tr\")[1:]\n\n for row in table_rows:\n\n sanitized_date = [value[:-1] for value in row.find_all(\"td\")[0].text.split(\" \")]\n curr_prod_datetime_string = (\n \"-\".join(sanitized_date[:3]) + \"T\" + \":\".join(sanitized_date[3:]) + \":00\"\n )\n arw_datetime = arrow.get(\n curr_prod_datetime_string, \"YYYY-MM-DDTHH:mm:ss\", tzinfo=TIMEZONE\n ).datetime\n\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": arw_datetime,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n row_values = row.find_all(\"td\")\n production_values = [\n int(\"\".join(value.text.split(\",\"))) for value in row_values[1:]\n ]\n\n # order of production_values\n # 0. other, 1. gas, 2. renewable, 3. coal, 4. nuclear\n # other can be negative as well as positive due to pumped hydro\n\n data[\"datetime\"] = arw_datetime\n data[\"production\"][\"unknown\"] = production_values[0] + production_values[2]\n data[\"production\"][\"gas\"] = production_values[1]\n data[\"production\"][\"coal\"] = production_values[3]\n data[\"production\"][\"nuclear\"] = production_values[4]\n\n all_data.append(data)\n\n return all_data\n\n\ndef get_granular_real_time_prod_data(session: Optional[Session] = None) -> dict:\n r0 = session or Session()\n res_0 = r0.get(REAL_TIME_URL, verify=False)\n chart_data = extract_chart_data(res_0.text)\n\n return chart_data\n\n\n@refetch_frequency(timedelta(minutes=5))\ndef fetch_production(\n zone_key: str = \"KR\",\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n) -> List[dict]:\n\n if target_datetime is not None and target_datetime < arrow.get(\n 2021, 12, 22, 0, 0, 0, tzinfo=TIMEZONE\n ):\n raise NotImplementedError(\n \"This parser is not able to parse dates before 2021-12-22.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n all_data = []\n\n if target_datetime.date() == arrow.now(TIMEZONE).date():\n chart_data = get_granular_real_time_prod_data(session=session)\n\n for datetime_key, chart_data_values in chart_data.items():\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": datetime_key,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n data[\"storage\"][\"hydro\"] = -chart_data_values[\"pumpedHydro\"]\n\n data[\"production\"][\"coal\"] = chart_data_values[\"coal\"]\n data[\"production\"][\"gas\"] = chart_data_values[\"gas\"]\n data[\"production\"][\"nuclear\"] = chart_data_values[\"nuclear\"]\n data[\"production\"][\"oil\"] = chart_data_values[\"oil\"]\n data[\"production\"][\"hydro\"] = chart_data_values[\"hydro\"]\n data[\"production\"][\"unknown\"] = chart_data_values[\"renewable\"]\n\n 
all_data.append(data)\n\n else:\n all_data = get_long_term_prod_data(\n session=session, target_datetime=target_datetime\n )\n\n return all_data\n\n\nif __name__ == \"__main__\":\n # Testing datetime on specific date\n target_datetime = arrow.get(2022, 2, 7, 16, 35, 0, tzinfo=TIMEZONE).datetime\n\n print(\"fetch_production() ->\")\n # pp.pprint(fetch_production(target_datetime=target_datetime))\n pp.pprint(fetch_production())\n\n print(\"fetch_price() -> \")\n # pp.pprint(fetch_price(target_datetime=target_datetime))\n pp.pprint(fetch_price())\n\n print(\"fetch_consumption() -> \")\n pp.pprint(fetch_consumption())\n", "path": "parsers/KR.py"}]}
| 3,865 | 350 |
gh_patches_debug_17047
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-2079
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Skipped Baggage entries in propagation still count against max entries
The decrement operation should be moved after the last continue block if the over-long entry is truly skipped, otherwise this behavior should probably be documented/tested for.
https://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L57-L65
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import typing
16 from urllib.parse import quote_plus, unquote_plus
17
18 from opentelemetry.baggage import get_all, set_baggage
19 from opentelemetry.context import get_current
20 from opentelemetry.context.context import Context
21 from opentelemetry.propagators import textmap
22
23
24 class W3CBaggagePropagator(textmap.TextMapPropagator):
25 """Extracts and injects Baggage which is used to annotate telemetry."""
26
27 _MAX_HEADER_LENGTH = 8192
28 _MAX_PAIR_LENGTH = 4096
29 _MAX_PAIRS = 180
30 _BAGGAGE_HEADER_NAME = "baggage"
31
32 def extract(
33 self,
34 carrier: textmap.CarrierT,
35 context: typing.Optional[Context] = None,
36 getter: textmap.Getter = textmap.default_getter,
37 ) -> Context:
38 """Extract Baggage from the carrier.
39
40 See
41 `opentelemetry.propagators.textmap.TextMapPropagator.extract`
42 """
43
44 if context is None:
45 context = get_current()
46
47 header = _extract_first_element(
48 getter.get(carrier, self._BAGGAGE_HEADER_NAME)
49 )
50
51 if not header or len(header) > self._MAX_HEADER_LENGTH:
52 return context
53
54 baggage_entries = header.split(",")
55 total_baggage_entries = self._MAX_PAIRS
56 for entry in baggage_entries:
57 if total_baggage_entries <= 0:
58 return context
59 total_baggage_entries -= 1
60 if len(entry) > self._MAX_PAIR_LENGTH:
61 continue
62 try:
63 name, value = entry.split("=", 1)
64 except Exception: # pylint: disable=broad-except
65 continue
66 context = set_baggage(
67 unquote_plus(name).strip(),
68 unquote_plus(value).strip(),
69 context=context,
70 )
71
72 return context
73
74 def inject(
75 self,
76 carrier: textmap.CarrierT,
77 context: typing.Optional[Context] = None,
78 setter: textmap.Setter = textmap.default_setter,
79 ) -> None:
80 """Injects Baggage into the carrier.
81
82 See
83 `opentelemetry.propagators.textmap.TextMapPropagator.inject`
84 """
85 baggage_entries = get_all(context=context)
86 if not baggage_entries:
87 return
88
89 baggage_string = _format_baggage(baggage_entries)
90 setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)
91
92 @property
93 def fields(self) -> typing.Set[str]:
94 """Returns a set with the fields set in `inject`."""
95 return {self._BAGGAGE_HEADER_NAME}
96
97
98 def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
99 return ",".join(
100 quote_plus(str(key)) + "=" + quote_plus(str(value))
101 for key, value in baggage_entries.items()
102 )
103
104
105 def _extract_first_element(
106 items: typing.Optional[typing.Iterable[textmap.CarrierT]],
107 ) -> typing.Optional[textmap.CarrierT]:
108 if items is None:
109 return None
110 return next(iter(items), None)
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
@@ -54,9 +54,6 @@
baggage_entries = header.split(",")
total_baggage_entries = self._MAX_PAIRS
for entry in baggage_entries:
- if total_baggage_entries <= 0:
- return context
- total_baggage_entries -= 1
if len(entry) > self._MAX_PAIR_LENGTH:
continue
try:
@@ -68,6 +65,9 @@
unquote_plus(value).strip(),
context=context,
)
+ total_baggage_entries -= 1
+ if total_baggage_entries == 0:
+ break
return context
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n@@ -54,9 +54,6 @@\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n- if total_baggage_entries <= 0:\n- return context\n- total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n@@ -68,6 +65,9 @@\n unquote_plus(value).strip(),\n context=context,\n )\n+ total_baggage_entries -= 1\n+ if total_baggage_entries == 0:\n+ break\n \n return context\n", "issue": "Skipped Baggage entries in propagation still count against max entries\nThe decrement operation should be moved after the last continue block if the over-long entry is truly skipped, otherwise this behavior should probably be documented/tested for.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L57-L65\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nfrom urllib.parse import quote_plus, unquote_plus\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if total_baggage_entries <= 0:\n return context\n total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = set_baggage(\n unquote_plus(name).strip(),\n unquote_plus(value).strip(),\n context=context,\n )\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n 
See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nfrom urllib.parse import quote_plus, unquote_plus\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = set_baggage(\n unquote_plus(name).strip(),\n unquote_plus(value).strip(),\n context=context,\n )\n total_baggage_entries -= 1\n if total_baggage_entries == 0:\n break\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def 
fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}]}
| 1,429 | 221 |
gh_patches_debug_37831
|
rasdani/github-patches
|
git_diff
|
watchdogpolska__small_eod-996
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Testing without OAuth
> By the way, I have a question about testing the application locally. As far as I understand, the description in the readme is no longer fully up to date - it is not enough to log in to an admin account to get access to the application. Most likely that trick stopped working (unless I am mixing something up) the moment oauth was added.
My question is - how is the application tested locally? For now I handle it by standing up a local server that fakes oauth plus some small changes in the python code, but maybe there is a nicer way (@MichalKarol - maybe you have some suggestions?)?
If a simple server of my own really is the simplest option, I can take the opportunity to tidy up my solution a bit, add it to the repo and update the documentation, but first I wanted to find out whether there is a better way.
_Originally posted by @rwakulszowa in https://github.com/watchdogpolska/small_eod/issues/993#issuecomment-863313423_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend-project/small_eod/users/views.py`
Content:
```
1 from django.conf import settings
2 from django_filters.rest_framework import DjangoFilterBackend
3 from drf_yasg2 import openapi
4 from drf_yasg2.utils import swagger_auto_schema
5 from rest_framework import viewsets
6 from rest_framework.decorators import action
7 from rest_framework.filters import OrderingFilter
8 from rest_framework.permissions import AllowAny
9 from rest_framework.response import Response
10 from rest_framework_simplejwt.tokens import RefreshToken
11
12 from .filterset import UserFilterSet
13 from .providers import GoogleProvider
14 from .serializers import (
15 RefreshTokenRequestSerializer,
16 RequestSerializer,
17 TokenResponseSerializer,
18 User,
19 UserSerializer,
20 )
21
22
23 class UserViewSet(viewsets.ModelViewSet):
24 """
25 API endpoint that allows users to be viewed or edited.
26 """
27
28 queryset = User.objects.all()
29 serializer_class = UserSerializer
30 provider = GoogleProvider(
31 client_id=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_KEY,
32 client_secret=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET,
33 scopes=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE,
34 )
35 filter_backends = (DjangoFilterBackend, OrderingFilter)
36 filterset_class = UserFilterSet
37 ordering_fields = [
38 "id",
39 "username",
40 "email",
41 "first_name",
42 "last_name",
43 ]
44
45 def get_permissions(self):
46 if self.action in ["auth", "exchange", "refresh"]:
47 return [
48 AllowAny(),
49 ]
50 return super().get_permissions()
51
52 @swagger_auto_schema(
53 method="get",
54 operation_description="API endpoint to receive URI for OAuth authorization url",
55 responses={200: RequestSerializer()},
56 manual_parameters=[],
57 security=[],
58 )
59 @action(detail=False)
60 def auth(self, request):
61 authorization_url, state = self.provider.callback_url(request)
62 request.session["state"] = state
63 serializer = RequestSerializer({"url": authorization_url})
64 return Response(serializer.data)
65
66 @action(detail=False)
67 @swagger_auto_schema(
68 operation_description="API endpoint to exchange "
69 + "authorization code to access token",
70 responses={200: TokenResponseSerializer()},
71 manual_parameters=[
72 openapi.Parameter("authuser", openapi.IN_QUERY, type=openapi.TYPE_STRING),
73 openapi.Parameter("code", openapi.IN_QUERY, type=openapi.TYPE_STRING),
74 openapi.Parameter("prompt", openapi.IN_QUERY, type=openapi.TYPE_STRING),
75 openapi.Parameter(
76 "scope",
77 openapi.IN_QUERY,
78 description="scope of OAuth consents",
79 type=openapi.TYPE_STRING,
80 ),
81 openapi.Parameter("state", openapi.IN_QUERY, type=openapi.TYPE_STRING),
82 ],
83 security=[],
84 )
85 def exchange(self, request):
86 profile = self.provider.exchange(request)
87 user, _ = User.objects.get_or_create(
88 defaults={
89 "username": profile["email"],
90 "first_name": profile["given_name"],
91 "last_name": profile["family_name"],
92 "email": profile["email"],
93 },
94 email=profile["email"],
95 )
96 refresh = RefreshToken.for_user(user)
97 serializer = TokenResponseSerializer(
98 {"refresh_token": str(refresh), "access_token": str(refresh.access_token)}
99 )
100 return Response(serializer.data)
101
102 @action(detail=False, methods=["post"])
103 @swagger_auto_schema(
104 operation_description="API endpoint to exchange "
105 + "refresh token to fresh access token",
106 responses={200: TokenResponseSerializer()},
107 request_body=RefreshTokenRequestSerializer,
108 manual_parameters=[],
109 security=[],
110 )
111 def refresh(self, request):
112 serializer_input = RefreshTokenRequestSerializer(data=request.data)
113 serializer_input.is_valid(raise_exception=True)
114 refresh = RefreshToken(serializer_input.validated_data["refresh_token"])
115 refresh.set_jti()
116 refresh.set_exp()
117 serializer = TokenResponseSerializer(
118 {"refresh_token": str(refresh), "access_token": str(refresh.access_token)}
119 )
120 return Response(serializer.data)
121
```
Path: `backend-project/config/settings/base.py`
Content:
```
1 """
2 Django settings for small_eod project.
3
4 Generated by 'django-admin startproject' using Django 3.0.1.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.0/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.0/ref/settings/
11 """
12
13 import os
14
15 import environ
16
17 env = environ.Env()
18 env.read_env()
19 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
20 BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
21
22 DEBUG = False
23
24 # Quick-start development settings - unsuitable for production
25 # See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
26
27 # SECURITY WARNING: keep the secret key used in production secret!
28
29 # SECURITY WARNING: don't run with debug turned on in production!
30
31 ALLOWED_HOSTS = []
32 USE_X_FORWARDED_HOST = True
33 SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
34
35 # Application definition
36
37 INSTALLED_APPS = [
38 "django.contrib.admin",
39 "django.contrib.auth",
40 "django.contrib.contenttypes",
41 "django.contrib.sessions",
42 "django.contrib.messages",
43 "django.contrib.staticfiles",
44 "django_filters",
45 "rest_framework",
46 "drf_yasg2",
47 "teryt_tree",
48 "fullurl",
49 "small_eod.users",
50 "small_eod.generic",
51 "small_eod.tags",
52 "small_eod.cases",
53 "small_eod.features",
54 "small_eod.channels",
55 "small_eod.institutions",
56 "small_eod.collections",
57 "small_eod.files",
58 "small_eod.letters",
59 "small_eod.notes",
60 "small_eod.events",
61 "small_eod.administrative_units",
62 "small_eod.authkey",
63 "small_eod.migration_v1",
64 ]
65
66 MIDDLEWARE = [
67 "django.middleware.security.SecurityMiddleware",
68 "whitenoise.middleware.WhiteNoiseMiddleware",
69 "django.contrib.sessions.middleware.SessionMiddleware",
70 "django.middleware.common.CommonMiddleware",
71 "django.middleware.csrf.CsrfViewMiddleware",
72 "django.contrib.auth.middleware.AuthenticationMiddleware",
73 "django.contrib.messages.middleware.MessageMiddleware",
74 "django.middleware.clickjacking.XFrameOptionsMiddleware",
75 ]
76
77 ROOT_URLCONF = "config.urls"
78
79 TEMPLATES = [
80 {
81 "BACKEND": "django.template.backends.django.DjangoTemplates",
82 "DIRS": [],
83 "APP_DIRS": True,
84 "OPTIONS": {
85 "context_processors": [
86 "django.template.context_processors.debug",
87 "django.template.context_processors.request",
88 "django.contrib.auth.context_processors.auth",
89 "django.contrib.messages.context_processors.messages",
90 ],
91 },
92 },
93 ]
94
95 WSGI_APPLICATION = "config.wsgi.application"
96
97
98 # Database
99 # https://docs.djangoproject.com/en/3.0/ref/settings/#databases
100
101 DATABASES = {
102 "default": env.db(),
103 "migration": env.db("MIGRATION_DATABASE_URL"),
104 }
105
106 # Password validation
107 # https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
108
109 AUTH_PASSWORD_VALIDATORS = [
110 {
111 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"
112 },
113 {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
114 {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
115 {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
116 ]
117
118 # Internationalization
119 # https://docs.djangoproject.com/en/3.0/topics/i18n/
120
121 LANGUAGE_CODE = "en-us"
122
123 TIME_ZONE = "UTC"
124
125 USE_I18N = True
126
127 USE_L10N = True
128
129 USE_TZ = True
130
131
132 # Static files (CSS, JavaScript, Images)
133 # https://docs.djangoproject.com/en/3.0/howto/static-files/
134
135 STATIC_URL = "/static/"
136 STATIC_ROOT = os.path.join(BASE_DIR, "static")
137 MEDIA_ROOT = os.path.join(BASE_DIR, "media")
138 AUTH_USER_MODEL = "users.User"
139
140 SWAGGER_SETTINGS = {
141 "DEFAULT_INFO": "config.swagger.info",
142 "SECURITY_DEFINITIONS": {
143 "Basic": {"type": "basic"},
144 "Bearer": {"type": "apiKey", "name": "Authorization", "in": "header"},
145 "CollectionToken": {"type": "apiKey", "name": "authorization", "in": "query"},
146 },
147 "SECURITY_REQUIREMENTS": [{"Basic": []}, {"Bearer": []}],
148 }
149
150 REST_FRAMEWORK = {
151 "DEFAULT_AUTHENTICATION_CLASSES": [
152 "rest_framework.authentication.BasicAuthentication",
153 "rest_framework.authentication.SessionAuthentication",
154 "rest_framework_simplejwt.authentication.JWTAuthentication",
155 ],
156 "DEFAULT_PERMISSION_CLASSES": ["rest_framework.permissions.IsAuthenticated"],
157 "DEFAULT_RENDERER_CLASSES": (
158 "djangorestframework_camel_case.render.CamelCaseJSONRenderer",
159 "djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer",
160 ),
161 "DEFAULT_PARSER_CLASSES": (
162 "djangorestframework_camel_case.parser.CamelCaseJSONParser",
163 ),
164 "DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.LimitOffsetPagination",
165 "PAGE_SIZE": 20,
166 "DEFAULT_FILTER_BACKENDS": ["django_filters.rest_framework.DjangoFilterBackend"],
167 }
168
169 SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = env("SOCIAL_AUTH_GOOGLE_OAUTH2_KEY")
170 SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = env("SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET")
171 SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = [
172 "openid",
173 "https://www.googleapis.com/auth/userinfo.email",
174 "https://www.googleapis.com/auth/userinfo.profile",
175 ]
176
177 MINIO_ACCESS_KEY = env("MINIO_ACCESS_KEY")
178 MINIO_SECRET_KEY = env("MINIO_SECRET_KEY")
179 MINIO_URL = env("MINIO_URL")
180 MINIO_BUCKET = env("MINIO_BUCKET", default="files")
181
182 # Very basic logging config
183 LOGGING = {
184 "version": 1,
185 "disable_existing_loggers": False,
186 "handlers": {
187 "console": {
188 "class": "logging.StreamHandler",
189 },
190 },
191 "root": {
192 "handlers": ["console"],
193 "level": "WARNING",
194 },
195 "loggers": {
196 "migrator": {
197 "handlers": ["console"],
198 "level": "DEBUG",
199 "propagate": False,
200 },
201 },
202 }
203
```
Path: `backend-project/small_eod/users/providers.py`
Content:
```
1 from requests_oauthlib import OAuth2Session
2
3
4 class GoogleProvider:
5 authorization_base_url = "https://accounts.google.com/o/oauth2/v2/auth"
6 token_url = "https://www.googleapis.com/oauth2/v4/token"
7 userinfo_url = "https://www.googleapis.com/oauth2/v1/userinfo"
8
9 def __init__(self, client_id, client_secret, scopes):
10 self.client_id = client_id
11 self.client_secret = client_secret
12 self.scopes = scopes
13
14 def get_oauthlib(self, request):
15 return OAuth2Session(
16 client_id=self.client_id,
17 scope=self.scopes,
18 redirect_uri=request.build_absolute_uri("/login/callback"),
19 )
20
21 def callback_url(self, request):
22 google = self.get_oauthlib(request)
23 return google.authorization_url(url=self.authorization_base_url)
24
25 def exchange(self, request):
26 google = self.get_oauthlib(request)
27 google.fetch_token(
28 token_url=self.token_url,
29 client_secret=self.client_secret,
30 authorization_response=request.build_absolute_uri(),
31 )
32 resp = google.get(self.userinfo_url)
33 return resp.json()
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/backend-project/config/settings/base.py b/backend-project/config/settings/base.py
--- a/backend-project/config/settings/base.py
+++ b/backend-project/config/settings/base.py
@@ -173,6 +173,7 @@
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/userinfo.profile",
]
+SOCIAL_AUTH_USE_FAKE_OAUTH = env("SOCIAL_AUTH_USE_FAKE_OAUTH", default=False)
MINIO_ACCESS_KEY = env("MINIO_ACCESS_KEY")
MINIO_SECRET_KEY = env("MINIO_SECRET_KEY")
diff --git a/backend-project/small_eod/users/providers.py b/backend-project/small_eod/users/providers.py
--- a/backend-project/small_eod/users/providers.py
+++ b/backend-project/small_eod/users/providers.py
@@ -1,3 +1,5 @@
+from django.conf import settings
+from django.core.exceptions import ImproperlyConfigured
from requests_oauthlib import OAuth2Session
@@ -31,3 +33,41 @@
)
resp = google.get(self.userinfo_url)
return resp.json()
+
+
+class FakeProvider:
+ base_url = "https://localhost:5678"
+
+ def __init__(self, *args, **kwargs):
+ pass
+
+ def callback_url(self, request):
+ # Hardcode localhost - the provider is expected to be used only in local
+ # deployments.
+ # `build_absolute_uri` doesn't work, because it produces a docker
+ # friendly url.
+ redirect_uri = "http://localhost:8000/login/callback"
+ return f"{self.base_url}?redirect_uri={redirect_uri}", None
+
+ def exchange(self, request):
+ # Hardcoded values.
+ # Simple, but working.
+ return {
+ "email": "[email protected]",
+ "given_name": "GivenName",
+ "family_name": "FamilyName",
+ }
+
+
+def get_provider_cls():
+ flag_value = settings.SOCIAL_AUTH_USE_FAKE_OAUTH
+ if flag_value is True:
+ if not settings.DEBUG:
+ raise ImproperlyConfigured("Fake oauth may only be used in DEBUG mode")
+ return FakeProvider
+ elif flag_value is False:
+ return GoogleProvider
+ else:
+ raise ImproperlyConfigured(
+ f"Fake oauth must be either True or False, is {flag_value}"
+ )
diff --git a/backend-project/small_eod/users/views.py b/backend-project/small_eod/users/views.py
--- a/backend-project/small_eod/users/views.py
+++ b/backend-project/small_eod/users/views.py
@@ -10,7 +10,7 @@
from rest_framework_simplejwt.tokens import RefreshToken
from .filterset import UserFilterSet
-from .providers import GoogleProvider
+from .providers import get_provider_cls
from .serializers import (
RefreshTokenRequestSerializer,
RequestSerializer,
@@ -27,7 +27,7 @@
queryset = User.objects.all()
serializer_class = UserSerializer
- provider = GoogleProvider(
+ provider = get_provider_cls()(
client_id=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_KEY,
client_secret=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET,
scopes=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE,
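
A minimal usage sketch, assuming the patch above is applied; the test module below is hypothetical and only illustrates the provider switch (`override_settings` supplies real booleans, which the `is True` guard in `get_provider_cls` expects):

```python
# Hypothetical test sketch - only illustrates the switch added by the patch above.
from django.test import TestCase, override_settings

from small_eod.users.providers import FakeProvider, GoogleProvider, get_provider_cls


@override_settings(SOCIAL_AUTH_USE_FAKE_OAUTH=True, DEBUG=True)
class FakeProviderSwitchTest(TestCase):
    def test_fake_provider_selected_in_debug(self):
        # Boolean True plus DEBUG on selects the fake provider.
        self.assertIs(get_provider_cls(), FakeProvider)

    @override_settings(SOCIAL_AUTH_USE_FAKE_OAUTH=False)
    def test_google_provider_is_the_default(self):
        self.assertIs(get_provider_cls(), GoogleProvider)
```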
|
{"golden_diff": "diff --git a/backend-project/config/settings/base.py b/backend-project/config/settings/base.py\n--- a/backend-project/config/settings/base.py\n+++ b/backend-project/config/settings/base.py\n@@ -173,6 +173,7 @@\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n ]\n+SOCIAL_AUTH_USE_FAKE_OAUTH = env(\"SOCIAL_AUTH_USE_FAKE_OAUTH\", default=False)\n \n MINIO_ACCESS_KEY = env(\"MINIO_ACCESS_KEY\")\n MINIO_SECRET_KEY = env(\"MINIO_SECRET_KEY\")\ndiff --git a/backend-project/small_eod/users/providers.py b/backend-project/small_eod/users/providers.py\n--- a/backend-project/small_eod/users/providers.py\n+++ b/backend-project/small_eod/users/providers.py\n@@ -1,3 +1,5 @@\n+from django.conf import settings\n+from django.core.exceptions import ImproperlyConfigured\n from requests_oauthlib import OAuth2Session\n \n \n@@ -31,3 +33,41 @@\n )\n resp = google.get(self.userinfo_url)\n return resp.json()\n+\n+\n+class FakeProvider:\n+ base_url = \"https://localhost:5678\"\n+\n+ def __init__(self, *args, **kwargs):\n+ pass\n+\n+ def callback_url(self, request):\n+ # Hardcode localhost - the provider is expected to be used only in local\n+ # deployments.\n+ # `build_absolute_uri` doesn't work, because it produces a docker\n+ # friendly url.\n+ redirect_uri = \"http://localhost:8000/login/callback\"\n+ return f\"{self.base_url}?redirect_uri={redirect_uri}\", None\n+\n+ def exchange(self, request):\n+ # Hardcoded values.\n+ # Simple, but working.\n+ return {\n+ \"email\": \"[email protected]\",\n+ \"given_name\": \"GivenName\",\n+ \"family_name\": \"FamilyName\",\n+ }\n+\n+\n+def get_provider_cls():\n+ flag_value = settings.SOCIAL_AUTH_USE_FAKE_OAUTH\n+ if flag_value is True:\n+ if not settings.DEBUG:\n+ raise ImproperlyConfigured(\"Fake oauth may only be used in DEBUG mode\")\n+ return FakeProvider\n+ elif flag_value is False:\n+ return GoogleProvider\n+ else:\n+ raise ImproperlyConfigured(\n+ f\"Fake oauth must be either True or False, is {flag_value}\"\n+ )\ndiff --git a/backend-project/small_eod/users/views.py b/backend-project/small_eod/users/views.py\n--- a/backend-project/small_eod/users/views.py\n+++ b/backend-project/small_eod/users/views.py\n@@ -10,7 +10,7 @@\n from rest_framework_simplejwt.tokens import RefreshToken\n \n from .filterset import UserFilterSet\n-from .providers import GoogleProvider\n+from .providers import get_provider_cls\n from .serializers import (\n RefreshTokenRequestSerializer,\n RequestSerializer,\n@@ -27,7 +27,7 @@\n \n queryset = User.objects.all()\n serializer_class = UserSerializer\n- provider = GoogleProvider(\n+ provider = get_provider_cls()(\n client_id=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_KEY,\n client_secret=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET,\n scopes=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE,\n", "issue": "Testowanie bez OAuth\n> Przy okazji mam pytanie co do lokalnego testowania aplikacji. Z tego co rozumiem, opis w readme ju\u017c nie jest do ko\u0144ca aktualny - nie wystarczy zalogowa\u0107 si\u0119 na konto admina, \u017ceby uzyska\u0107 dost\u0119p do aplikacji. Prawdopodobnie ta sztuczka przesta\u0142a dzia\u0142a\u0107 (o ile czego\u015b nie myl\u0119) w momencie dodania oauth.\r\nPytanie moje brzmi - w jaki spos\u00f3b aplikacja jest testowana lokalnie? 
P\u00f3ki co za\u0142atwiam to przez postawienie lokalnego serwera udaj\u0105cego oauth i drobne zmiany w kodzie pythona, ale mo\u017ce jest jaki\u015b \u0142adniejszy spos\u00f3b (@MichalKarol - mo\u017ce masz jakie\u015b sugestie?)?\r\nJe\u015bli w\u0142asny prosty serwer to faktycznie najprostsze wyj\u015bcie, to mog\u0119 przy okazji spr\u00f3bowa\u0107 troch\u0119 to moje rozwi\u0105zanie uporz\u0105dkowa\u0107, doda\u0107 do repo i zaktualizowa\u0107 dokumentacj\u0119, ale najpierw chcia\u0142em si\u0119 dowiedzie\u0107 czy jest lepszy spos\u00f3b.\r\n\r\n_Originally posted by @rwakulszowa in https://github.com/watchdogpolska/small_eod/issues/993#issuecomment-863313423_\n", "before_files": [{"content": "from django.conf import settings\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom drf_yasg2 import openapi\nfrom drf_yasg2.utils import swagger_auto_schema\nfrom rest_framework import viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.filters import OrderingFilter\nfrom rest_framework.permissions import AllowAny\nfrom rest_framework.response import Response\nfrom rest_framework_simplejwt.tokens import RefreshToken\n\nfrom .filterset import UserFilterSet\nfrom .providers import GoogleProvider\nfrom .serializers import (\n RefreshTokenRequestSerializer,\n RequestSerializer,\n TokenResponseSerializer,\n User,\n UserSerializer,\n)\n\n\nclass UserViewSet(viewsets.ModelViewSet):\n \"\"\"\n API endpoint that allows users to be viewed or edited.\n \"\"\"\n\n queryset = User.objects.all()\n serializer_class = UserSerializer\n provider = GoogleProvider(\n client_id=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_KEY,\n client_secret=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET,\n scopes=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE,\n )\n filter_backends = (DjangoFilterBackend, OrderingFilter)\n filterset_class = UserFilterSet\n ordering_fields = [\n \"id\",\n \"username\",\n \"email\",\n \"first_name\",\n \"last_name\",\n ]\n\n def get_permissions(self):\n if self.action in [\"auth\", \"exchange\", \"refresh\"]:\n return [\n AllowAny(),\n ]\n return super().get_permissions()\n\n @swagger_auto_schema(\n method=\"get\",\n operation_description=\"API endpoint to receive URI for OAuth authorization url\",\n responses={200: RequestSerializer()},\n manual_parameters=[],\n security=[],\n )\n @action(detail=False)\n def auth(self, request):\n authorization_url, state = self.provider.callback_url(request)\n request.session[\"state\"] = state\n serializer = RequestSerializer({\"url\": authorization_url})\n return Response(serializer.data)\n\n @action(detail=False)\n @swagger_auto_schema(\n operation_description=\"API endpoint to exchange \"\n + \"authorization code to access token\",\n responses={200: TokenResponseSerializer()},\n manual_parameters=[\n openapi.Parameter(\"authuser\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n openapi.Parameter(\"code\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n openapi.Parameter(\"prompt\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n openapi.Parameter(\n \"scope\",\n openapi.IN_QUERY,\n description=\"scope of OAuth consents\",\n type=openapi.TYPE_STRING,\n ),\n openapi.Parameter(\"state\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n ],\n security=[],\n )\n def exchange(self, request):\n profile = self.provider.exchange(request)\n user, _ = User.objects.get_or_create(\n defaults={\n \"username\": profile[\"email\"],\n \"first_name\": profile[\"given_name\"],\n \"last_name\": profile[\"family_name\"],\n \"email\": 
profile[\"email\"],\n },\n email=profile[\"email\"],\n )\n refresh = RefreshToken.for_user(user)\n serializer = TokenResponseSerializer(\n {\"refresh_token\": str(refresh), \"access_token\": str(refresh.access_token)}\n )\n return Response(serializer.data)\n\n @action(detail=False, methods=[\"post\"])\n @swagger_auto_schema(\n operation_description=\"API endpoint to exchange \"\n + \"refresh token to fresh access token\",\n responses={200: TokenResponseSerializer()},\n request_body=RefreshTokenRequestSerializer,\n manual_parameters=[],\n security=[],\n )\n def refresh(self, request):\n serializer_input = RefreshTokenRequestSerializer(data=request.data)\n serializer_input.is_valid(raise_exception=True)\n refresh = RefreshToken(serializer_input.validated_data[\"refresh_token\"])\n refresh.set_jti()\n refresh.set_exp()\n serializer = TokenResponseSerializer(\n {\"refresh_token\": str(refresh), \"access_token\": str(refresh.access_token)}\n )\n return Response(serializer.data)\n", "path": "backend-project/small_eod/users/views.py"}, {"content": "\"\"\"\nDjango settings for small_eod project.\n\nGenerated by 'django-admin startproject' using Django 3.0.1.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.0/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.0/ref/settings/\n\"\"\"\n\nimport os\n\nimport environ\n\nenv = environ.Env()\nenv.read_env()\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nDEBUG = False\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\n\n# SECURITY WARNING: don't run with debug turned on in production!\n\nALLOWED_HOSTS = []\nUSE_X_FORWARDED_HOST = True\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django_filters\",\n \"rest_framework\",\n \"drf_yasg2\",\n \"teryt_tree\",\n \"fullurl\",\n \"small_eod.users\",\n \"small_eod.generic\",\n \"small_eod.tags\",\n \"small_eod.cases\",\n \"small_eod.features\",\n \"small_eod.channels\",\n \"small_eod.institutions\",\n \"small_eod.collections\",\n \"small_eod.files\",\n \"small_eod.letters\",\n \"small_eod.notes\",\n \"small_eod.events\",\n \"small_eod.administrative_units\",\n \"small_eod.authkey\",\n \"small_eod.migration_v1\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n 
\"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n\n# Database\n# https://docs.djangoproject.com/en/3.0/ref/settings/#databases\n\nDATABASES = {\n \"default\": env.db(),\n \"migration\": env.db(\"MIGRATION_DATABASE_URL\"),\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\"\n },\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\"},\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.0/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.0/howto/static-files/\n\nSTATIC_URL = \"/static/\"\nSTATIC_ROOT = os.path.join(BASE_DIR, \"static\")\nMEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\nAUTH_USER_MODEL = \"users.User\"\n\nSWAGGER_SETTINGS = {\n \"DEFAULT_INFO\": \"config.swagger.info\",\n \"SECURITY_DEFINITIONS\": {\n \"Basic\": {\"type\": \"basic\"},\n \"Bearer\": {\"type\": \"apiKey\", \"name\": \"Authorization\", \"in\": \"header\"},\n \"CollectionToken\": {\"type\": \"apiKey\", \"name\": \"authorization\", \"in\": \"query\"},\n },\n \"SECURITY_REQUIREMENTS\": [{\"Basic\": []}, {\"Bearer\": []}],\n}\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": [\n \"rest_framework.authentication.BasicAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework_simplejwt.authentication.JWTAuthentication\",\n ],\n \"DEFAULT_PERMISSION_CLASSES\": [\"rest_framework.permissions.IsAuthenticated\"],\n \"DEFAULT_RENDERER_CLASSES\": (\n \"djangorestframework_camel_case.render.CamelCaseJSONRenderer\",\n \"djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer\",\n ),\n \"DEFAULT_PARSER_CLASSES\": (\n \"djangorestframework_camel_case.parser.CamelCaseJSONParser\",\n ),\n \"DEFAULT_PAGINATION_CLASS\": \"rest_framework.pagination.LimitOffsetPagination\",\n \"PAGE_SIZE\": 20,\n \"DEFAULT_FILTER_BACKENDS\": [\"django_filters.rest_framework.DjangoFilterBackend\"],\n}\n\nSOCIAL_AUTH_GOOGLE_OAUTH2_KEY = env(\"SOCIAL_AUTH_GOOGLE_OAUTH2_KEY\")\nSOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = env(\"SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET\")\nSOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\n\nMINIO_ACCESS_KEY = env(\"MINIO_ACCESS_KEY\")\nMINIO_SECRET_KEY = env(\"MINIO_SECRET_KEY\")\nMINIO_URL = env(\"MINIO_URL\")\nMINIO_BUCKET = env(\"MINIO_BUCKET\", default=\"files\")\n\n# Very basic logging config\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": \"WARNING\",\n },\n \"loggers\": {\n \"migrator\": {\n \"handlers\": [\"console\"],\n \"level\": \"DEBUG\",\n \"propagate\": False,\n },\n },\n}\n", "path": "backend-project/config/settings/base.py"}, {"content": "from requests_oauthlib import OAuth2Session\n\n\nclass GoogleProvider:\n authorization_base_url = 
\"https://accounts.google.com/o/oauth2/v2/auth\"\n token_url = \"https://www.googleapis.com/oauth2/v4/token\"\n userinfo_url = \"https://www.googleapis.com/oauth2/v1/userinfo\"\n\n def __init__(self, client_id, client_secret, scopes):\n self.client_id = client_id\n self.client_secret = client_secret\n self.scopes = scopes\n\n def get_oauthlib(self, request):\n return OAuth2Session(\n client_id=self.client_id,\n scope=self.scopes,\n redirect_uri=request.build_absolute_uri(\"/login/callback\"),\n )\n\n def callback_url(self, request):\n google = self.get_oauthlib(request)\n return google.authorization_url(url=self.authorization_base_url)\n\n def exchange(self, request):\n google = self.get_oauthlib(request)\n google.fetch_token(\n token_url=self.token_url,\n client_secret=self.client_secret,\n authorization_response=request.build_absolute_uri(),\n )\n resp = google.get(self.userinfo_url)\n return resp.json()\n", "path": "backend-project/small_eod/users/providers.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom drf_yasg2 import openapi\nfrom drf_yasg2.utils import swagger_auto_schema\nfrom rest_framework import viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.filters import OrderingFilter\nfrom rest_framework.permissions import AllowAny\nfrom rest_framework.response import Response\nfrom rest_framework_simplejwt.tokens import RefreshToken\n\nfrom .filterset import UserFilterSet\nfrom .providers import get_provider_cls\nfrom .serializers import (\n RefreshTokenRequestSerializer,\n RequestSerializer,\n TokenResponseSerializer,\n User,\n UserSerializer,\n)\n\n\nclass UserViewSet(viewsets.ModelViewSet):\n \"\"\"\n API endpoint that allows users to be viewed or edited.\n \"\"\"\n\n queryset = User.objects.all()\n serializer_class = UserSerializer\n provider = get_provider_cls()(\n client_id=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_KEY,\n client_secret=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET,\n scopes=settings.SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE,\n )\n filter_backends = (DjangoFilterBackend, OrderingFilter)\n filterset_class = UserFilterSet\n ordering_fields = [\n \"id\",\n \"username\",\n \"email\",\n \"first_name\",\n \"last_name\",\n ]\n\n def get_permissions(self):\n if self.action in [\"auth\", \"exchange\", \"refresh\"]:\n return [\n AllowAny(),\n ]\n return super().get_permissions()\n\n @swagger_auto_schema(\n method=\"get\",\n operation_description=\"API endpoint to receive URI for OAuth authorization url\",\n responses={200: RequestSerializer()},\n manual_parameters=[],\n security=[],\n )\n @action(detail=False)\n def auth(self, request):\n authorization_url, state = self.provider.callback_url(request)\n request.session[\"state\"] = state\n serializer = RequestSerializer({\"url\": authorization_url})\n return Response(serializer.data)\n\n @action(detail=False)\n @swagger_auto_schema(\n operation_description=\"API endpoint to exchange \"\n + \"authorization code to access token\",\n responses={200: TokenResponseSerializer()},\n manual_parameters=[\n openapi.Parameter(\"authuser\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n openapi.Parameter(\"code\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n openapi.Parameter(\"prompt\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n openapi.Parameter(\n \"scope\",\n openapi.IN_QUERY,\n description=\"scope of OAuth consents\",\n type=openapi.TYPE_STRING,\n ),\n openapi.Parameter(\"state\", openapi.IN_QUERY, type=openapi.TYPE_STRING),\n ],\n 
security=[],\n )\n def exchange(self, request):\n profile = self.provider.exchange(request)\n user, _ = User.objects.get_or_create(\n defaults={\n \"username\": profile[\"email\"],\n \"first_name\": profile[\"given_name\"],\n \"last_name\": profile[\"family_name\"],\n \"email\": profile[\"email\"],\n },\n email=profile[\"email\"],\n )\n refresh = RefreshToken.for_user(user)\n serializer = TokenResponseSerializer(\n {\"refresh_token\": str(refresh), \"access_token\": str(refresh.access_token)}\n )\n return Response(serializer.data)\n\n @action(detail=False, methods=[\"post\"])\n @swagger_auto_schema(\n operation_description=\"API endpoint to exchange \"\n + \"refresh token to fresh access token\",\n responses={200: TokenResponseSerializer()},\n request_body=RefreshTokenRequestSerializer,\n manual_parameters=[],\n security=[],\n )\n def refresh(self, request):\n serializer_input = RefreshTokenRequestSerializer(data=request.data)\n serializer_input.is_valid(raise_exception=True)\n refresh = RefreshToken(serializer_input.validated_data[\"refresh_token\"])\n refresh.set_jti()\n refresh.set_exp()\n serializer = TokenResponseSerializer(\n {\"refresh_token\": str(refresh), \"access_token\": str(refresh.access_token)}\n )\n return Response(serializer.data)\n", "path": "backend-project/small_eod/users/views.py"}, {"content": "\"\"\"\nDjango settings for small_eod project.\n\nGenerated by 'django-admin startproject' using Django 3.0.1.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.0/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.0/ref/settings/\n\"\"\"\n\nimport os\n\nimport environ\n\nenv = environ.Env()\nenv.read_env()\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nDEBUG = False\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\n\n# SECURITY WARNING: don't run with debug turned on in production!\n\nALLOWED_HOSTS = []\nUSE_X_FORWARDED_HOST = True\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django_filters\",\n \"rest_framework\",\n \"drf_yasg2\",\n \"teryt_tree\",\n \"fullurl\",\n \"small_eod.users\",\n \"small_eod.generic\",\n \"small_eod.tags\",\n \"small_eod.cases\",\n \"small_eod.features\",\n \"small_eod.channels\",\n \"small_eod.institutions\",\n \"small_eod.collections\",\n \"small_eod.files\",\n \"small_eod.letters\",\n \"small_eod.notes\",\n \"small_eod.events\",\n \"small_eod.administrative_units\",\n \"small_eod.authkey\",\n \"small_eod.migration_v1\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n 
{\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n\n# Database\n# https://docs.djangoproject.com/en/3.0/ref/settings/#databases\n\nDATABASES = {\n \"default\": env.db(),\n \"migration\": env.db(\"MIGRATION_DATABASE_URL\"),\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\"\n },\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\"},\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.0/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.0/howto/static-files/\n\nSTATIC_URL = \"/static/\"\nSTATIC_ROOT = os.path.join(BASE_DIR, \"static\")\nMEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\nAUTH_USER_MODEL = \"users.User\"\n\nSWAGGER_SETTINGS = {\n \"DEFAULT_INFO\": \"config.swagger.info\",\n \"SECURITY_DEFINITIONS\": {\n \"Basic\": {\"type\": \"basic\"},\n \"Bearer\": {\"type\": \"apiKey\", \"name\": \"Authorization\", \"in\": \"header\"},\n \"CollectionToken\": {\"type\": \"apiKey\", \"name\": \"authorization\", \"in\": \"query\"},\n },\n \"SECURITY_REQUIREMENTS\": [{\"Basic\": []}, {\"Bearer\": []}],\n}\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": [\n \"rest_framework.authentication.BasicAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework_simplejwt.authentication.JWTAuthentication\",\n ],\n \"DEFAULT_PERMISSION_CLASSES\": [\"rest_framework.permissions.IsAuthenticated\"],\n \"DEFAULT_RENDERER_CLASSES\": (\n \"djangorestframework_camel_case.render.CamelCaseJSONRenderer\",\n \"djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer\",\n ),\n \"DEFAULT_PARSER_CLASSES\": (\n \"djangorestframework_camel_case.parser.CamelCaseJSONParser\",\n ),\n \"DEFAULT_PAGINATION_CLASS\": \"rest_framework.pagination.LimitOffsetPagination\",\n \"PAGE_SIZE\": 20,\n \"DEFAULT_FILTER_BACKENDS\": [\"django_filters.rest_framework.DjangoFilterBackend\"],\n}\n\nSOCIAL_AUTH_GOOGLE_OAUTH2_KEY = env(\"SOCIAL_AUTH_GOOGLE_OAUTH2_KEY\")\nSOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = env(\"SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET\")\nSOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nSOCIAL_AUTH_USE_FAKE_OAUTH = env(\"SOCIAL_AUTH_USE_FAKE_OAUTH\", default=False)\n\nMINIO_ACCESS_KEY = env(\"MINIO_ACCESS_KEY\")\nMINIO_SECRET_KEY = env(\"MINIO_SECRET_KEY\")\nMINIO_URL = env(\"MINIO_URL\")\nMINIO_BUCKET = env(\"MINIO_BUCKET\", default=\"files\")\n\n# Very basic logging config\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n },\n },\n \"root\": {\n \"handlers\": 
[\"console\"],\n \"level\": \"WARNING\",\n },\n \"loggers\": {\n \"migrator\": {\n \"handlers\": [\"console\"],\n \"level\": \"DEBUG\",\n \"propagate\": False,\n },\n },\n}\n", "path": "backend-project/config/settings/base.py"}, {"content": "from django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom requests_oauthlib import OAuth2Session\n\n\nclass GoogleProvider:\n authorization_base_url = \"https://accounts.google.com/o/oauth2/v2/auth\"\n token_url = \"https://www.googleapis.com/oauth2/v4/token\"\n userinfo_url = \"https://www.googleapis.com/oauth2/v1/userinfo\"\n\n def __init__(self, client_id, client_secret, scopes):\n self.client_id = client_id\n self.client_secret = client_secret\n self.scopes = scopes\n\n def get_oauthlib(self, request):\n return OAuth2Session(\n client_id=self.client_id,\n scope=self.scopes,\n redirect_uri=request.build_absolute_uri(\"/login/callback\"),\n )\n\n def callback_url(self, request):\n google = self.get_oauthlib(request)\n return google.authorization_url(url=self.authorization_base_url)\n\n def exchange(self, request):\n google = self.get_oauthlib(request)\n google.fetch_token(\n token_url=self.token_url,\n client_secret=self.client_secret,\n authorization_response=request.build_absolute_uri(),\n )\n resp = google.get(self.userinfo_url)\n return resp.json()\n\n\nclass FakeProvider:\n base_url = \"https://localhost:5678\"\n\n def __init__(self, *args, **kwargs):\n pass\n\n def callback_url(self, request):\n # Hardcode localhost - the provider is expected to be used only in local\n # deployments.\n # `build_absolute_uri` doesn't work, because it produces a docker\n # friendly url.\n redirect_uri = \"http://localhost:8000/login/callback\"\n return f\"{self.base_url}?redirect_uri={redirect_uri}\", None\n\n def exchange(self, request):\n # Hardcoded values.\n # Simple, but working.\n return {\n \"email\": \"[email protected]\",\n \"given_name\": \"GivenName\",\n \"family_name\": \"FamilyName\",\n }\n\n\ndef get_provider_cls():\n flag_value = settings.SOCIAL_AUTH_USE_FAKE_OAUTH\n if flag_value is True:\n if not settings.DEBUG:\n raise ImproperlyConfigured(\"Fake oauth may only be used in DEBUG mode\")\n return FakeProvider\n elif flag_value is False:\n return GoogleProvider\n else:\n raise ImproperlyConfigured(\n f\"Fake oauth must be either True or False, is {flag_value}\"\n )\n", "path": "backend-project/small_eod/users/providers.py"}]}
| 3,886 | 736 |
gh_patches_debug_30793 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-28849 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Tver] Can't download Fuji TV video
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.04.07**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-f', 'best', 'https://tver.jp/corner/f0072083', '-o', 'D:\\video\\download\\a.mp4', '-v']
[debug] Encodings: locale cp932, fs mbcs, out cp932, pref cp932
[debug] youtube-dl version 2021.04.07
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.19041
[debug] exe versions: ffmpeg 4.2, ffprobe 4.2
[debug] Proxy map: {}
[TVer] Downloading JSON metadata
[TVer] f0072083: Downloading JSON metadata
[FujiTVFODPlus7] 6191645753001: Downloading m3u8 information
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
## Description
[TVer](tver.jp) is a Japanese video site on which several TV stations post their videos.
I can no longer download videos from the station Fuji TV. I think the cause is a specification change: Fuji TV's delivery has become the same as that of the other stations (https://tver.jp/info/notice/3137.html).
Could you please add support for the new specification?
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/tver.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .common import InfoExtractor
7 from ..compat import compat_str
8 from ..utils import (
9 int_or_none,
10 remove_start,
11 smuggle_url,
12 strip_or_none,
13 try_get,
14 )
15
16
17 class TVerIE(InfoExtractor):
18 _VALID_URL = r'https?://(?:www\.)?tver\.jp/(?P<path>(?:corner|episode|feature)/(?P<id>f?\d+))'
19 # videos are only available for 7 days
20 _TESTS = [{
21 'url': 'https://tver.jp/corner/f0062178',
22 'only_matching': True,
23 }, {
24 'url': 'https://tver.jp/feature/f0062413',
25 'only_matching': True,
26 }, {
27 'url': 'https://tver.jp/episode/79622438',
28 'only_matching': True,
29 }, {
30 # subtitle = ' '
31 'url': 'https://tver.jp/corner/f0068870',
32 'only_matching': True,
33 }]
34 _TOKEN = None
35 BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'
36
37 def _real_initialize(self):
38 self._TOKEN = self._download_json(
39 'https://tver.jp/api/access_token.php', None)['token']
40
41 def _real_extract(self, url):
42 path, video_id = re.match(self._VALID_URL, url).groups()
43 main = self._download_json(
44 'https://api.tver.jp/v4/' + path, video_id,
45 query={'token': self._TOKEN})['main']
46 p_id = main['publisher_id']
47 service = remove_start(main['service'], 'ts_')
48 info = {
49 '_type': 'url_transparent',
50 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),
51 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),
52 }
53
54 if service == 'cx':
55 title = main['title']
56 subtitle = strip_or_none(main.get('subtitle'))
57 if subtitle:
58 title += ' - ' + subtitle
59 info.update({
60 'title': title,
61 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),
62 'ie_key': 'FujiTVFODPlus7',
63 })
64 else:
65 r_id = main['reference_id']
66 if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):
67 r_id = 'ref:' + r_id
68 bc_url = smuggle_url(
69 self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),
70 {'geo_countries': ['JP']})
71 info.update({
72 'url': bc_url,
73 'ie_key': 'BrightcoveNew',
74 })
75
76 return info
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/youtube_dl/extractor/tver.py b/youtube_dl/extractor/tver.py
--- a/youtube_dl/extractor/tver.py
+++ b/youtube_dl/extractor/tver.py
@@ -9,7 +9,6 @@
int_or_none,
remove_start,
smuggle_url,
- strip_or_none,
try_get,
)
@@ -45,32 +44,18 @@
query={'token': self._TOKEN})['main']
p_id = main['publisher_id']
service = remove_start(main['service'], 'ts_')
- info = {
+
+ r_id = main['reference_id']
+ if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):
+ r_id = 'ref:' + r_id
+ bc_url = smuggle_url(
+ self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),
+ {'geo_countries': ['JP']})
+
+ return {
'_type': 'url_transparent',
'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),
'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),
+ 'url': bc_url,
+ 'ie_key': 'BrightcoveNew',
}
-
- if service == 'cx':
- title = main['title']
- subtitle = strip_or_none(main.get('subtitle'))
- if subtitle:
- title += ' - ' + subtitle
- info.update({
- 'title': title,
- 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),
- 'ie_key': 'FujiTVFODPlus7',
- })
- else:
- r_id = main['reference_id']
- if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):
- r_id = 'ref:' + r_id
- bc_url = smuggle_url(
- self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),
- {'geo_countries': ['JP']})
- info.update({
- 'url': bc_url,
- 'ie_key': 'BrightcoveNew',
- })
-
- return info
|
{"golden_diff": "diff --git a/youtube_dl/extractor/tver.py b/youtube_dl/extractor/tver.py\n--- a/youtube_dl/extractor/tver.py\n+++ b/youtube_dl/extractor/tver.py\n@@ -9,7 +9,6 @@\n int_or_none,\n remove_start,\n smuggle_url,\n- strip_or_none,\n try_get,\n )\n \n@@ -45,32 +44,18 @@\n query={'token': self._TOKEN})['main']\n p_id = main['publisher_id']\n service = remove_start(main['service'], 'ts_')\n- info = {\n+\n+ r_id = main['reference_id']\n+ if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n+ r_id = 'ref:' + r_id\n+ bc_url = smuggle_url(\n+ self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n+ {'geo_countries': ['JP']})\n+\n+ return {\n '_type': 'url_transparent',\n 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),\n 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),\n+ 'url': bc_url,\n+ 'ie_key': 'BrightcoveNew',\n }\n-\n- if service == 'cx':\n- title = main['title']\n- subtitle = strip_or_none(main.get('subtitle'))\n- if subtitle:\n- title += ' - ' + subtitle\n- info.update({\n- 'title': title,\n- 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),\n- 'ie_key': 'FujiTVFODPlus7',\n- })\n- else:\n- r_id = main['reference_id']\n- if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n- r_id = 'ref:' + r_id\n- bc_url = smuggle_url(\n- self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n- {'geo_countries': ['JP']})\n- info.update({\n- 'url': bc_url,\n- 'ie_key': 'BrightcoveNew',\n- })\n-\n- return info\n", "issue": "[Tver] Can`t download Fuji TV video \n<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.\r\n- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.\r\n- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.\r\n- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. 
DO NOT post duplicates.\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [x] I'm reporting a broken site support\r\n- [x] I've verified that I'm running youtube-dl version **2021.04.07**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['-f', 'best', 'https://tver.jp/corner/f0072083', '-o', 'D:\\\\video\\\\download\\\\a.mp4', '-v']\r\n[debug] Encodings: locale cp932, fs mbcs, out cp932, pref cp932\r\n[debug] youtube-dl version 2021.04.07\r\n[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.19041\r\n[debug] exe versions: ffmpeg 4.2, ffprobe 4.2\r\n[debug] Proxy map: {}\r\n[TVer] Downloading JSON metadata\r\n[TVer] f0072083: Downloading JSON metadata\r\n[FujiTVFODPlus7] 6191645753001: Downloading m3u8 information\r\nERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\n```\r\n\r\n## Description\r\n\r\n[TVer](tver.jp) is Japanese video site. Some TV stations are on this site posting a video.\r\n\r\nI can no longer download videos from a TV station called Fuji TV. I think the cause is a specification change. it become the same as any other TV station. (https://tver.jp/info/notice/3137.html) \r\nCan you please support a new specification.\r\nThanks. \n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_str\nfrom ..utils import (\n int_or_none,\n remove_start,\n smuggle_url,\n strip_or_none,\n try_get,\n)\n\n\nclass TVerIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?tver\\.jp/(?P<path>(?:corner|episode|feature)/(?P<id>f?\\d+))'\n # videos are only available for 7 days\n _TESTS = [{\n 'url': 'https://tver.jp/corner/f0062178',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/feature/f0062413',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/episode/79622438',\n 'only_matching': True,\n }, {\n # subtitle = ' '\n 'url': 'https://tver.jp/corner/f0068870',\n 'only_matching': True,\n }]\n _TOKEN = None\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'\n\n def _real_initialize(self):\n self._TOKEN = self._download_json(\n 'https://tver.jp/api/access_token.php', None)['token']\n\n def _real_extract(self, url):\n path, video_id = re.match(self._VALID_URL, url).groups()\n main = self._download_json(\n 'https://api.tver.jp/v4/' + path, video_id,\n query={'token': self._TOKEN})['main']\n p_id = main['publisher_id']\n service = remove_start(main['service'], 'ts_')\n info = {\n '_type': 'url_transparent',\n 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),\n 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),\n }\n\n if service == 'cx':\n title = main['title']\n subtitle = strip_or_none(main.get('subtitle'))\n if subtitle:\n title += ' - ' + subtitle\n info.update({\n 'title': title,\n 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),\n 'ie_key': 
'FujiTVFODPlus7',\n })\n else:\n r_id = main['reference_id']\n if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n r_id = 'ref:' + r_id\n bc_url = smuggle_url(\n self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n {'geo_countries': ['JP']})\n info.update({\n 'url': bc_url,\n 'ie_key': 'BrightcoveNew',\n })\n\n return info\n", "path": "youtube_dl/extractor/tver.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_str\nfrom ..utils import (\n int_or_none,\n remove_start,\n smuggle_url,\n try_get,\n)\n\n\nclass TVerIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?tver\\.jp/(?P<path>(?:corner|episode|feature)/(?P<id>f?\\d+))'\n # videos are only available for 7 days\n _TESTS = [{\n 'url': 'https://tver.jp/corner/f0062178',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/feature/f0062413',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/episode/79622438',\n 'only_matching': True,\n }, {\n # subtitle = ' '\n 'url': 'https://tver.jp/corner/f0068870',\n 'only_matching': True,\n }]\n _TOKEN = None\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'\n\n def _real_initialize(self):\n self._TOKEN = self._download_json(\n 'https://tver.jp/api/access_token.php', None)['token']\n\n def _real_extract(self, url):\n path, video_id = re.match(self._VALID_URL, url).groups()\n main = self._download_json(\n 'https://api.tver.jp/v4/' + path, video_id,\n query={'token': self._TOKEN})['main']\n p_id = main['publisher_id']\n service = remove_start(main['service'], 'ts_')\n\n r_id = main['reference_id']\n if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n r_id = 'ref:' + r_id\n bc_url = smuggle_url(\n self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n {'geo_countries': ['JP']})\n\n return {\n '_type': 'url_transparent',\n 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),\n 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),\n 'url': bc_url,\n 'ie_key': 'BrightcoveNew',\n }\n", "path": "youtube_dl/extractor/tver.py"}]}
| 1,811 | 545 |
gh_patches_debug_3979 | rasdani/github-patches | git_diff | pyca__cryptography-1246 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need binding to void GENERAL_NAMES_free(GENERAL_NAMES *)
The d2i call on the altSubjectName extension returns a dynamically allocated memory object that must be freed, so a binding for GENERAL_NAMES_free should be exposed from hazmat so that higher-level code can avoid memory leaks. I am not sure which module should expose the binding, but I used the x509v3.py module in the proposed solution https://github.com/crc32a/cryptography/commit/24df02646de1e5c1773c9048076b5d67d4c5c0fa
This affects pyopenssl issue https://github.com/pyca/pyopenssl/issues/139; an example of using it to avoid memory leaks is
https://github.com/rackerlabs/pyopenssl/commit/a479a74820619da13dfab8925cf49c4f766b6536
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/hazmat/bindings/openssl/x509v3.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 INCLUDES = """
17 #include <openssl/x509v3.h>
18 """
19
20 TYPES = """
21 typedef struct {
22 X509 *issuer_cert;
23 X509 *subject_cert;
24 ...;
25 } X509V3_CTX;
26
27 typedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);
28
29 typedef struct {
30 ASN1_ITEM_EXP *it;
31 X509V3_EXT_D2I d2i;
32 ...;
33 } X509V3_EXT_METHOD;
34
35 static const int GEN_OTHERNAME;
36 static const int GEN_EMAIL;
37 static const int GEN_X400;
38 static const int GEN_DNS;
39 static const int GEN_URI;
40 static const int GEN_DIRNAME;
41 static const int GEN_EDIPARTY;
42 static const int GEN_IPADD;
43 static const int GEN_RID;
44
45 typedef struct {
46 ...;
47 } OTHERNAME;
48
49 typedef struct {
50 ...;
51 } EDIPARTYNAME;
52
53 typedef struct {
54 int type;
55 union {
56 char *ptr;
57 OTHERNAME *otherName; /* otherName */
58 ASN1_IA5STRING *rfc822Name;
59 ASN1_IA5STRING *dNSName;
60 ASN1_TYPE *x400Address;
61 X509_NAME *directoryName;
62 EDIPARTYNAME *ediPartyName;
63 ASN1_IA5STRING *uniformResourceIdentifier;
64 ASN1_OCTET_STRING *iPAddress;
65 ASN1_OBJECT *registeredID;
66
67 /* Old names */
68 ASN1_OCTET_STRING *ip; /* iPAddress */
69 X509_NAME *dirn; /* dirn */
70 ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */
71 /* uniformResourceIdentifier */
72 ASN1_OBJECT *rid; /* registeredID */
73 ASN1_TYPE *other; /* x400Address */
74 } d;
75 ...;
76 } GENERAL_NAME;
77
78 typedef struct stack_st_GENERAL_NAME GENERAL_NAMES;
79 """
80
81 FUNCTIONS = """
82 void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);
83 X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);
84 int GENERAL_NAME_print(BIO *, GENERAL_NAME *);
85 """
86
87 MACROS = """
88 void *X509V3_set_ctx_nodb(X509V3_CTX *);
89 int sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);
90 int sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);
91 GENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);
92
93 /* These aren't macros these functions are all const X on openssl > 1.0.x */
94 const X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);
95 const X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);
96 """
97
98 CUSTOMIZATIONS = """
99 """
100
101 CONDITIONAL_NAMES = {}
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cryptography/hazmat/bindings/openssl/x509v3.py b/cryptography/hazmat/bindings/openssl/x509v3.py
--- a/cryptography/hazmat/bindings/openssl/x509v3.py
+++ b/cryptography/hazmat/bindings/openssl/x509v3.py
@@ -82,6 +82,7 @@
void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);
X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);
int GENERAL_NAME_print(BIO *, GENERAL_NAME *);
+void GENERAL_NAMES_free(GENERAL_NAMES *);
"""
MACROS = """
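
With the binding exposed, higher-level code can attach the free function to the decoded object so the allocation made by d2i is released automatically. A minimal sketch, assuming `ffi` and `lib` come from the compiled hazmat binding and that the extension's DER payload (`ext_data`, an `ASN1_OCTET_STRING`) was obtained elsewhere:

```python
# Minimal sketch - the helper name and its arguments are illustrative.
def decode_general_names(extension, ext_data):
    method = lib.X509V3_EXT_get(extension)  # binding shown in x509v3.py above
    data_ptr = ffi.new("unsigned char **", ext_data.data)
    names = ffi.cast("GENERAL_NAMES *",
                     method.d2i(ffi.NULL, data_ptr, ext_data.length))
    # Tie the newly bound GENERAL_NAMES_free to the object's lifetime so the
    # d2i allocation does not leak (same pattern as the linked pyopenssl fix).
    return ffi.gc(names, lib.GENERAL_NAMES_free)
```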
|
{"golden_diff": "diff --git a/cryptography/hazmat/bindings/openssl/x509v3.py b/cryptography/hazmat/bindings/openssl/x509v3.py\n--- a/cryptography/hazmat/bindings/openssl/x509v3.py\n+++ b/cryptography/hazmat/bindings/openssl/x509v3.py\n@@ -82,6 +82,7 @@\n void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\n X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\n int GENERAL_NAME_print(BIO *, GENERAL_NAME *);\n+void GENERAL_NAMES_free(GENERAL_NAMES *);\n \"\"\"\n \n MACROS = \"\"\"\n", "issue": "Need binding to void GENERAL_NAMES_free(GENERAL_NAMES *)\nthe function call to d2i methods on the altSubjectName extension returned a dynamicly allocated memory object that must be garbage collected so binding for GENERAL_NAMES_free should be exposed from hazmat so that higher level code can avoid memory leaks. Not sure which module should expose the binding but I used x509v3.py module in the Proposed solution https://github.com/crc32a/cryptography/commit/24df02646de1e5c1773c9048076b5d67d4c5c0fa\n\nthis effects issue https://github.com/pyca/pyopenssl/issues/139 of pyopenssl and an example of its usage to avoid memory leaks is\nhttps://github.com/rackerlabs/pyopenssl/commit/a479a74820619da13dfab8925cf49c4f766b6536\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <openssl/x509v3.h>\n\"\"\"\n\nTYPES = \"\"\"\ntypedef struct {\n X509 *issuer_cert;\n X509 *subject_cert;\n ...;\n} X509V3_CTX;\n\ntypedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);\n\ntypedef struct {\n ASN1_ITEM_EXP *it;\n X509V3_EXT_D2I d2i;\n ...;\n} X509V3_EXT_METHOD;\n\nstatic const int GEN_OTHERNAME;\nstatic const int GEN_EMAIL;\nstatic const int GEN_X400;\nstatic const int GEN_DNS;\nstatic const int GEN_URI;\nstatic const int GEN_DIRNAME;\nstatic const int GEN_EDIPARTY;\nstatic const int GEN_IPADD;\nstatic const int GEN_RID;\n\ntypedef struct {\n ...;\n} OTHERNAME;\n\ntypedef struct {\n ...;\n} EDIPARTYNAME;\n\ntypedef struct {\n int type;\n union {\n char *ptr;\n OTHERNAME *otherName; /* otherName */\n ASN1_IA5STRING *rfc822Name;\n ASN1_IA5STRING *dNSName;\n ASN1_TYPE *x400Address;\n X509_NAME *directoryName;\n EDIPARTYNAME *ediPartyName;\n ASN1_IA5STRING *uniformResourceIdentifier;\n ASN1_OCTET_STRING *iPAddress;\n ASN1_OBJECT *registeredID;\n\n /* Old names */\n ASN1_OCTET_STRING *ip; /* iPAddress */\n X509_NAME *dirn; /* dirn */\n ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */\n /* uniformResourceIdentifier */\n ASN1_OBJECT *rid; /* registeredID */\n ASN1_TYPE *other; /* x400Address */\n } d;\n ...;\n} GENERAL_NAME;\n\ntypedef struct stack_st_GENERAL_NAME GENERAL_NAMES;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nvoid X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\nX509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\nint GENERAL_NAME_print(BIO *, GENERAL_NAME *);\n\"\"\"\n\nMACROS = \"\"\"\nvoid 
*X509V3_set_ctx_nodb(X509V3_CTX *);\nint sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);\nint sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);\nGENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);\n\n/* These aren't macros these functions are all const X on openssl > 1.0.x */\nconst X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);\nconst X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n\"\"\"\n\nCONDITIONAL_NAMES = {}\n", "path": "cryptography/hazmat/bindings/openssl/x509v3.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <openssl/x509v3.h>\n\"\"\"\n\nTYPES = \"\"\"\ntypedef struct {\n X509 *issuer_cert;\n X509 *subject_cert;\n ...;\n} X509V3_CTX;\n\ntypedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);\n\ntypedef struct {\n ASN1_ITEM_EXP *it;\n X509V3_EXT_D2I d2i;\n ...;\n} X509V3_EXT_METHOD;\n\nstatic const int GEN_OTHERNAME;\nstatic const int GEN_EMAIL;\nstatic const int GEN_X400;\nstatic const int GEN_DNS;\nstatic const int GEN_URI;\nstatic const int GEN_DIRNAME;\nstatic const int GEN_EDIPARTY;\nstatic const int GEN_IPADD;\nstatic const int GEN_RID;\n\ntypedef struct {\n ...;\n} OTHERNAME;\n\ntypedef struct {\n ...;\n} EDIPARTYNAME;\n\ntypedef struct {\n int type;\n union {\n char *ptr;\n OTHERNAME *otherName; /* otherName */\n ASN1_IA5STRING *rfc822Name;\n ASN1_IA5STRING *dNSName;\n ASN1_TYPE *x400Address;\n X509_NAME *directoryName;\n EDIPARTYNAME *ediPartyName;\n ASN1_IA5STRING *uniformResourceIdentifier;\n ASN1_OCTET_STRING *iPAddress;\n ASN1_OBJECT *registeredID;\n\n /* Old names */\n ASN1_OCTET_STRING *ip; /* iPAddress */\n X509_NAME *dirn; /* dirn */\n ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */\n /* uniformResourceIdentifier */\n ASN1_OBJECT *rid; /* registeredID */\n ASN1_TYPE *other; /* x400Address */\n } d;\n ...;\n} GENERAL_NAME;\n\ntypedef struct stack_st_GENERAL_NAME GENERAL_NAMES;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nvoid X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\nX509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\nint GENERAL_NAME_print(BIO *, GENERAL_NAME *);\nvoid GENERAL_NAMES_free(GENERAL_NAMES *);\n\"\"\"\n\nMACROS = \"\"\"\nvoid *X509V3_set_ctx_nodb(X509V3_CTX *);\nint sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);\nint sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);\nGENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);\n\n/* These aren't macros these functions are all const X on openssl > 1.0.x */\nconst X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);\nconst X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n\"\"\"\n\nCONDITIONAL_NAMES = {}\n", "path": "cryptography/hazmat/bindings/openssl/x509v3.py"}]}
| 1,493 | 186 |
gh_patches_debug_5324
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-968
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEAT][CV] Add conditions to checks missing conditions
Some checks are missing conditions:
- [x] Heatmap
- [x] Image Drift
- [x] Train Test Drift
- [x] Robustness
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/vision/suites/default_suites.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """Functions for loading the default (built-in) vision suites for various validation stages.
12
13 Each function returns a new suite that is initialized with a list of checks and default conditions.
14 It is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.
15 """
16 from deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \
17 MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \
18 RobustnessReport, TrainTestPredictionDrift
19 from deepchecks.vision import Suite
20
21
22 __all__ = ['train_test_validation', 'model_evaluation', 'full_suite']
23
24 from deepchecks.vision.checks.distribution import HeatmapComparison
25
26
27 def train_test_validation() -> Suite:
28 """Create a suite that is meant to validate correctness of train-test split, including integrity, \
29 distribution and leakage checks."""
30 return Suite(
31 'Train Test Validation Suite',
32 HeatmapComparison(),
33 TrainTestLabelDrift(),
34 TrainTestPredictionDrift(),
35 ImagePropertyDrift().add_condition_drift_score_not_greater_than(),
36 ImageDatasetDrift()
37 )
38
39
40 def model_evaluation() -> Suite:
41 """Create a suite that is meant to test model performance and overfit."""
42 return Suite(
43 'Model Evaluation Suite',
44 ClassPerformance(),
45 MeanAveragePrecisionReport(),
46 MeanAverageRecallReport(),
47 SimpleModelComparison(),
48 ConfusionMatrixReport(),
49 RobustnessReport().add_condition_degradation_not_greater_than()
50 )
51
52
53 def full_suite() -> Suite:
54 """Create a suite that includes many of the implemented checks, for a quick overview of your model and data."""
55 return Suite(
56 'Full Suite',
57 model_evaluation(),
58 train_test_validation(),
59 )
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/vision/suites/default_suites.py b/deepchecks/vision/suites/default_suites.py
--- a/deepchecks/vision/suites/default_suites.py
+++ b/deepchecks/vision/suites/default_suites.py
@@ -31,7 +31,7 @@
'Train Test Validation Suite',
HeatmapComparison(),
TrainTestLabelDrift(),
- TrainTestPredictionDrift(),
+ TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),
ImagePropertyDrift().add_condition_drift_score_not_greater_than(),
ImageDatasetDrift()
)
|
{"golden_diff": "diff --git a/deepchecks/vision/suites/default_suites.py b/deepchecks/vision/suites/default_suites.py\n--- a/deepchecks/vision/suites/default_suites.py\n+++ b/deepchecks/vision/suites/default_suites.py\n@@ -31,7 +31,7 @@\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n- TrainTestPredictionDrift(),\n+ TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n", "issue": "[FEAT][CV] Add conditions to checks missing conditions\nSome checks are missing conditions:\r\n\r\n- [x] Heatmap\r\n- [x] Image Drift\r\n- [x] Train Test Drift\r\n- [x] Robustness \n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Functions for loading the default (built-in) vision suites for various validation stages.\n\nEach function returns a new suite that is initialized with a list of checks and default conditions.\nIt is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.\n\"\"\"\nfrom deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \\\n MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \\\n RobustnessReport, TrainTestPredictionDrift\nfrom deepchecks.vision import Suite\n\n\n__all__ = ['train_test_validation', 'model_evaluation', 'full_suite']\n\nfrom deepchecks.vision.checks.distribution import HeatmapComparison\n\n\ndef train_test_validation() -> Suite:\n \"\"\"Create a suite that is meant to validate correctness of train-test split, including integrity, \\\n distribution and leakage checks.\"\"\"\n return Suite(\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n TrainTestPredictionDrift(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n\n\ndef model_evaluation() -> Suite:\n \"\"\"Create a suite that is meant to test model performance and overfit.\"\"\"\n return Suite(\n 'Model Evaluation Suite',\n ClassPerformance(),\n MeanAveragePrecisionReport(),\n MeanAverageRecallReport(),\n SimpleModelComparison(),\n ConfusionMatrixReport(),\n RobustnessReport().add_condition_degradation_not_greater_than()\n )\n\n\ndef full_suite() -> Suite:\n \"\"\"Create a suite that includes many of the implemented checks, for a quick overview of your model and data.\"\"\"\n return Suite(\n 'Full Suite',\n model_evaluation(),\n train_test_validation(),\n )\n", "path": "deepchecks/vision/suites/default_suites.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with 
Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Functions for loading the default (built-in) vision suites for various validation stages.\n\nEach function returns a new suite that is initialized with a list of checks and default conditions.\nIt is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.\n\"\"\"\nfrom deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \\\n MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \\\n RobustnessReport, TrainTestPredictionDrift\nfrom deepchecks.vision import Suite\n\n\n__all__ = ['train_test_validation', 'model_evaluation', 'full_suite']\n\nfrom deepchecks.vision.checks.distribution import HeatmapComparison\n\n\ndef train_test_validation() -> Suite:\n \"\"\"Create a suite that is meant to validate correctness of train-test split, including integrity, \\\n distribution and leakage checks.\"\"\"\n return Suite(\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n\n\ndef model_evaluation() -> Suite:\n \"\"\"Create a suite that is meant to test model performance and overfit.\"\"\"\n return Suite(\n 'Model Evaluation Suite',\n ClassPerformance(),\n MeanAveragePrecisionReport(),\n MeanAverageRecallReport(),\n SimpleModelComparison(),\n ConfusionMatrixReport(),\n RobustnessReport().add_condition_degradation_not_greater_than()\n )\n\n\ndef full_suite() -> Suite:\n \"\"\"Create a suite that includes many of the implemented checks, for a quick overview of your model and data.\"\"\"\n return Suite(\n 'Full Suite',\n model_evaluation(),\n train_test_validation(),\n )\n", "path": "deepchecks/vision/suites/default_suites.py"}]}
| 908 | 143 |
gh_patches_debug_35089
|
rasdani/github-patches
|
git_diff
|
aio-libs__aiohttp-2237
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'NoneType' object has no attribute 'errno'
## Long story short
Trying to resolve a domain which is an alias for another one, which does not have an A or CNAME record, raises AttributeError: 'NoneType' object has no attribute 'errno'
## Expected behaviour
Raise an error correctly, socket.gaierror probably.
## Actual behaviour
```Traceback (most recent call last):
File "xtest.py", line 16, in <module>
process()
File "/usr/lib/python3.6/asyncio/base_events.py", line 449, in run_until_complete
return future.result()
File "/usr/lib/python3.6/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/myenv/lib/python3.6/site-packages/aiohttp/helpers.py", line 72, in send
return self._coro.send(arg)
File "/myenv/lib/python3.6/site-packages/aiohttp/client.py", line 233, in _request
conn = yield from self._connector.connect(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 378, in connect
proto = yield from self._create_connection(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 687, in _create_connection
_, proto = yield from self._create_direct_connection(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 735, in _create_direct_connection
exc.errno,
AttributeError: 'NoneType' object has no attribute 'errno'
```
## Steps to reproduce
This script will reproduce the error.
```
import asyncio
import aiohttp
from aiohttp.resolver import AsyncResolver
def process():
url = 'http://esly.win/'
resolver = AsyncResolver()
conn = aiohttp.TCPConnector(resolver=resolver, verify_ssl=False)
session = aiohttp.ClientSession(connector=conn)
return session.get(url)
loop = asyncio.get_event_loop()
loop.run_until_complete(
process()
)
```
If I use the session without setting the connector it first raises a socket.gaierror but then
> During handling of the above exception, another exception occurred...
And the same traceback appears.
## Your environment
Python 3.6.0b2
Ubuntu 10.10
aiohttp==2.2.5
Also happens with aiohttp==2.3.0a0 (installed from git on 29/Aug/2017)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/resolver.py`
Content:
```
1 import asyncio
2 import socket
3
4 from .abc import AbstractResolver
5
6
7 __all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')
8
9 try:
10 import aiodns
11 # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
12 except ImportError: # pragma: no cover
13 aiodns = None
14
15 aiodns_default = False
16
17
18 class ThreadedResolver(AbstractResolver):
19 """Use Executor for synchronous getaddrinfo() calls, which defaults to
20 concurrent.futures.ThreadPoolExecutor.
21 """
22
23 def __init__(self, loop=None):
24 if loop is None:
25 loop = asyncio.get_event_loop()
26 self._loop = loop
27
28 @asyncio.coroutine
29 def resolve(self, host, port=0, family=socket.AF_INET):
30 infos = yield from self._loop.getaddrinfo(
31 host, port, type=socket.SOCK_STREAM, family=family)
32
33 hosts = []
34 for family, _, proto, _, address in infos:
35 hosts.append(
36 {'hostname': host,
37 'host': address[0], 'port': address[1],
38 'family': family, 'proto': proto,
39 'flags': socket.AI_NUMERICHOST})
40
41 return hosts
42
43 @asyncio.coroutine
44 def close(self):
45 pass
46
47
48 class AsyncResolver(AbstractResolver):
49 """Use the `aiodns` package to make asynchronous DNS lookups"""
50
51 def __init__(self, loop=None, *args, **kwargs):
52 if loop is None:
53 loop = asyncio.get_event_loop()
54
55 if aiodns is None:
56 raise RuntimeError("Resolver requires aiodns library")
57
58 self._loop = loop
59 self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)
60
61 if not hasattr(self._resolver, 'gethostbyname'):
62 # aiodns 1.1 is not available, fallback to DNSResolver.query
63 self.resolve = self.resolve_with_query
64
65 @asyncio.coroutine
66 def resolve(self, host, port=0, family=socket.AF_INET):
67 hosts = []
68 resp = yield from self._resolver.gethostbyname(host, family)
69
70 for address in resp.addresses:
71 hosts.append(
72 {'hostname': host,
73 'host': address, 'port': port,
74 'family': family, 'proto': 0,
75 'flags': socket.AI_NUMERICHOST})
76 return hosts
77
78 @asyncio.coroutine
79 def resolve_with_query(self, host, port=0, family=socket.AF_INET):
80 if family == socket.AF_INET6:
81 qtype = 'AAAA'
82 else:
83 qtype = 'A'
84
85 hosts = []
86 resp = yield from self._resolver.query(host, qtype)
87
88 for rr in resp:
89 hosts.append(
90 {'hostname': host,
91 'host': rr.host, 'port': port,
92 'family': family, 'proto': 0,
93 'flags': socket.AI_NUMERICHOST})
94
95 return hosts
96
97 @asyncio.coroutine
98 def close(self):
99 return self._resolver.cancel()
100
101
102 DefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py
--- a/aiohttp/resolver.py
+++ b/aiohttp/resolver.py
@@ -60,31 +60,42 @@
if not hasattr(self._resolver, 'gethostbyname'):
# aiodns 1.1 is not available, fallback to DNSResolver.query
- self.resolve = self.resolve_with_query
+ self.resolve = self._resolve_with_query
@asyncio.coroutine
def resolve(self, host, port=0, family=socket.AF_INET):
+ try:
+ resp = yield from self._resolver.gethostbyname(host, family)
+ except aiodns.error.DNSError as exc:
+ msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
+ raise OSError(msg) from exc
hosts = []
- resp = yield from self._resolver.gethostbyname(host, family)
-
for address in resp.addresses:
hosts.append(
{'hostname': host,
'host': address, 'port': port,
'family': family, 'proto': 0,
'flags': socket.AI_NUMERICHOST})
+
+ if not hosts:
+ raise OSError("DNS lookup failed")
+
return hosts
@asyncio.coroutine
- def resolve_with_query(self, host, port=0, family=socket.AF_INET):
+ def _resolve_with_query(self, host, port=0, family=socket.AF_INET):
if family == socket.AF_INET6:
qtype = 'AAAA'
else:
qtype = 'A'
- hosts = []
- resp = yield from self._resolver.query(host, qtype)
+ try:
+ resp = yield from self._resolver.query(host, qtype)
+ except aiodns.error.DNSError as exc:
+ msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
+ raise OSError(msg) from exc
+ hosts = []
for rr in resp:
hosts.append(
{'hostname': host,
@@ -92,6 +103,9 @@
'family': family, 'proto': 0,
'flags': socket.AI_NUMERICHOST})
+ if not hosts:
+ raise OSError("DNS lookup failed")
+
return hosts
@asyncio.coroutine
|
{"golden_diff": "diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py\n--- a/aiohttp/resolver.py\n+++ b/aiohttp/resolver.py\n@@ -60,31 +60,42 @@\n \n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n- self.resolve = self.resolve_with_query\n+ self.resolve = self._resolve_with_query\n \n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n+ try:\n+ resp = yield from self._resolver.gethostbyname(host, family)\n+ except aiodns.error.DNSError as exc:\n+ msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n+ raise OSError(msg) from exc\n hosts = []\n- resp = yield from self._resolver.gethostbyname(host, family)\n-\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n+\n+ if not hosts:\n+ raise OSError(\"DNS lookup failed\")\n+\n return hosts\n \n @asyncio.coroutine\n- def resolve_with_query(self, host, port=0, family=socket.AF_INET):\n+ def _resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n \n- hosts = []\n- resp = yield from self._resolver.query(host, qtype)\n+ try:\n+ resp = yield from self._resolver.query(host, qtype)\n+ except aiodns.error.DNSError as exc:\n+ msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n+ raise OSError(msg) from exc\n \n+ hosts = []\n for rr in resp:\n hosts.append(\n {'hostname': host,\n@@ -92,6 +103,9 @@\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n \n+ if not hosts:\n+ raise OSError(\"DNS lookup failed\")\n+\n return hosts\n \n @asyncio.coroutine\n", "issue": "AttributeError: 'NoneType' object has no attribute 'errno'\n## Long story short\r\n\r\nTrying to resolve a domain which is an alias for another one, which does not have an A or CNAME record, raises AttributeError: 'NoneType' object has no attribute 'errno'\r\n\r\n## Expected behaviour\r\n\r\nRaise an error correctly, socket.gaierror probably.\r\n\r\n## Actual behaviour\r\n\r\n```Traceback (most recent call last):\r\n File \"xtest.py\", line 16, in <module>\r\n process()\r\n File \"/usr/lib/python3.6/asyncio/base_events.py\", line 449, in run_until_complete\r\n return future.result()\r\n File \"/usr/lib/python3.6/asyncio/tasks.py\", line 239, in _step\r\n result = coro.send(None)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/helpers.py\", line 72, in send\r\n return self._coro.send(arg)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/client.py\", line 233, in _request\r\n conn = yield from self._connector.connect(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 378, in connect\r\n proto = yield from self._create_connection(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 687, in _create_connection\r\n _, proto = yield from self._create_direct_connection(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 735, in _create_direct_connection\r\n exc.errno,\r\nAttributeError: 'NoneType' object has no attribute 'errno'\r\n```\r\n\r\n## Steps to reproduce\r\n\r\nThis script will reproduce the error.\r\n\r\n```\r\nimport asyncio\r\nimport aiohttp\r\nfrom aiohttp.resolver import AsyncResolver\r\n\r\ndef process():\r\n url = 'http://esly.win/'\r\n resolver = AsyncResolver()\r\n conn = aiohttp.TCPConnector(resolver=resolver, verify_ssl=False)\r\n session = 
aiohttp.ClientSession(connector=conn)\r\n return session.get(url)\r\n\r\nloop = asyncio.get_event_loop()\r\nloop.run_until_complete(\r\n process()\r\n)\r\n```\r\n\r\nIf I use the session without setting the connector it first raises a socket.gaierror but then \r\n> During handling of the above exception, another exception occurred...\r\n\r\nAnd the same traceback appears.\r\n\r\n## Your environment\r\nPython 3.6.0b2\r\nUbuntu 10.10\r\naiohttp==2.2,5 \r\nAlso happens with aiohttp==2.3.0a0 (installed from git on 29/Aug/2017)\n", "before_files": [{"content": "import asyncio\nimport socket\n\nfrom .abc import AbstractResolver\n\n\n__all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')\n\ntry:\n import aiodns\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self, loop=None):\n if loop is None:\n loop = asyncio.get_event_loop()\n self._loop = loop\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n infos = yield from self._loop.getaddrinfo(\n host, port, type=socket.SOCK_STREAM, family=family)\n\n hosts = []\n for family, _, proto, _, address in infos:\n hosts.append(\n {'hostname': host,\n 'host': address[0], 'port': address[1],\n 'family': family, 'proto': proto,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, loop=None, *args, **kwargs):\n if loop is None:\n loop = asyncio.get_event_loop()\n\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = loop\n self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)\n\n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n self.resolve = self.resolve_with_query\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n hosts = []\n resp = yield from self._resolver.gethostbyname(host, family)\n\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n return hosts\n\n @asyncio.coroutine\n def resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n\n hosts = []\n resp = yield from self._resolver.query(host, qtype)\n\n for rr in resp:\n hosts.append(\n {'hostname': host,\n 'host': rr.host, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}], "after_files": [{"content": "import asyncio\nimport socket\n\nfrom .abc import AbstractResolver\n\n\n__all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')\n\ntry:\n import aiodns\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which 
defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self, loop=None):\n if loop is None:\n loop = asyncio.get_event_loop()\n self._loop = loop\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n infos = yield from self._loop.getaddrinfo(\n host, port, type=socket.SOCK_STREAM, family=family)\n\n hosts = []\n for family, _, proto, _, address in infos:\n hosts.append(\n {'hostname': host,\n 'host': address[0], 'port': address[1],\n 'family': family, 'proto': proto,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, loop=None, *args, **kwargs):\n if loop is None:\n loop = asyncio.get_event_loop()\n\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = loop\n self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)\n\n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n self.resolve = self._resolve_with_query\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n try:\n resp = yield from self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n @asyncio.coroutine\n def _resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n\n try:\n resp = yield from self._resolver.query(host, qtype)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n\n hosts = []\n for rr in resp:\n hosts.append(\n {'hostname': host,\n 'host': rr.host, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}]}
| 1,740 | 525 |
gh_patches_debug_2967
|
rasdani/github-patches
|
git_diff
|
canonical__cloud-init-4422
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
package-update-upgrade-install does not work on Gentoo
This bug was originally filed in Launchpad as [LP: #1799544](https://bugs.launchpad.net/cloud-init/+bug/1799544)
<details>
<summary>Launchpad details</summary>
<pre>
affected_projects = []
assignee = holmanb
assignee_name = Brett Holman
date_closed = 2022-07-21T15:16:56.010973+00:00
date_created = 2018-10-23T17:34:36.633424+00:00
date_fix_committed = 2022-07-21T15:16:56.010973+00:00
date_fix_released = 2022-07-21T15:16:56.010973+00:00
id = 1799544
importance = medium
is_complete = True
lp_url = https://bugs.launchpad.net/cloud-init/+bug/1799544
milestone = 22.2
owner = gilles-dartiguelongue
owner_name = Gilles Dartiguelongue
private = False
status = fix_released
submitter = gilles-dartiguelongue
submitter_name = Gilles Dartiguelongue
tags = ['gentoo']
duplicates = []
</pre>
</details>
_Launchpad user **Gilles Dartiguelongue(gilles-dartiguelongue)** wrote on 2018-10-23T17:34:36.633424+00:00_
I'm testing cloud-init in a nocloud setup. I'm trying to perform installation of packages using the appropriate module and after fixing some issues in Gentoo packaging, I hit an error in execution due to cmd = list('emerge') being interpreted as ['e', 'm', 'e', ...] while it was meant as ['emerge'].
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cloudinit/distros/gentoo.py`
Content:
```
1 # Copyright (C) 2014 Rackspace, US Inc.
2 # Copyright (C) 2016 Matthew Thode.
3 #
4 # Author: Nate House <[email protected]>
5 # Author: Matthew Thode <[email protected]>
6 #
7 # This file is part of cloud-init. See LICENSE file for license information.
8
9 from cloudinit import distros, helpers
10 from cloudinit import log as logging
11 from cloudinit import subp, util
12 from cloudinit.distros import net_util
13 from cloudinit.distros.parsers.hostname import HostnameConf
14 from cloudinit.settings import PER_INSTANCE
15
16 LOG = logging.getLogger(__name__)
17
18
19 class Distro(distros.Distro):
20 locale_conf_fn = "/etc/env.d/02locale"
21 locale_gen_fn = "/etc/locale.gen"
22 network_conf_fn = "/etc/conf.d/net"
23 hostname_conf_fn = "/etc/conf.d/hostname"
24 init_cmd = ["rc-service"] # init scripts
25 default_locale = "en_US.UTF-8"
26
27 # C.UTF8 makes sense to generate, but is not selected
28 # Add /etc/locale.gen entries to this list to support more locales
29 locales = ["C.UTF8 UTF-8", "en_US.UTF-8 UTF-8"]
30
31 def __init__(self, name, cfg, paths):
32 distros.Distro.__init__(self, name, cfg, paths)
33 # This will be used to restrict certain
34 # calls from repeatly happening (when they
35 # should only happen say once per instance...)
36 self._runner = helpers.Runners(paths)
37 self.osfamily = "gentoo"
38 # Fix sshd restarts
39 cfg["ssh_svcname"] = "/etc/init.d/sshd"
40 if distros.uses_systemd():
41 LOG.error("Cloud-init does not support systemd with gentoo")
42
43 def apply_locale(self, _, out_fn=None):
44 """rc-only - not compatible with systemd
45
46 Locales need to be added to /etc/locale.gen and generated prior
47 to selection. Default to en_US.UTF-8 for simplicity.
48 """
49 util.write_file(self.locale_gen_fn, "\n".join(self.locales), mode=644)
50
51 # generate locales
52 subp.subp(["locale-gen"], capture=False)
53
54 # select locale
55 subp.subp(
56 ["eselect", "locale", "set", self.default_locale], capture=False
57 )
58
59 def install_packages(self, pkglist):
60 self.update_package_sources()
61 self.package_command("", pkgs=pkglist)
62
63 def _write_network(self, settings):
64 entries = net_util.translate_network(settings)
65 LOG.debug(
66 "Translated ubuntu style network settings %s into %s",
67 settings,
68 entries,
69 )
70 dev_names = entries.keys()
71 nameservers = []
72
73 for (dev, info) in entries.items():
74 if "dns-nameservers" in info:
75 nameservers.extend(info["dns-nameservers"])
76 if dev == "lo":
77 continue
78 net_fn = self.network_conf_fn + "." + dev
79 dns_nameservers = info.get("dns-nameservers")
80 if isinstance(dns_nameservers, (list, tuple)):
81 dns_nameservers = str(tuple(dns_nameservers)).replace(",", "")
82 # eth0, {'auto': True, 'ipv6': {}, 'bootproto': 'dhcp'}
83 # lo, {'dns-nameservers': ['10.0.1.3'], 'ipv6': {}, 'auto': True}
84 results = ""
85 if info.get("bootproto") == "dhcp":
86 results += 'config_{name}="dhcp"'.format(name=dev)
87 else:
88 results += (
89 'config_{name}="{ip_address} netmask {netmask}"\n'
90 'mac_{name}="{hwaddr}"\n'
91 ).format(
92 name=dev,
93 ip_address=info.get("address"),
94 netmask=info.get("netmask"),
95 hwaddr=info.get("hwaddress"),
96 )
97 results += 'routes_{name}="default via {gateway}"\n'.format(
98 name=dev, gateway=info.get("gateway")
99 )
100 if info.get("dns-nameservers"):
101 results += 'dns_servers_{name}="{dnsservers}"\n'.format(
102 name=dev, dnsservers=dns_nameservers
103 )
104 util.write_file(net_fn, results)
105 self._create_network_symlink(dev)
106 if info.get("auto"):
107 cmd = [
108 "rc-update",
109 "add",
110 "net.{name}".format(name=dev),
111 "default",
112 ]
113 try:
114 (_out, err) = subp.subp(cmd)
115 if len(err):
116 LOG.warning(
117 "Running %s resulted in stderr output: %s",
118 cmd,
119 err,
120 )
121 except subp.ProcessExecutionError:
122 util.logexc(
123 LOG, "Running interface command %s failed", cmd
124 )
125
126 if nameservers:
127 util.write_file(
128 self.resolve_conf_fn, convert_resolv_conf(nameservers)
129 )
130
131 return dev_names
132
133 @staticmethod
134 def _create_network_symlink(interface_name):
135 file_path = "/etc/init.d/net.{name}".format(name=interface_name)
136 if not util.is_link(file_path):
137 util.sym_link("/etc/init.d/net.lo", file_path)
138
139 def _bring_up_interface(self, device_name):
140 cmd = ["/etc/init.d/net.%s" % device_name, "restart"]
141 LOG.debug(
142 "Attempting to run bring up interface %s using command %s",
143 device_name,
144 cmd,
145 )
146 try:
147 (_out, err) = subp.subp(cmd)
148 if len(err):
149 LOG.warning(
150 "Running %s resulted in stderr output: %s", cmd, err
151 )
152 return True
153 except subp.ProcessExecutionError:
154 util.logexc(LOG, "Running interface command %s failed", cmd)
155 return False
156
157 def _bring_up_interfaces(self, device_names):
158 use_all = False
159 for d in device_names:
160 if d == "all":
161 use_all = True
162 if use_all:
163 # Grab device names from init scripts
164 cmd = ["ls", "/etc/init.d/net.*"]
165 try:
166 (_out, err) = subp.subp(cmd)
167 if len(err):
168 LOG.warning(
169 "Running %s resulted in stderr output: %s", cmd, err
170 )
171 except subp.ProcessExecutionError:
172 util.logexc(LOG, "Running interface command %s failed", cmd)
173 return False
174 devices = [x.split(".")[2] for x in _out.split(" ")]
175 return distros.Distro._bring_up_interfaces(self, devices)
176 else:
177 return distros.Distro._bring_up_interfaces(self, device_names)
178
179 def _write_hostname(self, hostname, filename):
180 conf = None
181 try:
182 # Try to update the previous one
183 # so lets see if we can read it first.
184 conf = self._read_hostname_conf(filename)
185 except IOError:
186 pass
187 if not conf:
188 conf = HostnameConf("")
189
190 # Many distro's format is the hostname by itself, and that is the
191 # way HostnameConf works but gentoo expects it to be in
192 # hostname="the-actual-hostname"
193 conf.set_hostname('hostname="%s"' % hostname)
194 util.write_file(filename, str(conf), 0o644)
195
196 def _read_system_hostname(self):
197 sys_hostname = self._read_hostname(self.hostname_conf_fn)
198 return self.hostname_conf_fn, sys_hostname
199
200 @staticmethod
201 def _read_hostname_conf(filename):
202 conf = HostnameConf(util.load_file(filename))
203 conf.parse()
204 return conf
205
206 def _read_hostname(self, filename, default=None):
207 hostname = None
208 try:
209 conf = self._read_hostname_conf(filename)
210 hostname = conf.hostname
211 except IOError:
212 pass
213 if not hostname:
214 return default
215 return hostname
216
217 def set_timezone(self, tz):
218 distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
219
220 def package_command(self, command, args=None, pkgs=None):
221 cmd = list("emerge")
222 # Redirect output
223 cmd.append("--quiet")
224
225 if command == "upgrade":
226 cmd.extend(["--update", "world"])
227 else:
228 if pkgs is None:
229 pkgs = []
230
231 if args and isinstance(args, str):
232 cmd.append(args)
233 elif args and isinstance(args, list):
234 cmd.extend(args)
235
236 if command:
237 cmd.append(command)
238
239 pkglist = util.expand_package_list("%s-%s", pkgs)
240 cmd.extend(pkglist)
241
242 # Allow the output of this to flow outwards (ie not be captured)
243 subp.subp(cmd, capture=False)
244
245 def update_package_sources(self):
246 self._runner.run(
247 "update-sources",
248 self.package_command,
249 ["--sync"],
250 freq=PER_INSTANCE,
251 )
252
253
254 def convert_resolv_conf(settings):
255 """Returns a settings string formatted for resolv.conf."""
256 result = ""
257 if isinstance(settings, list):
258 for ns in settings:
259 result += "nameserver %s\n" % ns
260 return result
261
262
263 # vi: ts=4 expandtab
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cloudinit/distros/gentoo.py b/cloudinit/distros/gentoo.py
--- a/cloudinit/distros/gentoo.py
+++ b/cloudinit/distros/gentoo.py
@@ -218,7 +218,7 @@
distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
def package_command(self, command, args=None, pkgs=None):
- cmd = list("emerge")
+ cmd = ["emerge"]
# Redirect output
cmd.append("--quiet")
|
{"golden_diff": "diff --git a/cloudinit/distros/gentoo.py b/cloudinit/distros/gentoo.py\n--- a/cloudinit/distros/gentoo.py\n+++ b/cloudinit/distros/gentoo.py\n@@ -218,7 +218,7 @@\n distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))\n \n def package_command(self, command, args=None, pkgs=None):\n- cmd = list(\"emerge\")\n+ cmd = [\"emerge\"]\n # Redirect output\n cmd.append(\"--quiet\")\n", "issue": "package-update-upgrade-install does not work on Gentoo\nThis bug was originally filed in Launchpad as [LP: #1799544](https://bugs.launchpad.net/cloud-init/+bug/1799544)\n<details>\n<summary>Launchpad details</summary>\n<pre>\naffected_projects = []\nassignee = holmanb\nassignee_name = Brett Holman\ndate_closed = 2022-07-21T15:16:56.010973+00:00\ndate_created = 2018-10-23T17:34:36.633424+00:00\ndate_fix_committed = 2022-07-21T15:16:56.010973+00:00\ndate_fix_released = 2022-07-21T15:16:56.010973+00:00\nid = 1799544\nimportance = medium\nis_complete = True\nlp_url = https://bugs.launchpad.net/cloud-init/+bug/1799544\nmilestone = 22.2\nowner = gilles-dartiguelongue\nowner_name = Gilles Dartiguelongue\nprivate = False\nstatus = fix_released\nsubmitter = gilles-dartiguelongue\nsubmitter_name = Gilles Dartiguelongue\ntags = ['gentoo']\nduplicates = []\n</pre>\n</details>\n\n_Launchpad user **Gilles Dartiguelongue(gilles-dartiguelongue)** wrote on 2018-10-23T17:34:36.633424+00:00_\n\nI'm testing cloud-init in a nocloud setup. I'm trying to perform installation of packages using the appropriate module and after fixing some issues in Gentoo packaging, I hit an error in execution due to cmd = list('emerge') being interpreted as ['e', 'm', 'e', ...] while it was meant as ['emerge'].\n", "before_files": [{"content": "# Copyright (C) 2014 Rackspace, US Inc.\n# Copyright (C) 2016 Matthew Thode.\n#\n# Author: Nate House <[email protected]>\n# Author: Matthew Thode <[email protected]>\n#\n# This file is part of cloud-init. See LICENSE file for license information.\n\nfrom cloudinit import distros, helpers\nfrom cloudinit import log as logging\nfrom cloudinit import subp, util\nfrom cloudinit.distros import net_util\nfrom cloudinit.distros.parsers.hostname import HostnameConf\nfrom cloudinit.settings import PER_INSTANCE\n\nLOG = logging.getLogger(__name__)\n\n\nclass Distro(distros.Distro):\n locale_conf_fn = \"/etc/env.d/02locale\"\n locale_gen_fn = \"/etc/locale.gen\"\n network_conf_fn = \"/etc/conf.d/net\"\n hostname_conf_fn = \"/etc/conf.d/hostname\"\n init_cmd = [\"rc-service\"] # init scripts\n default_locale = \"en_US.UTF-8\"\n\n # C.UTF8 makes sense to generate, but is not selected\n # Add /etc/locale.gen entries to this list to support more locales\n locales = [\"C.UTF8 UTF-8\", \"en_US.UTF-8 UTF-8\"]\n\n def __init__(self, name, cfg, paths):\n distros.Distro.__init__(self, name, cfg, paths)\n # This will be used to restrict certain\n # calls from repeatly happening (when they\n # should only happen say once per instance...)\n self._runner = helpers.Runners(paths)\n self.osfamily = \"gentoo\"\n # Fix sshd restarts\n cfg[\"ssh_svcname\"] = \"/etc/init.d/sshd\"\n if distros.uses_systemd():\n LOG.error(\"Cloud-init does not support systemd with gentoo\")\n\n def apply_locale(self, _, out_fn=None):\n \"\"\"rc-only - not compatible with systemd\n\n Locales need to be added to /etc/locale.gen and generated prior\n to selection. 
Default to en_US.UTF-8 for simplicity.\n \"\"\"\n util.write_file(self.locale_gen_fn, \"\\n\".join(self.locales), mode=644)\n\n # generate locales\n subp.subp([\"locale-gen\"], capture=False)\n\n # select locale\n subp.subp(\n [\"eselect\", \"locale\", \"set\", self.default_locale], capture=False\n )\n\n def install_packages(self, pkglist):\n self.update_package_sources()\n self.package_command(\"\", pkgs=pkglist)\n\n def _write_network(self, settings):\n entries = net_util.translate_network(settings)\n LOG.debug(\n \"Translated ubuntu style network settings %s into %s\",\n settings,\n entries,\n )\n dev_names = entries.keys()\n nameservers = []\n\n for (dev, info) in entries.items():\n if \"dns-nameservers\" in info:\n nameservers.extend(info[\"dns-nameservers\"])\n if dev == \"lo\":\n continue\n net_fn = self.network_conf_fn + \".\" + dev\n dns_nameservers = info.get(\"dns-nameservers\")\n if isinstance(dns_nameservers, (list, tuple)):\n dns_nameservers = str(tuple(dns_nameservers)).replace(\",\", \"\")\n # eth0, {'auto': True, 'ipv6': {}, 'bootproto': 'dhcp'}\n # lo, {'dns-nameservers': ['10.0.1.3'], 'ipv6': {}, 'auto': True}\n results = \"\"\n if info.get(\"bootproto\") == \"dhcp\":\n results += 'config_{name}=\"dhcp\"'.format(name=dev)\n else:\n results += (\n 'config_{name}=\"{ip_address} netmask {netmask}\"\\n'\n 'mac_{name}=\"{hwaddr}\"\\n'\n ).format(\n name=dev,\n ip_address=info.get(\"address\"),\n netmask=info.get(\"netmask\"),\n hwaddr=info.get(\"hwaddress\"),\n )\n results += 'routes_{name}=\"default via {gateway}\"\\n'.format(\n name=dev, gateway=info.get(\"gateway\")\n )\n if info.get(\"dns-nameservers\"):\n results += 'dns_servers_{name}=\"{dnsservers}\"\\n'.format(\n name=dev, dnsservers=dns_nameservers\n )\n util.write_file(net_fn, results)\n self._create_network_symlink(dev)\n if info.get(\"auto\"):\n cmd = [\n \"rc-update\",\n \"add\",\n \"net.{name}\".format(name=dev),\n \"default\",\n ]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\",\n cmd,\n err,\n )\n except subp.ProcessExecutionError:\n util.logexc(\n LOG, \"Running interface command %s failed\", cmd\n )\n\n if nameservers:\n util.write_file(\n self.resolve_conf_fn, convert_resolv_conf(nameservers)\n )\n\n return dev_names\n\n @staticmethod\n def _create_network_symlink(interface_name):\n file_path = \"/etc/init.d/net.{name}\".format(name=interface_name)\n if not util.is_link(file_path):\n util.sym_link(\"/etc/init.d/net.lo\", file_path)\n\n def _bring_up_interface(self, device_name):\n cmd = [\"/etc/init.d/net.%s\" % device_name, \"restart\"]\n LOG.debug(\n \"Attempting to run bring up interface %s using command %s\",\n device_name,\n cmd,\n )\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n return True\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n\n def _bring_up_interfaces(self, device_names):\n use_all = False\n for d in device_names:\n if d == \"all\":\n use_all = True\n if use_all:\n # Grab device names from init scripts\n cmd = [\"ls\", \"/etc/init.d/net.*\"]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n devices = [x.split(\".\")[2] for x in _out.split(\" \")]\n return 
distros.Distro._bring_up_interfaces(self, devices)\n else:\n return distros.Distro._bring_up_interfaces(self, device_names)\n\n def _write_hostname(self, hostname, filename):\n conf = None\n try:\n # Try to update the previous one\n # so lets see if we can read it first.\n conf = self._read_hostname_conf(filename)\n except IOError:\n pass\n if not conf:\n conf = HostnameConf(\"\")\n\n # Many distro's format is the hostname by itself, and that is the\n # way HostnameConf works but gentoo expects it to be in\n # hostname=\"the-actual-hostname\"\n conf.set_hostname('hostname=\"%s\"' % hostname)\n util.write_file(filename, str(conf), 0o644)\n\n def _read_system_hostname(self):\n sys_hostname = self._read_hostname(self.hostname_conf_fn)\n return self.hostname_conf_fn, sys_hostname\n\n @staticmethod\n def _read_hostname_conf(filename):\n conf = HostnameConf(util.load_file(filename))\n conf.parse()\n return conf\n\n def _read_hostname(self, filename, default=None):\n hostname = None\n try:\n conf = self._read_hostname_conf(filename)\n hostname = conf.hostname\n except IOError:\n pass\n if not hostname:\n return default\n return hostname\n\n def set_timezone(self, tz):\n distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))\n\n def package_command(self, command, args=None, pkgs=None):\n cmd = list(\"emerge\")\n # Redirect output\n cmd.append(\"--quiet\")\n\n if command == \"upgrade\":\n cmd.extend([\"--update\", \"world\"])\n else:\n if pkgs is None:\n pkgs = []\n\n if args and isinstance(args, str):\n cmd.append(args)\n elif args and isinstance(args, list):\n cmd.extend(args)\n\n if command:\n cmd.append(command)\n\n pkglist = util.expand_package_list(\"%s-%s\", pkgs)\n cmd.extend(pkglist)\n\n # Allow the output of this to flow outwards (ie not be captured)\n subp.subp(cmd, capture=False)\n\n def update_package_sources(self):\n self._runner.run(\n \"update-sources\",\n self.package_command,\n [\"--sync\"],\n freq=PER_INSTANCE,\n )\n\n\ndef convert_resolv_conf(settings):\n \"\"\"Returns a settings string formatted for resolv.conf.\"\"\"\n result = \"\"\n if isinstance(settings, list):\n for ns in settings:\n result += \"nameserver %s\\n\" % ns\n return result\n\n\n# vi: ts=4 expandtab\n", "path": "cloudinit/distros/gentoo.py"}], "after_files": [{"content": "# Copyright (C) 2014 Rackspace, US Inc.\n# Copyright (C) 2016 Matthew Thode.\n#\n# Author: Nate House <[email protected]>\n# Author: Matthew Thode <[email protected]>\n#\n# This file is part of cloud-init. 
See LICENSE file for license information.\n\nfrom cloudinit import distros, helpers\nfrom cloudinit import log as logging\nfrom cloudinit import subp, util\nfrom cloudinit.distros import net_util\nfrom cloudinit.distros.parsers.hostname import HostnameConf\nfrom cloudinit.settings import PER_INSTANCE\n\nLOG = logging.getLogger(__name__)\n\n\nclass Distro(distros.Distro):\n locale_conf_fn = \"/etc/env.d/02locale\"\n locale_gen_fn = \"/etc/locale.gen\"\n network_conf_fn = \"/etc/conf.d/net\"\n hostname_conf_fn = \"/etc/conf.d/hostname\"\n init_cmd = [\"rc-service\"] # init scripts\n default_locale = \"en_US.UTF-8\"\n\n # C.UTF8 makes sense to generate, but is not selected\n # Add /etc/locale.gen entries to this list to support more locales\n locales = [\"C.UTF8 UTF-8\", \"en_US.UTF-8 UTF-8\"]\n\n def __init__(self, name, cfg, paths):\n distros.Distro.__init__(self, name, cfg, paths)\n # This will be used to restrict certain\n # calls from repeatly happening (when they\n # should only happen say once per instance...)\n self._runner = helpers.Runners(paths)\n self.osfamily = \"gentoo\"\n # Fix sshd restarts\n cfg[\"ssh_svcname\"] = \"/etc/init.d/sshd\"\n if distros.uses_systemd():\n LOG.error(\"Cloud-init does not support systemd with gentoo\")\n\n def apply_locale(self, _, out_fn=None):\n \"\"\"rc-only - not compatible with systemd\n\n Locales need to be added to /etc/locale.gen and generated prior\n to selection. Default to en_US.UTF-8 for simplicity.\n \"\"\"\n util.write_file(self.locale_gen_fn, \"\\n\".join(self.locales), mode=644)\n\n # generate locales\n subp.subp([\"locale-gen\"], capture=False)\n\n # select locale\n subp.subp(\n [\"eselect\", \"locale\", \"set\", self.default_locale], capture=False\n )\n\n def install_packages(self, pkglist):\n self.update_package_sources()\n self.package_command(\"\", pkgs=pkglist)\n\n def _write_network(self, settings):\n entries = net_util.translate_network(settings)\n LOG.debug(\n \"Translated ubuntu style network settings %s into %s\",\n settings,\n entries,\n )\n dev_names = entries.keys()\n nameservers = []\n\n for (dev, info) in entries.items():\n if \"dns-nameservers\" in info:\n nameservers.extend(info[\"dns-nameservers\"])\n if dev == \"lo\":\n continue\n net_fn = self.network_conf_fn + \".\" + dev\n dns_nameservers = info.get(\"dns-nameservers\")\n if isinstance(dns_nameservers, (list, tuple)):\n dns_nameservers = str(tuple(dns_nameservers)).replace(\",\", \"\")\n # eth0, {'auto': True, 'ipv6': {}, 'bootproto': 'dhcp'}\n # lo, {'dns-nameservers': ['10.0.1.3'], 'ipv6': {}, 'auto': True}\n results = \"\"\n if info.get(\"bootproto\") == \"dhcp\":\n results += 'config_{name}=\"dhcp\"'.format(name=dev)\n else:\n results += (\n 'config_{name}=\"{ip_address} netmask {netmask}\"\\n'\n 'mac_{name}=\"{hwaddr}\"\\n'\n ).format(\n name=dev,\n ip_address=info.get(\"address\"),\n netmask=info.get(\"netmask\"),\n hwaddr=info.get(\"hwaddress\"),\n )\n results += 'routes_{name}=\"default via {gateway}\"\\n'.format(\n name=dev, gateway=info.get(\"gateway\")\n )\n if info.get(\"dns-nameservers\"):\n results += 'dns_servers_{name}=\"{dnsservers}\"\\n'.format(\n name=dev, dnsservers=dns_nameservers\n )\n util.write_file(net_fn, results)\n self._create_network_symlink(dev)\n if info.get(\"auto\"):\n cmd = [\n \"rc-update\",\n \"add\",\n \"net.{name}\".format(name=dev),\n \"default\",\n ]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\",\n cmd,\n err,\n )\n except subp.ProcessExecutionError:\n 
util.logexc(\n LOG, \"Running interface command %s failed\", cmd\n )\n\n if nameservers:\n util.write_file(\n self.resolve_conf_fn, convert_resolv_conf(nameservers)\n )\n\n return dev_names\n\n @staticmethod\n def _create_network_symlink(interface_name):\n file_path = \"/etc/init.d/net.{name}\".format(name=interface_name)\n if not util.is_link(file_path):\n util.sym_link(\"/etc/init.d/net.lo\", file_path)\n\n def _bring_up_interface(self, device_name):\n cmd = [\"/etc/init.d/net.%s\" % device_name, \"restart\"]\n LOG.debug(\n \"Attempting to run bring up interface %s using command %s\",\n device_name,\n cmd,\n )\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n return True\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n\n def _bring_up_interfaces(self, device_names):\n use_all = False\n for d in device_names:\n if d == \"all\":\n use_all = True\n if use_all:\n # Grab device names from init scripts\n cmd = [\"ls\", \"/etc/init.d/net.*\"]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n devices = [x.split(\".\")[2] for x in _out.split(\" \")]\n return distros.Distro._bring_up_interfaces(self, devices)\n else:\n return distros.Distro._bring_up_interfaces(self, device_names)\n\n def _write_hostname(self, hostname, filename):\n conf = None\n try:\n # Try to update the previous one\n # so lets see if we can read it first.\n conf = self._read_hostname_conf(filename)\n except IOError:\n pass\n if not conf:\n conf = HostnameConf(\"\")\n\n # Many distro's format is the hostname by itself, and that is the\n # way HostnameConf works but gentoo expects it to be in\n # hostname=\"the-actual-hostname\"\n conf.set_hostname('hostname=\"%s\"' % hostname)\n util.write_file(filename, str(conf), 0o644)\n\n def _read_system_hostname(self):\n sys_hostname = self._read_hostname(self.hostname_conf_fn)\n return self.hostname_conf_fn, sys_hostname\n\n @staticmethod\n def _read_hostname_conf(filename):\n conf = HostnameConf(util.load_file(filename))\n conf.parse()\n return conf\n\n def _read_hostname(self, filename, default=None):\n hostname = None\n try:\n conf = self._read_hostname_conf(filename)\n hostname = conf.hostname\n except IOError:\n pass\n if not hostname:\n return default\n return hostname\n\n def set_timezone(self, tz):\n distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))\n\n def package_command(self, command, args=None, pkgs=None):\n cmd = [\"emerge\"]\n # Redirect output\n cmd.append(\"--quiet\")\n\n if command == \"upgrade\":\n cmd.extend([\"--update\", \"world\"])\n else:\n if pkgs is None:\n pkgs = []\n\n if args and isinstance(args, str):\n cmd.append(args)\n elif args and isinstance(args, list):\n cmd.extend(args)\n\n if command:\n cmd.append(command)\n\n pkglist = util.expand_package_list(\"%s-%s\", pkgs)\n cmd.extend(pkglist)\n\n # Allow the output of this to flow outwards (ie not be captured)\n subp.subp(cmd, capture=False)\n\n def update_package_sources(self):\n self._runner.run(\n \"update-sources\",\n self.package_command,\n [\"--sync\"],\n freq=PER_INSTANCE,\n )\n\n\ndef convert_resolv_conf(settings):\n \"\"\"Returns a settings string formatted for resolv.conf.\"\"\"\n result = \"\"\n if isinstance(settings, list):\n for ns in 
settings:\n result += \"nameserver %s\\n\" % ns\n return result\n\n\n# vi: ts=4 expandtab\n", "path": "cloudinit/distros/gentoo.py"}]}
| 3,482 | 125 |
gh_patches_debug_8326 | rasdani/github-patches | git_diff | google__clusterfuzz-1163 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Command field empty in OSS-Fuzz testcases
See https://oss-fuzz.com/testcase-detail/5204819744915456 for example.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/bot/untrusted_runner/tasks_impl.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Tasks RPC implementations."""
15 from __future__ import absolute_import
16
17 from google.protobuf import wrappers_pb2
18 from google.protobuf.any_pb2 import Any
19 import six
20
21 from . import protobuf_utils
22
23 from bot import testcase_manager
24 from bot.fuzzers import engine
25 from bot.tasks import corpus_pruning_task
26 from bot.tasks import fuzz_task
27 from bot.tasks import minimize_task
28 from datastore import data_types
29 from protos import untrusted_runner_pb2
30 from system import environment
31
32
33 def _proto_to_fuzz_target(proto):
34 """Convert protobuf to FuzzTarget."""
35 return data_types.FuzzTarget(
36 engine=proto.engine, project=proto.project, binary=proto.binary)
37
38
39 def _proto_to_cross_pollinate_fuzzer(proto):
40 """Convert protobuf to CrossPollinateFuzzer."""
41 return corpus_pruning_task.CrossPollinateFuzzer(
42 fuzz_target=_proto_to_fuzz_target(proto.fuzz_target),
43 backup_bucket_name=proto.backup_bucket_name,
44 corpus_engine_name=proto.corpus_engine_name)
45
46
47 def prune_corpus(request, _):
48 """Prune corpus."""
49 context = corpus_pruning_task.Context(
50 _proto_to_fuzz_target(request.fuzz_target), [
51 _proto_to_cross_pollinate_fuzzer(proto)
52 for proto in request.cross_pollinate_fuzzers
53 ], environment.get_value('USE_MINIJAIL'))
54
55 result = corpus_pruning_task.do_corpus_pruning(
56 context, request.last_execution_failed, request.revision)
57
58 # Intentionally skip edge and function coverage values as those would come
59 # from fuzzer coverage cron task (see src/go/server/cron/coverage.go).
60 coverage_info = untrusted_runner_pb2.CoverageInfo(
61 corpus_size_units=result.coverage_info.corpus_size_units,
62 corpus_size_bytes=result.coverage_info.corpus_size_bytes,
63 corpus_location=result.coverage_info.corpus_location,
64 corpus_backup_location=result.coverage_info.corpus_backup_location,
65 quarantine_size_units=result.coverage_info.quarantine_size_units,
66 quarantine_size_bytes=result.coverage_info.quarantine_size_bytes,
67 quarantine_location=result.coverage_info.quarantine_location)
68
69 crashes = [
70 untrusted_runner_pb2.CorpusCrash(
71 crash_state=crash.crash_state,
72 crash_type=crash.crash_type,
73 crash_address=crash.crash_address,
74 crash_stacktrace=protobuf_utils.encode_utf8_if_unicode(
75 crash.crash_stacktrace),
76 unit_path=crash.unit_path,
77 security_flag=crash.security_flag,
78 ) for crash in result.crashes
79 ]
80
81 return untrusted_runner_pb2.PruneCorpusResponse(
82 coverage_info=coverage_info,
83 crashes=crashes,
84 fuzzer_binary_name=result.fuzzer_binary_name,
85 revision=result.revision)
86
87
88 def process_testcase(request, _):
89 """Process testcase."""
90 tool_name_map = {
91 untrusted_runner_pb2.ProcessTestcaseRequest.MINIMIZE: 'minimize',
92 untrusted_runner_pb2.ProcessTestcaseRequest.CLEANSE: 'cleanse',
93 }
94
95 # TODO(ochang): Support other engines.
96 assert request.engine == 'libFuzzer'
97 assert request.operation in tool_name_map
98
99 result = minimize_task.run_libfuzzer_engine(
100 tool_name_map[request.operation], request.target_name, request.arguments,
101 request.testcase_path, request.output_path, request.timeout)
102
103 return untrusted_runner_pb2.EngineReproduceResult(
104 return_code=result.return_code,
105 time_executed=result.time_executed,
106 output=result.output)
107
108
109 def engine_fuzz(request, _):
110 """Run engine fuzzer."""
111 engine_impl = engine.get(request.engine)
112 result, fuzzer_metadata = fuzz_task.run_engine_fuzzer(
113 engine_impl, request.target_name, request.sync_corpus_directory,
114 request.testcase_directory)
115
116 crashes = [
117 untrusted_runner_pb2.EngineCrash(
118 input_path=crash.input_path,
119 stacktrace=protobuf_utils.encode_utf8_if_unicode(crash.stacktrace),
120 reproduce_args=crash.reproduce_args,
121 crash_time=crash.crash_time) for crash in result.crashes
122 ]
123
124 packed_stats = {}
125 for key, value in six.iteritems(result.stats):
126 packed_value = Any()
127 if isinstance(value, float):
128 packed_value.Pack(wrappers_pb2.DoubleValue(value=value))
129 elif isinstance(value, int):
130 packed_value.Pack(wrappers_pb2.Int32Value(value=value))
131 elif isinstance(value, six.string_types):
132 packed_value.Pack(wrappers_pb2.StringValue(value=value))
133 else:
134 raise ValueError('Unknown stat type for ' + key)
135
136 packed_stats[key] = packed_value
137
138 return untrusted_runner_pb2.EngineFuzzResponse(
139 logs=protobuf_utils.encode_utf8_if_unicode(result.logs),
140 command=result.command,
141 crashes=crashes,
142 stats=packed_stats,
143 time_executed=result.time_executed,
144 fuzzer_metadata=fuzzer_metadata)
145
146
147 def engine_reproduce(request, _):
148 """Run engine reproduce."""
149 engine_impl = engine.get(request.engine)
150 result = testcase_manager.engine_reproduce(engine_impl, request.target_name,
151 request.testcase_path,
152 request.arguments, request.timeout)
153 return untrusted_runner_pb2.EngineReproduceResult(
154 return_code=result.return_code,
155 time_executed=result.time_executed,
156 output=result.output)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/bot/untrusted_runner/tasks_impl.py b/src/python/bot/untrusted_runner/tasks_impl.py
--- a/src/python/bot/untrusted_runner/tasks_impl.py
+++ b/src/python/bot/untrusted_runner/tasks_impl.py
@@ -151,6 +151,7 @@
request.testcase_path,
request.arguments, request.timeout)
return untrusted_runner_pb2.EngineReproduceResult(
+ command=result.command,
return_code=result.return_code,
time_executed=result.time_executed,
output=result.output)
|
{"golden_diff": "diff --git a/src/python/bot/untrusted_runner/tasks_impl.py b/src/python/bot/untrusted_runner/tasks_impl.py\n--- a/src/python/bot/untrusted_runner/tasks_impl.py\n+++ b/src/python/bot/untrusted_runner/tasks_impl.py\n@@ -151,6 +151,7 @@\n request.testcase_path,\n request.arguments, request.timeout)\n return untrusted_runner_pb2.EngineReproduceResult(\n+ command=result.command,\n return_code=result.return_code,\n time_executed=result.time_executed,\n output=result.output)\n", "issue": "Command field empty in OSS-Fuzz testcases\nSee https://oss-fuzz.com/testcase-detail/5204819744915456 for example.\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tasks RPC implementations.\"\"\"\nfrom __future__ import absolute_import\n\nfrom google.protobuf import wrappers_pb2\nfrom google.protobuf.any_pb2 import Any\nimport six\n\nfrom . import protobuf_utils\n\nfrom bot import testcase_manager\nfrom bot.fuzzers import engine\nfrom bot.tasks import corpus_pruning_task\nfrom bot.tasks import fuzz_task\nfrom bot.tasks import minimize_task\nfrom datastore import data_types\nfrom protos import untrusted_runner_pb2\nfrom system import environment\n\n\ndef _proto_to_fuzz_target(proto):\n \"\"\"Convert protobuf to FuzzTarget.\"\"\"\n return data_types.FuzzTarget(\n engine=proto.engine, project=proto.project, binary=proto.binary)\n\n\ndef _proto_to_cross_pollinate_fuzzer(proto):\n \"\"\"Convert protobuf to CrossPollinateFuzzer.\"\"\"\n return corpus_pruning_task.CrossPollinateFuzzer(\n fuzz_target=_proto_to_fuzz_target(proto.fuzz_target),\n backup_bucket_name=proto.backup_bucket_name,\n corpus_engine_name=proto.corpus_engine_name)\n\n\ndef prune_corpus(request, _):\n \"\"\"Prune corpus.\"\"\"\n context = corpus_pruning_task.Context(\n _proto_to_fuzz_target(request.fuzz_target), [\n _proto_to_cross_pollinate_fuzzer(proto)\n for proto in request.cross_pollinate_fuzzers\n ], environment.get_value('USE_MINIJAIL'))\n\n result = corpus_pruning_task.do_corpus_pruning(\n context, request.last_execution_failed, request.revision)\n\n # Intentionally skip edge and function coverage values as those would come\n # from fuzzer coverage cron task (see src/go/server/cron/coverage.go).\n coverage_info = untrusted_runner_pb2.CoverageInfo(\n corpus_size_units=result.coverage_info.corpus_size_units,\n corpus_size_bytes=result.coverage_info.corpus_size_bytes,\n corpus_location=result.coverage_info.corpus_location,\n corpus_backup_location=result.coverage_info.corpus_backup_location,\n quarantine_size_units=result.coverage_info.quarantine_size_units,\n quarantine_size_bytes=result.coverage_info.quarantine_size_bytes,\n quarantine_location=result.coverage_info.quarantine_location)\n\n crashes = [\n untrusted_runner_pb2.CorpusCrash(\n crash_state=crash.crash_state,\n crash_type=crash.crash_type,\n crash_address=crash.crash_address,\n crash_stacktrace=protobuf_utils.encode_utf8_if_unicode(\n crash.crash_stacktrace),\n unit_path=crash.unit_path,\n 
security_flag=crash.security_flag,\n ) for crash in result.crashes\n ]\n\n return untrusted_runner_pb2.PruneCorpusResponse(\n coverage_info=coverage_info,\n crashes=crashes,\n fuzzer_binary_name=result.fuzzer_binary_name,\n revision=result.revision)\n\n\ndef process_testcase(request, _):\n \"\"\"Process testcase.\"\"\"\n tool_name_map = {\n untrusted_runner_pb2.ProcessTestcaseRequest.MINIMIZE: 'minimize',\n untrusted_runner_pb2.ProcessTestcaseRequest.CLEANSE: 'cleanse',\n }\n\n # TODO(ochang): Support other engines.\n assert request.engine == 'libFuzzer'\n assert request.operation in tool_name_map\n\n result = minimize_task.run_libfuzzer_engine(\n tool_name_map[request.operation], request.target_name, request.arguments,\n request.testcase_path, request.output_path, request.timeout)\n\n return untrusted_runner_pb2.EngineReproduceResult(\n return_code=result.return_code,\n time_executed=result.time_executed,\n output=result.output)\n\n\ndef engine_fuzz(request, _):\n \"\"\"Run engine fuzzer.\"\"\"\n engine_impl = engine.get(request.engine)\n result, fuzzer_metadata = fuzz_task.run_engine_fuzzer(\n engine_impl, request.target_name, request.sync_corpus_directory,\n request.testcase_directory)\n\n crashes = [\n untrusted_runner_pb2.EngineCrash(\n input_path=crash.input_path,\n stacktrace=protobuf_utils.encode_utf8_if_unicode(crash.stacktrace),\n reproduce_args=crash.reproduce_args,\n crash_time=crash.crash_time) for crash in result.crashes\n ]\n\n packed_stats = {}\n for key, value in six.iteritems(result.stats):\n packed_value = Any()\n if isinstance(value, float):\n packed_value.Pack(wrappers_pb2.DoubleValue(value=value))\n elif isinstance(value, int):\n packed_value.Pack(wrappers_pb2.Int32Value(value=value))\n elif isinstance(value, six.string_types):\n packed_value.Pack(wrappers_pb2.StringValue(value=value))\n else:\n raise ValueError('Unknown stat type for ' + key)\n\n packed_stats[key] = packed_value\n\n return untrusted_runner_pb2.EngineFuzzResponse(\n logs=protobuf_utils.encode_utf8_if_unicode(result.logs),\n command=result.command,\n crashes=crashes,\n stats=packed_stats,\n time_executed=result.time_executed,\n fuzzer_metadata=fuzzer_metadata)\n\n\ndef engine_reproduce(request, _):\n \"\"\"Run engine reproduce.\"\"\"\n engine_impl = engine.get(request.engine)\n result = testcase_manager.engine_reproduce(engine_impl, request.target_name,\n request.testcase_path,\n request.arguments, request.timeout)\n return untrusted_runner_pb2.EngineReproduceResult(\n return_code=result.return_code,\n time_executed=result.time_executed,\n output=result.output)\n", "path": "src/python/bot/untrusted_runner/tasks_impl.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tasks RPC implementations.\"\"\"\nfrom __future__ import absolute_import\n\nfrom google.protobuf import wrappers_pb2\nfrom google.protobuf.any_pb2 import Any\nimport six\n\nfrom . 
import protobuf_utils\n\nfrom bot import testcase_manager\nfrom bot.fuzzers import engine\nfrom bot.tasks import corpus_pruning_task\nfrom bot.tasks import fuzz_task\nfrom bot.tasks import minimize_task\nfrom datastore import data_types\nfrom protos import untrusted_runner_pb2\nfrom system import environment\n\n\ndef _proto_to_fuzz_target(proto):\n \"\"\"Convert protobuf to FuzzTarget.\"\"\"\n return data_types.FuzzTarget(\n engine=proto.engine, project=proto.project, binary=proto.binary)\n\n\ndef _proto_to_cross_pollinate_fuzzer(proto):\n \"\"\"Convert protobuf to CrossPollinateFuzzer.\"\"\"\n return corpus_pruning_task.CrossPollinateFuzzer(\n fuzz_target=_proto_to_fuzz_target(proto.fuzz_target),\n backup_bucket_name=proto.backup_bucket_name,\n corpus_engine_name=proto.corpus_engine_name)\n\n\ndef prune_corpus(request, _):\n \"\"\"Prune corpus.\"\"\"\n context = corpus_pruning_task.Context(\n _proto_to_fuzz_target(request.fuzz_target), [\n _proto_to_cross_pollinate_fuzzer(proto)\n for proto in request.cross_pollinate_fuzzers\n ], environment.get_value('USE_MINIJAIL'))\n\n result = corpus_pruning_task.do_corpus_pruning(\n context, request.last_execution_failed, request.revision)\n\n # Intentionally skip edge and function coverage values as those would come\n # from fuzzer coverage cron task (see src/go/server/cron/coverage.go).\n coverage_info = untrusted_runner_pb2.CoverageInfo(\n corpus_size_units=result.coverage_info.corpus_size_units,\n corpus_size_bytes=result.coverage_info.corpus_size_bytes,\n corpus_location=result.coverage_info.corpus_location,\n corpus_backup_location=result.coverage_info.corpus_backup_location,\n quarantine_size_units=result.coverage_info.quarantine_size_units,\n quarantine_size_bytes=result.coverage_info.quarantine_size_bytes,\n quarantine_location=result.coverage_info.quarantine_location)\n\n crashes = [\n untrusted_runner_pb2.CorpusCrash(\n crash_state=crash.crash_state,\n crash_type=crash.crash_type,\n crash_address=crash.crash_address,\n crash_stacktrace=protobuf_utils.encode_utf8_if_unicode(\n crash.crash_stacktrace),\n unit_path=crash.unit_path,\n security_flag=crash.security_flag,\n ) for crash in result.crashes\n ]\n\n return untrusted_runner_pb2.PruneCorpusResponse(\n coverage_info=coverage_info,\n crashes=crashes,\n fuzzer_binary_name=result.fuzzer_binary_name,\n revision=result.revision)\n\n\ndef process_testcase(request, _):\n \"\"\"Process testcase.\"\"\"\n tool_name_map = {\n untrusted_runner_pb2.ProcessTestcaseRequest.MINIMIZE: 'minimize',\n untrusted_runner_pb2.ProcessTestcaseRequest.CLEANSE: 'cleanse',\n }\n\n # TODO(ochang): Support other engines.\n assert request.engine == 'libFuzzer'\n assert request.operation in tool_name_map\n\n result = minimize_task.run_libfuzzer_engine(\n tool_name_map[request.operation], request.target_name, request.arguments,\n request.testcase_path, request.output_path, request.timeout)\n\n return untrusted_runner_pb2.EngineReproduceResult(\n return_code=result.return_code,\n time_executed=result.time_executed,\n output=result.output)\n\n\ndef engine_fuzz(request, _):\n \"\"\"Run engine fuzzer.\"\"\"\n engine_impl = engine.get(request.engine)\n result, fuzzer_metadata = fuzz_task.run_engine_fuzzer(\n engine_impl, request.target_name, request.sync_corpus_directory,\n request.testcase_directory)\n\n crashes = [\n untrusted_runner_pb2.EngineCrash(\n input_path=crash.input_path,\n stacktrace=protobuf_utils.encode_utf8_if_unicode(crash.stacktrace),\n reproduce_args=crash.reproduce_args,\n crash_time=crash.crash_time) for 
crash in result.crashes\n ]\n\n packed_stats = {}\n for key, value in six.iteritems(result.stats):\n packed_value = Any()\n if isinstance(value, float):\n packed_value.Pack(wrappers_pb2.DoubleValue(value=value))\n elif isinstance(value, int):\n packed_value.Pack(wrappers_pb2.Int32Value(value=value))\n elif isinstance(value, six.string_types):\n packed_value.Pack(wrappers_pb2.StringValue(value=value))\n else:\n raise ValueError('Unknown stat type for ' + key)\n\n packed_stats[key] = packed_value\n\n return untrusted_runner_pb2.EngineFuzzResponse(\n logs=protobuf_utils.encode_utf8_if_unicode(result.logs),\n command=result.command,\n crashes=crashes,\n stats=packed_stats,\n time_executed=result.time_executed,\n fuzzer_metadata=fuzzer_metadata)\n\n\ndef engine_reproduce(request, _):\n \"\"\"Run engine reproduce.\"\"\"\n engine_impl = engine.get(request.engine)\n result = testcase_manager.engine_reproduce(engine_impl, request.target_name,\n request.testcase_path,\n request.arguments, request.timeout)\n return untrusted_runner_pb2.EngineReproduceResult(\n command=result.command,\n return_code=result.return_code,\n time_executed=result.time_executed,\n output=result.output)\n", "path": "src/python/bot/untrusted_runner/tasks_impl.py"}]}
| 1,942 | 118 |
gh_patches_debug_3051 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-2533 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pubsub message getting wrong attribute for publishTime
According the [REST docs](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage), a `PubsubMessage` has the field `publishTime`
In [message.py](https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/pubsub/google/cloud/pubsub/message.py), `from_api_repr` is getting the field `publishTimestamp` below:
```
instance._service_timestamp = api_repr.get('publishTimestamp')
```
The current tests are self-confirming of this issue as they simply set up the api_repr with `publishTimestamp`
A quick fix seems to adjust the following:
**message.py**
``` python
@classmethod
def from_api_repr(cls, api_repr):
"""Factory: construct message from API representation.
:type api_repr: dict or None
:param api_repr: The API representation of the message
:rtype: :class:`Message`
:returns: The message created from the response.
"""
data = base64.b64decode(api_repr.get('data', b''))
instance = cls(
data=data, message_id=api_repr['messageId'],
attributes=api_repr.get('attributes'))
instance._service_timestamp = api_repr.get('publishTime')
return instance
```
**test_message.py**
``` python
def test_from_api_repr_no_attributes(self):
from base64 import b64encode as b64
DATA = b'DEADBEEF'
B64_DATA = b64(DATA)
MESSAGE_ID = '12345'
TIMESTAMP = '2016-03-18-19:38:22.001393427Z'
api_repr = {
'data': B64_DATA,
'messageId': MESSAGE_ID,
'publishTime': TIMESTAMP,
}
message = self._getTargetClass().from_api_repr(api_repr)
self.assertEqual(message.data, DATA)
self.assertEqual(message.message_id, MESSAGE_ID)
self.assertEqual(message.attributes, {})
self.assertEqual(message.service_timestamp, TIMESTAMP)
def test_from_api_repr_w_attributes(self):
from base64 import b64encode as b64
DATA = b'DEADBEEF'
B64_DATA = b64(DATA)
MESSAGE_ID = '12345'
ATTRS = {'a': 'b'}
TIMESTAMP = '2016-03-18-19:38:22.001393427Z'
api_repr = {
'data': B64_DATA,
'messageId': MESSAGE_ID,
'publishTime': TIMESTAMP,
'attributes': ATTRS,
}
message = self._getTargetClass().from_api_repr(api_repr)
self.assertEqual(message.data, DATA)
self.assertEqual(message.message_id, MESSAGE_ID)
self.assertEqual(message.service_timestamp, TIMESTAMP)
self.assertEqual(message.attributes, ATTRS)
```
I don't currently have a contributor license signed, but will work on that. In the meantime, hoping that someone can pick this up.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pubsub/google/cloud/pubsub/message.py`
Content:
```
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Define API Topics."""
16
17 import base64
18
19 from google.cloud._helpers import _rfc3339_to_datetime
20
21
22 class Message(object):
23 """Messages can be published to a topic and received by subscribers.
24
25 See:
26 https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage
27
28 :type data: bytes
29 :param data: the payload of the message.
30
31 :type message_id: string
32 :param message_id: An ID assigned to the message by the API.
33
34 :type attributes: dict or None
35 :param attributes: Extra metadata associated by the publisher with the
36 message.
37 """
38 _service_timestamp = None
39
40 def __init__(self, data, message_id, attributes=None):
41 self.data = data
42 self.message_id = message_id
43 self._attributes = attributes
44
45 @property
46 def attributes(self):
47 """Lazily-constructed attribute dictionary."""
48 if self._attributes is None:
49 self._attributes = {}
50 return self._attributes
51
52 @property
53 def timestamp(self):
54 """Return sortable timestamp from attributes, if passed.
55
56 Allows sorting messages in publication order (assuming consistent
57 clocks across all publishers).
58
59 :rtype: :class:`datetime.datetime`
60 :returns: timestamp (in UTC timezone) parsed from RFC 3339 timestamp
61 :raises: ValueError if timestamp not in ``attributes``, or if it does
62 not match the RFC 3339 format.
63 """
64 stamp = self.attributes.get('timestamp')
65 if stamp is None:
66 raise ValueError('No timestamp')
67 return _rfc3339_to_datetime(stamp)
68
69 @property
70 def service_timestamp(self):
71 """Return server-set timestamp.
72
73 :rtype: string
74 :returns: timestamp (in UTC timezone) in RFC 3339 format
75 """
76 return self._service_timestamp
77
78 @classmethod
79 def from_api_repr(cls, api_repr):
80 """Factory: construct message from API representation.
81
82 :type api_repr: dict or None
83 :param api_repr: The API representation of the message
84
85 :rtype: :class:`Message`
86 :returns: The message created from the response.
87 """
88 data = base64.b64decode(api_repr.get('data', b''))
89 instance = cls(
90 data=data, message_id=api_repr['messageId'],
91 attributes=api_repr.get('attributes'))
92 instance._service_timestamp = api_repr.get('publishTimestamp')
93 return instance
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pubsub/google/cloud/pubsub/message.py b/pubsub/google/cloud/pubsub/message.py
--- a/pubsub/google/cloud/pubsub/message.py
+++ b/pubsub/google/cloud/pubsub/message.py
@@ -89,5 +89,5 @@
instance = cls(
data=data, message_id=api_repr['messageId'],
attributes=api_repr.get('attributes'))
- instance._service_timestamp = api_repr.get('publishTimestamp')
+ instance._service_timestamp = api_repr.get('publishTime')
return instance
|
{"golden_diff": "diff --git a/pubsub/google/cloud/pubsub/message.py b/pubsub/google/cloud/pubsub/message.py\n--- a/pubsub/google/cloud/pubsub/message.py\n+++ b/pubsub/google/cloud/pubsub/message.py\n@@ -89,5 +89,5 @@\n instance = cls(\n data=data, message_id=api_repr['messageId'],\n attributes=api_repr.get('attributes'))\n- instance._service_timestamp = api_repr.get('publishTimestamp')\n+ instance._service_timestamp = api_repr.get('publishTime')\n return instance\n", "issue": "Pubsub message getting wrong attribute for publishTime\nAccording the [REST docs](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage), a `PubsubMessage` has the field `publishTime`\n\nIn [message.py](https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/pubsub/google/cloud/pubsub/message.py), `from_api_repr` is getting the field `publishTimestamp` below:\n\n```\ninstance._service_timestamp = api_repr.get('publishTimestamp')\n```\n\nThe current tests are self-confirming of this issue as they simply set up the api_repr with `publishTimestamp`\n\nA quick fix seems to adjust the following:\n**message.py**\n\n``` python\n @classmethod\n def from_api_repr(cls, api_repr):\n \"\"\"Factory: construct message from API representation.\n\n :type api_repr: dict or None\n :param api_repr: The API representation of the message\n\n :rtype: :class:`Message`\n :returns: The message created from the response.\n \"\"\"\n data = base64.b64decode(api_repr.get('data', b''))\n instance = cls(\n data=data, message_id=api_repr['messageId'],\n attributes=api_repr.get('attributes'))\n instance._service_timestamp = api_repr.get('publishTime')\n return instance\n```\n\n**test_message.py**\n\n``` python\n def test_from_api_repr_no_attributes(self):\n from base64 import b64encode as b64\n DATA = b'DEADBEEF'\n B64_DATA = b64(DATA)\n MESSAGE_ID = '12345'\n TIMESTAMP = '2016-03-18-19:38:22.001393427Z'\n api_repr = {\n 'data': B64_DATA,\n 'messageId': MESSAGE_ID,\n 'publishTime': TIMESTAMP,\n }\n message = self._getTargetClass().from_api_repr(api_repr)\n self.assertEqual(message.data, DATA)\n self.assertEqual(message.message_id, MESSAGE_ID)\n self.assertEqual(message.attributes, {})\n self.assertEqual(message.service_timestamp, TIMESTAMP)\n\n def test_from_api_repr_w_attributes(self):\n from base64 import b64encode as b64\n DATA = b'DEADBEEF'\n B64_DATA = b64(DATA)\n MESSAGE_ID = '12345'\n ATTRS = {'a': 'b'}\n TIMESTAMP = '2016-03-18-19:38:22.001393427Z'\n api_repr = {\n 'data': B64_DATA,\n 'messageId': MESSAGE_ID,\n 'publishTime': TIMESTAMP,\n 'attributes': ATTRS,\n }\n message = self._getTargetClass().from_api_repr(api_repr)\n self.assertEqual(message.data, DATA)\n self.assertEqual(message.message_id, MESSAGE_ID)\n self.assertEqual(message.service_timestamp, TIMESTAMP)\n self.assertEqual(message.attributes, ATTRS)\n```\n\nI don't currently have a contributor license signed, but will work on that. 
In the meantime, hoping that someone can pick this up.\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Define API Topics.\"\"\"\n\nimport base64\n\nfrom google.cloud._helpers import _rfc3339_to_datetime\n\n\nclass Message(object):\n \"\"\"Messages can be published to a topic and received by subscribers.\n\n See:\n https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage\n\n :type data: bytes\n :param data: the payload of the message.\n\n :type message_id: string\n :param message_id: An ID assigned to the message by the API.\n\n :type attributes: dict or None\n :param attributes: Extra metadata associated by the publisher with the\n message.\n \"\"\"\n _service_timestamp = None\n\n def __init__(self, data, message_id, attributes=None):\n self.data = data\n self.message_id = message_id\n self._attributes = attributes\n\n @property\n def attributes(self):\n \"\"\"Lazily-constructed attribute dictionary.\"\"\"\n if self._attributes is None:\n self._attributes = {}\n return self._attributes\n\n @property\n def timestamp(self):\n \"\"\"Return sortable timestamp from attributes, if passed.\n\n Allows sorting messages in publication order (assuming consistent\n clocks across all publishers).\n\n :rtype: :class:`datetime.datetime`\n :returns: timestamp (in UTC timezone) parsed from RFC 3339 timestamp\n :raises: ValueError if timestamp not in ``attributes``, or if it does\n not match the RFC 3339 format.\n \"\"\"\n stamp = self.attributes.get('timestamp')\n if stamp is None:\n raise ValueError('No timestamp')\n return _rfc3339_to_datetime(stamp)\n\n @property\n def service_timestamp(self):\n \"\"\"Return server-set timestamp.\n\n :rtype: string\n :returns: timestamp (in UTC timezone) in RFC 3339 format\n \"\"\"\n return self._service_timestamp\n\n @classmethod\n def from_api_repr(cls, api_repr):\n \"\"\"Factory: construct message from API representation.\n\n :type api_repr: dict or None\n :param api_repr: The API representation of the message\n\n :rtype: :class:`Message`\n :returns: The message created from the response.\n \"\"\"\n data = base64.b64decode(api_repr.get('data', b''))\n instance = cls(\n data=data, message_id=api_repr['messageId'],\n attributes=api_repr.get('attributes'))\n instance._service_timestamp = api_repr.get('publishTimestamp')\n return instance\n", "path": "pubsub/google/cloud/pubsub/message.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under 
the License.\n\n\"\"\"Define API Topics.\"\"\"\n\nimport base64\n\nfrom google.cloud._helpers import _rfc3339_to_datetime\n\n\nclass Message(object):\n \"\"\"Messages can be published to a topic and received by subscribers.\n\n See:\n https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage\n\n :type data: bytes\n :param data: the payload of the message.\n\n :type message_id: string\n :param message_id: An ID assigned to the message by the API.\n\n :type attributes: dict or None\n :param attributes: Extra metadata associated by the publisher with the\n message.\n \"\"\"\n _service_timestamp = None\n\n def __init__(self, data, message_id, attributes=None):\n self.data = data\n self.message_id = message_id\n self._attributes = attributes\n\n @property\n def attributes(self):\n \"\"\"Lazily-constructed attribute dictionary.\"\"\"\n if self._attributes is None:\n self._attributes = {}\n return self._attributes\n\n @property\n def timestamp(self):\n \"\"\"Return sortable timestamp from attributes, if passed.\n\n Allows sorting messages in publication order (assuming consistent\n clocks across all publishers).\n\n :rtype: :class:`datetime.datetime`\n :returns: timestamp (in UTC timezone) parsed from RFC 3339 timestamp\n :raises: ValueError if timestamp not in ``attributes``, or if it does\n not match the RFC 3339 format.\n \"\"\"\n stamp = self.attributes.get('timestamp')\n if stamp is None:\n raise ValueError('No timestamp')\n return _rfc3339_to_datetime(stamp)\n\n @property\n def service_timestamp(self):\n \"\"\"Return server-set timestamp.\n\n :rtype: string\n :returns: timestamp (in UTC timezone) in RFC 3339 format\n \"\"\"\n return self._service_timestamp\n\n @classmethod\n def from_api_repr(cls, api_repr):\n \"\"\"Factory: construct message from API representation.\n\n :type api_repr: dict or None\n :param api_repr: The API representation of the message\n\n :rtype: :class:`Message`\n :returns: The message created from the response.\n \"\"\"\n data = base64.b64decode(api_repr.get('data', b''))\n instance = cls(\n data=data, message_id=api_repr['messageId'],\n attributes=api_repr.get('attributes'))\n instance._service_timestamp = api_repr.get('publishTime')\n return instance\n", "path": "pubsub/google/cloud/pubsub/message.py"}]}
| 1,805 | 115 |
gh_patches_debug_18674 | rasdani/github-patches | git_diff | facebookresearch__CompilerGym-512 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
protoc segfaults on macOS 12.0.1
## 🐛 Bug
The version of protoc used by the CompilerGym build segfaults on macOS 12.0.1:
```
$ bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/protoc
[1] 82656 segmentation fault bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/protoc
```
Thanks @mostafaelhoushi for discovering this!
## To Reproduce
Steps to reproduce the behavior:
1. Update to macOS 12.0.1.
1. Start from a clean build: `make distclean`.
1. Attempt to build CompilerGym:
```
$ make install BAZEL_BUILD_OPTS='--sandbox_debug'
...
/usr/bin/sandbox-exec -f /private/var/tmp/_bazel_cummins/c3f286fbbefcd6317d9b13e427e86632/sandbox/darwin-sandbox/3008/sandbox.sb /var/tmp/_bazel_cummins/install/97cf8d40e3de7fca7ef885fa763bde13/process-wrapper '--timeout=0' '--kill_delay=15' bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/protoc '--python_out=bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/python' -Iexternal/com_github_protocolbuffers_protobuf/python -Ibazel-out/host/bin/external/com_github_protocolbuffers_protobuf/python bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/python/google/protobuf/timestamp.proto) sandbox-exec failed: error executing command
...
```
## Environment
Please fill in this checklist:
- CompilerGym: 0.2.1
- How you installed CompilerGym (conda, pip, source): source
- OS: macOS 12.0.1
- Python version: 3.8
- GCC/clang version (if compiling from source): Apple clang 12.0.5
- Bazel version (if compiling from source):
- Versions of any other relevant libraries:
You may use the PyTorch
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
to generate most of this information. You can get the script and run it with:
```sh
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 """Extract a list of passes form the LLVM source tree.
6
7 Usage:
8
9 $ extract_passes_from_llvm_source_tree /path/to/llvm/source/root
10
11 Optionally accepts a list of specific files to examine:
12
13 $ extract_passes_from_llvm_source_tree /path/to/llvm/source/root /path/to/llvm/source/file
14
15 Implementation notes
16 --------------------
17
18 This implements a not-very-good parser for the INITIALIZE_PASS() family of
19 macros, which are used in the LLVM sources to declare a pass using it's name,
20 flag, and docstring. Parsing known macros like this is fragile and likely to
21 break as the LLVM sources evolve. Currently only tested on LLVM 10.0.
22
23 A more robust solution would be to parse the C++ sources and extract all classes
24 which inherit from ModulePass etc.
25 """
26 import codecs
27 import csv
28 import logging
29 import os
30 import re
31 import subprocess
32 import sys
33 from pathlib import Path
34 from typing import Dict, Iterable, List, Optional, Tuple
35
36 from common import Pass
37 from config import CREATE_PASS_NAME_MAP
38
39 logger = logging.getLogger(__name__)
40
41 # A regular expression to match the start of an invocation of one of the
42 # InitializePass helper macros.
43 INITIALIZE_PASS_RE = r"(INITIALIZE_PASS|INITIALIZE_PASS_BEGIN|INITIALIZE_PASS_WITH_OPTIONS|INITIALIZE_PASS_WITH_OPTIONS_BEGIN)\("
44 # A regular expression to match static const string definitions.
45 CONST_CHAR_RE = r'^\s*static\s+const\s+char(\s+(?P<name>[a-zA-Z_]+)\s*\[\s*\]|\s*\*\s*(?P<ptr_name>[a-zA-Z_]+))\s*=\s*(?P<value>".+")\s*;'
46
47
48 class ParseError(ValueError):
49 def __init__(self, message: str, source: str, components: List[str]):
50 self.message = message
51 self.source = source
52 self.components = components
53
54
55 def parse_initialize_pass(
56 source_path: Path, header: Optional[str], input_source: str, defines: Dict[str, str]
57 ) -> Iterable[Pass]:
58 """A shitty parser for INITIALIZE_PASS() macro invocations.."""
59 # Squish down to a single line.
60 source = re.sub(r"\n\s*", " ", input_source, re.MULTILINE)
61 # Contract multi-spaces to single space.
62 source = re.sub(r",", ", ", source)
63 source = re.sub(r"\s+", " ", source)
64 source = re.sub(r"\(\s+", "(", source)
65 source = re.sub(r"\)\s+", ")", source)
66
67 # Strip the INITIALIZE_PASS(...) macro.
68 match = re.match(rf"^\s*{INITIALIZE_PASS_RE}(?P<args>.+)\)", source)
69 if not match:
70 raise ParseError("Failed to match INITIALIZE_PASS regex", source, [])
71 source = match.group("args")
72
73 components = []
74 start = 0
75 in_quotes = False
76 in_comment = False
77 for i in range(len(source)):
78 if (
79 not in_comment
80 and source[i] == "/"
81 and i < len(source) - 1
82 and source[i + 1] == "*"
83 ):
84 in_comment = True
85 if (
86 in_comment
87 and source[i] == "*"
88 and i < len(source) - 1
89 and source[i + 1] == "/"
90 ):
91 in_comment = False
92 start = i + 2
93 if source[i] == '"':
94 in_quotes = not in_quotes
95 if not in_quotes and source[i] == ",":
96 components.append(source[start:i].strip())
97 start = i + 2
98 components.append(source[start:].strip())
99 if len(components) != 5:
100 raise ParseError(
101 f"Expected 5 components, found {len(components)}", source, components
102 )
103
104 pass_name, arg, name, cfg, analysis = components
105 # Strip quotation marks in arg and name.
106 if not arg:
107 raise ParseError(f"Empty arg: `{arg}`", source, components)
108 if not name:
109 raise ParseError(f"Empty name: `{name}`", source, components)
110
111 while arg in defines:
112 arg = defines[arg]
113 while name in defines:
114 name = defines[name]
115
116 if not (arg[0] == '"' and arg[-1] == '"'):
117 raise ParseError(f"Could not interpret arg `{arg}`", source, components)
118 arg = arg[1:-1]
119 if not (name[0] == '"' and name[-1] == '"'):
120 raise ParseError(f"Could not interpret name `{name}`", source, components)
121 name = name[1:-1]
122
123 # Convert cfg and analysis to bool.
124 if cfg not in {"true", "false"}:
125 raise ParseError(
126 f"Could not interpret bool cfg argument `{cfg}`", source, components
127 )
128 if analysis not in {"true", "false"}:
129 raise ParseError(
130 f"Could not interpret bool analysis argument `{analysis}`",
131 source,
132 components,
133 )
134 cfg = cfg == "true"
135 analysis = analysis == "true"
136
137 opts = {
138 "source": source_path,
139 "header": header,
140 "name": pass_name,
141 "flag": f"-{arg}",
142 "description": name,
143 "cfg": cfg,
144 "is_analysis": analysis,
145 }
146
147 pass_name_or_list = CREATE_PASS_NAME_MAP.get(pass_name, pass_name)
148
149 if isinstance(pass_name_or_list, str):
150 opts["name"] = pass_name_or_list
151 yield Pass(**opts)
152 else:
153 for name in pass_name_or_list:
154 opts["name"] = name
155 yield Pass(**opts)
156
157
158 def build_defines(source: str) -> Dict[str, str]:
159 """A quick-and-dirty technique to build a translation table from #defines
160 and string literals to their values."""
161 defines = {}
162 lines = source.split("\n")
163 for i in range(len(lines)):
164 line = lines[i].strip()
165 if line.startswith("#define"):
166 # Match #define strings.
167 components = line[len("#define ") :].split()
168 name = components[0]
169 value = " ".join(components[1:]).strip()
170 if value == "\\":
171 value = lines[i + 1].strip()
172 defines[name] = value
173 else:
174 # Match string literals.
175 match = re.match(CONST_CHAR_RE, line)
176 if match:
177 defines[match.group("name") or match.group("ptr_name")] = match.group(
178 "value"
179 )
180 return defines
181
182
183 def handle_file(source_path: Path) -> Tuple[Path, List[Pass]]:
184 """Parse the passes declared in a file."""
185 assert str(source_path).endswith(".cpp"), f"Unexpected file type: {source_path}"
186
187 header = Path("include/llvm/" + str(source_path)[len("lib") : -len("cpp")] + "h")
188 if not header.is_file():
189 header = ""
190
191 with codecs.open(source_path, "r", "utf-8") as f:
192 source = f.read()
193
194 defines = build_defines(source)
195
196 passes: List[Pass] = []
197
198 for match in re.finditer(INITIALIZE_PASS_RE, source):
199 start = match.start()
200 first_bracket = source.find("(", start)
201 bracket_depth = 1
202 end = first_bracket
203 for end in range(first_bracket + 1, len(source)):
204 if source[end] == "(":
205 bracket_depth += 1
206 elif source[end] == ")":
207 bracket_depth -= 1
208 if not bracket_depth:
209 break
210
211 try:
212 passes += list(
213 parse_initialize_pass(
214 source_path, header, source[start : end + 1], defines
215 )
216 )
217 except ParseError as e:
218 print(f"Parsing error: {e.message}", file=sys.stderr)
219 print(f"Parsed components: {e.components}", file=sys.stderr)
220 print(f"In line: {e.source}", file=sys.stderr)
221 print(f"In file: {source_path}", file=sys.stderr)
222 print("Fatal error. Aborting now.", file=sys.stderr)
223 sys.exit(1)
224
225 if passes:
226 logger.debug(
227 f"Extracted {len(passes)} {'passes' if len(passes) - 1 else 'pass'} from {source_path}",
228 )
229 else:
230 logger.debug(f"Found no passes in {source_path}")
231
232 return passes
233
234
235 def main(argv):
236 root = Path(argv[1])
237 assert root.is_dir(), f"Not a directory: {root}"
238 os.chdir(root)
239
240 if len(argv) > 2:
241 paths = [Path(path) for path in argv[2:]]
242 else:
243 # Get the names of all files which contain a pass definition.
244 matching_paths = []
245 grep = subprocess.check_output(
246 ["grep", "-l", "-E", rf"^\s*{INITIALIZE_PASS_RE}", "-R", "lib/"],
247 universal_newlines=True,
248 )
249 matching_paths += grep.strip().split("\n")
250 logger.debug("Processing %s files ...", len(matching_paths))
251 paths = [Path(path) for path in matching_paths]
252
253 # Build a list of pass entries.
254 rows = []
255 for path in sorted(paths):
256 passes = handle_file(path)
257 if passes:
258 rows += passes
259
260 writer = csv.writer(sys.stdout, delimiter=",", quotechar='"')
261 writer.writerow(Pass._fields)
262 writer.writerows(sorted(rows, key=lambda r: r.name))
263
264
265 if __name__ == "__main__":
266 main(sys.argv)
267
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py b/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py
--- a/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py
+++ b/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py
@@ -242,10 +242,17 @@
else:
# Get the names of all files which contain a pass definition.
matching_paths = []
- grep = subprocess.check_output(
- ["grep", "-l", "-E", rf"^\s*{INITIALIZE_PASS_RE}", "-R", "lib/"],
- universal_newlines=True,
- )
+ try:
+ grep = subprocess.check_output(
+ ["grep", "-l", "-E", rf"^\s*{INITIALIZE_PASS_RE}", "-R", "lib/"],
+ universal_newlines=True,
+ )
+ except subprocess.CalledProcessError:
+ print(
+ f"fatal: Failed to find any LLVM pass declarations in {root}",
+ file=sys.stderr,
+ )
+ sys.exit(1)
matching_paths += grep.strip().split("\n")
logger.debug("Processing %s files ...", len(matching_paths))
paths = [Path(path) for path in matching_paths]
|
{"golden_diff": "diff --git a/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py b/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py\n--- a/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py\n+++ b/compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py\n@@ -242,10 +242,17 @@\n else:\n # Get the names of all files which contain a pass definition.\n matching_paths = []\n- grep = subprocess.check_output(\n- [\"grep\", \"-l\", \"-E\", rf\"^\\s*{INITIALIZE_PASS_RE}\", \"-R\", \"lib/\"],\n- universal_newlines=True,\n- )\n+ try:\n+ grep = subprocess.check_output(\n+ [\"grep\", \"-l\", \"-E\", rf\"^\\s*{INITIALIZE_PASS_RE}\", \"-R\", \"lib/\"],\n+ universal_newlines=True,\n+ )\n+ except subprocess.CalledProcessError:\n+ print(\n+ f\"fatal: Failed to find any LLVM pass declarations in {root}\",\n+ file=sys.stderr,\n+ )\n+ sys.exit(1)\n matching_paths += grep.strip().split(\"\\n\")\n logger.debug(\"Processing %s files ...\", len(matching_paths))\n paths = [Path(path) for path in matching_paths]\n", "issue": "protoc segfaults on macOS 12.0.1\n## \ud83d\udc1b Bug\r\n\r\nThe version of protoc used by the CompilerGym build segfaults on macOS 12.0.1:\r\n\r\n```\r\n$ bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/protoc\r\n[1] 82656 segmentation fault bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/protoc\r\n```\r\n\r\nThanks @mostafaelhoushi for discovering this!\r\n\r\n## To Reproduce\r\n\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Update to macOS 12.0.1.\r\n1. Start from a clean build: `make distclean`.\r\n1. Attempt to build CompilerGym:\r\n\r\n```\r\n$ make install BAZEL_BUILD_OPTS='--sandbox_debug'\r\n...\r\n /usr/bin/sandbox-exec -f /private/var/tmp/_bazel_cummins/c3f286fbbefcd6317d9b13e427e86632/sandbox/darwin-sandbox/3008/sandbox.sb /var/tmp/_bazel_cummins/install/97cf8d40e3de7fca7ef885fa763bde13/process-wrapper '--timeout=0' '--kill_delay=15' bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/protoc '--python_out=bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/python' -Iexternal/com_github_protocolbuffers_protobuf/python -Ibazel-out/host/bin/external/com_github_protocolbuffers_protobuf/python bazel-out/host/bin/external/com_github_protocolbuffers_protobuf/python/google/protobuf/timestamp.proto) sandbox-exec failed: error executing command\r\n...\r\n```\r\n\r\n## Environment\r\n\r\nPlease fill in this checklist:\r\n\r\n- CompilerGym: 0.2.1\r\n- How you installed CompilerGym (conda, pip, source): source\r\n- OS: macOS 12.0.1\r\n- Python version: 3.8\r\n- GCC/clang version (if compiling from source): Apple clang 12.0.5\r\n- Bazel version (if compiling from source): \r\n- Versions of any other relevant libraries:\r\n\r\nYou may use the PyTorch\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\nto generate most of this information. You can get the script and run it with:\r\n\r\n```sh\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. 
and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Extract a list of passes form the LLVM source tree.\n\nUsage:\n\n $ extract_passes_from_llvm_source_tree /path/to/llvm/source/root\n\nOptionally accepts a list of specific files to examine:\n\n $ extract_passes_from_llvm_source_tree /path/to/llvm/source/root /path/to/llvm/source/file\n\nImplementation notes\n--------------------\n\nThis implements a not-very-good parser for the INITIALIZE_PASS() family of\nmacros, which are used in the LLVM sources to declare a pass using it's name,\nflag, and docstring. Parsing known macros like this is fragile and likely to\nbreak as the LLVM sources evolve. Currently only tested on LLVM 10.0.\n\nA more robust solution would be to parse the C++ sources and extract all classes\nwhich inherit from ModulePass etc.\n\"\"\"\nimport codecs\nimport csv\nimport logging\nimport os\nimport re\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Dict, Iterable, List, Optional, Tuple\n\nfrom common import Pass\nfrom config import CREATE_PASS_NAME_MAP\n\nlogger = logging.getLogger(__name__)\n\n# A regular expression to match the start of an invocation of one of the\n# InitializePass helper macros.\nINITIALIZE_PASS_RE = r\"(INITIALIZE_PASS|INITIALIZE_PASS_BEGIN|INITIALIZE_PASS_WITH_OPTIONS|INITIALIZE_PASS_WITH_OPTIONS_BEGIN)\\(\"\n# A regular expression to match static const string definitions.\nCONST_CHAR_RE = r'^\\s*static\\s+const\\s+char(\\s+(?P<name>[a-zA-Z_]+)\\s*\\[\\s*\\]|\\s*\\*\\s*(?P<ptr_name>[a-zA-Z_]+))\\s*=\\s*(?P<value>\".+\")\\s*;'\n\n\nclass ParseError(ValueError):\n def __init__(self, message: str, source: str, components: List[str]):\n self.message = message\n self.source = source\n self.components = components\n\n\ndef parse_initialize_pass(\n source_path: Path, header: Optional[str], input_source: str, defines: Dict[str, str]\n) -> Iterable[Pass]:\n \"\"\"A shitty parser for INITIALIZE_PASS() macro invocations..\"\"\"\n # Squish down to a single line.\n source = re.sub(r\"\\n\\s*\", \" \", input_source, re.MULTILINE)\n # Contract multi-spaces to single space.\n source = re.sub(r\",\", \", \", source)\n source = re.sub(r\"\\s+\", \" \", source)\n source = re.sub(r\"\\(\\s+\", \"(\", source)\n source = re.sub(r\"\\)\\s+\", \")\", source)\n\n # Strip the INITIALIZE_PASS(...) 
macro.\n match = re.match(rf\"^\\s*{INITIALIZE_PASS_RE}(?P<args>.+)\\)\", source)\n if not match:\n raise ParseError(\"Failed to match INITIALIZE_PASS regex\", source, [])\n source = match.group(\"args\")\n\n components = []\n start = 0\n in_quotes = False\n in_comment = False\n for i in range(len(source)):\n if (\n not in_comment\n and source[i] == \"/\"\n and i < len(source) - 1\n and source[i + 1] == \"*\"\n ):\n in_comment = True\n if (\n in_comment\n and source[i] == \"*\"\n and i < len(source) - 1\n and source[i + 1] == \"/\"\n ):\n in_comment = False\n start = i + 2\n if source[i] == '\"':\n in_quotes = not in_quotes\n if not in_quotes and source[i] == \",\":\n components.append(source[start:i].strip())\n start = i + 2\n components.append(source[start:].strip())\n if len(components) != 5:\n raise ParseError(\n f\"Expected 5 components, found {len(components)}\", source, components\n )\n\n pass_name, arg, name, cfg, analysis = components\n # Strip quotation marks in arg and name.\n if not arg:\n raise ParseError(f\"Empty arg: `{arg}`\", source, components)\n if not name:\n raise ParseError(f\"Empty name: `{name}`\", source, components)\n\n while arg in defines:\n arg = defines[arg]\n while name in defines:\n name = defines[name]\n\n if not (arg[0] == '\"' and arg[-1] == '\"'):\n raise ParseError(f\"Could not interpret arg `{arg}`\", source, components)\n arg = arg[1:-1]\n if not (name[0] == '\"' and name[-1] == '\"'):\n raise ParseError(f\"Could not interpret name `{name}`\", source, components)\n name = name[1:-1]\n\n # Convert cfg and analysis to bool.\n if cfg not in {\"true\", \"false\"}:\n raise ParseError(\n f\"Could not interpret bool cfg argument `{cfg}`\", source, components\n )\n if analysis not in {\"true\", \"false\"}:\n raise ParseError(\n f\"Could not interpret bool analysis argument `{analysis}`\",\n source,\n components,\n )\n cfg = cfg == \"true\"\n analysis = analysis == \"true\"\n\n opts = {\n \"source\": source_path,\n \"header\": header,\n \"name\": pass_name,\n \"flag\": f\"-{arg}\",\n \"description\": name,\n \"cfg\": cfg,\n \"is_analysis\": analysis,\n }\n\n pass_name_or_list = CREATE_PASS_NAME_MAP.get(pass_name, pass_name)\n\n if isinstance(pass_name_or_list, str):\n opts[\"name\"] = pass_name_or_list\n yield Pass(**opts)\n else:\n for name in pass_name_or_list:\n opts[\"name\"] = name\n yield Pass(**opts)\n\n\ndef build_defines(source: str) -> Dict[str, str]:\n \"\"\"A quick-and-dirty technique to build a translation table from #defines\n and string literals to their values.\"\"\"\n defines = {}\n lines = source.split(\"\\n\")\n for i in range(len(lines)):\n line = lines[i].strip()\n if line.startswith(\"#define\"):\n # Match #define strings.\n components = line[len(\"#define \") :].split()\n name = components[0]\n value = \" \".join(components[1:]).strip()\n if value == \"\\\\\":\n value = lines[i + 1].strip()\n defines[name] = value\n else:\n # Match string literals.\n match = re.match(CONST_CHAR_RE, line)\n if match:\n defines[match.group(\"name\") or match.group(\"ptr_name\")] = match.group(\n \"value\"\n )\n return defines\n\n\ndef handle_file(source_path: Path) -> Tuple[Path, List[Pass]]:\n \"\"\"Parse the passes declared in a file.\"\"\"\n assert str(source_path).endswith(\".cpp\"), f\"Unexpected file type: {source_path}\"\n\n header = Path(\"include/llvm/\" + str(source_path)[len(\"lib\") : -len(\"cpp\")] + \"h\")\n if not header.is_file():\n header = \"\"\n\n with codecs.open(source_path, \"r\", \"utf-8\") as f:\n source = f.read()\n\n defines = 
build_defines(source)\n\n passes: List[Pass] = []\n\n for match in re.finditer(INITIALIZE_PASS_RE, source):\n start = match.start()\n first_bracket = source.find(\"(\", start)\n bracket_depth = 1\n end = first_bracket\n for end in range(first_bracket + 1, len(source)):\n if source[end] == \"(\":\n bracket_depth += 1\n elif source[end] == \")\":\n bracket_depth -= 1\n if not bracket_depth:\n break\n\n try:\n passes += list(\n parse_initialize_pass(\n source_path, header, source[start : end + 1], defines\n )\n )\n except ParseError as e:\n print(f\"Parsing error: {e.message}\", file=sys.stderr)\n print(f\"Parsed components: {e.components}\", file=sys.stderr)\n print(f\"In line: {e.source}\", file=sys.stderr)\n print(f\"In file: {source_path}\", file=sys.stderr)\n print(\"Fatal error. Aborting now.\", file=sys.stderr)\n sys.exit(1)\n\n if passes:\n logger.debug(\n f\"Extracted {len(passes)} {'passes' if len(passes) - 1 else 'pass'} from {source_path}\",\n )\n else:\n logger.debug(f\"Found no passes in {source_path}\")\n\n return passes\n\n\ndef main(argv):\n root = Path(argv[1])\n assert root.is_dir(), f\"Not a directory: {root}\"\n os.chdir(root)\n\n if len(argv) > 2:\n paths = [Path(path) for path in argv[2:]]\n else:\n # Get the names of all files which contain a pass definition.\n matching_paths = []\n grep = subprocess.check_output(\n [\"grep\", \"-l\", \"-E\", rf\"^\\s*{INITIALIZE_PASS_RE}\", \"-R\", \"lib/\"],\n universal_newlines=True,\n )\n matching_paths += grep.strip().split(\"\\n\")\n logger.debug(\"Processing %s files ...\", len(matching_paths))\n paths = [Path(path) for path in matching_paths]\n\n # Build a list of pass entries.\n rows = []\n for path in sorted(paths):\n passes = handle_file(path)\n if passes:\n rows += passes\n\n writer = csv.writer(sys.stdout, delimiter=\",\", quotechar='\"')\n writer.writerow(Pass._fields)\n writer.writerows(sorted(rows, key=lambda r: r.name))\n\n\nif __name__ == \"__main__\":\n main(sys.argv)\n", "path": "compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Extract a list of passes form the LLVM source tree.\n\nUsage:\n\n $ extract_passes_from_llvm_source_tree /path/to/llvm/source/root\n\nOptionally accepts a list of specific files to examine:\n\n $ extract_passes_from_llvm_source_tree /path/to/llvm/source/root /path/to/llvm/source/file\n\nImplementation notes\n--------------------\n\nThis implements a not-very-good parser for the INITIALIZE_PASS() family of\nmacros, which are used in the LLVM sources to declare a pass using it's name,\nflag, and docstring. Parsing known macros like this is fragile and likely to\nbreak as the LLVM sources evolve. 
Currently only tested on LLVM 10.0.\n\nA more robust solution would be to parse the C++ sources and extract all classes\nwhich inherit from ModulePass etc.\n\"\"\"\nimport codecs\nimport csv\nimport logging\nimport os\nimport re\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Dict, Iterable, List, Optional, Tuple\n\nfrom common import Pass\nfrom config import CREATE_PASS_NAME_MAP\n\nlogger = logging.getLogger(__name__)\n\n# A regular expression to match the start of an invocation of one of the\n# InitializePass helper macros.\nINITIALIZE_PASS_RE = r\"(INITIALIZE_PASS|INITIALIZE_PASS_BEGIN|INITIALIZE_PASS_WITH_OPTIONS|INITIALIZE_PASS_WITH_OPTIONS_BEGIN)\\(\"\n# A regular expression to match static const string definitions.\nCONST_CHAR_RE = r'^\\s*static\\s+const\\s+char(\\s+(?P<name>[a-zA-Z_]+)\\s*\\[\\s*\\]|\\s*\\*\\s*(?P<ptr_name>[a-zA-Z_]+))\\s*=\\s*(?P<value>\".+\")\\s*;'\n\n\nclass ParseError(ValueError):\n def __init__(self, message: str, source: str, components: List[str]):\n self.message = message\n self.source = source\n self.components = components\n\n\ndef parse_initialize_pass(\n source_path: Path, header: Optional[str], input_source: str, defines: Dict[str, str]\n) -> Iterable[Pass]:\n \"\"\"A shitty parser for INITIALIZE_PASS() macro invocations..\"\"\"\n # Squish down to a single line.\n source = re.sub(r\"\\n\\s*\", \" \", input_source, re.MULTILINE)\n # Contract multi-spaces to single space.\n source = re.sub(r\",\", \", \", source)\n source = re.sub(r\"\\s+\", \" \", source)\n source = re.sub(r\"\\(\\s+\", \"(\", source)\n source = re.sub(r\"\\)\\s+\", \")\", source)\n\n # Strip the INITIALIZE_PASS(...) macro.\n match = re.match(rf\"^\\s*{INITIALIZE_PASS_RE}(?P<args>.+)\\)\", source)\n if not match:\n raise ParseError(\"Failed to match INITIALIZE_PASS regex\", source, [])\n source = match.group(\"args\")\n\n components = []\n start = 0\n in_quotes = False\n in_comment = False\n for i in range(len(source)):\n if (\n not in_comment\n and source[i] == \"/\"\n and i < len(source) - 1\n and source[i + 1] == \"*\"\n ):\n in_comment = True\n if (\n in_comment\n and source[i] == \"*\"\n and i < len(source) - 1\n and source[i + 1] == \"/\"\n ):\n in_comment = False\n start = i + 2\n if source[i] == '\"':\n in_quotes = not in_quotes\n if not in_quotes and source[i] == \",\":\n components.append(source[start:i].strip())\n start = i + 2\n components.append(source[start:].strip())\n if len(components) != 5:\n raise ParseError(\n f\"Expected 5 components, found {len(components)}\", source, components\n )\n\n pass_name, arg, name, cfg, analysis = components\n # Strip quotation marks in arg and name.\n if not arg:\n raise ParseError(f\"Empty arg: `{arg}`\", source, components)\n if not name:\n raise ParseError(f\"Empty name: `{name}`\", source, components)\n\n while arg in defines:\n arg = defines[arg]\n while name in defines:\n name = defines[name]\n\n if not (arg[0] == '\"' and arg[-1] == '\"'):\n raise ParseError(f\"Could not interpret arg `{arg}`\", source, components)\n arg = arg[1:-1]\n if not (name[0] == '\"' and name[-1] == '\"'):\n raise ParseError(f\"Could not interpret name `{name}`\", source, components)\n name = name[1:-1]\n\n # Convert cfg and analysis to bool.\n if cfg not in {\"true\", \"false\"}:\n raise ParseError(\n f\"Could not interpret bool cfg argument `{cfg}`\", source, components\n )\n if analysis not in {\"true\", \"false\"}:\n raise ParseError(\n f\"Could not interpret bool analysis argument `{analysis}`\",\n source,\n components,\n 
)\n cfg = cfg == \"true\"\n analysis = analysis == \"true\"\n\n opts = {\n \"source\": source_path,\n \"header\": header,\n \"name\": pass_name,\n \"flag\": f\"-{arg}\",\n \"description\": name,\n \"cfg\": cfg,\n \"is_analysis\": analysis,\n }\n\n pass_name_or_list = CREATE_PASS_NAME_MAP.get(pass_name, pass_name)\n\n if isinstance(pass_name_or_list, str):\n opts[\"name\"] = pass_name_or_list\n yield Pass(**opts)\n else:\n for name in pass_name_or_list:\n opts[\"name\"] = name\n yield Pass(**opts)\n\n\ndef build_defines(source: str) -> Dict[str, str]:\n \"\"\"A quick-and-dirty technique to build a translation table from #defines\n and string literals to their values.\"\"\"\n defines = {}\n lines = source.split(\"\\n\")\n for i in range(len(lines)):\n line = lines[i].strip()\n if line.startswith(\"#define\"):\n # Match #define strings.\n components = line[len(\"#define \") :].split()\n name = components[0]\n value = \" \".join(components[1:]).strip()\n if value == \"\\\\\":\n value = lines[i + 1].strip()\n defines[name] = value\n else:\n # Match string literals.\n match = re.match(CONST_CHAR_RE, line)\n if match:\n defines[match.group(\"name\") or match.group(\"ptr_name\")] = match.group(\n \"value\"\n )\n return defines\n\n\ndef handle_file(source_path: Path) -> Tuple[Path, List[Pass]]:\n \"\"\"Parse the passes declared in a file.\"\"\"\n assert str(source_path).endswith(\".cpp\"), f\"Unexpected file type: {source_path}\"\n\n header = Path(\"include/llvm/\" + str(source_path)[len(\"lib\") : -len(\"cpp\")] + \"h\")\n if not header.is_file():\n header = \"\"\n\n with codecs.open(source_path, \"r\", \"utf-8\") as f:\n source = f.read()\n\n defines = build_defines(source)\n\n passes: List[Pass] = []\n\n for match in re.finditer(INITIALIZE_PASS_RE, source):\n start = match.start()\n first_bracket = source.find(\"(\", start)\n bracket_depth = 1\n end = first_bracket\n for end in range(first_bracket + 1, len(source)):\n if source[end] == \"(\":\n bracket_depth += 1\n elif source[end] == \")\":\n bracket_depth -= 1\n if not bracket_depth:\n break\n\n try:\n passes += list(\n parse_initialize_pass(\n source_path, header, source[start : end + 1], defines\n )\n )\n except ParseError as e:\n print(f\"Parsing error: {e.message}\", file=sys.stderr)\n print(f\"Parsed components: {e.components}\", file=sys.stderr)\n print(f\"In line: {e.source}\", file=sys.stderr)\n print(f\"In file: {source_path}\", file=sys.stderr)\n print(\"Fatal error. 
Aborting now.\", file=sys.stderr)\n sys.exit(1)\n\n if passes:\n logger.debug(\n f\"Extracted {len(passes)} {'passes' if len(passes) - 1 else 'pass'} from {source_path}\",\n )\n else:\n logger.debug(f\"Found no passes in {source_path}\")\n\n return passes\n\n\ndef main(argv):\n root = Path(argv[1])\n assert root.is_dir(), f\"Not a directory: {root}\"\n os.chdir(root)\n\n if len(argv) > 2:\n paths = [Path(path) for path in argv[2:]]\n else:\n # Get the names of all files which contain a pass definition.\n matching_paths = []\n try:\n grep = subprocess.check_output(\n [\"grep\", \"-l\", \"-E\", rf\"^\\s*{INITIALIZE_PASS_RE}\", \"-R\", \"lib/\"],\n universal_newlines=True,\n )\n except subprocess.CalledProcessError:\n print(\n f\"fatal: Failed to find any LLVM pass declarations in {root}\",\n file=sys.stderr,\n )\n sys.exit(1)\n matching_paths += grep.strip().split(\"\\n\")\n logger.debug(\"Processing %s files ...\", len(matching_paths))\n paths = [Path(path) for path in matching_paths]\n\n # Build a list of pass entries.\n rows = []\n for path in sorted(paths):\n passes = handle_file(path)\n if passes:\n rows += passes\n\n writer = csv.writer(sys.stdout, delimiter=\",\", quotechar='\"')\n writer.writerow(Pass._fields)\n writer.writerows(sorted(rows, key=lambda r: r.name))\n\n\nif __name__ == \"__main__\":\n main(sys.argv)\n", "path": "compiler_gym/envs/llvm/service/passes/extract_passes_from_llvm_source_tree.py"}]}
| 3,703 | 315 |
gh_patches_debug_35542
|
rasdani/github-patches
|
git_diff
|
sublimelsp__LSP-1906
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When disabling an LSP-* plugin, the error popup shows up infinitely
**Describe the bug**
When I disable an LSP-* plugin, the error popup keeps showing indefinitely.
**To Reproduce**
Steps to reproduce the behavior:
1. Have LSP-css installed.
2. Open a css file.
3. Disable the LSP-css plugin.
4. See error
**Expected behavior**
I would expect the server to shut down gracefully, without any popup showing.
**Screenshots**
Instead, the popup keeps showing until I kill the ST app.
[If applicable, add screenshots to help explain your problem.](https://user-images.githubusercontent.com/22029477/143683180-5b1a5a71-57ae-4e35-ac9f-773eefa0076b.mp4)
**Environment (please complete the following information):**
- OS: macOS 11.5.2 Big Sur
- Sublime Text version: 4122
- LSP version: acfd6406ba4680a0e537dc87a72aa5b410a154e7
- Language servers used: [e.g. clangd, gopls, dart, Vetur, intelephense, HIE]
**Additional context**
Here is the error from the ST console:
```
LSP: starting ['', '', '--stdio'] in /Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP
Unable to start subprocess for LSP-css
Traceback (most recent call last):
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/windows.py", line 356, in start_async
transport = create_transport(transport_config, transport_cwd, session)
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py", line 252, in create_transport
process = start_subprocess()
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py", line 241, in start_subprocess
return _start_subprocess(config.command, stdin, stdout, subprocess.PIPE, startupinfo, config.env, cwd)
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py", line 323, in _start_subprocess
cwd=cwd)
File "./python3.3/subprocess.py", line 819, in __init__
File "./python3.3/subprocess.py", line 1448, in _execute_child
PermissionError: [Errno 13] Permission denied
LSP: starting ['', '', '--stdio'] in /Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP
Unable to start subprocess for LSP-css
Traceback (most recent call last):
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/windows.py", line 356, in start_async
transport = create_transport(transport_config, transport_cwd, session)
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py", line 252, in create_transport
process = start_subprocess()
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py", line 241, in start_subprocess
return _start_subprocess(config.command, stdin, stdout, subprocess.PIPE, startupinfo, config.env, cwd)
File "/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py", line 323, in _start_subprocess
cwd=cwd)
File "./python3.3/subprocess.py", line 819, in __init__
File "./python3.3/subprocess.py", line 1448, in _execute_child
PermissionError: [Errno 13] Permission denied
# .... it keeps repeating to infinity ...
```
I sent my work Mac laptop in for service and got it back a few weeks ago.
I remember that I couldn't type `ls` when inside the Documents directory.
I ran into this issue:
https://osxdaily.com/2018/10/09/fix-operation-not-permitted-terminal-error-macos/
So maybe I have some problems with permissions.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/core/configurations.py`
Content:
```
1 from .logging import debug
2 from .types import ClientConfig
3 from .typing import Generator, List, Optional, Set, Dict
4 from .workspace import enable_in_project, disable_in_project
5 import sublime
6 import urllib.parse
7
8
9 class ConfigManager(object):
10 """Distributes language client configuration between windows"""
11
12 def __init__(self, global_configs: Dict[str, ClientConfig]) -> None:
13 self._configs = global_configs
14 self._managers = {} # type: Dict[int, WindowConfigManager]
15
16 def for_window(self, window: sublime.Window) -> 'WindowConfigManager':
17 window_configs = WindowConfigManager(window, self._configs)
18 self._managers[window.id()] = window_configs
19 return window_configs
20
21 def update(self, config_name: Optional[str] = None) -> None:
22 for window in sublime.windows():
23 if window.id() in self._managers:
24 self._managers[window.id()].update(config_name)
25
26
27 class WindowConfigManager(object):
28 def __init__(self, window: sublime.Window, global_configs: Dict[str, ClientConfig]) -> None:
29 self._window = window
30 self._global_configs = global_configs
31 self._disabled_for_session = set() # type: Set[str]
32 self.all = {} # type: Dict[str, ClientConfig]
33 self.update()
34
35 def get_configs(self) -> List[ClientConfig]:
36 return sorted(self.all.values(), key=lambda config: config.name)
37
38 def match_view(self, view: sublime.View, include_disabled: bool = False) -> Generator[ClientConfig, None, None]:
39 """
40 Yields configurations where:
41
42 - the configuration's "selector" matches with the view's base scope, and
43 - the view's URI scheme is an element of the configuration's "schemes".
44 """
45 try:
46 uri = view.settings().get("lsp_uri")
47 if not isinstance(uri, str):
48 return
49 scheme = urllib.parse.urlparse(uri).scheme
50 for config in self.all.values():
51 if config.match_view(view, scheme) and (config.enabled or include_disabled):
52 yield config
53 except (IndexError, RuntimeError):
54 pass
55
56 def update(self, config_name: Optional[str] = None) -> None:
57 project_settings = (self._window.project_data() or {}).get("settings", {}).get("LSP", {})
58 if config_name is None:
59 self.all.clear()
60 for name, config in self._global_configs.items():
61 if config_name and config_name != name:
62 continue
63 overrides = project_settings.pop(name, None)
64 if isinstance(overrides, dict):
65 debug("applying .sublime-project override for", name)
66 else:
67 overrides = {}
68 if name in self._disabled_for_session:
69 overrides["enabled"] = False
70 self.all[name] = ClientConfig.from_config(config, overrides)
71 for name, c in project_settings.items():
72 if config_name and config_name != name:
73 continue
74 debug("loading project-only configuration", name)
75 self.all[name] = ClientConfig.from_dict(name, c)
76 self._window.run_command("lsp_recheck_sessions", {'config_name': config_name})
77
78 def enable_config(self, config_name: str) -> None:
79 if not self._reenable_disabled_for_session(config_name):
80 enable_in_project(self._window, config_name)
81 self.update(config_name)
82
83 def disable_config(self, config_name: str, only_for_session: bool = False) -> None:
84 if only_for_session:
85 self._disable_for_session(config_name)
86 else:
87 disable_in_project(self._window, config_name)
88 self.update(config_name)
89
90 def _disable_for_session(self, config_name: str) -> None:
91 self._disabled_for_session.add(config_name)
92
93 def _reenable_disabled_for_session(self, config_name: str) -> bool:
94 try:
95 self._disabled_for_session.remove(config_name)
96 return True
97 except KeyError:
98 return False
99
```
Path: `plugin/core/settings.py`
Content:
```
1 from .collections import DottedDict
2 from .logging import debug
3 from .types import ClientConfig, debounced
4 from .types import read_dict_setting
5 from .types import Settings
6 from .types import SettingsRegistration
7 from .typing import Any, Optional, Dict, Callable
8 import sublime
9
10
11 PLUGIN_NAME = 'LSP'
12
13
14 class ClientConfigs:
15
16 def __init__(self) -> None:
17 self.all = {} # type: Dict[str, ClientConfig]
18 self.external = {} # type: Dict[str, ClientConfig]
19 self._listener = None # type: Optional[Callable[[Optional[str]], None]]
20
21 def _notify_listener(self, config_name: Optional[str] = None) -> None:
22 if callable(self._listener):
23 self._listener(config_name)
24
25 def add_for_testing(self, config: ClientConfig) -> None:
26 assert config.name not in self.all
27 self.all[config.name] = config
28 self._notify_listener()
29
30 def remove_for_testing(self, config: ClientConfig) -> None:
31 self.all.pop(config.name)
32 self._notify_listener()
33
34 def add_external_config(self, name: str, s: sublime.Settings, file: str, notify_listener: bool) -> bool:
35 if name in self.external:
36 return False
37 config = ClientConfig.from_sublime_settings(name, s, file)
38 self.external[name] = config
39 self.all[name] = config
40 if notify_listener:
41 size = len(self.external)
42 # A debounced call is necessary here because of the following problem.
43 # When Sublime Text starts, it loads plugins in alphabetical order.
44 # Each plugin is loaded 100 milliseconds after the previous plugin.
45 # Therefore, we get a sequence of calls to `register_plugin` from all LSP-* helper packages, separated
46 # in time intervals of 100 milliseconds.
47 # When calling self._notify_listener, we are calling ConfigManager.update.
48 # That object, in turn, calls WindowConfigManager.update for each window.
49 # In turn, each window starts iterating all of its attached views for language servers to attach.
50 # That causes many calls to WindowConfigManager.match_view, which is relatively speaking an expensive
51 # operation. To ensure that this dance is done only once, we delay notifying the ConfigManager until all
52 # plugins have done their `register_plugin` call.
53 debounced(lambda: self._notify_listener(name), 200, lambda: len(self.external) == size)
54 return True
55
56 def remove_external_config(self, name: str) -> None:
57 self.external.pop(name, None)
58 if self.all.pop(name, None):
59 self._notify_listener(name)
60
61 def update_external_config(self, name: str, s: sublime.Settings, file: str) -> None:
62 try:
63 config = ClientConfig.from_sublime_settings(name, s, file)
64 except IOError:
65 # The plugin is about to be disabled (for example by Package Control for an upgrade), let unregister_plugin
66 # handle this
67 return
68 self.external[name] = config
69 self.all[name] = config
70 self._notify_listener(name)
71
72 def update_configs(self) -> None:
73 global _settings_obj
74 if _settings_obj is None:
75 return
76 clients = DottedDict(read_dict_setting(_settings_obj, "default_clients", {}))
77 clients.update(read_dict_setting(_settings_obj, "clients", {}))
78 self.all.clear()
79 self.all.update({name: ClientConfig.from_dict(name, d) for name, d in clients.get().items()})
80 self.all.update(self.external)
81 debug("enabled configs:", ", ".join(sorted(c.name for c in self.all.values() if c.enabled)))
82 debug("disabled configs:", ", ".join(sorted(c.name for c in self.all.values() if not c.enabled)))
83 self._notify_listener()
84
85 def _set_enabled(self, config_name: str, is_enabled: bool) -> None:
86 settings = sublime.load_settings("LSP.sublime-settings")
87 clients = settings.get("clients")
88 if isinstance(clients, dict):
89 config = clients.setdefault(config_name, {})
90 config["enabled"] = is_enabled
91 settings.set("clients", clients)
92 sublime.save_settings("LSP.sublime-settings")
93
94 def enable(self, config_name: str) -> None:
95 self._set_enabled(config_name, True)
96
97 def disable(self, config_name: str) -> None:
98 self._set_enabled(config_name, False)
99
100 def set_listener(self, recipient: Callable[[Optional[str]], None]) -> None:
101 self._listener = recipient
102
103
104 _settings_obj = None # type: Optional[sublime.Settings]
105 _settings = None # type: Optional[Settings]
106 _settings_registration = None # type: Optional[SettingsRegistration]
107 _global_settings = None # type: Optional[sublime.Settings]
108 client_configs = ClientConfigs()
109
110
111 def _on_sublime_settings_changed() -> None:
112 global _settings_obj
113 global _settings
114 global client_configs
115 if _settings_obj is None or _settings is None:
116 return
117 _settings.update(_settings_obj)
118 client_configs.update_configs()
119
120
121 def load_settings() -> None:
122 global _global_settings
123 global _settings_obj
124 global _settings
125 global _settings_registration
126 if _settings_obj is None:
127 _global_settings = sublime.load_settings("Preferences.sublime-settings")
128 _settings_obj = sublime.load_settings("LSP.sublime-settings")
129 _settings = Settings(_settings_obj)
130 _settings_registration = SettingsRegistration(_settings_obj, _on_sublime_settings_changed)
131
132
133 def unload_settings() -> None:
134 global _global_settings
135 global _settings_obj
136 global _settings_registration
137 if _settings_obj is not None:
138 _global_settings = None
139 _settings_registration = None
140 _settings_obj = None
141
142
143 def userprefs() -> Settings:
144 global _settings
145 return _settings # type: ignore
146
147
148 def globalprefs() -> sublime.Settings:
149 global _global_settings
150 return _global_settings # type: ignore
151
152
153 def read_client_config(name: str, d: Dict[str, Any]) -> ClientConfig:
154 return ClientConfig.from_dict(name, d)
155
156
157 def update_client_config(external_config: ClientConfig, user_override_config: Dict[str, Any]) -> ClientConfig:
158 return ClientConfig.from_config(external_config, user_override_config)
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugin/core/configurations.py b/plugin/core/configurations.py
--- a/plugin/core/configurations.py
+++ b/plugin/core/configurations.py
@@ -53,12 +53,12 @@
except (IndexError, RuntimeError):
pass
- def update(self, config_name: Optional[str] = None) -> None:
+ def update(self, updated_config_name: Optional[str] = None) -> None:
project_settings = (self._window.project_data() or {}).get("settings", {}).get("LSP", {})
- if config_name is None:
+ if updated_config_name is None:
self.all.clear()
for name, config in self._global_configs.items():
- if config_name and config_name != name:
+ if updated_config_name and updated_config_name != name:
continue
overrides = project_settings.pop(name, None)
if isinstance(overrides, dict):
@@ -69,11 +69,11 @@
overrides["enabled"] = False
self.all[name] = ClientConfig.from_config(config, overrides)
for name, c in project_settings.items():
- if config_name and config_name != name:
+ if updated_config_name and updated_config_name != name:
continue
debug("loading project-only configuration", name)
self.all[name] = ClientConfig.from_dict(name, c)
- self._window.run_command("lsp_recheck_sessions", {'config_name': config_name})
+ self._window.run_command("lsp_recheck_sessions", {'config_name': updated_config_name})
def enable_config(self, config_name: str) -> None:
if not self._reenable_disabled_for_session(config_name):
diff --git a/plugin/core/settings.py b/plugin/core/settings.py
--- a/plugin/core/settings.py
+++ b/plugin/core/settings.py
@@ -56,7 +56,7 @@
def remove_external_config(self, name: str) -> None:
self.external.pop(name, None)
if self.all.pop(name, None):
- self._notify_listener(name)
+ self._notify_listener()
def update_external_config(self, name: str, s: sublime.Settings, file: str) -> None:
try:
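
For context, the patch appears to work because `remove_external_config` previously notified the listener with the removed config's name, so `WindowConfigManager.update(name)` took the targeted branch, never cleared `self.all`, and the stale config kept triggering `lsp_recheck_sessions` against a plugin whose command could no longer start. Notifying with no name forces a full rebuild from the remaining global configs, which drops the removed entry. A minimal, self-contained sketch of that rebuild-on-removal behavior (the `ConfigStore` class here is a hypothetical stand-in, not the plugin's real API):

```python
from typing import Dict, Optional


class ConfigStore:
    """Hypothetical, simplified stand-in for WindowConfigManager: it only
    models how a full update (name=None) clears stale entries while a
    targeted update leaves them behind."""

    def __init__(self, global_configs: Dict[str, dict]) -> None:
        self._global_configs = global_configs
        self.all: Dict[str, dict] = {}

    def update(self, updated_config_name: Optional[str] = None) -> None:
        if updated_config_name is None:
            self.all.clear()  # full update drops configs that no longer exist
        for name, cfg in self._global_configs.items():
            if updated_config_name and updated_config_name != name:
                continue
            self.all[name] = dict(cfg)


store = ConfigStore({"LSP-css": {"enabled": True}})
store.update()                        # initial population
store._global_configs.pop("LSP-css")  # plugin disabled / config unregistered
store.update("LSP-css")               # old behavior: stale entry survives
assert "LSP-css" in store.all
store.update(None)                    # patched behavior: full rebuild
assert "LSP-css" not in store.all
```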
|
{"golden_diff": "diff --git a/plugin/core/configurations.py b/plugin/core/configurations.py\n--- a/plugin/core/configurations.py\n+++ b/plugin/core/configurations.py\n@@ -53,12 +53,12 @@\n except (IndexError, RuntimeError):\n pass\n \n- def update(self, config_name: Optional[str] = None) -> None:\n+ def update(self, updated_config_name: Optional[str] = None) -> None:\n project_settings = (self._window.project_data() or {}).get(\"settings\", {}).get(\"LSP\", {})\n- if config_name is None:\n+ if updated_config_name is None:\n self.all.clear()\n for name, config in self._global_configs.items():\n- if config_name and config_name != name:\n+ if updated_config_name and updated_config_name != name:\n continue\n overrides = project_settings.pop(name, None)\n if isinstance(overrides, dict):\n@@ -69,11 +69,11 @@\n overrides[\"enabled\"] = False\n self.all[name] = ClientConfig.from_config(config, overrides)\n for name, c in project_settings.items():\n- if config_name and config_name != name:\n+ if updated_config_name and updated_config_name != name:\n continue\n debug(\"loading project-only configuration\", name)\n self.all[name] = ClientConfig.from_dict(name, c)\n- self._window.run_command(\"lsp_recheck_sessions\", {'config_name': config_name})\n+ self._window.run_command(\"lsp_recheck_sessions\", {'config_name': updated_config_name})\n \n def enable_config(self, config_name: str) -> None:\n if not self._reenable_disabled_for_session(config_name):\ndiff --git a/plugin/core/settings.py b/plugin/core/settings.py\n--- a/plugin/core/settings.py\n+++ b/plugin/core/settings.py\n@@ -56,7 +56,7 @@\n def remove_external_config(self, name: str) -> None:\n self.external.pop(name, None)\n if self.all.pop(name, None):\n- self._notify_listener(name)\n+ self._notify_listener()\n \n def update_external_config(self, name: str, s: sublime.Settings, file: str) -> None:\n try:\n", "issue": "When disabling a LSP-* plugin, I the error popup shows up infinitely \n**Describe the bug**\r\nWhen I disable a LSP-* plugin. \r\nthe error popup shows to infinity.\r\n\r\n\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Have LSP-css installed.\r\n2. Open a css file.\r\n3. Disable the LSP-css plugin.\r\n4. See error\r\n\r\n\r\n**Expected behavior**\r\n\r\nI would not expect the server to shut down gracefully without any popup showing.\r\n\r\n**Screenshots**\r\n\r\nBut instead, the popup shows until I kill the ST app.\r\n\r\n[If applicable, add screenshots to help explain your problem.](https://user-images.githubusercontent.com/22029477/143683180-5b1a5a71-57ae-4e35-ac9f-773eefa0076b.mp4)\r\n\r\n**Environment (please complete the following information):**\r\n- OS: macOS 11.5.2 Big Sur\r\n- Sublime Text version: 4122\r\n- LSP version: acfd6406ba4680a0e537dc87a72aa5b410a154e7\r\n- Language servers used: [e.g. 
clangd, gopls, dart, Vetur, intelephense, HIE]\r\n\r\n**Additional context**\r\n\r\nHere is the error from the ST console:\r\n```\r\nLSP: starting ['', '', '--stdio'] in /Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP\r\nUnable to start subprocess for LSP-css\r\nTraceback (most recent call last):\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/windows.py\", line 356, in start_async\r\n transport = create_transport(transport_config, transport_cwd, session)\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py\", line 252, in create_transport\r\n process = start_subprocess()\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py\", line 241, in start_subprocess\r\n return _start_subprocess(config.command, stdin, stdout, subprocess.PIPE, startupinfo, config.env, cwd)\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py\", line 323, in _start_subprocess\r\n cwd=cwd)\r\n File \"./python3.3/subprocess.py\", line 819, in __init__\r\n File \"./python3.3/subprocess.py\", line 1448, in _execute_child\r\nPermissionError: [Errno 13] Permission denied\r\n\r\nLSP: starting ['', '', '--stdio'] in /Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP\r\nUnable to start subprocess for LSP-css\r\nTraceback (most recent call last):\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/windows.py\", line 356, in start_async\r\n transport = create_transport(transport_config, transport_cwd, session)\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py\", line 252, in create_transport\r\n process = start_subprocess()\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py\", line 241, in start_subprocess\r\n return _start_subprocess(config.command, stdin, stdout, subprocess.PIPE, startupinfo, config.env, cwd)\r\n File \"/Users/codetribe/Library/Application Support/Sublime Text/Packages/LSP/plugin/core/transports.py\", line 323, in _start_subprocess\r\n cwd=cwd)\r\n File \"./python3.3/subprocess.py\", line 819, in __init__\r\n File \"./python3.3/subprocess.py\", line 1448, in _execute_child\r\nPermissionError: [Errno 13] Permission denied\r\n\r\n# .... 
it keeps repeating to infinity ...\r\n```\r\n\r\nI sent my work MAC laptop to service, and got it back a few weeks ago,\r\nI remember that I could remember that I couldn't type ls when inside the Documents directory.\r\nI run into this issue:\r\nhttps://osxdaily.com/2018/10/09/fix-operation-not-permitted-terminal-error-macos/\r\n\r\nSo maybe I have some problems with permissions.\n", "before_files": [{"content": "from .logging import debug\nfrom .types import ClientConfig\nfrom .typing import Generator, List, Optional, Set, Dict\nfrom .workspace import enable_in_project, disable_in_project\nimport sublime\nimport urllib.parse\n\n\nclass ConfigManager(object):\n \"\"\"Distributes language client configuration between windows\"\"\"\n\n def __init__(self, global_configs: Dict[str, ClientConfig]) -> None:\n self._configs = global_configs\n self._managers = {} # type: Dict[int, WindowConfigManager]\n\n def for_window(self, window: sublime.Window) -> 'WindowConfigManager':\n window_configs = WindowConfigManager(window, self._configs)\n self._managers[window.id()] = window_configs\n return window_configs\n\n def update(self, config_name: Optional[str] = None) -> None:\n for window in sublime.windows():\n if window.id() in self._managers:\n self._managers[window.id()].update(config_name)\n\n\nclass WindowConfigManager(object):\n def __init__(self, window: sublime.Window, global_configs: Dict[str, ClientConfig]) -> None:\n self._window = window\n self._global_configs = global_configs\n self._disabled_for_session = set() # type: Set[str]\n self.all = {} # type: Dict[str, ClientConfig]\n self.update()\n\n def get_configs(self) -> List[ClientConfig]:\n return sorted(self.all.values(), key=lambda config: config.name)\n\n def match_view(self, view: sublime.View, include_disabled: bool = False) -> Generator[ClientConfig, None, None]:\n \"\"\"\n Yields configurations where:\n\n - the configuration's \"selector\" matches with the view's base scope, and\n - the view's URI scheme is an element of the configuration's \"schemes\".\n \"\"\"\n try:\n uri = view.settings().get(\"lsp_uri\")\n if not isinstance(uri, str):\n return\n scheme = urllib.parse.urlparse(uri).scheme\n for config in self.all.values():\n if config.match_view(view, scheme) and (config.enabled or include_disabled):\n yield config\n except (IndexError, RuntimeError):\n pass\n\n def update(self, config_name: Optional[str] = None) -> None:\n project_settings = (self._window.project_data() or {}).get(\"settings\", {}).get(\"LSP\", {})\n if config_name is None:\n self.all.clear()\n for name, config in self._global_configs.items():\n if config_name and config_name != name:\n continue\n overrides = project_settings.pop(name, None)\n if isinstance(overrides, dict):\n debug(\"applying .sublime-project override for\", name)\n else:\n overrides = {}\n if name in self._disabled_for_session:\n overrides[\"enabled\"] = False\n self.all[name] = ClientConfig.from_config(config, overrides)\n for name, c in project_settings.items():\n if config_name and config_name != name:\n continue\n debug(\"loading project-only configuration\", name)\n self.all[name] = ClientConfig.from_dict(name, c)\n self._window.run_command(\"lsp_recheck_sessions\", {'config_name': config_name})\n\n def enable_config(self, config_name: str) -> None:\n if not self._reenable_disabled_for_session(config_name):\n enable_in_project(self._window, config_name)\n self.update(config_name)\n\n def disable_config(self, config_name: str, only_for_session: bool = False) -> None:\n if 
only_for_session:\n self._disable_for_session(config_name)\n else:\n disable_in_project(self._window, config_name)\n self.update(config_name)\n\n def _disable_for_session(self, config_name: str) -> None:\n self._disabled_for_session.add(config_name)\n\n def _reenable_disabled_for_session(self, config_name: str) -> bool:\n try:\n self._disabled_for_session.remove(config_name)\n return True\n except KeyError:\n return False\n", "path": "plugin/core/configurations.py"}, {"content": "from .collections import DottedDict\nfrom .logging import debug\nfrom .types import ClientConfig, debounced\nfrom .types import read_dict_setting\nfrom .types import Settings\nfrom .types import SettingsRegistration\nfrom .typing import Any, Optional, Dict, Callable\nimport sublime\n\n\nPLUGIN_NAME = 'LSP'\n\n\nclass ClientConfigs:\n\n def __init__(self) -> None:\n self.all = {} # type: Dict[str, ClientConfig]\n self.external = {} # type: Dict[str, ClientConfig]\n self._listener = None # type: Optional[Callable[[Optional[str]], None]]\n\n def _notify_listener(self, config_name: Optional[str] = None) -> None:\n if callable(self._listener):\n self._listener(config_name)\n\n def add_for_testing(self, config: ClientConfig) -> None:\n assert config.name not in self.all\n self.all[config.name] = config\n self._notify_listener()\n\n def remove_for_testing(self, config: ClientConfig) -> None:\n self.all.pop(config.name)\n self._notify_listener()\n\n def add_external_config(self, name: str, s: sublime.Settings, file: str, notify_listener: bool) -> bool:\n if name in self.external:\n return False\n config = ClientConfig.from_sublime_settings(name, s, file)\n self.external[name] = config\n self.all[name] = config\n if notify_listener:\n size = len(self.external)\n # A debounced call is necessary here because of the following problem.\n # When Sublime Text starts, it loads plugins in alphabetical order.\n # Each plugin is loaded 100 milliseconds after the previous plugin.\n # Therefore, we get a sequence of calls to `register_plugin` from all LSP-* helper packages, separated\n # in time intervals of 100 milliseconds.\n # When calling self._notify_listener, we are calling ConfigManager.update.\n # That object, in turn, calls WindowConfigManager.update for each window.\n # In turn, each window starts iterating all of its attached views for language servers to attach.\n # That causes many calls to WindowConfigManager.match_view, which is relatively speaking an expensive\n # operation. 
To ensure that this dance is done only once, we delay notifying the ConfigManager until all\n # plugins have done their `register_plugin` call.\n debounced(lambda: self._notify_listener(name), 200, lambda: len(self.external) == size)\n return True\n\n def remove_external_config(self, name: str) -> None:\n self.external.pop(name, None)\n if self.all.pop(name, None):\n self._notify_listener(name)\n\n def update_external_config(self, name: str, s: sublime.Settings, file: str) -> None:\n try:\n config = ClientConfig.from_sublime_settings(name, s, file)\n except IOError:\n # The plugin is about to be disabled (for example by Package Control for an upgrade), let unregister_plugin\n # handle this\n return\n self.external[name] = config\n self.all[name] = config\n self._notify_listener(name)\n\n def update_configs(self) -> None:\n global _settings_obj\n if _settings_obj is None:\n return\n clients = DottedDict(read_dict_setting(_settings_obj, \"default_clients\", {}))\n clients.update(read_dict_setting(_settings_obj, \"clients\", {}))\n self.all.clear()\n self.all.update({name: ClientConfig.from_dict(name, d) for name, d in clients.get().items()})\n self.all.update(self.external)\n debug(\"enabled configs:\", \", \".join(sorted(c.name for c in self.all.values() if c.enabled)))\n debug(\"disabled configs:\", \", \".join(sorted(c.name for c in self.all.values() if not c.enabled)))\n self._notify_listener()\n\n def _set_enabled(self, config_name: str, is_enabled: bool) -> None:\n settings = sublime.load_settings(\"LSP.sublime-settings\")\n clients = settings.get(\"clients\")\n if isinstance(clients, dict):\n config = clients.setdefault(config_name, {})\n config[\"enabled\"] = is_enabled\n settings.set(\"clients\", clients)\n sublime.save_settings(\"LSP.sublime-settings\")\n\n def enable(self, config_name: str) -> None:\n self._set_enabled(config_name, True)\n\n def disable(self, config_name: str) -> None:\n self._set_enabled(config_name, False)\n\n def set_listener(self, recipient: Callable[[Optional[str]], None]) -> None:\n self._listener = recipient\n\n\n_settings_obj = None # type: Optional[sublime.Settings]\n_settings = None # type: Optional[Settings]\n_settings_registration = None # type: Optional[SettingsRegistration]\n_global_settings = None # type: Optional[sublime.Settings]\nclient_configs = ClientConfigs()\n\n\ndef _on_sublime_settings_changed() -> None:\n global _settings_obj\n global _settings\n global client_configs\n if _settings_obj is None or _settings is None:\n return\n _settings.update(_settings_obj)\n client_configs.update_configs()\n\n\ndef load_settings() -> None:\n global _global_settings\n global _settings_obj\n global _settings\n global _settings_registration\n if _settings_obj is None:\n _global_settings = sublime.load_settings(\"Preferences.sublime-settings\")\n _settings_obj = sublime.load_settings(\"LSP.sublime-settings\")\n _settings = Settings(_settings_obj)\n _settings_registration = SettingsRegistration(_settings_obj, _on_sublime_settings_changed)\n\n\ndef unload_settings() -> None:\n global _global_settings\n global _settings_obj\n global _settings_registration\n if _settings_obj is not None:\n _global_settings = None\n _settings_registration = None\n _settings_obj = None\n\n\ndef userprefs() -> Settings:\n global _settings\n return _settings # type: ignore\n\n\ndef globalprefs() -> sublime.Settings:\n global _global_settings\n return _global_settings # type: ignore\n\n\ndef read_client_config(name: str, d: Dict[str, Any]) -> ClientConfig:\n return 
ClientConfig.from_dict(name, d)\n\n\ndef update_client_config(external_config: ClientConfig, user_override_config: Dict[str, Any]) -> ClientConfig:\n return ClientConfig.from_config(external_config, user_override_config)\n", "path": "plugin/core/settings.py"}], "after_files": [{"content": "from .logging import debug\nfrom .types import ClientConfig\nfrom .typing import Generator, List, Optional, Set, Dict\nfrom .workspace import enable_in_project, disable_in_project\nimport sublime\nimport urllib.parse\n\n\nclass ConfigManager(object):\n \"\"\"Distributes language client configuration between windows\"\"\"\n\n def __init__(self, global_configs: Dict[str, ClientConfig]) -> None:\n self._configs = global_configs\n self._managers = {} # type: Dict[int, WindowConfigManager]\n\n def for_window(self, window: sublime.Window) -> 'WindowConfigManager':\n window_configs = WindowConfigManager(window, self._configs)\n self._managers[window.id()] = window_configs\n return window_configs\n\n def update(self, config_name: Optional[str] = None) -> None:\n for window in sublime.windows():\n if window.id() in self._managers:\n self._managers[window.id()].update(config_name)\n\n\nclass WindowConfigManager(object):\n def __init__(self, window: sublime.Window, global_configs: Dict[str, ClientConfig]) -> None:\n self._window = window\n self._global_configs = global_configs\n self._disabled_for_session = set() # type: Set[str]\n self.all = {} # type: Dict[str, ClientConfig]\n self.update()\n\n def get_configs(self) -> List[ClientConfig]:\n return sorted(self.all.values(), key=lambda config: config.name)\n\n def match_view(self, view: sublime.View, include_disabled: bool = False) -> Generator[ClientConfig, None, None]:\n \"\"\"\n Yields configurations where:\n\n - the configuration's \"selector\" matches with the view's base scope, and\n - the view's URI scheme is an element of the configuration's \"schemes\".\n \"\"\"\n try:\n uri = view.settings().get(\"lsp_uri\")\n if not isinstance(uri, str):\n return\n scheme = urllib.parse.urlparse(uri).scheme\n for config in self.all.values():\n if config.match_view(view, scheme) and (config.enabled or include_disabled):\n yield config\n except (IndexError, RuntimeError):\n pass\n\n def update(self, updated_config_name: Optional[str] = None) -> None:\n project_settings = (self._window.project_data() or {}).get(\"settings\", {}).get(\"LSP\", {})\n if updated_config_name is None:\n self.all.clear()\n for name, config in self._global_configs.items():\n if updated_config_name and updated_config_name != name:\n continue\n overrides = project_settings.pop(name, None)\n if isinstance(overrides, dict):\n debug(\"applying .sublime-project override for\", name)\n else:\n overrides = {}\n if name in self._disabled_for_session:\n overrides[\"enabled\"] = False\n self.all[name] = ClientConfig.from_config(config, overrides)\n for name, c in project_settings.items():\n if updated_config_name and updated_config_name != name:\n continue\n debug(\"loading project-only configuration\", name)\n self.all[name] = ClientConfig.from_dict(name, c)\n self._window.run_command(\"lsp_recheck_sessions\", {'config_name': updated_config_name})\n\n def enable_config(self, config_name: str) -> None:\n if not self._reenable_disabled_for_session(config_name):\n enable_in_project(self._window, config_name)\n self.update(config_name)\n\n def disable_config(self, config_name: str, only_for_session: bool = False) -> None:\n if only_for_session:\n self._disable_for_session(config_name)\n else:\n 
disable_in_project(self._window, config_name)\n self.update(config_name)\n\n def _disable_for_session(self, config_name: str) -> None:\n self._disabled_for_session.add(config_name)\n\n def _reenable_disabled_for_session(self, config_name: str) -> bool:\n try:\n self._disabled_for_session.remove(config_name)\n return True\n except KeyError:\n return False\n", "path": "plugin/core/configurations.py"}, {"content": "from .collections import DottedDict\nfrom .logging import debug\nfrom .types import ClientConfig, debounced\nfrom .types import read_dict_setting\nfrom .types import Settings\nfrom .types import SettingsRegistration\nfrom .typing import Any, Optional, Dict, Callable\nimport sublime\n\n\nPLUGIN_NAME = 'LSP'\n\n\nclass ClientConfigs:\n\n def __init__(self) -> None:\n self.all = {} # type: Dict[str, ClientConfig]\n self.external = {} # type: Dict[str, ClientConfig]\n self._listener = None # type: Optional[Callable[[Optional[str]], None]]\n\n def _notify_listener(self, config_name: Optional[str] = None) -> None:\n if callable(self._listener):\n self._listener(config_name)\n\n def add_for_testing(self, config: ClientConfig) -> None:\n assert config.name not in self.all\n self.all[config.name] = config\n self._notify_listener()\n\n def remove_for_testing(self, config: ClientConfig) -> None:\n self.all.pop(config.name)\n self._notify_listener()\n\n def add_external_config(self, name: str, s: sublime.Settings, file: str, notify_listener: bool) -> bool:\n if name in self.external:\n return False\n config = ClientConfig.from_sublime_settings(name, s, file)\n self.external[name] = config\n self.all[name] = config\n if notify_listener:\n size = len(self.external)\n # A debounced call is necessary here because of the following problem.\n # When Sublime Text starts, it loads plugins in alphabetical order.\n # Each plugin is loaded 100 milliseconds after the previous plugin.\n # Therefore, we get a sequence of calls to `register_plugin` from all LSP-* helper packages, separated\n # in time intervals of 100 milliseconds.\n # When calling self._notify_listener, we are calling ConfigManager.update.\n # That object, in turn, calls WindowConfigManager.update for each window.\n # In turn, each window starts iterating all of its attached views for language servers to attach.\n # That causes many calls to WindowConfigManager.match_view, which is relatively speaking an expensive\n # operation. 
To ensure that this dance is done only once, we delay notifying the ConfigManager until all\n # plugins have done their `register_plugin` call.\n debounced(lambda: self._notify_listener(name), 200, lambda: len(self.external) == size)\n return True\n\n def remove_external_config(self, name: str) -> None:\n self.external.pop(name, None)\n if self.all.pop(name, None):\n self._notify_listener()\n\n def update_external_config(self, name: str, s: sublime.Settings, file: str) -> None:\n try:\n config = ClientConfig.from_sublime_settings(name, s, file)\n except IOError:\n # The plugin is about to be disabled (for example by Package Control for an upgrade), let unregister_plugin\n # handle this\n return\n self.external[name] = config\n self.all[name] = config\n self._notify_listener(name)\n\n def update_configs(self) -> None:\n global _settings_obj\n if _settings_obj is None:\n return\n clients = DottedDict(read_dict_setting(_settings_obj, \"default_clients\", {}))\n clients.update(read_dict_setting(_settings_obj, \"clients\", {}))\n self.all.clear()\n self.all.update({name: ClientConfig.from_dict(name, d) for name, d in clients.get().items()})\n self.all.update(self.external)\n debug(\"enabled configs:\", \", \".join(sorted(c.name for c in self.all.values() if c.enabled)))\n debug(\"disabled configs:\", \", \".join(sorted(c.name for c in self.all.values() if not c.enabled)))\n self._notify_listener()\n\n def _set_enabled(self, config_name: str, is_enabled: bool) -> None:\n settings = sublime.load_settings(\"LSP.sublime-settings\")\n clients = settings.get(\"clients\")\n if isinstance(clients, dict):\n config = clients.setdefault(config_name, {})\n config[\"enabled\"] = is_enabled\n settings.set(\"clients\", clients)\n sublime.save_settings(\"LSP.sublime-settings\")\n\n def enable(self, config_name: str) -> None:\n self._set_enabled(config_name, True)\n\n def disable(self, config_name: str) -> None:\n self._set_enabled(config_name, False)\n\n def set_listener(self, recipient: Callable[[Optional[str]], None]) -> None:\n self._listener = recipient\n\n\n_settings_obj = None # type: Optional[sublime.Settings]\n_settings = None # type: Optional[Settings]\n_settings_registration = None # type: Optional[SettingsRegistration]\n_global_settings = None # type: Optional[sublime.Settings]\nclient_configs = ClientConfigs()\n\n\ndef _on_sublime_settings_changed() -> None:\n global _settings_obj\n global _settings\n global client_configs\n if _settings_obj is None or _settings is None:\n return\n _settings.update(_settings_obj)\n client_configs.update_configs()\n\n\ndef load_settings() -> None:\n global _global_settings\n global _settings_obj\n global _settings\n global _settings_registration\n if _settings_obj is None:\n _global_settings = sublime.load_settings(\"Preferences.sublime-settings\")\n _settings_obj = sublime.load_settings(\"LSP.sublime-settings\")\n _settings = Settings(_settings_obj)\n _settings_registration = SettingsRegistration(_settings_obj, _on_sublime_settings_changed)\n\n\ndef unload_settings() -> None:\n global _global_settings\n global _settings_obj\n global _settings_registration\n if _settings_obj is not None:\n _global_settings = None\n _settings_registration = None\n _settings_obj = None\n\n\ndef userprefs() -> Settings:\n global _settings\n return _settings # type: ignore\n\n\ndef globalprefs() -> sublime.Settings:\n global _global_settings\n return _global_settings # type: ignore\n\n\ndef read_client_config(name: str, d: Dict[str, Any]) -> ClientConfig:\n return 
ClientConfig.from_dict(name, d)\n\n\ndef update_client_config(external_config: ClientConfig, user_override_config: Dict[str, Any]) -> ClientConfig:\n return ClientConfig.from_config(external_config, user_override_config)\n", "path": "plugin/core/settings.py"}]}
| 4,066 | 478 |
gh_patches_debug_13455
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-3811
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gcp serverless runtime error on implicit boto dependency
Reported in Gitter: GCP functions should not need to depend on boto3; it looks like some of the SecurityHub work caused an implicit dependency on boto3.
```
textPayload: "ModuleNotFoundError: No module named 'boto3'" - Getting this error for the cloud function to stop a instance in GCP
instance-off
qte7iow5dhzi
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 346, in run_http_function
    result = _function_handler.invoke_user_function(flask.request)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 210, in call_user_function
    return self._user_function(request_or_event)
  File "/user_code/main.py", line 21, in run
    from c7n_gcp.handler import run
  File "/user_code/c7n_gcp/handler.py", line 24, in <module>
    from c7n_gcp.entry import initialize_gcp
  File "/user_code/c7n_gcp/entry.py", line 18, in <module>
    import c7n_gcp.resources.bigquery
  File "/user_code/c7n_gcp/resources/bigquery.py", line 16, in <module>
    from c7n_gcp.query import QueryResourceManager, TypeInfo
  File "/user_code/c7n_gcp/query.py", line 23, in <module>
    from c7n.filters import FilterRegistry
  File "/user_code/c7n/filters/init.py", line 32, in <module>
    from .securityhub import SecurityHubFindingFilter
  File "/user_code/c7n/filters/securityhub.py", line 19, in <module>
    from c7n.resources import aws
  File "/user_code/c7n/resources/aws.py", line 31, in <module>
    import boto3
ModuleNotFoundError: No module named 'boto3
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/filters/securityhub.py`
Content:
```
1 # Copyright 2019 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 from c7n.utils import local_session, type_schema
17 from .core import Filter
18 from c7n.manager import resources
19 from c7n.resources import aws
20
21
22 class SecurityHubFindingFilter(Filter):
23 """Check if there are Security Hub Findings related to the resources
24 """
25 schema = type_schema(
26 'finding',
27 # Many folks do an aggregator region, allow them to use that
28 # for filtering.
29 region={'type': 'string'},
30 query={'type': 'object'})
31
32 permissions = ('securityhub:GetFindings',)
33 annotation_key = 'c7n:finding-filter'
34 query_shape = 'AwsSecurityFindingFilters'
35
36 def validate(self):
37 query = self.data.get('query')
38 if query:
39 aws.shape_validate(query, self.query_shape, 'securityhub')
40
41 def process(self, resources, event=None):
42 client = local_session(
43 self.manager.session_factory).client(
44 'securityhub', region_name=self.data.get('region'))
45 found = []
46 params = dict(self.data.get('query', {}))
47
48 for r_arn, resource in zip(self.manager.get_arns(resources), resources):
49 params['ResourceId'] = [{"Value": r_arn, "Comparison": "EQUALS"}]
50 findings = client.get_findings(Filters=params).get("Findings")
51 if len(findings) > 0:
52 resource[self.annotation_key] = findings
53 found.append(resource)
54 return found
55
56 @classmethod
57 def register_resources(klass, registry, resource_class):
58 """ meta model subscriber on resource registration.
59
60 SecurityHub Findings Filter
61 """
62 for rtype, resource_manager in registry.items():
63 if not resource_manager.has_arn():
64 continue
65 if 'post-finding' in resource_manager.action_registry:
66 continue
67 resource_class.filter_registry.register('finding', klass)
68
69
70 resources.subscribe(resources.EVENT_REGISTER, SecurityHubFindingFilter.register_resources)
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/c7n/filters/securityhub.py b/c7n/filters/securityhub.py
--- a/c7n/filters/securityhub.py
+++ b/c7n/filters/securityhub.py
@@ -16,7 +16,6 @@
from c7n.utils import local_session, type_schema
from .core import Filter
from c7n.manager import resources
-from c7n.resources import aws
class SecurityHubFindingFilter(Filter):
@@ -36,6 +35,7 @@
def validate(self):
query = self.data.get('query')
if query:
+ from c7n.resources import aws
aws.shape_validate(query, self.query_shape, 'securityhub')
def process(self, resources, event=None):
|
{"golden_diff": "diff --git a/c7n/filters/securityhub.py b/c7n/filters/securityhub.py\n--- a/c7n/filters/securityhub.py\n+++ b/c7n/filters/securityhub.py\n@@ -16,7 +16,6 @@\n from c7n.utils import local_session, type_schema\n from .core import Filter\n from c7n.manager import resources\n-from c7n.resources import aws\n \n \n class SecurityHubFindingFilter(Filter):\n@@ -36,6 +35,7 @@\n def validate(self):\n query = self.data.get('query')\n if query:\n+ from c7n.resources import aws\n aws.shape_validate(query, self.query_shape, 'securityhub')\n \n def process(self, resources, event=None):\n", "issue": "gcp serverless runtime error on implicit boto dependency\nreported in gitter, gcp functions should not need to depend on boto3, looks like some of the securityhub work caused an implicit dependency on boto3.\r\n\r\n```\r\ntextPayload: \"ModuleNotFoundError: No module named 'boto3'\" - Getting this error for the cloud function to stop a instance in GCP\r\ninstance-off\r\nqte7iow5dhzi\r\nTraceback (most recent call last): File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py\", line 346, in run_http_function result = _function_handler.invoke_user_function(flask.request) File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py\", line 217, in invoke_user_function return call_user_function(request_or_event) File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py\", line 210, in call_user_function return self._user_function(request_or_event) File \"/user_code/main.py\", line 21, in run from c7n_gcp.handler import run File \"/user_code/c7n_gcp/handler.py\", line 24, in <module> from c7n_gcp.entry import initialize_gcp File \"/user_code/c7n_gcp/entry.py\", line 18, in <module> import c7n_gcp.resources.bigquery File \"/user_code/c7n_gcp/resources/bigquery.py\", line 16, in <module> from c7n_gcp.query import QueryResourceManager, TypeInfo File \"/user_code/c7n_gcp/query.py\", line 23, in <module> from c7n.filters import FilterRegistry File \"/user_code/c7n/filters/init.py\", line 32, in <module> from .securityhub import SecurityHubFindingFilter File \"/user_code/c7n/filters/securityhub.py\", line 19, in <module> from c7n.resources import aws File \"/user_code/c7n/resources/aws.py\", line 31, in <module> import boto3 ModuleNotFoundError: No module named 'boto3\r\n```\n", "before_files": [{"content": "# Copyright 2019 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.utils import local_session, type_schema\nfrom .core import Filter\nfrom c7n.manager import resources\nfrom c7n.resources import aws\n\n\nclass SecurityHubFindingFilter(Filter):\n \"\"\"Check if there are Security Hub Findings related to the resources\n \"\"\"\n schema = type_schema(\n 'finding',\n # Many folks do an aggregator region, allow them to use that\n # for filtering.\n region={'type': 'string'},\n query={'type': 'object'})\n\n permissions = 
('securityhub:GetFindings',)\n annotation_key = 'c7n:finding-filter'\n query_shape = 'AwsSecurityFindingFilters'\n\n def validate(self):\n query = self.data.get('query')\n if query:\n aws.shape_validate(query, self.query_shape, 'securityhub')\n\n def process(self, resources, event=None):\n client = local_session(\n self.manager.session_factory).client(\n 'securityhub', region_name=self.data.get('region'))\n found = []\n params = dict(self.data.get('query', {}))\n\n for r_arn, resource in zip(self.manager.get_arns(resources), resources):\n params['ResourceId'] = [{\"Value\": r_arn, \"Comparison\": \"EQUALS\"}]\n findings = client.get_findings(Filters=params).get(\"Findings\")\n if len(findings) > 0:\n resource[self.annotation_key] = findings\n found.append(resource)\n return found\n\n @classmethod\n def register_resources(klass, registry, resource_class):\n \"\"\" meta model subscriber on resource registration.\n\n SecurityHub Findings Filter\n \"\"\"\n for rtype, resource_manager in registry.items():\n if not resource_manager.has_arn():\n continue\n if 'post-finding' in resource_manager.action_registry:\n continue\n resource_class.filter_registry.register('finding', klass)\n\n\nresources.subscribe(resources.EVENT_REGISTER, SecurityHubFindingFilter.register_resources)\n", "path": "c7n/filters/securityhub.py"}], "after_files": [{"content": "# Copyright 2019 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.utils import local_session, type_schema\nfrom .core import Filter\nfrom c7n.manager import resources\n\n\nclass SecurityHubFindingFilter(Filter):\n \"\"\"Check if there are Security Hub Findings related to the resources\n \"\"\"\n schema = type_schema(\n 'finding',\n # Many folks do an aggregator region, allow them to use that\n # for filtering.\n region={'type': 'string'},\n query={'type': 'object'})\n\n permissions = ('securityhub:GetFindings',)\n annotation_key = 'c7n:finding-filter'\n query_shape = 'AwsSecurityFindingFilters'\n\n def validate(self):\n query = self.data.get('query')\n if query:\n from c7n.resources import aws\n aws.shape_validate(query, self.query_shape, 'securityhub')\n\n def process(self, resources, event=None):\n client = local_session(\n self.manager.session_factory).client(\n 'securityhub', region_name=self.data.get('region'))\n found = []\n params = dict(self.data.get('query', {}))\n\n for r_arn, resource in zip(self.manager.get_arns(resources), resources):\n params['ResourceId'] = [{\"Value\": r_arn, \"Comparison\": \"EQUALS\"}]\n findings = client.get_findings(Filters=params).get(\"Findings\")\n if len(findings) > 0:\n resource[self.annotation_key] = findings\n found.append(resource)\n return found\n\n @classmethod\n def register_resources(klass, registry, resource_class):\n \"\"\" meta model subscriber on resource registration.\n\n SecurityHub Findings Filter\n \"\"\"\n for rtype, resource_manager in registry.items():\n if not 
resource_manager.has_arn():\n continue\n if 'post-finding' in resource_manager.action_registry:\n continue\n resource_class.filter_registry.register('finding', klass)\n\n\nresources.subscribe(resources.EVENT_REGISTER, SecurityHubFindingFilter.register_resources)\n", "path": "c7n/filters/securityhub.py"}]}
| 1,411 | 163 |
gh_patches_debug_20277
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-1080
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reduce detail level of timestamp on posts
**Is your feature request related to a problem? Please describe.**
I think the time when a post was posted is a tad too detailed. For posts in the last 24h, it changes every time you refresh.

**Describe the solution you'd like**
I think the first unit alone would be enough.
Also, after a few days (I suggest 3), the date (Apr 28) rather than "2 weeks(, 4 days in the current version)" seems a bit more helpful. After 1 year, the date could be shown as "Apr 2021".
This is subjective of course, but imho Bookwyrm is a platform where the "when" doesn't really matter (in comparison to e.g. Mastodon where many are posting news and other stuff where the temporal context is more important).
**Describe alternatives you've considered**
Hovering over the time could show the exact time as a tooltip. I think of this as an addition rather than an alternative, and I think both would complement each other.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/templatetags/status_display.py`
Content:
```
1 """ template filters """
2 from django import template
3
4 from bookwyrm import models
5 from bookwyrm.templatetags.utilities import get_user_identifier
6
7
8 register = template.Library()
9
10
11 @register.filter(name="mentions")
12 def get_mentions(status, user):
13 """people to @ in a reply: the parent and all mentions"""
14 mentions = set([status.user] + list(status.mention_users.all()))
15 return (
16 " ".join("@" + get_user_identifier(m) for m in mentions if not m == user) + " "
17 )
18
19
20 @register.filter(name="replies")
21 def get_replies(status):
22 """get all direct replies to a status"""
23 # TODO: this limit could cause problems
24 return models.Status.objects.filter(
25 reply_parent=status,
26 deleted=False,
27 ).select_subclasses()[:10]
28
29
30 @register.filter(name="parent")
31 def get_parent(status):
32 """get the reply parent for a status"""
33 return (
34 models.Status.objects.filter(id=status.reply_parent_id)
35 .select_subclasses()
36 .get()
37 )
38
39
40 @register.filter(name="boosted_status")
41 def get_boosted(boost):
42 """load a boosted status. have to do this or it won't get foreign keys"""
43 return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/templatetags/status_display.py b/bookwyrm/templatetags/status_display.py
--- a/bookwyrm/templatetags/status_display.py
+++ b/bookwyrm/templatetags/status_display.py
@@ -1,6 +1,8 @@
""" template filters """
+from dateutil.relativedelta import relativedelta
from django import template
-
+from django.contrib.humanize.templatetags.humanize import naturaltime, naturalday
+from django.utils import timezone
from bookwyrm import models
from bookwyrm.templatetags.utilities import get_user_identifier
@@ -41,3 +43,17 @@
def get_boosted(boost):
"""load a boosted status. have to do this or it won't get foreign keys"""
return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)
+
+
[email protected](name="published_date")
+def get_published_date(date):
+ """less verbose combo of humanize filters"""
+ if not date:
+ return ""
+ now = timezone.now()
+ delta = relativedelta(now, date)
+ if delta.years:
+ return naturalday(date)
+ if delta.days:
+ return naturalday(date, "M j")
+ return naturaltime(date)
|
{"golden_diff": "diff --git a/bookwyrm/templatetags/status_display.py b/bookwyrm/templatetags/status_display.py\n--- a/bookwyrm/templatetags/status_display.py\n+++ b/bookwyrm/templatetags/status_display.py\n@@ -1,6 +1,8 @@\n \"\"\" template filters \"\"\"\n+from dateutil.relativedelta import relativedelta\n from django import template\n-\n+from django.contrib.humanize.templatetags.humanize import naturaltime, naturalday\n+from django.utils import timezone\n from bookwyrm import models\n from bookwyrm.templatetags.utilities import get_user_identifier\n \n@@ -41,3 +43,17 @@\n def get_boosted(boost):\n \"\"\"load a boosted status. have to do this or it won't get foreign keys\"\"\"\n return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)\n+\n+\[email protected](name=\"published_date\")\n+def get_published_date(date):\n+ \"\"\"less verbose combo of humanize filters\"\"\"\n+ if not date:\n+ return \"\"\n+ now = timezone.now()\n+ delta = relativedelta(now, date)\n+ if delta.years:\n+ return naturalday(date)\n+ if delta.days:\n+ return naturalday(date, \"M j\")\n+ return naturaltime(date)\n", "issue": "Reduce detail level of timestamp on posts\n**Is your feature request related to a problem? Please describe.**\r\nI think the time when a post was posted is a tad too detailed. For posts in the last 24h, it changes every time you refresh.\r\n\r\n\r\n**Describe the solution you'd like**\r\nI think the firstmost unit would be enough.\r\n\r\nAlso, after a few days (I suggest 3), the date (Apr 28) rather than \"2 weeks(, 4 days in the current version)\" seems a bit more helpful. After 1 year, the date could be shown in \"Apr 2021\",\r\n\r\nThis is subjective of course, but imho Bookwyrm is a platform where the \"when\" doesn't really matter (in comparison to e.g. Mastodon where many are posting news and other stuff where the temporal context is more important). \r\n\r\n**Describe alternatives you've considered**\r\nHovering over the time could show the exact time as a tooltip. I think of this rather as an addition than an alternative and think both would complement each other.\n", "before_files": [{"content": "\"\"\" template filters \"\"\"\nfrom django import template\n\nfrom bookwyrm import models\nfrom bookwyrm.templatetags.utilities import get_user_identifier\n\n\nregister = template.Library()\n\n\[email protected](name=\"mentions\")\ndef get_mentions(status, user):\n \"\"\"people to @ in a reply: the parent and all mentions\"\"\"\n mentions = set([status.user] + list(status.mention_users.all()))\n return (\n \" \".join(\"@\" + get_user_identifier(m) for m in mentions if not m == user) + \" \"\n )\n\n\[email protected](name=\"replies\")\ndef get_replies(status):\n \"\"\"get all direct replies to a status\"\"\"\n # TODO: this limit could cause problems\n return models.Status.objects.filter(\n reply_parent=status,\n deleted=False,\n ).select_subclasses()[:10]\n\n\[email protected](name=\"parent\")\ndef get_parent(status):\n \"\"\"get the reply parent for a status\"\"\"\n return (\n models.Status.objects.filter(id=status.reply_parent_id)\n .select_subclasses()\n .get()\n )\n\n\[email protected](name=\"boosted_status\")\ndef get_boosted(boost):\n \"\"\"load a boosted status. 
have to do this or it won't get foreign keys\"\"\"\n return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)\n", "path": "bookwyrm/templatetags/status_display.py"}], "after_files": [{"content": "\"\"\" template filters \"\"\"\nfrom dateutil.relativedelta import relativedelta\nfrom django import template\nfrom django.contrib.humanize.templatetags.humanize import naturaltime, naturalday\nfrom django.utils import timezone\nfrom bookwyrm import models\nfrom bookwyrm.templatetags.utilities import get_user_identifier\n\n\nregister = template.Library()\n\n\[email protected](name=\"mentions\")\ndef get_mentions(status, user):\n \"\"\"people to @ in a reply: the parent and all mentions\"\"\"\n mentions = set([status.user] + list(status.mention_users.all()))\n return (\n \" \".join(\"@\" + get_user_identifier(m) for m in mentions if not m == user) + \" \"\n )\n\n\[email protected](name=\"replies\")\ndef get_replies(status):\n \"\"\"get all direct replies to a status\"\"\"\n # TODO: this limit could cause problems\n return models.Status.objects.filter(\n reply_parent=status,\n deleted=False,\n ).select_subclasses()[:10]\n\n\[email protected](name=\"parent\")\ndef get_parent(status):\n \"\"\"get the reply parent for a status\"\"\"\n return (\n models.Status.objects.filter(id=status.reply_parent_id)\n .select_subclasses()\n .get()\n )\n\n\[email protected](name=\"boosted_status\")\ndef get_boosted(boost):\n \"\"\"load a boosted status. have to do this or it won't get foreign keys\"\"\"\n return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)\n\n\[email protected](name=\"published_date\")\ndef get_published_date(date):\n \"\"\"less verbose combo of humanize filters\"\"\"\n if not date:\n return \"\"\n now = timezone.now()\n delta = relativedelta(now, date)\n if delta.years:\n return naturalday(date)\n if delta.days:\n return naturalday(date, \"M j\")\n return naturaltime(date)\n", "path": "bookwyrm/templatetags/status_display.py"}]}
| 910 | 291 |
gh_patches_debug_35071
|
rasdani/github-patches
|
git_diff
|
microsoft__playwright-python-53
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto release on PyPi on tags
General interest in that? Should be pretty easy with GitHub Actions; you only have to set a Pypi API key on your end.
Example: https://github.com/microsoft/playwright-python/new/master?filename=.github%2Fworkflows%2Fpython-publish.yml&workflow_template=python-publish
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `upload_package.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import subprocess
16
17 subprocess.run("python -m twine upload dist/*", shell=True)
18
```
Path: `setup.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import setuptools
16
17 with open("README.md", "r", encoding="utf-8") as fh:
18 long_description = fh.read()
19
20 setuptools.setup(
21 name="playwright",
22 version="0.0.3",
23 author="Microsoft Corporation",
24 author_email="",
25 description="A high-level API to automate web browsers",
26 long_description=long_description,
27 long_description_content_type="text/markdown",
28 url="https://github.com/Microsoft/playwright-python",
29 packages=setuptools.find_packages(),
30 include_package_data=True,
31 install_requires=["pyee", "typing-extensions",],
32 classifiers=[
33 "Topic :: Software Development :: Testing",
34 "Topic :: Internet :: WWW/HTTP :: Browsers",
35 "Intended Audience :: Developers",
36 "Programming Language :: Python :: 3",
37 "Programming Language :: Python :: 3.7",
38 "Programming Language :: Python :: 3.8",
39 "License :: OSI Approved :: Apache Software License",
40 "Operating System :: OS Independent",
41 ],
42 python_requires=">=3.7",
43 )
44
```
Path: `playwright/__init__.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from playwright.main import playwright_object
16 import playwright.helper as helper
17
18 chromium = playwright_object.chromium
19 firefox = playwright_object.firefox
20 webkit = playwright_object.webkit
21 devices = playwright_object.devices
22 browser_types = playwright_object.browser_types
23 Error = helper.Error
24 TimeoutError = helper.TimeoutError
25
26 __all__ = [
27 "browser_types",
28 "chromium",
29 "firefox",
30 "webkit",
31 "devices",
32 "Error",
33 "TimeoutError",
34 ]
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/playwright/__init__.py b/playwright/__init__.py
--- a/playwright/__init__.py
+++ b/playwright/__init__.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from playwright._repo_version import version as __version__ # noqa:F401
from playwright.main import playwright_object
import playwright.helper as helper
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -19,16 +19,15 @@
setuptools.setup(
name="playwright",
- version="0.0.3",
author="Microsoft Corporation",
author_email="",
description="A high-level API to automate web browsers",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/Microsoft/playwright-python",
- packages=setuptools.find_packages(),
+ packages=["playwright"],
include_package_data=True,
- install_requires=["pyee", "typing-extensions",],
+ install_requires=["pyee", "typing-extensions"],
classifiers=[
"Topic :: Software Development :: Testing",
"Topic :: Internet :: WWW/HTTP :: Browsers",
@@ -40,4 +39,10 @@
"Operating System :: OS Independent",
],
python_requires=">=3.7",
+ use_scm_version={
+ "version_scheme": "post-release",
+ "write_to": "playwright/_repo_version.py",
+ "write_to_template": 'version = "{version}"\n',
+ },
+ setup_requires=["setuptools_scm"],
)
diff --git a/upload_package.py b/upload_package.py
deleted file mode 100644
--- a/upload_package.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import subprocess
-
-subprocess.run("python -m twine upload dist/*", shell=True)
|
{"golden_diff": "diff --git a/playwright/__init__.py b/playwright/__init__.py\n--- a/playwright/__init__.py\n+++ b/playwright/__init__.py\n@@ -12,6 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+from playwright._repo_version import version as __version__ # noqa:F401\n from playwright.main import playwright_object\n import playwright.helper as helper\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -19,16 +19,15 @@\n \n setuptools.setup(\n name=\"playwright\",\n- version=\"0.0.3\",\n author=\"Microsoft Corporation\",\n author_email=\"\",\n description=\"A high-level API to automate web browsers\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/Microsoft/playwright-python\",\n- packages=setuptools.find_packages(),\n+ packages=[\"playwright\"],\n include_package_data=True,\n- install_requires=[\"pyee\", \"typing-extensions\",],\n+ install_requires=[\"pyee\", \"typing-extensions\"],\n classifiers=[\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n@@ -40,4 +39,10 @@\n \"Operating System :: OS Independent\",\n ],\n python_requires=\">=3.7\",\n+ use_scm_version={\n+ \"version_scheme\": \"post-release\",\n+ \"write_to\": \"playwright/_repo_version.py\",\n+ \"write_to_template\": 'version = \"{version}\"\\n',\n+ },\n+ setup_requires=[\"setuptools_scm\"],\n )\ndiff --git a/upload_package.py b/upload_package.py\ndeleted file mode 100644\n--- a/upload_package.py\n+++ /dev/null\n@@ -1,17 +0,0 @@\n-# Copyright (c) Microsoft Corporation.\n-#\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n-\n-import subprocess\n-\n-subprocess.run(\"python -m twine upload dist/*\", shell=True)\n", "issue": "Auto release on PyPi on tags\nGeneral interest in that? 
Should be pretty easy with GitHub Actions, only have to set the a Pypi API key on your end.\r\n\r\nExample: https://github.com/microsoft/playwright-python/new/master?filename=.github%2Fworkflows%2Fpython-publish.yml&workflow_template=python-publish\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport subprocess\n\nsubprocess.run(\"python -m twine upload dist/*\", shell=True)\n", "path": "upload_package.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\nsetuptools.setup(\n name=\"playwright\",\n version=\"0.0.3\",\n author=\"Microsoft Corporation\",\n author_email=\"\",\n description=\"A high-level API to automate web browsers\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/Microsoft/playwright-python\",\n packages=setuptools.find_packages(),\n include_package_data=True,\n install_requires=[\"pyee\", \"typing-extensions\",],\n classifiers=[\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n ],\n python_requires=\">=3.7\",\n)\n", "path": "setup.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom playwright.main import playwright_object\nimport playwright.helper as helper\n\nchromium = playwright_object.chromium\nfirefox = playwright_object.firefox\nwebkit = playwright_object.webkit\ndevices = playwright_object.devices\nbrowser_types = playwright_object.browser_types\nError = helper.Error\nTimeoutError = 
helper.TimeoutError\n\n__all__ = [\n \"browser_types\",\n \"chromium\",\n \"firefox\",\n \"webkit\",\n \"devices\",\n \"Error\",\n \"TimeoutError\",\n]\n", "path": "playwright/__init__.py"}], "after_files": [{"content": null, "path": "upload_package.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\nsetuptools.setup(\n name=\"playwright\",\n author=\"Microsoft Corporation\",\n author_email=\"\",\n description=\"A high-level API to automate web browsers\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/Microsoft/playwright-python\",\n packages=[\"playwright\"],\n include_package_data=True,\n install_requires=[\"pyee\", \"typing-extensions\"],\n classifiers=[\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n ],\n python_requires=\">=3.7\",\n use_scm_version={\n \"version_scheme\": \"post-release\",\n \"write_to\": \"playwright/_repo_version.py\",\n \"write_to_template\": 'version = \"{version}\"\\n',\n },\n setup_requires=[\"setuptools_scm\"],\n)\n", "path": "setup.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom playwright._repo_version import version as __version__ # noqa:F401\nfrom playwright.main import playwright_object\nimport playwright.helper as helper\n\nchromium = playwright_object.chromium\nfirefox = playwright_object.firefox\nwebkit = playwright_object.webkit\ndevices = playwright_object.devices\nbrowser_types = playwright_object.browser_types\nError = helper.Error\nTimeoutError = helper.TimeoutError\n\n__all__ = [\n \"browser_types\",\n \"chromium\",\n \"firefox\",\n \"webkit\",\n \"devices\",\n \"Error\",\n \"TimeoutError\",\n]\n", "path": "playwright/__init__.py"}]}
| 1,251 | 579 |
gh_patches_debug_16968
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-204
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add short description title to POIs
In addition to the name of a POI, it might be beneficial to have a short title which describes the purpose of the POI. For example, if the names of associations or locations are not self-explanatory, it could be helpful to show this title in a list view or similar whenever it is not suitable to show the full-text description of a POI.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/cms/views/pois/poi_form.py`
Content:
```
1 """
2 Form for creating a poi object and poi translation object
3 """
4
5 import logging
6
7 from django import forms
8 from django.utils.translation import ugettext_lazy as _
9
10 from ...models import POI, POITranslation
11 from ..utils.slug_utils import generate_unique_slug
12
13 logger = logging.getLogger(__name__)
14
15
16 class POIForm(forms.ModelForm):
17 """
18 DjangoForm Class, that can be rendered to create deliverable HTML
19
20 Args:
21 forms : Defines the form as an Model form related to a database object
22 """
23
24 class Meta:
25 model = POI
26 fields = ['address', 'postcode', 'city', 'country', 'latitude', 'longitude']
27
28 def __init__(self, *args, **kwargs):
29
30 logger.info(
31 'New POIForm instantiated with args %s and kwargs %s',
32 args,
33 kwargs
34 )
35
36 # pop kwarg to make sure the super class does not get this param
37 self.region = kwargs.pop('region', None)
38
39 # instantiate ModelForm
40 super(POIForm, self).__init__(*args, **kwargs)
41
42
43 # pylint: disable=W0221
44 def save(self, *args, **kwargs):
45
46 logger.info(
47 'POIForm saved with args %s and kwargs %s',
48 args,
49 kwargs
50 )
51
52 # don't commit saving of ModelForm, because required fields are still missing
53 kwargs['commit'] = False
54 poi = super(POIForm, self).save(*args, **kwargs)
55
56 if not self.instance.id:
57 # only update these values when poi is created
58 poi.region = self.region
59 poi.save()
60 return poi
61
62
63 class POITranslationForm(forms.ModelForm):
64 """
65 DjangoForm Class, that can be rendered to create deliverable HTML
66
67 Args:
68 forms : Defines the form as an Model form related to a database object
69 """
70
71 PUBLIC_CHOICES = (
72 (True, _('Public')),
73 (False, _('Private')),
74 )
75
76 class Meta:
77 model = POITranslation
78 fields = ['title', 'status', 'description', 'slug', 'public']
79
80 def __init__(self, *args, **kwargs):
81
82 logger.info(
83 'New POITranslationForm with args %s and kwargs %s',
84 args,
85 kwargs
86 )
87
88 # pop kwarg to make sure the super class does not get this param
89 self.region = kwargs.pop('region', None)
90 self.language = kwargs.pop('language', None)
91
92 super(POITranslationForm, self).__init__(*args, **kwargs)
93
94 self.fields['public'].widget = forms.Select(choices=self.PUBLIC_CHOICES)
95
96 # pylint: disable=W0221
97 def save(self, *args, **kwargs):
98
99 logger.info(
100 'POITranslationForm saved with args %s and kwargs %s',
101 args,
102 kwargs
103 )
104
105 # pop kwarg to make sure the super class does not get this param
106 poi = kwargs.pop('poi', None)
107 user = kwargs.pop('user', None)
108
109 if not self.instance.id:
110 # don't commit saving of ModelForm, because required fields are still missing
111 kwargs['commit'] = False
112
113 poi_translation = super(POITranslationForm, self).save(*args, **kwargs)
114
115 if not self.instance.id:
116 # only update these values when poi translation is created
117 poi_translation.poi = poi
118 poi_translation.creator = user
119 poi_translation.language = self.language
120
121 poi_translation.save()
122
123 return poi_translation
124
125 def clean_slug(self):
126 return generate_unique_slug(self, 'poi')
127
```
Path: `backend/cms/models/poi.py`
Content:
```
1 """Model for Point of Interests
2
3 """
4 from django.db import models
5 from django.core.exceptions import ObjectDoesNotExist
6 from django.conf import settings
7 from django.utils import timezone
8
9 from .region import Region
10 from .language import Language
11
12
13 class POI(models.Model):
14 """Object for Point of Interests
15
16 Args:
17 models : Databas model inherit from the standard django models
18 """
19
20 region = models.ForeignKey(Region, related_name='pois', on_delete=models.CASCADE)
21 address = models.CharField(max_length=250)
22 postcode = models.CharField(max_length=10)
23 city = models.CharField(max_length=250)
24 country = models.CharField(max_length=250)
25 latitude = models.FloatField()
26 longitude = models.FloatField()
27
28 @classmethod
29 def get_list_view(cls):
30 """Provides List of all POIs in german
31
32 Returns:
33 [POI]: List of all german POIs
34 """
35
36 poi_translations = POITranslation.objects.filter(
37 language='de'
38 ).select_related('creator')
39 pois = cls.objects.all().prefetch_related(
40 models.Prefetch('poi_translations', queryset=poi_translations)
41 ).filter(poi_translations__language='de')
42
43 return pois
44
45 class Meta:
46 default_permissions = ()
47 permissions = (
48 ('manage_pois', 'Can manage points of interest'),
49 )
50
51 @property
52 def languages(self):
53 poi_translations = self.translations.prefetch_related('language').all()
54 languages = []
55 for poi_translation in poi_translations:
56 languages.append(poi_translation.language)
57 return languages
58
59 def get_translation(self, language_code):
60 try:
61 poi_translation = self.translations.get(language__code=language_code)
62 except ObjectDoesNotExist:
63 poi_translation = None
64 return poi_translation
65
66
67 class POITranslation(models.Model):
68 """Translation of an Point of Interest
69
70 Args:
71 models : Databas model inherit from the standard django models
72 """
73 title = models.CharField(max_length=250)
74 slug = models.SlugField(max_length=200, blank=True)
75 poi = models.ForeignKey(POI, related_name='translations', null=True,
76 on_delete=models.SET_NULL)
77 permalink = models.CharField(max_length=60)
78 STATUS = (
79 ('draft', 'Entwurf'),
80 ('in-review', 'Ausstehender Review'),
81 ('reviewed', 'Review abgeschlossen'),
82 )
83 status = models.CharField(max_length=9, choices=STATUS, default='draft')
84 description = models.TextField()
85 language = models.ForeignKey(Language, on_delete=models.CASCADE)
86 version = models.PositiveIntegerField(default=0)
87 minor_edit = models.BooleanField(default=False)
88 public = models.BooleanField(default=False)
89 created_date = models.DateTimeField(default=timezone.now)
90 last_updated = models.DateTimeField(auto_now=True)
91 creator = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL)
92
93 class Meta:
94 default_permissions = ()
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/backend/cms/models/poi.py b/backend/cms/models/poi.py
--- a/backend/cms/models/poi.py
+++ b/backend/cms/models/poi.py
@@ -81,6 +81,7 @@
('reviewed', 'Review abgeschlossen'),
)
status = models.CharField(max_length=9, choices=STATUS, default='draft')
+ short_description = models.CharField(max_length=250)
description = models.TextField()
language = models.ForeignKey(Language, on_delete=models.CASCADE)
version = models.PositiveIntegerField(default=0)
diff --git a/backend/cms/views/pois/poi_form.py b/backend/cms/views/pois/poi_form.py
--- a/backend/cms/views/pois/poi_form.py
+++ b/backend/cms/views/pois/poi_form.py
@@ -75,7 +75,7 @@
class Meta:
model = POITranslation
- fields = ['title', 'status', 'description', 'slug', 'public']
+ fields = ['title', 'short_description', 'status', 'description', 'slug', 'public']
def __init__(self, *args, **kwargs):
|
{"golden_diff": "diff --git a/backend/cms/models/poi.py b/backend/cms/models/poi.py\n--- a/backend/cms/models/poi.py\n+++ b/backend/cms/models/poi.py\n@@ -81,6 +81,7 @@\n ('reviewed', 'Review abgeschlossen'),\n )\n status = models.CharField(max_length=9, choices=STATUS, default='draft')\n+ short_description = models.CharField(max_length=250)\n description = models.TextField()\n language = models.ForeignKey(Language, on_delete=models.CASCADE)\n version = models.PositiveIntegerField(default=0)\ndiff --git a/backend/cms/views/pois/poi_form.py b/backend/cms/views/pois/poi_form.py\n--- a/backend/cms/views/pois/poi_form.py\n+++ b/backend/cms/views/pois/poi_form.py\n@@ -75,7 +75,7 @@\n \n class Meta:\n model = POITranslation\n- fields = ['title', 'status', 'description', 'slug', 'public']\n+ fields = ['title', 'short_description', 'status', 'description', 'slug', 'public']\n \n def __init__(self, *args, **kwargs):\n", "issue": "Add short description title to POIs\nAdditionally to the name of a POI, it might be beneficial to have a short title which describes the purpose of the POI. For example, if names of associations or locations are not self-explanatory, it could be helpful to show this title in a list view or similar whenever it is not suitable to show the full-text description of a POI.\n", "before_files": [{"content": "\"\"\"\nForm for creating a poi object and poi translation object\n\"\"\"\n\nimport logging\n\nfrom django import forms\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ...models import POI, POITranslation\nfrom ..utils.slug_utils import generate_unique_slug\n\nlogger = logging.getLogger(__name__)\n\n\nclass POIForm(forms.ModelForm):\n \"\"\"\n DjangoForm Class, that can be rendered to create deliverable HTML\n\n Args:\n forms : Defines the form as an Model form related to a database object\n \"\"\"\n\n class Meta:\n model = POI\n fields = ['address', 'postcode', 'city', 'country', 'latitude', 'longitude']\n\n def __init__(self, *args, **kwargs):\n\n logger.info(\n 'New POIForm instantiated with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n self.region = kwargs.pop('region', None)\n\n # instantiate ModelForm\n super(POIForm, self).__init__(*args, **kwargs)\n\n\n # pylint: disable=W0221\n def save(self, *args, **kwargs):\n\n logger.info(\n 'POIForm saved with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # don't commit saving of ModelForm, because required fields are still missing\n kwargs['commit'] = False\n poi = super(POIForm, self).save(*args, **kwargs)\n\n if not self.instance.id:\n # only update these values when poi is created\n poi.region = self.region\n poi.save()\n return poi\n\n\nclass POITranslationForm(forms.ModelForm):\n \"\"\"\n DjangoForm Class, that can be rendered to create deliverable HTML\n\n Args:\n forms : Defines the form as an Model form related to a database object\n \"\"\"\n\n PUBLIC_CHOICES = (\n (True, _('Public')),\n (False, _('Private')),\n )\n\n class Meta:\n model = POITranslation\n fields = ['title', 'status', 'description', 'slug', 'public']\n\n def __init__(self, *args, **kwargs):\n\n logger.info(\n 'New POITranslationForm with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n self.region = kwargs.pop('region', None)\n self.language = kwargs.pop('language', None)\n\n super(POITranslationForm, self).__init__(*args, **kwargs)\n\n self.fields['public'].widget = 
forms.Select(choices=self.PUBLIC_CHOICES)\n\n # pylint: disable=W0221\n def save(self, *args, **kwargs):\n\n logger.info(\n 'POITranslationForm saved with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n poi = kwargs.pop('poi', None)\n user = kwargs.pop('user', None)\n\n if not self.instance.id:\n # don't commit saving of ModelForm, because required fields are still missing\n kwargs['commit'] = False\n\n poi_translation = super(POITranslationForm, self).save(*args, **kwargs)\n\n if not self.instance.id:\n # only update these values when poi translation is created\n poi_translation.poi = poi\n poi_translation.creator = user\n poi_translation.language = self.language\n\n poi_translation.save()\n\n return poi_translation\n\n def clean_slug(self):\n return generate_unique_slug(self, 'poi')\n", "path": "backend/cms/views/pois/poi_form.py"}, {"content": "\"\"\"Model for Point of Interests\n\n\"\"\"\nfrom django.db import models\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.conf import settings\nfrom django.utils import timezone\n\nfrom .region import Region\nfrom .language import Language\n\n\nclass POI(models.Model):\n \"\"\"Object for Point of Interests\n\n Args:\n models : Databas model inherit from the standard django models\n \"\"\"\n\n region = models.ForeignKey(Region, related_name='pois', on_delete=models.CASCADE)\n address = models.CharField(max_length=250)\n postcode = models.CharField(max_length=10)\n city = models.CharField(max_length=250)\n country = models.CharField(max_length=250)\n latitude = models.FloatField()\n longitude = models.FloatField()\n\n @classmethod\n def get_list_view(cls):\n \"\"\"Provides List of all POIs in german\n\n Returns:\n [POI]: List of all german POIs\n \"\"\"\n\n poi_translations = POITranslation.objects.filter(\n language='de'\n ).select_related('creator')\n pois = cls.objects.all().prefetch_related(\n models.Prefetch('poi_translations', queryset=poi_translations)\n ).filter(poi_translations__language='de')\n\n return pois\n\n class Meta:\n default_permissions = ()\n permissions = (\n ('manage_pois', 'Can manage points of interest'),\n )\n\n @property\n def languages(self):\n poi_translations = self.translations.prefetch_related('language').all()\n languages = []\n for poi_translation in poi_translations:\n languages.append(poi_translation.language)\n return languages\n\n def get_translation(self, language_code):\n try:\n poi_translation = self.translations.get(language__code=language_code)\n except ObjectDoesNotExist:\n poi_translation = None\n return poi_translation\n\n\nclass POITranslation(models.Model):\n \"\"\"Translation of an Point of Interest\n\n Args:\n models : Databas model inherit from the standard django models\n \"\"\"\n title = models.CharField(max_length=250)\n slug = models.SlugField(max_length=200, blank=True)\n poi = models.ForeignKey(POI, related_name='translations', null=True,\n on_delete=models.SET_NULL)\n permalink = models.CharField(max_length=60)\n STATUS = (\n ('draft', 'Entwurf'),\n ('in-review', 'Ausstehender Review'),\n ('reviewed', 'Review abgeschlossen'),\n )\n status = models.CharField(max_length=9, choices=STATUS, default='draft')\n description = models.TextField()\n language = models.ForeignKey(Language, on_delete=models.CASCADE)\n version = models.PositiveIntegerField(default=0)\n minor_edit = models.BooleanField(default=False)\n public = models.BooleanField(default=False)\n created_date = 
models.DateTimeField(default=timezone.now)\n last_updated = models.DateTimeField(auto_now=True)\n creator = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL)\n\n class Meta:\n default_permissions = ()\n", "path": "backend/cms/models/poi.py"}], "after_files": [{"content": "\"\"\"\nForm for creating a poi object and poi translation object\n\"\"\"\n\nimport logging\n\nfrom django import forms\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ...models import POI, POITranslation\nfrom ..utils.slug_utils import generate_unique_slug\n\nlogger = logging.getLogger(__name__)\n\n\nclass POIForm(forms.ModelForm):\n \"\"\"\n DjangoForm Class, that can be rendered to create deliverable HTML\n\n Args:\n forms : Defines the form as an Model form related to a database object\n \"\"\"\n\n class Meta:\n model = POI\n fields = ['address', 'postcode', 'city', 'country', 'latitude', 'longitude']\n\n def __init__(self, *args, **kwargs):\n\n logger.info(\n 'New POIForm instantiated with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n self.region = kwargs.pop('region', None)\n\n # instantiate ModelForm\n super(POIForm, self).__init__(*args, **kwargs)\n\n\n # pylint: disable=W0221\n def save(self, *args, **kwargs):\n\n logger.info(\n 'POIForm saved with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # don't commit saving of ModelForm, because required fields are still missing\n kwargs['commit'] = False\n poi = super(POIForm, self).save(*args, **kwargs)\n\n if not self.instance.id:\n # only update these values when poi is created\n poi.region = self.region\n poi.save()\n return poi\n\n\nclass POITranslationForm(forms.ModelForm):\n \"\"\"\n DjangoForm Class, that can be rendered to create deliverable HTML\n\n Args:\n forms : Defines the form as an Model form related to a database object\n \"\"\"\n\n PUBLIC_CHOICES = (\n (True, _('Public')),\n (False, _('Private')),\n )\n\n class Meta:\n model = POITranslation\n fields = ['title', 'short_description', 'status', 'description', 'slug', 'public']\n\n def __init__(self, *args, **kwargs):\n\n logger.info(\n 'New POITranslationForm with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n self.region = kwargs.pop('region', None)\n self.language = kwargs.pop('language', None)\n\n super(POITranslationForm, self).__init__(*args, **kwargs)\n\n self.fields['public'].widget = forms.Select(choices=self.PUBLIC_CHOICES)\n\n # pylint: disable=W0221\n def save(self, *args, **kwargs):\n\n logger.info(\n 'POITranslationForm saved with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n poi = kwargs.pop('poi', None)\n user = kwargs.pop('user', None)\n\n if not self.instance.id:\n # don't commit saving of ModelForm, because required fields are still missing\n kwargs['commit'] = False\n\n poi_translation = super(POITranslationForm, self).save(*args, **kwargs)\n\n if not self.instance.id:\n # only update these values when poi translation is created\n poi_translation.poi = poi\n poi_translation.creator = user\n poi_translation.language = self.language\n\n poi_translation.save()\n\n return poi_translation\n\n def clean_slug(self):\n return generate_unique_slug(self, 'poi')\n", "path": "backend/cms/views/pois/poi_form.py"}, {"content": "\"\"\"Model for Point of Interests\n\n\"\"\"\nfrom django.db import models\nfrom django.core.exceptions import 
ObjectDoesNotExist\nfrom django.conf import settings\nfrom django.utils import timezone\n\nfrom .region import Region\nfrom .language import Language\n\n\nclass POI(models.Model):\n \"\"\"Object for Point of Interests\n\n Args:\n models : Databas model inherit from the standard django models\n \"\"\"\n\n region = models.ForeignKey(Region, related_name='pois', on_delete=models.CASCADE)\n address = models.CharField(max_length=250)\n postcode = models.CharField(max_length=10)\n city = models.CharField(max_length=250)\n country = models.CharField(max_length=250)\n latitude = models.FloatField()\n longitude = models.FloatField()\n\n @classmethod\n def get_list_view(cls):\n \"\"\"Provides List of all POIs in german\n\n Returns:\n [POI]: List of all german POIs\n \"\"\"\n\n poi_translations = POITranslation.objects.filter(\n language='de'\n ).select_related('creator')\n pois = cls.objects.all().prefetch_related(\n models.Prefetch('poi_translations', queryset=poi_translations)\n ).filter(poi_translations__language='de')\n\n return pois\n\n class Meta:\n default_permissions = ()\n permissions = (\n ('manage_pois', 'Can manage points of interest'),\n )\n\n @property\n def languages(self):\n poi_translations = self.translations.prefetch_related('language').all()\n languages = []\n for poi_translation in poi_translations:\n languages.append(poi_translation.language)\n return languages\n\n def get_translation(self, language_code):\n try:\n poi_translation = self.translations.get(language__code=language_code)\n except ObjectDoesNotExist:\n poi_translation = None\n return poi_translation\n\n\nclass POITranslation(models.Model):\n \"\"\"Translation of an Point of Interest\n\n Args:\n models : Databas model inherit from the standard django models\n \"\"\"\n title = models.CharField(max_length=250)\n slug = models.SlugField(max_length=200, blank=True)\n poi = models.ForeignKey(POI, related_name='translations', null=True,\n on_delete=models.SET_NULL)\n permalink = models.CharField(max_length=60)\n STATUS = (\n ('draft', 'Entwurf'),\n ('in-review', 'Ausstehender Review'),\n ('reviewed', 'Review abgeschlossen'),\n )\n status = models.CharField(max_length=9, choices=STATUS, default='draft')\n short_description = models.CharField(max_length=250)\n description = models.TextField()\n language = models.ForeignKey(Language, on_delete=models.CASCADE)\n version = models.PositiveIntegerField(default=0)\n minor_edit = models.BooleanField(default=False)\n public = models.BooleanField(default=False)\n created_date = models.DateTimeField(default=timezone.now)\n last_updated = models.DateTimeField(auto_now=True)\n creator = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL)\n\n class Meta:\n default_permissions = ()\n", "path": "backend/cms/models/poi.py"}]}
| 2,279 | 250 |
gh_patches_debug_36108
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1463
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User creation in the admin is broken
Sentry Issue: [CONCREXIT-3F](https://sentry.io/organizations/thalia/issues/1844597243/?referrer=github_integration)
```
FieldError: Unknown field(s) (password2, password1) specified for User
File "django/contrib/admin/options.py", line 702, in get_form
return modelform_factory(self.model, **defaults)
File "django/forms/models.py", line 554, in modelform_factory
return type(form)(class_name, (form,), form_class_attrs)
File "django/forms/models.py", line 267, in __new__
raise FieldError(message)
FieldError: Unknown field(s) (password2, password1) specified for User. Check fields/fieldsets/exclude attributes of class UserAdmin.
(15 additional frame(s) were not displayed)
...
File "django/utils/decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "django/contrib/admin/options.py", line 1522, in changeform_view
return self._changeform_view(request, object_id, form_url, extra_context)
File "django/contrib/admin/options.py", line 1555, in _changeform_view
ModelForm = self.get_form(request, obj, change=not add)
File "django/contrib/auth/admin.py", line 80, in get_form
return super().get_form(request, obj, **defaults)
File "django/contrib/admin/options.py", line 704, in get_form
raise FieldError(
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/members/forms.py`
Content:
```
1 """Forms defined by the members package."""
2 from django import forms
3 from django.conf import settings
4 from django.contrib.auth import get_user_model
5 from django.contrib.auth.forms import UserChangeForm as BaseUserChangeForm
6 from django.contrib.auth.forms import UserCreationForm as BaseUserCreationForm
7 from django.core.validators import RegexValidator
8 from django.utils.translation import gettext_lazy as _
9
10 from members import emails
11 from .models import Profile
12
13
14 class ProfileForm(forms.ModelForm):
15 """Form with all the user editable fields of a Profile model."""
16
17 class Meta:
18 fields = [
19 "show_birthday",
20 "address_street",
21 "address_street2",
22 "address_postal_code",
23 "address_city",
24 "address_country",
25 "phone_number",
26 "emergency_contact",
27 "emergency_contact_phone_number",
28 "website",
29 "profile_description",
30 "nickname",
31 "initials",
32 "display_name_preference",
33 "photo",
34 "receive_optin",
35 "receive_newsletter",
36 "receive_magazine",
37 "email_gsuite_only",
38 ]
39 model = Profile
40
41 def __init__(self, *args, **kwargs):
42 super().__init__(*args, **kwargs)
43 if not kwargs["instance"].user.is_staff:
44 self.fields["email_gsuite_only"].widget = self.fields[
45 "email_gsuite_only"
46 ].hidden_widget()
47
48
49 class UserCreationForm(BaseUserCreationForm):
50 """Custom Form that removes the password fields from user creation and sends a welcome message when a user is created."""
51
52 # Don't forget to edit the formset in admin.py!
53 # This is a stupid quirk of the user admin.
54
55 # shadow the password fields to prevent validation errors,
56 # since we generate the passwords dynamically.
57 password1 = None
58 password2 = None
59
60 def __init__(self, *args, **kwargs):
61 super().__init__(*args, **kwargs)
62 for field in ("email", "first_name", "last_name"):
63 self.fields[field].required = True
64
65 send_welcome_email = forms.BooleanField(
66 label=_("Send welcome email"),
67 help_text=_("This email will include the generated password"),
68 required=False,
69 initial=True,
70 )
71
72 def clean(self):
73 if "username" in self.cleaned_data:
74 self.cleaned_data["username"] = self.cleaned_data["username"].lower()
75 super().clean()
76
77 def save(self, commit=True):
78 password = get_user_model().objects.make_random_password(length=15)
79 # pass the password on as if it was filled in, so that save() works
80 self.cleaned_data["password1"] = password
81 user = super().save(commit=False)
82 user.set_password(password)
83 if commit:
84 user.save()
85 if self.cleaned_data["send_welcome_email"]:
86 language = settings.LANGUAGE_CODE
87 emails.send_welcome_message(user, password, language)
88 return user
89
90 class Meta:
91 fields = ("username", "first_name", "last_name", "send_welcome_email")
92
93
94 class UserChangeForm(BaseUserChangeForm):
95 """Custom user edit form that adds fields for first/last name and email.
96
97 It also force-lowercases the username on save
98 """
99
100 username = forms.CharField(
101 label=_("Username"),
102 required=True,
103 help_text=_("Required. 64 characters or fewer. Letters and digits only."),
104 widget=forms.TextInput(attrs={"class": "vTextField", "maxlength": 64}),
105 validators=[
106 RegexValidator(
107 regex="^[a-zA-Z0-9]{1,64}$",
108 message=_(
109 "Please use 64 characters or fewer. Letters and digits only."
110 ),
111 )
112 ],
113 )
114
115 first_name = forms.CharField(
116 label=_("First name"),
117 required=True,
118 widget=forms.TextInput(attrs={"class": "vTextField", "maxlength": 30}),
119 )
120 last_name = forms.CharField(
121 label=_("Last name"),
122 required=True,
123 widget=forms.TextInput(attrs={"class": "vTextField", "maxlength": 150}),
124 )
125 email = forms.CharField(
126 label=_("Email address"),
127 required=True,
128 widget=forms.EmailInput(attrs={"class": "vTextField", "maxlength": 254}),
129 )
130
131 def clean(self):
132 if "username" in self.cleaned_data:
133 self.cleaned_data["username"] = self.cleaned_data["username"].lower()
134 super().clean()
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/members/forms.py b/website/members/forms.py
--- a/website/members/forms.py
+++ b/website/members/forms.py
@@ -1,13 +1,10 @@
"""Forms defined by the members package."""
from django import forms
-from django.conf import settings
-from django.contrib.auth import get_user_model
from django.contrib.auth.forms import UserChangeForm as BaseUserChangeForm
from django.contrib.auth.forms import UserCreationForm as BaseUserCreationForm
from django.core.validators import RegexValidator
from django.utils.translation import gettext_lazy as _
-from members import emails
from .models import Profile
@@ -47,48 +44,15 @@
class UserCreationForm(BaseUserCreationForm):
- """Custom Form that removes the password fields from user creation and sends a welcome message when a user is created."""
-
- # Don't forget to edit the formset in admin.py!
- # This is a stupid quirk of the user admin.
-
- # shadow the password fields to prevent validation errors,
- # since we generate the passwords dynamically.
- password1 = None
- password2 = None
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- for field in ("email", "first_name", "last_name"):
- self.fields[field].required = True
-
- send_welcome_email = forms.BooleanField(
- label=_("Send welcome email"),
- help_text=_("This email will include the generated password"),
- required=False,
- initial=True,
- )
+ """Custom Form that lowercases the username on creation."""
def clean(self):
if "username" in self.cleaned_data:
self.cleaned_data["username"] = self.cleaned_data["username"].lower()
super().clean()
- def save(self, commit=True):
- password = get_user_model().objects.make_random_password(length=15)
- # pass the password on as if it was filled in, so that save() works
- self.cleaned_data["password1"] = password
- user = super().save(commit=False)
- user.set_password(password)
- if commit:
- user.save()
- if self.cleaned_data["send_welcome_email"]:
- language = settings.LANGUAGE_CODE
- emails.send_welcome_message(user, password, language)
- return user
-
class Meta:
- fields = ("username", "first_name", "last_name", "send_welcome_email")
+ fields = ("username", "first_name", "last_name")
class UserChangeForm(BaseUserChangeForm):
|
{"golden_diff": "diff --git a/website/members/forms.py b/website/members/forms.py\n--- a/website/members/forms.py\n+++ b/website/members/forms.py\n@@ -1,13 +1,10 @@\n \"\"\"Forms defined by the members package.\"\"\"\n from django import forms\n-from django.conf import settings\n-from django.contrib.auth import get_user_model\n from django.contrib.auth.forms import UserChangeForm as BaseUserChangeForm\n from django.contrib.auth.forms import UserCreationForm as BaseUserCreationForm\n from django.core.validators import RegexValidator\n from django.utils.translation import gettext_lazy as _\n \n-from members import emails\n from .models import Profile\n \n \n@@ -47,48 +44,15 @@\n \n \n class UserCreationForm(BaseUserCreationForm):\n- \"\"\"Custom Form that removes the password fields from user creation and sends a welcome message when a user is created.\"\"\"\n-\n- # Don't forget to edit the formset in admin.py!\n- # This is a stupid quirk of the user admin.\n-\n- # shadow the password fields to prevent validation errors,\n- # since we generate the passwords dynamically.\n- password1 = None\n- password2 = None\n-\n- def __init__(self, *args, **kwargs):\n- super().__init__(*args, **kwargs)\n- for field in (\"email\", \"first_name\", \"last_name\"):\n- self.fields[field].required = True\n-\n- send_welcome_email = forms.BooleanField(\n- label=_(\"Send welcome email\"),\n- help_text=_(\"This email will include the generated password\"),\n- required=False,\n- initial=True,\n- )\n+ \"\"\"Custom Form that lowercases the username on creation.\"\"\"\n \n def clean(self):\n if \"username\" in self.cleaned_data:\n self.cleaned_data[\"username\"] = self.cleaned_data[\"username\"].lower()\n super().clean()\n \n- def save(self, commit=True):\n- password = get_user_model().objects.make_random_password(length=15)\n- # pass the password on as if it was filled in, so that save() works\n- self.cleaned_data[\"password1\"] = password\n- user = super().save(commit=False)\n- user.set_password(password)\n- if commit:\n- user.save()\n- if self.cleaned_data[\"send_welcome_email\"]:\n- language = settings.LANGUAGE_CODE\n- emails.send_welcome_message(user, password, language)\n- return user\n-\n class Meta:\n- fields = (\"username\", \"first_name\", \"last_name\", \"send_welcome_email\")\n+ fields = (\"username\", \"first_name\", \"last_name\")\n \n \n class UserChangeForm(BaseUserChangeForm):\n", "issue": "User creation in the admin is broken\nSentry Issue: [CONCREXIT-3F](https://sentry.io/organizations/thalia/issues/1844597243/?referrer=github_integration)\n\n```\nFieldError: Unknown field(s) (password2, password1) specified for User\n File \"django/contrib/admin/options.py\", line 702, in get_form\n return modelform_factory(self.model, **defaults)\n File \"django/forms/models.py\", line 554, in modelform_factory\n return type(form)(class_name, (form,), form_class_attrs)\n File \"django/forms/models.py\", line 267, in __new__\n raise FieldError(message)\n\nFieldError: Unknown field(s) (password2, password1) specified for User. 
Check fields/fieldsets/exclude attributes of class UserAdmin.\n(15 additional frame(s) were not displayed)\n...\n File \"django/utils/decorators.py\", line 130, in _wrapped_view\n response = view_func(request, *args, **kwargs)\n File \"django/contrib/admin/options.py\", line 1522, in changeform_view\n return self._changeform_view(request, object_id, form_url, extra_context)\n File \"django/contrib/admin/options.py\", line 1555, in _changeform_view\n ModelForm = self.get_form(request, obj, change=not add)\n File \"django/contrib/auth/admin.py\", line 80, in get_form\n return super().get_form(request, obj, **defaults)\n File \"django/contrib/admin/options.py\", line 704, in get_form\n raise FieldError(\n```\n", "before_files": [{"content": "\"\"\"Forms defined by the members package.\"\"\"\nfrom django import forms\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.forms import UserChangeForm as BaseUserChangeForm\nfrom django.contrib.auth.forms import UserCreationForm as BaseUserCreationForm\nfrom django.core.validators import RegexValidator\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members import emails\nfrom .models import Profile\n\n\nclass ProfileForm(forms.ModelForm):\n \"\"\"Form with all the user editable fields of a Profile model.\"\"\"\n\n class Meta:\n fields = [\n \"show_birthday\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"phone_number\",\n \"emergency_contact\",\n \"emergency_contact_phone_number\",\n \"website\",\n \"profile_description\",\n \"nickname\",\n \"initials\",\n \"display_name_preference\",\n \"photo\",\n \"receive_optin\",\n \"receive_newsletter\",\n \"receive_magazine\",\n \"email_gsuite_only\",\n ]\n model = Profile\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n if not kwargs[\"instance\"].user.is_staff:\n self.fields[\"email_gsuite_only\"].widget = self.fields[\n \"email_gsuite_only\"\n ].hidden_widget()\n\n\nclass UserCreationForm(BaseUserCreationForm):\n \"\"\"Custom Form that removes the password fields from user creation and sends a welcome message when a user is created.\"\"\"\n\n # Don't forget to edit the formset in admin.py!\n # This is a stupid quirk of the user admin.\n\n # shadow the password fields to prevent validation errors,\n # since we generate the passwords dynamically.\n password1 = None\n password2 = None\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n for field in (\"email\", \"first_name\", \"last_name\"):\n self.fields[field].required = True\n\n send_welcome_email = forms.BooleanField(\n label=_(\"Send welcome email\"),\n help_text=_(\"This email will include the generated password\"),\n required=False,\n initial=True,\n )\n\n def clean(self):\n if \"username\" in self.cleaned_data:\n self.cleaned_data[\"username\"] = self.cleaned_data[\"username\"].lower()\n super().clean()\n\n def save(self, commit=True):\n password = get_user_model().objects.make_random_password(length=15)\n # pass the password on as if it was filled in, so that save() works\n self.cleaned_data[\"password1\"] = password\n user = super().save(commit=False)\n user.set_password(password)\n if commit:\n user.save()\n if self.cleaned_data[\"send_welcome_email\"]:\n language = settings.LANGUAGE_CODE\n emails.send_welcome_message(user, password, language)\n return user\n\n class Meta:\n fields = (\"username\", \"first_name\", \"last_name\", 
\"send_welcome_email\")\n\n\nclass UserChangeForm(BaseUserChangeForm):\n \"\"\"Custom user edit form that adds fields for first/last name and email.\n\n It also force-lowercases the username on save\n \"\"\"\n\n username = forms.CharField(\n label=_(\"Username\"),\n required=True,\n help_text=_(\"Required. 64 characters or fewer. Letters and digits only.\"),\n widget=forms.TextInput(attrs={\"class\": \"vTextField\", \"maxlength\": 64}),\n validators=[\n RegexValidator(\n regex=\"^[a-zA-Z0-9]{1,64}$\",\n message=_(\n \"Please use 64 characters or fewer. Letters and digits only.\"\n ),\n )\n ],\n )\n\n first_name = forms.CharField(\n label=_(\"First name\"),\n required=True,\n widget=forms.TextInput(attrs={\"class\": \"vTextField\", \"maxlength\": 30}),\n )\n last_name = forms.CharField(\n label=_(\"Last name\"),\n required=True,\n widget=forms.TextInput(attrs={\"class\": \"vTextField\", \"maxlength\": 150}),\n )\n email = forms.CharField(\n label=_(\"Email address\"),\n required=True,\n widget=forms.EmailInput(attrs={\"class\": \"vTextField\", \"maxlength\": 254}),\n )\n\n def clean(self):\n if \"username\" in self.cleaned_data:\n self.cleaned_data[\"username\"] = self.cleaned_data[\"username\"].lower()\n super().clean()\n", "path": "website/members/forms.py"}], "after_files": [{"content": "\"\"\"Forms defined by the members package.\"\"\"\nfrom django import forms\nfrom django.contrib.auth.forms import UserChangeForm as BaseUserChangeForm\nfrom django.contrib.auth.forms import UserCreationForm as BaseUserCreationForm\nfrom django.core.validators import RegexValidator\nfrom django.utils.translation import gettext_lazy as _\n\nfrom .models import Profile\n\n\nclass ProfileForm(forms.ModelForm):\n \"\"\"Form with all the user editable fields of a Profile model.\"\"\"\n\n class Meta:\n fields = [\n \"show_birthday\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"phone_number\",\n \"emergency_contact\",\n \"emergency_contact_phone_number\",\n \"website\",\n \"profile_description\",\n \"nickname\",\n \"initials\",\n \"display_name_preference\",\n \"photo\",\n \"receive_optin\",\n \"receive_newsletter\",\n \"receive_magazine\",\n \"email_gsuite_only\",\n ]\n model = Profile\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n if not kwargs[\"instance\"].user.is_staff:\n self.fields[\"email_gsuite_only\"].widget = self.fields[\n \"email_gsuite_only\"\n ].hidden_widget()\n\n\nclass UserCreationForm(BaseUserCreationForm):\n \"\"\"Custom Form that lowercases the username on creation.\"\"\"\n\n def clean(self):\n if \"username\" in self.cleaned_data:\n self.cleaned_data[\"username\"] = self.cleaned_data[\"username\"].lower()\n super().clean()\n\n class Meta:\n fields = (\"username\", \"first_name\", \"last_name\")\n\n\nclass UserChangeForm(BaseUserChangeForm):\n \"\"\"Custom user edit form that adds fields for first/last name and email.\n\n It also force-lowercases the username on save\n \"\"\"\n\n username = forms.CharField(\n label=_(\"Username\"),\n required=True,\n help_text=_(\"Required. 64 characters or fewer. Letters and digits only.\"),\n widget=forms.TextInput(attrs={\"class\": \"vTextField\", \"maxlength\": 64}),\n validators=[\n RegexValidator(\n regex=\"^[a-zA-Z0-9]{1,64}$\",\n message=_(\n \"Please use 64 characters or fewer. 
Letters and digits only.\"\n ),\n )\n ],\n )\n\n first_name = forms.CharField(\n label=_(\"First name\"),\n required=True,\n widget=forms.TextInput(attrs={\"class\": \"vTextField\", \"maxlength\": 30}),\n )\n last_name = forms.CharField(\n label=_(\"Last name\"),\n required=True,\n widget=forms.TextInput(attrs={\"class\": \"vTextField\", \"maxlength\": 150}),\n )\n email = forms.CharField(\n label=_(\"Email address\"),\n required=True,\n widget=forms.EmailInput(attrs={\"class\": \"vTextField\", \"maxlength\": 254}),\n )\n\n def clean(self):\n if \"username\" in self.cleaned_data:\n self.cleaned_data[\"username\"] = self.cleaned_data[\"username\"].lower()\n super().clean()\n", "path": "website/members/forms.py"}]}
| 1,880 | 575 |
gh_patches_debug_166
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-9516
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
2024.4.0 LongRunningTransaction
**Describe the bug**
Prometheus alert for a long running transaction.
I think the transaction is
```
SELECT pg_advisory_unlock($1)
```
**To Reproduce**
No activity, sitting idle
**Expected behavior**
Shouldn't have the alert
**Screenshots**
**Logs**
**Version and Deployment (please complete the following information):**
2024.4.0 kubernetes
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lifecycle/migrate.py`
Content:
```
1 #!/usr/bin/env python
2 """System Migration handler"""
3 from importlib.util import module_from_spec, spec_from_file_location
4 from inspect import getmembers, isclass
5 from os import environ, system
6 from pathlib import Path
7 from typing import Any
8
9 from psycopg import Connection, Cursor, connect
10 from structlog.stdlib import get_logger
11
12 from authentik.lib.config import CONFIG
13
14 LOGGER = get_logger()
15 ADV_LOCK_UID = 1000
16 LOCKED = False
17
18
19 class CommandError(Exception):
20 """Error raised when a system_crit command fails"""
21
22
23 class BaseMigration:
24 """Base System Migration"""
25
26 cur: Cursor
27 con: Connection
28
29 def __init__(self, cur: Any, con: Any):
30 self.cur = cur
31 self.con = con
32
33 def system_crit(self, command: str):
34 """Run system command"""
35 LOGGER.debug("Running system_crit command", command=command)
36 retval = system(command) # nosec
37 if retval != 0:
38 raise CommandError("Migration error")
39
40 def fake_migration(self, *app_migration: tuple[str, str]):
41 """Fake apply a list of migrations, arguments are
42 expected to be tuples of (app_label, migration_name)"""
43 for app, _migration in app_migration:
44 self.system_crit(f"./manage.py migrate {app} {_migration} --fake")
45
46 def needs_migration(self) -> bool:
47 """Return true if Migration needs to be run"""
48 return False
49
50 def run(self):
51 """Run the actual migration"""
52
53
54 def wait_for_lock(cursor: Cursor):
55 """lock an advisory lock to prevent multiple instances from migrating at once"""
56 LOGGER.info("waiting to acquire database lock")
57 cursor.execute("SELECT pg_advisory_lock(%s)", (ADV_LOCK_UID,))
58
59 global LOCKED # noqa: PLW0603
60 LOCKED = True
61
62
63 def release_lock(cursor: Cursor):
64 """Release database lock"""
65 if not LOCKED:
66 return
67 LOGGER.info("releasing database lock")
68 cursor.execute("SELECT pg_advisory_unlock(%s)", (ADV_LOCK_UID,))
69
70
71 def run_migrations():
72 conn = connect(
73 dbname=CONFIG.get("postgresql.name"),
74 user=CONFIG.get("postgresql.user"),
75 password=CONFIG.get("postgresql.password"),
76 host=CONFIG.get("postgresql.host"),
77 port=CONFIG.get_int("postgresql.port"),
78 sslmode=CONFIG.get("postgresql.sslmode"),
79 sslrootcert=CONFIG.get("postgresql.sslrootcert"),
80 sslcert=CONFIG.get("postgresql.sslcert"),
81 sslkey=CONFIG.get("postgresql.sslkey"),
82 )
83 curr = conn.cursor()
84 try:
85 for migration_path in Path(__file__).parent.absolute().glob("system_migrations/*.py"):
86 spec = spec_from_file_location("lifecycle.system_migrations", migration_path)
87 if not spec:
88 continue
89 mod = module_from_spec(spec)
90 spec.loader.exec_module(mod)
91
92 for name, sub in getmembers(mod, isclass):
93 if name != "Migration":
94 continue
95 migration = sub(curr, conn)
96 if migration.needs_migration():
97 wait_for_lock(curr)
98 LOGGER.info("Migration needs to be applied", migration=migration_path.name)
99 migration.run()
100 LOGGER.info("Migration finished applying", migration=migration_path.name)
101 release_lock(curr)
102 LOGGER.info("applying django migrations")
103 environ.setdefault("DJANGO_SETTINGS_MODULE", "authentik.root.settings")
104 wait_for_lock(curr)
105 try:
106 from django.core.management import execute_from_command_line
107 except ImportError as exc:
108 raise ImportError(
109 "Couldn't import Django. Are you sure it's installed and "
110 "available on your PYTHONPATH environment variable? Did you "
111 "forget to activate a virtual environment?"
112 ) from exc
113 execute_from_command_line(["", "migrate_schemas"])
114 execute_from_command_line(["", "migrate_schemas", "--schema", "template", "--tenant"])
115 execute_from_command_line(
116 ["", "check"] + ([] if CONFIG.get_bool("debug") else ["--deploy"])
117 )
118 finally:
119 release_lock(curr)
120
121
122 if __name__ == "__main__":
123 run_migrations()
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lifecycle/migrate.py b/lifecycle/migrate.py
--- a/lifecycle/migrate.py
+++ b/lifecycle/migrate.py
@@ -117,6 +117,8 @@
)
finally:
release_lock(curr)
+ curr.close()
+ conn.close()
if __name__ == "__main__":
|
{"golden_diff": "diff --git a/lifecycle/migrate.py b/lifecycle/migrate.py\n--- a/lifecycle/migrate.py\n+++ b/lifecycle/migrate.py\n@@ -117,6 +117,8 @@\n )\n finally:\n release_lock(curr)\n+ curr.close()\n+ conn.close()\n \n \n if __name__ == \"__main__\":\n", "issue": "2024.4.0 LongRunningTransaction\n**Describe the bug**\r\nPrometheus alert for a long running transaction.\r\n\r\nI think the transaction is\r\n\r\n```\r\nSELECT pg_advisory_unlock($1)\r\n```\r\n\r\n**To Reproduce**\r\nNo activity, sitting idle\r\n\r\n**Expected behavior**\r\nShouldn't have the alert\r\n\r\n**Screenshots**\r\n\r\n**Logs**\r\n\r\n**Version and Deployment (please complete the following information):**\r\n2024.4.0 kubernetes\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"System Migration handler\"\"\"\nfrom importlib.util import module_from_spec, spec_from_file_location\nfrom inspect import getmembers, isclass\nfrom os import environ, system\nfrom pathlib import Path\nfrom typing import Any\n\nfrom psycopg import Connection, Cursor, connect\nfrom structlog.stdlib import get_logger\n\nfrom authentik.lib.config import CONFIG\n\nLOGGER = get_logger()\nADV_LOCK_UID = 1000\nLOCKED = False\n\n\nclass CommandError(Exception):\n \"\"\"Error raised when a system_crit command fails\"\"\"\n\n\nclass BaseMigration:\n \"\"\"Base System Migration\"\"\"\n\n cur: Cursor\n con: Connection\n\n def __init__(self, cur: Any, con: Any):\n self.cur = cur\n self.con = con\n\n def system_crit(self, command: str):\n \"\"\"Run system command\"\"\"\n LOGGER.debug(\"Running system_crit command\", command=command)\n retval = system(command) # nosec\n if retval != 0:\n raise CommandError(\"Migration error\")\n\n def fake_migration(self, *app_migration: tuple[str, str]):\n \"\"\"Fake apply a list of migrations, arguments are\n expected to be tuples of (app_label, migration_name)\"\"\"\n for app, _migration in app_migration:\n self.system_crit(f\"./manage.py migrate {app} {_migration} --fake\")\n\n def needs_migration(self) -> bool:\n \"\"\"Return true if Migration needs to be run\"\"\"\n return False\n\n def run(self):\n \"\"\"Run the actual migration\"\"\"\n\n\ndef wait_for_lock(cursor: Cursor):\n \"\"\"lock an advisory lock to prevent multiple instances from migrating at once\"\"\"\n LOGGER.info(\"waiting to acquire database lock\")\n cursor.execute(\"SELECT pg_advisory_lock(%s)\", (ADV_LOCK_UID,))\n\n global LOCKED # noqa: PLW0603\n LOCKED = True\n\n\ndef release_lock(cursor: Cursor):\n \"\"\"Release database lock\"\"\"\n if not LOCKED:\n return\n LOGGER.info(\"releasing database lock\")\n cursor.execute(\"SELECT pg_advisory_unlock(%s)\", (ADV_LOCK_UID,))\n\n\ndef run_migrations():\n conn = connect(\n dbname=CONFIG.get(\"postgresql.name\"),\n user=CONFIG.get(\"postgresql.user\"),\n password=CONFIG.get(\"postgresql.password\"),\n host=CONFIG.get(\"postgresql.host\"),\n port=CONFIG.get_int(\"postgresql.port\"),\n sslmode=CONFIG.get(\"postgresql.sslmode\"),\n sslrootcert=CONFIG.get(\"postgresql.sslrootcert\"),\n sslcert=CONFIG.get(\"postgresql.sslcert\"),\n sslkey=CONFIG.get(\"postgresql.sslkey\"),\n )\n curr = conn.cursor()\n try:\n for migration_path in Path(__file__).parent.absolute().glob(\"system_migrations/*.py\"):\n spec = spec_from_file_location(\"lifecycle.system_migrations\", migration_path)\n if not spec:\n continue\n mod = module_from_spec(spec)\n spec.loader.exec_module(mod)\n\n for name, sub in getmembers(mod, isclass):\n if name != 
\"Migration\":\n continue\n migration = sub(curr, conn)\n if migration.needs_migration():\n wait_for_lock(curr)\n LOGGER.info(\"Migration needs to be applied\", migration=migration_path.name)\n migration.run()\n LOGGER.info(\"Migration finished applying\", migration=migration_path.name)\n release_lock(curr)\n LOGGER.info(\"applying django migrations\")\n environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"authentik.root.settings\")\n wait_for_lock(curr)\n try:\n from django.core.management import execute_from_command_line\n except ImportError as exc:\n raise ImportError(\n \"Couldn't import Django. Are you sure it's installed and \"\n \"available on your PYTHONPATH environment variable? Did you \"\n \"forget to activate a virtual environment?\"\n ) from exc\n execute_from_command_line([\"\", \"migrate_schemas\"])\n execute_from_command_line([\"\", \"migrate_schemas\", \"--schema\", \"template\", \"--tenant\"])\n execute_from_command_line(\n [\"\", \"check\"] + ([] if CONFIG.get_bool(\"debug\") else [\"--deploy\"])\n )\n finally:\n release_lock(curr)\n\n\nif __name__ == \"__main__\":\n run_migrations()\n", "path": "lifecycle/migrate.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"System Migration handler\"\"\"\nfrom importlib.util import module_from_spec, spec_from_file_location\nfrom inspect import getmembers, isclass\nfrom os import environ, system\nfrom pathlib import Path\nfrom typing import Any\n\nfrom psycopg import Connection, Cursor, connect\nfrom structlog.stdlib import get_logger\n\nfrom authentik.lib.config import CONFIG\n\nLOGGER = get_logger()\nADV_LOCK_UID = 1000\nLOCKED = False\n\n\nclass CommandError(Exception):\n \"\"\"Error raised when a system_crit command fails\"\"\"\n\n\nclass BaseMigration:\n \"\"\"Base System Migration\"\"\"\n\n cur: Cursor\n con: Connection\n\n def __init__(self, cur: Any, con: Any):\n self.cur = cur\n self.con = con\n\n def system_crit(self, command: str):\n \"\"\"Run system command\"\"\"\n LOGGER.debug(\"Running system_crit command\", command=command)\n retval = system(command) # nosec\n if retval != 0:\n raise CommandError(\"Migration error\")\n\n def fake_migration(self, *app_migration: tuple[str, str]):\n \"\"\"Fake apply a list of migrations, arguments are\n expected to be tuples of (app_label, migration_name)\"\"\"\n for app, _migration in app_migration:\n self.system_crit(f\"./manage.py migrate {app} {_migration} --fake\")\n\n def needs_migration(self) -> bool:\n \"\"\"Return true if Migration needs to be run\"\"\"\n return False\n\n def run(self):\n \"\"\"Run the actual migration\"\"\"\n\n\ndef wait_for_lock(cursor: Cursor):\n \"\"\"lock an advisory lock to prevent multiple instances from migrating at once\"\"\"\n LOGGER.info(\"waiting to acquire database lock\")\n cursor.execute(\"SELECT pg_advisory_lock(%s)\", (ADV_LOCK_UID,))\n\n global LOCKED # noqa: PLW0603\n LOCKED = True\n\n\ndef release_lock(cursor: Cursor):\n \"\"\"Release database lock\"\"\"\n if not LOCKED:\n return\n LOGGER.info(\"releasing database lock\")\n cursor.execute(\"SELECT pg_advisory_unlock(%s)\", (ADV_LOCK_UID,))\n\n\ndef run_migrations():\n conn = connect(\n dbname=CONFIG.get(\"postgresql.name\"),\n user=CONFIG.get(\"postgresql.user\"),\n password=CONFIG.get(\"postgresql.password\"),\n host=CONFIG.get(\"postgresql.host\"),\n port=CONFIG.get_int(\"postgresql.port\"),\n sslmode=CONFIG.get(\"postgresql.sslmode\"),\n sslrootcert=CONFIG.get(\"postgresql.sslrootcert\"),\n sslcert=CONFIG.get(\"postgresql.sslcert\"),\n 
sslkey=CONFIG.get(\"postgresql.sslkey\"),\n )\n curr = conn.cursor()\n try:\n for migration_path in Path(__file__).parent.absolute().glob(\"system_migrations/*.py\"):\n spec = spec_from_file_location(\"lifecycle.system_migrations\", migration_path)\n if not spec:\n continue\n mod = module_from_spec(spec)\n spec.loader.exec_module(mod)\n\n for name, sub in getmembers(mod, isclass):\n if name != \"Migration\":\n continue\n migration = sub(curr, conn)\n if migration.needs_migration():\n wait_for_lock(curr)\n LOGGER.info(\"Migration needs to be applied\", migration=migration_path.name)\n migration.run()\n LOGGER.info(\"Migration finished applying\", migration=migration_path.name)\n release_lock(curr)\n LOGGER.info(\"applying django migrations\")\n environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"authentik.root.settings\")\n wait_for_lock(curr)\n try:\n from django.core.management import execute_from_command_line\n except ImportError as exc:\n raise ImportError(\n \"Couldn't import Django. Are you sure it's installed and \"\n \"available on your PYTHONPATH environment variable? Did you \"\n \"forget to activate a virtual environment?\"\n ) from exc\n execute_from_command_line([\"\", \"migrate_schemas\"])\n execute_from_command_line([\"\", \"migrate_schemas\", \"--schema\", \"template\", \"--tenant\"])\n execute_from_command_line(\n [\"\", \"check\"] + ([] if CONFIG.get_bool(\"debug\") else [\"--deploy\"])\n )\n finally:\n release_lock(curr)\n curr.close()\n conn.close()\n\n\nif __name__ == \"__main__\":\n run_migrations()\n", "path": "lifecycle/migrate.py"}]}
| 1,540 | 75 |
gh_patches_debug_31693
|
rasdani/github-patches
|
git_diff
|
mlflow__mlflow-10923
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Security Vulnerability
Please check it here https://huntr.com/bounties/e3d7a994-bfd6-4772-ac9b-9aee1aa16a5f/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/store/artifact/local_artifact_repo.py`
Content:
```
1 import os
2 import shutil
3
4 from mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path
5 from mlflow.utils.file_utils import (
6 get_file_info,
7 list_all,
8 local_file_uri_to_path,
9 mkdir,
10 relative_path_to_artifact_path,
11 )
12
13
14 class LocalArtifactRepository(ArtifactRepository):
15 """Stores artifacts as files in a local directory."""
16
17 def __init__(self, *args, **kwargs):
18 super().__init__(*args, **kwargs)
19 self._artifact_dir = local_file_uri_to_path(self.artifact_uri)
20
21 @property
22 def artifact_dir(self):
23 return self._artifact_dir
24
25 def log_artifact(self, local_file, artifact_path=None):
26 verify_artifact_path(artifact_path)
27 # NOTE: The artifact_path is expected to be in posix format.
28 # Posix paths work fine on windows but just in case we normalize it here.
29 if artifact_path:
30 artifact_path = os.path.normpath(artifact_path)
31
32 artifact_dir = (
33 os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir
34 )
35 if not os.path.exists(artifact_dir):
36 mkdir(artifact_dir)
37 try:
38 shutil.copy2(local_file, os.path.join(artifact_dir, os.path.basename(local_file)))
39 except shutil.SameFileError:
40 pass
41
42 def _is_directory(self, artifact_path):
43 # NOTE: The path is expected to be in posix format.
44 # Posix paths work fine on windows but just in case we normalize it here.
45 path = os.path.normpath(artifact_path) if artifact_path else ""
46 list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir
47 return os.path.isdir(list_dir)
48
49 def log_artifacts(self, local_dir, artifact_path=None):
50 verify_artifact_path(artifact_path)
51 # NOTE: The artifact_path is expected to be in posix format.
52 # Posix paths work fine on windows but just in case we normalize it here.
53 if artifact_path:
54 artifact_path = os.path.normpath(artifact_path)
55 artifact_dir = (
56 os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir
57 )
58 if not os.path.exists(artifact_dir):
59 mkdir(artifact_dir)
60 shutil.copytree(src=local_dir, dst=artifact_dir, dirs_exist_ok=True)
61
62 def download_artifacts(self, artifact_path, dst_path=None):
63 """
64 Artifacts tracked by ``LocalArtifactRepository`` already exist on the local filesystem.
65 If ``dst_path`` is ``None``, the absolute filesystem path of the specified artifact is
66 returned. If ``dst_path`` is not ``None``, the local artifact is copied to ``dst_path``.
67
68 :param artifact_path: Relative source path to the desired artifacts.
69 :param dst_path: Absolute path of the local filesystem destination directory to which to
70 download the specified artifacts. This directory must already exist. If
71 unspecified, the absolute path of the local artifact will be returned.
72
73 :return: Absolute path of the local filesystem location containing the desired artifacts.
74 """
75 if dst_path:
76 return super().download_artifacts(artifact_path, dst_path)
77 # NOTE: The artifact_path is expected to be in posix format.
78 # Posix paths work fine on windows but just in case we normalize it here.
79 local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))
80 if not os.path.exists(local_artifact_path):
81 raise OSError(f"No such file or directory: '{local_artifact_path}'")
82 return os.path.abspath(local_artifact_path)
83
84 def list_artifacts(self, path=None):
85 # NOTE: The path is expected to be in posix format.
86 # Posix paths work fine on windows but just in case we normalize it here.
87 if path:
88 path = os.path.normpath(path)
89 list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir
90 if os.path.isdir(list_dir):
91 artifact_files = list_all(list_dir, full_path=True)
92 infos = [
93 get_file_info(
94 f, relative_path_to_artifact_path(os.path.relpath(f, self.artifact_dir))
95 )
96 for f in artifact_files
97 ]
98 return sorted(infos, key=lambda f: f.path)
99 else:
100 return []
101
102 def _download_file(self, remote_file_path, local_path):
103 # NOTE: The remote_file_path is expected to be in posix format.
104 # Posix paths work fine on windows but just in case we normalize it here.
105 remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))
106 shutil.copy2(remote_file_path, local_path)
107
108 def delete_artifacts(self, artifact_path=None):
109 artifact_path = local_file_uri_to_path(
110 os.path.join(self._artifact_dir, artifact_path) if artifact_path else self._artifact_dir
111 )
112
113 if os.path.exists(artifact_path):
114 shutil.rmtree(artifact_path)
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlflow/store/artifact/local_artifact_repo.py b/mlflow/store/artifact/local_artifact_repo.py
--- a/mlflow/store/artifact/local_artifact_repo.py
+++ b/mlflow/store/artifact/local_artifact_repo.py
@@ -9,6 +9,7 @@
mkdir,
relative_path_to_artifact_path,
)
+from mlflow.utils.uri import validate_path_is_safe
class LocalArtifactRepository(ArtifactRepository):
@@ -74,8 +75,9 @@
"""
if dst_path:
return super().download_artifacts(artifact_path, dst_path)
- # NOTE: The artifact_path is expected to be in posix format.
+ # NOTE: The artifact_path is expected to be a relative path in posix format.
# Posix paths work fine on windows but just in case we normalize it here.
+ artifact_path = validate_path_is_safe(artifact_path)
local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))
if not os.path.exists(local_artifact_path):
raise OSError(f"No such file or directory: '{local_artifact_path}'")
@@ -100,8 +102,9 @@
return []
def _download_file(self, remote_file_path, local_path):
- # NOTE: The remote_file_path is expected to be in posix format.
+ # NOTE: The remote_file_path is expected to be a relative path in posix format.
# Posix paths work fine on windows but just in case we normalize it here.
+ remote_file_path = validate_path_is_safe(remote_file_path)
remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))
shutil.copy2(remote_file_path, local_path)
|
{"golden_diff": "diff --git a/mlflow/store/artifact/local_artifact_repo.py b/mlflow/store/artifact/local_artifact_repo.py\n--- a/mlflow/store/artifact/local_artifact_repo.py\n+++ b/mlflow/store/artifact/local_artifact_repo.py\n@@ -9,6 +9,7 @@\n mkdir,\n relative_path_to_artifact_path,\n )\n+from mlflow.utils.uri import validate_path_is_safe\n \n \n class LocalArtifactRepository(ArtifactRepository):\n@@ -74,8 +75,9 @@\n \"\"\"\n if dst_path:\n return super().download_artifacts(artifact_path, dst_path)\n- # NOTE: The artifact_path is expected to be in posix format.\n+ # NOTE: The artifact_path is expected to be a relative path in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n+ artifact_path = validate_path_is_safe(artifact_path)\n local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))\n if not os.path.exists(local_artifact_path):\n raise OSError(f\"No such file or directory: '{local_artifact_path}'\")\n@@ -100,8 +102,9 @@\n return []\n \n def _download_file(self, remote_file_path, local_path):\n- # NOTE: The remote_file_path is expected to be in posix format.\n+ # NOTE: The remote_file_path is expected to be a relative path in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n+ remote_file_path = validate_path_is_safe(remote_file_path)\n remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))\n shutil.copy2(remote_file_path, local_path)\n", "issue": "[BUG] Security Vulnerability\nPlease check it here https://huntr.com/bounties/e3d7a994-bfd6-4772-ac9b-9aee1aa16a5f/\n", "before_files": [{"content": "import os\nimport shutil\n\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path\nfrom mlflow.utils.file_utils import (\n get_file_info,\n list_all,\n local_file_uri_to_path,\n mkdir,\n relative_path_to_artifact_path,\n)\n\n\nclass LocalArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts as files in a local directory.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._artifact_dir = local_file_uri_to_path(self.artifact_uri)\n\n @property\n def artifact_dir(self):\n return self._artifact_dir\n\n def log_artifact(self, local_file, artifact_path=None):\n verify_artifact_path(artifact_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if artifact_path:\n artifact_path = os.path.normpath(artifact_path)\n\n artifact_dir = (\n os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir\n )\n if not os.path.exists(artifact_dir):\n mkdir(artifact_dir)\n try:\n shutil.copy2(local_file, os.path.join(artifact_dir, os.path.basename(local_file)))\n except shutil.SameFileError:\n pass\n\n def _is_directory(self, artifact_path):\n # NOTE: The path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n path = os.path.normpath(artifact_path) if artifact_path else \"\"\n list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir\n return os.path.isdir(list_dir)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n verify_artifact_path(artifact_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if artifact_path:\n artifact_path = os.path.normpath(artifact_path)\n artifact_dir = (\n 
os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir\n )\n if not os.path.exists(artifact_dir):\n mkdir(artifact_dir)\n shutil.copytree(src=local_dir, dst=artifact_dir, dirs_exist_ok=True)\n\n def download_artifacts(self, artifact_path, dst_path=None):\n \"\"\"\n Artifacts tracked by ``LocalArtifactRepository`` already exist on the local filesystem.\n If ``dst_path`` is ``None``, the absolute filesystem path of the specified artifact is\n returned. If ``dst_path`` is not ``None``, the local artifact is copied to ``dst_path``.\n\n :param artifact_path: Relative source path to the desired artifacts.\n :param dst_path: Absolute path of the local filesystem destination directory to which to\n download the specified artifacts. This directory must already exist. If\n unspecified, the absolute path of the local artifact will be returned.\n\n :return: Absolute path of the local filesystem location containing the desired artifacts.\n \"\"\"\n if dst_path:\n return super().download_artifacts(artifact_path, dst_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))\n if not os.path.exists(local_artifact_path):\n raise OSError(f\"No such file or directory: '{local_artifact_path}'\")\n return os.path.abspath(local_artifact_path)\n\n def list_artifacts(self, path=None):\n # NOTE: The path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if path:\n path = os.path.normpath(path)\n list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir\n if os.path.isdir(list_dir):\n artifact_files = list_all(list_dir, full_path=True)\n infos = [\n get_file_info(\n f, relative_path_to_artifact_path(os.path.relpath(f, self.artifact_dir))\n )\n for f in artifact_files\n ]\n return sorted(infos, key=lambda f: f.path)\n else:\n return []\n\n def _download_file(self, remote_file_path, local_path):\n # NOTE: The remote_file_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))\n shutil.copy2(remote_file_path, local_path)\n\n def delete_artifacts(self, artifact_path=None):\n artifact_path = local_file_uri_to_path(\n os.path.join(self._artifact_dir, artifact_path) if artifact_path else self._artifact_dir\n )\n\n if os.path.exists(artifact_path):\n shutil.rmtree(artifact_path)\n", "path": "mlflow/store/artifact/local_artifact_repo.py"}], "after_files": [{"content": "import os\nimport shutil\n\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path\nfrom mlflow.utils.file_utils import (\n get_file_info,\n list_all,\n local_file_uri_to_path,\n mkdir,\n relative_path_to_artifact_path,\n)\nfrom mlflow.utils.uri import validate_path_is_safe\n\n\nclass LocalArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts as files in a local directory.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._artifact_dir = local_file_uri_to_path(self.artifact_uri)\n\n @property\n def artifact_dir(self):\n return self._artifact_dir\n\n def log_artifact(self, local_file, artifact_path=None):\n verify_artifact_path(artifact_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows 
but just in case we normalize it here.\n if artifact_path:\n artifact_path = os.path.normpath(artifact_path)\n\n artifact_dir = (\n os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir\n )\n if not os.path.exists(artifact_dir):\n mkdir(artifact_dir)\n try:\n shutil.copy2(local_file, os.path.join(artifact_dir, os.path.basename(local_file)))\n except shutil.SameFileError:\n pass\n\n def _is_directory(self, artifact_path):\n # NOTE: The path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n path = os.path.normpath(artifact_path) if artifact_path else \"\"\n list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir\n return os.path.isdir(list_dir)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n verify_artifact_path(artifact_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if artifact_path:\n artifact_path = os.path.normpath(artifact_path)\n artifact_dir = (\n os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir\n )\n if not os.path.exists(artifact_dir):\n mkdir(artifact_dir)\n shutil.copytree(src=local_dir, dst=artifact_dir, dirs_exist_ok=True)\n\n def download_artifacts(self, artifact_path, dst_path=None):\n \"\"\"\n Artifacts tracked by ``LocalArtifactRepository`` already exist on the local filesystem.\n If ``dst_path`` is ``None``, the absolute filesystem path of the specified artifact is\n returned. If ``dst_path`` is not ``None``, the local artifact is copied to ``dst_path``.\n\n :param artifact_path: Relative source path to the desired artifacts.\n :param dst_path: Absolute path of the local filesystem destination directory to which to\n download the specified artifacts. This directory must already exist. 
If\n unspecified, the absolute path of the local artifact will be returned.\n\n :return: Absolute path of the local filesystem location containing the desired artifacts.\n \"\"\"\n if dst_path:\n return super().download_artifacts(artifact_path, dst_path)\n # NOTE: The artifact_path is expected to be a relative path in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n artifact_path = validate_path_is_safe(artifact_path)\n local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))\n if not os.path.exists(local_artifact_path):\n raise OSError(f\"No such file or directory: '{local_artifact_path}'\")\n return os.path.abspath(local_artifact_path)\n\n def list_artifacts(self, path=None):\n # NOTE: The path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if path:\n path = os.path.normpath(path)\n list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir\n if os.path.isdir(list_dir):\n artifact_files = list_all(list_dir, full_path=True)\n infos = [\n get_file_info(\n f, relative_path_to_artifact_path(os.path.relpath(f, self.artifact_dir))\n )\n for f in artifact_files\n ]\n return sorted(infos, key=lambda f: f.path)\n else:\n return []\n\n def _download_file(self, remote_file_path, local_path):\n # NOTE: The remote_file_path is expected to be a relative path in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n remote_file_path = validate_path_is_safe(remote_file_path)\n remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))\n shutil.copy2(remote_file_path, local_path)\n\n def delete_artifacts(self, artifact_path=None):\n artifact_path = local_file_uri_to_path(\n os.path.join(self._artifact_dir, artifact_path) if artifact_path else self._artifact_dir\n )\n\n if os.path.exists(artifact_path):\n shutil.rmtree(artifact_path)\n", "path": "mlflow/store/artifact/local_artifact_repo.py"}]}
| 1,642 | 377 |
gh_patches_debug_16611
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-2011
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Opengl -a flag not working with opengl
## Description of bug / unexpected behavior
<!-- Add a clear and concise description of the problem you encountered. -->
The behaviour of the -a flag is to output all scenes, however only a single scene is output when using the opengl renderer
## Expected behavior
<!-- Add a clear and concise description of what you expected to happen. -->
Expect all scenes to be previewed with the -p flag and output. I guess it is not applicable with interactive mode?
## How to reproduce the issue
<!-- Provide a piece of code illustrating the undesired behavior. -->
Run multple scenes with the `-a` flag and opengl renderer, for example `python -m manim example_scenes.py --renderer opengl -a -pql`
<details><summary>Code for reproducing the problem</summary>
```py
class SquareToCircle(Scene):
def construct(self):
circle = Circle()
circle.set_fill(PINK, opacity=0.5)
square = Square()
square.rotate(PI / 4)
self.play(Create(square))
self.play(Transform(square, circle))
self.play(FadeOut(square))
class CircleToSquare(Scene):
def construct(self):
circle = Circle()
circle.set_fill(PINK, opacity=0.5)
square = Square()
square.rotate(PI / 4)
self.play(Create(circle))
self.play(Transform(circle, square))
self.play(FadeOut(circle))
```
</details>
## Additional media files
<!-- Paste in the files manim produced on rendering the code above. -->
<details><summary>Images/GIFs</summary>
<!-- PASTE MEDIA HERE -->
</details>
## Logs
<details><summary>Terminal output</summary>
<!-- Add "-v DEBUG" when calling manim to generate more detailed logs -->
```
PASTE HERE OR PROVIDE LINK TO https://pastebin.com/ OR SIMILAR
```
<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->
</details>
## System specifications
<details><summary>System Details</summary>
- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)):
- RAM:
- Python version (`python/py/python3 --version`):
- Installed modules (provide output from `pip list`):
```
PASTE HERE
```
</details>
<details><summary>LaTeX details</summary>
+ LaTeX distribution (e.g. TeX Live 2020):
+ Installed LaTeX packages:
<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->
</details>
<details><summary>FFMPEG</summary>
Output of `ffmpeg -version`:
```
PASTE HERE
```
</details>
## Additional comments
<!-- Add further context that you think might be relevant for this issue here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/cli/render/commands.py`
Content:
```
1 """Manim's default subcommand, render.
2
3 Manim's render subcommand is accessed in the command-line interface via
4 ``manim``, but can be more explicitly accessed with ``manim render``. Here you
5 can specify options, and arguments for the render command.
6
7 """
8 import json
9 import sys
10 from pathlib import Path
11
12 import click
13 import cloup
14 import requests
15
16 from ... import __version__, config, console, error_console, logger
17 from ...constants import EPILOG
18 from ...utils.module_ops import scene_classes_from_file
19 from .ease_of_access_options import ease_of_access_options
20 from .global_options import global_options
21 from .output_options import output_options
22 from .render_options import render_options
23
24
25 @cloup.command(
26 context_settings=None,
27 epilog=EPILOG,
28 )
29 @click.argument("file", type=Path, required=True)
30 @click.argument("scene_names", required=False, nargs=-1)
31 @global_options
32 @output_options
33 @render_options
34 @ease_of_access_options
35 def render(
36 **args,
37 ):
38 """Render SCENE(S) from the input FILE.
39
40 FILE is the file path of the script.
41
42 SCENES is an optional list of scenes in the file.
43 """
44
45 if args["use_opengl_renderer"]:
46 logger.warning(
47 "--use_opengl_renderer is deprecated, please use --renderer=opengl instead!",
48 )
49 args["renderer"] = "opengl"
50
51 if args["use_webgl_renderer"]:
52 logger.warning(
53 "--use_webgl_renderer is deprecated, please use --renderer=webgl instead!",
54 )
55 args["renderer"] = "webgl"
56
57 if args["use_webgl_renderer"] and args["use_opengl_renderer"]:
58 logger.warning("You may select only one renderer!")
59 sys.exit()
60
61 if args["save_as_gif"]:
62 logger.warning("--save_as_gif is deprecated, please use --format=gif instead!")
63 args["format"] = "gif"
64
65 if args["save_pngs"]:
66 logger.warning("--save_pngs is deprecated, please use --format=png instead!")
67 args["format"] = "png"
68
69 if args["show_in_file_browser"]:
70 logger.warning(
71 "The short form of show_in_file_browser is deprecated and will be moved to support --format.",
72 )
73
74 class ClickArgs:
75 def __init__(self, args):
76 for name in args:
77 setattr(self, name, args[name])
78
79 def _get_kwargs(self):
80 return list(self.__dict__.items())
81
82 def __eq__(self, other):
83 if not isinstance(other, ClickArgs):
84 return NotImplemented
85 return vars(self) == vars(other)
86
87 def __contains__(self, key):
88 return key in self.__dict__
89
90 def __repr__(self):
91 return str(self.__dict__)
92
93 click_args = ClickArgs(args)
94 if args["jupyter"]:
95 return click_args
96
97 config.digest_args(click_args)
98 file = args["file"]
99 if config.renderer == "opengl":
100 from manim.renderer.opengl_renderer import OpenGLRenderer
101
102 try:
103 renderer = OpenGLRenderer()
104 keep_running = True
105 while keep_running:
106 for SceneClass in scene_classes_from_file(file):
107 scene = SceneClass(renderer)
108 status = scene.render()
109 if status:
110 continue
111 else:
112 keep_running = False
113 break
114 except Exception:
115 error_console.print_exception()
116 sys.exit(1)
117 elif config.renderer == "webgl":
118 try:
119 from manim.grpc.impl import frame_server_impl
120
121 server = frame_server_impl.get(file)
122 server.start()
123 server.wait_for_termination()
124 except ModuleNotFoundError:
125 console.print(
126 "Dependencies for the WebGL render are missing. Run "
127 "pip install manim[webgl_renderer] to install them.",
128 )
129 error_console.print_exception()
130 sys.exit(1)
131 else:
132 for SceneClass in scene_classes_from_file(file):
133 try:
134 scene = SceneClass()
135 scene.render()
136 except Exception:
137 error_console.print_exception()
138 sys.exit(1)
139
140 if config.notify_outdated_version:
141 manim_info_url = "https://pypi.org/pypi/manim/json"
142 warn_prompt = "Cannot check if latest release of manim is installed"
143 req_info = {}
144
145 try:
146 req_info = requests.get(manim_info_url)
147 req_info.raise_for_status()
148
149 stable = req_info.json()["info"]["version"]
150 if stable != __version__:
151 console.print(
152 f"You are using manim version [red]v{__version__}[/red], but version [green]v{stable}[/green] is available.",
153 )
154 console.print(
155 "You should consider upgrading via [yellow]pip install -U manim[/yellow]",
156 )
157 except requests.exceptions.HTTPError:
158 logger.debug(f"HTTP Error: {warn_prompt}")
159 except requests.exceptions.ConnectionError:
160 logger.debug(f"Connection Error: {warn_prompt}")
161 except requests.exceptions.Timeout:
162 logger.debug(f"Timed Out: {warn_prompt}")
163 except json.JSONDecodeError:
164 logger.debug(warn_prompt)
165 logger.debug(f"Error decoding JSON from {manim_info_url}")
166 except Exception:
167 logger.debug(f"Something went wrong: {warn_prompt}")
168
169 return args
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/manim/cli/render/commands.py b/manim/cli/render/commands.py
--- a/manim/cli/render/commands.py
+++ b/manim/cli/render/commands.py
@@ -105,12 +105,16 @@
while keep_running:
for SceneClass in scene_classes_from_file(file):
scene = SceneClass(renderer)
- status = scene.render()
- if status:
+ rerun = scene.render()
+ if rerun or config["write_all"]:
+ renderer.num_plays = 0
continue
else:
keep_running = False
break
+ if config["write_all"]:
+ keep_running = False
+
except Exception:
error_console.print_exception()
sys.exit(1)
|
{"golden_diff": "diff --git a/manim/cli/render/commands.py b/manim/cli/render/commands.py\n--- a/manim/cli/render/commands.py\n+++ b/manim/cli/render/commands.py\n@@ -105,12 +105,16 @@\n while keep_running:\n for SceneClass in scene_classes_from_file(file):\n scene = SceneClass(renderer)\n- status = scene.render()\n- if status:\n+ rerun = scene.render()\n+ if rerun or config[\"write_all\"]:\n+ renderer.num_plays = 0\n continue\n else:\n keep_running = False\n break\n+ if config[\"write_all\"]:\n+ keep_running = False\n+\n except Exception:\n error_console.print_exception()\n sys.exit(1)\n", "issue": "Opengl -a flag not working with opengl\n## Description of bug / unexpected behavior\r\n<!-- Add a clear and concise description of the problem you encountered. -->\r\n\r\nThe behaviour of the -a flag is to output all scenes, however only a single scene is output when using the opengl renderer\r\n\r\n## Expected behavior\r\n<!-- Add a clear and concise description of what you expected to happen. -->\r\n\r\nExpect all scenes to be previewed with the -p flag and output. I guess it is not applicable with interactive mode?\r\n\r\n## How to reproduce the issue\r\n<!-- Provide a piece of code illustrating the undesired behavior. -->\r\n\r\nRun multple scenes with the `-a` flag and opengl renderer, for example `python -m manim example_scenes.py --renderer opengl -a -pql`\r\n\r\n<details><summary>Code for reproducing the problem</summary>\r\n\r\n```py\r\nclass SquareToCircle(Scene):\r\n def construct(self):\r\n circle = Circle()\r\n circle.set_fill(PINK, opacity=0.5)\r\n\r\n square = Square() \r\n square.rotate(PI / 4)\r\n\r\n self.play(Create(square))\r\n self.play(Transform(square, circle))\r\n self.play(FadeOut(square))\r\n\r\nclass CircleToSquare(Scene):\r\n def construct(self):\r\n circle = Circle() \r\n circle.set_fill(PINK, opacity=0.5)\r\n\r\n square = Square()\r\n square.rotate(PI / 4) \r\n\r\n self.play(Create(circle)) \r\n self.play(Transform(circle, square)) \r\n self.play(FadeOut(circle)) \r\n```\r\n\r\n</details>\r\n\r\n\r\n## Additional media files\r\n<!-- Paste in the files manim produced on rendering the code above. -->\r\n\r\n<details><summary>Images/GIFs</summary>\r\n\r\n<!-- PASTE MEDIA HERE -->\r\n\r\n</details>\r\n\r\n\r\n## Logs\r\n<details><summary>Terminal output</summary>\r\n<!-- Add \"-v DEBUG\" when calling manim to generate more detailed logs -->\r\n\r\n```\r\nPASTE HERE OR PROVIDE LINK TO https://pastebin.com/ OR SIMILAR\r\n```\r\n\r\n<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->\r\n\r\n</details>\r\n\r\n\r\n## System specifications\r\n\r\n<details><summary>System Details</summary>\r\n\r\n- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)):\r\n- RAM:\r\n- Python version (`python/py/python3 --version`):\r\n- Installed modules (provide output from `pip list`):\r\n```\r\nPASTE HERE\r\n```\r\n</details>\r\n\r\n<details><summary>LaTeX details</summary>\r\n\r\n+ LaTeX distribution (e.g. TeX Live 2020):\r\n+ Installed LaTeX packages:\r\n<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->\r\n</details>\r\n\r\n<details><summary>FFMPEG</summary>\r\n\r\nOutput of `ffmpeg -version`:\r\n\r\n```\r\nPASTE HERE\r\n```\r\n</details>\r\n\r\n## Additional comments\r\n<!-- Add further context that you think might be relevant for this issue here. 
-->\r\n\n", "before_files": [{"content": "\"\"\"Manim's default subcommand, render.\n\nManim's render subcommand is accessed in the command-line interface via\n``manim``, but can be more explicitly accessed with ``manim render``. Here you\ncan specify options, and arguments for the render command.\n\n\"\"\"\nimport json\nimport sys\nfrom pathlib import Path\n\nimport click\nimport cloup\nimport requests\n\nfrom ... import __version__, config, console, error_console, logger\nfrom ...constants import EPILOG\nfrom ...utils.module_ops import scene_classes_from_file\nfrom .ease_of_access_options import ease_of_access_options\nfrom .global_options import global_options\nfrom .output_options import output_options\nfrom .render_options import render_options\n\n\[email protected](\n context_settings=None,\n epilog=EPILOG,\n)\[email protected](\"file\", type=Path, required=True)\[email protected](\"scene_names\", required=False, nargs=-1)\n@global_options\n@output_options\n@render_options\n@ease_of_access_options\ndef render(\n **args,\n):\n \"\"\"Render SCENE(S) from the input FILE.\n\n FILE is the file path of the script.\n\n SCENES is an optional list of scenes in the file.\n \"\"\"\n\n if args[\"use_opengl_renderer\"]:\n logger.warning(\n \"--use_opengl_renderer is deprecated, please use --renderer=opengl instead!\",\n )\n args[\"renderer\"] = \"opengl\"\n\n if args[\"use_webgl_renderer\"]:\n logger.warning(\n \"--use_webgl_renderer is deprecated, please use --renderer=webgl instead!\",\n )\n args[\"renderer\"] = \"webgl\"\n\n if args[\"use_webgl_renderer\"] and args[\"use_opengl_renderer\"]:\n logger.warning(\"You may select only one renderer!\")\n sys.exit()\n\n if args[\"save_as_gif\"]:\n logger.warning(\"--save_as_gif is deprecated, please use --format=gif instead!\")\n args[\"format\"] = \"gif\"\n\n if args[\"save_pngs\"]:\n logger.warning(\"--save_pngs is deprecated, please use --format=png instead!\")\n args[\"format\"] = \"png\"\n\n if args[\"show_in_file_browser\"]:\n logger.warning(\n \"The short form of show_in_file_browser is deprecated and will be moved to support --format.\",\n )\n\n class ClickArgs:\n def __init__(self, args):\n for name in args:\n setattr(self, name, args[name])\n\n def _get_kwargs(self):\n return list(self.__dict__.items())\n\n def __eq__(self, other):\n if not isinstance(other, ClickArgs):\n return NotImplemented\n return vars(self) == vars(other)\n\n def __contains__(self, key):\n return key in self.__dict__\n\n def __repr__(self):\n return str(self.__dict__)\n\n click_args = ClickArgs(args)\n if args[\"jupyter\"]:\n return click_args\n\n config.digest_args(click_args)\n file = args[\"file\"]\n if config.renderer == \"opengl\":\n from manim.renderer.opengl_renderer import OpenGLRenderer\n\n try:\n renderer = OpenGLRenderer()\n keep_running = True\n while keep_running:\n for SceneClass in scene_classes_from_file(file):\n scene = SceneClass(renderer)\n status = scene.render()\n if status:\n continue\n else:\n keep_running = False\n break\n except Exception:\n error_console.print_exception()\n sys.exit(1)\n elif config.renderer == \"webgl\":\n try:\n from manim.grpc.impl import frame_server_impl\n\n server = frame_server_impl.get(file)\n server.start()\n server.wait_for_termination()\n except ModuleNotFoundError:\n console.print(\n \"Dependencies for the WebGL render are missing. 
Run \"\n \"pip install manim[webgl_renderer] to install them.\",\n )\n error_console.print_exception()\n sys.exit(1)\n else:\n for SceneClass in scene_classes_from_file(file):\n try:\n scene = SceneClass()\n scene.render()\n except Exception:\n error_console.print_exception()\n sys.exit(1)\n\n if config.notify_outdated_version:\n manim_info_url = \"https://pypi.org/pypi/manim/json\"\n warn_prompt = \"Cannot check if latest release of manim is installed\"\n req_info = {}\n\n try:\n req_info = requests.get(manim_info_url)\n req_info.raise_for_status()\n\n stable = req_info.json()[\"info\"][\"version\"]\n if stable != __version__:\n console.print(\n f\"You are using manim version [red]v{__version__}[/red], but version [green]v{stable}[/green] is available.\",\n )\n console.print(\n \"You should consider upgrading via [yellow]pip install -U manim[/yellow]\",\n )\n except requests.exceptions.HTTPError:\n logger.debug(f\"HTTP Error: {warn_prompt}\")\n except requests.exceptions.ConnectionError:\n logger.debug(f\"Connection Error: {warn_prompt}\")\n except requests.exceptions.Timeout:\n logger.debug(f\"Timed Out: {warn_prompt}\")\n except json.JSONDecodeError:\n logger.debug(warn_prompt)\n logger.debug(f\"Error decoding JSON from {manim_info_url}\")\n except Exception:\n logger.debug(f\"Something went wrong: {warn_prompt}\")\n\n return args\n", "path": "manim/cli/render/commands.py"}], "after_files": [{"content": "\"\"\"Manim's default subcommand, render.\n\nManim's render subcommand is accessed in the command-line interface via\n``manim``, but can be more explicitly accessed with ``manim render``. Here you\ncan specify options, and arguments for the render command.\n\n\"\"\"\nimport json\nimport sys\nfrom pathlib import Path\n\nimport click\nimport cloup\nimport requests\n\nfrom ... 
import __version__, config, console, error_console, logger\nfrom ...constants import EPILOG\nfrom ...utils.module_ops import scene_classes_from_file\nfrom .ease_of_access_options import ease_of_access_options\nfrom .global_options import global_options\nfrom .output_options import output_options\nfrom .render_options import render_options\n\n\[email protected](\n context_settings=None,\n epilog=EPILOG,\n)\[email protected](\"file\", type=Path, required=True)\[email protected](\"scene_names\", required=False, nargs=-1)\n@global_options\n@output_options\n@render_options\n@ease_of_access_options\ndef render(\n **args,\n):\n \"\"\"Render SCENE(S) from the input FILE.\n\n FILE is the file path of the script.\n\n SCENES is an optional list of scenes in the file.\n \"\"\"\n\n if args[\"use_opengl_renderer\"]:\n logger.warning(\n \"--use_opengl_renderer is deprecated, please use --renderer=opengl instead!\",\n )\n args[\"renderer\"] = \"opengl\"\n\n if args[\"use_webgl_renderer\"]:\n logger.warning(\n \"--use_webgl_renderer is deprecated, please use --renderer=webgl instead!\",\n )\n args[\"renderer\"] = \"webgl\"\n\n if args[\"use_webgl_renderer\"] and args[\"use_opengl_renderer\"]:\n logger.warning(\"You may select only one renderer!\")\n sys.exit()\n\n if args[\"save_as_gif\"]:\n logger.warning(\"--save_as_gif is deprecated, please use --format=gif instead!\")\n args[\"format\"] = \"gif\"\n\n if args[\"save_pngs\"]:\n logger.warning(\"--save_pngs is deprecated, please use --format=png instead!\")\n args[\"format\"] = \"png\"\n\n if args[\"show_in_file_browser\"]:\n logger.warning(\n \"The short form of show_in_file_browser is deprecated and will be moved to support --format.\",\n )\n\n class ClickArgs:\n def __init__(self, args):\n for name in args:\n setattr(self, name, args[name])\n\n def _get_kwargs(self):\n return list(self.__dict__.items())\n\n def __eq__(self, other):\n if not isinstance(other, ClickArgs):\n return NotImplemented\n return vars(self) == vars(other)\n\n def __contains__(self, key):\n return key in self.__dict__\n\n def __repr__(self):\n return str(self.__dict__)\n\n click_args = ClickArgs(args)\n if args[\"jupyter\"]:\n return click_args\n\n config.digest_args(click_args)\n file = args[\"file\"]\n if config.renderer == \"opengl\":\n from manim.renderer.opengl_renderer import OpenGLRenderer\n\n try:\n renderer = OpenGLRenderer()\n keep_running = True\n while keep_running:\n for SceneClass in scene_classes_from_file(file):\n scene = SceneClass(renderer)\n rerun = scene.render()\n if rerun or config[\"write_all\"]:\n renderer.num_plays = 0\n continue\n else:\n keep_running = False\n break\n if config[\"write_all\"]:\n keep_running = False\n\n except Exception:\n error_console.print_exception()\n sys.exit(1)\n elif config.renderer == \"webgl\":\n try:\n from manim.grpc.impl import frame_server_impl\n\n server = frame_server_impl.get(file)\n server.start()\n server.wait_for_termination()\n except ModuleNotFoundError:\n console.print(\n \"Dependencies for the WebGL render are missing. 
Run \"\n \"pip install manim[webgl_renderer] to install them.\",\n )\n error_console.print_exception()\n sys.exit(1)\n else:\n for SceneClass in scene_classes_from_file(file):\n try:\n scene = SceneClass()\n scene.render()\n except Exception:\n error_console.print_exception()\n sys.exit(1)\n\n if config.notify_outdated_version:\n manim_info_url = \"https://pypi.org/pypi/manim/json\"\n warn_prompt = \"Cannot check if latest release of manim is installed\"\n req_info = {}\n\n try:\n req_info = requests.get(manim_info_url)\n req_info.raise_for_status()\n\n stable = req_info.json()[\"info\"][\"version\"]\n if stable != __version__:\n console.print(\n f\"You are using manim version [red]v{__version__}[/red], but version [green]v{stable}[/green] is available.\",\n )\n console.print(\n \"You should consider upgrading via [yellow]pip install -U manim[/yellow]\",\n )\n except requests.exceptions.HTTPError:\n logger.debug(f\"HTTP Error: {warn_prompt}\")\n except requests.exceptions.ConnectionError:\n logger.debug(f\"Connection Error: {warn_prompt}\")\n except requests.exceptions.Timeout:\n logger.debug(f\"Timed Out: {warn_prompt}\")\n except json.JSONDecodeError:\n logger.debug(warn_prompt)\n logger.debug(f\"Error decoding JSON from {manim_info_url}\")\n except Exception:\n logger.debug(f\"Something went wrong: {warn_prompt}\")\n\n return args\n", "path": "manim/cli/render/commands.py"}]}
| 2,440 | 167 |
| gh_patches_debug_25018 | rasdani/github-patches | git_diff | magenta__magenta-1851 |
--- BEGIN ISSUE ---
KeyError: 'tfds_data_dir'(GANSynth)
Hi, I got this error on GANSynth demo colab . How can I resolve it?

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `magenta/models/gansynth/gansynth_generate.py`
Content:
```
1 # Copyright 2020 The Magenta Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Lint as: python3
16 r"""Generate samples with a pretrained GANSynth model.
17
18 To use a config of hyperparameters and manual hparams:
19 >>> python magenta/models/gansynth/generate.py \
20 >>> --ckpt_dir=/path/to/ckpt/dir --output_dir=/path/to/output/dir \
21 >>> --midi_file=/path/to/file.mid
22
23 If a MIDI file is specified, notes are synthesized with interpolation between
24 latent vectors in time. If no MIDI file is given, a random batch of notes is
25 synthesized.
26 """
27
28 import os
29
30 import absl.flags
31 from magenta.models.gansynth.lib import flags as lib_flags
32 from magenta.models.gansynth.lib import generate_util as gu
33 from magenta.models.gansynth.lib import model as lib_model
34 from magenta.models.gansynth.lib import util
35 import tensorflow.compat.v1 as tf
36
37
38 absl.flags.DEFINE_string('ckpt_dir',
39 '/tmp/gansynth/acoustic_only',
40 'Path to the base directory of pretrained checkpoints.'
41 'The base directory should contain many '
42 '"stage_000*" subdirectories.')
43 absl.flags.DEFINE_string('output_dir',
44 '/tmp/gansynth/samples',
45 'Path to directory to save wave files.')
46 absl.flags.DEFINE_string('midi_file',
47 '',
48 'Path to a MIDI file (.mid) to synthesize.')
49 absl.flags.DEFINE_integer('batch_size', 8, 'Batch size for generation.')
50 absl.flags.DEFINE_float('secs_per_instrument', 6.0,
51 'In random interpolations, the seconds it takes to '
52 'interpolate from one instrument to another.')
53
54 FLAGS = absl.flags.FLAGS
55 tf.logging.set_verbosity(tf.logging.INFO)
56
57
58 def main(unused_argv):
59 absl.flags.FLAGS.alsologtostderr = True
60
61 # Load the model
62 flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})
63 model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)
64
65 # Make an output directory if it doesn't exist
66 output_dir = util.expand_path(FLAGS.output_dir)
67 if not tf.gfile.Exists(output_dir):
68 tf.gfile.MakeDirs(output_dir)
69
70 if FLAGS.midi_file:
71 # If a MIDI file is provided, synthesize interpolations across the clip
72 unused_ns, notes = gu.load_midi(FLAGS.midi_file)
73
74 # Distribute latent vectors linearly in time
75 z_instruments, t_instruments = gu.get_random_instruments(
76 model,
77 notes['end_times'][-1],
78 secs_per_instrument=FLAGS.secs_per_instrument)
79
80 # Get latent vectors for each note
81 z_notes = gu.get_z_notes(notes['start_times'], z_instruments, t_instruments)
82
83 # Generate audio for each note
84 print('Generating {} samples...'.format(len(z_notes)))
85 audio_notes = model.generate_samples_from_z(z_notes, notes['pitches'])
86
87 # Make a single audio clip
88 audio_clip = gu.combine_notes(audio_notes,
89 notes['start_times'],
90 notes['end_times'],
91 notes['velocities'])
92
93 # Write the wave files
94 fname = os.path.join(output_dir, 'generated_clip.wav')
95 gu.save_wav(audio_clip, fname)
96 else:
97 # Otherwise, just generate a batch of random sounds
98 waves = model.generate_samples(FLAGS.batch_size)
99 # Write the wave files
100 for i in range(len(waves)):
101 fname = os.path.join(output_dir, 'generated_{}.wav'.format(i))
102 gu.save_wav(waves[i], fname)
103
104
105 def console_entry_point():
106 tf.disable_v2_behavior()
107 tf.app.run(main)
108
109
110 if __name__ == '__main__':
111 console_entry_point()
112
```
Path: `magenta/version.py`
Content:
```
1 # Copyright 2020 The Magenta Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 r"""Separate file for storing the current version of Magenta.
16
17 Stored in a separate file so that setup.py can reference the version without
18 pulling in all the dependencies in __init__.py.
19 """
20
21 __version__ = '2.1.2'
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/magenta/models/gansynth/gansynth_generate.py b/magenta/models/gansynth/gansynth_generate.py
--- a/magenta/models/gansynth/gansynth_generate.py
+++ b/magenta/models/gansynth/gansynth_generate.py
@@ -50,6 +50,9 @@
absl.flags.DEFINE_float('secs_per_instrument', 6.0,
'In random interpolations, the seconds it takes to '
'interpolate from one instrument to another.')
+absl.flags.DEFINE_string('tfds_data_dir',
+ 'gs://tfds-data/datasets',
+ 'Data directory for the TFDS dataset used to train.')
FLAGS = absl.flags.FLAGS
tf.logging.set_verbosity(tf.logging.INFO)
@@ -59,7 +62,11 @@
absl.flags.FLAGS.alsologtostderr = True
# Load the model
- flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})
+ flags = lib_flags.Flags(
+ {
+ 'batch_size_schedule': [FLAGS.batch_size],
+ 'tfds_data_dir': FLAGS.tfds_data_dir
+ })
model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)
# Make an output directory if it doesn't exist
diff --git a/magenta/version.py b/magenta/version.py
--- a/magenta/version.py
+++ b/magenta/version.py
@@ -18,4 +18,4 @@
pulling in all the dependencies in __init__.py.
"""
-__version__ = '2.1.2'
+__version__ = '2.1.3'
|
{"golden_diff": "diff --git a/magenta/models/gansynth/gansynth_generate.py b/magenta/models/gansynth/gansynth_generate.py\n--- a/magenta/models/gansynth/gansynth_generate.py\n+++ b/magenta/models/gansynth/gansynth_generate.py\n@@ -50,6 +50,9 @@\n absl.flags.DEFINE_float('secs_per_instrument', 6.0,\n 'In random interpolations, the seconds it takes to '\n 'interpolate from one instrument to another.')\n+absl.flags.DEFINE_string('tfds_data_dir',\n+ 'gs://tfds-data/datasets',\n+ 'Data directory for the TFDS dataset used to train.')\n \n FLAGS = absl.flags.FLAGS\n tf.logging.set_verbosity(tf.logging.INFO)\n@@ -59,7 +62,11 @@\n absl.flags.FLAGS.alsologtostderr = True\n \n # Load the model\n- flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})\n+ flags = lib_flags.Flags(\n+ {\n+ 'batch_size_schedule': [FLAGS.batch_size],\n+ 'tfds_data_dir': FLAGS.tfds_data_dir\n+ })\n model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)\n \n # Make an output directory if it doesn't exist\ndiff --git a/magenta/version.py b/magenta/version.py\n--- a/magenta/version.py\n+++ b/magenta/version.py\n@@ -18,4 +18,4 @@\n pulling in all the dependencies in __init__.py.\n \"\"\"\n \n-__version__ = '2.1.2'\n+__version__ = '2.1.3'\n", "issue": "KeyError: 'tfds_data_dir'(GANSynth)\nHi, I got this error on GANSynth demo colab . How can I resolve it?\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nr\"\"\"Generate samples with a pretrained GANSynth model.\n\nTo use a config of hyperparameters and manual hparams:\n>>> python magenta/models/gansynth/generate.py \\\n>>> --ckpt_dir=/path/to/ckpt/dir --output_dir=/path/to/output/dir \\\n>>> --midi_file=/path/to/file.mid\n\nIf a MIDI file is specified, notes are synthesized with interpolation between\nlatent vectors in time. 
If no MIDI file is given, a random batch of notes is\nsynthesized.\n\"\"\"\n\nimport os\n\nimport absl.flags\nfrom magenta.models.gansynth.lib import flags as lib_flags\nfrom magenta.models.gansynth.lib import generate_util as gu\nfrom magenta.models.gansynth.lib import model as lib_model\nfrom magenta.models.gansynth.lib import util\nimport tensorflow.compat.v1 as tf\n\n\nabsl.flags.DEFINE_string('ckpt_dir',\n '/tmp/gansynth/acoustic_only',\n 'Path to the base directory of pretrained checkpoints.'\n 'The base directory should contain many '\n '\"stage_000*\" subdirectories.')\nabsl.flags.DEFINE_string('output_dir',\n '/tmp/gansynth/samples',\n 'Path to directory to save wave files.')\nabsl.flags.DEFINE_string('midi_file',\n '',\n 'Path to a MIDI file (.mid) to synthesize.')\nabsl.flags.DEFINE_integer('batch_size', 8, 'Batch size for generation.')\nabsl.flags.DEFINE_float('secs_per_instrument', 6.0,\n 'In random interpolations, the seconds it takes to '\n 'interpolate from one instrument to another.')\n\nFLAGS = absl.flags.FLAGS\ntf.logging.set_verbosity(tf.logging.INFO)\n\n\ndef main(unused_argv):\n absl.flags.FLAGS.alsologtostderr = True\n\n # Load the model\n flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})\n model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)\n\n # Make an output directory if it doesn't exist\n output_dir = util.expand_path(FLAGS.output_dir)\n if not tf.gfile.Exists(output_dir):\n tf.gfile.MakeDirs(output_dir)\n\n if FLAGS.midi_file:\n # If a MIDI file is provided, synthesize interpolations across the clip\n unused_ns, notes = gu.load_midi(FLAGS.midi_file)\n\n # Distribute latent vectors linearly in time\n z_instruments, t_instruments = gu.get_random_instruments(\n model,\n notes['end_times'][-1],\n secs_per_instrument=FLAGS.secs_per_instrument)\n\n # Get latent vectors for each note\n z_notes = gu.get_z_notes(notes['start_times'], z_instruments, t_instruments)\n\n # Generate audio for each note\n print('Generating {} samples...'.format(len(z_notes)))\n audio_notes = model.generate_samples_from_z(z_notes, notes['pitches'])\n\n # Make a single audio clip\n audio_clip = gu.combine_notes(audio_notes,\n notes['start_times'],\n notes['end_times'],\n notes['velocities'])\n\n # Write the wave files\n fname = os.path.join(output_dir, 'generated_clip.wav')\n gu.save_wav(audio_clip, fname)\n else:\n # Otherwise, just generate a batch of random sounds\n waves = model.generate_samples(FLAGS.batch_size)\n # Write the wave files\n for i in range(len(waves)):\n fname = os.path.join(output_dir, 'generated_{}.wav'.format(i))\n gu.save_wav(waves[i], fname)\n\n\ndef console_entry_point():\n tf.disable_v2_behavior()\n tf.app.run(main)\n\n\nif __name__ == '__main__':\n console_entry_point()\n", "path": "magenta/models/gansynth/gansynth_generate.py"}, {"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nr\"\"\"Separate file for storing the current version of Magenta.\n\nStored in a separate file so that setup.py 
can reference the version without\npulling in all the dependencies in __init__.py.\n\"\"\"\n\n__version__ = '2.1.2'\n", "path": "magenta/version.py"}], "after_files": [{"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nr\"\"\"Generate samples with a pretrained GANSynth model.\n\nTo use a config of hyperparameters and manual hparams:\n>>> python magenta/models/gansynth/generate.py \\\n>>> --ckpt_dir=/path/to/ckpt/dir --output_dir=/path/to/output/dir \\\n>>> --midi_file=/path/to/file.mid\n\nIf a MIDI file is specified, notes are synthesized with interpolation between\nlatent vectors in time. If no MIDI file is given, a random batch of notes is\nsynthesized.\n\"\"\"\n\nimport os\n\nimport absl.flags\nfrom magenta.models.gansynth.lib import flags as lib_flags\nfrom magenta.models.gansynth.lib import generate_util as gu\nfrom magenta.models.gansynth.lib import model as lib_model\nfrom magenta.models.gansynth.lib import util\nimport tensorflow.compat.v1 as tf\n\n\nabsl.flags.DEFINE_string('ckpt_dir',\n '/tmp/gansynth/acoustic_only',\n 'Path to the base directory of pretrained checkpoints.'\n 'The base directory should contain many '\n '\"stage_000*\" subdirectories.')\nabsl.flags.DEFINE_string('output_dir',\n '/tmp/gansynth/samples',\n 'Path to directory to save wave files.')\nabsl.flags.DEFINE_string('midi_file',\n '',\n 'Path to a MIDI file (.mid) to synthesize.')\nabsl.flags.DEFINE_integer('batch_size', 8, 'Batch size for generation.')\nabsl.flags.DEFINE_float('secs_per_instrument', 6.0,\n 'In random interpolations, the seconds it takes to '\n 'interpolate from one instrument to another.')\nabsl.flags.DEFINE_string('tfds_data_dir',\n 'gs://tfds-data/datasets',\n 'Data directory for the TFDS dataset used to train.')\n\nFLAGS = absl.flags.FLAGS\ntf.logging.set_verbosity(tf.logging.INFO)\n\n\ndef main(unused_argv):\n absl.flags.FLAGS.alsologtostderr = True\n\n # Load the model\n flags = lib_flags.Flags(\n {\n 'batch_size_schedule': [FLAGS.batch_size],\n 'tfds_data_dir': FLAGS.tfds_data_dir\n })\n model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)\n\n # Make an output directory if it doesn't exist\n output_dir = util.expand_path(FLAGS.output_dir)\n if not tf.gfile.Exists(output_dir):\n tf.gfile.MakeDirs(output_dir)\n\n if FLAGS.midi_file:\n # If a MIDI file is provided, synthesize interpolations across the clip\n unused_ns, notes = gu.load_midi(FLAGS.midi_file)\n\n # Distribute latent vectors linearly in time\n z_instruments, t_instruments = gu.get_random_instruments(\n model,\n notes['end_times'][-1],\n secs_per_instrument=FLAGS.secs_per_instrument)\n\n # Get latent vectors for each note\n z_notes = gu.get_z_notes(notes['start_times'], z_instruments, t_instruments)\n\n # Generate audio for each note\n print('Generating {} samples...'.format(len(z_notes)))\n audio_notes = model.generate_samples_from_z(z_notes, notes['pitches'])\n\n # Make a single audio clip\n audio_clip = gu.combine_notes(audio_notes,\n 
notes['start_times'],\n notes['end_times'],\n notes['velocities'])\n\n # Write the wave files\n fname = os.path.join(output_dir, 'generated_clip.wav')\n gu.save_wav(audio_clip, fname)\n else:\n # Otherwise, just generate a batch of random sounds\n waves = model.generate_samples(FLAGS.batch_size)\n # Write the wave files\n for i in range(len(waves)):\n fname = os.path.join(output_dir, 'generated_{}.wav'.format(i))\n gu.save_wav(waves[i], fname)\n\n\ndef console_entry_point():\n tf.disable_v2_behavior()\n tf.app.run(main)\n\n\nif __name__ == '__main__':\n console_entry_point()\n", "path": "magenta/models/gansynth/gansynth_generate.py"}, {"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nr\"\"\"Separate file for storing the current version of Magenta.\n\nStored in a separate file so that setup.py can reference the version without\npulling in all the dependencies in __init__.py.\n\"\"\"\n\n__version__ = '2.1.3'\n", "path": "magenta/version.py"}]}
| 1,760 | 357 |