problem_id (string, 18–22 chars) | source (string, 1 distinct value) | task_type (string, 1 distinct value) | in_source_id (string, 13–58 chars) | prompt (string, 1.71k–18.9k chars) | golden_diff (string, 145–5.13k chars) | verification_info (string, 465–23.6k chars) | num_tokens_prompt (int64, 556–4.1k) | num_tokens_diff (int64, 47–1.02k) |
---|---|---|---|---|---|---|---|---|
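The rows below follow this nine-column schema. As a quick orientation, here is a minimal sketch of reading these columns with the Hugging Face `datasets` library; the hub id is inferred from the `source` column and the split name is an assumption, so both may need adjusting to the actual hosting location.

```python
from datasets import load_dataset

# Hub id guessed from the `source` column; the "train" split is an assumption.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])        # e.g. gh_patches_debug_12977, mindee__doctr-810
print(row["num_tokens_prompt"], row["num_tokens_diff"])
print(row["prompt"][:300])             # issue text plus partial code base
print(row["golden_diff"][:300])        # reference patch
print(row["verification_info"][:300])  # JSON used to verify generated patches
```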
gh_patches_debug_12977 | rasdani/github-patches | git_diff | mindee__doctr-810 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to run the demo using the main branch
### Bug description
tried using latest master with streamlit demo as described in intro page.
provided a picture with a UK Car in it, to see if it reads the plate text. It returned with the following failure.
Strange, i used this 3 months ago, and was totally fine, and the recognition was really good, some how now its simply not working as good, infact alot worse. I have used the api way, and that seems to work ok, but still alot worse than what it was before.
Picture was:
```
KeyError: 'out_map'
Traceback:
File "D:\Python39\lib\site-packages\streamlit\script_runner.py", line 354, in _run_script
exec(code, module.__dict__)
File "D:\gitsrc\opensource\doctr\demo\app.py", line 109, in <module>
main()
File "D:\gitsrc\opensource\doctr\demo\app.py", line 83, in main
seg_map = out["out_map"]
```
### Code snippet to reproduce the bug
for Web app:
```shell
streamlit run demo/app.py
```
For API:
```shell
uvicorn --reload --workers 1 --host 0.0.0.0 --port=8002 --app-dir api/ app.main:app
```
with client code as:
```python
import requests
import io
import json
with open('D:/ImagesTest/NOREG_4127_20190324_2113499334_cpc2jh1m.jpg', 'rb') as f:
data = f.read()
response = requests.post("http://localhost:8002/ocr", files={'file': data}).json()
with open('dataapi.json', 'w', encoding='utf-8') as f:
json.dump(response, f, ensure_ascii=False, indent=4)
```
### Error traceback
```
KeyError: 'out_map'
Traceback:
File "D:\Python39\lib\site-packages\streamlit\script_runner.py", line 354, in _run_script
exec(code, module.__dict__)
File "D:\gitsrc\opensource\doctr\demo\app.py", line 109, in <module>
main()
File "D:\gitsrc\opensource\doctr\demo\app.py", line 83, in main
seg_map = out["out_map"]
```
### Environment
running on Windows 10 Pro
latest python
the collect_env.py wont work properly under wsl2.
```
Traceback (most recent call last):
File "collect_env.py", line 24, in <module>
import doctr
File "/mnt/d/gitsrc/opensource/doctr/doctr/__init__.py", line 1, in <module>
from . import datasets, io, models, transforms, utils
File "/mnt/d/gitsrc/opensource/doctr/doctr/datasets/__init__.py", line 1, in <module>
from doctr.file_utils import is_tf_available
File "/mnt/d/gitsrc/opensource/doctr/doctr/file_utils.py", line 33
logging.info(f"PyTorch version {_torch_version} available.")
^
SyntaxError: invalid syntax
```
</issue>
<code>
[start of demo/app.py]
1 # Copyright (C) 2021-2022, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import os
7
8 import matplotlib.pyplot as plt
9 import streamlit as st
10
11 os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
12
13 import cv2
14 import tensorflow as tf
15
16 gpu_devices = tf.config.experimental.list_physical_devices('GPU')
17 if any(gpu_devices):
18 tf.config.experimental.set_memory_growth(gpu_devices[0], True)
19
20 from doctr.io import DocumentFile
21 from doctr.models import ocr_predictor
22 from doctr.utils.visualization import visualize_page
23
24 DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large"]
25 RECO_ARCHS = ["crnn_vgg16_bn", "crnn_mobilenet_v3_small", "master", "sar_resnet31"]
26
27
28 def main():
29
30 # Wide mode
31 st.set_page_config(layout="wide")
32
33 # Designing the interface
34 st.title("docTR: Document Text Recognition")
35 # For newline
36 st.write('\n')
37 # Instructions
38 st.markdown("*Hint: click on the top-right corner of an image to enlarge it!*")
39 # Set the columns
40 cols = st.columns((1, 1, 1, 1))
41 cols[0].subheader("Input page")
42 cols[1].subheader("Segmentation heatmap")
43 cols[2].subheader("OCR output")
44 cols[3].subheader("Page reconstitution")
45
46 # Sidebar
47 # File selection
48 st.sidebar.title("Document selection")
49 # Disabling warning
50 st.set_option('deprecation.showfileUploaderEncoding', False)
51 # Choose your own image
52 uploaded_file = st.sidebar.file_uploader("Upload files", type=['pdf', 'png', 'jpeg', 'jpg'])
53 if uploaded_file is not None:
54 if uploaded_file.name.endswith('.pdf'):
55 doc = DocumentFile.from_pdf(uploaded_file.read()).as_images()
56 else:
57 doc = DocumentFile.from_images(uploaded_file.read())
58 page_idx = st.sidebar.selectbox("Page selection", [idx + 1 for idx in range(len(doc))]) - 1
59 cols[0].image(doc[page_idx])
60
61 # Model selection
62 st.sidebar.title("Model selection")
63 det_arch = st.sidebar.selectbox("Text detection model", DET_ARCHS)
64 reco_arch = st.sidebar.selectbox("Text recognition model", RECO_ARCHS)
65
66 # For newline
67 st.sidebar.write('\n')
68
69 if st.sidebar.button("Analyze page"):
70
71 if uploaded_file is None:
72 st.sidebar.write("Please upload a document")
73
74 else:
75 with st.spinner('Loading model...'):
76 predictor = ocr_predictor(det_arch, reco_arch, pretrained=True)
77
78 with st.spinner('Analyzing...'):
79
80 # Forward the image to the model
81 processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])
82 out = predictor.det_predictor.model(processed_batches[0], return_preds=True)
83 seg_map = out["out_map"]
84 seg_map = tf.squeeze(seg_map[0, ...], axis=[2])
85 seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),
86 interpolation=cv2.INTER_LINEAR)
87 # Plot the raw heatmap
88 fig, ax = plt.subplots()
89 ax.imshow(seg_map)
90 ax.axis('off')
91 cols[1].pyplot(fig)
92
93 # Plot OCR output
94 out = predictor([doc[page_idx]])
95 fig = visualize_page(out.pages[0].export(), doc[page_idx], interactive=False)
96 cols[2].pyplot(fig)
97
98 # Page reconsitution under input page
99 page_export = out.pages[0].export()
100 img = out.pages[0].synthesize()
101 cols[3].image(img, clamp=True)
102
103 # Display JSON
104 st.markdown("\nHere are your analysis results in JSON format:")
105 st.json(page_export)
106
107
108 if __name__ == '__main__':
109 main()
110
[end of demo/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/demo/app.py b/demo/app.py
--- a/demo/app.py
+++ b/demo/app.py
@@ -79,7 +79,7 @@
# Forward the image to the model
processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])
- out = predictor.det_predictor.model(processed_batches[0], return_preds=True)
+ out = predictor.det_predictor.model(processed_batches[0], return_model_output=True)
seg_map = out["out_map"]
seg_map = tf.squeeze(seg_map[0, ...], axis=[2])
seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),
| {"golden_diff": "diff --git a/demo/app.py b/demo/app.py\n--- a/demo/app.py\n+++ b/demo/app.py\n@@ -79,7 +79,7 @@\n \n # Forward the image to the model\n processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])\n- out = predictor.det_predictor.model(processed_batches[0], return_preds=True)\n+ out = predictor.det_predictor.model(processed_batches[0], return_model_output=True)\n seg_map = out[\"out_map\"]\n seg_map = tf.squeeze(seg_map[0, ...], axis=[2])\n seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),\n", "issue": "Unable to run the demo using the main branch\n### Bug description\r\n\r\ntried using latest master with streamlit demo as described in intro page.\r\n\r\nprovided a picture with a UK Car in it, to see if it reads the plate text. It returned with the following failure.\r\n\r\nStrange, i used this 3 months ago, and was totally fine, and the recognition was really good, some how now its simply not working as good, infact alot worse. I have used the api way, and that seems to work ok, but still alot worse than what it was before.\r\nPicture was: \r\n\r\n```\r\nKeyError: 'out_map'\r\nTraceback:\r\nFile \"D:\\Python39\\lib\\site-packages\\streamlit\\script_runner.py\", line 354, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"D:\\gitsrc\\opensource\\doctr\\demo\\app.py\", line 109, in <module>\r\n main()\r\nFile \"D:\\gitsrc\\opensource\\doctr\\demo\\app.py\", line 83, in main\r\n seg_map = out[\"out_map\"]\r\n```\r\n### Code snippet to reproduce the bug\r\n\r\nfor Web app:\r\n```shell\r\nstreamlit run demo/app.py\r\n```\r\nFor API:\r\n```shell\r\nuvicorn --reload --workers 1 --host 0.0.0.0 --port=8002 --app-dir api/ app.main:app\r\n```\r\nwith client code as:\r\n```python\r\nimport requests\r\nimport io\r\nimport json\r\n\r\nwith open('D:/ImagesTest/NOREG_4127_20190324_2113499334_cpc2jh1m.jpg', 'rb') as f:\r\n data = f.read()\r\nresponse = requests.post(\"http://localhost:8002/ocr\", files={'file': data}).json()\r\n\r\nwith open('dataapi.json', 'w', encoding='utf-8') as f:\r\n json.dump(response, f, ensure_ascii=False, indent=4)\r\n```\r\n### Error traceback\r\n```\r\nKeyError: 'out_map'\r\nTraceback:\r\nFile \"D:\\Python39\\lib\\site-packages\\streamlit\\script_runner.py\", line 354, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"D:\\gitsrc\\opensource\\doctr\\demo\\app.py\", line 109, in <module>\r\n main()\r\nFile \"D:\\gitsrc\\opensource\\doctr\\demo\\app.py\", line 83, in main\r\n seg_map = out[\"out_map\"]\r\n```\r\n### Environment\r\n\r\nrunning on Windows 10 Pro\r\nlatest python\r\n\r\nthe collect_env.py wont work properly under wsl2.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"collect_env.py\", line 24, in <module>\r\n import doctr\r\n File \"/mnt/d/gitsrc/opensource/doctr/doctr/__init__.py\", line 1, in <module>\r\n from . 
import datasets, io, models, transforms, utils\r\n File \"/mnt/d/gitsrc/opensource/doctr/doctr/datasets/__init__.py\", line 1, in <module>\r\n from doctr.file_utils import is_tf_available\r\n File \"/mnt/d/gitsrc/opensource/doctr/doctr/file_utils.py\", line 33\r\n logging.info(f\"PyTorch version {_torch_version} available.\")\r\n ^\r\nSyntaxError: invalid syntax\r\n```\n", "before_files": [{"content": "# Copyright (C) 2021-2022, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport os\n\nimport matplotlib.pyplot as plt\nimport streamlit as st\n\nos.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\"\n\nimport cv2\nimport tensorflow as tf\n\ngpu_devices = tf.config.experimental.list_physical_devices('GPU')\nif any(gpu_devices):\n tf.config.experimental.set_memory_growth(gpu_devices[0], True)\n\nfrom doctr.io import DocumentFile\nfrom doctr.models import ocr_predictor\nfrom doctr.utils.visualization import visualize_page\n\nDET_ARCHS = [\"db_resnet50\", \"db_mobilenet_v3_large\"]\nRECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_mobilenet_v3_small\", \"master\", \"sar_resnet31\"]\n\n\ndef main():\n\n # Wide mode\n st.set_page_config(layout=\"wide\")\n\n # Designing the interface\n st.title(\"docTR: Document Text Recognition\")\n # For newline\n st.write('\\n')\n # Instructions\n st.markdown(\"*Hint: click on the top-right corner of an image to enlarge it!*\")\n # Set the columns\n cols = st.columns((1, 1, 1, 1))\n cols[0].subheader(\"Input page\")\n cols[1].subheader(\"Segmentation heatmap\")\n cols[2].subheader(\"OCR output\")\n cols[3].subheader(\"Page reconstitution\")\n\n # Sidebar\n # File selection\n st.sidebar.title(\"Document selection\")\n # Disabling warning\n st.set_option('deprecation.showfileUploaderEncoding', False)\n # Choose your own image\n uploaded_file = st.sidebar.file_uploader(\"Upload files\", type=['pdf', 'png', 'jpeg', 'jpg'])\n if uploaded_file is not None:\n if uploaded_file.name.endswith('.pdf'):\n doc = DocumentFile.from_pdf(uploaded_file.read()).as_images()\n else:\n doc = DocumentFile.from_images(uploaded_file.read())\n page_idx = st.sidebar.selectbox(\"Page selection\", [idx + 1 for idx in range(len(doc))]) - 1\n cols[0].image(doc[page_idx])\n\n # Model selection\n st.sidebar.title(\"Model selection\")\n det_arch = st.sidebar.selectbox(\"Text detection model\", DET_ARCHS)\n reco_arch = st.sidebar.selectbox(\"Text recognition model\", RECO_ARCHS)\n\n # For newline\n st.sidebar.write('\\n')\n\n if st.sidebar.button(\"Analyze page\"):\n\n if uploaded_file is None:\n st.sidebar.write(\"Please upload a document\")\n\n else:\n with st.spinner('Loading model...'):\n predictor = ocr_predictor(det_arch, reco_arch, pretrained=True)\n\n with st.spinner('Analyzing...'):\n\n # Forward the image to the model\n processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])\n out = predictor.det_predictor.model(processed_batches[0], return_preds=True)\n seg_map = out[\"out_map\"]\n seg_map = tf.squeeze(seg_map[0, ...], axis=[2])\n seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),\n interpolation=cv2.INTER_LINEAR)\n # Plot the raw heatmap\n fig, ax = plt.subplots()\n ax.imshow(seg_map)\n ax.axis('off')\n cols[1].pyplot(fig)\n\n # Plot OCR output\n out = predictor([doc[page_idx]])\n fig = visualize_page(out.pages[0].export(), doc[page_idx], interactive=False)\n cols[2].pyplot(fig)\n\n # Page reconsitution under input 
page\n page_export = out.pages[0].export()\n img = out.pages[0].synthesize()\n cols[3].image(img, clamp=True)\n\n # Display JSON\n st.markdown(\"\\nHere are your analysis results in JSON format:\")\n st.json(page_export)\n\n\nif __name__ == '__main__':\n main()\n", "path": "demo/app.py"}]} | 2,420 | 157 |
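For the row above, the reference patch swaps `return_preds=True` for `return_model_output=True` in the demo's detection call. A minimal standalone sketch of the corrected flow is given below; the image path and the two architecture names are placeholder choices taken from the lists in the quoted `demo/app.py`, not values from the issue itself.

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

doc = DocumentFile.from_images("sample.jpg")  # placeholder image path
predictor = ocr_predictor("db_resnet50", "crnn_vgg16_bn", pretrained=True)

# Per the patch, the raw segmentation map is only present when the model is
# asked for its output tensor, not just the post-processed predictions.
processed = predictor.det_predictor.pre_processor([doc[0]])
out = predictor.det_predictor.model(processed[0], return_model_output=True)
seg_map = out["out_map"]  # the key that raised KeyError with return_preds=True
```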
gh_patches_debug_27066 | rasdani/github-patches | git_diff | iterative__dvc-6236 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
add: Path does not overlap with base path
## Description
Using `dvc add` on Windows can produce the error
```
ERROR: Path: C:\Users\my_user\Desktop\DVC-2.4-TEST does not overlap with base path: C:\Users\my_user\Desktop\dvc-2.4-test
```
I believe this is because paths in Windows are case insensitive (note the difference in case between `DVC-2.4-TEST` and `dvc-2.4-test`)
### Reproduce
```
mkdir my_test
cd MY_TEST
git init
dvc init
```
Then, move data into the project folder.
```
dvc add mnist --verbose
```
Which outputs
```console
2021-06-23 08:22:44,410 DEBUG: Check for update is enabled.
2021-06-23 08:22:44,491 DEBUG: Trying to spawn '['daemon', '-q', 'updater']'
2021-06-23 08:22:44,536 DEBUG: Spawned '['daemon', '-q', 'updater']'
Adding...
2021-06-23 08:22:44,558 ERROR: Path: C:\Users\my_user\Desktop\MY_TEST does not overlap with base path: C:\Users\my_user\Desktop\my_test
------------------------------------------------------------
Traceback (most recent call last):
File "c:\users\my_user\anaconda3\envs\dvc-2.4\lib\site-packages\dvc\command\add.py", line 21, in run
self.repo.add(
File "c:\users\my_user\anaconda3\envs\dvc-2.4\lib\site-packages\dvc\repo\__init__.py", line 51, in wrapper
return f(repo, *args, **kwargs)
File "c:\users\my_user\anaconda3\envs\dvc-2.4\lib\site-packages\dvc\repo\scm_context.py", line 14, in run
return method(repo, *args, **kw)
File "c:\users\my_user\anaconda3\envs\dvc-2.4\lib\site-packages\dvc\repo\add.py", line 92, in add
stages = _create_stages(
File "c:\users\my_user\anaconda3\envs\dvc-2.4\lib\site-packages\dvc\repo\add.py", line 250, in _create_stages
path, wdir, out = resolve_paths(
File "c:\users\my_user\anaconda3\envs\dvc-2.4\lib\site-packages\dvc\utils\__init__.py", line 386, in resolve_paths
elif contains_symlink_up_to(dirname, repo.root_dir) or (
File "c:\users\my_user\anaconda3\envs\dvc-2.4\lib\site-packages\dvc\utils\fs.py", line 81, in contains_symlink_up_to
raise BasePathNotInCheckedPathException(path, base_path)
dvc.utils.fs.BasePathNotInCheckedPathException: Path: C:\Users\my_user\Desktop\MY_TEST does not overlap with base path: C:\Users\my_user\Desktop\my_test
------------------------------------------------------------
2021-06-23 08:22:44,563 DEBUG: Analytics is enabled.
2021-06-23 08:22:44,565 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', 'C:\\Users\\my_user\\AppData\\Local\\Temp\\tmpgejvwwr6']'
2021-06-23 08:22:44,610
```
### Expected
```
dvc add mnist
```
Which would output
```
100% Add|█████████████████████████████████████████████████████████████████████████████████████|1/1 [00:00, 1.76file/s]
To track the changes with git, run:
git add mnist.dvc .gitignore
```
### Environment information
**Output of `dvc doctor`:**
```console
DVC version: 2.4.0+b7c6df
---------------------------------
Platform: Python 3.8.10 on Windows-10-10.0.19041-SP0
Supports: http, https
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: local
Workspace directory: NTFS on C:\
Repo: dvc, git
```
**Additional Information (if any):**
This error does not happen if you do `cd my_test` instead of `cd MY_TEST`, or in general, if you specify the path with exactly the same case as the true file path.
</issue>
<code>
[start of dvc/utils/fs.py]
1 import errno
2 import logging
3 import os
4 import shutil
5 import stat
6 import sys
7
8 from dvc.exceptions import DvcException
9 from dvc.system import System
10 from dvc.utils import dict_md5
11
12 logger = logging.getLogger(__name__)
13
14 LOCAL_CHUNK_SIZE = 2 ** 20 # 1 MB
15
16 umask = os.umask(0)
17 os.umask(umask)
18
19
20 def fs_copy(src, dst, ignore=None):
21 if os.path.isdir(src):
22 shutil.copytree(src, dst, ignore=ignore)
23 else:
24 shutil.copy2(src, dst)
25
26
27 def get_inode(path):
28 inode = System.inode(path)
29 logger.trace("Path '%s' inode '%d'", path, inode)
30 return inode
31
32
33 def get_mtime_and_size(path, fs, dvcignore=None):
34 import nanotime
35
36 if fs.isdir(path):
37 size = 0
38 files_mtimes = {}
39 if dvcignore:
40 walk_iterator = dvcignore.walk_files(fs, path)
41 else:
42 walk_iterator = fs.walk_files(path)
43 for file_path in walk_iterator:
44 try:
45 stats = fs.stat(file_path)
46 except OSError as exc:
47 # NOTE: broken symlink case.
48 if exc.errno != errno.ENOENT:
49 raise
50 continue
51 size += stats.st_size
52 files_mtimes[os.fspath(file_path)] = stats.st_mtime
53
54 # We track file changes and moves, which cannot be detected with simply
55 # max(mtime(f) for f in non_ignored_files)
56 mtime = dict_md5(files_mtimes)
57 else:
58 base_stat = fs.stat(path)
59 size = base_stat.st_size
60 mtime = base_stat.st_mtime
61 mtime = int(nanotime.timestamp(mtime))
62
63 # State of files handled by dvc is stored in db as TEXT.
64 # We cast results to string for later comparisons with stored values.
65 return str(mtime), str(size)
66
67
68 class BasePathNotInCheckedPathException(DvcException):
69 def __init__(self, path, base_path):
70 msg = "Path: {} does not overlap with base path: {}".format(
71 path, base_path
72 )
73 super().__init__(msg)
74
75
76 def contains_symlink_up_to(path, base_path):
77 base_path = os.fspath(base_path)
78 path = os.fspath(path)
79
80 if base_path not in path:
81 raise BasePathNotInCheckedPathException(path, base_path)
82
83 if path == base_path:
84 return False
85 if System.is_symlink(path):
86 return True
87 if os.path.dirname(path) == path:
88 return False
89 return contains_symlink_up_to(os.path.dirname(path), base_path)
90
91
92 def move(src, dst):
93 """Atomically move src to dst and chmod it with mode.
94
95 Moving is performed in two stages to make the whole operation atomic in
96 case src and dst are on different filesystems and actual physical copying
97 of data is happening.
98 """
99 from shortuuid import uuid
100
101 dst = os.path.abspath(dst)
102 tmp = f"{dst}.{uuid()}"
103
104 if os.path.islink(src):
105 shutil.copy(src, tmp)
106 _unlink(src, _chmod)
107 else:
108 shutil.move(src, tmp)
109
110 shutil.move(tmp, dst)
111
112
113 def _chmod(func, p, excinfo): # pylint: disable=unused-argument
114 perm = os.lstat(p).st_mode
115 perm |= stat.S_IWRITE
116
117 try:
118 os.chmod(p, perm)
119 except OSError as exc:
120 # broken symlink or file is not owned by us
121 if exc.errno not in [errno.ENOENT, errno.EPERM]:
122 raise
123
124 func(p)
125
126
127 def _unlink(path, onerror):
128 try:
129 os.unlink(path)
130 except OSError:
131 onerror(os.unlink, path, sys.exc_info())
132
133
134 def remove(path):
135 logger.debug("Removing '%s'", path)
136
137 try:
138 if os.path.isdir(path):
139 shutil.rmtree(path, onerror=_chmod)
140 else:
141 _unlink(path, _chmod)
142 except OSError as exc:
143 if exc.errno != errno.ENOENT:
144 raise
145
146
147 def path_isin(child, parent):
148 """Check if given `child` path is inside `parent`."""
149
150 def normalize_path(path):
151 return os.path.normpath(path)
152
153 parent = os.path.join(normalize_path(parent), "")
154 child = normalize_path(child)
155 return child != parent and child.startswith(parent)
156
157
158 def makedirs(path, exist_ok=False, mode=None):
159 if mode is None:
160 os.makedirs(path, exist_ok=exist_ok)
161 return
162
163 # Modified version of os.makedirs() with support for extended mode
164 # (e.g. S_ISGID)
165 head, tail = os.path.split(path)
166 if not tail:
167 head, tail = os.path.split(head)
168 if head and tail and not os.path.exists(head):
169 try:
170 makedirs(head, exist_ok=exist_ok, mode=mode)
171 except FileExistsError:
172 # Defeats race condition when another thread created the path
173 pass
174 cdir = os.curdir
175 if isinstance(tail, bytes):
176 cdir = bytes(os.curdir, "ASCII")
177 if tail == cdir: # xxx/newdir/. exists if xxx/newdir exists
178 return
179 try:
180 os.mkdir(path, mode)
181 except OSError:
182 # Cannot rely on checking for EEXIST, since the operating system
183 # could give priority to other errors like EACCES or EROFS
184 if not exist_ok or not os.path.isdir(path):
185 raise
186
187 try:
188 os.chmod(path, mode)
189 except OSError:
190 logger.trace("failed to chmod '%o' '%s'", mode, path, exc_info=True)
191
192
193 def copyfile(src, dest, no_progress_bar=False, name=None):
194 """Copy file with progress bar"""
195 from dvc.progress import Tqdm
196
197 name = name if name else os.path.basename(dest)
198 total = os.stat(src).st_size
199
200 if os.path.isdir(dest):
201 dest = os.path.join(dest, os.path.basename(src))
202
203 try:
204 System.reflink(src, dest)
205 except DvcException:
206 with open(src, "rb") as fsrc, open(dest, "wb+") as fdest:
207 with Tqdm.wrapattr(
208 fdest,
209 "write",
210 desc=name,
211 disable=no_progress_bar,
212 total=total,
213 bytes=True,
214 ) as fdest_wrapped:
215 while True:
216 buf = fsrc.read(LOCAL_CHUNK_SIZE)
217 if not buf:
218 break
219 fdest_wrapped.write(buf)
220
221
222 def copy_fobj_to_file(fsrc, dest):
223 """Copy contents of open file object to destination path."""
224 with open(dest, "wb+") as fdest:
225 shutil.copyfileobj(fsrc, fdest)
226
227
228 def walk_files(directory):
229 for root, _, files in os.walk(directory):
230 for f in files:
231 yield os.path.join(root, f)
232
[end of dvc/utils/fs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/utils/fs.py b/dvc/utils/fs.py
--- a/dvc/utils/fs.py
+++ b/dvc/utils/fs.py
@@ -4,11 +4,15 @@
import shutil
import stat
import sys
+from typing import TYPE_CHECKING
from dvc.exceptions import DvcException
from dvc.system import System
from dvc.utils import dict_md5
+if TYPE_CHECKING:
+ from dvc.types import StrPath
+
logger = logging.getLogger(__name__)
LOCAL_CHUNK_SIZE = 2 ** 20 # 1 MB
@@ -73,9 +77,9 @@
super().__init__(msg)
-def contains_symlink_up_to(path, base_path):
- base_path = os.fspath(base_path)
- path = os.fspath(path)
+def contains_symlink_up_to(path: "StrPath", base_path: "StrPath"):
+ base_path = os.path.normcase(os.fspath(base_path))
+ path = os.path.normcase(os.fspath(path))
if base_path not in path:
raise BasePathNotInCheckedPathException(path, base_path)
@@ -144,11 +148,11 @@
raise
-def path_isin(child, parent):
+def path_isin(child: "StrPath", parent: "StrPath"):
"""Check if given `child` path is inside `parent`."""
- def normalize_path(path):
- return os.path.normpath(path)
+ def normalize_path(path) -> str:
+ return os.path.normcase(os.path.normpath(path))
parent = os.path.join(normalize_path(parent), "")
child = normalize_path(child)
| {"golden_diff": "diff --git a/dvc/utils/fs.py b/dvc/utils/fs.py\n--- a/dvc/utils/fs.py\n+++ b/dvc/utils/fs.py\n@@ -4,11 +4,15 @@\n import shutil\n import stat\n import sys\n+from typing import TYPE_CHECKING\n \n from dvc.exceptions import DvcException\n from dvc.system import System\n from dvc.utils import dict_md5\n \n+if TYPE_CHECKING:\n+ from dvc.types import StrPath\n+\n logger = logging.getLogger(__name__)\n \n LOCAL_CHUNK_SIZE = 2 ** 20 # 1 MB\n@@ -73,9 +77,9 @@\n super().__init__(msg)\n \n \n-def contains_symlink_up_to(path, base_path):\n- base_path = os.fspath(base_path)\n- path = os.fspath(path)\n+def contains_symlink_up_to(path: \"StrPath\", base_path: \"StrPath\"):\n+ base_path = os.path.normcase(os.fspath(base_path))\n+ path = os.path.normcase(os.fspath(path))\n \n if base_path not in path:\n raise BasePathNotInCheckedPathException(path, base_path)\n@@ -144,11 +148,11 @@\n raise\n \n \n-def path_isin(child, parent):\n+def path_isin(child: \"StrPath\", parent: \"StrPath\"):\n \"\"\"Check if given `child` path is inside `parent`.\"\"\"\n \n- def normalize_path(path):\n- return os.path.normpath(path)\n+ def normalize_path(path) -> str:\n+ return os.path.normcase(os.path.normpath(path))\n \n parent = os.path.join(normalize_path(parent), \"\")\n child = normalize_path(child)\n", "issue": "add: Path does not overlap with base path\n## Description\r\n\r\nUsing `dvc add` on Windows can produce the error \r\n\r\n```\r\nERROR: Path: C:\\Users\\my_user\\Desktop\\DVC-2.4-TEST does not overlap with base path: C:\\Users\\my_user\\Desktop\\dvc-2.4-test\r\n```\r\n\r\nI believe this is because paths in Windows are case insensitive (note the difference in case between `DVC-2.4-TEST` and `dvc-2.4-test`)\r\n\r\n\r\n### Reproduce\r\n\r\n```\r\nmkdir my_test\r\ncd MY_TEST\r\ngit init\r\ndvc init\r\n```\r\n\r\nThen, move data into the project folder.\r\n\r\n```\r\ndvc add mnist --verbose\r\n```\r\n\r\nWhich outputs\r\n\r\n```console\r\n2021-06-23 08:22:44,410 DEBUG: Check for update is enabled.\r\n2021-06-23 08:22:44,491 DEBUG: Trying to spawn '['daemon', '-q', 'updater']'\r\n2021-06-23 08:22:44,536 DEBUG: Spawned '['daemon', '-q', 'updater']'\r\nAdding...\r\n2021-06-23 08:22:44,558 ERROR: Path: C:\\Users\\my_user\\Desktop\\MY_TEST does not overlap with base path: C:\\Users\\my_user\\Desktop\\my_test\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"c:\\users\\my_user\\anaconda3\\envs\\dvc-2.4\\lib\\site-packages\\dvc\\command\\add.py\", line 21, in run\r\n self.repo.add(\r\n File \"c:\\users\\my_user\\anaconda3\\envs\\dvc-2.4\\lib\\site-packages\\dvc\\repo\\__init__.py\", line 51, in wrapper\r\n return f(repo, *args, **kwargs)\r\n File \"c:\\users\\my_user\\anaconda3\\envs\\dvc-2.4\\lib\\site-packages\\dvc\\repo\\scm_context.py\", line 14, in run\r\n return method(repo, *args, **kw)\r\n File \"c:\\users\\my_user\\anaconda3\\envs\\dvc-2.4\\lib\\site-packages\\dvc\\repo\\add.py\", line 92, in add\r\n stages = _create_stages(\r\n File \"c:\\users\\my_user\\anaconda3\\envs\\dvc-2.4\\lib\\site-packages\\dvc\\repo\\add.py\", line 250, in _create_stages\r\n path, wdir, out = resolve_paths(\r\n File \"c:\\users\\my_user\\anaconda3\\envs\\dvc-2.4\\lib\\site-packages\\dvc\\utils\\__init__.py\", line 386, in resolve_paths\r\n elif contains_symlink_up_to(dirname, repo.root_dir) or (\r\n File \"c:\\users\\my_user\\anaconda3\\envs\\dvc-2.4\\lib\\site-packages\\dvc\\utils\\fs.py\", line 81, in contains_symlink_up_to\r\n raise 
BasePathNotInCheckedPathException(path, base_path)\r\ndvc.utils.fs.BasePathNotInCheckedPathException: Path: C:\\Users\\my_user\\Desktop\\MY_TEST does not overlap with base path: C:\\Users\\my_user\\Desktop\\my_test\r\n------------------------------------------------------------\r\n2021-06-23 08:22:44,563 DEBUG: Analytics is enabled.\r\n2021-06-23 08:22:44,565 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', 'C:\\\\Users\\\\my_user\\\\AppData\\\\Local\\\\Temp\\\\tmpgejvwwr6']'\r\n2021-06-23 08:22:44,610 \r\n```\r\n\r\n\r\n### Expected\r\n\r\n```\r\ndvc add mnist\r\n```\r\n\r\nWhich would output\r\n\r\n```\r\n100% Add|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588|1/1 [00:00, 1.76file/s]\r\n\r\nTo track the changes with git, run:\r\n\r\n git add mnist.dvc .gitignore\r\n```\r\n\r\n### Environment information\r\n\r\n**Output of `dvc doctor`:**\r\n\r\n```console\r\nDVC version: 2.4.0+b7c6df\r\n---------------------------------\r\nPlatform: Python 3.8.10 on Windows-10-10.0.19041-SP0\r\nSupports: http, https\r\nCache types: <https://error.dvc.org/no-dvc-cache>\r\nCaches: local\r\nRemotes: local\r\nWorkspace directory: NTFS on C:\\\r\nRepo: dvc, git\r\n```\r\n\r\n**Additional Information (if any):**\r\n\r\nThis error does not happen if you do `cd my_test` instead of `cd MY_TEST`, or in general, if you specify the path with exactly the same case as the true file path.\r\n\n", "before_files": [{"content": "import errno\nimport logging\nimport os\nimport shutil\nimport stat\nimport sys\n\nfrom dvc.exceptions import DvcException\nfrom dvc.system import System\nfrom dvc.utils import dict_md5\n\nlogger = logging.getLogger(__name__)\n\nLOCAL_CHUNK_SIZE = 2 ** 20 # 1 MB\n\numask = os.umask(0)\nos.umask(umask)\n\n\ndef fs_copy(src, dst, ignore=None):\n if os.path.isdir(src):\n shutil.copytree(src, dst, ignore=ignore)\n else:\n shutil.copy2(src, dst)\n\n\ndef get_inode(path):\n inode = System.inode(path)\n logger.trace(\"Path '%s' inode '%d'\", path, inode)\n return inode\n\n\ndef get_mtime_and_size(path, fs, dvcignore=None):\n import nanotime\n\n if fs.isdir(path):\n size = 0\n files_mtimes = {}\n if dvcignore:\n walk_iterator = dvcignore.walk_files(fs, path)\n else:\n walk_iterator = fs.walk_files(path)\n for file_path in walk_iterator:\n try:\n stats = fs.stat(file_path)\n except OSError as exc:\n # NOTE: broken symlink case.\n if exc.errno != errno.ENOENT:\n raise\n continue\n size += stats.st_size\n files_mtimes[os.fspath(file_path)] = stats.st_mtime\n\n # We track file changes and moves, which cannot be detected with simply\n # max(mtime(f) for f in non_ignored_files)\n mtime = dict_md5(files_mtimes)\n else:\n base_stat = fs.stat(path)\n size = base_stat.st_size\n mtime = base_stat.st_mtime\n mtime = int(nanotime.timestamp(mtime))\n\n # State of files handled by dvc is stored in db as TEXT.\n # We cast results to string for later comparisons with stored values.\n return str(mtime), str(size)\n\n\nclass BasePathNotInCheckedPathException(DvcException):\n def __init__(self, path, base_path):\n msg = \"Path: {} does not overlap with base path: 
{}\".format(\n path, base_path\n )\n super().__init__(msg)\n\n\ndef contains_symlink_up_to(path, base_path):\n base_path = os.fspath(base_path)\n path = os.fspath(path)\n\n if base_path not in path:\n raise BasePathNotInCheckedPathException(path, base_path)\n\n if path == base_path:\n return False\n if System.is_symlink(path):\n return True\n if os.path.dirname(path) == path:\n return False\n return contains_symlink_up_to(os.path.dirname(path), base_path)\n\n\ndef move(src, dst):\n \"\"\"Atomically move src to dst and chmod it with mode.\n\n Moving is performed in two stages to make the whole operation atomic in\n case src and dst are on different filesystems and actual physical copying\n of data is happening.\n \"\"\"\n from shortuuid import uuid\n\n dst = os.path.abspath(dst)\n tmp = f\"{dst}.{uuid()}\"\n\n if os.path.islink(src):\n shutil.copy(src, tmp)\n _unlink(src, _chmod)\n else:\n shutil.move(src, tmp)\n\n shutil.move(tmp, dst)\n\n\ndef _chmod(func, p, excinfo): # pylint: disable=unused-argument\n perm = os.lstat(p).st_mode\n perm |= stat.S_IWRITE\n\n try:\n os.chmod(p, perm)\n except OSError as exc:\n # broken symlink or file is not owned by us\n if exc.errno not in [errno.ENOENT, errno.EPERM]:\n raise\n\n func(p)\n\n\ndef _unlink(path, onerror):\n try:\n os.unlink(path)\n except OSError:\n onerror(os.unlink, path, sys.exc_info())\n\n\ndef remove(path):\n logger.debug(\"Removing '%s'\", path)\n\n try:\n if os.path.isdir(path):\n shutil.rmtree(path, onerror=_chmod)\n else:\n _unlink(path, _chmod)\n except OSError as exc:\n if exc.errno != errno.ENOENT:\n raise\n\n\ndef path_isin(child, parent):\n \"\"\"Check if given `child` path is inside `parent`.\"\"\"\n\n def normalize_path(path):\n return os.path.normpath(path)\n\n parent = os.path.join(normalize_path(parent), \"\")\n child = normalize_path(child)\n return child != parent and child.startswith(parent)\n\n\ndef makedirs(path, exist_ok=False, mode=None):\n if mode is None:\n os.makedirs(path, exist_ok=exist_ok)\n return\n\n # Modified version of os.makedirs() with support for extended mode\n # (e.g. S_ISGID)\n head, tail = os.path.split(path)\n if not tail:\n head, tail = os.path.split(head)\n if head and tail and not os.path.exists(head):\n try:\n makedirs(head, exist_ok=exist_ok, mode=mode)\n except FileExistsError:\n # Defeats race condition when another thread created the path\n pass\n cdir = os.curdir\n if isinstance(tail, bytes):\n cdir = bytes(os.curdir, \"ASCII\")\n if tail == cdir: # xxx/newdir/. 
exists if xxx/newdir exists\n return\n try:\n os.mkdir(path, mode)\n except OSError:\n # Cannot rely on checking for EEXIST, since the operating system\n # could give priority to other errors like EACCES or EROFS\n if not exist_ok or not os.path.isdir(path):\n raise\n\n try:\n os.chmod(path, mode)\n except OSError:\n logger.trace(\"failed to chmod '%o' '%s'\", mode, path, exc_info=True)\n\n\ndef copyfile(src, dest, no_progress_bar=False, name=None):\n \"\"\"Copy file with progress bar\"\"\"\n from dvc.progress import Tqdm\n\n name = name if name else os.path.basename(dest)\n total = os.stat(src).st_size\n\n if os.path.isdir(dest):\n dest = os.path.join(dest, os.path.basename(src))\n\n try:\n System.reflink(src, dest)\n except DvcException:\n with open(src, \"rb\") as fsrc, open(dest, \"wb+\") as fdest:\n with Tqdm.wrapattr(\n fdest,\n \"write\",\n desc=name,\n disable=no_progress_bar,\n total=total,\n bytes=True,\n ) as fdest_wrapped:\n while True:\n buf = fsrc.read(LOCAL_CHUNK_SIZE)\n if not buf:\n break\n fdest_wrapped.write(buf)\n\n\ndef copy_fobj_to_file(fsrc, dest):\n \"\"\"Copy contents of open file object to destination path.\"\"\"\n with open(dest, \"wb+\") as fdest:\n shutil.copyfileobj(fsrc, fdest)\n\n\ndef walk_files(directory):\n for root, _, files in os.walk(directory):\n for f in files:\n yield os.path.join(root, f)\n", "path": "dvc/utils/fs.py"}]} | 3,818 | 369 |
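The patch for the row above normalizes both paths with `os.path.normcase` before comparing them, so `MY_TEST` and `my_test` no longer look like disjoint paths on Windows. A small sketch of that comparison, using made-up paths, is shown here.

```python
import os

def path_isin(child, parent):
    """Containment check in the spirit of the patched helper."""
    def normalize(p):
        # normcase lower-cases and switches separators on Windows, making the
        # comparison case-insensitive there; it is a no-op on POSIX systems.
        return os.path.normcase(os.path.normpath(p))
    parent = os.path.join(normalize(parent), "")
    child = normalize(child)
    return child != parent and child.startswith(parent)

# On Windows this prints True even though the directory case differs.
print(path_isin(r"C:\Users\u\Desktop\MY_TEST\mnist", r"C:\Users\u\Desktop\my_test"))
```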
gh_patches_debug_39677 | rasdani/github-patches | git_diff | blaze__blaze-1385 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue running blaze-server in 0.9.0 release
```
$ python --version
Python 3.4.4 :: Continuum Analytics, Inc.
$ python -c 'import blaze; print(blaze.__version__)'
0.9.0
$ blaze-server ./foo.yaml
Traceback (most recent call last):
File "/Users/ksmith/anaconda/envs/test-blaze-0.9.0/bin/blaze-server", line 6, in <module>
sys.exit(blaze.server.spider._main())
AttributeError: 'function' object has no attribute '_main'
```
The best I can tell, this issue is related to the `import blaze.server.spider` line, and an ambiguity in this `import`. There's both a `blaze/server/spider.py` module, and inside `blaze/server/__init__.py` is a `from .spider import spider`, which imports the spider _function_ in the `blaze.server` namespace.
The strange thing is that this particular logic is unchanged since Blaze 0.8.3, and `blaze-server` worked fine then.
In any case, dealiasing the `spider` name in `blaze.server` is a good idea anyway to rule out this issue once and for all. Fix forthcoming.
</issue>
<code>
[start of blaze/server/spider.py]
1 #!/usr/bin/env python
2
3 from __future__ import absolute_import
4
5 import os
6 import sys
7 import argparse
8
9 import yaml
10
11 from odo import resource
12 from odo.utils import ignoring
13
14 from .server import Server, DEFAULT_PORT
15
16 try:
17 import __builtin__ as builtins
18 except ImportError:
19 import builtins
20
21
22 __all__ = 'spider', 'from_yaml'
23
24
25 def _spider(resource_path, ignore, followlinks, hidden):
26 resources = {}
27 for filename in (os.path.join(resource_path, x)
28 for x in os.listdir(resource_path)):
29 basename = os.path.basename(filename)
30 if (basename.startswith(os.curdir) and not hidden or
31 os.path.islink(filename) and not followlinks):
32 continue
33 if os.path.isdir(filename):
34 new_resources = _spider(filename, ignore=ignore,
35 followlinks=followlinks, hidden=hidden)
36 if new_resources:
37 resources[basename] = new_resources
38 else:
39 with ignoring(*ignore):
40 resources[basename] = resource(filename)
41 return resources
42
43
44 def spider(path, ignore=(ValueError, NotImplementedError), followlinks=True,
45 hidden=False):
46 """Traverse a directory and call ``odo.resource`` on its contentso
47
48 Parameters
49 ----------
50 path : str
51 Path to a directory of resources to load
52 ignore : tuple of Exception, optional
53 Ignore these exceptions when calling resource
54 followlinks : bool, optional
55 Follow symbolic links
56 hidden : bool, optional
57 Load hidden files
58
59 Returns
60 -------
61 dict
62 Possibly nested dictionary of containing basenames mapping to resources
63 """
64 return {
65 os.path.basename(path): _spider(path, ignore=ignore,
66 followlinks=followlinks,
67 hidden=hidden)
68 }
69
70
71 def from_yaml(path, ignore=(ValueError, NotImplementedError), followlinks=True,
72 hidden=False):
73 """Construct a dictionary of resources from a YAML specification.
74
75 Parameters
76 ----------
77 path : str
78 Path to a YAML specification of resources to load
79 ignore : tuple of Exception, optional
80 Ignore these exceptions when calling resource
81 followlinks : bool, optional
82 Follow symbolic links
83 hidden : bool, optional
84 Load hidden files
85
86 Returns
87 -------
88 dict
89 A dictionary mapping top level keys in a YAML file to resources.
90
91 See Also
92 --------
93 spider : Traverse a directory tree for resources
94 """
95 resources = {}
96 for name, info in yaml.load(path.read()).items():
97 if 'source' not in info:
98 raise ValueError('source key not found for data source named %r' %
99 name)
100 source = info['source']
101 if os.path.isdir(source):
102 resources[name] = spider(os.path.expanduser(source),
103 ignore=ignore,
104 followlinks=followlinks,
105 hidden=hidden)
106 else:
107 resources[name] = resource(source, dshape=info.get('dshape'))
108 return resources
109
110
111 def _parse_args():
112 p = argparse.ArgumentParser(
113 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
114 p.add_argument('path', type=argparse.FileType('r'), nargs='?',
115 default=sys.stdin,
116 help='A YAML file specifying the resources to load')
117 p.add_argument('-p', '--port', type=int, default=DEFAULT_PORT,
118 help='Port number')
119 p.add_argument('-H', '--host', type=str, default='127.0.0.1',
120 help='Host name. Use 0.0.0.0 to listen on all public IPs')
121 p.add_argument('-l', '--follow-links', action='store_true',
122 help='Follow links when listing files')
123 p.add_argument('-e', '--ignored-exception', nargs='+',
124 default=['Exception'],
125 help='Exceptions to ignore when calling resource on a file')
126 p.add_argument('-d', '--hidden', action='store_true',
127 help='Call resource on hidden files')
128 p.add_argument('-D', '--debug', action='store_true',
129 help='Start the Flask server in debug mode')
130 return p.parse_args()
131
132
133 def _main():
134 args = _parse_args()
135 ignore = tuple(getattr(builtins, e) for e in args.ignored_exception)
136 resources = from_yaml(args.path,
137 ignore=ignore,
138 followlinks=args.follow_links,
139 hidden=args.hidden)
140 Server(resources).run(host=args.host, port=args.port, debug=args.debug)
141
142
143 if __name__ == '__main__':
144 _main()
145
[end of blaze/server/spider.py]
[start of blaze/server/__init__.py]
1 from __future__ import absolute_import, division, print_function
2
3 from .server import Server, to_tree, from_tree, api
4 from .spider import spider, from_yaml
5 from .client import Client
6 from .serialization import (
7 SerializationFormat,
8 all_formats,
9 json as json_format,
10 pickle as pickle_format,
11 msgpack as msgpack_format,
12 )
13
14
15 __all__ = [
16 'Client',
17 'SerializationFormat',
18 'Server',
19 'spider',
20 'from_yaml',
21 'all_formats',
22 'api',
23 'from_tree',
24 'json_format',
25 'msgpack_format',
26 'pickle_format',
27 'to_tree',
28 ]
29
[end of blaze/server/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/blaze/server/__init__.py b/blaze/server/__init__.py
--- a/blaze/server/__init__.py
+++ b/blaze/server/__init__.py
@@ -1,7 +1,7 @@
from __future__ import absolute_import, division, print_function
from .server import Server, to_tree, from_tree, api
-from .spider import spider, from_yaml
+from .spider import data_spider, from_yaml
from .client import Client
from .serialization import (
SerializationFormat,
@@ -16,7 +16,7 @@
'Client',
'SerializationFormat',
'Server',
- 'spider',
+ 'data_spider',
'from_yaml',
'all_formats',
'api',
diff --git a/blaze/server/spider.py b/blaze/server/spider.py
--- a/blaze/server/spider.py
+++ b/blaze/server/spider.py
@@ -19,7 +19,7 @@
import builtins
-__all__ = 'spider', 'from_yaml'
+__all__ = 'data_spider', 'from_yaml'
def _spider(resource_path, ignore, followlinks, hidden):
@@ -41,9 +41,11 @@
return resources
-def spider(path, ignore=(ValueError, NotImplementedError), followlinks=True,
- hidden=False):
- """Traverse a directory and call ``odo.resource`` on its contentso
+def data_spider(path,
+ ignore=(ValueError, NotImplementedError),
+ followlinks=True,
+ hidden=False):
+ """Traverse a directory and call ``odo.resource`` on its contents.
Parameters
----------
@@ -61,6 +63,8 @@
dict
Possibly nested dictionary of containing basenames mapping to resources
"""
+ # NOTE: this is named `data_spider` rather than just `spider` to
+ # disambiguate this function from the `blaze.server.spider` module.
return {
os.path.basename(path): _spider(path, ignore=ignore,
followlinks=followlinks,
@@ -90,7 +94,7 @@
See Also
--------
- spider : Traverse a directory tree for resources
+ data_spider : Traverse a directory tree for resources
"""
resources = {}
for name, info in yaml.load(path.read()).items():
@@ -99,10 +103,10 @@
name)
source = info['source']
if os.path.isdir(source):
- resources[name] = spider(os.path.expanduser(source),
- ignore=ignore,
- followlinks=followlinks,
- hidden=hidden)
+ resources[name] = data_spider(os.path.expanduser(source),
+ ignore=ignore,
+ followlinks=followlinks,
+ hidden=hidden)
else:
resources[name] = resource(source, dshape=info.get('dshape'))
return resources
| {"golden_diff": "diff --git a/blaze/server/__init__.py b/blaze/server/__init__.py\n--- a/blaze/server/__init__.py\n+++ b/blaze/server/__init__.py\n@@ -1,7 +1,7 @@\n from __future__ import absolute_import, division, print_function\n \n from .server import Server, to_tree, from_tree, api\n-from .spider import spider, from_yaml\n+from .spider import data_spider, from_yaml\n from .client import Client\n from .serialization import (\n SerializationFormat,\n@@ -16,7 +16,7 @@\n 'Client',\n 'SerializationFormat',\n 'Server',\n- 'spider',\n+ 'data_spider',\n 'from_yaml',\n 'all_formats',\n 'api',\ndiff --git a/blaze/server/spider.py b/blaze/server/spider.py\n--- a/blaze/server/spider.py\n+++ b/blaze/server/spider.py\n@@ -19,7 +19,7 @@\n import builtins\n \n \n-__all__ = 'spider', 'from_yaml'\n+__all__ = 'data_spider', 'from_yaml'\n \n \n def _spider(resource_path, ignore, followlinks, hidden):\n@@ -41,9 +41,11 @@\n return resources\n \n \n-def spider(path, ignore=(ValueError, NotImplementedError), followlinks=True,\n- hidden=False):\n- \"\"\"Traverse a directory and call ``odo.resource`` on its contentso\n+def data_spider(path,\n+ ignore=(ValueError, NotImplementedError),\n+ followlinks=True,\n+ hidden=False):\n+ \"\"\"Traverse a directory and call ``odo.resource`` on its contents.\n \n Parameters\n ----------\n@@ -61,6 +63,8 @@\n dict\n Possibly nested dictionary of containing basenames mapping to resources\n \"\"\"\n+ # NOTE: this is named `data_spider` rather than just `spider` to\n+ # disambiguate this function from the `blaze.server.spider` module.\n return {\n os.path.basename(path): _spider(path, ignore=ignore,\n followlinks=followlinks,\n@@ -90,7 +94,7 @@\n \n See Also\n --------\n- spider : Traverse a directory tree for resources\n+ data_spider : Traverse a directory tree for resources\n \"\"\"\n resources = {}\n for name, info in yaml.load(path.read()).items():\n@@ -99,10 +103,10 @@\n name)\n source = info['source']\n if os.path.isdir(source):\n- resources[name] = spider(os.path.expanduser(source),\n- ignore=ignore,\n- followlinks=followlinks,\n- hidden=hidden)\n+ resources[name] = data_spider(os.path.expanduser(source),\n+ ignore=ignore,\n+ followlinks=followlinks,\n+ hidden=hidden)\n else:\n resources[name] = resource(source, dshape=info.get('dshape'))\n return resources\n", "issue": "Issue running blaze-server in 0.9.0 release\n```\n$ python --version\nPython 3.4.4 :: Continuum Analytics, Inc.\n\n$ python -c 'import blaze; print(blaze.__version__)'\n0.9.0\n\n$ blaze-server ./foo.yaml\nTraceback (most recent call last):\n File \"/Users/ksmith/anaconda/envs/test-blaze-0.9.0/bin/blaze-server\", line 6, in <module>\n sys.exit(blaze.server.spider._main())\nAttributeError: 'function' object has no attribute '_main'\n```\n\nThe best I can tell, this issue is related to the `import blaze.server.spider` line, and an ambiguity in this `import`. There's both a `blaze/server/spider.py` module, and inside `blaze/server/__init__.py` is a `from .spider import spider`, which imports the spider _function_ in the `blaze.server` namespace.\n\nThe strange thing is that this particular logic is unchanged since Blaze 0.8.3, and `blaze-server` worked fine then.\n\nIn any case, dealiasing the `spider` name in `blaze.server` is a good idea anyway to rule out this issue once and for all. 
Fix forthcoming.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport argparse\n\nimport yaml\n\nfrom odo import resource\nfrom odo.utils import ignoring\n\nfrom .server import Server, DEFAULT_PORT\n\ntry:\n import __builtin__ as builtins\nexcept ImportError:\n import builtins\n\n\n__all__ = 'spider', 'from_yaml'\n\n\ndef _spider(resource_path, ignore, followlinks, hidden):\n resources = {}\n for filename in (os.path.join(resource_path, x)\n for x in os.listdir(resource_path)):\n basename = os.path.basename(filename)\n if (basename.startswith(os.curdir) and not hidden or\n os.path.islink(filename) and not followlinks):\n continue\n if os.path.isdir(filename):\n new_resources = _spider(filename, ignore=ignore,\n followlinks=followlinks, hidden=hidden)\n if new_resources:\n resources[basename] = new_resources\n else:\n with ignoring(*ignore):\n resources[basename] = resource(filename)\n return resources\n\n\ndef spider(path, ignore=(ValueError, NotImplementedError), followlinks=True,\n hidden=False):\n \"\"\"Traverse a directory and call ``odo.resource`` on its contentso\n\n Parameters\n ----------\n path : str\n Path to a directory of resources to load\n ignore : tuple of Exception, optional\n Ignore these exceptions when calling resource\n followlinks : bool, optional\n Follow symbolic links\n hidden : bool, optional\n Load hidden files\n\n Returns\n -------\n dict\n Possibly nested dictionary of containing basenames mapping to resources\n \"\"\"\n return {\n os.path.basename(path): _spider(path, ignore=ignore,\n followlinks=followlinks,\n hidden=hidden)\n }\n\n\ndef from_yaml(path, ignore=(ValueError, NotImplementedError), followlinks=True,\n hidden=False):\n \"\"\"Construct a dictionary of resources from a YAML specification.\n\n Parameters\n ----------\n path : str\n Path to a YAML specification of resources to load\n ignore : tuple of Exception, optional\n Ignore these exceptions when calling resource\n followlinks : bool, optional\n Follow symbolic links\n hidden : bool, optional\n Load hidden files\n\n Returns\n -------\n dict\n A dictionary mapping top level keys in a YAML file to resources.\n\n See Also\n --------\n spider : Traverse a directory tree for resources\n \"\"\"\n resources = {}\n for name, info in yaml.load(path.read()).items():\n if 'source' not in info:\n raise ValueError('source key not found for data source named %r' %\n name)\n source = info['source']\n if os.path.isdir(source):\n resources[name] = spider(os.path.expanduser(source),\n ignore=ignore,\n followlinks=followlinks,\n hidden=hidden)\n else:\n resources[name] = resource(source, dshape=info.get('dshape'))\n return resources\n\n\ndef _parse_args():\n p = argparse.ArgumentParser(\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n p.add_argument('path', type=argparse.FileType('r'), nargs='?',\n default=sys.stdin,\n help='A YAML file specifying the resources to load')\n p.add_argument('-p', '--port', type=int, default=DEFAULT_PORT,\n help='Port number')\n p.add_argument('-H', '--host', type=str, default='127.0.0.1',\n help='Host name. 
Use 0.0.0.0 to listen on all public IPs')\n p.add_argument('-l', '--follow-links', action='store_true',\n help='Follow links when listing files')\n p.add_argument('-e', '--ignored-exception', nargs='+',\n default=['Exception'],\n help='Exceptions to ignore when calling resource on a file')\n p.add_argument('-d', '--hidden', action='store_true',\n help='Call resource on hidden files')\n p.add_argument('-D', '--debug', action='store_true',\n help='Start the Flask server in debug mode')\n return p.parse_args()\n\n\ndef _main():\n args = _parse_args()\n ignore = tuple(getattr(builtins, e) for e in args.ignored_exception)\n resources = from_yaml(args.path,\n ignore=ignore,\n followlinks=args.follow_links,\n hidden=args.hidden)\n Server(resources).run(host=args.host, port=args.port, debug=args.debug)\n\n\nif __name__ == '__main__':\n _main()\n", "path": "blaze/server/spider.py"}, {"content": "from __future__ import absolute_import, division, print_function\n\nfrom .server import Server, to_tree, from_tree, api\nfrom .spider import spider, from_yaml\nfrom .client import Client\nfrom .serialization import (\n SerializationFormat,\n all_formats,\n json as json_format,\n pickle as pickle_format,\n msgpack as msgpack_format,\n)\n\n\n__all__ = [\n 'Client',\n 'SerializationFormat',\n 'Server',\n 'spider',\n 'from_yaml',\n 'all_formats',\n 'api',\n 'from_tree',\n 'json_format',\n 'msgpack_format',\n 'pickle_format',\n 'to_tree',\n]\n", "path": "blaze/server/__init__.py"}]} | 2,328 | 651 |
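The row above hinges on a name collision: `blaze/server/__init__.py` runs `from .spider import spider`, which rebinds the package attribute `spider` from the submodule to the function, so the console script's `blaze.server.spider._main()` reaches a function instead of a module. A contrived, self-contained reproduction of that shadowing follows; the module names are stand-ins, not real blaze imports. Renaming the re-exported function, as the patch does with `data_spider`, removes the collision.

```python
import sys
import types

# Stand-ins for blaze/server/spider.py and blaze/server/__init__.py.
spider_mod = types.ModuleType("server.spider")
exec("def spider(path): return {}\ndef _main(): print('serving')", spider_mod.__dict__)

server_pkg = types.ModuleType("server")
server_pkg.__path__ = []                 # mark it as a package
server_pkg.spider = spider_mod           # what importing the submodule sets up
server_pkg.spider = spider_mod.spider    # what `from .spider import spider` then does

sys.modules["server"] = server_pkg
sys.modules["server.spider"] = spider_mod

import server.spider                     # what the blaze-server entry point effectively does
server.spider._main()                    # AttributeError: 'function' object has no attribute '_main'
```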
gh_patches_debug_41691 | rasdani/github-patches | git_diff | TileDB-Inc__TileDB-Py-44 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 2.7 support
</issue>
<code>
[start of doc/source/gensidebar.py]
1 #
2 # This file generates the sidebar/toctree for all TileDB projects and should
3 # be copied to each project when it is updated.
4 #
5 # This file is originally from the RobotPy documentation project https://github.com/robotpy/robotpy-docs, licensed under
6 #
7
8 import os
9
10 def write_if_changed(fname, contents):
11
12 try:
13 with open(fname, 'r') as fp:
14 old_contents = fp.read()
15 except:
16 old_contents = ''
17
18 if old_contents != contents:
19 with open(fname, 'w') as fp:
20 fp.write(contents)
21
22 def generate_sidebar(conf, conf_api):
23
24 version = conf['rtd_version']
25
26 lines = [
27 '', '.. DO NOT MODIFY! THIS PAGE IS AUTOGENERATED!', ''
28 ]
29
30 url_base = 'https://docs.tiledb.io'
31 lang = 'en'
32
33 def toctree(name):
34 lines.extend(['.. toctree::',
35 ' :caption: %s' % name,
36 ' :maxdepth: 1',
37 ''])
38
39 def endl():
40 lines.append('')
41
42 def write(desc, link):
43 if conf_api == 'tiledb':
44 args = desc, link
45 else:
46 args = desc, '%s/%s/%s/%s.html' % (url_base, lang, version, link)
47
48 lines.append(' %s <%s>' % args)
49
50 def write_api(project, desc, rst_page):
51 # From non-root project to root project link
52 if project == 'tiledb' and conf_api != 'tiledb':
53 args = desc, url_base, lang, version, rst_page
54 lines.append(' %s API <%s/%s/%s/%s.html>' % args)
55 # From anything to non-root project link
56 elif project != conf_api:
57 args = desc, url_base, project, lang, version, rst_page
58 lines.append(' %s API <%s/projects/%s/%s/%s/%s.html>' % args)
59 # Local project link
60 else:
61 args = desc, rst_page
62 lines.append(' %s API <%s>' % args)
63
64 #
65 # Specify the sidebar contents here
66 #
67
68 toctree('Getting Started')
69 write('Introduction', 'introduction')
70 write('Installation', 'installation')
71 write('Usage', 'usage')
72 write('Working with S3', 's3')
73 endl()
74
75 toctree('API Reference')
76 write_api('tiledb', 'C', 'c-api')
77 write_api('tiledb', 'C++', 'c++-api')
78 write_api('tiledb-py', 'Python', 'python-api')
79 endl()
80
81 toctree('TileDB 101')
82 write('Data Model', 'tiledb101/data-model')
83 write('Basic Concepts', 'tiledb101/basic-concepts')
84 write('System Architecture', 'tiledb101/system-architecture')
85 write('Physical Organization', 'tiledb101/physical-organization')
86 write('Writing', 'tiledb101/writing/index')
87 write('Updating', 'tiledb101/updating')
88 write('Consolidation', 'tiledb101/consolidation')
89 write('Reading', 'tiledb101/reading/index')
90 write('Compression', 'tiledb101/compression')
91 write('Asynchronous I/O', 'tiledb101/asynchronous-io')
92 write('Key-Value Store', 'tiledb101/key-value-store')
93 write('Object Management', 'tiledb101/object-management')
94 write('Storage Backends', 'tiledb101/storage-backends')
95 write('Virtual Filesystem', 'tiledb101/virtual-filesystem')
96 write('Language Bindings', 'tiledb101/language-bindings')
97 write('Concurrency', 'tiledb101/concurrency')
98 write('Consistency', 'tiledb101/consistency')
99 endl()
100
101 write_if_changed('_sidebar.rst.inc', '\n'.join(lines))
102
[end of doc/source/gensidebar.py]
[start of setup.py]
1 from __future__ import absolute_import, print_function
2
3 import os
4 from setuptools import setup, Extension, find_packages
5 from pkg_resources import resource_filename
6
7 import sys
8 from sys import version_info as ver
9
10 # Check if Python version is supported
11 if any([ver < (3, 4)]):
12 raise Exception("Unsupported Python version %d.%d. Requires Python >= 3.4")
13
14
15 class LazyCommandClass(dict):
16 """
17 Lazy command class that defers operations requiring Cython and numpy until
18 they've actually been downloaded and installed by setup_requires.
19 """
20 def __contains__(self, key):
21 return (
22 key == 'build_ext'
23 or super(LazyCommandClass, self).__contains__(key)
24 )
25
26 def __setitem__(self, key, value):
27 if key == 'build_ext':
28 raise AssertionError("build_ext overridden!")
29 super(LazyCommandClass, self).__setitem__(key, value)
30
31 def __getitem__(self, key):
32 if key != 'build_ext':
33 return super(LazyCommandClass, self).__getitem__(key)
34
35 from Cython.Distutils import build_ext as cython_build_ext
36
37 class build_ext(cython_build_ext):
38 """
39 Custom build_ext command that lazily adds numpy's include_dir to
40 extensions.
41 """
42 def build_extensions(self):
43 """
44 Lazily append numpy's include directory to Extension includes.
45
46 This is done here rather than at module scope because setup.py
47 may be run before numpy has been installed, in which case
48 importing numpy and calling `numpy.get_include()` will fail.
49 """
50 numpy_incl = resource_filename('numpy', 'core/include')
51 for ext in self.extensions:
52 ext.include_dirs.append(numpy_incl)
53
54 # This explicitly calls the superclass method rather than the
55 # usual super() invocation because distutils' build_class, of
56 # which Cython's build_ext is a subclass, is an old-style class
57 # in Python 2, which doesn't support `super`.
58 cython_build_ext.build_extensions(self)
59 return build_ext
60
61
62 tests_require = []
63 if ver < (3,):
64 tests_require.extend(["unittest2", "mock"])
65
66 # Globals variables
67 CXXFLAGS = os.environ.get("CXXFLAGS", "-std=c++11").split()
68 LFLAGS = os.environ.get("LFLAGS", "").split()
69
70 # Allow setting (lib) TileDB directory if it is installed on the system
71 TILEDB_DIR = os.environ.get("TILEDB_DIR", "")
72
73 # Sources & libraries
74 inc_dirs = []
75 lib_dirs = []
76 libs = ["tiledb"]
77 def_macros = []
78 sources = ["tiledb/libtiledb.pyx"]
79 optional_libs = []
80
81 # Pass command line flags to setup.py script
82 # handle --tiledb=[PATH] --lflags=[FLAGS] --cxxflags=[FLAGS]
83 args = sys.argv[:]
84 for arg in args:
85 if arg.find('--tiledb=') == 0:
86 TILEDB_DIR = os.path.expanduser(arg.split('=')[1])
87 sys.argv.remove(arg)
88 if arg.find('--lflags=') == 0:
89 LFLAGS = arg.split('=')[1].split()
90 sys.argv.remove(arg)
91 if arg.find('--cxxflags=') == 0:
92 CXXFLAGS = arg.split('=')[1].split()
93 sys.argv.remove(arg)
94
95 if TILEDB_DIR != '':
96 lib_dirs += [os.path.join(TILEDB_DIR, 'lib')]
97 inc_dirs += [os.path.join(TILEDB_DIR, 'include')]
98
99 setup(
100 name='tiledb',
101 description="Pythonic interface to the TileDB array storage manager",
102 author='TileDB, Inc.',
103 author_email='[email protected]',
104 maintainer='TileDB, Inc.',
105 maintainer_email='[email protected]',
106 url='https://github.com/TileDB-Inc/TileDB-Py',
107 license='MIT',
108 platforms=['any'],
109 use_scm_version={
110 'version_scheme': 'guess-next-dev',
111 'local_scheme': 'dirty-tag',
112 'write_to': 'tiledb/version.py'
113 },
114 ext_modules=[
115 Extension(
116 "tiledb.libtiledb",
117 include_dirs=inc_dirs,
118 define_macros=def_macros,
119 sources=sources,
120 library_dirs=lib_dirs,
121 libraries=libs,
122 extra_link_args=LFLAGS,
123 extra_compile_args=CXXFLAGS,
124 language="c++"
125 )
126 ],
127 setup_requires=[
128 'cython>=0.22',
129 'numpy>=1.7',
130 'setuptools>18.0',
131 'setuptools-scm>1.5.4'
132 ],
133 install_requires=[
134 'numpy>=1.7',
135 ],
136 tests_require=tests_require,
137 packages=find_packages(),
138 cmdclass=LazyCommandClass(),
139 classifiers=[
140 'Intended Audience :: Developers',
141 'Intended Audience :: Information Technology',
142 'Intended Audience :: Science/Research',
143 'License :: OSI Approved :: MIT License',
144 'Programming Language :: Python',
145 'Topic :: Software Development :: Libraries :: Python Modules',
146 'Operating System :: Unix',
147 'Programming Language :: Python :: 3',
148 'Programming Language :: Python :: 3.4',
149 'Programming Language :: Python :: 3.5',
150 'Programming Language :: Python :: 3.6',
151 ],
152
153 )
154
[end of setup.py]
[start of tiledb/__init__.py]
1 from __future__ import absolute_import
2
3 from .libtiledb import (Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, DenseArray, SparseArray,
4 TileDBError, VFS, array_consolidate, group_create, object_type,
5 ls, walk, remove, move)
6
7 __all__ = [Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, SparseArray, TileDBError, VFS,
8 array_consolidate, group_create, object_type, ls, walk, remove, move]
9
[end of tiledb/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/source/gensidebar.py b/doc/source/gensidebar.py
--- a/doc/source/gensidebar.py
+++ b/doc/source/gensidebar.py
@@ -64,12 +64,11 @@
#
# Specify the sidebar contents here
#
-
+
toctree('Getting Started')
write('Introduction', 'introduction')
write('Installation', 'installation')
write('Usage', 'usage')
- write('Working with S3', 's3')
endl()
toctree('API Reference')
@@ -93,6 +92,7 @@
write('Object Management', 'tiledb101/object-management')
write('Storage Backends', 'tiledb101/storage-backends')
write('Virtual Filesystem', 'tiledb101/virtual-filesystem')
+ write('Working with S3', 'tiledb101/s3')
write('Language Bindings', 'tiledb101/language-bindings')
write('Concurrency', 'tiledb101/concurrency')
write('Consistency', 'tiledb101/consistency')
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,11 +7,6 @@
import sys
from sys import version_info as ver
-# Check if Python version is supported
-if any([ver < (3, 4)]):
- raise Exception("Unsupported Python version %d.%d. Requires Python >= 3.4")
-
-
class LazyCommandClass(dict):
"""
Lazy command class that defers operations requiring Cython and numpy until
@@ -68,7 +63,7 @@
LFLAGS = os.environ.get("LFLAGS", "").split()
# Allow setting (lib) TileDB directory if it is installed on the system
-TILEDB_DIR = os.environ.get("TILEDB_DIR", "")
+TILEDB_PATH = os.environ.get("TILEDB_PATH", "")
# Sources & libraries
inc_dirs = []
@@ -83,7 +78,7 @@
args = sys.argv[:]
for arg in args:
if arg.find('--tiledb=') == 0:
- TILEDB_DIR = os.path.expanduser(arg.split('=')[1])
+ TILEDB_PATH = os.path.expanduser(arg.split('=')[1])
sys.argv.remove(arg)
if arg.find('--lflags=') == 0:
LFLAGS = arg.split('=')[1].split()
@@ -92,9 +87,9 @@
CXXFLAGS = arg.split('=')[1].split()
sys.argv.remove(arg)
-if TILEDB_DIR != '':
- lib_dirs += [os.path.join(TILEDB_DIR, 'lib')]
- inc_dirs += [os.path.join(TILEDB_DIR, 'include')]
+if TILEDB_PATH != '':
+ lib_dirs += [os.path.join(TILEDB_PATH, 'lib')]
+ inc_dirs += [os.path.join(TILEDB_PATH, 'include')]
setup(
name='tiledb',
diff --git a/tiledb/__init__.py b/tiledb/__init__.py
--- a/tiledb/__init__.py
+++ b/tiledb/__init__.py
@@ -1,8 +1,26 @@
from __future__ import absolute_import
-from .libtiledb import (Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, DenseArray, SparseArray,
- TileDBError, VFS, array_consolidate, group_create, object_type,
- ls, walk, remove, move)
-
-__all__ = [Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, SparseArray, TileDBError, VFS,
- array_consolidate, group_create, object_type, ls, walk, remove, move]
+from .libtiledb import (
+ Ctx,
+ Config,
+ Dim,
+ Domain,
+ Attr,
+ KVSchema,
+ KV,
+ ArraySchema,
+ DenseArray,
+ SparseArray,
+ TileDBError,
+ VFS,
+ FileIO,
+ consolidate,
+ group_create,
+ object_type,
+ ls,
+ walk,
+ remove
+)
+#
+# __all__ = [Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, SparseArray, TileDBError, VFS,
+# array_consolidate, group_create, object_type, ls, walk, remove]
| {"golden_diff": "diff --git a/doc/source/gensidebar.py b/doc/source/gensidebar.py\n--- a/doc/source/gensidebar.py\n+++ b/doc/source/gensidebar.py\n@@ -64,12 +64,11 @@\n #\n # Specify the sidebar contents here\n #\n- \n+\n toctree('Getting Started')\n write('Introduction', 'introduction')\n write('Installation', 'installation')\n write('Usage', 'usage')\n- write('Working with S3', 's3')\n endl()\n \n toctree('API Reference')\n@@ -93,6 +92,7 @@\n write('Object Management', 'tiledb101/object-management')\n write('Storage Backends', 'tiledb101/storage-backends')\n write('Virtual Filesystem', 'tiledb101/virtual-filesystem')\n+ write('Working with S3', 'tiledb101/s3')\n write('Language Bindings', 'tiledb101/language-bindings')\n write('Concurrency', 'tiledb101/concurrency')\n write('Consistency', 'tiledb101/consistency')\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,11 +7,6 @@\n import sys\n from sys import version_info as ver\n \n-# Check if Python version is supported\n-if any([ver < (3, 4)]):\n- raise Exception(\"Unsupported Python version %d.%d. Requires Python >= 3.4\")\n-\n-\n class LazyCommandClass(dict):\n \"\"\"\n Lazy command class that defers operations requiring Cython and numpy until\n@@ -68,7 +63,7 @@\n LFLAGS = os.environ.get(\"LFLAGS\", \"\").split()\n \n # Allow setting (lib) TileDB directory if it is installed on the system\n-TILEDB_DIR = os.environ.get(\"TILEDB_DIR\", \"\")\n+TILEDB_PATH = os.environ.get(\"TILEDB_PATH\", \"\")\n \n # Sources & libraries\n inc_dirs = []\n@@ -83,7 +78,7 @@\n args = sys.argv[:]\n for arg in args:\n if arg.find('--tiledb=') == 0:\n- TILEDB_DIR = os.path.expanduser(arg.split('=')[1])\n+ TILEDB_PATH = os.path.expanduser(arg.split('=')[1])\n sys.argv.remove(arg)\n if arg.find('--lflags=') == 0:\n LFLAGS = arg.split('=')[1].split()\n@@ -92,9 +87,9 @@\n CXXFLAGS = arg.split('=')[1].split()\n sys.argv.remove(arg)\n \n-if TILEDB_DIR != '':\n- lib_dirs += [os.path.join(TILEDB_DIR, 'lib')]\n- inc_dirs += [os.path.join(TILEDB_DIR, 'include')]\n+if TILEDB_PATH != '':\n+ lib_dirs += [os.path.join(TILEDB_PATH, 'lib')]\n+ inc_dirs += [os.path.join(TILEDB_PATH, 'include')]\n \n setup(\n name='tiledb',\ndiff --git a/tiledb/__init__.py b/tiledb/__init__.py\n--- a/tiledb/__init__.py\n+++ b/tiledb/__init__.py\n@@ -1,8 +1,26 @@\n from __future__ import absolute_import\n \n-from .libtiledb import (Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, DenseArray, SparseArray,\n- TileDBError, VFS, array_consolidate, group_create, object_type,\n- ls, walk, remove, move)\n-\n-__all__ = [Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, SparseArray, TileDBError, VFS,\n- array_consolidate, group_create, object_type, ls, walk, remove, move]\n+from .libtiledb import (\n+ Ctx,\n+ Config,\n+ Dim,\n+ Domain,\n+ Attr,\n+ KVSchema,\n+ KV,\n+ ArraySchema,\n+ DenseArray,\n+ SparseArray,\n+ TileDBError,\n+ VFS,\n+ FileIO,\n+ consolidate,\n+ group_create,\n+ object_type,\n+ ls,\n+ walk,\n+ remove\n+)\n+#\n+# __all__ = [Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, SparseArray, TileDBError, VFS,\n+# array_consolidate, group_create, object_type, ls, walk, remove]\n", "issue": "Python 2.7 support\n\n", "before_files": [{"content": "#\n# This file generates the sidebar/toctree for all TileDB projects and should\n# be copied to each project when it is updated.\n#\n# This file is originally from the RobotPy documentation project https://github.com/robotpy/robotpy-docs, licensed under\n#\n\nimport os\n\ndef write_if_changed(fname, contents):\n\n try:\n with 
open(fname, 'r') as fp:\n old_contents = fp.read()\n except:\n old_contents = ''\n\n if old_contents != contents:\n with open(fname, 'w') as fp:\n fp.write(contents)\n\ndef generate_sidebar(conf, conf_api):\n\n version = conf['rtd_version']\n\n lines = [\n '', '.. DO NOT MODIFY! THIS PAGE IS AUTOGENERATED!', ''\n ]\n\n url_base = 'https://docs.tiledb.io'\n lang = 'en'\n\n def toctree(name):\n lines.extend(['.. toctree::',\n ' :caption: %s' % name,\n ' :maxdepth: 1',\n ''])\n\n def endl():\n lines.append('')\n\n def write(desc, link):\n if conf_api == 'tiledb':\n args = desc, link\n else:\n args = desc, '%s/%s/%s/%s.html' % (url_base, lang, version, link)\n\n lines.append(' %s <%s>' % args)\n\n def write_api(project, desc, rst_page):\n # From non-root project to root project link\n if project == 'tiledb' and conf_api != 'tiledb':\n args = desc, url_base, lang, version, rst_page\n lines.append(' %s API <%s/%s/%s/%s.html>' % args)\n # From anything to non-root project link\n elif project != conf_api:\n args = desc, url_base, project, lang, version, rst_page\n lines.append(' %s API <%s/projects/%s/%s/%s/%s.html>' % args)\n # Local project link\n else:\n args = desc, rst_page\n lines.append(' %s API <%s>' % args)\n\n #\n # Specify the sidebar contents here\n #\n \n toctree('Getting Started')\n write('Introduction', 'introduction')\n write('Installation', 'installation')\n write('Usage', 'usage')\n write('Working with S3', 's3')\n endl()\n\n toctree('API Reference')\n write_api('tiledb', 'C', 'c-api')\n write_api('tiledb', 'C++', 'c++-api')\n write_api('tiledb-py', 'Python', 'python-api')\n endl()\n\n toctree('TileDB 101')\n write('Data Model', 'tiledb101/data-model')\n write('Basic Concepts', 'tiledb101/basic-concepts')\n write('System Architecture', 'tiledb101/system-architecture')\n write('Physical Organization', 'tiledb101/physical-organization')\n write('Writing', 'tiledb101/writing/index')\n write('Updating', 'tiledb101/updating')\n write('Consolidation', 'tiledb101/consolidation')\n write('Reading', 'tiledb101/reading/index')\n write('Compression', 'tiledb101/compression')\n write('Asynchronous I/O', 'tiledb101/asynchronous-io')\n write('Key-Value Store', 'tiledb101/key-value-store')\n write('Object Management', 'tiledb101/object-management')\n write('Storage Backends', 'tiledb101/storage-backends')\n write('Virtual Filesystem', 'tiledb101/virtual-filesystem')\n write('Language Bindings', 'tiledb101/language-bindings')\n write('Concurrency', 'tiledb101/concurrency')\n write('Consistency', 'tiledb101/consistency')\n endl()\n\n write_if_changed('_sidebar.rst.inc', '\\n'.join(lines))\n", "path": "doc/source/gensidebar.py"}, {"content": "from __future__ import absolute_import, print_function\n\nimport os\nfrom setuptools import setup, Extension, find_packages\nfrom pkg_resources import resource_filename\n\nimport sys\nfrom sys import version_info as ver\n\n# Check if Python version is supported\nif any([ver < (3, 4)]):\n raise Exception(\"Unsupported Python version %d.%d. 
Requires Python >= 3.4\")\n\n\nclass LazyCommandClass(dict):\n \"\"\"\n Lazy command class that defers operations requiring Cython and numpy until\n they've actually been downloaded and installed by setup_requires.\n \"\"\"\n def __contains__(self, key):\n return (\n key == 'build_ext'\n or super(LazyCommandClass, self).__contains__(key)\n )\n\n def __setitem__(self, key, value):\n if key == 'build_ext':\n raise AssertionError(\"build_ext overridden!\")\n super(LazyCommandClass, self).__setitem__(key, value)\n\n def __getitem__(self, key):\n if key != 'build_ext':\n return super(LazyCommandClass, self).__getitem__(key)\n\n from Cython.Distutils import build_ext as cython_build_ext\n\n class build_ext(cython_build_ext):\n \"\"\"\n Custom build_ext command that lazily adds numpy's include_dir to\n extensions.\n \"\"\"\n def build_extensions(self):\n \"\"\"\n Lazily append numpy's include directory to Extension includes.\n\n This is done here rather than at module scope because setup.py\n may be run before numpy has been installed, in which case\n importing numpy and calling `numpy.get_include()` will fail.\n \"\"\"\n numpy_incl = resource_filename('numpy', 'core/include')\n for ext in self.extensions:\n ext.include_dirs.append(numpy_incl)\n\n # This explicitly calls the superclass method rather than the\n # usual super() invocation because distutils' build_class, of\n # which Cython's build_ext is a subclass, is an old-style class\n # in Python 2, which doesn't support `super`.\n cython_build_ext.build_extensions(self)\n return build_ext\n\n\ntests_require = []\nif ver < (3,):\n tests_require.extend([\"unittest2\", \"mock\"])\n\n# Globals variables\nCXXFLAGS = os.environ.get(\"CXXFLAGS\", \"-std=c++11\").split()\nLFLAGS = os.environ.get(\"LFLAGS\", \"\").split()\n\n# Allow setting (lib) TileDB directory if it is installed on the system\nTILEDB_DIR = os.environ.get(\"TILEDB_DIR\", \"\")\n\n# Sources & libraries\ninc_dirs = []\nlib_dirs = []\nlibs = [\"tiledb\"]\ndef_macros = []\nsources = [\"tiledb/libtiledb.pyx\"]\noptional_libs = []\n\n# Pass command line flags to setup.py script\n# handle --tiledb=[PATH] --lflags=[FLAGS] --cxxflags=[FLAGS]\nargs = sys.argv[:]\nfor arg in args:\n if arg.find('--tiledb=') == 0:\n TILEDB_DIR = os.path.expanduser(arg.split('=')[1])\n sys.argv.remove(arg)\n if arg.find('--lflags=') == 0:\n LFLAGS = arg.split('=')[1].split()\n sys.argv.remove(arg)\n if arg.find('--cxxflags=') == 0:\n CXXFLAGS = arg.split('=')[1].split()\n sys.argv.remove(arg)\n\nif TILEDB_DIR != '':\n lib_dirs += [os.path.join(TILEDB_DIR, 'lib')]\n inc_dirs += [os.path.join(TILEDB_DIR, 'include')]\n\nsetup(\n name='tiledb',\n description=\"Pythonic interface to the TileDB array storage manager\",\n author='TileDB, Inc.',\n author_email='[email protected]',\n maintainer='TileDB, Inc.',\n maintainer_email='[email protected]',\n url='https://github.com/TileDB-Inc/TileDB-Py',\n license='MIT',\n platforms=['any'],\n use_scm_version={\n 'version_scheme': 'guess-next-dev',\n 'local_scheme': 'dirty-tag',\n 'write_to': 'tiledb/version.py'\n },\n ext_modules=[\n Extension(\n \"tiledb.libtiledb\",\n include_dirs=inc_dirs,\n define_macros=def_macros,\n sources=sources,\n library_dirs=lib_dirs,\n libraries=libs,\n extra_link_args=LFLAGS,\n extra_compile_args=CXXFLAGS,\n language=\"c++\"\n )\n ],\n setup_requires=[\n 'cython>=0.22',\n 'numpy>=1.7',\n 'setuptools>18.0',\n 'setuptools-scm>1.5.4'\n ],\n install_requires=[\n 'numpy>=1.7',\n ],\n tests_require=tests_require,\n packages=find_packages(),\n 
cmdclass=LazyCommandClass(),\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Operating System :: Unix',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n\n)\n", "path": "setup.py"}, {"content": "from __future__ import absolute_import\n\nfrom .libtiledb import (Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, DenseArray, SparseArray,\n TileDBError, VFS, array_consolidate, group_create, object_type,\n ls, walk, remove, move)\n\n__all__ = [Ctx, Config, Dim, Domain, Attr, KV, ArraySchema, SparseArray, TileDBError, VFS,\n array_consolidate, group_create, object_type, ls, walk, remove, move]\n", "path": "tiledb/__init__.py"}]} | 3,307 | 983 |
gh_patches_debug_2698 | rasdani/github-patches | git_diff | learningequality__kolibri-4343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable ePUB plugin to run by default
### Observed behavior
The ePUB plugin is not enabled by default, which prevents importing & viewing ePUB files until the command `kolibri plugin kolibri.plugins.document_epub_render enable` is run.
### User-facing consequences
Inability to view and import ePUB files.
### Context
dev environment, tried on `develop` and `0.11.a7` branches
</issue>
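A minimal sketch of the mechanism involved, assuming a `config` dict with an `INSTALLED_APPS` list and a `save()` helper shaped like the ones in `kolibri/utils/conf.py` below; once a plugin such as `kolibri.plugins.document_epub_render` appears in `DEFAULT_PLUGINS`, it is picked up automatically on the next run without the manual `kolibri plugin ... enable` step:

```python
# Sketch: how a plugin added to DEFAULT_PLUGINS becomes active on the next run.
# The config dict and save() are stand-ins for the real ones in kolibri/utils/conf.py.
DEFAULT_PLUGINS = [
    "kolibri.plugins.learn",
    "kolibri.plugins.document_epub_render",  # new default entry (assumption for illustration)
]

config = {"INSTALLED_APPS": ["kolibri.plugins.learn"]}  # as loaded from kolibri_settings.json


def save():
    # The real helper writes config back to kolibri_settings.json.
    print("persisting:", config["INSTALLED_APPS"])


def enable_default_plugins():
    changed = False
    for module_path in DEFAULT_PLUGINS:
        if module_path not in config["INSTALLED_APPS"]:
            config["INSTALLED_APPS"].append(module_path)
            changed = True
    if changed:
        save()


enable_default_plugins()
# INSTALLED_APPS now contains the ePUB renderer without any manual enable command.
```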
<code>
[start of kolibri/utils/conf.py]
1 """
2 Kolibri configuration data
3 ==========================
4
5 .. warning::
6 Do not load any django.conf.settings stuff here. This configuration data
7 precedes loading of settings, it is not part of the settings stack.
8
9 TODO: We need to figure out our conf API. Do we store in ini/json/yaml?
10
11 * How do we retrieve config data?
12 * When should configuration files be loaded and written?
13
14 This module should be easier to document, for instance by having VARIABLES
15 instead of a dict.
16
17 """
18 from __future__ import absolute_import
19 from __future__ import print_function
20 from __future__ import unicode_literals
21
22 import json
23 import logging
24 import os
25
26 from .compat import module_exists
27 from .options import read_options_file
28
29 logger = logging.getLogger(__name__)
30
31 # use default OS encoding
32 with open(os.path.join(os.path.dirname(__file__), 'KOLIBRI_CORE_JS_NAME')) as f:
33 KOLIBRI_CORE_JS_NAME = f.read().strip()
34
35 #: Absolute path of the main user data directory.
36 #: Will be created automatically if it doesn't exist.
37 KOLIBRI_HOME = os.path.abspath(os.path.expanduser(os.environ["KOLIBRI_HOME"]))
38
39 # Creating KOLIBRI_HOME atm. has to happen here as for instance utils.cli is not
40 # called through py.test. This file is the first basic entry point of
41 # Kolibri, although utils.cli may or may not precede it.
42 if not os.path.exists(KOLIBRI_HOME):
43 parent = os.path.dirname(KOLIBRI_HOME)
44 if not os.path.exists(parent):
45 raise RuntimeError("The parent of your KOLIBRI_HOME does not exist: {}".format(parent))
46 os.mkdir(KOLIBRI_HOME)
47
48 #: Set defaults before updating the dict
49 config = {}
50
51 try:
52 # The default list for this is populated from build_tools/default_plugins.txt
53 # in the root of the Kolibri repository. The default list is identical to the list below,
54 # except that the style_guide plugin is not enabled in production builds.
55 # Caveat: this list may have been changed at build time to specify a different list of plugins.
56 from .build_config.default_plugins import plugins
57 DEFAULT_PLUGINS = plugins
58 except ImportError:
59 DEFAULT_PLUGINS = [
60 "kolibri.plugins.facility_management",
61 "kolibri.plugins.device_management",
62 "kolibri.plugins.learn",
63 "kolibri.plugins.document_pdf_render",
64 "kolibri.plugins.html5_app_renderer",
65 "kolibri.plugins.media_player",
66 "kolibri.plugins.setup_wizard",
67 "kolibri.plugins.coach",
68 "kolibri.plugins.user",
69 "kolibri_exercise_perseus_plugin",
70 "kolibri.plugins.style_guide",
71 ]
72
73 #: Everything in this list is added to django.conf.settings.INSTALLED_APPS
74 config['INSTALLED_APPS'] = DEFAULT_PLUGINS
75
76 #: Well-known plugin names that are automatically searched for and enabled on
77 #: first-run.
78 config['AUTO_SEARCH_PLUGINS'] = []
79
80 #: If a config file does not exist, we assume it's the first run
81 config['FIRST_RUN'] = True
82
83 conf_file = os.path.join(KOLIBRI_HOME, "kolibri_settings.json")
84
85
86 def update(new_values):
87 """
88 Updates current configuration with ``new_values``. Does not save to file.
89 """
90 config.update(new_values)
91
92
93 def save(first_run=False):
94 """Saves the current state of the configuration"""
95 config['FIRST_RUN'] = first_run
96 # use default OS encoding
97 with open(conf_file, 'w') as kolibri_conf_file:
98 json.dump(config, kolibri_conf_file, indent=2, sort_keys=True)
99
100
101 if not os.path.isfile(conf_file):
102 logger.info("Initialize kolibri_settings.json..")
103 save(True)
104 else:
105 # Open up the config file and overwrite defaults
106 # use default OS encoding
107 with open(conf_file, 'r') as kolibri_conf_file:
108 config.update(json.load(kolibri_conf_file))
109
110
111 def autoremove_unavailable_plugins():
112 """
113 Sanitize INSTALLED_APPS - something that should be done separately for all
114 build in plugins, but we should not auto-remove plugins that are actually
115 configured by the user or some other kind of hard dependency that should
116 make execution stop if not loadable.
117 """
118 global config
119 changed = False
120 # Iterate over a copy of the list so that it is not modified during the loop
121 for module_path in config['INSTALLED_APPS'][:]:
122 if not module_exists(module_path):
123 config['INSTALLED_APPS'].remove(module_path)
124 logger.error(
125 (
126 "Plugin {mod} not found and disabled. To re-enable it, run:\n"
127 " $ kolibri plugin {mod} enable"
128 ).format(mod=module_path)
129 )
130 changed = True
131 if changed:
132 save()
133
134
135 def enable_default_plugins():
136 """
137 Enable new plugins that have been added between versions
138 This will have the undesired side effect of reactivating
139 default plugins that have been explicitly disabled by a user.
140 However, until we add disabled plugins to a blacklist, this is
141 unavoidable.
142 """
143 global config
144 changed = False
145 for module_path in DEFAULT_PLUGINS:
146 if module_path not in config['INSTALLED_APPS']:
147 config['INSTALLED_APPS'].append(module_path)
148 logger.warning(
149 (
150 "Default plugin {mod} not found in configuration. To re-disable it, run:\n"
151 " $ kolibri plugin {mod} disable"
152 ).format(mod=module_path)
153 )
154 changed = True
155
156 if changed:
157 save()
158
159
160 # read the config file options in here so they can be accessed from a standard location
161 OPTIONS = read_options_file(KOLIBRI_HOME)
162
[end of kolibri/utils/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/utils/conf.py b/kolibri/utils/conf.py
--- a/kolibri/utils/conf.py
+++ b/kolibri/utils/conf.py
@@ -68,6 +68,7 @@
"kolibri.plugins.user",
"kolibri_exercise_perseus_plugin",
"kolibri.plugins.style_guide",
+ "kolibri.plugins.document_epub_render",
]
#: Everything in this list is added to django.conf.settings.INSTALLED_APPS
| {"golden_diff": "diff --git a/kolibri/utils/conf.py b/kolibri/utils/conf.py\n--- a/kolibri/utils/conf.py\n+++ b/kolibri/utils/conf.py\n@@ -68,6 +68,7 @@\n \"kolibri.plugins.user\",\n \"kolibri_exercise_perseus_plugin\",\n \"kolibri.plugins.style_guide\",\n+ \"kolibri.plugins.document_epub_render\",\n ]\n \n #: Everything in this list is added to django.conf.settings.INSTALLED_APPS\n", "issue": "Enable ePUB plugin to run by default\n\r\n### Observed behavior\r\n\r\nePUB plugin is not enabled by default, and it prevents from importing & viewing ePUB files, until the command `kolibri plugin kolibri.plugins.document_epub_render enable` is run.\r\n\r\n### User-facing consequences\r\nInability to view and import ePUB files.\r\n\r\n\r\n### Context\r\ndev environment, tried on `develop` and `0.11.a7` branches\r\n\n", "before_files": [{"content": "\"\"\"\nKolibri configuration data\n==========================\n\n.. warning::\n Do not load any django.conf.settings stuff here. This configuration data\n precedes loading of settings, it is not part of the settings stack.\n\nTODO: We need to figure out our conf API. Do we store in ini/json/yaml?\n\n * How do we retrieve config data?\n * When should configuration files be loaded and written?\n\nThis module should be easier to document, for instance by having VARIABLES\ninstead of a dict.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport json\nimport logging\nimport os\n\nfrom .compat import module_exists\nfrom .options import read_options_file\n\nlogger = logging.getLogger(__name__)\n\n# use default OS encoding\nwith open(os.path.join(os.path.dirname(__file__), 'KOLIBRI_CORE_JS_NAME')) as f:\n KOLIBRI_CORE_JS_NAME = f.read().strip()\n\n#: Absolute path of the main user data directory.\n#: Will be created automatically if it doesn't exist.\nKOLIBRI_HOME = os.path.abspath(os.path.expanduser(os.environ[\"KOLIBRI_HOME\"]))\n\n# Creating KOLIBRI_HOME atm. has to happen here as for instance utils.cli is not\n# called through py.test. This file is the first basic entry point of\n# Kolibri, although utils.cli may or may not precede it.\nif not os.path.exists(KOLIBRI_HOME):\n parent = os.path.dirname(KOLIBRI_HOME)\n if not os.path.exists(parent):\n raise RuntimeError(\"The parent of your KOLIBRI_HOME does not exist: {}\".format(parent))\n os.mkdir(KOLIBRI_HOME)\n\n#: Set defaults before updating the dict\nconfig = {}\n\ntry:\n # The default list for this is populated from build_tools/default_plugins.txt\n # in the root of the Kolibri repository. 
The default list is identical to the list below,\n # except that the style_guide plugin is not enabled in production builds.\n # Caveat: this list may have been changed at build time to specify a different list of plugins.\n from .build_config.default_plugins import plugins\n DEFAULT_PLUGINS = plugins\nexcept ImportError:\n DEFAULT_PLUGINS = [\n \"kolibri.plugins.facility_management\",\n \"kolibri.plugins.device_management\",\n \"kolibri.plugins.learn\",\n \"kolibri.plugins.document_pdf_render\",\n \"kolibri.plugins.html5_app_renderer\",\n \"kolibri.plugins.media_player\",\n \"kolibri.plugins.setup_wizard\",\n \"kolibri.plugins.coach\",\n \"kolibri.plugins.user\",\n \"kolibri_exercise_perseus_plugin\",\n \"kolibri.plugins.style_guide\",\n ]\n\n#: Everything in this list is added to django.conf.settings.INSTALLED_APPS\nconfig['INSTALLED_APPS'] = DEFAULT_PLUGINS\n\n#: Well-known plugin names that are automatically searched for and enabled on\n#: first-run.\nconfig['AUTO_SEARCH_PLUGINS'] = []\n\n#: If a config file does not exist, we assume it's the first run\nconfig['FIRST_RUN'] = True\n\nconf_file = os.path.join(KOLIBRI_HOME, \"kolibri_settings.json\")\n\n\ndef update(new_values):\n \"\"\"\n Updates current configuration with ``new_values``. Does not save to file.\n \"\"\"\n config.update(new_values)\n\n\ndef save(first_run=False):\n \"\"\"Saves the current state of the configuration\"\"\"\n config['FIRST_RUN'] = first_run\n # use default OS encoding\n with open(conf_file, 'w') as kolibri_conf_file:\n json.dump(config, kolibri_conf_file, indent=2, sort_keys=True)\n\n\nif not os.path.isfile(conf_file):\n logger.info(\"Initialize kolibri_settings.json..\")\n save(True)\nelse:\n # Open up the config file and overwrite defaults\n # use default OS encoding\n with open(conf_file, 'r') as kolibri_conf_file:\n config.update(json.load(kolibri_conf_file))\n\n\ndef autoremove_unavailable_plugins():\n \"\"\"\n Sanitize INSTALLED_APPS - something that should be done separately for all\n build in plugins, but we should not auto-remove plugins that are actually\n configured by the user or some other kind of hard dependency that should\n make execution stop if not loadable.\n \"\"\"\n global config\n changed = False\n # Iterate over a copy of the list so that it is not modified during the loop\n for module_path in config['INSTALLED_APPS'][:]:\n if not module_exists(module_path):\n config['INSTALLED_APPS'].remove(module_path)\n logger.error(\n (\n \"Plugin {mod} not found and disabled. To re-enable it, run:\\n\"\n \" $ kolibri plugin {mod} enable\"\n ).format(mod=module_path)\n )\n changed = True\n if changed:\n save()\n\n\ndef enable_default_plugins():\n \"\"\"\n Enable new plugins that have been added between versions\n This will have the undesired side effect of reactivating\n default plugins that have been explicitly disabled by a user.\n However, until we add disabled plugins to a blacklist, this is\n unavoidable.\n \"\"\"\n global config\n changed = False\n for module_path in DEFAULT_PLUGINS:\n if module_path not in config['INSTALLED_APPS']:\n config['INSTALLED_APPS'].append(module_path)\n logger.warning(\n (\n \"Default plugin {mod} not found in configuration. To re-disable it, run:\\n\"\n \" $ kolibri plugin {mod} disable\"\n ).format(mod=module_path)\n )\n changed = True\n\n if changed:\n save()\n\n\n# read the config file options in here so they can be accessed from a standard location\nOPTIONS = read_options_file(KOLIBRI_HOME)\n", "path": "kolibri/utils/conf.py"}]} | 2,252 | 104 |
gh_patches_debug_51356 | rasdani/github-patches | git_diff | beetbox__beets-888 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow on-demand only acoustid fingerprinting
It would be great to be able to have the chroma plugin activated, but only for the explicit `submit` use case, so that you don't have to keep enabling and disabling it to avoid the dramatic slowdown when doing imports.
Some kind of an option like:
``` yaml
acoustid:
auto: no
```
</issue>
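A minimal sketch of what such a gate could look like, assuming a plain dict stands in for beets' `config['acoustid']['auto']` lookup; with `auto: no`, the import-time hook returns immediately, while an explicit `beet submit` path can still fingerprint on demand:

```python
# Sketch: skip per-item fingerprinting during import unless acoustid.auto is enabled.
# The config dict is a stand-in for beets' config['acoustid']['auto'] option.
config = {"acoustid": {"auto": False}}  # i.e. the user set "auto: no" in config.yaml


def acoustid_match(path):
    # Placeholder for the real Chromaprint/Acoustid lookup, which is the slow part.
    print("fingerprinting", path)


def fingerprint_task(item_paths):
    if not config["acoustid"]["auto"]:
        return  # imports stay fast; explicit submission is unaffected
    for path in item_paths:
        acoustid_match(path)


fingerprint_task(["/music/track01.flac", "/music/track02.flac"])  # prints nothing
```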
<code>
[start of beetsplug/chroma.py]
1 # This file is part of beets.
2 # Copyright 2013, Adrian Sampson.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Adds Chromaprint/Acoustid acoustic fingerprinting support to the
16 autotagger. Requires the pyacoustid library.
17 """
18 from beets import plugins
19 from beets import ui
20 from beets import util
21 from beets import config
22 from beets.util import confit
23 from beets.autotag import hooks
24 import acoustid
25 import logging
26 from collections import defaultdict
27
28 API_KEY = '1vOwZtEn'
29 SCORE_THRESH = 0.5
30 TRACK_ID_WEIGHT = 10.0
31 COMMON_REL_THRESH = 0.6 # How many tracks must have an album in common?
32
33 log = logging.getLogger('beets')
34
35 # Stores the Acoustid match information for each track. This is
36 # populated when an import task begins and then used when searching for
37 # candidates. It maps audio file paths to (recording_ids, release_ids)
38 # pairs. If a given path is not present in the mapping, then no match
39 # was found.
40 _matches = {}
41
42 # Stores the fingerprint and Acoustid ID for each track. This is stored
43 # as metadata for each track for later use but is not relevant for
44 # autotagging.
45 _fingerprints = {}
46 _acoustids = {}
47
48
49 def acoustid_match(path):
50 """Gets metadata for a file from Acoustid and populates the
51 _matches, _fingerprints, and _acoustids dictionaries accordingly.
52 """
53 try:
54 duration, fp = acoustid.fingerprint_file(util.syspath(path))
55 except acoustid.FingerprintGenerationError as exc:
56 log.error('fingerprinting of %s failed: %s' %
57 (repr(path), str(exc)))
58 return None
59 _fingerprints[path] = fp
60 try:
61 res = acoustid.lookup(API_KEY, fp, duration,
62 meta='recordings releases')
63 except acoustid.AcoustidError as exc:
64 log.debug('fingerprint matching %s failed: %s' %
65 (repr(path), str(exc)))
66 return None
67 log.debug('chroma: fingerprinted %s' % repr(path))
68
69 # Ensure the response is usable and parse it.
70 if res['status'] != 'ok' or not res.get('results'):
71 log.debug('chroma: no match found')
72 return None
73 result = res['results'][0] # Best match.
74 if result['score'] < SCORE_THRESH:
75 log.debug('chroma: no results above threshold')
76 return None
77 _acoustids[path] = result['id']
78
79 # Get recording and releases from the result.
80 if not result.get('recordings'):
81 log.debug('chroma: no recordings found')
82 return None
83 recording_ids = []
84 release_ids = []
85 for recording in result['recordings']:
86 recording_ids.append(recording['id'])
87 if 'releases' in recording:
88 release_ids += [rel['id'] for rel in recording['releases']]
89
90 log.debug('chroma: matched recordings {0}'.format(recording_ids))
91 _matches[path] = recording_ids, release_ids
92
93
94 # Plugin structure and autotagging logic.
95
96
97 def _all_releases(items):
98 """Given an iterable of Items, determines (according to Acoustid)
99 which releases the items have in common. Generates release IDs.
100 """
101 # Count the number of "hits" for each release.
102 relcounts = defaultdict(int)
103 for item in items:
104 if item.path not in _matches:
105 continue
106
107 _, release_ids = _matches[item.path]
108 for release_id in release_ids:
109 relcounts[release_id] += 1
110
111 for release_id, count in relcounts.iteritems():
112 if float(count) / len(items) > COMMON_REL_THRESH:
113 yield release_id
114
115
116 class AcoustidPlugin(plugins.BeetsPlugin):
117 def track_distance(self, item, info):
118 dist = hooks.Distance()
119 if item.path not in _matches or not info.track_id:
120 # Match failed or no track ID.
121 return dist
122
123 recording_ids, _ = _matches[item.path]
124 dist.add_expr('track_id', info.track_id not in recording_ids)
125 return dist
126
127 def candidates(self, items, artist, album, va_likely):
128 albums = []
129 for relid in _all_releases(items):
130 album = hooks.album_for_mbid(relid)
131 if album:
132 albums.append(album)
133
134 log.debug('acoustid album candidates: %i' % len(albums))
135 return albums
136
137 def item_candidates(self, item, artist, title):
138 if item.path not in _matches:
139 return []
140
141 recording_ids, _ = _matches[item.path]
142 tracks = []
143 for recording_id in recording_ids:
144 track = hooks.track_for_mbid(recording_id)
145 if track:
146 tracks.append(track)
147 log.debug('acoustid item candidates: {0}'.format(len(tracks)))
148 return tracks
149
150 def commands(self):
151 submit_cmd = ui.Subcommand('submit',
152 help='submit Acoustid fingerprints')
153
154 def submit_cmd_func(lib, opts, args):
155 try:
156 apikey = config['acoustid']['apikey'].get(unicode)
157 except confit.NotFoundError:
158 raise ui.UserError('no Acoustid user API key provided')
159 submit_items(apikey, lib.items(ui.decargs(args)))
160 submit_cmd.func = submit_cmd_func
161
162 fingerprint_cmd = ui.Subcommand(
163 'fingerprint',
164 help='generate fingerprints for items without them'
165 )
166
167 def fingerprint_cmd_func(lib, opts, args):
168 for item in lib.items(ui.decargs(args)):
169 fingerprint_item(item,
170 write=config['import']['write'].get(bool))
171 fingerprint_cmd.func = fingerprint_cmd_func
172
173 return [submit_cmd, fingerprint_cmd]
174
175
176 # Hooks into import process.
177
178
179 @AcoustidPlugin.listen('import_task_start')
180 def fingerprint_task(task, session):
181 """Fingerprint each item in the task for later use during the
182 autotagging candidate search.
183 """
184 items = task.items if task.is_album else [task.item]
185 for item in items:
186 acoustid_match(item.path)
187
188
189 @AcoustidPlugin.listen('import_task_apply')
190 def apply_acoustid_metadata(task, session):
191 """Apply Acoustid metadata (fingerprint and ID) to the task's items.
192 """
193 for item in task.imported_items():
194 if item.path in _fingerprints:
195 item.acoustid_fingerprint = _fingerprints[item.path]
196 if item.path in _acoustids:
197 item.acoustid_id = _acoustids[item.path]
198
199
200 # UI commands.
201
202
203 def submit_items(userkey, items, chunksize=64):
204 """Submit fingerprints for the items to the Acoustid server.
205 """
206 data = [] # The running list of dictionaries to submit.
207
208 def submit_chunk():
209 """Submit the current accumulated fingerprint data."""
210 log.info('submitting {0} fingerprints'.format(len(data)))
211 try:
212 acoustid.submit(API_KEY, userkey, data)
213 except acoustid.AcoustidError as exc:
214 log.warn(u'acoustid submission error: {0}'.format(exc))
215 del data[:]
216
217 for item in items:
218 fp = fingerprint_item(item)
219
220 # Construct a submission dictionary for this item.
221 item_data = {
222 'duration': int(item.length),
223 'fingerprint': fp,
224 }
225 if item.mb_trackid:
226 item_data['mbid'] = item.mb_trackid
227 log.debug('submitting MBID')
228 else:
229 item_data.update({
230 'track': item.title,
231 'artist': item.artist,
232 'album': item.album,
233 'albumartist': item.albumartist,
234 'year': item.year,
235 'trackno': item.track,
236 'discno': item.disc,
237 })
238 log.debug('submitting textual metadata')
239 data.append(item_data)
240
241 # If we have enough data, submit a chunk.
242 if len(data) >= chunksize:
243 submit_chunk()
244
245 # Submit remaining data in a final chunk.
246 if data:
247 submit_chunk()
248
249
250 def fingerprint_item(item, write=False):
251 """Get the fingerprint for an Item. If the item already has a
252 fingerprint, it is not regenerated. If fingerprint generation fails,
253 return None. If the items are associated with a library, they are
254 saved to the database. If `write` is set, then the new fingerprints
255 are also written to files' metadata.
256 """
257 # Get a fingerprint and length for this track.
258 if not item.length:
259 log.info(u'{0}: no duration available'.format(
260 util.displayable_path(item.path)
261 ))
262 elif item.acoustid_fingerprint:
263 if write:
264 log.info(u'{0}: fingerprint exists, skipping'.format(
265 util.displayable_path(item.path)
266 ))
267 else:
268 log.info(u'{0}: using existing fingerprint'.format(
269 util.displayable_path(item.path)
270 ))
271 return item.acoustid_fingerprint
272 else:
273 log.info(u'{0}: fingerprinting'.format(
274 util.displayable_path(item.path)
275 ))
276 try:
277 _, fp = acoustid.fingerprint_file(item.path)
278 item.acoustid_fingerprint = fp
279 if write:
280 log.info(u'{0}: writing fingerprint'.format(
281 util.displayable_path(item.path)
282 ))
283 item.try_write()
284 if item._db:
285 item.store()
286 return item.acoustid_fingerprint
287 except acoustid.FingerprintGenerationError as exc:
288 log.info(
289 'fingerprint generation failed: {0}'.format(exc)
290 )
291
[end of beetsplug/chroma.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/chroma.py b/beetsplug/chroma.py
--- a/beetsplug/chroma.py
+++ b/beetsplug/chroma.py
@@ -181,9 +181,11 @@
"""Fingerprint each item in the task for later use during the
autotagging candidate search.
"""
- items = task.items if task.is_album else [task.item]
- for item in items:
- acoustid_match(item.path)
+ auto = config['acoustid']['auto']
+ if auto:
+ items = task.items if task.is_album else [task.item]
+ for item in items:
+ acoustid_match(item.path)
@AcoustidPlugin.listen('import_task_apply')
| {"golden_diff": "diff --git a/beetsplug/chroma.py b/beetsplug/chroma.py\n--- a/beetsplug/chroma.py\n+++ b/beetsplug/chroma.py\n@@ -181,9 +181,11 @@\n \"\"\"Fingerprint each item in the task for later use during the\n autotagging candidate search.\n \"\"\"\n- items = task.items if task.is_album else [task.item]\n- for item in items:\n- acoustid_match(item.path)\n+ auto = config['acoustid']['auto']\n+ if auto:\n+ items = task.items if task.is_album else [task.item]\n+ for item in items:\n+ acoustid_match(item.path)\n \n \n @AcoustidPlugin.listen('import_task_apply')\n", "issue": "Allow on-demand only acoustid fingerprinting\nIt would be great to be able to have the chroma plugin activated, but only for the explicit `submit` use case, so that you don't have to keep enabling and disabling it to avoid the dramatic slowdown when doing imports.\n\nSome kind of an option like:\n\n``` yaml\nacoustid:\n auto: no\n```\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2013, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Adds Chromaprint/Acoustid acoustic fingerprinting support to the\nautotagger. Requires the pyacoustid library.\n\"\"\"\nfrom beets import plugins\nfrom beets import ui\nfrom beets import util\nfrom beets import config\nfrom beets.util import confit\nfrom beets.autotag import hooks\nimport acoustid\nimport logging\nfrom collections import defaultdict\n\nAPI_KEY = '1vOwZtEn'\nSCORE_THRESH = 0.5\nTRACK_ID_WEIGHT = 10.0\nCOMMON_REL_THRESH = 0.6 # How many tracks must have an album in common?\n\nlog = logging.getLogger('beets')\n\n# Stores the Acoustid match information for each track. This is\n# populated when an import task begins and then used when searching for\n# candidates. It maps audio file paths to (recording_ids, release_ids)\n# pairs. If a given path is not present in the mapping, then no match\n# was found.\n_matches = {}\n\n# Stores the fingerprint and Acoustid ID for each track. 
This is stored\n# as metadata for each track for later use but is not relevant for\n# autotagging.\n_fingerprints = {}\n_acoustids = {}\n\n\ndef acoustid_match(path):\n \"\"\"Gets metadata for a file from Acoustid and populates the\n _matches, _fingerprints, and _acoustids dictionaries accordingly.\n \"\"\"\n try:\n duration, fp = acoustid.fingerprint_file(util.syspath(path))\n except acoustid.FingerprintGenerationError as exc:\n log.error('fingerprinting of %s failed: %s' %\n (repr(path), str(exc)))\n return None\n _fingerprints[path] = fp\n try:\n res = acoustid.lookup(API_KEY, fp, duration,\n meta='recordings releases')\n except acoustid.AcoustidError as exc:\n log.debug('fingerprint matching %s failed: %s' %\n (repr(path), str(exc)))\n return None\n log.debug('chroma: fingerprinted %s' % repr(path))\n\n # Ensure the response is usable and parse it.\n if res['status'] != 'ok' or not res.get('results'):\n log.debug('chroma: no match found')\n return None\n result = res['results'][0] # Best match.\n if result['score'] < SCORE_THRESH:\n log.debug('chroma: no results above threshold')\n return None\n _acoustids[path] = result['id']\n\n # Get recording and releases from the result.\n if not result.get('recordings'):\n log.debug('chroma: no recordings found')\n return None\n recording_ids = []\n release_ids = []\n for recording in result['recordings']:\n recording_ids.append(recording['id'])\n if 'releases' in recording:\n release_ids += [rel['id'] for rel in recording['releases']]\n\n log.debug('chroma: matched recordings {0}'.format(recording_ids))\n _matches[path] = recording_ids, release_ids\n\n\n# Plugin structure and autotagging logic.\n\n\ndef _all_releases(items):\n \"\"\"Given an iterable of Items, determines (according to Acoustid)\n which releases the items have in common. 
Generates release IDs.\n \"\"\"\n # Count the number of \"hits\" for each release.\n relcounts = defaultdict(int)\n for item in items:\n if item.path not in _matches:\n continue\n\n _, release_ids = _matches[item.path]\n for release_id in release_ids:\n relcounts[release_id] += 1\n\n for release_id, count in relcounts.iteritems():\n if float(count) / len(items) > COMMON_REL_THRESH:\n yield release_id\n\n\nclass AcoustidPlugin(plugins.BeetsPlugin):\n def track_distance(self, item, info):\n dist = hooks.Distance()\n if item.path not in _matches or not info.track_id:\n # Match failed or no track ID.\n return dist\n\n recording_ids, _ = _matches[item.path]\n dist.add_expr('track_id', info.track_id not in recording_ids)\n return dist\n\n def candidates(self, items, artist, album, va_likely):\n albums = []\n for relid in _all_releases(items):\n album = hooks.album_for_mbid(relid)\n if album:\n albums.append(album)\n\n log.debug('acoustid album candidates: %i' % len(albums))\n return albums\n\n def item_candidates(self, item, artist, title):\n if item.path not in _matches:\n return []\n\n recording_ids, _ = _matches[item.path]\n tracks = []\n for recording_id in recording_ids:\n track = hooks.track_for_mbid(recording_id)\n if track:\n tracks.append(track)\n log.debug('acoustid item candidates: {0}'.format(len(tracks)))\n return tracks\n\n def commands(self):\n submit_cmd = ui.Subcommand('submit',\n help='submit Acoustid fingerprints')\n\n def submit_cmd_func(lib, opts, args):\n try:\n apikey = config['acoustid']['apikey'].get(unicode)\n except confit.NotFoundError:\n raise ui.UserError('no Acoustid user API key provided')\n submit_items(apikey, lib.items(ui.decargs(args)))\n submit_cmd.func = submit_cmd_func\n\n fingerprint_cmd = ui.Subcommand(\n 'fingerprint',\n help='generate fingerprints for items without them'\n )\n\n def fingerprint_cmd_func(lib, opts, args):\n for item in lib.items(ui.decargs(args)):\n fingerprint_item(item,\n write=config['import']['write'].get(bool))\n fingerprint_cmd.func = fingerprint_cmd_func\n\n return [submit_cmd, fingerprint_cmd]\n\n\n# Hooks into import process.\n\n\[email protected]('import_task_start')\ndef fingerprint_task(task, session):\n \"\"\"Fingerprint each item in the task for later use during the\n autotagging candidate search.\n \"\"\"\n items = task.items if task.is_album else [task.item]\n for item in items:\n acoustid_match(item.path)\n\n\[email protected]('import_task_apply')\ndef apply_acoustid_metadata(task, session):\n \"\"\"Apply Acoustid metadata (fingerprint and ID) to the task's items.\n \"\"\"\n for item in task.imported_items():\n if item.path in _fingerprints:\n item.acoustid_fingerprint = _fingerprints[item.path]\n if item.path in _acoustids:\n item.acoustid_id = _acoustids[item.path]\n\n\n# UI commands.\n\n\ndef submit_items(userkey, items, chunksize=64):\n \"\"\"Submit fingerprints for the items to the Acoustid server.\n \"\"\"\n data = [] # The running list of dictionaries to submit.\n\n def submit_chunk():\n \"\"\"Submit the current accumulated fingerprint data.\"\"\"\n log.info('submitting {0} fingerprints'.format(len(data)))\n try:\n acoustid.submit(API_KEY, userkey, data)\n except acoustid.AcoustidError as exc:\n log.warn(u'acoustid submission error: {0}'.format(exc))\n del data[:]\n\n for item in items:\n fp = fingerprint_item(item)\n\n # Construct a submission dictionary for this item.\n item_data = {\n 'duration': int(item.length),\n 'fingerprint': fp,\n }\n if item.mb_trackid:\n item_data['mbid'] = item.mb_trackid\n 
log.debug('submitting MBID')\n else:\n item_data.update({\n 'track': item.title,\n 'artist': item.artist,\n 'album': item.album,\n 'albumartist': item.albumartist,\n 'year': item.year,\n 'trackno': item.track,\n 'discno': item.disc,\n })\n log.debug('submitting textual metadata')\n data.append(item_data)\n\n # If we have enough data, submit a chunk.\n if len(data) >= chunksize:\n submit_chunk()\n\n # Submit remaining data in a final chunk.\n if data:\n submit_chunk()\n\n\ndef fingerprint_item(item, write=False):\n \"\"\"Get the fingerprint for an Item. If the item already has a\n fingerprint, it is not regenerated. If fingerprint generation fails,\n return None. If the items are associated with a library, they are\n saved to the database. If `write` is set, then the new fingerprints\n are also written to files' metadata.\n \"\"\"\n # Get a fingerprint and length for this track.\n if not item.length:\n log.info(u'{0}: no duration available'.format(\n util.displayable_path(item.path)\n ))\n elif item.acoustid_fingerprint:\n if write:\n log.info(u'{0}: fingerprint exists, skipping'.format(\n util.displayable_path(item.path)\n ))\n else:\n log.info(u'{0}: using existing fingerprint'.format(\n util.displayable_path(item.path)\n ))\n return item.acoustid_fingerprint\n else:\n log.info(u'{0}: fingerprinting'.format(\n util.displayable_path(item.path)\n ))\n try:\n _, fp = acoustid.fingerprint_file(item.path)\n item.acoustid_fingerprint = fp\n if write:\n log.info(u'{0}: writing fingerprint'.format(\n util.displayable_path(item.path)\n ))\n item.try_write()\n if item._db:\n item.store()\n return item.acoustid_fingerprint\n except acoustid.FingerprintGenerationError as exc:\n log.info(\n 'fingerprint generation failed: {0}'.format(exc)\n )\n", "path": "beetsplug/chroma.py"}]} | 3,624 | 167 |
gh_patches_debug_2999 | rasdani/github-patches | git_diff | iterative__dvc-2457 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dvc remove CLI documentation inconsistency
`dvc remove` (without `targets`) prints help stating that `targets` are optional and that, if not specified, all DVC-files in the workspace will be removed. That is clearly not the case.
```bash
$ dvc remove
[...]
targets DVC-files to remove. Optional. (Finds all DVC-files in the
workspace by default.)
```
</issue>
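A minimal sketch of the underlying argparse behaviour, assuming a stripped-down parser: a positional argument declared with `nargs="+"` is already required, so running the command with no targets fails rather than falling back to all DVC-files, and only the help string needs correcting:

```python
# Sketch: with nargs="+" the targets argument is required, so help text that
# calls it optional (with a find-all default) is misleading.
import argparse

parser = argparse.ArgumentParser(prog="dvc remove")
parser.add_argument("targets", nargs="+", help="DVC-files to remove.")

try:
    parser.parse_args([])  # mimics running `dvc remove` with no targets
except SystemExit:
    print("rejected: at least one target is required")

print(parser.parse_args(["foo.dvc"]).targets)  # ['foo.dvc']
```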
<code>
[start of dvc/command/remove.py]
1 from __future__ import unicode_literals
2
3 import argparse
4 import logging
5
6 import dvc.prompt as prompt
7 from dvc.exceptions import DvcException
8 from dvc.command.base import CmdBase, append_doc_link
9
10
11 logger = logging.getLogger(__name__)
12
13
14 class CmdRemove(CmdBase):
15 def _is_outs_only(self, target):
16 if not self.args.purge:
17 return True
18
19 if self.args.force:
20 return False
21
22 msg = "Are you sure you want to remove {} with its outputs?".format(
23 target
24 )
25
26 if prompt.confirm(msg):
27 return False
28
29 raise DvcException(
30 "Cannot purge without a confirmation from the user."
31 " Use '-f' to force."
32 )
33
34 def run(self):
35 for target in self.args.targets:
36 try:
37 outs_only = self._is_outs_only(target)
38 self.repo.remove(target, outs_only=outs_only)
39 except DvcException:
40 logger.exception("failed to remove {}".format(target))
41 return 1
42 return 0
43
44
45 def add_parser(subparsers, parent_parser):
46 REMOVE_HELP = "Remove DVC-file outputs."
47 remove_parser = subparsers.add_parser(
48 "remove",
49 parents=[parent_parser],
50 description=append_doc_link(REMOVE_HELP, "remove"),
51 help=REMOVE_HELP,
52 formatter_class=argparse.RawDescriptionHelpFormatter,
53 )
54 remove_parser_group = remove_parser.add_mutually_exclusive_group()
55 remove_parser_group.add_argument(
56 "-o",
57 "--outs",
58 action="store_true",
59 default=True,
60 help="Only remove DVC-file outputs. (Default)",
61 )
62 remove_parser_group.add_argument(
63 "-p",
64 "--purge",
65 action="store_true",
66 default=False,
67 help="Remove DVC-file and all its outputs.",
68 )
69 remove_parser.add_argument(
70 "-f",
71 "--force",
72 action="store_true",
73 default=False,
74 help="Force purge.",
75 )
76 remove_parser.add_argument(
77 "targets",
78 nargs="+",
79 help="DVC-files to remove. Optional. "
80 "(Finds all DVC-files in the workspace by default.)",
81 )
82 remove_parser.set_defaults(func=CmdRemove)
83
[end of dvc/command/remove.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/command/remove.py b/dvc/command/remove.py
--- a/dvc/command/remove.py
+++ b/dvc/command/remove.py
@@ -74,9 +74,6 @@
help="Force purge.",
)
remove_parser.add_argument(
- "targets",
- nargs="+",
- help="DVC-files to remove. Optional. "
- "(Finds all DVC-files in the workspace by default.)",
+ "targets", nargs="+", help="DVC-files to remove."
)
remove_parser.set_defaults(func=CmdRemove)
| {"golden_diff": "diff --git a/dvc/command/remove.py b/dvc/command/remove.py\n--- a/dvc/command/remove.py\n+++ b/dvc/command/remove.py\n@@ -74,9 +74,6 @@\n help=\"Force purge.\",\n )\n remove_parser.add_argument(\n- \"targets\",\n- nargs=\"+\",\n- help=\"DVC-files to remove. Optional. \"\n- \"(Finds all DVC-files in the workspace by default.)\",\n+ \"targets\", nargs=\"+\", help=\"DVC-files to remove.\"\n )\n remove_parser.set_defaults(func=CmdRemove)\n", "issue": "dvc remove CLI documentation inconsistency\n`dvc remove` (without `targets`) prints help which states that `targets` are optional, and if not specified will remove all DVC-files. Clearly not the case.\r\n\r\n```bash\r\n$ dvc remove\r\n[...]\r\n targets DVC-files to remove. Optional. (Finds all DVC-files in the\r\n workspace by default.)\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport logging\n\nimport dvc.prompt as prompt\nfrom dvc.exceptions import DvcException\nfrom dvc.command.base import CmdBase, append_doc_link\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRemove(CmdBase):\n def _is_outs_only(self, target):\n if not self.args.purge:\n return True\n\n if self.args.force:\n return False\n\n msg = \"Are you sure you want to remove {} with its outputs?\".format(\n target\n )\n\n if prompt.confirm(msg):\n return False\n\n raise DvcException(\n \"Cannot purge without a confirmation from the user.\"\n \" Use '-f' to force.\"\n )\n\n def run(self):\n for target in self.args.targets:\n try:\n outs_only = self._is_outs_only(target)\n self.repo.remove(target, outs_only=outs_only)\n except DvcException:\n logger.exception(\"failed to remove {}\".format(target))\n return 1\n return 0\n\n\ndef add_parser(subparsers, parent_parser):\n REMOVE_HELP = \"Remove DVC-file outputs.\"\n remove_parser = subparsers.add_parser(\n \"remove\",\n parents=[parent_parser],\n description=append_doc_link(REMOVE_HELP, \"remove\"),\n help=REMOVE_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n remove_parser_group = remove_parser.add_mutually_exclusive_group()\n remove_parser_group.add_argument(\n \"-o\",\n \"--outs\",\n action=\"store_true\",\n default=True,\n help=\"Only remove DVC-file outputs. (Default)\",\n )\n remove_parser_group.add_argument(\n \"-p\",\n \"--purge\",\n action=\"store_true\",\n default=False,\n help=\"Remove DVC-file and all its outputs.\",\n )\n remove_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Force purge.\",\n )\n remove_parser.add_argument(\n \"targets\",\n nargs=\"+\",\n help=\"DVC-files to remove. Optional. \"\n \"(Finds all DVC-files in the workspace by default.)\",\n )\n remove_parser.set_defaults(func=CmdRemove)\n", "path": "dvc/command/remove.py"}]} | 1,258 | 125 |
gh_patches_debug_82 | rasdani/github-patches | git_diff | fidals__shopelectro-719 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add canonicals to category page
For example, these two pages contain no canonical links:
- https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/tags/li-ro_hbced/?page=2
- ~https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/?page=2~ checked - it contains canonical
</issue>
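A quick way to confirm which pages are affected is a small check script; `requests` and `BeautifulSoup` are assumed to be available and are not project dependencies:

```python
import requests
from bs4 import BeautifulSoup

def has_canonical(url: str) -> bool:
    # Fetch the page and look for a <link rel="canonical"> tag in its head.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return soup.find("link", rel="canonical") is not None

for url in (
    "https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/tags/li-ro_hbced/?page=2",
    "https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/?page=2",
):
    print(url, has_canonical(url))
```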
<code>
[start of shopelectro/context.py]
1 from functools import partial
2
3 from catalog.newcontext import Context, Tags
4
5
6 class Page(Context):
7
8 def __init__(self, page, tags: Tags):
9 self._page = page
10 self._tags = tags
11
12 def context(self):
13 def template_context(page, tag_titles, tags):
14 return {
15 'page': page,
16 'tag_titles': tag_titles,
17 'tags': tags,
18 }
19
20 tags_qs = self._tags.qs()
21 self._page.get_template_render_context = partial(
22 template_context, self._page, tags_qs.as_title(), tags_qs
23 )
24
25 return {
26 'page': self._page,
27 'skip_canonical': tags_qs.exists(),
28 }
29
[end of shopelectro/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shopelectro/context.py b/shopelectro/context.py
--- a/shopelectro/context.py
+++ b/shopelectro/context.py
@@ -24,5 +24,4 @@
return {
'page': self._page,
- 'skip_canonical': tags_qs.exists(),
}
| {"golden_diff": "diff --git a/shopelectro/context.py b/shopelectro/context.py\n--- a/shopelectro/context.py\n+++ b/shopelectro/context.py\n@@ -24,5 +24,4 @@\n \n return {\n 'page': self._page,\n- 'skip_canonical': tags_qs.exists(),\n }\n", "issue": "Add canonicals to category page\nFor example this two pages contains no canonicals:\r\n- https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/tags/li-ro_hbced/?page=2\r\n- ~https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/?page=2~ checked - it contains canonical\nAdd canonicals to category page\nFor example this two pages contains no canonicals:\r\n- https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/tags/li-ro_hbced/?page=2\r\n- ~https://www.shopelectro.ru/catalog/categories/akkumuliatory-270/?page=2~ checked - it contains canonical\n", "before_files": [{"content": "from functools import partial\n\nfrom catalog.newcontext import Context, Tags\n\n\nclass Page(Context):\n\n def __init__(self, page, tags: Tags):\n self._page = page\n self._tags = tags\n\n def context(self):\n def template_context(page, tag_titles, tags):\n return {\n 'page': page,\n 'tag_titles': tag_titles,\n 'tags': tags,\n }\n\n tags_qs = self._tags.qs()\n self._page.get_template_render_context = partial(\n template_context, self._page, tags_qs.as_title(), tags_qs\n )\n\n return {\n 'page': self._page,\n 'skip_canonical': tags_qs.exists(),\n }\n", "path": "shopelectro/context.py"}]} | 901 | 73 |
gh_patches_debug_17860 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-6112 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clean up exceptions that are not logged as errors
After #4495 was merged, @agjohnson suggested having an attribute on the Exception class and checking for that attribute before logging the exception, instead of defining a list of warning exceptions as I did at:
https://github.com/rtfd/readthedocs.org/pull/4495/files#diff-ca52b098301dd315a834b3556ab9a7d5R424
Also, there are more exceptions that have to be treated in the same way: `ProjectConfigurationError`, for example.
https://sentry.io/read-the-docs/readthedocs-org/issues/668248681/
</issue>
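The suggestion can be sketched as follows; the class and attribute names are assumptions for illustration, not the names used in the codebase:

```python
import logging

log = logging.getLogger(__name__)

class BuildUserError(Exception):
    """Hypothetical base class: failures caused by user input are warnings."""
    log_level = logging.WARNING

class ProjectConfigurationError(BuildUserError):
    pass

def report(exc: Exception) -> None:
    # One attribute lookup replaces a hard-coded list of "warning" exceptions.
    level = getattr(exc, "log_level", logging.ERROR)
    log.log(level, "Build failed: %s", exc, exc_info=exc)

report(ProjectConfigurationError("no conf.py found in the project"))
```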
<code>
[start of readthedocs/vcs_support/base.py]
1 # -*- coding: utf-8 -*-
2
3 """Base classes for VCS backends."""
4 import logging
5 import os
6 import shutil
7
8
9 log = logging.getLogger(__name__)
10
11
12 class VCSVersion:
13
14 """
15 Represents a Version (tag or branch) in a VCS.
16
17 This class should only be instantiated in BaseVCS subclasses.
18
19 It can act as a context manager to temporarily switch to this tag (eg to
20 build docs for this tag).
21 """
22
23 def __init__(self, repository, identifier, verbose_name):
24 self.repository = repository
25 self.identifier = identifier
26 self.verbose_name = verbose_name
27
28 def __repr__(self):
29 return '<VCSVersion: {}:{}'.format(
30 self.repository.repo_url,
31 self.verbose_name,
32 )
33
34
35 class BaseVCS:
36
37 """
38 Base for VCS Classes.
39
40 VCS commands are ran inside a ``LocalEnvironment``.
41 """
42
43 supports_tags = False # Whether this VCS supports tags or not.
44 supports_branches = False # Whether this VCS supports branches or not.
45 supports_submodules = False
46
47 # =========================================================================
48 # General methods
49 # =========================================================================
50
51 # Defining a base API, so we'll have unused args
52 # pylint: disable=unused-argument
53 def __init__(
54 self, project, version_slug, environment=None,
55 verbose_name=None, version_type=None, **kwargs
56 ):
57 self.default_branch = project.default_branch
58 self.project = project
59 self.name = project.name
60 self.repo_url = project.clean_repo
61 self.working_dir = project.checkout_path(version_slug)
62 # required for External versions
63 self.verbose_name = verbose_name
64 self.version_type = version_type
65
66 from readthedocs.doc_builder.environments import LocalEnvironment
67 self.environment = environment or LocalEnvironment(project)
68
69 # Update the env variables with the proper VCS env variables
70 self.environment.environment.update(self.env)
71
72 def check_working_dir(self):
73 if not os.path.exists(self.working_dir):
74 os.makedirs(self.working_dir)
75
76 def make_clean_working_dir(self):
77 """Ensures that the working dir exists and is empty."""
78 shutil.rmtree(self.working_dir, ignore_errors=True)
79 self.check_working_dir()
80
81 @property
82 def env(self):
83 environment = os.environ.copy()
84
85 # TODO: kind of a hack
86 del environment['PATH']
87
88 return environment
89
90 def update(self):
91 """
92 Update a local copy of the repository in self.working_dir.
93
94 If self.working_dir is already a valid local copy of the repository,
95 update the repository, else create a new local copy of the repository.
96 """
97 self.check_working_dir()
98
99 def run(self, *cmd, **kwargs):
100 kwargs.update({
101 'cwd': self.working_dir,
102 'shell': False,
103 })
104
105 build_cmd = self.environment.run(*cmd, **kwargs)
106 # Return a tuple to keep compatibility
107 return (build_cmd.exit_code, build_cmd.output, build_cmd.error)
108
109 # =========================================================================
110 # Tag / Branch related methods
111 # These methods only apply if supports_tags = True and/or
112 # support_branches = True
113 # =========================================================================
114
115 @property
116 def tags(self):
117 """
118 Returns a list of VCSVersion objects.
119
120 See VCSVersion for more information.
121 """
122 raise NotImplementedError
123
124 @property
125 def branches(self):
126 """
127 Returns a list of VCSVersion objects.
128
129 See VCSVersion for more information.
130 """
131 raise NotImplementedError
132
133 @property
134 def commit(self):
135 """Returns a string representing the current commit."""
136 raise NotImplementedError
137
138 def checkout(self, identifier=None):
139 """
140 Set the state to the given identifier.
141
142 If identifier is None, checkout to the latest revision.
143
144 The type and format of identifier may change from VCS to VCS, so each
145 backend is responsible to understand it's identifiers.
146 """
147 self.check_working_dir()
148
149 def update_submodules(self, config):
150 """
151 Update the submodules of the current checkout.
152
153 :type config: readthedocs.config.BuildConfigBase
154 """
155 raise NotImplementedError
156
[end of readthedocs/vcs_support/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/vcs_support/base.py b/readthedocs/vcs_support/base.py
--- a/readthedocs/vcs_support/base.py
+++ b/readthedocs/vcs_support/base.py
@@ -1,10 +1,11 @@
-# -*- coding: utf-8 -*-
-
"""Base classes for VCS backends."""
import logging
import os
import shutil
+from readthedocs.doc_builder.exceptions import BuildEnvironmentWarning
+from readthedocs.projects.exceptions import RepositoryError
+
log = logging.getLogger(__name__)
@@ -102,7 +103,13 @@
'shell': False,
})
- build_cmd = self.environment.run(*cmd, **kwargs)
+ try:
+ build_cmd = self.environment.run(*cmd, **kwargs)
+ except BuildEnvironmentWarning as e:
+ # Re raise as RepositoryError,
+ # so isn't logged as ERROR.
+ raise RepositoryError(str(e))
+
# Return a tuple to keep compatibility
return (build_cmd.exit_code, build_cmd.output, build_cmd.error)
| {"golden_diff": "diff --git a/readthedocs/vcs_support/base.py b/readthedocs/vcs_support/base.py\n--- a/readthedocs/vcs_support/base.py\n+++ b/readthedocs/vcs_support/base.py\n@@ -1,10 +1,11 @@\n-# -*- coding: utf-8 -*-\n-\n \"\"\"Base classes for VCS backends.\"\"\"\n import logging\n import os\n import shutil\n \n+from readthedocs.doc_builder.exceptions import BuildEnvironmentWarning\n+from readthedocs.projects.exceptions import RepositoryError\n+\n \n log = logging.getLogger(__name__)\n \n@@ -102,7 +103,13 @@\n 'shell': False,\n })\n \n- build_cmd = self.environment.run(*cmd, **kwargs)\n+ try:\n+ build_cmd = self.environment.run(*cmd, **kwargs)\n+ except BuildEnvironmentWarning as e:\n+ # Re raise as RepositoryError,\n+ # so isn't logged as ERROR.\n+ raise RepositoryError(str(e))\n+\n # Return a tuple to keep compatibility\n return (build_cmd.exit_code, build_cmd.output, build_cmd.error)\n", "issue": "Cleanup exception that are not logged as error\nAfter #4495 got merged @agjohnson suggested to have an attribute in the Exception class and check for that attribute before log the exception, instead of defining a list for the warning exceptions as I did at:\r\n\r\nhttps://github.com/rtfd/readthedocs.org/pull/4495/files#diff-ca52b098301dd315a834b3556ab9a7d5R424\r\n\r\nAlso, there are more exceptions that have to treat in the same way: `ProjectConfigurationError` for example.\r\n\r\nhttps://sentry.io/read-the-docs/readthedocs-org/issues/668248681/\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Base classes for VCS backends.\"\"\"\nimport logging\nimport os\nimport shutil\n\n\nlog = logging.getLogger(__name__)\n\n\nclass VCSVersion:\n\n \"\"\"\n Represents a Version (tag or branch) in a VCS.\n\n This class should only be instantiated in BaseVCS subclasses.\n\n It can act as a context manager to temporarily switch to this tag (eg to\n build docs for this tag).\n \"\"\"\n\n def __init__(self, repository, identifier, verbose_name):\n self.repository = repository\n self.identifier = identifier\n self.verbose_name = verbose_name\n\n def __repr__(self):\n return '<VCSVersion: {}:{}'.format(\n self.repository.repo_url,\n self.verbose_name,\n )\n\n\nclass BaseVCS:\n\n \"\"\"\n Base for VCS Classes.\n\n VCS commands are ran inside a ``LocalEnvironment``.\n \"\"\"\n\n supports_tags = False # Whether this VCS supports tags or not.\n supports_branches = False # Whether this VCS supports branches or not.\n supports_submodules = False\n\n # =========================================================================\n # General methods\n # =========================================================================\n\n # Defining a base API, so we'll have unused args\n # pylint: disable=unused-argument\n def __init__(\n self, project, version_slug, environment=None,\n verbose_name=None, version_type=None, **kwargs\n ):\n self.default_branch = project.default_branch\n self.project = project\n self.name = project.name\n self.repo_url = project.clean_repo\n self.working_dir = project.checkout_path(version_slug)\n # required for External versions\n self.verbose_name = verbose_name\n self.version_type = version_type\n\n from readthedocs.doc_builder.environments import LocalEnvironment\n self.environment = environment or LocalEnvironment(project)\n\n # Update the env variables with the proper VCS env variables\n self.environment.environment.update(self.env)\n\n def check_working_dir(self):\n if not os.path.exists(self.working_dir):\n os.makedirs(self.working_dir)\n\n def 
make_clean_working_dir(self):\n \"\"\"Ensures that the working dir exists and is empty.\"\"\"\n shutil.rmtree(self.working_dir, ignore_errors=True)\n self.check_working_dir()\n\n @property\n def env(self):\n environment = os.environ.copy()\n\n # TODO: kind of a hack\n del environment['PATH']\n\n return environment\n\n def update(self):\n \"\"\"\n Update a local copy of the repository in self.working_dir.\n\n If self.working_dir is already a valid local copy of the repository,\n update the repository, else create a new local copy of the repository.\n \"\"\"\n self.check_working_dir()\n\n def run(self, *cmd, **kwargs):\n kwargs.update({\n 'cwd': self.working_dir,\n 'shell': False,\n })\n\n build_cmd = self.environment.run(*cmd, **kwargs)\n # Return a tuple to keep compatibility\n return (build_cmd.exit_code, build_cmd.output, build_cmd.error)\n\n # =========================================================================\n # Tag / Branch related methods\n # These methods only apply if supports_tags = True and/or\n # support_branches = True\n # =========================================================================\n\n @property\n def tags(self):\n \"\"\"\n Returns a list of VCSVersion objects.\n\n See VCSVersion for more information.\n \"\"\"\n raise NotImplementedError\n\n @property\n def branches(self):\n \"\"\"\n Returns a list of VCSVersion objects.\n\n See VCSVersion for more information.\n \"\"\"\n raise NotImplementedError\n\n @property\n def commit(self):\n \"\"\"Returns a string representing the current commit.\"\"\"\n raise NotImplementedError\n\n def checkout(self, identifier=None):\n \"\"\"\n Set the state to the given identifier.\n\n If identifier is None, checkout to the latest revision.\n\n The type and format of identifier may change from VCS to VCS, so each\n backend is responsible to understand it's identifiers.\n \"\"\"\n self.check_working_dir()\n\n def update_submodules(self, config):\n \"\"\"\n Update the submodules of the current checkout.\n\n :type config: readthedocs.config.BuildConfigBase\n \"\"\"\n raise NotImplementedError\n", "path": "readthedocs/vcs_support/base.py"}]} | 1,976 | 236 |
gh_patches_debug_42123 | rasdani/github-patches | git_diff | scrapy__scrapy-2400 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cookies from the Cookie request header are not processed
I am new to scrapy, and I have run into some problems that I could not find answers to on Google, so I am posting them here:
1. Cookies do not work even when set in `DEFAULT_REQUEST_HEADERS`:
```
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'accept-encoding': 'gzip, deflate, sdch',
'cache-control': 'no-cache',
'cookie': 'xx=yy',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36'
}
```
```
class MySpider(scrapy.Spider):
def make_requests_from_url(self, url):
return scrapy.http.Request(url, headers=DEFAULT_REQUEST_HEADERS)
```
I know that `make_requests_from_url` will only be called once, for the start_urls, and I expected the first request to send the cookie I set in `DEFAULT_REQUEST_HEADERS`; however, it does not.
2. Sharing settings between spiders.
I have multiple spiders in the project which share most of the settings, like `RandomAgentMiddleware`, `RandomProxyMiddleware`, `UserAgent`, `DEFAULT_REQUEST_HEADERS`, etc.; however, they are currently configured inside the settings.py of each spider.
Is it possible to share these settings?
---
The `COOKIES_ENABLED` setting is set to true.
Double-encoded cookies
When cookies are passed as UTF8 encoded bytes to the `Request` constructor, they end up being encoded twice and escaped in the `Cookie` header.
```
$ scrapy shell
(...)
In [1]: fetch(scrapy.Request('https://httpbin.org/cookies', cookies={'a': u'á'.encode('utf8')}))
In [2]: request.headers['Cookie']
Out[2]: b"a=b'\\xc3\\xa1'"
In [3]: print(response.text)
{
"cookies": {
"a": "b'\\xc3\\xa1'"
}
}
```
This seems to happen only in Python 3.
```
$ scrapy version -v
Scrapy : 1.5.0
lxml : 4.2.6.0
libxml2 : 2.9.8
cssselect : 1.0.3
parsel : 1.5.1
w3lib : 1.19.0
Twisted : 18.9.0
Python : 3.6.0 (default, Sep 1 2017, 10:59:37) - [GCC 4.8.4]
pyOpenSSL : 18.0.0 (OpenSSL 1.1.0j 20 Nov 2018)
cryptography : 2.4.2
Platform : Linux-4.4.0-134-generic-x86_64-with-debian-jessie-sid
```
</issue>
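As an illustrative sketch (not a middleware fix), cookies passed through the `cookies` argument as text strings are handled by `CookiesMiddleware`, whereas a raw `Cookie` header is dropped by the middleware and bytes values end up escaped as shown above:

```python
import scrapy

class CookieSpider(scrapy.Spider):
    name = "cookie-example"
    start_urls = ["https://httpbin.org/cookies"]

    def start_requests(self):
        for url in self.start_urls:
            # Plain text values; passing UTF-8 encoded bytes is what triggers
            # the double-encoding shown above on Python 3.
            yield scrapy.Request(
                url, cookies={"xx": "yy", "a": "á"}, callback=self.parse
            )

    def parse(self, response):
        self.logger.info(response.text)
```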
<code>
[start of scrapy/downloadermiddlewares/cookies.py]
1 import logging
2 from collections import defaultdict
3
4 from scrapy.exceptions import NotConfigured
5 from scrapy.http import Response
6 from scrapy.http.cookies import CookieJar
7 from scrapy.utils.python import to_unicode
8
9
10 logger = logging.getLogger(__name__)
11
12
13 class CookiesMiddleware:
14 """This middleware enables working with sites that need cookies"""
15
16 def __init__(self, debug=False):
17 self.jars = defaultdict(CookieJar)
18 self.debug = debug
19
20 @classmethod
21 def from_crawler(cls, crawler):
22 if not crawler.settings.getbool('COOKIES_ENABLED'):
23 raise NotConfigured
24 return cls(crawler.settings.getbool('COOKIES_DEBUG'))
25
26 def process_request(self, request, spider):
27 if request.meta.get('dont_merge_cookies', False):
28 return
29
30 cookiejarkey = request.meta.get("cookiejar")
31 jar = self.jars[cookiejarkey]
32 cookies = self._get_request_cookies(jar, request)
33 for cookie in cookies:
34 jar.set_cookie_if_ok(cookie, request)
35
36 # set Cookie header
37 request.headers.pop('Cookie', None)
38 jar.add_cookie_header(request)
39 self._debug_cookie(request, spider)
40
41 def process_response(self, request, response, spider):
42 if request.meta.get('dont_merge_cookies', False):
43 return response
44
45 # extract cookies from Set-Cookie and drop invalid/expired cookies
46 cookiejarkey = request.meta.get("cookiejar")
47 jar = self.jars[cookiejarkey]
48 jar.extract_cookies(response, request)
49 self._debug_set_cookie(response, spider)
50
51 return response
52
53 def _debug_cookie(self, request, spider):
54 if self.debug:
55 cl = [to_unicode(c, errors='replace')
56 for c in request.headers.getlist('Cookie')]
57 if cl:
58 cookies = "\n".join("Cookie: {}\n".format(c) for c in cl)
59 msg = "Sending cookies to: {}\n{}".format(request, cookies)
60 logger.debug(msg, extra={'spider': spider})
61
62 def _debug_set_cookie(self, response, spider):
63 if self.debug:
64 cl = [to_unicode(c, errors='replace')
65 for c in response.headers.getlist('Set-Cookie')]
66 if cl:
67 cookies = "\n".join("Set-Cookie: {}\n".format(c) for c in cl)
68 msg = "Received cookies from: {}\n{}".format(response, cookies)
69 logger.debug(msg, extra={'spider': spider})
70
71 def _format_cookie(self, cookie):
72 # build cookie string
73 cookie_str = '%s=%s' % (cookie['name'], cookie['value'])
74
75 if cookie.get('path', None):
76 cookie_str += '; Path=%s' % cookie['path']
77 if cookie.get('domain', None):
78 cookie_str += '; Domain=%s' % cookie['domain']
79
80 return cookie_str
81
82 def _get_request_cookies(self, jar, request):
83 if isinstance(request.cookies, dict):
84 cookie_list = [
85 {'name': k, 'value': v}
86 for k, v in request.cookies.items()
87 ]
88 else:
89 cookie_list = request.cookies
90
91 cookies = [self._format_cookie(x) for x in cookie_list]
92 headers = {'Set-Cookie': cookies}
93 response = Response(request.url, headers=headers)
94
95 return jar.make_cookies(response, request)
96
[end of scrapy/downloadermiddlewares/cookies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/downloadermiddlewares/cookies.py b/scrapy/downloadermiddlewares/cookies.py
--- a/scrapy/downloadermiddlewares/cookies.py
+++ b/scrapy/downloadermiddlewares/cookies.py
@@ -29,8 +29,7 @@
cookiejarkey = request.meta.get("cookiejar")
jar = self.jars[cookiejarkey]
- cookies = self._get_request_cookies(jar, request)
- for cookie in cookies:
+ for cookie in self._get_request_cookies(jar, request):
jar.set_cookie_if_ok(cookie, request)
# set Cookie header
@@ -68,28 +67,65 @@
msg = "Received cookies from: {}\n{}".format(response, cookies)
logger.debug(msg, extra={'spider': spider})
- def _format_cookie(self, cookie):
- # build cookie string
- cookie_str = '%s=%s' % (cookie['name'], cookie['value'])
-
- if cookie.get('path', None):
- cookie_str += '; Path=%s' % cookie['path']
- if cookie.get('domain', None):
- cookie_str += '; Domain=%s' % cookie['domain']
-
+ def _format_cookie(self, cookie, request):
+ """
+ Given a dict consisting of cookie components, return its string representation.
+ Decode from bytes if necessary.
+ """
+ decoded = {}
+ for key in ("name", "value", "path", "domain"):
+ if not cookie.get(key):
+ if key in ("name", "value"):
+ msg = "Invalid cookie found in request {}: {} ('{}' is missing)"
+ logger.warning(msg.format(request, cookie, key))
+ return
+ continue
+ if isinstance(cookie[key], str):
+ decoded[key] = cookie[key]
+ else:
+ try:
+ decoded[key] = cookie[key].decode("utf8")
+ except UnicodeDecodeError:
+ logger.warning("Non UTF-8 encoded cookie found in request %s: %s",
+ request, cookie)
+ decoded[key] = cookie[key].decode("latin1", errors="replace")
+
+ cookie_str = "{}={}".format(decoded.pop("name"), decoded.pop("value"))
+ for key, value in decoded.items(): # path, domain
+ cookie_str += "; {}={}".format(key.capitalize(), value)
return cookie_str
def _get_request_cookies(self, jar, request):
- if isinstance(request.cookies, dict):
- cookie_list = [
- {'name': k, 'value': v}
- for k, v in request.cookies.items()
- ]
- else:
- cookie_list = request.cookies
-
- cookies = [self._format_cookie(x) for x in cookie_list]
- headers = {'Set-Cookie': cookies}
- response = Response(request.url, headers=headers)
-
- return jar.make_cookies(response, request)
+ """
+ Extract cookies from a Request. Values from the `Request.cookies` attribute
+ take precedence over values from the `Cookie` request header.
+ """
+ def get_cookies_from_header(jar, request):
+ cookie_header = request.headers.get("Cookie")
+ if not cookie_header:
+ return []
+ cookie_gen_bytes = (s.strip() for s in cookie_header.split(b";"))
+ cookie_list_unicode = []
+ for cookie_bytes in cookie_gen_bytes:
+ try:
+ cookie_unicode = cookie_bytes.decode("utf8")
+ except UnicodeDecodeError:
+ logger.warning("Non UTF-8 encoded cookie found in request %s: %s",
+ request, cookie_bytes)
+ cookie_unicode = cookie_bytes.decode("latin1", errors="replace")
+ cookie_list_unicode.append(cookie_unicode)
+ response = Response(request.url, headers={"Set-Cookie": cookie_list_unicode})
+ return jar.make_cookies(response, request)
+
+ def get_cookies_from_attribute(jar, request):
+ if not request.cookies:
+ return []
+ elif isinstance(request.cookies, dict):
+ cookies = ({"name": k, "value": v} for k, v in request.cookies.items())
+ else:
+ cookies = request.cookies
+ formatted = filter(None, (self._format_cookie(c, request) for c in cookies))
+ response = Response(request.url, headers={"Set-Cookie": formatted})
+ return jar.make_cookies(response, request)
+
+ return get_cookies_from_header(jar, request) + get_cookies_from_attribute(jar, request)
| {"golden_diff": "diff --git a/scrapy/downloadermiddlewares/cookies.py b/scrapy/downloadermiddlewares/cookies.py\n--- a/scrapy/downloadermiddlewares/cookies.py\n+++ b/scrapy/downloadermiddlewares/cookies.py\n@@ -29,8 +29,7 @@\n \n cookiejarkey = request.meta.get(\"cookiejar\")\n jar = self.jars[cookiejarkey]\n- cookies = self._get_request_cookies(jar, request)\n- for cookie in cookies:\n+ for cookie in self._get_request_cookies(jar, request):\n jar.set_cookie_if_ok(cookie, request)\n \n # set Cookie header\n@@ -68,28 +67,65 @@\n msg = \"Received cookies from: {}\\n{}\".format(response, cookies)\n logger.debug(msg, extra={'spider': spider})\n \n- def _format_cookie(self, cookie):\n- # build cookie string\n- cookie_str = '%s=%s' % (cookie['name'], cookie['value'])\n-\n- if cookie.get('path', None):\n- cookie_str += '; Path=%s' % cookie['path']\n- if cookie.get('domain', None):\n- cookie_str += '; Domain=%s' % cookie['domain']\n-\n+ def _format_cookie(self, cookie, request):\n+ \"\"\"\n+ Given a dict consisting of cookie components, return its string representation.\n+ Decode from bytes if necessary.\n+ \"\"\"\n+ decoded = {}\n+ for key in (\"name\", \"value\", \"path\", \"domain\"):\n+ if not cookie.get(key):\n+ if key in (\"name\", \"value\"):\n+ msg = \"Invalid cookie found in request {}: {} ('{}' is missing)\"\n+ logger.warning(msg.format(request, cookie, key))\n+ return\n+ continue\n+ if isinstance(cookie[key], str):\n+ decoded[key] = cookie[key]\n+ else:\n+ try:\n+ decoded[key] = cookie[key].decode(\"utf8\")\n+ except UnicodeDecodeError:\n+ logger.warning(\"Non UTF-8 encoded cookie found in request %s: %s\",\n+ request, cookie)\n+ decoded[key] = cookie[key].decode(\"latin1\", errors=\"replace\")\n+\n+ cookie_str = \"{}={}\".format(decoded.pop(\"name\"), decoded.pop(\"value\"))\n+ for key, value in decoded.items(): # path, domain\n+ cookie_str += \"; {}={}\".format(key.capitalize(), value)\n return cookie_str\n \n def _get_request_cookies(self, jar, request):\n- if isinstance(request.cookies, dict):\n- cookie_list = [\n- {'name': k, 'value': v}\n- for k, v in request.cookies.items()\n- ]\n- else:\n- cookie_list = request.cookies\n-\n- cookies = [self._format_cookie(x) for x in cookie_list]\n- headers = {'Set-Cookie': cookies}\n- response = Response(request.url, headers=headers)\n-\n- return jar.make_cookies(response, request)\n+ \"\"\"\n+ Extract cookies from a Request. 
Values from the `Request.cookies` attribute\n+ take precedence over values from the `Cookie` request header.\n+ \"\"\"\n+ def get_cookies_from_header(jar, request):\n+ cookie_header = request.headers.get(\"Cookie\")\n+ if not cookie_header:\n+ return []\n+ cookie_gen_bytes = (s.strip() for s in cookie_header.split(b\";\"))\n+ cookie_list_unicode = []\n+ for cookie_bytes in cookie_gen_bytes:\n+ try:\n+ cookie_unicode = cookie_bytes.decode(\"utf8\")\n+ except UnicodeDecodeError:\n+ logger.warning(\"Non UTF-8 encoded cookie found in request %s: %s\",\n+ request, cookie_bytes)\n+ cookie_unicode = cookie_bytes.decode(\"latin1\", errors=\"replace\")\n+ cookie_list_unicode.append(cookie_unicode)\n+ response = Response(request.url, headers={\"Set-Cookie\": cookie_list_unicode})\n+ return jar.make_cookies(response, request)\n+\n+ def get_cookies_from_attribute(jar, request):\n+ if not request.cookies:\n+ return []\n+ elif isinstance(request.cookies, dict):\n+ cookies = ({\"name\": k, \"value\": v} for k, v in request.cookies.items())\n+ else:\n+ cookies = request.cookies\n+ formatted = filter(None, (self._format_cookie(c, request) for c in cookies))\n+ response = Response(request.url, headers={\"Set-Cookie\": formatted})\n+ return jar.make_cookies(response, request)\n+\n+ return get_cookies_from_header(jar, request) + get_cookies_from_attribute(jar, request)\n", "issue": "Cookies from the Cookie request header are not processed\nI am new in scrapy, and I meet some problems which I can not get answer from google, so I post it here:\n\n1 Cookie not work even set in DEFAULT_REQUEST_HEADERS:\n\n```\nDEFAULT_REQUEST_HEADERS = {\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'accept-encoding': 'gzip, deflate, sdch',\n 'cache-control': 'no-cache',\n 'cookie': 'xx=yy',\n 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36'\n}\n```\n\n```\nclass MySpider(scrapy.Spider):\n def make_requests_from_url(self, url):\n return scrapy.http.Request(url, headers=DEFAULT_REQUEST_HEADERS)\n```\n\nI know the `make_requests_from_url` will only called once for the start_urls, and in my opinion, the first request will send the cookie I set in the `DEFAULT_REQUEST_HEADERS`, however it does not.\n\n2 Share settings between spiders.\n\nI have multiple spiders in the project which share most of the settings like `RandomAgentMiddleware` `RandomProxyMiddleware` `UserAgent` `DEFAULT_REQUEST_HEADERS` and etc, however they are configured inside the settings.py for each spider.\n\nIs it possible to share these settings?\n\n---\n\nThe \n`COOKIES_ENABLED` is set to true.\n\nDouble-encoded cookies\nWhen cookies are passed as UTF8 encoded bytes to the `Request` constructor, they end up being encoded twice and escaped in the `Cookie` header.\r\n\r\n```\r\n$ scrapy shell\r\n(...)\r\nIn [1]: fetch(scrapy.Request('https://httpbin.org/cookies', cookies={'a': u'\u00e1'.encode('utf8')}))\r\n\r\nIn [2]: request.headers['Cookie']\r\nOut[2]: b\"a=b'\\\\xc3\\\\xa1'\"\r\n\r\nIn [3]: print(response.text)\r\n{\r\n \"cookies\": {\r\n \"a\": \"b'\\\\xc3\\\\xa1'\"\r\n }\r\n}\r\n```\r\n\r\nThis seems to happen only in Python 3.\r\n```\r\n$ scrapy version -v\r\nScrapy : 1.5.0\r\nlxml : 4.2.6.0\r\nlibxml2 : 2.9.8\r\ncssselect : 1.0.3\r\nparsel : 1.5.1\r\nw3lib : 1.19.0\r\nTwisted : 18.9.0\r\nPython : 3.6.0 (default, Sep 1 2017, 10:59:37) - [GCC 4.8.4]\r\npyOpenSSL : 18.0.0 (OpenSSL 1.1.0j 20 Nov 2018)\r\ncryptography : 2.4.2\r\nPlatform : 
Linux-4.4.0-134-generic-x86_64-with-debian-jessie-sid\r\n```\n", "before_files": [{"content": "import logging\nfrom collections import defaultdict\n\nfrom scrapy.exceptions import NotConfigured\nfrom scrapy.http import Response\nfrom scrapy.http.cookies import CookieJar\nfrom scrapy.utils.python import to_unicode\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CookiesMiddleware:\n \"\"\"This middleware enables working with sites that need cookies\"\"\"\n\n def __init__(self, debug=False):\n self.jars = defaultdict(CookieJar)\n self.debug = debug\n\n @classmethod\n def from_crawler(cls, crawler):\n if not crawler.settings.getbool('COOKIES_ENABLED'):\n raise NotConfigured\n return cls(crawler.settings.getbool('COOKIES_DEBUG'))\n\n def process_request(self, request, spider):\n if request.meta.get('dont_merge_cookies', False):\n return\n\n cookiejarkey = request.meta.get(\"cookiejar\")\n jar = self.jars[cookiejarkey]\n cookies = self._get_request_cookies(jar, request)\n for cookie in cookies:\n jar.set_cookie_if_ok(cookie, request)\n\n # set Cookie header\n request.headers.pop('Cookie', None)\n jar.add_cookie_header(request)\n self._debug_cookie(request, spider)\n\n def process_response(self, request, response, spider):\n if request.meta.get('dont_merge_cookies', False):\n return response\n\n # extract cookies from Set-Cookie and drop invalid/expired cookies\n cookiejarkey = request.meta.get(\"cookiejar\")\n jar = self.jars[cookiejarkey]\n jar.extract_cookies(response, request)\n self._debug_set_cookie(response, spider)\n\n return response\n\n def _debug_cookie(self, request, spider):\n if self.debug:\n cl = [to_unicode(c, errors='replace')\n for c in request.headers.getlist('Cookie')]\n if cl:\n cookies = \"\\n\".join(\"Cookie: {}\\n\".format(c) for c in cl)\n msg = \"Sending cookies to: {}\\n{}\".format(request, cookies)\n logger.debug(msg, extra={'spider': spider})\n\n def _debug_set_cookie(self, response, spider):\n if self.debug:\n cl = [to_unicode(c, errors='replace')\n for c in response.headers.getlist('Set-Cookie')]\n if cl:\n cookies = \"\\n\".join(\"Set-Cookie: {}\\n\".format(c) for c in cl)\n msg = \"Received cookies from: {}\\n{}\".format(response, cookies)\n logger.debug(msg, extra={'spider': spider})\n\n def _format_cookie(self, cookie):\n # build cookie string\n cookie_str = '%s=%s' % (cookie['name'], cookie['value'])\n\n if cookie.get('path', None):\n cookie_str += '; Path=%s' % cookie['path']\n if cookie.get('domain', None):\n cookie_str += '; Domain=%s' % cookie['domain']\n\n return cookie_str\n\n def _get_request_cookies(self, jar, request):\n if isinstance(request.cookies, dict):\n cookie_list = [\n {'name': k, 'value': v}\n for k, v in request.cookies.items()\n ]\n else:\n cookie_list = request.cookies\n\n cookies = [self._format_cookie(x) for x in cookie_list]\n headers = {'Set-Cookie': cookies}\n response = Response(request.url, headers=headers)\n\n return jar.make_cookies(response, request)\n", "path": "scrapy/downloadermiddlewares/cookies.py"}]} | 2,131 | 996 |
gh_patches_debug_19309 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2048 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support absolute-form HTTP requests with IPv6 addresses
##### Steps to reproduce the problem:
1. Proxy an IPv6 flow through mitmdump
2. The log shows:
```
172.17.15.1:53074: HTTP protocol error in client request: Bad HTTP request line: b'GET http://[::ffff:180.97.8.37]/mmsns/9KavCVwReibwDKBMmibrWUdVZZbHCQ0bV3R89mboKO6QDls7Sxcl4tfbHvLIHFbj3NASftTH2VAGw/150?tp=wxpc&length=2208&width=1242&idx=1&token=WSEN6qDsKwV8A02w3onOGQYfxnkibdqSOkmHhZGNB4DGicdGyTltMQXCTF7lr4IJR8Jz4lKQBBW47EV1CP33SGjg HTTP/1.1'
172.17.15.1:53075: HTTP protocol error in client request: Bad HTTP request line: b'GET http://[::ffff:b461:819]/mmcrhead/Q3auHgzwzM606QEH0kXoF60vMh5Iiay7B3DiauET3kCpbBwEfgzhNqOSeJ6y4geORGPxEcKf36Totd4sHQcwvBEg/0 HTTP/1.1'
```
##### Any other comments? What have you tried so far?
No
##### System information
```
Mitmproxy version: 1.0.2
Python version: 3.6.0
Platform: Darwin-15.6.0-x86_64-i386-64bit
SSL version: OpenSSL 1.0.2j 26 Sep 2016
Mac version: 10.11.6 ('', '', '') x86_6
```
</issue>
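The hosts in the failing request lines are valid IP literals, which the standard library can recognise directly; a check along these lines (an illustrative sketch, not mitmproxy code) covers what the DNS-label regex alone cannot:

```python
import ipaddress

def is_ip_literal(host: str) -> bool:
    # Accept anything ipaddress can parse: IPv4, IPv6, or IPv4-mapped IPv6.
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

print(is_ip_literal("::ffff:180.97.8.37"))  # True  (IPv4-mapped IPv6 address)
print(is_ip_literal("::ffff:b461:819"))     # True
print(is_ip_literal("example.com"))         # False (plain DNS name)
```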
<code>
[start of mitmproxy/net/check.py]
1 import re
2
3 # Allow underscore in host name
4 _label_valid = re.compile(b"(?!-)[A-Z\d\-_]{1,63}(?<!-)$", re.IGNORECASE)
5
6
7 def is_valid_host(host: bytes) -> bool:
8 """
9 Checks if a hostname is valid.
10 """
11 try:
12 host.decode("idna")
13 except ValueError:
14 return False
15 if len(host) > 255:
16 return False
17 if host and host[-1:] == b".":
18 host = host[:-1]
19 return all(_label_valid.match(x) for x in host.split(b"."))
20
21
22 def is_valid_port(port):
23 return 0 <= port <= 65535
24
[end of mitmproxy/net/check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/net/check.py b/mitmproxy/net/check.py
--- a/mitmproxy/net/check.py
+++ b/mitmproxy/net/check.py
@@ -1,3 +1,4 @@
+import ipaddress
import re
# Allow underscore in host name
@@ -6,17 +7,26 @@
def is_valid_host(host: bytes) -> bool:
"""
- Checks if a hostname is valid.
+ Checks if the passed bytes are a valid DNS hostname or an IPv4/IPv6 address.
"""
try:
host.decode("idna")
except ValueError:
return False
+ # RFC1035: 255 bytes or less.
if len(host) > 255:
return False
if host and host[-1:] == b".":
host = host[:-1]
- return all(_label_valid.match(x) for x in host.split(b"."))
+ # DNS hostname
+ if all(_label_valid.match(x) for x in host.split(b".")):
+ return True
+ # IPv4/IPv6 address
+ try:
+ ipaddress.ip_address(host.decode('idna'))
+ return True
+ except ValueError:
+ return False
def is_valid_port(port):
| {"golden_diff": "diff --git a/mitmproxy/net/check.py b/mitmproxy/net/check.py\n--- a/mitmproxy/net/check.py\n+++ b/mitmproxy/net/check.py\n@@ -1,3 +1,4 @@\n+import ipaddress\n import re\n \n # Allow underscore in host name\n@@ -6,17 +7,26 @@\n \n def is_valid_host(host: bytes) -> bool:\n \"\"\"\n- Checks if a hostname is valid.\n+ Checks if the passed bytes are a valid DNS hostname or an IPv4/IPv6 address.\n \"\"\"\n try:\n host.decode(\"idna\")\n except ValueError:\n return False\n+ # RFC1035: 255 bytes or less.\n if len(host) > 255:\n return False\n if host and host[-1:] == b\".\":\n host = host[:-1]\n- return all(_label_valid.match(x) for x in host.split(b\".\"))\n+ # DNS hostname\n+ if all(_label_valid.match(x) for x in host.split(b\".\")):\n+ return True\n+ # IPv4/IPv6 address\n+ try:\n+ ipaddress.ip_address(host.decode('idna'))\n+ return True\n+ except ValueError:\n+ return False\n \n \n def is_valid_port(port):\n", "issue": "Support absolute-form HTTP requests with IPv6 addresses\n##### Steps to reproduce the problem:\r\n\r\n1. MITMDump proxy IPv6 flow\r\n2. Log\r\n```\r\n172.17.15.1:53074: HTTP protocol error in client request: Bad HTTP request line: b'GET http://[::ffff:180.97.8.37]/mmsns/9KavCVwReibwDKBMmibrWUdVZZbHCQ0bV3R89mboKO6QDls7Sxcl4tfbHvLIHFbj3NASftTH2VAGw/150?tp=wxpc&length=2208&width=1242&idx=1&token=WSEN6qDsKwV8A02w3onOGQYfxnkibdqSOkmHhZGNB4DGicdGyTltMQXCTF7lr4IJR8Jz4lKQBBW47EV1CP33SGjg HTTP/1.1'\r\n172.17.15.1:53075: HTTP protocol error in client request: Bad HTTP request line: b'GET http://[::ffff:b461:819]/mmcrhead/Q3auHgzwzM606QEH0kXoF60vMh5Iiay7B3DiauET3kCpbBwEfgzhNqOSeJ6y4geORGPxEcKf36Totd4sHQcwvBEg/0 HTTP/1.1'\r\n```\r\n\r\n\r\n##### Any other comments? What have you tried so far?\r\nNo\r\n\r\n\r\n##### System information\r\n```\r\nMitmproxy version: 1.0.2\r\nPython version: 3.6.0\r\nPlatform: Darwin-15.6.0-x86_64-i386-64bit\r\nSSL version: OpenSSL 1.0.2j 26 Sep 2016\r\nMac version: 10.11.6 ('', '', '') x86_6\r\n```\r\n\r\n\n", "before_files": [{"content": "import re\n\n# Allow underscore in host name\n_label_valid = re.compile(b\"(?!-)[A-Z\\d\\-_]{1,63}(?<!-)$\", re.IGNORECASE)\n\n\ndef is_valid_host(host: bytes) -> bool:\n \"\"\"\n Checks if a hostname is valid.\n \"\"\"\n try:\n host.decode(\"idna\")\n except ValueError:\n return False\n if len(host) > 255:\n return False\n if host and host[-1:] == b\".\":\n host = host[:-1]\n return all(_label_valid.match(x) for x in host.split(b\".\"))\n\n\ndef is_valid_port(port):\n return 0 <= port <= 65535\n", "path": "mitmproxy/net/check.py"}]} | 1,217 | 283 |
gh_patches_debug_7403 | rasdani/github-patches | git_diff | pytorch__vision-5844 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
INaturalist download is broken for version="2021_train"
*Originally reported in https://discuss.pytorch.org/t/inaturalist-download-crashes-towards-the-end/149006*
> *INaturalist* download proceeds fine up to the end, then crashes with this error:
>
> ```py
> File "/scratch/user/myenvs/env/lib/python3.8/site-packages/torchvision/datasets/utils.py", line 152, in download_url
> raise RuntimeError("File not found or corrupted.")
> ```
> Using torchvision automatic downloading of the `2021_train` dataset.
---
The error points to
https://github.com/pytorch/vision/blob/05eae32f9663bbecad10a8d367ccbec50130e2f5/torchvision/datasets/utils.py#L151-L152
So my guess is that the MD5 checksum that we have on record is wrong:
https://github.com/pytorch/vision/blob/05eae32f9663bbecad10a8d367ccbec50130e2f5/torchvision/datasets/inaturalist.py#L24
I'll download and check. Given that this archive is > 200GB, this will take a while. I've tried with `version="2021_valid"` and this works fine. Thus, I'm guessing this is not a systematic problem.
cc @pmeier @YosuaMichael
</issue>
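Anyone who already has the archive on disk can verify the recorded checksum without re-downloading it; a small helper along these lines works (the file path is an assumption):

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1024 * 1024) -> str:
    # Stream the file in chunks so the >200GB archive never has to fit in memory.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Point this at wherever the 2021_train archive was saved.
print(md5sum("train.tar.gz"))
```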
<code>
[start of torchvision/datasets/inaturalist.py]
1 import os
2 import os.path
3 from typing import Any, Callable, Dict, List, Optional, Union, Tuple
4
5 from PIL import Image
6
7 from .utils import download_and_extract_archive, verify_str_arg
8 from .vision import VisionDataset
9
10 CATEGORIES_2021 = ["kingdom", "phylum", "class", "order", "family", "genus"]
11
12 DATASET_URLS = {
13 "2017": "https://ml-inat-competition-datasets.s3.amazonaws.com/2017/train_val_images.tar.gz",
14 "2018": "https://ml-inat-competition-datasets.s3.amazonaws.com/2018/train_val2018.tar.gz",
15 "2019": "https://ml-inat-competition-datasets.s3.amazonaws.com/2019/train_val2019.tar.gz",
16 "2021_train": "https://ml-inat-competition-datasets.s3.amazonaws.com/2021/train.tar.gz",
17 "2021_train_mini": "https://ml-inat-competition-datasets.s3.amazonaws.com/2021/train_mini.tar.gz",
18 "2021_valid": "https://ml-inat-competition-datasets.s3.amazonaws.com/2021/val.tar.gz",
19 }
20
21 DATASET_MD5 = {
22 "2017": "7c784ea5e424efaec655bd392f87301f",
23 "2018": "b1c6952ce38f31868cc50ea72d066cc3",
24 "2019": "c60a6e2962c9b8ccbd458d12c8582644",
25 "2021_train": "38a7bb733f7a09214d44293460ec0021",
26 "2021_train_mini": "db6ed8330e634445efc8fec83ae81442",
27 "2021_valid": "f6f6e0e242e3d4c9569ba56400938afc",
28 }
29
30
31 class INaturalist(VisionDataset):
32 """`iNaturalist <https://github.com/visipedia/inat_comp>`_ Dataset.
33
34 Args:
35 root (string): Root directory of dataset where the image files are stored.
36 This class does not require/use annotation files.
37 version (string, optional): Which version of the dataset to download/use. One of
38 '2017', '2018', '2019', '2021_train', '2021_train_mini', '2021_valid'.
39 Default: `2021_train`.
40 target_type (string or list, optional): Type of target to use, for 2021 versions, one of:
41
42 - ``full``: the full category (species)
43 - ``kingdom``: e.g. "Animalia"
44 - ``phylum``: e.g. "Arthropoda"
45 - ``class``: e.g. "Insecta"
46 - ``order``: e.g. "Coleoptera"
47 - ``family``: e.g. "Cleridae"
48 - ``genus``: e.g. "Trichodes"
49
50 for 2017-2019 versions, one of:
51
52 - ``full``: the full (numeric) category
53 - ``super``: the super category, e.g. "Amphibians"
54
55 Can also be a list to output a tuple with all specified target types.
56 Defaults to ``full``.
57 transform (callable, optional): A function/transform that takes in an PIL image
58 and returns a transformed version. E.g, ``transforms.RandomCrop``
59 target_transform (callable, optional): A function/transform that takes in the
60 target and transforms it.
61 download (bool, optional): If true, downloads the dataset from the internet and
62 puts it in root directory. If dataset is already downloaded, it is not
63 downloaded again.
64 """
65
66 def __init__(
67 self,
68 root: str,
69 version: str = "2021_train",
70 target_type: Union[List[str], str] = "full",
71 transform: Optional[Callable] = None,
72 target_transform: Optional[Callable] = None,
73 download: bool = False,
74 ) -> None:
75 self.version = verify_str_arg(version, "version", DATASET_URLS.keys())
76
77 super().__init__(os.path.join(root, version), transform=transform, target_transform=target_transform)
78
79 os.makedirs(root, exist_ok=True)
80 if download:
81 self.download()
82
83 if not self._check_integrity():
84 raise RuntimeError("Dataset not found or corrupted. You can use download=True to download it")
85
86 self.all_categories: List[str] = []
87
88 # map: category type -> name of category -> index
89 self.categories_index: Dict[str, Dict[str, int]] = {}
90
91 # list indexed by category id, containing mapping from category type -> index
92 self.categories_map: List[Dict[str, int]] = []
93
94 if not isinstance(target_type, list):
95 target_type = [target_type]
96 if self.version[:4] == "2021":
97 self.target_type = [verify_str_arg(t, "target_type", ("full", *CATEGORIES_2021)) for t in target_type]
98 self._init_2021()
99 else:
100 self.target_type = [verify_str_arg(t, "target_type", ("full", "super")) for t in target_type]
101 self._init_pre2021()
102
103 # index of all files: (full category id, filename)
104 self.index: List[Tuple[int, str]] = []
105
106 for dir_index, dir_name in enumerate(self.all_categories):
107 files = os.listdir(os.path.join(self.root, dir_name))
108 for fname in files:
109 self.index.append((dir_index, fname))
110
111 def _init_2021(self) -> None:
112 """Initialize based on 2021 layout"""
113
114 self.all_categories = sorted(os.listdir(self.root))
115
116 # map: category type -> name of category -> index
117 self.categories_index = {k: {} for k in CATEGORIES_2021}
118
119 for dir_index, dir_name in enumerate(self.all_categories):
120 pieces = dir_name.split("_")
121 if len(pieces) != 8:
122 raise RuntimeError(f"Unexpected category name {dir_name}, wrong number of pieces")
123 if pieces[0] != f"{dir_index:05d}":
124 raise RuntimeError(f"Unexpected category id {pieces[0]}, expecting {dir_index:05d}")
125 cat_map = {}
126 for cat, name in zip(CATEGORIES_2021, pieces[1:7]):
127 if name in self.categories_index[cat]:
128 cat_id = self.categories_index[cat][name]
129 else:
130 cat_id = len(self.categories_index[cat])
131 self.categories_index[cat][name] = cat_id
132 cat_map[cat] = cat_id
133 self.categories_map.append(cat_map)
134
135 def _init_pre2021(self) -> None:
136 """Initialize based on 2017-2019 layout"""
137
138 # map: category type -> name of category -> index
139 self.categories_index = {"super": {}}
140
141 cat_index = 0
142 super_categories = sorted(os.listdir(self.root))
143 for sindex, scat in enumerate(super_categories):
144 self.categories_index["super"][scat] = sindex
145 subcategories = sorted(os.listdir(os.path.join(self.root, scat)))
146 for subcat in subcategories:
147 if self.version == "2017":
148 # this version does not use ids as directory names
149 subcat_i = cat_index
150 cat_index += 1
151 else:
152 try:
153 subcat_i = int(subcat)
154 except ValueError:
155 raise RuntimeError(f"Unexpected non-numeric dir name: {subcat}")
156 if subcat_i >= len(self.categories_map):
157 old_len = len(self.categories_map)
158 self.categories_map.extend([{}] * (subcat_i - old_len + 1))
159 self.all_categories.extend([""] * (subcat_i - old_len + 1))
160 if self.categories_map[subcat_i]:
161 raise RuntimeError(f"Duplicate category {subcat}")
162 self.categories_map[subcat_i] = {"super": sindex}
163 self.all_categories[subcat_i] = os.path.join(scat, subcat)
164
165 # validate the dictionary
166 for cindex, c in enumerate(self.categories_map):
167 if not c:
168 raise RuntimeError(f"Missing category {cindex}")
169
170 def __getitem__(self, index: int) -> Tuple[Any, Any]:
171 """
172 Args:
173 index (int): Index
174
175 Returns:
176 tuple: (image, target) where the type of target specified by target_type.
177 """
178
179 cat_id, fname = self.index[index]
180 img = Image.open(os.path.join(self.root, self.all_categories[cat_id], fname))
181
182 target: Any = []
183 for t in self.target_type:
184 if t == "full":
185 target.append(cat_id)
186 else:
187 target.append(self.categories_map[cat_id][t])
188 target = tuple(target) if len(target) > 1 else target[0]
189
190 if self.transform is not None:
191 img = self.transform(img)
192
193 if self.target_transform is not None:
194 target = self.target_transform(target)
195
196 return img, target
197
198 def __len__(self) -> int:
199 return len(self.index)
200
201 def category_name(self, category_type: str, category_id: int) -> str:
202 """
203 Args:
204 category_type(str): one of "full", "kingdom", "phylum", "class", "order", "family", "genus" or "super"
205 category_id(int): an index (class id) from this category
206
207 Returns:
208 the name of the category
209 """
210 if category_type == "full":
211 return self.all_categories[category_id]
212 else:
213 if category_type not in self.categories_index:
214 raise ValueError(f"Invalid category type '{category_type}'")
215 else:
216 for name, id in self.categories_index[category_type].items():
217 if id == category_id:
218 return name
219 raise ValueError(f"Invalid category id {category_id} for {category_type}")
220
221 def _check_integrity(self) -> bool:
222 return os.path.exists(self.root) and len(os.listdir(self.root)) > 0
223
224 def download(self) -> None:
225 if self._check_integrity():
226 raise RuntimeError(
227 f"The directory {self.root} already exists. "
228 f"If you want to re-download or re-extract the images, delete the directory."
229 )
230
231 base_root = os.path.dirname(self.root)
232
233 download_and_extract_archive(
234 DATASET_URLS[self.version], base_root, filename=f"{self.version}.tgz", md5=DATASET_MD5[self.version]
235 )
236
237 orig_dir_name = os.path.join(base_root, os.path.basename(DATASET_URLS[self.version]).rstrip(".tar.gz"))
238 if not os.path.exists(orig_dir_name):
239 raise RuntimeError(f"Unable to find downloaded files at {orig_dir_name}")
240 os.rename(orig_dir_name, self.root)
241 print(f"Dataset version '{self.version}' has been downloaded and prepared for use")
242
[end of torchvision/datasets/inaturalist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/datasets/inaturalist.py b/torchvision/datasets/inaturalist.py
--- a/torchvision/datasets/inaturalist.py
+++ b/torchvision/datasets/inaturalist.py
@@ -22,7 +22,7 @@
"2017": "7c784ea5e424efaec655bd392f87301f",
"2018": "b1c6952ce38f31868cc50ea72d066cc3",
"2019": "c60a6e2962c9b8ccbd458d12c8582644",
- "2021_train": "38a7bb733f7a09214d44293460ec0021",
+ "2021_train": "e0526d53c7f7b2e3167b2b43bb2690ed",
"2021_train_mini": "db6ed8330e634445efc8fec83ae81442",
"2021_valid": "f6f6e0e242e3d4c9569ba56400938afc",
}
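The fix above is a single corrected checksum: `download()` hands `DATASET_MD5[self.version]` to `download_and_extract_archive`, so a stale value makes torchvision's integrity check reject a perfectly good archive after the very large download finishes. A quick way to confirm which digest is right is to hash a locally downloaded copy of the archive; the snippet below is a standalone sketch, and the file path in the comment is a placeholder rather than anything from the repository.

```python
import hashlib

def file_md5(path, chunk_size=1024 * 1024):
    """Stream the file in chunks so a multi-GB archive never has to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical local copy of the 2021_train archive:
# file_md5("train.tar.gz")  # expected per the patch: e0526d53c7f7b2e3167b2b43bb2690ed
```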
| {"golden_diff": "diff --git a/torchvision/datasets/inaturalist.py b/torchvision/datasets/inaturalist.py\n--- a/torchvision/datasets/inaturalist.py\n+++ b/torchvision/datasets/inaturalist.py\n@@ -22,7 +22,7 @@\n \"2017\": \"7c784ea5e424efaec655bd392f87301f\",\n \"2018\": \"b1c6952ce38f31868cc50ea72d066cc3\",\n \"2019\": \"c60a6e2962c9b8ccbd458d12c8582644\",\n- \"2021_train\": \"38a7bb733f7a09214d44293460ec0021\",\n+ \"2021_train\": \"e0526d53c7f7b2e3167b2b43bb2690ed\",\n \"2021_train_mini\": \"db6ed8330e634445efc8fec83ae81442\",\n \"2021_valid\": \"f6f6e0e242e3d4c9569ba56400938afc\",\n }\n", "issue": "INaturalist download is broken for version=\"2021_train\"\n*Originally reported in https://discuss.pytorch.org/t/inaturalist-download-crashes-towards-the-end/149006*\r\n\r\n> *INaturalist* download proceeds fine up to the end, then crashes with this error:\r\n> \r\n> ```py\r\n> File \"/scratch/user/myenvs/env/lib/python3.8/site-packages/torchvision/datasets/utils.py\", line 152, in download_url\r\n> raise RuntimeError(\"File not found or corrupted.\")\r\n> ```\r\n> Using torchvision automatic downloading of the `2021_train` dataset.\r\n\r\n---\r\n\r\nThe error points to\r\n\r\nhttps://github.com/pytorch/vision/blob/05eae32f9663bbecad10a8d367ccbec50130e2f5/torchvision/datasets/utils.py#L151-L152\r\n\r\nSo my guess is that the MD5 checksum that we have on record is wrong:\r\n\r\nhttps://github.com/pytorch/vision/blob/05eae32f9663bbecad10a8d367ccbec50130e2f5/torchvision/datasets/inaturalist.py#L24\r\n\r\nI'll download and check. Given that this archive is > 200GB, this will take a while. I've tried with `version=\"2021_valid\"` and this works fine. Thus, I'm guessing this is not a systematic problem.\n\ncc @pmeier @YosuaMichael\n", "before_files": [{"content": "import os\nimport os.path\nfrom typing import Any, Callable, Dict, List, Optional, Union, Tuple\n\nfrom PIL import Image\n\nfrom .utils import download_and_extract_archive, verify_str_arg\nfrom .vision import VisionDataset\n\nCATEGORIES_2021 = [\"kingdom\", \"phylum\", \"class\", \"order\", \"family\", \"genus\"]\n\nDATASET_URLS = {\n \"2017\": \"https://ml-inat-competition-datasets.s3.amazonaws.com/2017/train_val_images.tar.gz\",\n \"2018\": \"https://ml-inat-competition-datasets.s3.amazonaws.com/2018/train_val2018.tar.gz\",\n \"2019\": \"https://ml-inat-competition-datasets.s3.amazonaws.com/2019/train_val2019.tar.gz\",\n \"2021_train\": \"https://ml-inat-competition-datasets.s3.amazonaws.com/2021/train.tar.gz\",\n \"2021_train_mini\": \"https://ml-inat-competition-datasets.s3.amazonaws.com/2021/train_mini.tar.gz\",\n \"2021_valid\": \"https://ml-inat-competition-datasets.s3.amazonaws.com/2021/val.tar.gz\",\n}\n\nDATASET_MD5 = {\n \"2017\": \"7c784ea5e424efaec655bd392f87301f\",\n \"2018\": \"b1c6952ce38f31868cc50ea72d066cc3\",\n \"2019\": \"c60a6e2962c9b8ccbd458d12c8582644\",\n \"2021_train\": \"38a7bb733f7a09214d44293460ec0021\",\n \"2021_train_mini\": \"db6ed8330e634445efc8fec83ae81442\",\n \"2021_valid\": \"f6f6e0e242e3d4c9569ba56400938afc\",\n}\n\n\nclass INaturalist(VisionDataset):\n \"\"\"`iNaturalist <https://github.com/visipedia/inat_comp>`_ Dataset.\n\n Args:\n root (string): Root directory of dataset where the image files are stored.\n This class does not require/use annotation files.\n version (string, optional): Which version of the dataset to download/use. 
One of\n '2017', '2018', '2019', '2021_train', '2021_train_mini', '2021_valid'.\n Default: `2021_train`.\n target_type (string or list, optional): Type of target to use, for 2021 versions, one of:\n\n - ``full``: the full category (species)\n - ``kingdom``: e.g. \"Animalia\"\n - ``phylum``: e.g. \"Arthropoda\"\n - ``class``: e.g. \"Insecta\"\n - ``order``: e.g. \"Coleoptera\"\n - ``family``: e.g. \"Cleridae\"\n - ``genus``: e.g. \"Trichodes\"\n\n for 2017-2019 versions, one of:\n\n - ``full``: the full (numeric) category\n - ``super``: the super category, e.g. \"Amphibians\"\n\n Can also be a list to output a tuple with all specified target types.\n Defaults to ``full``.\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n download (bool, optional): If true, downloads the dataset from the internet and\n puts it in root directory. If dataset is already downloaded, it is not\n downloaded again.\n \"\"\"\n\n def __init__(\n self,\n root: str,\n version: str = \"2021_train\",\n target_type: Union[List[str], str] = \"full\",\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n download: bool = False,\n ) -> None:\n self.version = verify_str_arg(version, \"version\", DATASET_URLS.keys())\n\n super().__init__(os.path.join(root, version), transform=transform, target_transform=target_transform)\n\n os.makedirs(root, exist_ok=True)\n if download:\n self.download()\n\n if not self._check_integrity():\n raise RuntimeError(\"Dataset not found or corrupted. You can use download=True to download it\")\n\n self.all_categories: List[str] = []\n\n # map: category type -> name of category -> index\n self.categories_index: Dict[str, Dict[str, int]] = {}\n\n # list indexed by category id, containing mapping from category type -> index\n self.categories_map: List[Dict[str, int]] = []\n\n if not isinstance(target_type, list):\n target_type = [target_type]\n if self.version[:4] == \"2021\":\n self.target_type = [verify_str_arg(t, \"target_type\", (\"full\", *CATEGORIES_2021)) for t in target_type]\n self._init_2021()\n else:\n self.target_type = [verify_str_arg(t, \"target_type\", (\"full\", \"super\")) for t in target_type]\n self._init_pre2021()\n\n # index of all files: (full category id, filename)\n self.index: List[Tuple[int, str]] = []\n\n for dir_index, dir_name in enumerate(self.all_categories):\n files = os.listdir(os.path.join(self.root, dir_name))\n for fname in files:\n self.index.append((dir_index, fname))\n\n def _init_2021(self) -> None:\n \"\"\"Initialize based on 2021 layout\"\"\"\n\n self.all_categories = sorted(os.listdir(self.root))\n\n # map: category type -> name of category -> index\n self.categories_index = {k: {} for k in CATEGORIES_2021}\n\n for dir_index, dir_name in enumerate(self.all_categories):\n pieces = dir_name.split(\"_\")\n if len(pieces) != 8:\n raise RuntimeError(f\"Unexpected category name {dir_name}, wrong number of pieces\")\n if pieces[0] != f\"{dir_index:05d}\":\n raise RuntimeError(f\"Unexpected category id {pieces[0]}, expecting {dir_index:05d}\")\n cat_map = {}\n for cat, name in zip(CATEGORIES_2021, pieces[1:7]):\n if name in self.categories_index[cat]:\n cat_id = self.categories_index[cat][name]\n else:\n cat_id = len(self.categories_index[cat])\n self.categories_index[cat][name] = cat_id\n cat_map[cat] = cat_id\n 
self.categories_map.append(cat_map)\n\n def _init_pre2021(self) -> None:\n \"\"\"Initialize based on 2017-2019 layout\"\"\"\n\n # map: category type -> name of category -> index\n self.categories_index = {\"super\": {}}\n\n cat_index = 0\n super_categories = sorted(os.listdir(self.root))\n for sindex, scat in enumerate(super_categories):\n self.categories_index[\"super\"][scat] = sindex\n subcategories = sorted(os.listdir(os.path.join(self.root, scat)))\n for subcat in subcategories:\n if self.version == \"2017\":\n # this version does not use ids as directory names\n subcat_i = cat_index\n cat_index += 1\n else:\n try:\n subcat_i = int(subcat)\n except ValueError:\n raise RuntimeError(f\"Unexpected non-numeric dir name: {subcat}\")\n if subcat_i >= len(self.categories_map):\n old_len = len(self.categories_map)\n self.categories_map.extend([{}] * (subcat_i - old_len + 1))\n self.all_categories.extend([\"\"] * (subcat_i - old_len + 1))\n if self.categories_map[subcat_i]:\n raise RuntimeError(f\"Duplicate category {subcat}\")\n self.categories_map[subcat_i] = {\"super\": sindex}\n self.all_categories[subcat_i] = os.path.join(scat, subcat)\n\n # validate the dictionary\n for cindex, c in enumerate(self.categories_map):\n if not c:\n raise RuntimeError(f\"Missing category {cindex}\")\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (image, target) where the type of target specified by target_type.\n \"\"\"\n\n cat_id, fname = self.index[index]\n img = Image.open(os.path.join(self.root, self.all_categories[cat_id], fname))\n\n target: Any = []\n for t in self.target_type:\n if t == \"full\":\n target.append(cat_id)\n else:\n target.append(self.categories_map[cat_id][t])\n target = tuple(target) if len(target) > 1 else target[0]\n\n if self.transform is not None:\n img = self.transform(img)\n\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return img, target\n\n def __len__(self) -> int:\n return len(self.index)\n\n def category_name(self, category_type: str, category_id: int) -> str:\n \"\"\"\n Args:\n category_type(str): one of \"full\", \"kingdom\", \"phylum\", \"class\", \"order\", \"family\", \"genus\" or \"super\"\n category_id(int): an index (class id) from this category\n\n Returns:\n the name of the category\n \"\"\"\n if category_type == \"full\":\n return self.all_categories[category_id]\n else:\n if category_type not in self.categories_index:\n raise ValueError(f\"Invalid category type '{category_type}'\")\n else:\n for name, id in self.categories_index[category_type].items():\n if id == category_id:\n return name\n raise ValueError(f\"Invalid category id {category_id} for {category_type}\")\n\n def _check_integrity(self) -> bool:\n return os.path.exists(self.root) and len(os.listdir(self.root)) > 0\n\n def download(self) -> None:\n if self._check_integrity():\n raise RuntimeError(\n f\"The directory {self.root} already exists. 
\"\n f\"If you want to re-download or re-extract the images, delete the directory.\"\n )\n\n base_root = os.path.dirname(self.root)\n\n download_and_extract_archive(\n DATASET_URLS[self.version], base_root, filename=f\"{self.version}.tgz\", md5=DATASET_MD5[self.version]\n )\n\n orig_dir_name = os.path.join(base_root, os.path.basename(DATASET_URLS[self.version]).rstrip(\".tar.gz\"))\n if not os.path.exists(orig_dir_name):\n raise RuntimeError(f\"Unable to find downloaded files at {orig_dir_name}\")\n os.rename(orig_dir_name, self.root)\n print(f\"Dataset version '{self.version}' has been downloaded and prepared for use\")\n", "path": "torchvision/datasets/inaturalist.py"}]} | 4,066 | 331 |
gh_patches_debug_32193 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error when trying to reset password of other user
## Steps to reproduce
1. Set up another Mathesar user (other than the one you're logged in as).
1. Edit the other user and try to reset their password.
1. Observe this error message:

An API request is made to `/api/ui/v0/users/2/password_reset/` which returns a Django error
> AttributeError at /api/ui/v0/users/2/password_reset/
>
> 'PasswordResetSerializer' object has no attribute 'validate_password'
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/ui/v0/users/2/password_reset/
Django Version: 4.2.10
Python Version: 3.9.19
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'whitenoise.runserver_nostatic',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'drf_spectacular',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 63, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/ui/viewsets/users.py", line 29, in password_reset
serializer.is_valid(raise_exception=True)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 235, in is_valid
raise ValidationError(self.errors)
File "/code/mathesar/api/exceptions/mixins.py", line 98, in errors
pretty_errors = self.build_pretty_errors(ugly_errors)
File "/code/mathesar/api/exceptions/mixins.py", line 64, in build_pretty_errors
pretty.extend(self.get_field_error_entries(errors[error_type], field))
File "/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py", line 180, in get_field_error_entries
return [self.get_field_error_entry(error, field) for error in errors]
File "/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py", line 180, in <listcomp>
return [self.get_field_error_entry(error, field) for error in errors]
File "/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py", line 168, in get_field_error_entry
validator = getattr(self, "validate_%s" % field.field_name)
Exception Type: AttributeError at /api/ui/v0/users/2/password_reset/
Exception Value: 'PasswordResetSerializer' object has no attribute 'validate_password'
```
</details>
I can reproduce this on the latest develop branch as well as the most recent release (Mathesar 0.1.6).
</issue>
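The last frames of the traceback pinpoint the mechanism: `rest_framework_friendly_errors` builds its error payload by calling `getattr(self, "validate_%s" % field.field_name)` with no fallback, so a serializer that attaches a field-level validator (here Django's `validate_password`) without also defining a matching `validate_<field>` method blows up while *formatting* the validation error rather than while validating. Below is a minimal standalone sketch of one way around it, moving the check into an explicit method instead of `validators=[...]`; it illustrates the pattern and is not necessarily the project's exact patch.

```python
from django.contrib.auth.password_validation import validate_password
from django.core.exceptions import ValidationError as DjangoValidationError
from rest_framework import serializers


class PasswordResetSerializer(serializers.Serializer):
    # No field-level validator: the friendly-errors mixin looks up
    # validate_<field_name> on the serializer while rendering field errors.
    password = serializers.CharField(write_only=True, required=True)

    def validate_password(self, value):
        try:
            validate_password(value)
        except DjangoValidationError as exc:
            raise serializers.ValidationError(exc.messages)
        return value
```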
<code>
[start of mathesar/api/ui/serializers/users.py]
1 from django.contrib.auth.password_validation import validate_password
2 from rest_access_policy import FieldAccessMixin, PermittedPkRelatedField
3 from rest_framework import serializers
4
5 from mathesar.api.db.permissions.database import DatabaseAccessPolicy
6 from mathesar.api.db.permissions.schema import SchemaAccessPolicy
7 from mathesar.api.exceptions.mixins import MathesarErrorMessageMixin
8 from mathesar.api.exceptions.validation_exceptions.exceptions import IncorrectOldPassword
9 from mathesar.api.ui.permissions.users import UserAccessPolicy
10 from mathesar.models.base import Database, Schema
11 from mathesar.models.users import User, DatabaseRole, SchemaRole
12
13
14 class NestedDatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
15 class Meta:
16 model = DatabaseRole
17 fields = ['id', 'database', 'role']
18
19
20 class NestedSchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
21 class Meta:
22 model = SchemaRole
23 fields = ['id', 'schema', 'role']
24
25
26 class UserSerializer(MathesarErrorMessageMixin, FieldAccessMixin, serializers.ModelSerializer):
27 database_roles = NestedDatabaseRoleSerializer(many=True, required=False)
28 schema_roles = NestedSchemaRoleSerializer(many=True, required=False)
29 access_policy = UserAccessPolicy
30
31 class Meta:
32 model = User
33 fields = [
34 'id',
35 'full_name',
36 'short_name',
37 'username',
38 'password',
39 'email',
40 'is_superuser',
41 'database_roles',
42 'schema_roles',
43 'display_language'
44 ]
45 extra_kwargs = {
46 'password': {'write_only': True},
47 'database_roles': {'read_only': True},
48 'schema_roles': {'read_only': True}
49 }
50
51 def get_fields(self):
52 fields = super().get_fields()
53 request = self.context.get("request", None)
54 if not hasattr(request, 'parser_context'):
55 return fields
56 kwargs = request.parser_context.get('kwargs')
57 if kwargs:
58 user_pk = kwargs.get('pk')
59 if user_pk:
60 if request.user.id == int(user_pk) or not request.user.is_superuser:
61 fields["is_superuser"].read_only = True
62 return fields
63
64 def create(self, validated_data):
65 password = validated_data.pop('password')
66 user = User(**validated_data)
67 user.password_change_needed = True
68 user.set_password(password)
69 user.save()
70 return user
71
72
73 class ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):
74 password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
75 old_password = serializers.CharField(write_only=True, required=True)
76
77 def validate_old_password(self, value):
78 user = self.context['request'].user
79 if user.check_password(value) is True:
80 return value
81 raise IncorrectOldPassword(field='old_password')
82
83 def update(self, instance, validated_data):
84 instance.set_password(validated_data['password'])
85 instance.save()
86 return instance
87
88
89 class PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):
90 password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
91
92
93 class DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
94 class Meta:
95 model = DatabaseRole
96 fields = ['id', 'user', 'database', 'role']
97
98 # Restrict the list of databases to which the user has access to create a database role
99 # Refer https://rsinger86.github.io/drf-access-policy/policy_reuse/ for the usage of `PermittedPkRelatedField`
100 database = PermittedPkRelatedField(
101 access_policy=DatabaseAccessPolicy,
102 queryset=Database.current_objects.all()
103 )
104
105
106 class SchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
107 class Meta:
108 model = SchemaRole
109 fields = ['id', 'user', 'schema', 'role']
110
111 schema = PermittedPkRelatedField(
112 access_policy=SchemaAccessPolicy,
113 queryset=Schema.current_objects.all()
114 )
115
[end of mathesar/api/ui/serializers/users.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mathesar/api/ui/serializers/users.py b/mathesar/api/ui/serializers/users.py
--- a/mathesar/api/ui/serializers/users.py
+++ b/mathesar/api/ui/serializers/users.py
@@ -1,4 +1,5 @@
from django.contrib.auth.password_validation import validate_password
+from django.core.exceptions import ValidationError as DjangoValidationError
from rest_access_policy import FieldAccessMixin, PermittedPkRelatedField
from rest_framework import serializers
@@ -71,7 +72,7 @@
class ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):
- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
+ password = serializers.CharField(write_only=True, required=True)
old_password = serializers.CharField(write_only=True, required=True)
def validate_old_password(self, value):
@@ -80,6 +81,13 @@
return value
raise IncorrectOldPassword(field='old_password')
+ def validate_password(self, value):
+ try:
+ validate_password(value)
+ except DjangoValidationError as e:
+ raise e
+ return value
+
def update(self, instance, validated_data):
instance.set_password(validated_data['password'])
instance.save()
@@ -87,7 +95,7 @@
class PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):
- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
+ password = serializers.CharField(write_only=True, required=True)
class DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
| {"golden_diff": "diff --git a/mathesar/api/ui/serializers/users.py b/mathesar/api/ui/serializers/users.py\n--- a/mathesar/api/ui/serializers/users.py\n+++ b/mathesar/api/ui/serializers/users.py\n@@ -1,4 +1,5 @@\n from django.contrib.auth.password_validation import validate_password\n+from django.core.exceptions import ValidationError as DjangoValidationError\n from rest_access_policy import FieldAccessMixin, PermittedPkRelatedField\n from rest_framework import serializers\n \n@@ -71,7 +72,7 @@\n \n \n class ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n+ password = serializers.CharField(write_only=True, required=True)\n old_password = serializers.CharField(write_only=True, required=True)\n \n def validate_old_password(self, value):\n@@ -80,6 +81,13 @@\n return value\n raise IncorrectOldPassword(field='old_password')\n \n+ def validate_password(self, value):\n+ try:\n+ validate_password(value)\n+ except DjangoValidationError as e:\n+ raise e\n+ return value\n+\n def update(self, instance, validated_data):\n instance.set_password(validated_data['password'])\n instance.save()\n@@ -87,7 +95,7 @@\n \n \n class PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n+ password = serializers.CharField(write_only=True, required=True)\n \n \n class DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n", "issue": "Error when trying to reset password of other user\n## Steps to reproduce\n\n1. Set up another Mathesar user (other than the one you're logged in as).\n\n1. Edit the another user and try to reset their password.\n\n1. 
Observe this error message:\n\n \n \n An API request is made to `/api/ui/v0/users/2/password_reset/` which returns a Django error\n\n > AttributeError at /api/ui/v0/users/2/password_reset/\n >\n > 'PasswordResetSerializer' object has no attribute 'validate_password'\n\n <details>\n <summary>Traceback</summary>\n\n ```\n Environment:\n\n\n Request Method: POST\n Request URL: http://localhost:8000/api/ui/v0/users/2/password_reset/\n\n Django Version: 4.2.10\n Python Version: 3.9.19\n Installed Applications:\n ['django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'whitenoise.runserver_nostatic',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'django_filters',\n 'django_property_filter',\n 'drf_spectacular',\n 'mathesar']\n Installed Middleware:\n ['django.middleware.security.SecurityMiddleware',\n 'whitenoise.middleware.WhiteNoiseMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'mathesar.middleware.CursorClosedHandlerMiddleware',\n 'mathesar.middleware.PasswordChangeNeededMiddleware',\n 'django_userforeignkey.middleware.UserForeignKeyMiddleware',\n 'django_request_cache.middleware.RequestCacheMiddleware']\n\n\n\n Traceback (most recent call last):\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py\", line 55, in inner\n response = get_response(request)\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py\", line 197, in _get_response\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\n File \"/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py\", line 56, in wrapper_view\n return view_func(*args, **kwargs)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py\", line 125, in view\n return self.dispatch(request, *args, **kwargs)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 509, in dispatch\n response = self.handle_exception(exc)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 466, in handle_exception\n response = exception_handler(exc, context)\n File \"/code/mathesar/exception_handlers.py\", line 63, in mathesar_exception_handler\n raise exc\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 506, in dispatch\n response = handler(request, *args, **kwargs)\n File \"/code/mathesar/api/ui/viewsets/users.py\", line 29, in password_reset\n serializer.is_valid(raise_exception=True)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py\", line 235, in is_valid\n raise ValidationError(self.errors)\n File \"/code/mathesar/api/exceptions/mixins.py\", line 98, in errors\n pretty_errors = self.build_pretty_errors(ugly_errors)\n File \"/code/mathesar/api/exceptions/mixins.py\", line 64, in build_pretty_errors\n pretty.extend(self.get_field_error_entries(errors[error_type], field))\n File \"/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py\", line 180, in get_field_error_entries\n return [self.get_field_error_entry(error, field) for error in errors]\n File 
\"/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py\", line 180, in <listcomp>\n return [self.get_field_error_entry(error, field) for error in errors]\n File \"/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py\", line 168, in get_field_error_entry\n validator = getattr(self, \"validate_%s\" % field.field_name)\n\n Exception Type: AttributeError at /api/ui/v0/users/2/password_reset/\n Exception Value: 'PasswordResetSerializer' object has no attribute 'validate_password'\n ```\n\n </details>\n\nI can reproduce this on the latest develop branch as well as the most recent release (Mathesar 0.1.6).\n\n\n", "before_files": [{"content": "from django.contrib.auth.password_validation import validate_password\nfrom rest_access_policy import FieldAccessMixin, PermittedPkRelatedField\nfrom rest_framework import serializers\n\nfrom mathesar.api.db.permissions.database import DatabaseAccessPolicy\nfrom mathesar.api.db.permissions.schema import SchemaAccessPolicy\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.exceptions.validation_exceptions.exceptions import IncorrectOldPassword\nfrom mathesar.api.ui.permissions.users import UserAccessPolicy\nfrom mathesar.models.base import Database, Schema\nfrom mathesar.models.users import User, DatabaseRole, SchemaRole\n\n\nclass NestedDatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = DatabaseRole\n fields = ['id', 'database', 'role']\n\n\nclass NestedSchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = SchemaRole\n fields = ['id', 'schema', 'role']\n\n\nclass UserSerializer(MathesarErrorMessageMixin, FieldAccessMixin, serializers.ModelSerializer):\n database_roles = NestedDatabaseRoleSerializer(many=True, required=False)\n schema_roles = NestedSchemaRoleSerializer(many=True, required=False)\n access_policy = UserAccessPolicy\n\n class Meta:\n model = User\n fields = [\n 'id',\n 'full_name',\n 'short_name',\n 'username',\n 'password',\n 'email',\n 'is_superuser',\n 'database_roles',\n 'schema_roles',\n 'display_language'\n ]\n extra_kwargs = {\n 'password': {'write_only': True},\n 'database_roles': {'read_only': True},\n 'schema_roles': {'read_only': True}\n }\n\n def get_fields(self):\n fields = super().get_fields()\n request = self.context.get(\"request\", None)\n if not hasattr(request, 'parser_context'):\n return fields\n kwargs = request.parser_context.get('kwargs')\n if kwargs:\n user_pk = kwargs.get('pk')\n if user_pk:\n if request.user.id == int(user_pk) or not request.user.is_superuser:\n fields[\"is_superuser\"].read_only = True\n return fields\n\n def create(self, validated_data):\n password = validated_data.pop('password')\n user = User(**validated_data)\n user.password_change_needed = True\n user.set_password(password)\n user.save()\n return user\n\n\nclass ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n old_password = serializers.CharField(write_only=True, required=True)\n\n def validate_old_password(self, value):\n user = self.context['request'].user\n if user.check_password(value) is True:\n return value\n raise IncorrectOldPassword(field='old_password')\n\n def update(self, instance, validated_data):\n instance.set_password(validated_data['password'])\n instance.save()\n return instance\n\n\nclass 
PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n\n\nclass DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = DatabaseRole\n fields = ['id', 'user', 'database', 'role']\n\n # Restrict the list of databases to which the user has access to create a database role\n # Refer https://rsinger86.github.io/drf-access-policy/policy_reuse/ for the usage of `PermittedPkRelatedField`\n database = PermittedPkRelatedField(\n access_policy=DatabaseAccessPolicy,\n queryset=Database.current_objects.all()\n )\n\n\nclass SchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = SchemaRole\n fields = ['id', 'user', 'schema', 'role']\n\n schema = PermittedPkRelatedField(\n access_policy=SchemaAccessPolicy,\n queryset=Schema.current_objects.all()\n )\n", "path": "mathesar/api/ui/serializers/users.py"}]} | 2,725 | 338 |
gh_patches_debug_16329 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3934 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't submit new team after changing image.
Can't believe this didn't come up yet. I noticed this while exploring [create.json.spt](https://github.com/gratipay/gratipay.com/blob/master/www/teams/create.json.spt) which inspires the new [edit.json.spt](https://github.com/gratipay/gratipay.com/pull/3923/files#diff-6).
The way it is written right now, we first write the team details to the db (with a unique generated `slug`) and _then_ try to save the team image. If a user uploads an image larger than 1 MB, or one that is not a jpg or png, the team creation won't be successful as far as the user is concerned, and he'll resubmit the team application form with an appropriate image. But when he does, we have already created a slug for that team name, resulting in the misleading message `Sorry, there is already a team using <slug>.` when in fact the `slug` only exists because we wrote the team details to the db first.
</issue>
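The report boils down to an ordering problem: the team row (and its unique slug) is inserted before the uploaded image has been validated, so a rejected image leaves the name "taken". A rough sketch of the safer ordering is below; `crop` and `insert_team` are injected placeholders standing in for the image-service call and the database insert in create.json.spt, not real functions from the codebase.

```python
def create_team(name, image, image_type, *, crop, insert_team):
    # Validate/crop the image *first*; this call is allowed to raise on bad input.
    large, small = crop(image, image_type)

    # Only once the image is known to be acceptable do we touch the database,
    # so a rejected upload never claims the team's slug.
    team = insert_team(name)
    return team, large, small
```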
<code>
[start of gratipay/utils/images.py]
1 import zipfile
2 from cStringIO import StringIO
3
4 import requests
5
6 def imgize(image, image_type):
7 large = None
8 small = None
9 crops = requests.post( 'http://gip.rocks/v1',
10 data=image,
11 headers={'Content-Type': image_type}
12 )
13 if crops.status_code == 200:
14 zf = zipfile.ZipFile(StringIO(crops.content))
15 large = zf.open('160').read()
16 small = zf.open('48').read()
17
18 return crops.status_code, large, small
[end of gratipay/utils/images.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/utils/images.py b/gratipay/utils/images.py
--- a/gratipay/utils/images.py
+++ b/gratipay/utils/images.py
@@ -8,11 +8,22 @@
small = None
crops = requests.post( 'http://gip.rocks/v1',
data=image,
- headers={'Content-Type': image_type}
- )
+ headers={'Content-Type': image_type})
+
if crops.status_code == 200:
zf = zipfile.ZipFile(StringIO(crops.content))
large = zf.open('160').read()
small = zf.open('48').read()
+ return large, small
+ elif crops.status_code == 413:
+ raise ImageTooLarge
+ elif crops.status_code == 415:
+ raise InvalidImageType
+ else:
+ raise UnknownImageError
+
+class ImageTooLarge(Exception): pass
+
+class InvalidImageType(Exception): pass
- return crops.status_code, large, small
\ No newline at end of file
+class UnknownImageError(Exception): pass
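With `imgize` now raising typed exceptions instead of returning the upstream HTTP status code, a caller can vet the upload *before* any team row (and therefore any slug) is created, which is exactly the ordering problem described in the issue. A hedged sketch of that call pattern follows; the wrapper function and its return convention are illustrative and not taken from create.json.spt.

```python
from gratipay.utils import images

def process_team_image(image_bytes, image_type):
    """Return ((large, small), None) on success or (None, user_facing_error) on failure."""
    try:
        large, small = images.imgize(image_bytes, image_type)
    except images.ImageTooLarge:
        return None, "Please upload an image smaller than 1 MB."
    except images.InvalidImageType:
        return None, "Please upload a JPEG or PNG image."
    except images.UnknownImageError:
        return None, "Something went wrong while processing the image; please try again."
    return (large, small), None
```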
| {"golden_diff": "diff --git a/gratipay/utils/images.py b/gratipay/utils/images.py\n--- a/gratipay/utils/images.py\n+++ b/gratipay/utils/images.py\n@@ -8,11 +8,22 @@\n small = None\n crops = requests.post( 'http://gip.rocks/v1',\n data=image,\n- headers={'Content-Type': image_type}\n- )\n+ headers={'Content-Type': image_type})\n+\n if crops.status_code == 200:\n zf = zipfile.ZipFile(StringIO(crops.content))\n large = zf.open('160').read()\n small = zf.open('48').read()\n+ return large, small\n+ elif crops.status_code == 413:\n+ raise ImageTooLarge\n+ elif crops.status_code == 415:\n+ raise InvalidImageType\n+ else:\n+ raise UnknownImageError\n+\n+class ImageTooLarge(Exception): pass\n+\n+class InvalidImageType(Exception): pass\n \n- return crops.status_code, large, small\n\\ No newline at end of file\n+class UnknownImageError(Exception): pass\n", "issue": "Can't submit new team after changing image.\nCan't believe this didn't come up yet. I noticed this while exploring [create.json.spt](https://github.com/gratipay/gratipay.com/blob/master/www/teams/create.json.spt) which inspires the new [edit.json.spt](https://github.com/gratipay/gratipay.com/pull/3923/files#diff-6). \n\nThe way it is written right now, we first write the team details to the db (with a unique generated `slug`) and _then_ try to save the team image. If a user uploads an image of size > 1Mb or an image which is not a jpg or png, the team creation won't be successful as far as the user is concerned and he'll resubmit the team application form with an appropriate image. But when he does again, we would have already created a slug for that team name resulting in a misleading message of `Sorry, there is already a team using <slug>.` when in fact the `slug` was created because we wrote the team details to the db first.\n\n", "before_files": [{"content": "import zipfile\nfrom cStringIO import StringIO\n\nimport requests\n\ndef imgize(image, image_type):\n large = None\n small = None\n crops = requests.post( 'http://gip.rocks/v1',\n data=image,\n headers={'Content-Type': image_type}\n )\n if crops.status_code == 200:\n zf = zipfile.ZipFile(StringIO(crops.content))\n large = zf.open('160').read()\n small = zf.open('48').read()\n\n return crops.status_code, large, small", "path": "gratipay/utils/images.py"}]} | 913 | 247 |
gh_patches_debug_24991 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2167 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Follow Request bug: looks as if you already follow a person if you go to profile of requester
When an account with moderated follows has an incoming follow request, the following happens:
You see a notification about someone wanting to follow you. You go to their profile to see what they are like and... it looks like you already follow this person? The "Follow" button erroneously reads "Unfollow".
Additionally, there is no other indication of a pending follow request while on their profile.
**To Reproduce**
1. Set your account to prompt follow requests
2. Get a follow request
3. Go to the profile of the requester
4. The follow button now reads "Unfollow", implying you follow them already (you don't)
As seen on bookwyrm.social
(anyway thanks for this excellent network of book nerds mouse!!)
</issue>
<code>
[start of bookwyrm/models/relationship.py]
1 """ defines relationships between users """
2 from django.apps import apps
3 from django.core.cache import cache
4 from django.db import models, transaction, IntegrityError
5 from django.db.models import Q
6
7 from bookwyrm import activitypub
8 from .activitypub_mixin import ActivitypubMixin, ActivityMixin
9 from .activitypub_mixin import generate_activity
10 from .base_model import BookWyrmModel
11 from . import fields
12
13
14 class UserRelationship(BookWyrmModel):
15 """many-to-many through table for followers"""
16
17 user_subject = fields.ForeignKey(
18 "User",
19 on_delete=models.PROTECT,
20 related_name="%(class)s_user_subject",
21 activitypub_field="actor",
22 )
23 user_object = fields.ForeignKey(
24 "User",
25 on_delete=models.PROTECT,
26 related_name="%(class)s_user_object",
27 activitypub_field="object",
28 )
29
30 @property
31 def privacy(self):
32 """all relationships are handled directly with the participants"""
33 return "direct"
34
35 @property
36 def recipients(self):
37 """the remote user needs to recieve direct broadcasts"""
38 return [u for u in [self.user_subject, self.user_object] if not u.local]
39
40 def save(self, *args, **kwargs):
41 """clear the template cache"""
42 clear_cache(self.user_subject, self.user_object)
43 super().save(*args, **kwargs)
44
45 def delete(self, *args, **kwargs):
46 """clear the template cache"""
47 clear_cache(self.user_subject, self.user_object)
48 super().delete(*args, **kwargs)
49
50 class Meta:
51 """relationships should be unique"""
52
53 abstract = True
54 constraints = [
55 models.UniqueConstraint(
56 fields=["user_subject", "user_object"], name="%(class)s_unique"
57 ),
58 models.CheckConstraint(
59 check=~models.Q(user_subject=models.F("user_object")),
60 name="%(class)s_no_self",
61 ),
62 ]
63
64 def get_remote_id(self):
65 """use shelf identifier in remote_id"""
66 base_path = self.user_subject.remote_id
67 return f"{base_path}#follows/{self.id}"
68
69
70 class UserFollows(ActivityMixin, UserRelationship):
71 """Following a user"""
72
73 status = "follows"
74
75 def to_activity(self): # pylint: disable=arguments-differ
76 """overrides default to manually set serializer"""
77 return activitypub.Follow(**generate_activity(self))
78
79 def save(self, *args, **kwargs):
80 """really really don't let a user follow someone who blocked them"""
81 # blocking in either direction is a no-go
82 if UserBlocks.objects.filter(
83 Q(
84 user_subject=self.user_subject,
85 user_object=self.user_object,
86 )
87 | Q(
88 user_subject=self.user_object,
89 user_object=self.user_subject,
90 )
91 ).exists():
92 raise IntegrityError(
93 "Attempting to follow blocked user", self.user_subject, self.user_object
94 )
95 # don't broadcast this type of relationship -- accepts and requests
96 # are handled by the UserFollowRequest model
97 super().save(*args, broadcast=False, **kwargs)
98
99 @classmethod
100 def from_request(cls, follow_request):
101 """converts a follow request into a follow relationship"""
102 obj, _ = cls.objects.get_or_create(
103 user_subject=follow_request.user_subject,
104 user_object=follow_request.user_object,
105 remote_id=follow_request.remote_id,
106 )
107 return obj
108
109
110 class UserFollowRequest(ActivitypubMixin, UserRelationship):
111 """following a user requires manual or automatic confirmation"""
112
113 status = "follow_request"
114 activity_serializer = activitypub.Follow
115
116 def save(self, *args, broadcast=True, **kwargs): # pylint: disable=arguments-differ
117 """make sure the follow or block relationship doesn't already exist"""
118 # if there's a request for a follow that already exists, accept it
119 # without changing the local database state
120 if UserFollows.objects.filter(
121 user_subject=self.user_subject,
122 user_object=self.user_object,
123 ).exists():
124 self.accept(broadcast_only=True)
125 return
126
127 # blocking in either direction is a no-go
128 if UserBlocks.objects.filter(
129 Q(
130 user_subject=self.user_subject,
131 user_object=self.user_object,
132 )
133 | Q(
134 user_subject=self.user_object,
135 user_object=self.user_subject,
136 )
137 ).exists():
138 raise IntegrityError(
139 "Attempting to follow blocked user", self.user_subject, self.user_object
140 )
141 super().save(*args, **kwargs)
142
143 if broadcast and self.user_subject.local and not self.user_object.local:
144 self.broadcast(self.to_activity(), self.user_subject)
145
146 if self.user_object.local:
147 manually_approves = self.user_object.manually_approves_followers
148 if not manually_approves:
149 self.accept()
150
151 model = apps.get_model("bookwyrm.Notification", require_ready=True)
152 notification_type = "FOLLOW_REQUEST" if manually_approves else "FOLLOW"
153 model.objects.create(
154 user=self.user_object,
155 related_user=self.user_subject,
156 notification_type=notification_type,
157 )
158
159 def get_accept_reject_id(self, status):
160 """get id for sending an accept or reject of a local user"""
161
162 base_path = self.user_object.remote_id
163 status_id = self.id or 0
164 return f"{base_path}#{status}/{status_id}"
165
166 def accept(self, broadcast_only=False):
167 """turn this request into the real deal"""
168 user = self.user_object
169 if not self.user_subject.local:
170 activity = activitypub.Accept(
171 id=self.get_accept_reject_id(status="accepts"),
172 actor=self.user_object.remote_id,
173 object=self.to_activity(),
174 ).serialize()
175 self.broadcast(activity, user)
176 if broadcast_only:
177 return
178
179 with transaction.atomic():
180 UserFollows.from_request(self)
181 if self.id:
182 self.delete()
183
184 def reject(self):
185 """generate a Reject for this follow request"""
186 if self.user_object.local:
187 activity = activitypub.Reject(
188 id=self.get_accept_reject_id(status="rejects"),
189 actor=self.user_object.remote_id,
190 object=self.to_activity(),
191 ).serialize()
192 self.broadcast(activity, self.user_object)
193
194 self.delete()
195
196
197 class UserBlocks(ActivityMixin, UserRelationship):
198 """prevent another user from following you and seeing your posts"""
199
200 status = "blocks"
201 activity_serializer = activitypub.Block
202
203 def save(self, *args, **kwargs):
204 """remove follow or follow request rels after a block is created"""
205 super().save(*args, **kwargs)
206
207 UserFollows.objects.filter(
208 Q(user_subject=self.user_subject, user_object=self.user_object)
209 | Q(user_subject=self.user_object, user_object=self.user_subject)
210 ).delete()
211 UserFollowRequest.objects.filter(
212 Q(user_subject=self.user_subject, user_object=self.user_object)
213 | Q(user_subject=self.user_object, user_object=self.user_subject)
214 ).delete()
215
216
217 def clear_cache(user_subject, user_object):
218 """clear relationship cache"""
219 cache.delete_many(
220 [
221 f"relationship-{user_subject.id}-{user_object.id}",
222 f"relationship-{user_object.id}-{user_subject.id}",
223 ]
224 )
225
[end of bookwyrm/models/relationship.py]
[start of bookwyrm/templatetags/interaction.py]
1 """ template filters for status interaction buttons """
2 from django import template
3
4 from bookwyrm import models
5 from bookwyrm.utils.cache import get_or_set
6
7
8 register = template.Library()
9
10
11 @register.filter(name="liked")
12 def get_user_liked(user, status):
13 """did the given user fav a status?"""
14 return get_or_set(
15 f"fav-{user.id}-{status.id}",
16 lambda u, s: models.Favorite.objects.filter(user=u, status=s).exists(),
17 user,
18 status,
19 timeout=259200,
20 )
21
22
23 @register.filter(name="boosted")
24 def get_user_boosted(user, status):
25 """did the given user fav a status?"""
26 return get_or_set(
27 f"boost-{user.id}-{status.id}",
28 lambda u: status.boosters.filter(user=u).exists(),
29 user,
30 timeout=259200,
31 )
32
33
34 @register.filter(name="saved")
35 def get_user_saved_lists(user, book_list):
36 """did the user save a list"""
37 return user.saved_lists.filter(id=book_list.id).exists()
38
39
40 @register.simple_tag(takes_context=True)
41 def get_relationship(context, user_object):
42 """caches the relationship between the logged in user and another user"""
43 user = context["request"].user
44 return get_or_set(
45 f"relationship-{user.id}-{user_object.id}",
46 get_relationship_name,
47 user,
48 user_object,
49 timeout=259200,
50 )
51
52
53 def get_relationship_name(user, user_object):
54 """returns the relationship type"""
55 types = {
56 "is_following": False,
57 "is_follow_pending": False,
58 "is_blocked": False,
59 }
60 if user_object in user.blocks.all():
61 types["is_blocked"] = True
62 elif user_object in user.following.all():
63 types["is_following"] = True
64 elif user_object in user.follower_requests.all():
65 types["is_follow_pending"] = True
66 return types
67
[end of bookwyrm/templatetags/interaction.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/models/relationship.py b/bookwyrm/models/relationship.py
--- a/bookwyrm/models/relationship.py
+++ b/bookwyrm/models/relationship.py
@@ -218,7 +218,7 @@
"""clear relationship cache"""
cache.delete_many(
[
- f"relationship-{user_subject.id}-{user_object.id}",
- f"relationship-{user_object.id}-{user_subject.id}",
+ f"cached-relationship-{user_subject.id}-{user_object.id}",
+ f"cached-relationship-{user_object.id}-{user_subject.id}",
]
)
diff --git a/bookwyrm/templatetags/interaction.py b/bookwyrm/templatetags/interaction.py
--- a/bookwyrm/templatetags/interaction.py
+++ b/bookwyrm/templatetags/interaction.py
@@ -42,7 +42,7 @@
"""caches the relationship between the logged in user and another user"""
user = context["request"].user
return get_or_set(
- f"relationship-{user.id}-{user_object.id}",
+ f"cached-relationship-{user.id}-{user_object.id}",
get_relationship_name,
user,
user_object,
@@ -61,6 +61,6 @@
types["is_blocked"] = True
elif user_object in user.following.all():
types["is_following"] = True
- elif user_object in user.follower_requests.all():
+ elif user in user_object.follower_requests.all():
types["is_follow_pending"] = True
return types
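Two separate things change in this patch. First, the membership test is flipped so that `is_follow_pending` describes an *outgoing* request: a pending `UserFollowRequest` row with the viewer as `user_subject` and the profile owner as `user_object` shows up on the profile owner's reverse accessor, hence `user in user_object.follower_requests.all()`; the old check matched *incoming* requests, which is precisely the scenario in the bug report. Second, the cache key prefix is renamed, presumably so that entries cached under the old, incorrect logic are never read again. A small sketch of the direction semantics, where `viewer` and `profile_owner` are hypothetical bookwyrm `User` instances:

```python
def follow_state(viewer, profile_owner):
    """Sketch only: viewer is the logged-in user, profile_owner the page being viewed."""
    return {
        # viewer has asked to follow profile_owner (outgoing request, shown as pending)
        "is_follow_pending": viewer in profile_owner.follower_requests.all(),
        # profile_owner has asked to follow viewer (incoming request; the old code
        # wrongly used this direction to drive the follow button)
        "has_incoming_request": profile_owner in viewer.follower_requests.all(),
    }
```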
| {"golden_diff": "diff --git a/bookwyrm/models/relationship.py b/bookwyrm/models/relationship.py\n--- a/bookwyrm/models/relationship.py\n+++ b/bookwyrm/models/relationship.py\n@@ -218,7 +218,7 @@\n \"\"\"clear relationship cache\"\"\"\n cache.delete_many(\n [\n- f\"relationship-{user_subject.id}-{user_object.id}\",\n- f\"relationship-{user_object.id}-{user_subject.id}\",\n+ f\"cached-relationship-{user_subject.id}-{user_object.id}\",\n+ f\"cached-relationship-{user_object.id}-{user_subject.id}\",\n ]\n )\ndiff --git a/bookwyrm/templatetags/interaction.py b/bookwyrm/templatetags/interaction.py\n--- a/bookwyrm/templatetags/interaction.py\n+++ b/bookwyrm/templatetags/interaction.py\n@@ -42,7 +42,7 @@\n \"\"\"caches the relationship between the logged in user and another user\"\"\"\n user = context[\"request\"].user\n return get_or_set(\n- f\"relationship-{user.id}-{user_object.id}\",\n+ f\"cached-relationship-{user.id}-{user_object.id}\",\n get_relationship_name,\n user,\n user_object,\n@@ -61,6 +61,6 @@\n types[\"is_blocked\"] = True\n elif user_object in user.following.all():\n types[\"is_following\"] = True\n- elif user_object in user.follower_requests.all():\n+ elif user in user_object.follower_requests.all():\n types[\"is_follow_pending\"] = True\n return types\n", "issue": "Follow Request bug: looks as if you already follow a person if you go to profile of requester\nWhen an account with moderated follows has a follow request incoming the following happens:\r\n\r\nYou see a notification about someone wanting to follow you. You go to their profile to see what they are like and... it looks like you already follow this person? Because the \"Follow\" button erroneously reads \"Unfollow\". \r\n\r\nAdditionally, there is no other indication of a pending follow request while on their profile.\r\n\r\n**To Reproduce**\r\n\r\n1. Set your account to prompt follow requests\r\n2. Get a follow request\r\n3. Go to the profile of the requester\r\n4. The follow button now reads \"Unfollow\", implying you follow them already (you don't)\r\n\r\nAs seen on bookwyrm.social\r\n\r\n(anyway thanks for this excellent network of book nerds mouse!!)\r\n\n", "before_files": [{"content": "\"\"\" defines relationships between users \"\"\"\nfrom django.apps import apps\nfrom django.core.cache import cache\nfrom django.db import models, transaction, IntegrityError\nfrom django.db.models import Q\n\nfrom bookwyrm import activitypub\nfrom .activitypub_mixin import ActivitypubMixin, ActivityMixin\nfrom .activitypub_mixin import generate_activity\nfrom .base_model import BookWyrmModel\nfrom . 
import fields\n\n\nclass UserRelationship(BookWyrmModel):\n \"\"\"many-to-many through table for followers\"\"\"\n\n user_subject = fields.ForeignKey(\n \"User\",\n on_delete=models.PROTECT,\n related_name=\"%(class)s_user_subject\",\n activitypub_field=\"actor\",\n )\n user_object = fields.ForeignKey(\n \"User\",\n on_delete=models.PROTECT,\n related_name=\"%(class)s_user_object\",\n activitypub_field=\"object\",\n )\n\n @property\n def privacy(self):\n \"\"\"all relationships are handled directly with the participants\"\"\"\n return \"direct\"\n\n @property\n def recipients(self):\n \"\"\"the remote user needs to recieve direct broadcasts\"\"\"\n return [u for u in [self.user_subject, self.user_object] if not u.local]\n\n def save(self, *args, **kwargs):\n \"\"\"clear the template cache\"\"\"\n clear_cache(self.user_subject, self.user_object)\n super().save(*args, **kwargs)\n\n def delete(self, *args, **kwargs):\n \"\"\"clear the template cache\"\"\"\n clear_cache(self.user_subject, self.user_object)\n super().delete(*args, **kwargs)\n\n class Meta:\n \"\"\"relationships should be unique\"\"\"\n\n abstract = True\n constraints = [\n models.UniqueConstraint(\n fields=[\"user_subject\", \"user_object\"], name=\"%(class)s_unique\"\n ),\n models.CheckConstraint(\n check=~models.Q(user_subject=models.F(\"user_object\")),\n name=\"%(class)s_no_self\",\n ),\n ]\n\n def get_remote_id(self):\n \"\"\"use shelf identifier in remote_id\"\"\"\n base_path = self.user_subject.remote_id\n return f\"{base_path}#follows/{self.id}\"\n\n\nclass UserFollows(ActivityMixin, UserRelationship):\n \"\"\"Following a user\"\"\"\n\n status = \"follows\"\n\n def to_activity(self): # pylint: disable=arguments-differ\n \"\"\"overrides default to manually set serializer\"\"\"\n return activitypub.Follow(**generate_activity(self))\n\n def save(self, *args, **kwargs):\n \"\"\"really really don't let a user follow someone who blocked them\"\"\"\n # blocking in either direction is a no-go\n if UserBlocks.objects.filter(\n Q(\n user_subject=self.user_subject,\n user_object=self.user_object,\n )\n | Q(\n user_subject=self.user_object,\n user_object=self.user_subject,\n )\n ).exists():\n raise IntegrityError(\n \"Attempting to follow blocked user\", self.user_subject, self.user_object\n )\n # don't broadcast this type of relationship -- accepts and requests\n # are handled by the UserFollowRequest model\n super().save(*args, broadcast=False, **kwargs)\n\n @classmethod\n def from_request(cls, follow_request):\n \"\"\"converts a follow request into a follow relationship\"\"\"\n obj, _ = cls.objects.get_or_create(\n user_subject=follow_request.user_subject,\n user_object=follow_request.user_object,\n remote_id=follow_request.remote_id,\n )\n return obj\n\n\nclass UserFollowRequest(ActivitypubMixin, UserRelationship):\n \"\"\"following a user requires manual or automatic confirmation\"\"\"\n\n status = \"follow_request\"\n activity_serializer = activitypub.Follow\n\n def save(self, *args, broadcast=True, **kwargs): # pylint: disable=arguments-differ\n \"\"\"make sure the follow or block relationship doesn't already exist\"\"\"\n # if there's a request for a follow that already exists, accept it\n # without changing the local database state\n if UserFollows.objects.filter(\n user_subject=self.user_subject,\n user_object=self.user_object,\n ).exists():\n self.accept(broadcast_only=True)\n return\n\n # blocking in either direction is a no-go\n if UserBlocks.objects.filter(\n Q(\n user_subject=self.user_subject,\n 
user_object=self.user_object,\n )\n | Q(\n user_subject=self.user_object,\n user_object=self.user_subject,\n )\n ).exists():\n raise IntegrityError(\n \"Attempting to follow blocked user\", self.user_subject, self.user_object\n )\n super().save(*args, **kwargs)\n\n if broadcast and self.user_subject.local and not self.user_object.local:\n self.broadcast(self.to_activity(), self.user_subject)\n\n if self.user_object.local:\n manually_approves = self.user_object.manually_approves_followers\n if not manually_approves:\n self.accept()\n\n model = apps.get_model(\"bookwyrm.Notification\", require_ready=True)\n notification_type = \"FOLLOW_REQUEST\" if manually_approves else \"FOLLOW\"\n model.objects.create(\n user=self.user_object,\n related_user=self.user_subject,\n notification_type=notification_type,\n )\n\n def get_accept_reject_id(self, status):\n \"\"\"get id for sending an accept or reject of a local user\"\"\"\n\n base_path = self.user_object.remote_id\n status_id = self.id or 0\n return f\"{base_path}#{status}/{status_id}\"\n\n def accept(self, broadcast_only=False):\n \"\"\"turn this request into the real deal\"\"\"\n user = self.user_object\n if not self.user_subject.local:\n activity = activitypub.Accept(\n id=self.get_accept_reject_id(status=\"accepts\"),\n actor=self.user_object.remote_id,\n object=self.to_activity(),\n ).serialize()\n self.broadcast(activity, user)\n if broadcast_only:\n return\n\n with transaction.atomic():\n UserFollows.from_request(self)\n if self.id:\n self.delete()\n\n def reject(self):\n \"\"\"generate a Reject for this follow request\"\"\"\n if self.user_object.local:\n activity = activitypub.Reject(\n id=self.get_accept_reject_id(status=\"rejects\"),\n actor=self.user_object.remote_id,\n object=self.to_activity(),\n ).serialize()\n self.broadcast(activity, self.user_object)\n\n self.delete()\n\n\nclass UserBlocks(ActivityMixin, UserRelationship):\n \"\"\"prevent another user from following you and seeing your posts\"\"\"\n\n status = \"blocks\"\n activity_serializer = activitypub.Block\n\n def save(self, *args, **kwargs):\n \"\"\"remove follow or follow request rels after a block is created\"\"\"\n super().save(*args, **kwargs)\n\n UserFollows.objects.filter(\n Q(user_subject=self.user_subject, user_object=self.user_object)\n | Q(user_subject=self.user_object, user_object=self.user_subject)\n ).delete()\n UserFollowRequest.objects.filter(\n Q(user_subject=self.user_subject, user_object=self.user_object)\n | Q(user_subject=self.user_object, user_object=self.user_subject)\n ).delete()\n\n\ndef clear_cache(user_subject, user_object):\n \"\"\"clear relationship cache\"\"\"\n cache.delete_many(\n [\n f\"relationship-{user_subject.id}-{user_object.id}\",\n f\"relationship-{user_object.id}-{user_subject.id}\",\n ]\n )\n", "path": "bookwyrm/models/relationship.py"}, {"content": "\"\"\" template filters for status interaction buttons \"\"\"\nfrom django import template\n\nfrom bookwyrm import models\nfrom bookwyrm.utils.cache import get_or_set\n\n\nregister = template.Library()\n\n\[email protected](name=\"liked\")\ndef get_user_liked(user, status):\n \"\"\"did the given user fav a status?\"\"\"\n return get_or_set(\n f\"fav-{user.id}-{status.id}\",\n lambda u, s: models.Favorite.objects.filter(user=u, status=s).exists(),\n user,\n status,\n timeout=259200,\n )\n\n\[email protected](name=\"boosted\")\ndef get_user_boosted(user, status):\n \"\"\"did the given user fav a status?\"\"\"\n return get_or_set(\n f\"boost-{user.id}-{status.id}\",\n lambda u: 
status.boosters.filter(user=u).exists(),\n user,\n timeout=259200,\n )\n\n\[email protected](name=\"saved\")\ndef get_user_saved_lists(user, book_list):\n \"\"\"did the user save a list\"\"\"\n return user.saved_lists.filter(id=book_list.id).exists()\n\n\[email protected]_tag(takes_context=True)\ndef get_relationship(context, user_object):\n \"\"\"caches the relationship between the logged in user and another user\"\"\"\n user = context[\"request\"].user\n return get_or_set(\n f\"relationship-{user.id}-{user_object.id}\",\n get_relationship_name,\n user,\n user_object,\n timeout=259200,\n )\n\n\ndef get_relationship_name(user, user_object):\n \"\"\"returns the relationship type\"\"\"\n types = {\n \"is_following\": False,\n \"is_follow_pending\": False,\n \"is_blocked\": False,\n }\n if user_object in user.blocks.all():\n types[\"is_blocked\"] = True\n elif user_object in user.following.all():\n types[\"is_following\"] = True\n elif user_object in user.follower_requests.all():\n types[\"is_follow_pending\"] = True\n return types\n", "path": "bookwyrm/templatetags/interaction.py"}]} | 3,420 | 348 |
gh_patches_debug_8456 | rasdani/github-patches | git_diff | microsoft__ptvsd-797 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add ability to launch the debugger in non-debug mode
Currently we can only launch the debugger in non-debug mode when using `-m`.
I'd like to have the same capability by importing PTVSD and invoking a function, similar to how debugging is started with the `debug` function in `debugger.py`.
Basically this is necessary to launch the debugger in non-debug mode when using a launcher script.
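For illustration, a minimal sketch of the kind of entry point being asked for; the `run` name and its signature are assumptions modelled on how `debug()` is invoked today, not an existing API:

```python
# Hypothetical launcher script. ptvsd.debugger.run() does not exist yet;
# the name and arguments below mirror debug() and are assumptions.
from ptvsd.debugger import run

# Start the target script on the given port without enabling debugging.
run('myscript.py', 5678, run_as='script')
```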
</issue>
<code>
[start of ptvsd/debugger.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7 from ptvsd._local import run_module, run_file
8
9
10 # TODO: not needed?
11 DONT_DEBUG = []
12
13 LOCALHOST = 'localhost'
14
15 RUNNERS = {
16 'module': run_module, # python -m spam
17 'script': run_file, # python spam.py
18 'code': run_file, # python -c 'print("spam")'
19 None: run_file, # catchall
20 }
21
22
23 def debug(filename, port_num, debug_id, debug_options, run_as,
24 _runners=RUNNERS, _extra=None, *args, **kwargs):
25 # TODO: docstring
26 if _extra is None:
27 _extra = sys.argv[1:]
28 address = (LOCALHOST, port_num)
29 try:
30 run = _runners[run_as]
31 except KeyError:
32 # TODO: fail?
33 run = _runners[None]
34 if _extra:
35 args = _extra + list(args)
36 kwargs.setdefault('singlesession', True)
37 run(address, filename, *args, **kwargs)
38
[end of ptvsd/debugger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -4,7 +4,7 @@
import sys
-from ptvsd._local import run_module, run_file
+from ptvsd._local import run_module, run_file, run_main
# TODO: not needed?
@@ -35,3 +35,9 @@
args = _extra + list(args)
kwargs.setdefault('singlesession', True)
run(address, filename, *args, **kwargs)
+
+
+def run(filename, port_num, run_as,
+ *args, **kwargs):
+ address = (LOCALHOST, port_num)
+ run_main(address, filename, run_as, *args, **kwargs)
| {"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -4,7 +4,7 @@\n \n import sys\n \n-from ptvsd._local import run_module, run_file\n+from ptvsd._local import run_module, run_file, run_main\n \n \n # TODO: not needed?\n@@ -35,3 +35,9 @@\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n+\n+\n+def run(filename, port_num, run_as,\n+ *args, **kwargs):\n+ address = (LOCALHOST, port_num)\n+ run_main(address, filename, run_as, *args, **kwargs)\n", "issue": "Add ability to launch the debugger in non-debug mode\nCurrently we can only launch the debugger in non-debug mode when using `-m`.\r\nI'd like to have the same feature by importing PTVSD and invoking a function, similar to debugging using the `debug` function in `debugger.py`\r\n\r\nBasically this is necessary to launch the debugger in non-debug mode when using a launcher script.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\nfrom ptvsd._local import run_module, run_file\n\n\n# TODO: not needed?\nDONT_DEBUG = []\n\nLOCALHOST = 'localhost'\n\nRUNNERS = {\n 'module': run_module, # python -m spam\n 'script': run_file, # python spam.py\n 'code': run_file, # python -c 'print(\"spam\")'\n None: run_file, # catchall\n}\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n # TODO: docstring\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n try:\n run = _runners[run_as]\n except KeyError:\n # TODO: fail?\n run = _runners[None]\n if _extra:\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n", "path": "ptvsd/debugger.py"}]} | 954 | 183 |
gh_patches_debug_6322 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade to latest uvicorn
I can't install the latest uvicorn 0.23 together with the latest strawberry 0.195.2 and the debug server. I can only install uvicorn and strawberry without the debug server, or uvicorn 0.21.3 and strawberry with the debug server.
## Describe the Bug
<!-- A clear and concise description of what the bug is. -->
## System Information
- Arch Linux
- Strawberry version (if applicable): latest
## Additional Context
</issue>
<code>
[start of noxfile.py]
1 import nox
2 from nox_poetry import Session, session
3
4 nox.options.reuse_existing_virtualenvs = True
5 nox.options.error_on_external_run = True
6
7 PYTHON_VERSIONS = ["3.11", "3.10", "3.9", "3.8", "3.7"]
8
9
10 COMMON_PYTEST_OPTIONS = [
11 "--cov=.",
12 "--cov-append",
13 "--cov-report=xml",
14 "-n",
15 "auto",
16 "--showlocals",
17 "-vv",
18 "--ignore=tests/mypy",
19 "--ignore=tests/pyright",
20 "--ignore=tests/cli",
21 # TODO: reintroduce this in its own test session
22 "--ignore=tests/experimental/pydantic",
23 ]
24
25 INTEGRATIONS = [
26 "asgi",
27 "aiohttp",
28 "chalice",
29 "channels",
30 "django",
31 "fastapi",
32 "flask",
33 "sanic",
34 "starlite",
35 "pydantic",
36 ]
37
38
39 @session(python=PYTHON_VERSIONS, name="Tests", tags=["tests"])
40 def tests(session: Session) -> None:
41 session.run_always("poetry", "install", external=True)
42
43 markers = (
44 ["-m", f"not {integration}", f"--ignore=tests/{integration}"]
45 for integration in INTEGRATIONS
46 )
47 markers = [item for sublist in markers for item in sublist]
48
49 session.run(
50 "pytest",
51 *COMMON_PYTEST_OPTIONS,
52 *markers,
53 )
54
55
56 @session(python=["3.11"], name="Django tests", tags=["tests"])
57 @nox.parametrize("django", ["4.2.0", "4.1.0", "4.0.0", "3.2.0"])
58 def tests_django(session: Session, django: str) -> None:
59 session.run_always("poetry", "install", external=True)
60
61 session._session.install(f"django~={django}") # type: ignore
62 session._session.install("pytest-django") # type: ignore
63
64 session.run("pytest", *COMMON_PYTEST_OPTIONS, "-m", "django")
65
66
67 @session(python=["3.11"], name="Starlette tests", tags=["tests"])
68 @nox.parametrize("starlette", ["0.28.0", "0.27.0", "0.26.1"])
69 def tests_starlette(session: Session, starlette: str) -> None:
70 session.run_always("poetry", "install", external=True)
71
72 session._session.install(f"starlette=={starlette}") # type: ignore
73
74 session.run("pytest", *COMMON_PYTEST_OPTIONS, "-m", "asgi")
75
76
77 @session(python=["3.11"], name="Test integrations", tags=["tests"])
78 @nox.parametrize(
79 "integration",
80 [
81 "aiohttp",
82 "chalice",
83 "channels",
84 "fastapi",
85 "flask",
86 "sanic",
87 "starlite",
88 ],
89 )
90 def tests_integrations(session: Session, integration: str) -> None:
91 session.run_always("poetry", "install", external=True)
92
93 session._session.install(integration) # type: ignore
94
95 if integration == "aiohttp":
96 session._session.install("pytest-aiohttp") # type: ignore
97 elif integration == "flask":
98 session._session.install("pytest-flask") # type: ignore
99 elif integration == "channels":
100 session._session.install("pytest-django") # type: ignore
101 session._session.install("daphne") # type: ignore
102 elif integration == "starlite":
103 session._session.install("pydantic<2.0") # type: ignore
104
105 session.run("pytest", *COMMON_PYTEST_OPTIONS, "-m", integration)
106
107
108 @session(python=["3.11"], name="Pydantic tests", tags=["tests"])
109 # TODO: add pydantic 2.0 here :)
110 @nox.parametrize("pydantic", ["1.10"])
111 def test_pydantic(session: Session, pydantic: str) -> None:
112 session.run_always("poetry", "install", external=True)
113
114 session._session.install(f"pydantic~={pydantic}") # type: ignore
115
116 session.run(
117 "pytest",
118 "--cov=.",
119 "--cov-append",
120 "--cov-report=xml",
121 "-m",
122 "pydantic",
123 )
124
125
126 @session(python=PYTHON_VERSIONS, name="Mypy tests")
127 def tests_mypy(session: Session) -> None:
128 session.run_always("poetry", "install", "--with", "integrations", external=True)
129
130 session.run(
131 "pytest",
132 "--cov=.",
133 "--cov-append",
134 "--cov-report=xml",
135 "tests/mypy",
136 "-vv",
137 )
138
139
140 @session(python=PYTHON_VERSIONS, name="Pyright tests", tags=["tests"])
141 def tests_pyright(session: Session) -> None:
142 session.run_always("poetry", "install", external=True)
143 session.install("pyright")
144
145 session.run(
146 "pytest",
147 "--cov=.",
148 "--cov-append",
149 "--cov-report=xml",
150 "tests/pyright",
151 "-vv",
152 )
153
154
155 @session(name="Mypy", tags=["lint"])
156 def mypy(session: Session) -> None:
157 session.run_always("poetry", "install", "--with", "integrations", external=True)
158
159 session.run("mypy", "--config-file", "mypy.ini")
160
161
162 @session(python=PYTHON_VERSIONS, name="CLI tests", tags=["tests"])
163 def tests_cli(session: Session) -> None:
164 session.run_always("poetry", "install", external=True)
165
166 session._session.install("uvicorn") # type: ignore
167 session._session.install("starlette") # type: ignore
168
169 session.run(
170 "pytest",
171 "--cov=.",
172 "--cov-append",
173 "--cov-report=xml",
174 "tests/cli",
175 "-vv",
176 )
177
[end of noxfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -18,7 +18,6 @@
"--ignore=tests/mypy",
"--ignore=tests/pyright",
"--ignore=tests/cli",
- # TODO: reintroduce this in its own test session
"--ignore=tests/experimental/pydantic",
]
@@ -120,6 +119,7 @@
"--cov-report=xml",
"-m",
"pydantic",
+ "--ignore=tests/cli",
)
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -18,7 +18,6 @@\n \"--ignore=tests/mypy\",\n \"--ignore=tests/pyright\",\n \"--ignore=tests/cli\",\n- # TODO: reintroduce this in its own test session\n \"--ignore=tests/experimental/pydantic\",\n ]\n \n@@ -120,6 +119,7 @@\n \"--cov-report=xml\",\n \"-m\",\n \"pydantic\",\n+ \"--ignore=tests/cli\",\n )\n", "issue": "Upgrade to latest uvicorn\nA cant install latest uvicorn 0.23 with latest strawberry 0.195.2 and debug-server, just uvicorn and strawberry, without debug server, or uvicorn 0.21.3 and strawberry with debug server\r\n\r\n## Describe the Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## System Information\r\n\r\n - Arch Linux \r\n - Strawberry version (if applicable): latest\r\n\r\n## Additional Context\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/2956\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2956/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2956/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "import nox\nfrom nox_poetry import Session, session\n\nnox.options.reuse_existing_virtualenvs = True\nnox.options.error_on_external_run = True\n\nPYTHON_VERSIONS = [\"3.11\", \"3.10\", \"3.9\", \"3.8\", \"3.7\"]\n\n\nCOMMON_PYTEST_OPTIONS = [\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"-n\",\n \"auto\",\n \"--showlocals\",\n \"-vv\",\n \"--ignore=tests/mypy\",\n \"--ignore=tests/pyright\",\n \"--ignore=tests/cli\",\n # TODO: reintroduce this in its own test session\n \"--ignore=tests/experimental/pydantic\",\n]\n\nINTEGRATIONS = [\n \"asgi\",\n \"aiohttp\",\n \"chalice\",\n \"channels\",\n \"django\",\n \"fastapi\",\n \"flask\",\n \"sanic\",\n \"starlite\",\n \"pydantic\",\n]\n\n\n@session(python=PYTHON_VERSIONS, name=\"Tests\", tags=[\"tests\"])\ndef tests(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n markers = (\n [\"-m\", f\"not {integration}\", f\"--ignore=tests/{integration}\"]\n for integration in INTEGRATIONS\n )\n markers = [item for sublist in markers for item in sublist]\n\n session.run(\n \"pytest\",\n *COMMON_PYTEST_OPTIONS,\n *markers,\n )\n\n\n@session(python=[\"3.11\"], name=\"Django tests\", tags=[\"tests\"])\[email protected](\"django\", [\"4.2.0\", \"4.1.0\", \"4.0.0\", \"3.2.0\"])\ndef tests_django(session: Session, django: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"django~={django}\") # type: ignore\n session._session.install(\"pytest-django\") # type: ignore\n\n session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", \"django\")\n\n\n@session(python=[\"3.11\"], name=\"Starlette tests\", tags=[\"tests\"])\[email protected](\"starlette\", [\"0.28.0\", \"0.27.0\", \"0.26.1\"])\ndef tests_starlette(session: Session, starlette: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"starlette=={starlette}\") # type: ignore\n\n 
session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", \"asgi\")\n\n\n@session(python=[\"3.11\"], name=\"Test integrations\", tags=[\"tests\"])\[email protected](\n \"integration\",\n [\n \"aiohttp\",\n \"chalice\",\n \"channels\",\n \"fastapi\",\n \"flask\",\n \"sanic\",\n \"starlite\",\n ],\n)\ndef tests_integrations(session: Session, integration: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(integration) # type: ignore\n\n if integration == \"aiohttp\":\n session._session.install(\"pytest-aiohttp\") # type: ignore\n elif integration == \"flask\":\n session._session.install(\"pytest-flask\") # type: ignore\n elif integration == \"channels\":\n session._session.install(\"pytest-django\") # type: ignore\n session._session.install(\"daphne\") # type: ignore\n elif integration == \"starlite\":\n session._session.install(\"pydantic<2.0\") # type: ignore\n\n session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", integration)\n\n\n@session(python=[\"3.11\"], name=\"Pydantic tests\", tags=[\"tests\"])\n# TODO: add pydantic 2.0 here :)\[email protected](\"pydantic\", [\"1.10\"])\ndef test_pydantic(session: Session, pydantic: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"pydantic~={pydantic}\") # type: ignore\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"-m\",\n \"pydantic\",\n )\n\n\n@session(python=PYTHON_VERSIONS, name=\"Mypy tests\")\ndef tests_mypy(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", \"--with\", \"integrations\", external=True)\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/mypy\",\n \"-vv\",\n )\n\n\n@session(python=PYTHON_VERSIONS, name=\"Pyright tests\", tags=[\"tests\"])\ndef tests_pyright(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n session.install(\"pyright\")\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/pyright\",\n \"-vv\",\n )\n\n\n@session(name=\"Mypy\", tags=[\"lint\"])\ndef mypy(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", \"--with\", \"integrations\", external=True)\n\n session.run(\"mypy\", \"--config-file\", \"mypy.ini\")\n\n\n@session(python=PYTHON_VERSIONS, name=\"CLI tests\", tags=[\"tests\"])\ndef tests_cli(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(\"uvicorn\") # type: ignore\n session._session.install(\"starlette\") # type: ignore\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/cli\",\n \"-vv\",\n )\n", "path": "noxfile.py"}]} | 2,607 | 131 |
gh_patches_debug_1811 | rasdani/github-patches | git_diff | iterative__dvc-2364 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
status: change nothing to reproduce message
If I use DVC only to version data/models and don't care about pipelines, this message:
`Pipelines are up to date. Nothing to reproduce.`
looks really strange.
Let's change it to something more generic:
`Data and pipelines are up to date.`
or something similar.
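Concretely, the proposal amounts to roughly this change to the message constant in `dvc/command/status.py` (a sketch of the wording only, not a final decision):

```python
# Current constant in CmdDataStatus:
#   UP_TO_DATE_MSG = "Pipelines are up to date. Nothing to reproduce."
# Proposed, more generic wording:
UP_TO_DATE_MSG = "Data and pipelines are up to date."
```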
</issue>
<code>
[start of dvc/command/status.py]
1 from __future__ import unicode_literals
2
3 import logging
4
5 from dvc.command.data_sync import CmdDataBase
6 from dvc.utils.compat import str
7
8
9 logger = logging.getLogger(__name__)
10
11
12 class CmdDataStatus(CmdDataBase):
13 STATUS_LEN = 20
14 STATUS_INDENT = "\t"
15 UP_TO_DATE_MSG = "Pipelines are up to date. Nothing to reproduce."
16
17 def _normalize(self, s):
18 s += ":"
19 assert len(s) < self.STATUS_LEN
20 return s + (self.STATUS_LEN - len(s)) * " "
21
22 def _show(self, status, indent=0):
23 ind = indent * self.STATUS_INDENT
24
25 if isinstance(status, str):
26 logger.info("{}{}".format(ind, status))
27 return
28
29 if isinstance(status, list):
30 for entry in status:
31 self._show(entry, indent)
32 return
33
34 assert isinstance(status, dict)
35
36 for key, value in status.items():
37 if isinstance(value, str):
38 logger.info("{}{}{}".format(ind, self._normalize(value), key))
39 elif value:
40 logger.info("{}{}:".format(ind, key))
41 self._show(value, indent + 1)
42
43 def run(self):
44 indent = 1 if self.args.cloud else 0
45 try:
46 st = self.repo.status(
47 targets=self.args.targets,
48 jobs=self.args.jobs,
49 cloud=self.args.cloud,
50 remote=self.args.remote,
51 all_branches=self.args.all_branches,
52 all_tags=self.args.all_tags,
53 with_deps=self.args.with_deps,
54 )
55 if st:
56 if self.args.quiet:
57 return 1
58 else:
59 self._show(st, indent)
60 else:
61 logger.info(self.UP_TO_DATE_MSG)
62
63 except Exception:
64 logger.exception("failed to obtain data status")
65 return 1
66 return 0
67
[end of dvc/command/status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/command/status.py b/dvc/command/status.py
--- a/dvc/command/status.py
+++ b/dvc/command/status.py
@@ -12,7 +12,7 @@
class CmdDataStatus(CmdDataBase):
STATUS_LEN = 20
STATUS_INDENT = "\t"
- UP_TO_DATE_MSG = "Pipelines are up to date. Nothing to reproduce."
+ UP_TO_DATE_MSG = "Data and pipelines are up to date."
def _normalize(self, s):
s += ":"
| {"golden_diff": "diff --git a/dvc/command/status.py b/dvc/command/status.py\n--- a/dvc/command/status.py\n+++ b/dvc/command/status.py\n@@ -12,7 +12,7 @@\n class CmdDataStatus(CmdDataBase):\n STATUS_LEN = 20\n STATUS_INDENT = \"\\t\"\n- UP_TO_DATE_MSG = \"Pipelines are up to date. Nothing to reproduce.\"\n+ UP_TO_DATE_MSG = \"Data and pipelines are up to date.\"\n \n def _normalize(self, s):\n s += \":\"\n", "issue": "status: change nothing to reproduce message\nIf I use DVC only to version data/models and don't care about pipelines, this message:\r\n\r\n`Pipelines are up to date. Nothing to reproduce.` \r\n\r\nlooks really strange.\r\n\r\nLet's change it to something more generic:\r\n\r\n`Data and pipelines are up to date.` \r\n\r\nor something similar\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\n\nfrom dvc.command.data_sync import CmdDataBase\nfrom dvc.utils.compat import str\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdDataStatus(CmdDataBase):\n STATUS_LEN = 20\n STATUS_INDENT = \"\\t\"\n UP_TO_DATE_MSG = \"Pipelines are up to date. Nothing to reproduce.\"\n\n def _normalize(self, s):\n s += \":\"\n assert len(s) < self.STATUS_LEN\n return s + (self.STATUS_LEN - len(s)) * \" \"\n\n def _show(self, status, indent=0):\n ind = indent * self.STATUS_INDENT\n\n if isinstance(status, str):\n logger.info(\"{}{}\".format(ind, status))\n return\n\n if isinstance(status, list):\n for entry in status:\n self._show(entry, indent)\n return\n\n assert isinstance(status, dict)\n\n for key, value in status.items():\n if isinstance(value, str):\n logger.info(\"{}{}{}\".format(ind, self._normalize(value), key))\n elif value:\n logger.info(\"{}{}:\".format(ind, key))\n self._show(value, indent + 1)\n\n def run(self):\n indent = 1 if self.args.cloud else 0\n try:\n st = self.repo.status(\n targets=self.args.targets,\n jobs=self.args.jobs,\n cloud=self.args.cloud,\n remote=self.args.remote,\n all_branches=self.args.all_branches,\n all_tags=self.args.all_tags,\n with_deps=self.args.with_deps,\n )\n if st:\n if self.args.quiet:\n return 1\n else:\n self._show(st, indent)\n else:\n logger.info(self.UP_TO_DATE_MSG)\n\n except Exception:\n logger.exception(\"failed to obtain data status\")\n return 1\n return 0\n", "path": "dvc/command/status.py"}]} | 1,133 | 117 |
gh_patches_debug_43367 | rasdani/github-patches | git_diff | akvo__akvo-rsr-4519 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support saving nested comments at the indicator_period_data_framework endpoint.
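For context, a hedged sketch of the kind of nested payload the endpoint would need to accept; the field names follow the comment serializer in the code below, but the exact shape and semantics are assumptions:

```python
# Hypothetical request body for saving an IndicatorPeriodData update
# together with its comments in a single call.
payload = {
    "period": 123,
    "value": "42",
    "comments": [
        {"comment": "A brand new comment"},            # no id: create
        {"id": 17, "comment": "Edited comment text"},  # id + text: update
        {"id": 18, "comment": ""},                     # id + empty text: delete (assumed)
    ],
}
```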
</issue>
<code>
[start of akvo/rest/serializers/indicator_period_data.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 from rest_framework import serializers
7 from django.db.models import Sum
8
9 from akvo.rest.serializers.disaggregation import DisaggregationSerializer, DisaggregationReadOnlySerializer
10 from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer
11 from akvo.rest.serializers.user import UserDetailsSerializer
12 from akvo.rsr.models import (
13 IndicatorPeriod, IndicatorPeriodData, IndicatorPeriodDataComment, IndicatorPeriodDataFile, IndicatorPeriodDataPhoto,
14 IndicatorDimensionValue, Disaggregation
15 )
16 from akvo.utils import ensure_decimal
17
18
19 class IndicatorPeriodDataCommentSerializer(BaseRSRSerializer):
20
21 user_details = UserDetailsSerializer(read_only=True, source='user')
22
23 class Meta:
24 model = IndicatorPeriodDataComment
25 fields = '__all__'
26 read_only_fields = ['user']
27
28
29 class IndicatorPeriodDataFileSerializer(BaseRSRSerializer):
30 class Meta:
31 model = IndicatorPeriodDataFile
32 fields = '__all__'
33
34
35 class IndicatorPeriodDataPhotoSerializer(BaseRSRSerializer):
36 class Meta:
37 model = IndicatorPeriodDataPhoto
38 fields = '__all__'
39
40
41 class IndicatorPeriodDataSerializer(BaseRSRSerializer):
42
43 user_details = UserDetailsSerializer(read_only=True, source='user')
44 approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
45 status_display = serializers.ReadOnlyField()
46 photo_url = serializers.ReadOnlyField()
47 file_url = serializers.ReadOnlyField()
48
49 class Meta:
50 model = IndicatorPeriodData
51 fields = '__all__'
52 read_only_fields = ['user']
53
54
55 class IndicatorPeriodDataLiteSerializer(BaseRSRSerializer):
56
57 user_details = UserDetailsSerializer(required=False, source='user')
58 status_display = serializers.ReadOnlyField()
59 photo_url = serializers.ReadOnlyField()
60 file_url = serializers.ReadOnlyField()
61 disaggregations = DisaggregationReadOnlySerializer(many=True, required=False)
62 value = serializers.SerializerMethodField()
63 file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')
64 photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')
65 comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)
66
67 def get_value(self, obj):
68 return ensure_decimal(obj.value)
69
70 class Meta:
71 model = IndicatorPeriodData
72 fields = (
73 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',
74 'disaggregations', 'narrative', 'photo_url', 'file_url', 'period_actual_value', 'created_at', 'last_modified_at',
75 'file_set', 'photo_set', 'review_note', 'comments',
76 )
77
78
79 class IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):
80
81 period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())
82 comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)
83 disaggregations = DisaggregationSerializer(many=True, required=False)
84 user_details = UserDetailsSerializer(read_only=True, source='user')
85 approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
86 status_display = serializers.ReadOnlyField()
87 photo_url = serializers.ReadOnlyField()
88 file_url = serializers.ReadOnlyField()
89 period_can_add_update = serializers.ReadOnlyField(source='period.can_save_update')
90 files = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)
91 photos = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)
92 file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')
93 photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')
94
95 class Meta:
96 model = IndicatorPeriodData
97 fields = '__all__'
98 read_only_fields = ['user']
99
100 def create(self, validated_data):
101 self._validate_disaggregations(
102 self._disaggregations_data,
103 value=ensure_decimal(validated_data.get('value', 0)),
104 numerator=ensure_decimal(validated_data.get('numerator', None)),
105 denominator=ensure_decimal(validated_data.get('denominator', None))
106 )
107 """Over-ridden to handle nested writes."""
108 files = validated_data.pop('files', [])
109 photos = validated_data.pop('photos', [])
110 update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)
111 for disaggregation in self._disaggregations_data:
112 disaggregation['update'] = update.id
113 if 'type_id' in disaggregation and 'dimension_value' not in disaggregation:
114 disaggregation['dimension_value'] = disaggregation['type_id']
115 serializer = DisaggregationSerializer(data=disaggregation)
116 serializer.is_valid(raise_exception=True)
117 serializer.create(serializer.validated_data)
118 for file in files:
119 IndicatorPeriodDataFile.objects.create(update=update, file=file)
120 for photo in photos:
121 IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)
122
123 return update
124
125 def update(self, instance, validated_data):
126 self._validate_disaggregations(
127 self._disaggregations_data,
128 value=ensure_decimal(validated_data.get('value', instance.value)),
129 numerator=ensure_decimal(validated_data.get('numerator', instance.numerator)),
130 denominator=ensure_decimal(validated_data.get('denominator', instance.denominator)),
131 update=instance
132 )
133 """Over-ridden to handle nested updates."""
134 files = validated_data.pop('files', [])
135 photos = validated_data.pop('photos', [])
136 super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)
137 for disaggregation in self._disaggregations_data:
138 disaggregation['update'] = instance.id
139 serializer = DisaggregationSerializer(data=disaggregation)
140 serializer.is_valid(raise_exception=True)
141 disaggregation_instance, _ = instance.disaggregations.get_or_create(
142 update=instance,
143 dimension_value=serializer.validated_data['dimension_value'],
144 )
145 serializer.update(disaggregation_instance, serializer.validated_data)
146 for file in files:
147 IndicatorPeriodDataFile.objects.create(update=instance, file=file)
148 for photo in photos:
149 IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)
150
151 return instance._meta.model.objects.select_related(
152 'period',
153 'user',
154 'approved_by',
155 ).prefetch_related(
156 'comments',
157 'disaggregations',
158 ).get(id=instance.id)
159
160 def _validate_disaggregations(self, disaggregations, value, numerator=None, denominator=None, update=None):
161 adjustments = {}
162 for disaggregation in disaggregations:
163 type_id = disaggregation.get('type_id', disaggregation.get('dimension_value', None))
164 if type_id is None:
165 continue
166 if denominator is not None:
167 disaggregation_denominator = ensure_decimal(disaggregation.get('denominator', 0))
168 if disaggregation_denominator > denominator:
169 raise serializers.ValidationError("disaggregations denominator should not exceed update denominator")
170 category = IndicatorDimensionValue.objects.get(pk=type_id).name
171 if category.id not in adjustments:
172 adjustments[category.id] = {'values': 0, 'numerators': 0, 'type_ids': []}
173 adjustments[category.id]['values'] += ensure_decimal(disaggregation.get('value', 0))
174 adjustments[category.id]['numerators'] += ensure_decimal(disaggregation.get('numerator', 0))
175 adjustments[category.id]['type_ids'].append(type_id)
176 for key, adjustment in adjustments.items():
177 unmodifieds = Disaggregation.objects.filter(update=update, dimension_value__name=key)\
178 .exclude(dimension_value__in=adjustment['type_ids'])\
179 .aggregate(values=Sum('value'))
180 total = adjustment['values'] + ensure_decimal(unmodifieds['values'])
181 if numerator is not None and adjustment['numerators'] > numerator:
182 raise serializers.ValidationError("The disaggregation numerator should not exceed update numerator")
183 if total > value:
184 raise serializers.ValidationError("The accumulated disaggregations value should not exceed update value")
185
186 def is_valid(self, raise_exception=False):
187 # HACK to allow nested posting...
188 self._disaggregations_data = self.initial_data.pop('disaggregations', [])
189 super(IndicatorPeriodDataFrameworkSerializer, self).is_valid(raise_exception=raise_exception)
190
[end of akvo/rest/serializers/indicator_period_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/serializers/indicator_period_data.py b/akvo/rest/serializers/indicator_period_data.py
--- a/akvo/rest/serializers/indicator_period_data.py
+++ b/akvo/rest/serializers/indicator_period_data.py
@@ -26,6 +26,15 @@
read_only_fields = ['user']
+class IndicatorPeriodDataCommentNestedSerializer(BaseRSRSerializer):
+ id = serializers.IntegerField(required=False)
+
+ class Meta:
+ model = IndicatorPeriodDataComment
+ fields = '__all__'
+ read_only_fields = ('id', 'data', 'user')
+
+
class IndicatorPeriodDataFileSerializer(BaseRSRSerializer):
class Meta:
model = IndicatorPeriodDataFile
@@ -79,7 +88,7 @@
class IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):
period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())
- comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)
+ comments = IndicatorPeriodDataCommentNestedSerializer(many=True, required=False)
disaggregations = DisaggregationSerializer(many=True, required=False)
user_details = UserDetailsSerializer(read_only=True, source='user')
approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
@@ -107,6 +116,7 @@
"""Over-ridden to handle nested writes."""
files = validated_data.pop('files', [])
photos = validated_data.pop('photos', [])
+ comments = validated_data.pop('comments', [])
update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)
for disaggregation in self._disaggregations_data:
disaggregation['update'] = update.id
@@ -119,6 +129,8 @@
IndicatorPeriodDataFile.objects.create(update=update, file=file)
for photo in photos:
IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)
+ for comment in comments:
+ IndicatorPeriodDataComment.objects.create(data=update, user=update.user, comment=comment['comment'])
return update
@@ -133,6 +145,7 @@
"""Over-ridden to handle nested updates."""
files = validated_data.pop('files', [])
photos = validated_data.pop('photos', [])
+ comments = validated_data.pop('comments', [])
super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)
for disaggregation in self._disaggregations_data:
disaggregation['update'] = instance.id
@@ -147,6 +160,18 @@
IndicatorPeriodDataFile.objects.create(update=instance, file=file)
for photo in photos:
IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)
+ for comment in comments:
+ comment_id = int(comment.get('id', 0))
+ comment_txt = str(comment.get('comment', ''))
+ if not comment_id:
+ IndicatorPeriodDataComment.objects.create(data=instance, user=instance.user, comment=comment['comment'])
+ else:
+ comment_obj = IndicatorPeriodDataComment.objects.get(id=comment_id)
+ if not comment_txt:
+ comment_obj.delete()
+ else:
+ comment_obj.comment = comment_txt
+ comment_obj.save()
return instance._meta.model.objects.select_related(
'period',
| {"golden_diff": "diff --git a/akvo/rest/serializers/indicator_period_data.py b/akvo/rest/serializers/indicator_period_data.py\n--- a/akvo/rest/serializers/indicator_period_data.py\n+++ b/akvo/rest/serializers/indicator_period_data.py\n@@ -26,6 +26,15 @@\n read_only_fields = ['user']\n \n \n+class IndicatorPeriodDataCommentNestedSerializer(BaseRSRSerializer):\n+ id = serializers.IntegerField(required=False)\n+\n+ class Meta:\n+ model = IndicatorPeriodDataComment\n+ fields = '__all__'\n+ read_only_fields = ('id', 'data', 'user')\n+\n+\n class IndicatorPeriodDataFileSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataFile\n@@ -79,7 +88,7 @@\n class IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):\n \n period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())\n- comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n+ comments = IndicatorPeriodDataCommentNestedSerializer(many=True, required=False)\n disaggregations = DisaggregationSerializer(many=True, required=False)\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n@@ -107,6 +116,7 @@\n \"\"\"Over-ridden to handle nested writes.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n+ comments = validated_data.pop('comments', [])\n update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = update.id\n@@ -119,6 +129,8 @@\n IndicatorPeriodDataFile.objects.create(update=update, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)\n+ for comment in comments:\n+ IndicatorPeriodDataComment.objects.create(data=update, user=update.user, comment=comment['comment'])\n \n return update\n \n@@ -133,6 +145,7 @@\n \"\"\"Over-ridden to handle nested updates.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n+ comments = validated_data.pop('comments', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = instance.id\n@@ -147,6 +160,18 @@\n IndicatorPeriodDataFile.objects.create(update=instance, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)\n+ for comment in comments:\n+ comment_id = int(comment.get('id', 0))\n+ comment_txt = str(comment.get('comment', ''))\n+ if not comment_id:\n+ IndicatorPeriodDataComment.objects.create(data=instance, user=instance.user, comment=comment['comment'])\n+ else:\n+ comment_obj = IndicatorPeriodDataComment.objects.get(id=comment_id)\n+ if not comment_txt:\n+ comment_obj.delete()\n+ else:\n+ comment_obj.comment = comment_txt\n+ comment_obj.save()\n \n return instance._meta.model.objects.select_related(\n 'period',\n", "issue": "support saving nested comments at indicator_period_data_framework endpoint\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\nfrom rest_framework import serializers\nfrom django.db.models import Sum\n\nfrom 
akvo.rest.serializers.disaggregation import DisaggregationSerializer, DisaggregationReadOnlySerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\nfrom akvo.rest.serializers.user import UserDetailsSerializer\nfrom akvo.rsr.models import (\n IndicatorPeriod, IndicatorPeriodData, IndicatorPeriodDataComment, IndicatorPeriodDataFile, IndicatorPeriodDataPhoto,\n IndicatorDimensionValue, Disaggregation\n)\nfrom akvo.utils import ensure_decimal\n\n\nclass IndicatorPeriodDataCommentSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n\n class Meta:\n model = IndicatorPeriodDataComment\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataFileSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataFile\n fields = '__all__'\n\n\nclass IndicatorPeriodDataPhotoSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataPhoto\n fields = '__all__'\n\n\nclass IndicatorPeriodDataSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataLiteSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(required=False, source='user')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n disaggregations = DisaggregationReadOnlySerializer(many=True, required=False)\n value = serializers.SerializerMethodField()\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')\n photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n\n def get_value(self, obj):\n return ensure_decimal(obj.value)\n\n class Meta:\n model = IndicatorPeriodData\n fields = (\n 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',\n 'disaggregations', 'narrative', 'photo_url', 'file_url', 'period_actual_value', 'created_at', 'last_modified_at',\n 'file_set', 'photo_set', 'review_note', 'comments',\n )\n\n\nclass IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):\n\n period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())\n comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n disaggregations = DisaggregationSerializer(many=True, required=False)\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n period_can_add_update = serializers.ReadOnlyField(source='period.can_save_update')\n files = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n photos = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')\n photo_set = 
IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n def create(self, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', 0)),\n numerator=ensure_decimal(validated_data.get('numerator', None)),\n denominator=ensure_decimal(validated_data.get('denominator', None))\n )\n \"\"\"Over-ridden to handle nested writes.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = update.id\n if 'type_id' in disaggregation and 'dimension_value' not in disaggregation:\n disaggregation['dimension_value'] = disaggregation['type_id']\n serializer = DisaggregationSerializer(data=disaggregation)\n serializer.is_valid(raise_exception=True)\n serializer.create(serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=update, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)\n\n return update\n\n def update(self, instance, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', instance.value)),\n numerator=ensure_decimal(validated_data.get('numerator', instance.numerator)),\n denominator=ensure_decimal(validated_data.get('denominator', instance.denominator)),\n update=instance\n )\n \"\"\"Over-ridden to handle nested updates.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = instance.id\n serializer = DisaggregationSerializer(data=disaggregation)\n serializer.is_valid(raise_exception=True)\n disaggregation_instance, _ = instance.disaggregations.get_or_create(\n update=instance,\n dimension_value=serializer.validated_data['dimension_value'],\n )\n serializer.update(disaggregation_instance, serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=instance, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)\n\n return instance._meta.model.objects.select_related(\n 'period',\n 'user',\n 'approved_by',\n ).prefetch_related(\n 'comments',\n 'disaggregations',\n ).get(id=instance.id)\n\n def _validate_disaggregations(self, disaggregations, value, numerator=None, denominator=None, update=None):\n adjustments = {}\n for disaggregation in disaggregations:\n type_id = disaggregation.get('type_id', disaggregation.get('dimension_value', None))\n if type_id is None:\n continue\n if denominator is not None:\n disaggregation_denominator = ensure_decimal(disaggregation.get('denominator', 0))\n if disaggregation_denominator > denominator:\n raise serializers.ValidationError(\"disaggregations denominator should not exceed update denominator\")\n category = IndicatorDimensionValue.objects.get(pk=type_id).name\n if category.id not in adjustments:\n adjustments[category.id] = {'values': 0, 'numerators': 0, 'type_ids': []}\n adjustments[category.id]['values'] += ensure_decimal(disaggregation.get('value', 0))\n adjustments[category.id]['numerators'] 
+= ensure_decimal(disaggregation.get('numerator', 0))\n adjustments[category.id]['type_ids'].append(type_id)\n for key, adjustment in adjustments.items():\n unmodifieds = Disaggregation.objects.filter(update=update, dimension_value__name=key)\\\n .exclude(dimension_value__in=adjustment['type_ids'])\\\n .aggregate(values=Sum('value'))\n total = adjustment['values'] + ensure_decimal(unmodifieds['values'])\n if numerator is not None and adjustment['numerators'] > numerator:\n raise serializers.ValidationError(\"The disaggregation numerator should not exceed update numerator\")\n if total > value:\n raise serializers.ValidationError(\"The accumulated disaggregations value should not exceed update value\")\n\n def is_valid(self, raise_exception=False):\n # HACK to allow nested posting...\n self._disaggregations_data = self.initial_data.pop('disaggregations', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).is_valid(raise_exception=raise_exception)\n", "path": "akvo/rest/serializers/indicator_period_data.py"}]} | 2,902 | 744 |
gh_patches_debug_16628 | rasdani/github-patches | git_diff | jazzband__pip-tools-595 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
README broken on PyPI (must be reStructuredText)
The [package description](https://pypi.python.org/pypi/pip-tools/) on PyPI is unreadable, since PyPI expects the README in the [reStructuredText](http://www.sphinx-doc.org/en/stable/rest.html) file format and we use Markdown.
Solution A: Convert to reST
---------------------
1. Rename the current `README.md` to `README.rst`
1. Replace the markdown of the badges and the code samples ([example](https://github.com/Organice/djangocms-maps/blob/master/README.rst))
1. Add a `long_description=read_file('README.rst')` line to `setup.py` ([example](https://github.com/Organice/djangocms-maps/blob/master/setup.py#L50))
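A rough illustration of step 3; `read_file` here is a small helper defined in `setup.py` itself, not something setuptools provides:

```python
from os.path import abspath, dirname, join
from setuptools import setup

def read_file(filename):
    """Read the contents of a file located relative to setup.py."""
    with open(join(abspath(dirname(__file__)), filename)) as thefile:
        return thefile.read()

setup(
    name='pip-tools',
    # ... other arguments unchanged ...
    long_description=read_file('README.rst'),
)
```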
Solution B: Process before Upload
-------------------
1. Integrate [pypandoc](https://pypi.python.org/pypi/pypandoc) in `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L7-L14))
1. Add a `long_description=convert('README.md', 'rst')` line to `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L49))
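And a sketch of the pypandoc route, following the `convert` call shown in step 2 (newer pypandoc releases rename it to `convert_file`); the ImportError guard is an assumption so that environments without pandoc can still run `setup.py`:

```python
from setuptools import setup

try:
    from pypandoc import convert
    long_description = convert('README.md', 'rst')
except ImportError:
    long_description = open('README.md').read()

setup(
    name='pip-tools',
    # ... other arguments unchanged ...
    long_description=long_description,
)
```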
------------
Both solutions above will render a nicely formatted, HTML-styled package description on PyPI.
Quality Assurance
--------------
Optionally, you may check your README with [checkdocs](https://github.com/Organice/djangocms-maps/blob/master/tox.ini#L13-L19) before uploading the package to PyPI, because sometimes the reST-to-HTML conversion that PyPI uses fails -- and renders a still hard-to-read, broken, unformatted package description.
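For a local guard, one hedged option is to render the README the same way PyPI does and fail if it does not convert; the `readme_renderer` tool used below is an assumption about tooling, not something this issue prescribes:

```python
# check_readme.py: exit non-zero if README.rst would not render on PyPI.
import sys

import readme_renderer.rst

with open("README.rst") as f:
    source = f.read()

if readme_renderer.rst.render(source, stream=sys.stderr) is None:
    sys.exit("README.rst would not render on PyPI")
print("README.rst renders cleanly")
```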
</issue>
<code>
[start of setup.py]
1 """
2 pip-tools keeps your pinned dependencies fresh.
3 """
4 from setuptools import find_packages, setup
5
6 setup(
7 name='pip-tools',
8 use_scm_version=True,
9 url='https://github.com/jazzband/pip-tools/',
10 license='BSD',
11 author='Vincent Driessen',
12 author_email='[email protected]',
13 description=__doc__,
14 packages=find_packages(exclude=['tests']),
15 setup_requires=['setuptools_scm'],
16 install_requires=[
17 'click>=6',
18 'first',
19 'six',
20 'setuptools'
21 ],
22 extras_require={
23 ':python_version < "3.0"': ['contextlib2']
24 },
25 zip_safe=False,
26 entry_points={
27 'console_scripts': [
28 'pip-compile = piptools.scripts.compile:cli',
29 'pip-sync = piptools.scripts.sync:cli',
30 ],
31 },
32 platforms='any',
33 classifiers=[
34 'Development Status :: 5 - Production/Stable',
35 'Intended Audience :: Developers',
36 'Intended Audience :: System Administrators',
37 'License :: OSI Approved :: BSD License',
38 'Operating System :: OS Independent',
39 'Programming Language :: Python',
40 'Programming Language :: Python :: 2',
41 'Programming Language :: Python :: 2.7',
42 'Programming Language :: Python :: 3',
43 'Programming Language :: Python :: 3.4',
44 'Programming Language :: Python :: 3.5',
45 'Programming Language :: Python :: 3.6',
46 'Topic :: System :: Systems Administration',
47 ]
48 )
49
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,8 +1,14 @@
"""
pip-tools keeps your pinned dependencies fresh.
"""
+from os.path import abspath, dirname, join
from setuptools import find_packages, setup
+def read_file(filename):
+ """Read the contents of a file located relative to setup.py"""
+ with open(join(abspath(dirname(__file__)), filename)) as thefile:
+ return thefile.read()
+
setup(
name='pip-tools',
use_scm_version=True,
@@ -11,6 +17,7 @@
author='Vincent Driessen',
author_email='[email protected]',
description=__doc__,
+ long_description=read_file('README.rst'),
packages=find_packages(exclude=['tests']),
setup_requires=['setuptools_scm'],
install_requires=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,8 +1,14 @@\n \"\"\"\n pip-tools keeps your pinned dependencies fresh.\n \"\"\"\n+from os.path import abspath, dirname, join\n from setuptools import find_packages, setup\n \n+def read_file(filename):\n+ \"\"\"Read the contents of a file located relative to setup.py\"\"\"\n+ with open(join(abspath(dirname(__file__)), filename)) as thefile:\n+ return thefile.read()\n+\n setup(\n name='pip-tools',\n use_scm_version=True,\n@@ -11,6 +17,7 @@\n author='Vincent Driessen',\n author_email='[email protected]',\n description=__doc__,\n+ long_description=read_file('README.rst'),\n packages=find_packages(exclude=['tests']),\n setup_requires=['setuptools_scm'],\n install_requires=[\n", "issue": "README broken on PyPI (must be reStructuredText)\nThe [package description](https://pypi.python.org/pypi/pip-tools/) on PyPI is unreadable since PyPI expects the README in [reStructuredText](http://www.sphinx-doc.org/en/stable/rest.html) file format and we use MarkDown.\r\n\r\nSolution A: Convert to reST\r\n---------------------\r\n\r\n1. Rename the current `README.md` to `README.rst`\r\n1. Replace the markdown of the badges and the code samples ([example](https://github.com/Organice/djangocms-maps/blob/master/README.rst))\r\n1. Add a `long_description=read_file('README.rst')` line to `setup.py` ([example](https://github.com/Organice/djangocms-maps/blob/master/setup.py#L50))\r\n\r\nSolution B: Process before Upload\r\n-------------------\r\n\r\n1. Integrate [pypandoc](https://pypi.python.org/pypi/pypandoc) in `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L7-L14))\r\n1. Add a `long_description=convert('README.md', 'rst')` line to `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L49))\r\n\r\n------------\r\n\r\nBoth solutions above will render a nicely formatted, HTML-styled package description on PyPI.\r\n\r\nQuality Assurance\r\n--------------\r\n\r\nOptionally, you may check your README with [checkdocs](https://github.com/Organice/djangocms-maps/blob/master/tox.ini#L13-L19) before uploading the package to PyPI, because sometimes the reST-to-HTML conversion that PyPI uses fails -- and renders a still hard-to-read, broken, unformatted package description.\n", "before_files": [{"content": "\"\"\"\npip-tools keeps your pinned dependencies fresh.\n\"\"\"\nfrom setuptools import find_packages, setup\n\nsetup(\n name='pip-tools',\n use_scm_version=True,\n url='https://github.com/jazzband/pip-tools/',\n license='BSD',\n author='Vincent Driessen',\n author_email='[email protected]',\n description=__doc__,\n packages=find_packages(exclude=['tests']),\n setup_requires=['setuptools_scm'],\n install_requires=[\n 'click>=6',\n 'first',\n 'six',\n 'setuptools'\n ],\n extras_require={\n ':python_version < \"3.0\"': ['contextlib2']\n },\n zip_safe=False,\n entry_points={\n 'console_scripts': [\n 'pip-compile = piptools.scripts.compile:cli',\n 'pip-sync = piptools.scripts.sync:cli',\n ],\n },\n platforms='any',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language 
:: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: System :: Systems Administration',\n ]\n)\n", "path": "setup.py"}]} | 1,339 | 192 |
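An aside on the fix captured in the record above: the pattern of reading a reStructuredText README into `long_description` can be reproduced in isolation. The sketch below is a minimal, hypothetical `setup.py`; the package name, version, and the presence of a `README.rst` next to the file are assumptions for illustration, not pip-tools' real metadata.

```python
from os.path import abspath, dirname, join
from setuptools import setup


def read_file(filename):
    # Resolve the path relative to this setup.py so the read works no matter
    # which working directory pip builds the sdist/wheel from.
    with open(join(abspath(dirname(__file__)), filename)) as f:
        return f.read()


setup(
    name="example-package",                      # placeholder name
    version="0.0.1",                             # placeholder version
    description="Demo of a reST long_description",
    long_description=read_file("README.rst"),    # assumes README.rst exists
)
```

PyPI renders `long_description` as reStructuredText by default, which is why both converting the README up front (Solution A) and converting at build time with pypandoc (Solution B) resolve the broken rendering.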
gh_patches_debug_21303 | rasdani/github-patches | git_diff | nltk__nltk-2819 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WordNetLemmatizer in nltk.stem module
What's the parameter of WordNetLemmatizer.lemmatize() in nltk.stem module?
Turning to the documentation, what are the candidate values of the parameter **'pos'**?

The default value is 'Noun', but when the function pos_tag() is used to get the pos of a word, the returned value comes from a different set of options.
</issue>
<code>
[start of nltk/stem/wordnet.py]
1 # Natural Language Toolkit: WordNet stemmer interface
2 #
3 # Copyright (C) 2001-2021 NLTK Project
4 # Author: Steven Bird <[email protected]>
5 # Edward Loper <[email protected]>
6 # URL: <http://nltk.org/>
7 # For license information, see LICENSE.TXT
8
9 from nltk.corpus import wordnet
10 from nltk.corpus.reader.wordnet import NOUN
11
12
13 class WordNetLemmatizer:
14 """
15 WordNet Lemmatizer
16
17 Lemmatize using WordNet's built-in morphy function.
18 Returns the input word unchanged if it cannot be found in WordNet.
19
20 >>> from nltk.stem import WordNetLemmatizer
21 >>> wnl = WordNetLemmatizer()
22 >>> print(wnl.lemmatize('dogs'))
23 dog
24 >>> print(wnl.lemmatize('churches'))
25 church
26 >>> print(wnl.lemmatize('aardwolves'))
27 aardwolf
28 >>> print(wnl.lemmatize('abaci'))
29 abacus
30 >>> print(wnl.lemmatize('hardrock'))
31 hardrock
32 """
33
34 def __init__(self):
35 pass
36
37 def lemmatize(self, word, pos=NOUN):
38 lemmas = wordnet._morphy(word, pos)
39 return min(lemmas, key=len) if lemmas else word
40
41 def __repr__(self):
42 return "<WordNetLemmatizer>"
43
[end of nltk/stem/wordnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nltk/stem/wordnet.py b/nltk/stem/wordnet.py
--- a/nltk/stem/wordnet.py
+++ b/nltk/stem/wordnet.py
@@ -6,8 +6,7 @@
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
-from nltk.corpus import wordnet
-from nltk.corpus.reader.wordnet import NOUN
+from nltk.corpus import wordnet as wn
class WordNetLemmatizer:
@@ -31,11 +30,19 @@
hardrock
"""
- def __init__(self):
- pass
-
- def lemmatize(self, word, pos=NOUN):
- lemmas = wordnet._morphy(word, pos)
+ def lemmatize(self, word: str, pos: str = wn.NOUN) -> str:
+ """Lemmatize `word` using WordNet's built-in morphy function.
+ Returns the input word unchanged if it cannot be found in WordNet.
+
+ :param word: The input word to lemmatize.
+ :type word: str
+ :param pos: The Part Of Speech tag. Valid options are `"n"` for nouns,
+ `"v"` for verbs, `"a"` for adjectives, `"r"` for adverbs and `"s"`
+ for satellite adjectives.
+ :param pos: str
+ :return: The lemma of `word`, for the given `pos`.
+ """
+ lemmas = wn._morphy(word, pos)
return min(lemmas, key=len) if lemmas else word
def __repr__(self):
| {"golden_diff": "diff --git a/nltk/stem/wordnet.py b/nltk/stem/wordnet.py\n--- a/nltk/stem/wordnet.py\n+++ b/nltk/stem/wordnet.py\n@@ -6,8 +6,7 @@\n # URL: <http://nltk.org/>\n # For license information, see LICENSE.TXT\n \n-from nltk.corpus import wordnet\n-from nltk.corpus.reader.wordnet import NOUN\n+from nltk.corpus import wordnet as wn\n \n \n class WordNetLemmatizer:\n@@ -31,11 +30,19 @@\n hardrock\n \"\"\"\n \n- def __init__(self):\n- pass\n-\n- def lemmatize(self, word, pos=NOUN):\n- lemmas = wordnet._morphy(word, pos)\n+ def lemmatize(self, word: str, pos: str = wn.NOUN) -> str:\n+ \"\"\"Lemmatize `word` using WordNet's built-in morphy function.\n+ Returns the input word unchanged if it cannot be found in WordNet.\n+\n+ :param word: The input word to lemmatize.\n+ :type word: str\n+ :param pos: The Part Of Speech tag. Valid options are `\"n\"` for nouns,\n+ `\"v\"` for verbs, `\"a\"` for adjectives, `\"r\"` for adverbs and `\"s\"`\n+ for satellite adjectives.\n+ :param pos: str\n+ :return: The lemma of `word`, for the given `pos`.\n+ \"\"\"\n+ lemmas = wn._morphy(word, pos)\n return min(lemmas, key=len) if lemmas else word\n \n def __repr__(self):\n", "issue": "WordNetLemmatizer in nltk.stem module\nWhat's the parameter of WordNetLemmatizer.lemmatize() in nltk.stem module?\r\nTurn to the document, what are the candidate value of the parameter **'pos'**?\r\n\r\nThe default value is 'Noun'. But use the function pos_tag() to get the pos of the word, the value appears to come from several options.\n", "before_files": [{"content": "# Natural Language Toolkit: WordNet stemmer interface\n#\n# Copyright (C) 2001-2021 NLTK Project\n# Author: Steven Bird <[email protected]>\n# Edward Loper <[email protected]>\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\nfrom nltk.corpus import wordnet\nfrom nltk.corpus.reader.wordnet import NOUN\n\n\nclass WordNetLemmatizer:\n \"\"\"\n WordNet Lemmatizer\n\n Lemmatize using WordNet's built-in morphy function.\n Returns the input word unchanged if it cannot be found in WordNet.\n\n >>> from nltk.stem import WordNetLemmatizer\n >>> wnl = WordNetLemmatizer()\n >>> print(wnl.lemmatize('dogs'))\n dog\n >>> print(wnl.lemmatize('churches'))\n church\n >>> print(wnl.lemmatize('aardwolves'))\n aardwolf\n >>> print(wnl.lemmatize('abaci'))\n abacus\n >>> print(wnl.lemmatize('hardrock'))\n hardrock\n \"\"\"\n\n def __init__(self):\n pass\n\n def lemmatize(self, word, pos=NOUN):\n lemmas = wordnet._morphy(word, pos)\n return min(lemmas, key=len) if lemmas else word\n\n def __repr__(self):\n return \"<WordNetLemmatizer>\"\n", "path": "nltk/stem/wordnet.py"}]} | 1,094 | 376 |
gh_patches_debug_21029 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-655 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create classes to represent ionization state distributions
My plan for this PR is to create classes to represent the ionization state distributions of one or more elements. I am going to add in a bunch of dunder methods like `__getitem__` and maybe `__call__` to help make access to the ionization states more straightforward and intuitive. Any suggestions on the naming convention will be helpful so that we can maximize readability.
Eventually we'll need a way to calculate ionization state distributions assuming collisional ionization equilibrium, but that will be for a different PR. The purpose of this PR is to set up how to store and access the ionization distributions. This will be discussed in #352.
This will address some of #352. It will probably be best to wait until after the `0.1.0` release to merge this, since this PR is only for a partial implementation anyway.
</issue>
<code>
[start of plasmapy/examples/plot_dispersion_function.py]
1 """
2 The plasma dispersion function
3 ==============================
4
5 Let's import some basics (and `PlasmaPy`!)
6 """
7
8
9 import numpy as np
10 import matplotlib.pyplot as plt
11 import plasmapy
12
13
14 #######################################################################
15 help(plasmapy.mathematics.plasma_dispersion_func)
16
17
18 #######################################################################
19 # We'll now make some sample data to visualize the dispersion function:
20
21 x = np.linspace(-1, 1, 1000)
22 X, Y = np.meshgrid(x, x)
23 Z = X + 1j * Y
24 print(Z.shape)
25
26 #######################################################################
27 # Before we start plotting, let's make a visualization function first:
28
29
30 def plot_complex(X, Y, Z, N=50):
31 fig, (real_axis, imag_axis) = plt.subplots(1, 2)
32 real_axis.contourf(X, Y, Z.real, N)
33 imag_axis.contourf(X, Y, Z.imag, N)
34 real_axis.set_title("Real values")
35 imag_axis.set_title("Imaginary values")
36 for ax in [real_axis, imag_axis]:
37 ax.set_xlabel("Real values")
38 ax.set_ylabel("Imaginary values")
39 fig.tight_layout()
40
41
42 plot_complex(X, Y, Z)
43
44 #######################################################################
45 # We can now apply our visualization function to our simple
46
47 F = plasmapy.mathematics.plasma_dispersion_func(Z)
48 plot_complex(X, Y, F)
49
50
51 #######################################################################
52 # So this is going to be a hack and I'm not 100% sure the dispersion function
53 # is quite what I think it is, but let's find the area where the dispersion
54 # function has a lesser than zero real part because I think it may be important
55 # (brb reading Fried and Conte):
56
57 plot_complex(X, Y, F.real < 0)
58
59
60 #######################################################################
61 # We can also visualize the derivative:
62
63 F = plasmapy.mathematics.plasma_dispersion_func_deriv(Z)
64 plot_complex(X, Y, F)
65
66 #######################################################################
67 # Plotting the same function on a larger area:
68
69 x = np.linspace(-2, 2, 2000)
70 X, Y = np.meshgrid(x, x)
71 Z = X + 1j * Y
72 print(Z.shape)
73
74 #######################################################################
75
76 F = plasmapy.mathematics.plasma_dispersion_func(Z)
77 plot_complex(X, Y, F, 100)
78
79 #######################################################################
80 # Now we examine the derivative of the dispersion function as a function
81 # of the phase velocity of an electromagnetic wave propagating through
82 # the plasma. This is recreating figure 5.1 in:
83 # J. Sheffield, D. Froula, S. H. Glenzer, and N. C. Luhmann Jr,
84 # Plasma scattering of electromagnetic radiation: theory and measurement
85 # techniques. Chapter 5 Pg 106 (Academic press, 2010).
86
87 xs = np.linspace(0, 4, 100)
88 ws = (-1 / 2) * plasmapy.mathematics.plasma_dispersion_func_deriv(xs)
89 wRe = np.real(ws)
90 wIm = np.imag(ws)
91
92 plt.plot(xs, wRe, label="Re")
93 plt.plot(xs, wIm, label="Im")
94 plt.axis([0, 4, -0.3, 1])
95 plt.legend(loc='upper right',
96 frameon=False,
97 labelspacing=0.001,
98 fontsize=14,
99 borderaxespad=0.1)
100 plt.show()
[end of plasmapy/examples/plot_dispersion_function.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plasmapy/examples/plot_dispersion_function.py b/plasmapy/examples/plot_dispersion_function.py
--- a/plasmapy/examples/plot_dispersion_function.py
+++ b/plasmapy/examples/plot_dispersion_function.py
@@ -10,7 +10,6 @@
import matplotlib.pyplot as plt
import plasmapy
-
#######################################################################
help(plasmapy.mathematics.plasma_dispersion_func)
@@ -41,9 +40,10 @@
plot_complex(X, Y, Z)
-#######################################################################
-# We can now apply our visualization function to our simple
+###############################################################################
+# We can now apply our visualization function to our simple dispersion relation
+# sphinx_gallery_thumbnail_number = 2
F = plasmapy.mathematics.plasma_dispersion_func(Z)
plot_complex(X, Y, F)
@@ -97,4 +97,4 @@
labelspacing=0.001,
fontsize=14,
borderaxespad=0.1)
-plt.show()
\ No newline at end of file
+plt.show()
| {"golden_diff": "diff --git a/plasmapy/examples/plot_dispersion_function.py b/plasmapy/examples/plot_dispersion_function.py\n--- a/plasmapy/examples/plot_dispersion_function.py\n+++ b/plasmapy/examples/plot_dispersion_function.py\n@@ -10,7 +10,6 @@\n import matplotlib.pyplot as plt\n import plasmapy\n \n-\n #######################################################################\n help(plasmapy.mathematics.plasma_dispersion_func)\n \n@@ -41,9 +40,10 @@\n \n plot_complex(X, Y, Z)\n \n-#######################################################################\n-# We can now apply our visualization function to our simple\n+###############################################################################\n+# We can now apply our visualization function to our simple dispersion relation\n \n+# sphinx_gallery_thumbnail_number = 2\n F = plasmapy.mathematics.plasma_dispersion_func(Z)\n plot_complex(X, Y, F)\n \n@@ -97,4 +97,4 @@\n labelspacing=0.001,\n fontsize=14,\n borderaxespad=0.1)\n-plt.show()\n\\ No newline at end of file\n+plt.show()\n", "issue": "Create classes to represent ionization state distributions\nMy plan for this PR is to create classes to represent the ionization state distributions of one or more elements. I am going to add in a bunch of dunder methods like `__getitem__` and maybe `__call__` to help making access to the ionization states more straightfoward and intuitive. Any suggestions on the naming convention will be helpful so that we can maximize readability. \r\n\r\nEventually we'll need a way to calculate ionization state distributions assuming collisional ionization equilibrium, but that will be for a different PR. The purpose of this PR is to set up how to store and access the ionization distributions. This will be discussed in #352.\r\n\r\nThis will address some of #352. 
It will probably be best to wait until after the `0.1.0` release to merge this, since this PR is only for a partial implementation anyway.\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThe plasma dispersion function\n==============================\n\nLet's import some basics (and `PlasmaPy`!)\n\"\"\"\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport plasmapy\n\n\n#######################################################################\nhelp(plasmapy.mathematics.plasma_dispersion_func)\n\n\n#######################################################################\n# We'll now make some sample data to visualize the dispersion function:\n\nx = np.linspace(-1, 1, 1000)\nX, Y = np.meshgrid(x, x)\nZ = X + 1j * Y\nprint(Z.shape)\n\n#######################################################################\n# Before we start plotting, let's make a visualization function first:\n\n\ndef plot_complex(X, Y, Z, N=50):\n fig, (real_axis, imag_axis) = plt.subplots(1, 2)\n real_axis.contourf(X, Y, Z.real, N)\n imag_axis.contourf(X, Y, Z.imag, N)\n real_axis.set_title(\"Real values\")\n imag_axis.set_title(\"Imaginary values\")\n for ax in [real_axis, imag_axis]:\n ax.set_xlabel(\"Real values\")\n ax.set_ylabel(\"Imaginary values\")\n fig.tight_layout()\n\n\nplot_complex(X, Y, Z)\n\n#######################################################################\n# We can now apply our visualization function to our simple\n\nF = plasmapy.mathematics.plasma_dispersion_func(Z)\nplot_complex(X, Y, F)\n\n\n#######################################################################\n# So this is going to be a hack and I'm not 100% sure the dispersion function\n# is quite what I think it is, but let's find the area where the dispersion\n# function has a lesser than zero real part because I think it may be important\n# (brb reading Fried and Conte):\n\nplot_complex(X, Y, F.real < 0)\n\n\n#######################################################################\n# We can also visualize the derivative:\n\nF = plasmapy.mathematics.plasma_dispersion_func_deriv(Z)\nplot_complex(X, Y, F)\n\n#######################################################################\n# Plotting the same function on a larger area:\n\nx = np.linspace(-2, 2, 2000)\nX, Y = np.meshgrid(x, x)\nZ = X + 1j * Y\nprint(Z.shape)\n\n#######################################################################\n\nF = plasmapy.mathematics.plasma_dispersion_func(Z)\nplot_complex(X, Y, F, 100)\n\n#######################################################################\n# Now we examine the derivative of the dispersion function as a function\n# of the phase velocity of an electromagnetic wave propagating through\n# the plasma. This is recreating figure 5.1 in:\n# J. Sheffield, D. Froula, S. H. Glenzer, and N. C. Luhmann Jr,\n# Plasma scattering of electromagnetic radiation: theory and measurement\n# techniques. Chapter 5 Pg 106 (Academic press, 2010).\n\nxs = np.linspace(0, 4, 100)\nws = (-1 / 2) * plasmapy.mathematics.plasma_dispersion_func_deriv(xs)\nwRe = np.real(ws)\nwIm = np.imag(ws)\n\nplt.plot(xs, wRe, label=\"Re\")\nplt.plot(xs, wIm, label=\"Im\")\nplt.axis([0, 4, -0.3, 1])\nplt.legend(loc='upper right',\n frameon=False,\n labelspacing=0.001,\n fontsize=14,\n borderaxespad=0.1)\nplt.show()", "path": "plasmapy/examples/plot_dispersion_function.py"}]} | 1,686 | 239 |
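Since the record above is about the design of an ionization-state container rather than a finished API, here is a deliberately small, hypothetical sketch of the kind of `__getitem__`/`__call__` access discussed in the issue. The class and attribute names are invented for illustration and are not PlasmaPy's actual interface.

```python
import numpy as np


class IonizationState:
    """Toy container for a single element's ionization fractions."""

    def __init__(self, element, fractions):
        fractions = np.asarray(fractions, dtype=float)
        if not np.isclose(fractions.sum(), 1.0):
            raise ValueError("ionization fractions must sum to one")
        self.element = element
        self._fractions = fractions

    def __getitem__(self, charge):
        # Fraction of atoms in the given integer charge state.
        return self._fractions[charge]

    def __call__(self):
        # Return the whole distribution as an array.
        return self._fractions


helium = IonizationState("He", [0.1, 0.7, 0.2])
print(helium[1])   # 0.7, the He+ fraction
print(helium())    # array([0.1, 0.7, 0.2])
```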
gh_patches_debug_10872 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1887 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception processing E3037 for AWS::S3::Bucket.Transition.TransitionDate
```
$ cfn-lint --version
cfn-lint 0.44.5
```
The `TransitionDate` property is defined with `PrimitiveType: "Timestamp"`:
```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
Bucket:
Type: AWS::S3::Bucket
Properties:
LifecycleConfiguration:
Rules:
- Status: Enabled
Transitions:
- StorageClass: INTELLIGENT_TIERING
TransitionDate: 2021-01-01T00:00:00.000Z
```
This is a valid template and can be successfully deployed, but `cfn-lint` fails with:
```
$ cfn-lint scratch.yml
E0002 Unknown exception while processing rule E3037: Object of type datetime is not JSON serializable
scratch.yml:1:1
```
Running with `--debug` shows the exception is generated at https://github.com/aws-cloudformation/cfn-python-lint/blob/c7658511bd7066417682103f21f71983c67ea6d0/src/cfnlint/rules/resources/properties/ListDuplicates.py#L36
Quoting the TransitionDate value suppresses this error, e.g. `TransitionDate: "2021-01-01T00:00:00.000Z"`
</issue>
<code>
[start of src/cfnlint/rules/resources/properties/ListDuplicates.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import hashlib
6 import json
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9
10 from cfnlint.helpers import RESOURCE_SPECS
11
12
13 class ListDuplicates(CloudFormationLintRule):
14 """Check if duplicates exist in a List"""
15 id = 'E3037'
16 shortdesc = 'Check if a list has duplicate values'
17 description = 'Certain lists don\'t support duplicate items. ' \
18 'Check when duplicates are provided but not supported.'
19 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'
20 tags = ['resources', 'property', 'list']
21
22 def initialize(self, cfn):
23 """Initialize the rule"""
24 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
25 self.resource_property_types.append(resource_type_spec)
26 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
27 self.resource_sub_property_types.append(property_type_spec)
28
29 def _check_duplicates(self, values, path, scenario=None):
30 """ Check for Duplicates """
31 matches = []
32
33 list_items = []
34 if isinstance(values, list):
35 for index, value in enumerate(values):
36 value_hash = hashlib.sha1(json.dumps(
37 value, sort_keys=True).encode('utf-8')).hexdigest()
38 if value_hash in list_items:
39 if not scenario:
40 message = 'List has a duplicate value at {0}'
41 matches.append(
42 RuleMatch(path + [index], message.format('/'.join(map(str, path + [index])))))
43 else:
44 scenario_text = ' and '.join(
45 ['condition "%s" is %s' % (k, v) for (k, v) in scenario.items()])
46 message = 'List has a duplicate value at {0} when {1}'
47 matches.append(RuleMatch(path, message.format(
48 '/'.join(map(str, path)), scenario_text)))
49
50 list_items.append(value_hash)
51
52 return matches
53
54 def check_duplicates(self, values, path, cfn):
55 """ Check for duplicates """
56 matches = []
57
58 if isinstance(values, list):
59 matches.extend(self._check_duplicates(values, path))
60 elif isinstance(values, dict):
61 props = cfn.get_object_without_conditions(values)
62 for prop in props:
63 matches.extend(self._check_duplicates(
64 prop.get('Object'), path, prop.get('Scenario')))
65
66 return matches
67
68 def check(self, cfn, properties, value_specs, path):
69 """Check itself"""
70 matches = list()
71 for p_value, p_path in properties.items_safe(path[:]):
72 for prop in p_value:
73 if prop in value_specs:
74 property_type = value_specs.get(prop).get('Type')
75 duplicates_allowed = value_specs.get(prop).get('DuplicatesAllowed', True)
76 if property_type == 'List' and not duplicates_allowed:
77 matches.extend(
78 self.check_duplicates(
79 p_value[prop], p_path + [prop], cfn
80 )
81 )
82
83 return matches
84
85 def match_resource_sub_properties(self, properties, property_type, path, cfn):
86 """Match for sub properties"""
87 matches = list()
88
89 specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
90 'PropertyTypes').get(property_type, {}).get('Properties', {})
91 matches.extend(self.check(cfn, properties, specs, path))
92
93 return matches
94
95 def match_resource_properties(self, properties, resource_type, path, cfn):
96 """Check CloudFormation Properties"""
97 matches = list()
98
99 specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
100 'ResourceTypes').get(resource_type, {}).get('Properties', {})
101 matches.extend(self.check(cfn, properties, specs, path))
102
103 return matches
104
[end of src/cfnlint/rules/resources/properties/ListDuplicates.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/properties/ListDuplicates.py b/src/cfnlint/rules/resources/properties/ListDuplicates.py
--- a/src/cfnlint/rules/resources/properties/ListDuplicates.py
+++ b/src/cfnlint/rules/resources/properties/ListDuplicates.py
@@ -34,7 +34,7 @@
if isinstance(values, list):
for index, value in enumerate(values):
value_hash = hashlib.sha1(json.dumps(
- value, sort_keys=True).encode('utf-8')).hexdigest()
+ value, sort_keys=True, default=str).encode('utf-8')).hexdigest()
if value_hash in list_items:
if not scenario:
message = 'List has a duplicate value at {0}'
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/ListDuplicates.py b/src/cfnlint/rules/resources/properties/ListDuplicates.py\n--- a/src/cfnlint/rules/resources/properties/ListDuplicates.py\n+++ b/src/cfnlint/rules/resources/properties/ListDuplicates.py\n@@ -34,7 +34,7 @@\n if isinstance(values, list):\n for index, value in enumerate(values):\n value_hash = hashlib.sha1(json.dumps(\n- value, sort_keys=True).encode('utf-8')).hexdigest()\n+ value, sort_keys=True, default=str).encode('utf-8')).hexdigest()\n if value_hash in list_items:\n if not scenario:\n message = 'List has a duplicate value at {0}'\n", "issue": "Exception processing E3037 for AWS::S3::Bucket.Transition.TransitionDate\n```\r\n$ cfn-lint --version\r\ncfn-lint 0.44.5\r\n```\r\n\r\nThe `TransitionDate` property is defined with `PrimitiveType: \"Timestamp\"`:\r\n\r\n```yaml\r\nAWSTemplateFormatVersion: 2010-09-09\r\n\r\nResources:\r\n Bucket:\r\n Type: AWS::S3::Bucket\r\n Properties:\r\n LifecycleConfiguration:\r\n Rules:\r\n - Status: Enabled\r\n Transitions:\r\n - StorageClass: INTELLIGENT_TIERING\r\n TransitionDate: 2021-01-01T00:00:00.000Z\r\n```\r\n\r\nThis is a valid template and can be successfully deployed, but `cfn-lint` fails with:\r\n\r\n```\r\n$ cfn-lint scratch.yml\r\nE0002 Unknown exception while processing rule E3037: Object of type datetime is not JSON serializable\r\nscratch.yml:1:1\r\n```\r\n\r\nRunning with `--debug` shows the exception is generated at https://github.com/aws-cloudformation/cfn-python-lint/blob/c7658511bd7066417682103f21f71983c67ea6d0/src/cfnlint/rules/resources/properties/ListDuplicates.py#L36\r\n\r\nQuoting the TransitionDate value suppresses this error, e.g. `TransitionDate: \"2021-01-01T00:00:00.000Z\"`\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport hashlib\nimport json\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass ListDuplicates(CloudFormationLintRule):\n \"\"\"Check if duplicates exist in a List\"\"\"\n id = 'E3037'\n shortdesc = 'Check if a list has duplicate values'\n description = 'Certain lists don\\'t support duplicate items. 
' \\\n 'Check when duplicates are provided but not supported.'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'list']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def _check_duplicates(self, values, path, scenario=None):\n \"\"\" Check for Duplicates \"\"\"\n matches = []\n\n list_items = []\n if isinstance(values, list):\n for index, value in enumerate(values):\n value_hash = hashlib.sha1(json.dumps(\n value, sort_keys=True).encode('utf-8')).hexdigest()\n if value_hash in list_items:\n if not scenario:\n message = 'List has a duplicate value at {0}'\n matches.append(\n RuleMatch(path + [index], message.format('/'.join(map(str, path + [index])))))\n else:\n scenario_text = ' and '.join(\n ['condition \"%s\" is %s' % (k, v) for (k, v) in scenario.items()])\n message = 'List has a duplicate value at {0} when {1}'\n matches.append(RuleMatch(path, message.format(\n '/'.join(map(str, path)), scenario_text)))\n\n list_items.append(value_hash)\n\n return matches\n\n def check_duplicates(self, values, path, cfn):\n \"\"\" Check for duplicates \"\"\"\n matches = []\n\n if isinstance(values, list):\n matches.extend(self._check_duplicates(values, path))\n elif isinstance(values, dict):\n props = cfn.get_object_without_conditions(values)\n for prop in props:\n matches.extend(self._check_duplicates(\n prop.get('Object'), path, prop.get('Scenario')))\n\n return matches\n\n def check(self, cfn, properties, value_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n property_type = value_specs.get(prop).get('Type')\n duplicates_allowed = value_specs.get(prop).get('DuplicatesAllowed', True)\n if property_type == 'List' and not duplicates_allowed:\n matches.extend(\n self.check_duplicates(\n p_value[prop], p_path + [prop], cfn\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/ListDuplicates.py"}]} | 1,956 | 155 |
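The root cause in the record above is easy to reproduce outside cfn-lint: YAML loaders hand back `datetime` objects for unquoted timestamps, and `json.dumps` refuses them unless a fallback encoder is supplied. The snippet below is a standalone illustration of why `default=str`, the change made in the golden diff, lets the duplicate-detection hash be computed.

```python
import datetime
import hashlib
import json

# What a YAML loader produces for an unquoted TransitionDate value.
value = {"TransitionDate": datetime.datetime(2021, 1, 1)}

try:
    json.dumps(value, sort_keys=True)
except TypeError as exc:
    print("without default:", exc)  # Object of type datetime is not JSON serializable

# Coercing unsupported types through str() keeps the dump deterministic,
# so the SHA-1 used for duplicate detection can still be computed.
digest = hashlib.sha1(
    json.dumps(value, sort_keys=True, default=str).encode("utf-8")
).hexdigest()
print("with default=str:", digest)
```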
gh_patches_debug_33558 | rasdani/github-patches | git_diff | wagtail__wagtail-170 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken URL for jquery.ui.datepicker when 'en-US' used as lang
This isn't a big deal at all, but wanted to post just in case anyone wants to take a look.
When loading a page with `jquery.ui.datepicker.js`, I notice in console that a call to http://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-en-US.js returns a 404.
I looked up the CDN directory from which the file is being requested:
http://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/
As you can see, there is no `../jquery.ui.datepicker-en-US.js` present (not that there necessarily ought to be)
The call stems from:
https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/templatetags/localize.py#L42
The interpolation inserts `en-US` into the URI
Again, no big deal... just FYI
Cheers, all!
Edit:
I should add, this issue does _not_ break usability - a fallback seems to be in place.
</issue>
<code>
[start of wagtail/wagtailadmin/templatetags/localize.py]
1 from django import template
2 from django.conf import settings
3 from django.utils import formats
4 from django.utils.translation import get_language
5
6 register = template.Library()
7
8 # For reasons unkown, the el (greek) locale in django/conf/locale/el/formats.py
9 # *did not* contain a DATE_INPUT_FORMATS -- so it fell back to using the US
10 # date format (mm/dd/yy) which is not the correct one for Greece (dd/mm/yy).
11 # This means that if we used a localized datepicker django *won't* be able to
12 # parse the dates! So a test here checks if DATE_INPUT_FORMATS is actually
13 # defined in a format module. If yes then it will just return an empty string
14 # so that the normal, localized date format from datepicker will be used.
15 # If DATE_INPUT_FORMATS is not defined then it will return
16 @register.assignment_tag
17 def get_date_format_override():
18 if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True:
19
20 for m in formats.get_format_modules():
21 if hasattr(m, 'DATE_INPUT_FORMATS'):
22 return ''
23 else: # fall back to the ISO to be sure date will be parsed
24 return 'yy-mm-dd'
25 else: # Fall back to ISO if I18N is *not* used
26 return 'yy-mm-dd'
27
28 # Get the correct i18n + l10n settings for datepicker depending on current
29 # thread language
30 @register.simple_tag
31 def get_localized_datepicker_js():
32 if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True and \
33 hasattr(settings, 'USE_L10N') and settings.USE_L10N==True:
34
35 lang = get_language()
36
37 if '-' in lang:
38 lang_parts = lang.split('-')
39 lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()
40 else:
41 lang=lang.lower()
42 return '<script src="//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js"></script>'.format(
43 lang
44 )
45
46 else: # Don't write anything if we don't use I18N and L10N
47 return ''
48
[end of wagtail/wagtailadmin/templatetags/localize.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailadmin/templatetags/localize.py b/wagtail/wagtailadmin/templatetags/localize.py
--- a/wagtail/wagtailadmin/templatetags/localize.py
+++ b/wagtail/wagtailadmin/templatetags/localize.py
@@ -1,5 +1,6 @@
from django import template
from django.conf import settings
+from django.templatetags.static import static
from django.utils import formats
from django.utils.translation import get_language
@@ -25,6 +26,15 @@
else: # Fall back to ISO if I18N is *not* used
return 'yy-mm-dd'
+# This is a list of all supported langs for jquery-ui datepicker which exist in
+# wagtailadmin/js/venor/i18n/. In case any new translations are added there the
+# language code should also be added in this list.
+SUPPORTED_DATEPICKER_LANGS = ['af', 'ar-DZ', 'ar', 'az', 'be', 'bg', 'bs', 'ca', 'cs', 'cy-GB', 'da', 'de',
+ 'el', 'en-AU', 'en-GB', 'en-NZ', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fo', 'fr-CA', 'fr-CH', 'fr', 'gl',
+ 'he', 'hi', 'hr', 'hu', 'hy', 'id', 'is', 'it', 'ja', 'ka', 'kk', 'km', 'ko', 'ky', 'lb', 'lt', 'lv',
+ 'mk', 'ml', 'ms', 'nb', 'nl-BE', 'nl', 'nn', 'no', 'pl', 'pt-BR', 'pt', 'rm', 'ro', 'ru', 'sk', 'sl', 'sq',
+ 'sr-SR', 'sr', 'sv', 'ta', 'th', 'tj', 'tr', 'uk', 'vi', 'zh-CN', 'zh-HK', 'zh-TW'
+]
# Get the correct i18n + l10n settings for datepicker depending on current
# thread language
@register.simple_tag
@@ -39,10 +49,14 @@
lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()
else:
lang=lang.lower()
- return '<script src="//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js"></script>'.format(
- lang
- )
+ if lang in SUPPORTED_DATEPICKER_LANGS:
+ translation_file = static("wagtailadmin/js/vendor/i18n/jquery.ui.datepicker-{0}.js".format(
+ lang
+ ))
+ return '<script src="{0}"></script>'.format(translation_file)
+ else: # Don't return anything if language is not supported
+ return ''
- else: # Don't write anything if we don't use I18N and L10N
+ else: # Don't return anything if we don't use I18N and L10N
return ''
\ No newline at end of file
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/templatetags/localize.py b/wagtail/wagtailadmin/templatetags/localize.py\n--- a/wagtail/wagtailadmin/templatetags/localize.py\n+++ b/wagtail/wagtailadmin/templatetags/localize.py\n@@ -1,5 +1,6 @@\n from django import template\n from django.conf import settings\n+from django.templatetags.static import static\n from django.utils import formats\n from django.utils.translation import get_language\n \n@@ -25,6 +26,15 @@\n else: # Fall back to ISO if I18N is *not* used\n return 'yy-mm-dd'\n \n+# This is a list of all supported langs for jquery-ui datepicker which exist in\n+# wagtailadmin/js/venor/i18n/. In case any new translations are added there the\n+# language code should also be added in this list.\n+SUPPORTED_DATEPICKER_LANGS = ['af', 'ar-DZ', 'ar', 'az', 'be', 'bg', 'bs', 'ca', 'cs', 'cy-GB', 'da', 'de',\n+ 'el', 'en-AU', 'en-GB', 'en-NZ', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fo', 'fr-CA', 'fr-CH', 'fr', 'gl',\n+ 'he', 'hi', 'hr', 'hu', 'hy', 'id', 'is', 'it', 'ja', 'ka', 'kk', 'km', 'ko', 'ky', 'lb', 'lt', 'lv',\n+ 'mk', 'ml', 'ms', 'nb', 'nl-BE', 'nl', 'nn', 'no', 'pl', 'pt-BR', 'pt', 'rm', 'ro', 'ru', 'sk', 'sl', 'sq',\n+ 'sr-SR', 'sr', 'sv', 'ta', 'th', 'tj', 'tr', 'uk', 'vi', 'zh-CN', 'zh-HK', 'zh-TW'\n+]\n # Get the correct i18n + l10n settings for datepicker depending on current \n # thread language \n @register.simple_tag\n@@ -39,10 +49,14 @@\n lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()\n else:\n lang=lang.lower()\n- return '<script src=\"//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js\"></script>'.format(\n- lang\n- )\n+ if lang in SUPPORTED_DATEPICKER_LANGS:\n+ translation_file = static(\"wagtailadmin/js/vendor/i18n/jquery.ui.datepicker-{0}.js\".format(\n+ lang\n+ ))\n+ return '<script src=\"{0}\"></script>'.format(translation_file)\n+ else: # Don't return anything if language is not supported\n+ return ''\n \n- else: # Don't write anything if we don't use I18N and L10N\n+ else: # Don't return anything if we don't use I18N and L10N\n return '' \n \n\\ No newline at end of file\n", "issue": "Broken URL for jquery.ui.datepicker when 'en-US' used as lang \nThis isn't a big deal at all, but wanted to post just in case anyone wants to take a look.\n\nWhen loading a page with `jquery.ui.datepicker.js`, I notice in console that a call to http://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-en-US.js returns a 404.\n\nI searched out the CDN for the directory in which the file is attempting to be called:\nhttp://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/\n\nAs you can see, there is no `../jquery.ui.datepicker-en-US.js` present (not that there necessarily ought to be)\n\nThe call stems from:\nhttps://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/templatetags/localize.py#L42\n\nThe interpolation inserts `en-US` into the URI\n\nAgain, no big deal... just FYI\n\nCheers, all!\n\nEdit:\n\nI should add, this issue does _not_ break usability - a fallback seems to be in place.\n\n", "before_files": [{"content": "from django import template\nfrom django.conf import settings\nfrom django.utils import formats\nfrom django.utils.translation import get_language\n\nregister = template.Library()\n\n# For reasons unkown, the el (greek) locale in django/conf/locale/el/formats.py \n# *did not* contain a DATE_INPUT_FORMATS -- so it fell back to using the US \n# date format (mm/dd/yy) which is not the correct one for Greece (dd/mm/yy). 
\n# This means that if we used a localized datepicker django *won't* be able to\n# parse the dates! So a test here checks if DATE_INPUT_FORMATS is actually \n# defined in a format module. If yes then it will just return an empty string \n# so that the normal, localized date format from datepicker will be used.\n# If DATE_INPUT_FORMATS is not defined then it will return\[email protected]_tag\ndef get_date_format_override():\n if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True:\n \n for m in formats.get_format_modules():\n if hasattr(m, 'DATE_INPUT_FORMATS'):\n return ''\n else: # fall back to the ISO to be sure date will be parsed\n return 'yy-mm-dd'\n else: # Fall back to ISO if I18N is *not* used\n return 'yy-mm-dd'\n\n# Get the correct i18n + l10n settings for datepicker depending on current \n# thread language \[email protected]_tag\ndef get_localized_datepicker_js():\n if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True and \\\n hasattr(settings, 'USE_L10N') and settings.USE_L10N==True:\n \n lang = get_language()\n \n if '-' in lang:\n lang_parts = lang.split('-')\n lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()\n else:\n lang=lang.lower()\n return '<script src=\"//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js\"></script>'.format(\n lang\n )\n \n else: # Don't write anything if we don't use I18N and L10N\n return '' \n ", "path": "wagtail/wagtailadmin/templatetags/localize.py"}]} | 1,370 | 735 |
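The shape of the fix in the record above, normalising the language code and then only emitting a script tag for translations that actually ship, can be shown without Django at all. The static path and the abridged language set below are illustrative assumptions, not the exact values used by Wagtail.

```python
# Abridged for the example; the real list enumerates every bundled translation.
SUPPORTED_DATEPICKER_LANGS = {"en-GB", "en-AU", "fr", "de", "el", "pt-BR"}


def datepicker_script_tag(lang, base="/static/wagtailadmin/js/vendor/i18n"):
    # Normalise e.g. "en-us" -> "en-US" and "FR" -> "fr".
    if "-" in lang:
        head, tail = lang.split("-", 1)
        lang = head.lower() + "-" + tail.upper()
    else:
        lang = lang.lower()
    if lang not in SUPPORTED_DATEPICKER_LANGS:
        return ""  # emitting no tag beats requesting a file that does not exist
    return '<script src="{0}/jquery.ui.datepicker-{1}.js"></script>'.format(base, lang)


print(repr(datepicker_script_tag("en-us")))  # '' because no en-US file ships
print(datepicker_script_tag("fr"))
```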
gh_patches_debug_6828 | rasdani/github-patches | git_diff | kartoza__prj.app-162 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Landing page gives a 404
</issue>
<code>
[start of django_project/base/views/error_views.py]
1 # coding=utf-8
2 """Our custom error views"""
3 from django.shortcuts import render_to_response
4 from django.template import RequestContext
5 from base.models.project import Project
6
7
8 def custom_404(request, template_name='404.html'):
9 """Our custom 404 view
10
11 We want to include a list of all public and approved Projects in the 404
12 view
13 :param request: Request obj
14 :type request: HttpRequest
15
16 :param template_name: The template to render
17 :type template_name: str
18
19 :return: Response obj
20 :rtype: HttpResponse
21
22 """
23 public_projects = Project.objects.filter(approved=True, private=False)
24 return render_to_response(template_name, {
25 'request_path': request.path,
26 'projects': public_projects
27 }, context_instance=RequestContext(request))
28
[end of django_project/base/views/error_views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django_project/base/views/error_views.py b/django_project/base/views/error_views.py
--- a/django_project/base/views/error_views.py
+++ b/django_project/base/views/error_views.py
@@ -21,7 +21,11 @@
"""
public_projects = Project.objects.filter(approved=True, private=False)
- return render_to_response(template_name, {
- 'request_path': request.path,
- 'projects': public_projects
- }, context_instance=RequestContext(request))
+
+ response = render_to_response(
+ template_name, {
+ 'request_path': request.path,
+ 'projects': public_projects},
+ context_instance=RequestContext(request))
+ response.status_code = 404
+ return response
| {"golden_diff": "diff --git a/django_project/base/views/error_views.py b/django_project/base/views/error_views.py\n--- a/django_project/base/views/error_views.py\n+++ b/django_project/base/views/error_views.py\n@@ -21,7 +21,11 @@\n \n \"\"\"\n public_projects = Project.objects.filter(approved=True, private=False)\n- return render_to_response(template_name, {\n- 'request_path': request.path,\n- 'projects': public_projects\n- }, context_instance=RequestContext(request))\n+\n+ response = render_to_response(\n+ template_name, {\n+ 'request_path': request.path,\n+ 'projects': public_projects},\n+ context_instance=RequestContext(request))\n+ response.status_code = 404\n+ return response\n", "issue": "Landing page gives a 404\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Our custom error views\"\"\"\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom base.models.project import Project\n\n\ndef custom_404(request, template_name='404.html'):\n \"\"\"Our custom 404 view\n\n We want to include a list of all public and approved Projects in the 404\n view\n :param request: Request obj\n :type request: HttpRequest\n\n :param template_name: The template to render\n :type template_name: str\n\n :return: Response obj\n :rtype: HttpResponse\n\n \"\"\"\n public_projects = Project.objects.filter(approved=True, private=False)\n return render_to_response(template_name, {\n 'request_path': request.path,\n 'projects': public_projects\n }, context_instance=RequestContext(request))\n", "path": "django_project/base/views/error_views.py"}]} | 781 | 169 |
gh_patches_debug_38348 | rasdani/github-patches | git_diff | PaddlePaddle__models-312 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problems with the resnet model configuration
The current resnet configuration has some problems; see https://github.com/PaddlePaddle/models/issues/308#issuecomment-331384031
</issue>
<code>
[start of image_classification/resnet.py]
1 import paddle.v2 as paddle
2
3 __all__ = ['resnet_imagenet', 'resnet_cifar10']
4
5
6 def conv_bn_layer(input,
7 ch_out,
8 filter_size,
9 stride,
10 padding,
11 active_type=paddle.activation.Relu(),
12 ch_in=None):
13 tmp = paddle.layer.img_conv(
14 input=input,
15 filter_size=filter_size,
16 num_channels=ch_in,
17 num_filters=ch_out,
18 stride=stride,
19 padding=padding,
20 act=paddle.activation.Linear(),
21 bias_attr=False)
22 return paddle.layer.batch_norm(input=tmp, act=active_type)
23
24
25 def shortcut(input, ch_in, ch_out, stride):
26 if ch_in != ch_out:
27 return conv_bn_layer(input, ch_out, 1, stride, 0,
28 paddle.activation.Linear())
29 else:
30 return input
31
32
33 def basicblock(input, ch_in, ch_out, stride):
34 short = shortcut(input, ch_in, ch_out, stride)
35 conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)
36 conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())
37 return paddle.layer.addto(
38 input=[short, conv2], act=paddle.activation.Relu())
39
40
41 def bottleneck(input, ch_in, ch_out, stride):
42 short = shortcut(input, ch_in, ch_out * 4, stride)
43 conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)
44 conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)
45 conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,
46 paddle.activation.Linear())
47 return paddle.layer.addto(
48 input=[short, conv3], act=paddle.activation.Relu())
49
50
51 def layer_warp(block_func, input, ch_in, ch_out, count, stride):
52 conv = block_func(input, ch_in, ch_out, stride)
53 for i in range(1, count):
54 conv = block_func(conv, ch_out, ch_out, 1)
55 return conv
56
57
58 def resnet_imagenet(input, class_dim, depth=50):
59 cfg = {
60 18: ([2, 2, 2, 1], basicblock),
61 34: ([3, 4, 6, 3], basicblock),
62 50: ([3, 4, 6, 3], bottleneck),
63 101: ([3, 4, 23, 3], bottleneck),
64 152: ([3, 8, 36, 3], bottleneck)
65 }
66 stages, block_func = cfg[depth]
67 conv1 = conv_bn_layer(
68 input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)
69 pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)
70 res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)
71 res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)
72 res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)
73 res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)
74 pool2 = paddle.layer.img_pool(
75 input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())
76 out = paddle.layer.fc(
77 input=pool2, size=class_dim, act=paddle.activation.Softmax())
78 return out
79
80
81 def resnet_cifar10(input, class_dim, depth=32):
82 # depth should be one of 20, 32, 44, 56, 110, 1202
83 assert (depth - 2) % 6 == 0
84 n = (depth - 2) / 6
85 nStages = {16, 64, 128}
86 conv1 = conv_bn_layer(
87 input, ch_in=3, ch_out=16, filter_size=3, stride=1, padding=1)
88 res1 = layer_warp(basicblock, conv1, 16, 16, n, 1)
89 res2 = layer_warp(basicblock, res1, 16, 32, n, 2)
90 res3 = layer_warp(basicblock, res2, 32, 64, n, 2)
91 pool = paddle.layer.img_pool(
92 input=res3, pool_size=8, stride=1, pool_type=paddle.pooling.Avg())
93 out = paddle.layer.fc(
94 input=pool, size=class_dim, act=paddle.activation.Softmax())
95 return out
96
[end of image_classification/resnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/image_classification/resnet.py b/image_classification/resnet.py
--- a/image_classification/resnet.py
+++ b/image_classification/resnet.py
@@ -22,24 +22,24 @@
return paddle.layer.batch_norm(input=tmp, act=active_type)
-def shortcut(input, ch_in, ch_out, stride):
- if ch_in != ch_out:
+def shortcut(input, ch_out, stride):
+ if input.num_filters != ch_out:
return conv_bn_layer(input, ch_out, 1, stride, 0,
paddle.activation.Linear())
else:
return input
-def basicblock(input, ch_in, ch_out, stride):
- short = shortcut(input, ch_in, ch_out, stride)
+def basicblock(input, ch_out, stride):
+ short = shortcut(input, ch_out, stride)
conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)
conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())
return paddle.layer.addto(
input=[short, conv2], act=paddle.activation.Relu())
-def bottleneck(input, ch_in, ch_out, stride):
- short = shortcut(input, ch_in, ch_out * 4, stride)
+def bottleneck(input, ch_out, stride):
+ short = shortcut(input, ch_out * 4, stride)
conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)
conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)
conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,
@@ -48,10 +48,10 @@
input=[short, conv3], act=paddle.activation.Relu())
-def layer_warp(block_func, input, ch_in, ch_out, count, stride):
- conv = block_func(input, ch_in, ch_out, stride)
+def layer_warp(block_func, input, ch_out, count, stride):
+ conv = block_func(input, ch_out, stride)
for i in range(1, count):
- conv = block_func(conv, ch_out, ch_out, 1)
+ conv = block_func(conv, ch_out, 1)
return conv
@@ -67,10 +67,10 @@
conv1 = conv_bn_layer(
input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)
pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)
- res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)
- res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)
- res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)
- res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)
+ res1 = layer_warp(block_func, pool1, 64, stages[0], 1)
+ res2 = layer_warp(block_func, res1, 128, stages[1], 2)
+ res3 = layer_warp(block_func, res2, 256, stages[2], 2)
+ res4 = layer_warp(block_func, res3, 512, stages[3], 2)
pool2 = paddle.layer.img_pool(
input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())
out = paddle.layer.fc(
| {"golden_diff": "diff --git a/image_classification/resnet.py b/image_classification/resnet.py\n--- a/image_classification/resnet.py\n+++ b/image_classification/resnet.py\n@@ -22,24 +22,24 @@\n return paddle.layer.batch_norm(input=tmp, act=active_type)\n \n \n-def shortcut(input, ch_in, ch_out, stride):\n- if ch_in != ch_out:\n+def shortcut(input, ch_out, stride):\n+ if input.num_filters != ch_out:\n return conv_bn_layer(input, ch_out, 1, stride, 0,\n paddle.activation.Linear())\n else:\n return input\n \n \n-def basicblock(input, ch_in, ch_out, stride):\n- short = shortcut(input, ch_in, ch_out, stride)\n+def basicblock(input, ch_out, stride):\n+ short = shortcut(input, ch_out, stride)\n conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())\n return paddle.layer.addto(\n input=[short, conv2], act=paddle.activation.Relu())\n \n \n-def bottleneck(input, ch_in, ch_out, stride):\n- short = shortcut(input, ch_in, ch_out * 4, stride)\n+def bottleneck(input, ch_out, stride):\n+ short = shortcut(input, ch_out * 4, stride)\n conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)\n conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,\n@@ -48,10 +48,10 @@\n input=[short, conv3], act=paddle.activation.Relu())\n \n \n-def layer_warp(block_func, input, ch_in, ch_out, count, stride):\n- conv = block_func(input, ch_in, ch_out, stride)\n+def layer_warp(block_func, input, ch_out, count, stride):\n+ conv = block_func(input, ch_out, stride)\n for i in range(1, count):\n- conv = block_func(conv, ch_out, ch_out, 1)\n+ conv = block_func(conv, ch_out, 1)\n return conv\n \n \n@@ -67,10 +67,10 @@\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)\n pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)\n- res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)\n- res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)\n- res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)\n- res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)\n+ res1 = layer_warp(block_func, pool1, 64, stages[0], 1)\n+ res2 = layer_warp(block_func, res1, 128, stages[1], 2)\n+ res3 = layer_warp(block_func, res2, 256, stages[2], 2)\n+ res4 = layer_warp(block_func, res3, 512, stages[3], 2)\n pool2 = paddle.layer.img_pool(\n input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n", "issue": "resnet\u6a21\u578b\u914d\u7f6e\u7684\u95ee\u9898\n\u76ee\u524dresnet\u7684\u914d\u7f6e\u6709\u4e00\u4e9b\u95ee\u9898\uff0c\u53ef\u89c1 https://github.com/PaddlePaddle/models/issues/308#issuecomment-331384031\n", "before_files": [{"content": "import paddle.v2 as paddle\n\n__all__ = ['resnet_imagenet', 'resnet_cifar10']\n\n\ndef conv_bn_layer(input,\n ch_out,\n filter_size,\n stride,\n padding,\n active_type=paddle.activation.Relu(),\n ch_in=None):\n tmp = paddle.layer.img_conv(\n input=input,\n filter_size=filter_size,\n num_channels=ch_in,\n num_filters=ch_out,\n stride=stride,\n padding=padding,\n act=paddle.activation.Linear(),\n bias_attr=False)\n return paddle.layer.batch_norm(input=tmp, act=active_type)\n\n\ndef shortcut(input, ch_in, ch_out, stride):\n if ch_in != ch_out:\n return conv_bn_layer(input, ch_out, 1, stride, 0,\n paddle.activation.Linear())\n else:\n return input\n\n\ndef basicblock(input, ch_in, ch_out, stride):\n short = shortcut(input, ch_in, ch_out, stride)\n conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)\n conv2 = 
conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())\n return paddle.layer.addto(\n input=[short, conv2], act=paddle.activation.Relu())\n\n\ndef bottleneck(input, ch_in, ch_out, stride):\n short = shortcut(input, ch_in, ch_out * 4, stride)\n conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)\n conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,\n paddle.activation.Linear())\n return paddle.layer.addto(\n input=[short, conv3], act=paddle.activation.Relu())\n\n\ndef layer_warp(block_func, input, ch_in, ch_out, count, stride):\n conv = block_func(input, ch_in, ch_out, stride)\n for i in range(1, count):\n conv = block_func(conv, ch_out, ch_out, 1)\n return conv\n\n\ndef resnet_imagenet(input, class_dim, depth=50):\n cfg = {\n 18: ([2, 2, 2, 1], basicblock),\n 34: ([3, 4, 6, 3], basicblock),\n 50: ([3, 4, 6, 3], bottleneck),\n 101: ([3, 4, 23, 3], bottleneck),\n 152: ([3, 8, 36, 3], bottleneck)\n }\n stages, block_func = cfg[depth]\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)\n pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)\n res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)\n res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)\n res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)\n res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)\n pool2 = paddle.layer.img_pool(\n input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n input=pool2, size=class_dim, act=paddle.activation.Softmax())\n return out\n\n\ndef resnet_cifar10(input, class_dim, depth=32):\n # depth should be one of 20, 32, 44, 56, 110, 1202\n assert (depth - 2) % 6 == 0\n n = (depth - 2) / 6\n nStages = {16, 64, 128}\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=16, filter_size=3, stride=1, padding=1)\n res1 = layer_warp(basicblock, conv1, 16, 16, n, 1)\n res2 = layer_warp(basicblock, res1, 16, 32, n, 2)\n res3 = layer_warp(basicblock, res2, 32, 64, n, 2)\n pool = paddle.layer.img_pool(\n input=res3, pool_size=8, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n input=pool, size=class_dim, act=paddle.activation.Softmax())\n return out\n", "path": "image_classification/resnet.py"}]} | 1,879 | 850 |
gh_patches_debug_3551 | rasdani/github-patches | git_diff | kornia__kornia-677 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_adapted_uniform is broken for tensors > 1 dimension and same_on_batch=True
## 🐛 Bug
`kornia.augmentation.utils.helpers._adapted_uniform` is broken for `len(shape) > 1` and `same_on_batch=True`
## To Reproduce
```python
from kornia.augmentation.utils.helpers import _adapted_uniform
shape = (1, 2)
_adapted_uniform(shape, 0.0, 1.0, same_on_batch=True)
```
```
RuntimeError: Number of dimensions of repeat dims can not be smaller than number of dimensions of tensor
```
</issue>
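The failure comes from `torch.Tensor.repeat`, which expects one repeat factor per tensor dimension: for a multi-dimensional sample, `repeat(shape[0])` passes fewer factors than the tensor has dimensions. A minimal sketch of the kind of change that resolves it, written as a standalone helper for clarity (it matches the patch recorded at the end of this entry):

```python
import torch
from torch.distributions import Uniform


def adapted_uniform_same_on_batch(shape, low, high):
    # Sketch of the corrected same_on_batch branch: draw a single batch
    # element, then tile it along the batch axis only, passing an explicit
    # factor of 1 for every remaining dimension so repeat() sees one factor
    # per tensor dimension.
    dist = Uniform(torch.tensor(low, dtype=torch.float32),
                   torch.tensor(high, dtype=torch.float32))
    sample = dist.rsample((1, *shape[1:]))
    return sample.repeat(shape[0], *[1] * (len(shape) - 1))
```

With that change, `_adapted_uniform((1, 2), 0.0, 1.0, same_on_batch=True)` returns a `(1, 2)` tensor instead of raising, and one-dimensional shapes keep their previous behaviour.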
<code>
[start of kornia/augmentation/utils/helpers.py]
1 from typing import Tuple, Union, List, cast, Optional
2
3 import torch
4 from torch.distributions import Uniform, Beta
5
6
7 def _infer_batch_shape(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:
8 r"""Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)
9 """
10 if isinstance(input, tuple):
11 tensor = _transform_input(input[0])
12 else:
13 tensor = _transform_input(input)
14 return tensor.shape
15
16
17 def _infer_batch_shape3d(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:
18 r"""Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)
19 """
20 if isinstance(input, tuple):
21 tensor = _transform_input3d(input[0])
22 else:
23 tensor = _transform_input3d(input)
24 return tensor.shape
25
26
27 def _transform_input(input: torch.Tensor) -> torch.Tensor:
28 r"""Reshape an input tensor to be (*, C, H, W). Accept either (H, W), (C, H, W) or (*, C, H, W).
29 Args:
30 input: torch.Tensor
31
32 Returns:
33 torch.Tensor
34 """
35 if not torch.is_tensor(input):
36 raise TypeError(f"Input type is not a torch.Tensor. Got {type(input)}")
37
38 if len(input.shape) not in [2, 3, 4]:
39 raise ValueError(
40 f"Input size must have a shape of either (H, W), (C, H, W) or (*, C, H, W). Got {input.shape}")
41
42 if len(input.shape) == 2:
43 input = input.unsqueeze(0)
44
45 if len(input.shape) == 3:
46 input = input.unsqueeze(0)
47
48 return input
49
50
51 def _transform_input3d(input: torch.Tensor) -> torch.Tensor:
52 r"""Reshape an input tensor to be (*, C, D, H, W). Accept either (D, H, W), (C, D, H, W) or (*, C, D, H, W).
53 Args:
54 input: torch.Tensor
55
56 Returns:
57 torch.Tensor
58 """
59 if not torch.is_tensor(input):
60 raise TypeError(f"Input type is not a torch.Tensor. Got {type(input)}")
61
62 if len(input.shape) not in [3, 4, 5]:
63 raise ValueError(
64 f"Input size must have a shape of either (D, H, W), (C, D, H, W) or (*, C, D, H, W). Got {input.shape}")
65
66 if len(input.shape) == 3:
67 input = input.unsqueeze(0)
68
69 if len(input.shape) == 4:
70 input = input.unsqueeze(0)
71
72 return input
73
74
75 def _validate_input_dtype(input: torch.Tensor, accepted_dtypes: List) -> None:
76 r"""Check if the dtype of the input tensor is in the range of accepted_dtypes
77 Args:
78 input: torch.Tensor
79 accepted_dtypes: List. e.g. [torch.float32, torch.float64]
80 """
81 if input.dtype not in accepted_dtypes:
82 raise TypeError(f"Expected input of {accepted_dtypes}. Got {input.dtype}")
83
84
85 def _validate_shape(shape: Union[Tuple, torch.Size], required_shapes: List[str] = ["BCHW"]) -> None:
86 r"""Check if the dtype of the input tensor is in the range of accepted_dtypes
87 Args:
88 input: torch.Tensor
89 required_shapes: List. e.g. ["BCHW", "BCDHW"]
90 """
91 passed = False
92 for required_shape in required_shapes:
93 if len(shape) == len(required_shape):
94 passed = True
95 break
96 if not passed:
97 raise TypeError(f"Expected input shape in {required_shape}. Got {shape}.")
98
99
100 def _validate_input_shape(input: torch.Tensor, channel_index: int, number: int) -> bool:
101 r"""Validate if an input has the right shape. e.g. to check if an input is channel first.
102 If channel first, the second channel of an RGB input shall be fixed to 3. To verify using:
103 _validate_input_shape(input, 1, 3)
104 Args:
105 input: torch.Tensor
106 channel_index: int
107 number: int
108 Returns:
109 bool
110 """
111 return input.shape[channel_index] == number
112
113
114 def _adapted_uniform(
115 shape: Union[Tuple, torch.Size],
116 low: Union[float, int, torch.Tensor],
117 high: Union[float, int, torch.Tensor],
118 same_on_batch=False
119 ) -> torch.Tensor:
120 r""" The uniform sampling function that accepts 'same_on_batch'.
121 If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).
122 By default, same_on_batch is set to False.
123 """
124 if not isinstance(low, torch.Tensor):
125 low = torch.tensor(low, dtype=torch.float32)
126 if not isinstance(high, torch.Tensor):
127 high = torch.tensor(high, dtype=torch.float32)
128 dist = Uniform(low, high)
129 if same_on_batch:
130 return dist.rsample((1, *shape[1:])).repeat(shape[0])
131 else:
132 return dist.rsample(shape)
133
134
135 def _adapted_beta(
136 shape: Union[Tuple, torch.Size],
137 a: Union[float, int, torch.Tensor],
138 b: Union[float, int, torch.Tensor],
139 same_on_batch=False
140 ) -> torch.Tensor:
141 r""" The beta sampling function that accepts 'same_on_batch'.
142 If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).
143 By default, same_on_batch is set to False.
144 """
145 if not isinstance(a, torch.Tensor):
146 a = torch.tensor(a, dtype=torch.float32)
147 if not isinstance(b, torch.Tensor):
148 b = torch.tensor(b, dtype=torch.float32)
149 dist = Beta(a, b)
150 if same_on_batch:
151 return dist.rsample((1, *shape[1:])).repeat(shape[0])
152 else:
153 return dist.rsample(shape)
154
155
156 def _check_and_bound(factor: Union[torch.Tensor, float, Tuple[float, float], List[float]], name: str,
157 center: float = 0., bounds: Tuple[float, float] = (0, float('inf'))) -> torch.Tensor:
158 r"""Check inputs and compute the corresponding factor bounds
159 """
160 factor_bound: torch.Tensor
161 if not isinstance(factor, torch.Tensor):
162 factor = torch.tensor(factor, dtype=torch.float32)
163
164 if factor.dim() == 0:
165 _center = torch.tensor(center, dtype=torch.float32)
166
167 if factor < 0:
168 raise ValueError(f"If {name} is a single number number, it must be non negative. Got {factor.item()}")
169
170 factor_bound = torch.tensor([_center - factor, _center + factor], dtype=torch.float32)
171 # Should be something other than clamp
172 # Currently, single value factor will not out of scope as long as the user provided it.
173 factor_bound = torch.clamp(factor_bound, bounds[0], bounds[1])
174
175 elif factor.shape[0] == 2 and factor.dim() == 1:
176
177 if not bounds[0] <= factor[0] or not bounds[1] >= factor[1]:
178 raise ValueError(f"{name} out of bounds. Expected inside {bounds}, got {factor}.")
179
180 if not bounds[0] <= factor[0] <= factor[1] <= bounds[1]:
181 raise ValueError(f"{name}[0] should be smaller than {name}[1] got {factor}")
182
183 factor_bound = factor
184
185 else:
186
187 raise TypeError(
188 f"The {name} should be a float number or a tuple with length 2 whose values move between {bounds}.")
189
190 return factor_bound
191
192
193 def _shape_validation(param: torch.Tensor, shape: Union[tuple, list], name: str) -> None:
194 assert param.shape == torch.Size(shape), f"Invalid shape for {name}. Expected {shape}. Got {param.shape}"
195
[end of kornia/augmentation/utils/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kornia/augmentation/utils/helpers.py b/kornia/augmentation/utils/helpers.py
--- a/kornia/augmentation/utils/helpers.py
+++ b/kornia/augmentation/utils/helpers.py
@@ -127,7 +127,7 @@
high = torch.tensor(high, dtype=torch.float32)
dist = Uniform(low, high)
if same_on_batch:
- return dist.rsample((1, *shape[1:])).repeat(shape[0])
+ return dist.rsample((1, *shape[1:])).repeat(shape[0], *[1] * (len(shape) - 1))
else:
return dist.rsample(shape)
| {"golden_diff": "diff --git a/kornia/augmentation/utils/helpers.py b/kornia/augmentation/utils/helpers.py\n--- a/kornia/augmentation/utils/helpers.py\n+++ b/kornia/augmentation/utils/helpers.py\n@@ -127,7 +127,7 @@\n high = torch.tensor(high, dtype=torch.float32)\n dist = Uniform(low, high)\n if same_on_batch:\n- return dist.rsample((1, *shape[1:])).repeat(shape[0])\n+ return dist.rsample((1, *shape[1:])).repeat(shape[0], *[1] * (len(shape) - 1))\n else:\n return dist.rsample(shape)\n", "issue": "_adapted_uniform is broken for tensors > 1 dimension and same_on_batch=True\n## \ud83d\udc1b Bug\r\n\r\n`kornia.augmentation.utils.helpers._adapted_uniform` is broken for `len(shape) > 1` and `same_on_batch=True`\r\n\r\n## To Reproduce\r\n\r\n```python\r\nfrom kornia.augmentation.utils.helpers import _adapted_uniform\r\n\r\nshape = (1, 2)\r\n_adapted_uniform(shape, 0.0, 1.0, same_on_batch=True)\r\n```\r\n\r\n```\r\nRuntimeError: Number of dimensions of repeat dims can not be smaller than number of dimensions of tensor\r\n```\r\n\n", "before_files": [{"content": "from typing import Tuple, Union, List, cast, Optional\n\nimport torch\nfrom torch.distributions import Uniform, Beta\n\n\ndef _infer_batch_shape(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:\n r\"\"\"Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)\n \"\"\"\n if isinstance(input, tuple):\n tensor = _transform_input(input[0])\n else:\n tensor = _transform_input(input)\n return tensor.shape\n\n\ndef _infer_batch_shape3d(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:\n r\"\"\"Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)\n \"\"\"\n if isinstance(input, tuple):\n tensor = _transform_input3d(input[0])\n else:\n tensor = _transform_input3d(input)\n return tensor.shape\n\n\ndef _transform_input(input: torch.Tensor) -> torch.Tensor:\n r\"\"\"Reshape an input tensor to be (*, C, H, W). Accept either (H, W), (C, H, W) or (*, C, H, W).\n Args:\n input: torch.Tensor\n\n Returns:\n torch.Tensor\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(f\"Input type is not a torch.Tensor. Got {type(input)}\")\n\n if len(input.shape) not in [2, 3, 4]:\n raise ValueError(\n f\"Input size must have a shape of either (H, W), (C, H, W) or (*, C, H, W). Got {input.shape}\")\n\n if len(input.shape) == 2:\n input = input.unsqueeze(0)\n\n if len(input.shape) == 3:\n input = input.unsqueeze(0)\n\n return input\n\n\ndef _transform_input3d(input: torch.Tensor) -> torch.Tensor:\n r\"\"\"Reshape an input tensor to be (*, C, D, H, W). Accept either (D, H, W), (C, D, H, W) or (*, C, D, H, W).\n Args:\n input: torch.Tensor\n\n Returns:\n torch.Tensor\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(f\"Input type is not a torch.Tensor. Got {type(input)}\")\n\n if len(input.shape) not in [3, 4, 5]:\n raise ValueError(\n f\"Input size must have a shape of either (D, H, W), (C, D, H, W) or (*, C, D, H, W). Got {input.shape}\")\n\n if len(input.shape) == 3:\n input = input.unsqueeze(0)\n\n if len(input.shape) == 4:\n input = input.unsqueeze(0)\n\n return input\n\n\ndef _validate_input_dtype(input: torch.Tensor, accepted_dtypes: List) -> None:\n r\"\"\"Check if the dtype of the input tensor is in the range of accepted_dtypes\n Args:\n input: torch.Tensor\n accepted_dtypes: List. e.g. [torch.float32, torch.float64]\n \"\"\"\n if input.dtype not in accepted_dtypes:\n raise TypeError(f\"Expected input of {accepted_dtypes}. 
Got {input.dtype}\")\n\n\ndef _validate_shape(shape: Union[Tuple, torch.Size], required_shapes: List[str] = [\"BCHW\"]) -> None:\n r\"\"\"Check if the dtype of the input tensor is in the range of accepted_dtypes\n Args:\n input: torch.Tensor\n required_shapes: List. e.g. [\"BCHW\", \"BCDHW\"]\n \"\"\"\n passed = False\n for required_shape in required_shapes:\n if len(shape) == len(required_shape):\n passed = True\n break\n if not passed:\n raise TypeError(f\"Expected input shape in {required_shape}. Got {shape}.\")\n\n\ndef _validate_input_shape(input: torch.Tensor, channel_index: int, number: int) -> bool:\n r\"\"\"Validate if an input has the right shape. e.g. to check if an input is channel first.\n If channel first, the second channel of an RGB input shall be fixed to 3. To verify using:\n _validate_input_shape(input, 1, 3)\n Args:\n input: torch.Tensor\n channel_index: int\n number: int\n Returns:\n bool\n \"\"\"\n return input.shape[channel_index] == number\n\n\ndef _adapted_uniform(\n shape: Union[Tuple, torch.Size],\n low: Union[float, int, torch.Tensor],\n high: Union[float, int, torch.Tensor],\n same_on_batch=False\n) -> torch.Tensor:\n r\"\"\" The uniform sampling function that accepts 'same_on_batch'.\n If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).\n By default, same_on_batch is set to False.\n \"\"\"\n if not isinstance(low, torch.Tensor):\n low = torch.tensor(low, dtype=torch.float32)\n if not isinstance(high, torch.Tensor):\n high = torch.tensor(high, dtype=torch.float32)\n dist = Uniform(low, high)\n if same_on_batch:\n return dist.rsample((1, *shape[1:])).repeat(shape[0])\n else:\n return dist.rsample(shape)\n\n\ndef _adapted_beta(\n shape: Union[Tuple, torch.Size],\n a: Union[float, int, torch.Tensor],\n b: Union[float, int, torch.Tensor],\n same_on_batch=False\n) -> torch.Tensor:\n r\"\"\" The beta sampling function that accepts 'same_on_batch'.\n If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).\n By default, same_on_batch is set to False.\n \"\"\"\n if not isinstance(a, torch.Tensor):\n a = torch.tensor(a, dtype=torch.float32)\n if not isinstance(b, torch.Tensor):\n b = torch.tensor(b, dtype=torch.float32)\n dist = Beta(a, b)\n if same_on_batch:\n return dist.rsample((1, *shape[1:])).repeat(shape[0])\n else:\n return dist.rsample(shape)\n\n\ndef _check_and_bound(factor: Union[torch.Tensor, float, Tuple[float, float], List[float]], name: str,\n center: float = 0., bounds: Tuple[float, float] = (0, float('inf'))) -> torch.Tensor:\n r\"\"\"Check inputs and compute the corresponding factor bounds\n \"\"\"\n factor_bound: torch.Tensor\n if not isinstance(factor, torch.Tensor):\n factor = torch.tensor(factor, dtype=torch.float32)\n\n if factor.dim() == 0:\n _center = torch.tensor(center, dtype=torch.float32)\n\n if factor < 0:\n raise ValueError(f\"If {name} is a single number number, it must be non negative. Got {factor.item()}\")\n\n factor_bound = torch.tensor([_center - factor, _center + factor], dtype=torch.float32)\n # Should be something other than clamp\n # Currently, single value factor will not out of scope as long as the user provided it.\n factor_bound = torch.clamp(factor_bound, bounds[0], bounds[1])\n\n elif factor.shape[0] == 2 and factor.dim() == 1:\n\n if not bounds[0] <= factor[0] or not bounds[1] >= factor[1]:\n raise ValueError(f\"{name} out of bounds. 
Expected inside {bounds}, got {factor}.\")\n\n if not bounds[0] <= factor[0] <= factor[1] <= bounds[1]:\n raise ValueError(f\"{name}[0] should be smaller than {name}[1] got {factor}\")\n\n factor_bound = factor\n\n else:\n\n raise TypeError(\n f\"The {name} should be a float number or a tuple with length 2 whose values move between {bounds}.\")\n\n return factor_bound\n\n\ndef _shape_validation(param: torch.Tensor, shape: Union[tuple, list], name: str) -> None:\n assert param.shape == torch.Size(shape), f\"Invalid shape for {name}. Expected {shape}. Got {param.shape}\"\n", "path": "kornia/augmentation/utils/helpers.py"}]} | 2,951 | 151 |
gh_patches_debug_36716 | rasdani/github-patches | git_diff | e-valuation__EvaP-2216 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Export Reward Point Summary
A new button `Export reward points` should be added to the right of the text showing the number of available reward points on the staff reward points redemption events page. Clicking the button should download a CSV file containing a summary of the reward points.
This file should contain the two columns `Email` and `Points` for each user, listing the number of points currently available for that user (grantings minus redemptions) next to the user's email address. A line should only be added for users where this number of available points is not zero.
</issue>
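To make the requirement concrete, here is a minimal sketch of the kind of view that produces such a file. It follows the shape of the patch recorded at the end of this entry and reuses helpers already present in `evap/rewards/views.py`; the related names `reward_point_grantings` / `reward_point_redemptions` and the `AttachmentResponse` signature are taken from that patch and the existing code, while the plain column headers follow the wording of this issue:

```python
import csv

from django.db.models import Sum

from evap.evaluation.auth import manager_required
from evap.evaluation.models import UserProfile
from evap.evaluation.tools import AttachmentResponse


@manager_required
def reward_points_export(request):
    response = AttachmentResponse("RewardPoints.csv", content_type="text/csv")
    writer = csv.writer(response, delimiter=";", lineterminator="\n")
    writer.writerow(["Email", "Points"])

    # Available points per user: grantings minus redemptions; users whose
    # balance is zero are skipped entirely.
    profiles = UserProfile.objects.annotate(
        points=Sum("reward_point_grantings__value", default=0)
        - Sum("reward_point_redemptions__value", default=0)
    ).exclude(points=0)

    for profile in profiles:
        writer.writerow([profile.email, profile.points])
    return response
```

A matching route in `evap/rewards/urls.py` and a button on the redemption events page would then expose the download.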
<code>
[start of evap/rewards/views.py]
1 from datetime import datetime
2
3 from django.contrib import messages
4 from django.contrib.messages.views import SuccessMessageMixin
5 from django.core.exceptions import BadRequest, SuspiciousOperation
6 from django.db.models import Sum
7 from django.http import HttpResponse
8 from django.shortcuts import get_object_or_404, redirect, render
9 from django.urls import reverse_lazy
10 from django.utils.translation import get_language
11 from django.utils.translation import gettext as _
12 from django.utils.translation import gettext_lazy
13 from django.views.decorators.http import require_POST
14 from django.views.generic import CreateView, UpdateView
15
16 from evap.evaluation.auth import manager_required, reward_user_required
17 from evap.evaluation.models import Semester
18 from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x
19 from evap.rewards.exporters import RewardsExporter
20 from evap.rewards.forms import RewardPointRedemptionEventForm
21 from evap.rewards.models import (
22 NoPointsSelectedError,
23 NotEnoughPointsError,
24 OutdatedRedemptionDataError,
25 RedemptionEventExpiredError,
26 RewardPointGranting,
27 RewardPointRedemption,
28 RewardPointRedemptionEvent,
29 SemesterActivation,
30 )
31 from evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions
32
33
34 def redeem_reward_points(request):
35 redemptions = {}
36 try:
37 for key, value in request.POST.items():
38 if key.startswith("points-"):
39 event_id = int(key.rpartition("-")[2])
40 redemptions[event_id] = int(value)
41 previous_redeemed_points = int(request.POST["previous_redeemed_points"])
42 except (ValueError, KeyError, TypeError) as e:
43 raise BadRequest from e
44
45 try:
46 save_redemptions(request, redemptions, previous_redeemed_points)
47 messages.success(request, _("You successfully redeemed your points."))
48 except (
49 NoPointsSelectedError,
50 NotEnoughPointsError,
51 RedemptionEventExpiredError,
52 OutdatedRedemptionDataError,
53 ) as error:
54 status_code = 400
55 if isinstance(error, NoPointsSelectedError):
56 error_string = _("You cannot redeem 0 points.")
57 elif isinstance(error, NotEnoughPointsError):
58 error_string = _("You don't have enough reward points.")
59 elif isinstance(error, RedemptionEventExpiredError):
60 error_string = _("Sorry, the deadline for this event expired already.")
61 elif isinstance(error, OutdatedRedemptionDataError):
62 status_code = 409
63 error_string = _(
64 "It appears that your browser sent multiple redemption requests. You can see all successful redemptions below."
65 )
66 messages.error(request, error_string)
67 return status_code
68 return 200
69
70
71 @reward_user_required
72 def index(request):
73 status = 200
74 if request.method == "POST":
75 status = redeem_reward_points(request)
76 total_points_available = reward_points_of_user(request.user)
77 reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)
78 reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)
79 events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
80
81 granted_point_actions = [
82 (granting.granting_time, _("Reward for") + " " + granting.semester.name, granting.value, "")
83 for granting in reward_point_grantings
84 ]
85 redemption_point_actions = [
86 (redemption.redemption_time, redemption.event.name, "", redemption.value)
87 for redemption in reward_point_redemptions
88 ]
89
90 reward_point_actions = sorted(
91 granted_point_actions + redemption_point_actions, key=lambda action: action[0], reverse=True
92 )
93
94 template_data = {
95 "reward_point_actions": reward_point_actions,
96 "total_points_available": total_points_available,
97 "total_points_spent": sum(redemption.value for redemption in reward_point_redemptions),
98 "events": events,
99 }
100 return render(request, "rewards_index.html", template_data, status=status)
101
102
103 @manager_required
104 def reward_point_redemption_events(request):
105 upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
106 past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by("-date")
107 total_points_granted = RewardPointGranting.objects.aggregate(Sum("value"))["value__sum"] or 0
108 total_points_redeemed = RewardPointRedemption.objects.aggregate(Sum("value"))["value__sum"] or 0
109 total_points_available = total_points_granted - total_points_redeemed
110 template_data = {
111 "upcoming_events": upcoming_events,
112 "past_events": past_events,
113 "total_points_available": total_points_available,
114 }
115 return render(request, "rewards_reward_point_redemption_events.html", template_data)
116
117
118 @manager_required
119 class RewardPointRedemptionEventCreateView(SuccessMessageMixin, CreateView):
120 model = RewardPointRedemptionEvent
121 form_class = RewardPointRedemptionEventForm
122 template_name = "rewards_reward_point_redemption_event_form.html"
123 success_url = reverse_lazy("rewards:reward_point_redemption_events")
124 success_message = gettext_lazy("Successfully created event.")
125
126
127 @manager_required
128 class RewardPointRedemptionEventEditView(SuccessMessageMixin, UpdateView):
129 model = RewardPointRedemptionEvent
130 form_class = RewardPointRedemptionEventForm
131 template_name = "rewards_reward_point_redemption_event_form.html"
132 success_url = reverse_lazy("rewards:reward_point_redemption_events")
133 success_message = gettext_lazy("Successfully updated event.")
134 pk_url_kwarg = "event_id"
135 context_object_name = "event"
136
137
138 @require_POST
139 @manager_required
140 def reward_point_redemption_event_delete(request):
141 event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, "event_id")
142
143 if not event.can_delete:
144 raise SuspiciousOperation("Deleting redemption event not allowed")
145 event.delete()
146 return HttpResponse() # 200 OK
147
148
149 @manager_required
150 def reward_point_redemption_event_export(request, event_id):
151 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
152
153 filename = _("RewardPoints") + f"-{event.date}-{event.name}-{get_language()}.xls"
154 response = AttachmentResponse(filename, content_type="application/vnd.ms-excel")
155
156 RewardsExporter().export(response, event.redemptions_by_user())
157
158 return response
159
160
161 @require_POST
162 @manager_required
163 def semester_activation_edit(request, semester_id):
164 semester = get_object_or_404(Semester, id=semester_id)
165 status = request.POST.get("activation_status")
166 if status == "on":
167 active = True
168 elif status == "off":
169 active = False
170 else:
171 raise SuspiciousOperation("Invalid activation keyword")
172 SemesterActivation.objects.update_or_create(semester=semester, defaults={"is_active": active})
173 if active:
174 grant_eligible_reward_points_for_semester(request, semester)
175 return redirect("staff:semester_view", semester_id)
176
[end of evap/rewards/views.py]
[start of evap/rewards/urls.py]
1 from django.urls import path
2
3 from evap.rewards import views
4
5 app_name = "rewards"
6
7 urlpatterns = [
8 path("", views.index, name="index"),
9
10 path("reward_point_redemption_events/", views.reward_point_redemption_events, name="reward_point_redemption_events"),
11 path("reward_point_redemption_event/create", views.RewardPointRedemptionEventCreateView.as_view(), name="reward_point_redemption_event_create"),
12 path("reward_point_redemption_event/<int:event_id>/edit", views.RewardPointRedemptionEventEditView.as_view(), name="reward_point_redemption_event_edit"),
13 path("reward_point_redemption_event/<int:event_id>/export", views.reward_point_redemption_event_export, name="reward_point_redemption_event_export"),
14 path("reward_point_redemption_event/delete", views.reward_point_redemption_event_delete, name="reward_point_redemption_event_delete"),
15
16 path("semester_activation/<int:semester_id>/edit", views.semester_activation_edit, name="semester_activation_edit"),
17 ]
18
[end of evap/rewards/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/rewards/urls.py b/evap/rewards/urls.py
--- a/evap/rewards/urls.py
+++ b/evap/rewards/urls.py
@@ -7,6 +7,7 @@
urlpatterns = [
path("", views.index, name="index"),
+ path("reward_points_export", views.reward_points_export, name="reward_points_export"),
path("reward_point_redemption_events/", views.reward_point_redemption_events, name="reward_point_redemption_events"),
path("reward_point_redemption_event/create", views.RewardPointRedemptionEventCreateView.as_view(), name="reward_point_redemption_event_create"),
path("reward_point_redemption_event/<int:event_id>/edit", views.RewardPointRedemptionEventEditView.as_view(), name="reward_point_redemption_event_edit"),
diff --git a/evap/rewards/views.py b/evap/rewards/views.py
--- a/evap/rewards/views.py
+++ b/evap/rewards/views.py
@@ -1,3 +1,4 @@
+import csv
from datetime import datetime
from django.contrib import messages
@@ -14,7 +15,7 @@
from django.views.generic import CreateView, UpdateView
from evap.evaluation.auth import manager_required, reward_user_required
-from evap.evaluation.models import Semester
+from evap.evaluation.models import Semester, UserProfile
from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x
from evap.rewards.exporters import RewardsExporter
from evap.rewards.forms import RewardPointRedemptionEventForm
@@ -158,6 +159,32 @@
return response
+@manager_required
+def reward_points_export(request):
+ filename = _("RewardPoints") + f"-{get_language()}.csv"
+ response = AttachmentResponse(filename, content_type="text/csv")
+
+ writer = csv.writer(response, delimiter=";", lineterminator="\n")
+ writer.writerow([_("Email address"), _("Number of points")])
+ profiles_with_points = (
+ UserProfile.objects.annotate(
+ points=Sum("reward_point_grantings__value", default=0) - Sum("reward_point_redemptions__value", default=0)
+ )
+ .filter(points__gt=0)
+ .order_by("-points")
+ )
+
+ for profile in profiles_with_points.all():
+ writer.writerow(
+ [
+ profile.email,
+ profile.points,
+ ]
+ )
+
+ return response
+
+
@require_POST
@manager_required
def semester_activation_edit(request, semester_id):
| {"golden_diff": "diff --git a/evap/rewards/urls.py b/evap/rewards/urls.py\n--- a/evap/rewards/urls.py\n+++ b/evap/rewards/urls.py\n@@ -7,6 +7,7 @@\n urlpatterns = [\n path(\"\", views.index, name=\"index\"),\n \n+ path(\"reward_points_export\", views.reward_points_export, name=\"reward_points_export\"),\n path(\"reward_point_redemption_events/\", views.reward_point_redemption_events, name=\"reward_point_redemption_events\"),\n path(\"reward_point_redemption_event/create\", views.RewardPointRedemptionEventCreateView.as_view(), name=\"reward_point_redemption_event_create\"),\n path(\"reward_point_redemption_event/<int:event_id>/edit\", views.RewardPointRedemptionEventEditView.as_view(), name=\"reward_point_redemption_event_edit\"),\ndiff --git a/evap/rewards/views.py b/evap/rewards/views.py\n--- a/evap/rewards/views.py\n+++ b/evap/rewards/views.py\n@@ -1,3 +1,4 @@\n+import csv\n from datetime import datetime\n \n from django.contrib import messages\n@@ -14,7 +15,7 @@\n from django.views.generic import CreateView, UpdateView\n \n from evap.evaluation.auth import manager_required, reward_user_required\n-from evap.evaluation.models import Semester\n+from evap.evaluation.models import Semester, UserProfile\n from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\n from evap.rewards.exporters import RewardsExporter\n from evap.rewards.forms import RewardPointRedemptionEventForm\n@@ -158,6 +159,32 @@\n return response\n \n \n+@manager_required\n+def reward_points_export(request):\n+ filename = _(\"RewardPoints\") + f\"-{get_language()}.csv\"\n+ response = AttachmentResponse(filename, content_type=\"text/csv\")\n+\n+ writer = csv.writer(response, delimiter=\";\", lineterminator=\"\\n\")\n+ writer.writerow([_(\"Email address\"), _(\"Number of points\")])\n+ profiles_with_points = (\n+ UserProfile.objects.annotate(\n+ points=Sum(\"reward_point_grantings__value\", default=0) - Sum(\"reward_point_redemptions__value\", default=0)\n+ )\n+ .filter(points__gt=0)\n+ .order_by(\"-points\")\n+ )\n+\n+ for profile in profiles_with_points.all():\n+ writer.writerow(\n+ [\n+ profile.email,\n+ profile.points,\n+ ]\n+ )\n+\n+ return response\n+\n+\n @require_POST\n @manager_required\n def semester_activation_edit(request, semester_id):\n", "issue": "Export Reward Point Summary\nA new button `Export reward points` should be added to the right of the text showing the number of available reward points on the staff reward points redemption events page. Clicking the button should download a CSV file containing a summary of the reward points.\r\n\r\nThis file should contain the two columns `Email` and `Points` for each user, listing the number of points currently available for that user (grantings minus redemptions) next to the user's email address. 
A line should only be added for users where this number of available points is not zero.\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.exceptions import BadRequest, SuspiciousOperation\nfrom django.db.models import Sum\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.urls import reverse_lazy\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\nfrom django.views.decorators.http import require_POST\nfrom django.views.generic import CreateView, UpdateView\n\nfrom evap.evaluation.auth import manager_required, reward_user_required\nfrom evap.evaluation.models import Semester\nfrom evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\nfrom evap.rewards.exporters import RewardsExporter\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.models import (\n NoPointsSelectedError,\n NotEnoughPointsError,\n OutdatedRedemptionDataError,\n RedemptionEventExpiredError,\n RewardPointGranting,\n RewardPointRedemption,\n RewardPointRedemptionEvent,\n SemesterActivation,\n)\nfrom evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions\n\n\ndef redeem_reward_points(request):\n redemptions = {}\n try:\n for key, value in request.POST.items():\n if key.startswith(\"points-\"):\n event_id = int(key.rpartition(\"-\")[2])\n redemptions[event_id] = int(value)\n previous_redeemed_points = int(request.POST[\"previous_redeemed_points\"])\n except (ValueError, KeyError, TypeError) as e:\n raise BadRequest from e\n\n try:\n save_redemptions(request, redemptions, previous_redeemed_points)\n messages.success(request, _(\"You successfully redeemed your points.\"))\n except (\n NoPointsSelectedError,\n NotEnoughPointsError,\n RedemptionEventExpiredError,\n OutdatedRedemptionDataError,\n ) as error:\n status_code = 400\n if isinstance(error, NoPointsSelectedError):\n error_string = _(\"You cannot redeem 0 points.\")\n elif isinstance(error, NotEnoughPointsError):\n error_string = _(\"You don't have enough reward points.\")\n elif isinstance(error, RedemptionEventExpiredError):\n error_string = _(\"Sorry, the deadline for this event expired already.\")\n elif isinstance(error, OutdatedRedemptionDataError):\n status_code = 409\n error_string = _(\n \"It appears that your browser sent multiple redemption requests. 
You can see all successful redemptions below.\"\n )\n messages.error(request, error_string)\n return status_code\n return 200\n\n\n@reward_user_required\ndef index(request):\n status = 200\n if request.method == \"POST\":\n status = redeem_reward_points(request)\n total_points_available = reward_points_of_user(request.user)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)\n reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n\n granted_point_actions = [\n (granting.granting_time, _(\"Reward for\") + \" \" + granting.semester.name, granting.value, \"\")\n for granting in reward_point_grantings\n ]\n redemption_point_actions = [\n (redemption.redemption_time, redemption.event.name, \"\", redemption.value)\n for redemption in reward_point_redemptions\n ]\n\n reward_point_actions = sorted(\n granted_point_actions + redemption_point_actions, key=lambda action: action[0], reverse=True\n )\n\n template_data = {\n \"reward_point_actions\": reward_point_actions,\n \"total_points_available\": total_points_available,\n \"total_points_spent\": sum(redemption.value for redemption in reward_point_redemptions),\n \"events\": events,\n }\n return render(request, \"rewards_index.html\", template_data, status=status)\n\n\n@manager_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by(\"-date\")\n total_points_granted = RewardPointGranting.objects.aggregate(Sum(\"value\"))[\"value__sum\"] or 0\n total_points_redeemed = RewardPointRedemption.objects.aggregate(Sum(\"value\"))[\"value__sum\"] or 0\n total_points_available = total_points_granted - total_points_redeemed\n template_data = {\n \"upcoming_events\": upcoming_events,\n \"past_events\": past_events,\n \"total_points_available\": total_points_available,\n }\n return render(request, \"rewards_reward_point_redemption_events.html\", template_data)\n\n\n@manager_required\nclass RewardPointRedemptionEventCreateView(SuccessMessageMixin, CreateView):\n model = RewardPointRedemptionEvent\n form_class = RewardPointRedemptionEventForm\n template_name = \"rewards_reward_point_redemption_event_form.html\"\n success_url = reverse_lazy(\"rewards:reward_point_redemption_events\")\n success_message = gettext_lazy(\"Successfully created event.\")\n\n\n@manager_required\nclass RewardPointRedemptionEventEditView(SuccessMessageMixin, UpdateView):\n model = RewardPointRedemptionEvent\n form_class = RewardPointRedemptionEventForm\n template_name = \"rewards_reward_point_redemption_event_form.html\"\n success_url = reverse_lazy(\"rewards:reward_point_redemption_events\")\n success_message = gettext_lazy(\"Successfully updated event.\")\n pk_url_kwarg = \"event_id\"\n context_object_name = \"event\"\n\n\n@require_POST\n@manager_required\ndef reward_point_redemption_event_delete(request):\n event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, \"event_id\")\n\n if not event.can_delete:\n raise SuspiciousOperation(\"Deleting redemption event not allowed\")\n event.delete()\n return HttpResponse() # 200 OK\n\n\n@manager_required\ndef reward_point_redemption_event_export(request, event_id):\n event = 
get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\") + f\"-{event.date}-{event.name}-{get_language()}.xls\"\n response = AttachmentResponse(filename, content_type=\"application/vnd.ms-excel\")\n\n RewardsExporter().export(response, event.redemptions_by_user())\n\n return response\n\n\n@require_POST\n@manager_required\ndef semester_activation_edit(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n status = request.POST.get(\"activation_status\")\n if status == \"on\":\n active = True\n elif status == \"off\":\n active = False\n else:\n raise SuspiciousOperation(\"Invalid activation keyword\")\n SemesterActivation.objects.update_or_create(semester=semester, defaults={\"is_active\": active})\n if active:\n grant_eligible_reward_points_for_semester(request, semester)\n return redirect(\"staff:semester_view\", semester_id)\n", "path": "evap/rewards/views.py"}, {"content": "from django.urls import path\n\nfrom evap.rewards import views\n\napp_name = \"rewards\"\n\nurlpatterns = [\n path(\"\", views.index, name=\"index\"),\n\n path(\"reward_point_redemption_events/\", views.reward_point_redemption_events, name=\"reward_point_redemption_events\"),\n path(\"reward_point_redemption_event/create\", views.RewardPointRedemptionEventCreateView.as_view(), name=\"reward_point_redemption_event_create\"),\n path(\"reward_point_redemption_event/<int:event_id>/edit\", views.RewardPointRedemptionEventEditView.as_view(), name=\"reward_point_redemption_event_edit\"),\n path(\"reward_point_redemption_event/<int:event_id>/export\", views.reward_point_redemption_event_export, name=\"reward_point_redemption_event_export\"),\n path(\"reward_point_redemption_event/delete\", views.reward_point_redemption_event_delete, name=\"reward_point_redemption_event_delete\"),\n\n path(\"semester_activation/<int:semester_id>/edit\", views.semester_activation_edit, name=\"semester_activation_edit\"),\n]\n", "path": "evap/rewards/urls.py"}]} | 2,897 | 576 |
gh_patches_debug_17053 | rasdani/github-patches | git_diff | feast-dev__feast-2255 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Optimize `_populate_result_rows_from_feature_view`
Signed-off-by: Judah Rand <[email protected]>
<!-- Thanks for sending a pull request! Here are some tips for you:
1. Ensure that your code follows our code conventions: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#code-style--linting
2. Run unit tests and ensure that they are passing: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#unit-tests
3. If your change introduces any API changes, make sure to update the integration tests scripts here: https://github.com/feast-dev/feast/tree/master/sdk/python/tests or https://github.com/feast-dev/feast/tree/master/sdk/go
4. Make sure documentation is updated for your PR!
5. Make sure you have signed the CLA https://cla.developers.google.com/clas
-->
**What this PR does / why we need it**:
This commit optimizes the fetching of features by fetching
the features for each unique Entity only once and then expanding the result
to the shape of the input EntityKeys.
Previously, if an Entity occurred twice, the features would be fetched
from the OnlineStore twice. This can be hugely inefficient.
The only assumption that this makes is that the OnlineStore will return
the feature data in the same order as the EntityKeyProtos are provided.
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
-->
Fixes #
**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
For more information about release notes, see kubernetes' guide here:
http://git.k8s.io/community/contributors/guide/release-notes.md
-->
```release-note
Speed up `get_online_features` when duplicate Entities are present.
```
</issue>
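The dedup-and-expand idea in the description reads naturally as: collect the entity key protos, hit the online store once per distinct key, then index back into those results in the original order. A small illustrative sketch, not taken from the Feast codebase or from the diff recorded for this entry — the helper name is hypothetical, and proto serialization stands in for whatever keying the real implementation uses:

```python
def read_rows_for_unique_entities(online_store_read, entity_key_protos):
    # Hypothetical helper: fetch each distinct entity key once, then expand
    # the results back to one row per input key, preserving input order.
    seen = {}
    unique_protos = []
    for proto in entity_key_protos:
        key = proto.SerializeToString()
        if key not in seen:
            seen[key] = len(unique_protos)
            unique_protos.append(proto)

    unique_rows = online_store_read(unique_protos)  # one read per distinct entity

    # Relies on the store returning rows in the same order as the keys it was
    # given -- the assumption the PR description calls out explicitly.
    return [unique_rows[seen[p.SerializeToString()]] for p in entity_key_protos]
```

If many requests repeat the same entities, the online store sees only the distinct keys, while callers still receive one result row per input key.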
<code>
[start of sdk/python/setup.py]
1 # Copyright 2019 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import glob
15 import os
16 import re
17 import shutil
18 import subprocess
19 import pathlib
20
21 from distutils.cmd import Command
22 from setuptools import find_packages
23
24 try:
25 from setuptools import setup
26 from setuptools.command.install import install
27 from setuptools.command.develop import develop
28 from setuptools.command.egg_info import egg_info
29 from setuptools.command.sdist import sdist
30 from setuptools.command.build_py import build_py
31 except ImportError:
32 from distutils.core import setup
33 from distutils.command.install import install
34 from distutils.command.build_py import build_py
35
36 NAME = "feast"
37 DESCRIPTION = "Python SDK for Feast"
38 URL = "https://github.com/feast-dev/feast"
39 AUTHOR = "Feast"
40 REQUIRES_PYTHON = ">=3.7.0"
41
42 REQUIRED = [
43 "Click==8.*",
44 "colorama>=0.3.9",
45 "dill==0.3.*",
46 "fastavro>=1.1.0",
47 "google-api-core>=1.23.0",
48 "googleapis-common-protos==1.52.*",
49 "grpcio>=1.34.0",
50 "grpcio-reflection>=1.34.0",
51 "Jinja2>=2.0.0",
52 "jsonschema",
53 "mmh3",
54 "pandas>=1.0.0",
55 "pandavro==1.5.*",
56 "protobuf>=3.10",
57 "proto-plus<1.19.7",
58 "pyarrow>=4.0.0",
59 "pydantic>=1.0.0",
60 "PyYAML>=5.4.*",
61 "tabulate==0.8.*",
62 "tenacity>=7.*",
63 "toml==0.10.*",
64 "tqdm==4.*",
65 "fastapi>=0.68.0",
66 "uvicorn[standard]>=0.14.0",
67 "proto-plus<1.19.7",
68 "tensorflow-metadata>=1.0.0,<2.0.0",
69 ]
70
71 GCP_REQUIRED = [
72 "google-cloud-bigquery>=2.28.1",
73 "google-cloud-bigquery-storage >= 2.0.0",
74 "google-cloud-datastore>=2.1.*",
75 "google-cloud-storage>=1.34.*,<1.41",
76 "google-cloud-core>=1.4.0,<2.0.0",
77 ]
78
79 REDIS_REQUIRED = [
80 "redis>=4.1.0",
81 "hiredis>=2.0.0",
82 ]
83
84 AWS_REQUIRED = [
85 "boto3>=1.17.0",
86 "docker>=5.0.2",
87 ]
88
89 CI_REQUIRED = (
90 [
91 "cryptography==3.3.2",
92 "flake8",
93 "black==19.10b0",
94 "isort>=5",
95 "grpcio-tools==1.34.0",
96 "grpcio-testing==1.34.0",
97 "minio==7.1.0",
98 "mock==2.0.0",
99 "moto",
100 "mypy==0.931",
101 "mypy-protobuf==3.1.0",
102 "avro==1.10.0",
103 "gcsfs",
104 "urllib3>=1.25.4",
105 "pytest>=6.0.0",
106 "pytest-cov",
107 "pytest-xdist",
108 "pytest-benchmark>=3.4.1",
109 "pytest-lazy-fixture==0.6.3",
110 "pytest-timeout==1.4.2",
111 "pytest-ordering==0.6.*",
112 "pytest-mock==1.10.4",
113 "Sphinx!=4.0.0,<4.4.0",
114 "sphinx-rtd-theme",
115 "testcontainers==3.4.2",
116 "adlfs==0.5.9",
117 "firebase-admin==4.5.2",
118 "pre-commit",
119 "assertpy==1.1",
120 "pip-tools",
121 "types-protobuf",
122 "types-python-dateutil",
123 "types-pytz",
124 "types-PyYAML",
125 "types-redis",
126 "types-requests",
127 "types-setuptools",
128 "types-tabulate",
129 ]
130 + GCP_REQUIRED
131 + REDIS_REQUIRED
132 + AWS_REQUIRED
133 )
134
135 DEV_REQUIRED = ["mypy-protobuf>=1.*", "grpcio-testing==1.*"] + CI_REQUIRED
136
137 # Get git repo root directory
138 repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
139
140 # README file from Feast repo root directory
141 README_FILE = os.path.join(repo_root, "README.md")
142 with open(README_FILE, "r", encoding="utf8") as f:
143 LONG_DESCRIPTION = f.read()
144
145 # Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.
146 # Regex modified from default tag regex in:
147 # https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9
148 TAG_REGEX = re.compile(
149 r"^(?:[\/\w-]+)?(?P<version>[vV]?\d+(?:\.\d+){0,2}[^\+]*)(?:\+.*)?$"
150 )
151
152 # Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)
153 if shutil.which("git"):
154 use_scm_version = {"root": "../..", "relative_to": __file__, "tag_regex": TAG_REGEX}
155 else:
156 use_scm_version = None
157
158
159 class BuildProtoCommand(Command):
160 description = "Builds the proto files into python files."
161
162 def initialize_options(self):
163 self.protoc = ["python", "-m", "grpc_tools.protoc"] # find_executable("protoc")
164 self.proto_folder = os.path.join(repo_root, "protos")
165 self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')
166 self.sub_folders = ["core", "serving", "types", "storage"]
167
168 def finalize_options(self):
169 pass
170
171 def _generate_protos(self, path):
172 proto_files = glob.glob(os.path.join(self.proto_folder, path))
173
174 subprocess.check_call(self.protoc + [
175 '-I', self.proto_folder,
176 '--python_out', self.this_package,
177 '--grpc_python_out', self.this_package,
178 '--mypy_out', self.this_package] + proto_files)
179
180 def run(self):
181 for sub_folder in self.sub_folders:
182 self._generate_protos(f'feast/{sub_folder}/*.proto')
183
184 from pathlib import Path
185
186 for path in Path('feast/protos').rglob('*.py'):
187 for folder in self.sub_folders:
188 # Read in the file
189 with open(path, 'r') as file:
190 filedata = file.read()
191
192 # Replace the target string
193 filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')
194
195 # Write the file out again
196 with open(path, 'w') as file:
197 file.write(filedata)
198
199
200 class BuildCommand(build_py):
201 """Custom build command."""
202
203 def run(self):
204 self.run_command('build_proto')
205 build_py.run(self)
206
207
208 class DevelopCommand(develop):
209 """Custom develop command."""
210
211 def run(self):
212 self.run_command('build_proto')
213 develop.run(self)
214
215
216 setup(
217 name=NAME,
218 author=AUTHOR,
219 description=DESCRIPTION,
220 long_description=LONG_DESCRIPTION,
221 long_description_content_type="text/markdown",
222 python_requires=REQUIRES_PYTHON,
223 url=URL,
224 packages=find_packages(exclude=("tests",)),
225 install_requires=REQUIRED,
226 # https://stackoverflow.com/questions/28509965/setuptools-development-requirements
227 # Install dev requirements with: pip install -e .[dev]
228 extras_require={
229 "dev": DEV_REQUIRED,
230 "ci": CI_REQUIRED,
231 "gcp": GCP_REQUIRED,
232 "aws": AWS_REQUIRED,
233 "redis": REDIS_REQUIRED,
234 },
235 include_package_data=True,
236 license="Apache",
237 classifiers=[
238 # Trove classifiers
239 # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
240 "License :: OSI Approved :: Apache Software License",
241 "Programming Language :: Python",
242 "Programming Language :: Python :: 3",
243 "Programming Language :: Python :: 3.7",
244 ],
245 entry_points={"console_scripts": ["feast=feast.cli:cli"]},
246 use_scm_version=use_scm_version,
247 setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==1.*", "sphinx!=4.0.0"],
248 package_data={
249 "": [
250 "protos/feast/**/*.proto",
251 "protos/feast/third_party/grpc/health/v1/*.proto",
252 "feast/protos/feast/**/*.py",
253 ],
254 },
255 cmdclass={
256 "build_proto": BuildProtoCommand,
257 "build_py": BuildCommand,
258 "develop": DevelopCommand,
259 },
260 )
261
[end of sdk/python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -132,7 +132,7 @@
+ AWS_REQUIRED
)
-DEV_REQUIRED = ["mypy-protobuf>=1.*", "grpcio-testing==1.*"] + CI_REQUIRED
+DEV_REQUIRED = ["mypy-protobuf>=3.1.0", "grpcio-testing==1.*"] + CI_REQUIRED
# Get git repo root directory
repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
@@ -244,7 +244,7 @@
],
entry_points={"console_scripts": ["feast=feast.cli:cli"]},
use_scm_version=use_scm_version,
- setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==1.*", "sphinx!=4.0.0"],
+ setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==3.1.0", "sphinx!=4.0.0"],
package_data={
"": [
"protos/feast/**/*.proto",
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -132,7 +132,7 @@\n + AWS_REQUIRED\n )\n \n-DEV_REQUIRED = [\"mypy-protobuf>=1.*\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n+DEV_REQUIRED = [\"mypy-protobuf>=3.1.0\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n \n # Get git repo root directory\n repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n@@ -244,7 +244,7 @@\n ],\n entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n- setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==1.*\", \"sphinx!=4.0.0\"],\n+ setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==3.1.0\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n", "issue": "Optimize `_populate_result_rows_from_feature_view`\nSigned-off-by: Judah Rand <[email protected]>\r\n\r\n<!-- Thanks for sending a pull request! Here are some tips for you:\r\n\r\n1. Ensure that your code follows our code conventions: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#code-style--linting\r\n2. Run unit tests and ensure that they are passing: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#unit-tests\r\n3. If your change introduces any API changes, make sure to update the integration tests scripts here: https://github.com/feast-dev/feast/tree/master/sdk/python/tests or https://github.com/feast-dev/feast/tree/master/sdk/go\r\n4. Make sure documentation is updated for your PR!\r\n5. Make sure you have signed the CLA https://cla.developers.google.com/clas\r\n\r\n-->\r\n\r\n**What this PR does / why we need it**:\r\nThis commit optimizes the fetching of features by only fetching\r\nthe features for each unique Entity once and then expands the result\r\nto the shape of input EntityKeys.\r\n\r\nPreviously, if an Entity occurred twice the features would be fetched\r\nfrom the OnlineStore twice. This can be hugely inefficient.\r\n\r\nThe only assumption that this makes is that the OnlineStore will return \r\nthe feature data in the same order as the EntityKeyProtos are provided.\r\n\r\n**Which issue(s) this PR fixes**:\r\n<!--\r\n*Automatically closes linked issue when PR is merged.\r\nUsage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.\r\n-->\r\nFixes #\r\n\r\n**Does this PR introduce a user-facing change?**:\r\n<!--\r\nIf no, just write \"NONE\" in the release-note block below.\r\nIf yes, a release note is required:\r\nEnter your extended release note in the block below. 
If the PR requires additional action from users switching to the new release, include the string \"action required\".\r\n\r\nFor more information about release notes, see kubernetes' guide here:\r\nhttp://git.k8s.io/community/contributors/guide/release-notes.md\r\n-->\r\n```release-note\r\nSpeed up `get_online_features` when duplicate Entities are present.\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport glob\nimport os\nimport re\nimport shutil\nimport subprocess\nimport pathlib\n\nfrom distutils.cmd import Command\nfrom setuptools import find_packages\n\ntry:\n from setuptools import setup\n from setuptools.command.install import install\n from setuptools.command.develop import develop\n from setuptools.command.egg_info import egg_info\n from setuptools.command.sdist import sdist\n from setuptools.command.build_py import build_py\nexcept ImportError:\n from distutils.core import setup\n from distutils.command.install import install\n from distutils.command.build_py import build_py\n\nNAME = \"feast\"\nDESCRIPTION = \"Python SDK for Feast\"\nURL = \"https://github.com/feast-dev/feast\"\nAUTHOR = \"Feast\"\nREQUIRES_PYTHON = \">=3.7.0\"\n\nREQUIRED = [\n \"Click==8.*\",\n \"colorama>=0.3.9\",\n \"dill==0.3.*\",\n \"fastavro>=1.1.0\",\n \"google-api-core>=1.23.0\",\n \"googleapis-common-protos==1.52.*\",\n \"grpcio>=1.34.0\",\n \"grpcio-reflection>=1.34.0\",\n \"Jinja2>=2.0.0\",\n \"jsonschema\",\n \"mmh3\",\n \"pandas>=1.0.0\",\n \"pandavro==1.5.*\",\n \"protobuf>=3.10\",\n \"proto-plus<1.19.7\",\n \"pyarrow>=4.0.0\",\n \"pydantic>=1.0.0\",\n \"PyYAML>=5.4.*\",\n \"tabulate==0.8.*\",\n \"tenacity>=7.*\",\n \"toml==0.10.*\",\n \"tqdm==4.*\",\n \"fastapi>=0.68.0\",\n \"uvicorn[standard]>=0.14.0\",\n \"proto-plus<1.19.7\",\n \"tensorflow-metadata>=1.0.0,<2.0.0\",\n]\n\nGCP_REQUIRED = [\n \"google-cloud-bigquery>=2.28.1\",\n \"google-cloud-bigquery-storage >= 2.0.0\",\n \"google-cloud-datastore>=2.1.*\",\n \"google-cloud-storage>=1.34.*,<1.41\",\n \"google-cloud-core>=1.4.0,<2.0.0\",\n]\n\nREDIS_REQUIRED = [\n \"redis>=4.1.0\",\n \"hiredis>=2.0.0\",\n]\n\nAWS_REQUIRED = [\n \"boto3>=1.17.0\",\n \"docker>=5.0.2\",\n]\n\nCI_REQUIRED = (\n [\n \"cryptography==3.3.2\",\n \"flake8\",\n \"black==19.10b0\",\n \"isort>=5\",\n \"grpcio-tools==1.34.0\",\n \"grpcio-testing==1.34.0\",\n \"minio==7.1.0\",\n \"mock==2.0.0\",\n \"moto\",\n \"mypy==0.931\",\n \"mypy-protobuf==3.1.0\",\n \"avro==1.10.0\",\n \"gcsfs\",\n \"urllib3>=1.25.4\",\n \"pytest>=6.0.0\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-benchmark>=3.4.1\",\n \"pytest-lazy-fixture==0.6.3\",\n \"pytest-timeout==1.4.2\",\n \"pytest-ordering==0.6.*\",\n \"pytest-mock==1.10.4\",\n \"Sphinx!=4.0.0,<4.4.0\",\n \"sphinx-rtd-theme\",\n \"testcontainers==3.4.2\",\n \"adlfs==0.5.9\",\n \"firebase-admin==4.5.2\",\n \"pre-commit\",\n \"assertpy==1.1\",\n \"pip-tools\",\n \"types-protobuf\",\n \"types-python-dateutil\",\n \"types-pytz\",\n \"types-PyYAML\",\n 
\"types-redis\",\n \"types-requests\",\n \"types-setuptools\",\n \"types-tabulate\",\n ]\n + GCP_REQUIRED\n + REDIS_REQUIRED\n + AWS_REQUIRED\n)\n\nDEV_REQUIRED = [\"mypy-protobuf>=1.*\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n\n# Get git repo root directory\nrepo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n\n# README file from Feast repo root directory\nREADME_FILE = os.path.join(repo_root, \"README.md\")\nwith open(README_FILE, \"r\", encoding=\"utf8\") as f:\n LONG_DESCRIPTION = f.read()\n\n# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.\n# Regex modified from default tag regex in:\n# https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9\nTAG_REGEX = re.compile(\n r\"^(?:[\\/\\w-]+)?(?P<version>[vV]?\\d+(?:\\.\\d+){0,2}[^\\+]*)(?:\\+.*)?$\"\n)\n\n# Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)\nif shutil.which(\"git\"):\n use_scm_version = {\"root\": \"../..\", \"relative_to\": __file__, \"tag_regex\": TAG_REGEX}\nelse:\n use_scm_version = None\n\n\nclass BuildProtoCommand(Command):\n description = \"Builds the proto files into python files.\"\n\n def initialize_options(self):\n self.protoc = [\"python\", \"-m\", \"grpc_tools.protoc\"] # find_executable(\"protoc\")\n self.proto_folder = os.path.join(repo_root, \"protos\")\n self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')\n self.sub_folders = [\"core\", \"serving\", \"types\", \"storage\"]\n\n def finalize_options(self):\n pass\n\n def _generate_protos(self, path):\n proto_files = glob.glob(os.path.join(self.proto_folder, path))\n\n subprocess.check_call(self.protoc + [\n '-I', self.proto_folder,\n '--python_out', self.this_package,\n '--grpc_python_out', self.this_package,\n '--mypy_out', self.this_package] + proto_files)\n\n def run(self):\n for sub_folder in self.sub_folders:\n self._generate_protos(f'feast/{sub_folder}/*.proto')\n\n from pathlib import Path\n\n for path in Path('feast/protos').rglob('*.py'):\n for folder in self.sub_folders:\n # Read in the file\n with open(path, 'r') as file:\n filedata = file.read()\n\n # Replace the target string\n filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')\n\n # Write the file out again\n with open(path, 'w') as file:\n file.write(filedata)\n\n\nclass BuildCommand(build_py):\n \"\"\"Custom build command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n build_py.run(self)\n\n\nclass DevelopCommand(develop):\n \"\"\"Custom develop command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n develop.run(self)\n\n\nsetup(\n name=NAME,\n author=AUTHOR,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=(\"tests\",)),\n install_requires=REQUIRED,\n # https://stackoverflow.com/questions/28509965/setuptools-development-requirements\n # Install dev requirements with: pip install -e .[dev]\n extras_require={\n \"dev\": DEV_REQUIRED,\n \"ci\": CI_REQUIRED,\n \"gcp\": GCP_REQUIRED,\n \"aws\": AWS_REQUIRED,\n \"redis\": REDIS_REQUIRED,\n },\n include_package_data=True,\n license=\"Apache\",\n classifiers=[\n # Trove classifiers\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"License :: OSI Approved :: Apache Software License\",\n 
\"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n ],\n entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==1.*\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n \"protos/feast/third_party/grpc/health/v1/*.proto\",\n \"feast/protos/feast/**/*.py\",\n ],\n },\n cmdclass={\n \"build_proto\": BuildProtoCommand,\n \"build_py\": BuildCommand,\n \"develop\": DevelopCommand,\n },\n)\n", "path": "sdk/python/setup.py"}]} | 3,920 | 281 |
gh_patches_debug_7622 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-18425 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extractor for yourporn.sexy is broken
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.12.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.12.03**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
```
$ youtube-dl -v https://yourporn.sexy/post/5bf56573616c2.html
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'https://yourporn.sexy/post/5bf56573616c2.html']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.12.03
[debug] Python version 2.7.10 (CPython) - Darwin-17.7.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2
[debug] Proxy map: {}
[YourPorn] 5bf56573616c2: Downloading webpage
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on u'https://yourporn.sexy/cdn/c11/ldjJi9usRy26gVwhgzEn9w/1544086469/hk5sajembx0dd41hcp09ah8m3s2/25qb3fr5d605l7m316y1969c42k.mp4'
ERROR: Did not get any data blocks
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/Users/v-delta/.local/bin/youtube-dl/__main__.py", line 19, in <module>
youtube_dl.main()
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py", line 472, in main
_real_main(argv)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py", line 462, in _real_main
retcode = ydl.download(all_urls)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2001, in download
url, force_generic_extractor=self.params.get('force_generic_extractor', False))
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 803, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 857, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1635, in process_video_result
self.process_info(new_info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1908, in process_info
success = dl(filename, info_dict)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1847, in dl
return fd.download(name, info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py", line 364, in download
return self.real_download(filename, info_dict)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py", line 342, in real_download
return download()
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py", line 312, in download
self.report_error('Did not get any data blocks')
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py", line 165, in report_error
self.ydl.report_error(*args, **kargs)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 620, in report_error
self.trouble(error_message, tb)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 582, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
```
### Description of your *issue*, suggested solution and other information
The videos play fine in any browser; that is because somewhere the URL the extractor delivers is changed from
```
https://yourporn.sexy/cdn/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4
```
to
```
https://yourporn.sexy/cdn2/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4
```
A `2` is inserted after `/cdn`. I will create a pull request fixing this bug soon.
</issue>
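The change the report points at boils down to rewriting the CDN path segment of the extracted URL. A minimal sketch of that substitution, assuming the only difference between the broken and working URLs is the `/cdn/` → `/cdn2/` swap quoted above (the extractor change in the diff further down applies the same replacement inline):

```python
def _fix_cdn_url(video_url):
    """Rewrite the CDN path segment as described in the report.

    Hypothetical helper for illustration only; the real fix performs this
    replacement directly on the URL parsed out of the ``data-vnfo`` attribute.
    """
    return video_url.replace('/cdn/', '/cdn2/')


# Example with the URLs quoted in the report:
# _fix_cdn_url('https://yourporn.sexy/cdn/c11/.../e51bafd5...mp4')
# -> 'https://yourporn.sexy/cdn2/c11/.../e51bafd5...mp4'
```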
<code>
[start of youtube_dl/extractor/yourporn.py]
1 from __future__ import unicode_literals
2
3 from .common import InfoExtractor
4 from ..utils import urljoin
5
6
7 class YourPornIE(InfoExtractor):
8 _VALID_URL = r'https?://(?:www\.)?yourporn\.sexy/post/(?P<id>[^/?#&.]+)'
9 _TEST = {
10 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',
11 'md5': '6f8682b6464033d87acaa7a8ff0c092e',
12 'info_dict': {
13 'id': '57ffcb2e1179b',
14 'ext': 'mp4',
15 'title': 'md5:c9f43630bd968267672651ba905a7d35',
16 'thumbnail': r're:^https?://.*\.jpg$',
17 },
18 }
19
20 def _real_extract(self, url):
21 video_id = self._match_id(url)
22
23 webpage = self._download_webpage(url, video_id)
24
25 video_url = urljoin(url, self._parse_json(
26 self._search_regex(
27 r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
28 group='data'),
29 video_id)[video_id])
30
31 title = (self._search_regex(
32 r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
33 default=None) or self._og_search_description(webpage)).strip()
34 thumbnail = self._og_search_thumbnail(webpage)
35
36 return {
37 'id': video_id,
38 'url': video_url,
39 'title': title,
40 'thumbnail': thumbnail,
41 }
42
[end of youtube_dl/extractor/yourporn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py
--- a/youtube_dl/extractor/yourporn.py
+++ b/youtube_dl/extractor/yourporn.py
@@ -26,7 +26,7 @@
self._search_regex(
r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
group='data'),
- video_id)[video_id])
+ video_id)[video_id]).replace('/cdn/', '/cdn2/')
title = (self._search_regex(
r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
| {"golden_diff": "diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py\n--- a/youtube_dl/extractor/yourporn.py\n+++ b/youtube_dl/extractor/yourporn.py\n@@ -26,7 +26,7 @@\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n- video_id)[video_id])\n+ video_id)[video_id]).replace('/cdn/', '/cdn2/')\n \n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n", "issue": "Extractor for yourporn.sexy is broken\n## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.12.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.12.03**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n```\r\n$ youtube-dl -v https://yourporn.sexy/post/5bf56573616c2.html\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'https://yourporn.sexy/post/5bf56573616c2.html']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.12.03\r\n[debug] Python version 2.7.10 (CPython) - Darwin-17.7.0-x86_64-i386-64bit\r\n[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2\r\n[debug] Proxy map: {}\r\n[YourPorn] 5bf56573616c2: Downloading webpage\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on u'https://yourporn.sexy/cdn/c11/ldjJi9usRy26gVwhgzEn9w/1544086469/hk5sajembx0dd41hcp09ah8m3s2/25qb3fr5d605l7m316y1969c42k.mp4'\r\n\r\n\r\nERROR: Did not get any data blocks\r\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 162, in _run_module_as_main\r\n \"__main__\", fname, loader, pkg_name)\r\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/Users/v-delta/.local/bin/youtube-dl/__main__.py\", line 19, in <module>\r\n youtube_dl.main()\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py\", line 472, in main\r\n _real_main(argv)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py\", line 462, in _real_main\r\n retcode = 
ydl.download(all_urls)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 2001, in download\r\n url, force_generic_extractor=self.params.get('force_generic_extractor', False))\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 803, in extract_info\r\n return self.process_ie_result(ie_result, download, extra_info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 857, in process_ie_result\r\n return self.process_video_result(ie_result, download=download)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1635, in process_video_result\r\n self.process_info(new_info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1908, in process_info\r\n success = dl(filename, info_dict)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1847, in dl\r\n return fd.download(name, info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py\", line 364, in download\r\n return self.real_download(filename, info_dict)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py\", line 342, in real_download\r\n return download()\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py\", line 312, in download\r\n self.report_error('Did not get any data blocks')\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py\", line 165, in report_error\r\n self.ydl.report_error(*args, **kargs)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 620, in report_error\r\n self.trouble(error_message, tb)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 582, in trouble\r\n tb_data = traceback.format_list(traceback.extract_stack())\r\n```\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nThe videos play fine in any browser, that is because somewehre the URL the extractor delivers is changed from\r\n\r\n```\r\nhttps://yourporn.sexy/cdn/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4\r\n```\r\n\r\nto\r\n\r\n```\r\nhttps://yourporn.sexy/cdn2/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4\r\n```\r\n\r\nA `2` is inserted after `/cdn`. 
I will create a pull request fixing this bug soon.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import urljoin\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n _TEST = {\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n video_id)[video_id])\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}]} | 2,752 | 164 |
gh_patches_debug_25195 | rasdani/github-patches | git_diff | pypi__warehouse-1178 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Search relevancy is still not ideal
#1020 fixed #1019 for the majority of packages; however, a few still produce odd results:
For example:
https://warehouse.python.org/search/?q=flask (`Flask` package is 2nd, `Flask-Admin` is first)
https://warehouse.python.org/search/?q=django (`Django` package is 11th, `dotulu` is first)
https://warehouse.python.org/search/?q=git (First 3 packages do not have "git" anywhere in them)
This is hard to test in dev because the dev DB is a snapshot of TestPyPI, and those packages are missing.
@dstufft, would it be possible to get a more complete DB for local development?
</issue>
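Relevancy problems like the ones listed above generally trace back to how fields are weighted in the `multi_match` query. A minimal sketch of per-field boosting with elasticsearch-dsl; the index name, field list, and weights here are illustrative assumptions rather than the project's actual values (the diff adopted for this record boosts a `normalized_name` field together with summary, keywords, and description):

```python
from elasticsearch_dsl import Search


def search_projects(client, term):
    # Boost name-like fields far above free-text fields so that an exact
    # package-name hit ("flask", "django", "git") outranks prose mentions.
    # The weights below are example values, not tuned settings.
    return Search(using=client, index="projects").query(
        "multi_match",
        query=term,
        fields=["normalized_name^10", "keywords^5", "summary^5", "description^2"],
    )
```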
<code>
[start of warehouse/packaging/search.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from elasticsearch_dsl import DocType, String, analyzer, MetaField, Date
14
15 from warehouse.search import doc_type
16
17
18 EmailAnalyzer = analyzer(
19 "email",
20 tokenizer="uax_url_email",
21 filter=["standard", "lowercase", "stop", "snowball"],
22 )
23
24
25 @doc_type
26 class Project(DocType):
27
28 name = String()
29 normalized_name = String(index="not_analyzed")
30 version = String(index="not_analyzed", multi=True)
31 summary = String(analyzer="snowball")
32 description = String(analyzer="snowball")
33 author = String()
34 author_email = String(analyzer=EmailAnalyzer)
35 maintainer = String()
36 maintainer_email = String(analyzer=EmailAnalyzer)
37 license = String()
38 home_page = String(index="not_analyzed")
39 download_url = String(index="not_analyzed")
40 keywords = String(analyzer="snowball")
41 platform = String(index="not_analyzed")
42 created = Date()
43 classifiers = String(index="not_analyzed", multi=True)
44
45 class Meta:
46 # disable the _all field to save some space
47 all = MetaField(enabled=False)
48
49 @classmethod
50 def from_db(cls, release):
51 obj = cls(meta={"id": release.project.normalized_name})
52 obj["name"] = release.project.name
53 obj["normalized_name"] = release.project.normalized_name
54 obj["version"] = [r.version for r in release.project.releases]
55 obj["summary"] = release.summary
56 obj["description"] = release.description
57 obj["author"] = release.author
58 obj["author_email"] = release.author_email
59 obj["maintainer"] = release.maintainer
60 obj["maintainer_email"] = release.maintainer_email
61 obj["home_page"] = release.home_page
62 obj["download_url"] = release.download_url
63 obj["keywords"] = release.keywords
64 obj["platform"] = release.platform
65 obj["created"] = release.created
66 obj["classifiers"] = list(release.classifiers)
67
68 return obj
69
[end of warehouse/packaging/search.py]
[start of warehouse/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14
15 from pyramid.httpexceptions import (
16 HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,
17 )
18 from pyramid.view import (
19 notfound_view_config, forbidden_view_config, view_config,
20 )
21 from sqlalchemy import func
22 from sqlalchemy.orm import aliased, joinedload
23
24 from warehouse.accounts import REDIRECT_FIELD_NAME
25 from warehouse.accounts.models import User
26 from warehouse.cache.origin import origin_cache
27 from warehouse.cache.http import cache_control
28 from warehouse.classifiers.models import Classifier
29 from warehouse.packaging.models import Project, Release, File
30 from warehouse.utils.row_counter import RowCount
31 from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory
32
33
34 @view_config(context=HTTPException)
35 @notfound_view_config(append_slash=HTTPMovedPermanently)
36 def httpexception_view(exc, request):
37 return exc
38
39
40 @forbidden_view_config()
41 def forbidden(exc, request):
42 # If the forbidden error is because the user isn't logged in, then we'll
43 # redirect them to the log in page.
44 if request.authenticated_userid is None:
45 url = request.route_url(
46 "accounts.login",
47 _query={REDIRECT_FIELD_NAME: request.path_qs},
48 )
49 return HTTPSeeOther(url)
50
51 # If we've reached here, then the user is logged in and they are genuinely
52 # not allowed to access this page.
53 # TODO: Style the forbidden page.
54 return exc
55
56
57 @view_config(
58 route_name="robots.txt",
59 renderer="robots.txt",
60 decorator=[
61 cache_control(1 * 24 * 60 * 60), # 1 day
62 origin_cache(
63 1 * 24 * 60 * 60, # 1 day
64 stale_while_revalidate=6 * 60 * 60, # 6 hours
65 stale_if_error=1 * 24 * 60 * 60, # 1 day
66 ),
67 ],
68 )
69 def robotstxt(request):
70 request.response.content_type = "text/plain"
71 return {}
72
73
74 @view_config(
75 route_name="index",
76 renderer="index.html",
77 decorator=[
78 origin_cache(
79 1 * 60 * 60, # 1 hour
80 stale_while_revalidate=10 * 60, # 10 minutes
81 stale_if_error=1 * 24 * 60 * 60, # 1 day
82 keys=["all-projects"],
83 ),
84 ]
85 )
86 def index(request):
87 project_names = [
88 r[0] for r in (
89 request.db.query(File.name)
90 .group_by(File.name)
91 .order_by(func.sum(File.downloads).desc())
92 .limit(5)
93 .all())
94 ]
95 release_a = aliased(
96 Release,
97 request.db.query(Release)
98 .distinct(Release.name)
99 .filter(Release.name.in_(project_names))
100 .order_by(Release.name, Release._pypi_ordering.desc())
101 .subquery(),
102 )
103 top_projects = (
104 request.db.query(release_a)
105 .options(joinedload(release_a.project),
106 joinedload(release_a.uploader))
107 .order_by(func.array_idx(project_names, release_a.name))
108 .all()
109 )
110
111 latest_releases = (
112 request.db.query(Release)
113 .options(joinedload(Release.project),
114 joinedload(Release.uploader))
115 .order_by(Release.created.desc())
116 .limit(5)
117 .all()
118 )
119
120 counts = dict(
121 request.db.query(RowCount.table_name, RowCount.count)
122 .filter(
123 RowCount.table_name.in_([
124 Project.__tablename__,
125 Release.__tablename__,
126 File.__tablename__,
127 User.__tablename__,
128 ]))
129 .all()
130 )
131
132 return {
133 "latest_releases": latest_releases,
134 "top_projects": top_projects,
135 "num_projects": counts.get(Project.__tablename__, 0),
136 "num_releases": counts.get(Release.__tablename__, 0),
137 "num_files": counts.get(File.__tablename__, 0),
138 "num_users": counts.get(User.__tablename__, 0),
139 }
140
141
142 @view_config(
143 route_name="search",
144 renderer="search/results.html",
145 decorator=[
146 origin_cache(
147 1 * 60 * 60, # 1 hour
148 stale_while_revalidate=10 * 60, # 10 minutes
149 stale_if_error=1 * 24 * 60 * 60, # 1 day
150 keys=["all-projects"],
151 )
152 ],
153 )
154 def search(request):
155 if request.params.get("q"):
156 query = request.es.query(
157 "multi_match",
158 query=request.params["q"],
159 fields=[
160 "name^2", "version", "author", "author_email", "maintainer",
161 "maintainer_email", "home_page", "license", "summary",
162 "description", "keywords", "platform", "download_url",
163 ],
164 ).suggest(
165 name="name_suggestion",
166 text=request.params["q"],
167 term={"field": "name"}
168 )
169 else:
170 query = request.es.query()
171
172 if request.params.get("o"):
173 query = query.sort(request.params["o"])
174
175 if request.params.getall("c"):
176 query = query.filter('terms', classifiers=request.params.getall("c"))
177
178 page_num = int(request.params.get("page", 1))
179 page = ElasticsearchPage(
180 query,
181 page=page_num,
182 url_maker=paginate_url_factory(request),
183 )
184
185 if page.page_count and page_num > page.page_count:
186 raise HTTPNotFound
187
188 available_filters = collections.defaultdict(list)
189
190 for cls in request.db.query(Classifier).order_by(Classifier.classifier):
191 first, *_ = cls.classifier.split(' :: ')
192 available_filters[first].append(cls.classifier)
193
194 return {
195 "page": page,
196 "term": request.params.get("q", ''),
197 "order": request.params.get("o", ''),
198 "available_filters": sorted(available_filters.items()),
199 "applied_filters": request.params.getall("c"),
200 }
201
202
203 @view_config(
204 route_name="includes.current-user-indicator",
205 renderer="includes/current-user-indicator.html",
206 uses_session=True,
207 )
208 def current_user_indicator(request):
209 return {}
210
211
212 @view_config(route_name="health", renderer="string")
213 def health(request):
214 # This will ensure that we can access the database and run queries against
215 # it without doing anything that will take a lock or block other queries.
216 request.db.execute("SELECT 1")
217
218 # Nothing will actually check this, but it's a little nicer to have
219 # something to return besides an empty body.
220 return "OK"
221
[end of warehouse/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py
--- a/warehouse/packaging/search.py
+++ b/warehouse/packaging/search.py
@@ -21,12 +21,18 @@
filter=["standard", "lowercase", "stop", "snowball"],
)
+NameAnalyzer = analyzer(
+ "normalized_name",
+ tokenizer="lowercase",
+ filter=["standard", "lowercase", "word_delimiter"],
+)
+
@doc_type
class Project(DocType):
name = String()
- normalized_name = String(index="not_analyzed")
+ normalized_name = String(analyzer=NameAnalyzer, index_options="docs")
version = String(index="not_analyzed", multi=True)
summary = String(analyzer="snowball")
description = String(analyzer="snowball")
diff --git a/warehouse/views.py b/warehouse/views.py
--- a/warehouse/views.py
+++ b/warehouse/views.py
@@ -157,9 +157,10 @@
"multi_match",
query=request.params["q"],
fields=[
- "name^2", "version", "author", "author_email", "maintainer",
- "maintainer_email", "home_page", "license", "summary",
- "description", "keywords", "platform", "download_url",
+ "author", "author_email", "description^5", "download_url",
+ "home_page", "keywords^5", "license", "maintainer",
+ "maintainer_email", "normalized_name^10", "platform",
+ "summary^5",
],
).suggest(
name="name_suggestion",
| {"golden_diff": "diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py\n--- a/warehouse/packaging/search.py\n+++ b/warehouse/packaging/search.py\n@@ -21,12 +21,18 @@\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n )\n \n+NameAnalyzer = analyzer(\n+ \"normalized_name\",\n+ tokenizer=\"lowercase\",\n+ filter=[\"standard\", \"lowercase\", \"word_delimiter\"],\n+)\n+\n \n @doc_type\n class Project(DocType):\n \n name = String()\n- normalized_name = String(index=\"not_analyzed\")\n+ normalized_name = String(analyzer=NameAnalyzer, index_options=\"docs\")\n version = String(index=\"not_analyzed\", multi=True)\n summary = String(analyzer=\"snowball\")\n description = String(analyzer=\"snowball\")\ndiff --git a/warehouse/views.py b/warehouse/views.py\n--- a/warehouse/views.py\n+++ b/warehouse/views.py\n@@ -157,9 +157,10 @@\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n- \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n- \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n- \"description\", \"keywords\", \"platform\", \"download_url\",\n+ \"author\", \"author_email\", \"description^5\", \"download_url\",\n+ \"home_page\", \"keywords^5\", \"license\", \"maintainer\",\n+ \"maintainer_email\", \"normalized_name^10\", \"platform\",\n+ \"summary^5\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n", "issue": "Search relevancy is still not ideal\n#1020 fixed #1019 for the majority of packages, however a few still produce odd results:\n\nFor example:\nhttps://warehouse.python.org/search/?q=flask (`Flask` package is 2nd, `Flask-Admin` is first)\nhttps://warehouse.python.org/search/?q=django (`Django` package is 11th, `dotulu` is first)\nhttps://warehouse.python.org/search/?q=git (First 3 packages do not have \"git\" anywhere in them)\n\nThis is hard to test in dev because the dev DB is a snapshot of TestPyPI, and those packages are missing.\n\n@dstufft, would it be possible to get a more complete DB for local development?\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom elasticsearch_dsl import DocType, String, analyzer, MetaField, Date\n\nfrom warehouse.search import doc_type\n\n\nEmailAnalyzer = analyzer(\n \"email\",\n tokenizer=\"uax_url_email\",\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n)\n\n\n@doc_type\nclass Project(DocType):\n\n name = String()\n normalized_name = String(index=\"not_analyzed\")\n version = String(index=\"not_analyzed\", multi=True)\n summary = String(analyzer=\"snowball\")\n description = String(analyzer=\"snowball\")\n author = String()\n author_email = String(analyzer=EmailAnalyzer)\n maintainer = String()\n maintainer_email = String(analyzer=EmailAnalyzer)\n license = String()\n home_page = String(index=\"not_analyzed\")\n download_url = String(index=\"not_analyzed\")\n keywords = String(analyzer=\"snowball\")\n platform = String(index=\"not_analyzed\")\n created = Date()\n classifiers = String(index=\"not_analyzed\", 
multi=True)\n\n class Meta:\n # disable the _all field to save some space\n all = MetaField(enabled=False)\n\n @classmethod\n def from_db(cls, release):\n obj = cls(meta={\"id\": release.project.normalized_name})\n obj[\"name\"] = release.project.name\n obj[\"normalized_name\"] = release.project.normalized_name\n obj[\"version\"] = [r.version for r in release.project.releases]\n obj[\"summary\"] = release.summary\n obj[\"description\"] = release.description\n obj[\"author\"] = release.author\n obj[\"author_email\"] = release.author_email\n obj[\"maintainer\"] = release.maintainer\n obj[\"maintainer_email\"] = release.maintainer_email\n obj[\"home_page\"] = release.home_page\n obj[\"download_url\"] = release.download_url\n obj[\"keywords\"] = release.keywords\n obj[\"platform\"] = release.platform\n obj[\"created\"] = release.created\n obj[\"classifiers\"] = list(release.classifiers)\n\n return obj\n", "path": "warehouse/packaging/search.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.classifiers.models import Classifier\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.utils.row_counter import RowCount\nfrom warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\n@view_config(context=HTTPException)\n@notfound_view_config(append_slash=HTTPMovedPermanently)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef 
index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n .group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project),\n joinedload(release_a.uploader))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project),\n joinedload(Release.uploader))\n .order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n if request.params.get(\"q\"):\n query = request.es.query(\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n \"description\", \"keywords\", \"platform\", \"download_url\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n text=request.params[\"q\"],\n term={\"field\": \"name\"}\n )\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n if request.params.getall(\"c\"):\n query = query.filter('terms', classifiers=request.params.getall(\"c\"))\n\n page_num = int(request.params.get(\"page\", 1))\n page = ElasticsearchPage(\n query,\n page=page_num,\n url_maker=paginate_url_factory(request),\n )\n\n if page.page_count and page_num > page.page_count:\n raise HTTPNotFound\n\n available_filters = collections.defaultdict(list)\n\n for cls in request.db.query(Classifier).order_by(Classifier.classifier):\n first, *_ = cls.classifier.split(' :: ')\n available_filters[first].append(cls.classifier)\n\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\", ''),\n \"order\": request.params.get(\"o\", ''),\n \"available_filters\": sorted(available_filters.items()),\n \"applied_filters\": request.params.getall(\"c\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n uses_session=True,\n)\ndef current_user_indicator(request):\n return {}\n\n\n@view_config(route_name=\"health\", renderer=\"string\")\ndef health(request):\n # This will ensure that we can access the database and run queries against\n # it without doing anything that will take a lock or block other queries.\n request.db.execute(\"SELECT 1\")\n\n # Nothing will actually check this, but it's a little nicer to have\n # something to return besides an empty body.\n return \"OK\"\n", "path": 
"warehouse/views.py"}]} | 3,560 | 372 |
gh_patches_debug_51074 | rasdani/github-patches | git_diff | Qiskit__qiskit-4331 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pass_manager_drawer requires filename to render
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master
- **Python version**:
- **Operating system**:
### What is the current behavior?
The `pass_manager_drawer` function requires a filename in order to run. However, this is not really a requirement of the code itself. Indeed, this works fine:
```python
pass_manager_drawer(pm, '')
```
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
</issue>
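A minimal sketch of the relaxed call pattern, assuming `filename` gains a `None` default (as the accompanying diff does) so the function can simply return the in-memory `PIL.Image` it already builds when nothing is written to disk:

```python
from qiskit import QuantumCircuit
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
from qiskit.visualization import pass_manager_drawer

pm = PassManager(Unroller(['u1', 'u2', 'u3', 'cx']))

# No filename: returns an in-memory PIL.Image (requires Pillow and Graphviz).
image = pass_manager_drawer(pm)

# Passing a filename keeps the existing save-to-file behaviour.
pass_manager_drawer(pm, filename="passmanager.png")
```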
<code>
[start of qiskit/visualization/pass_manager_visualization.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Visualization function for a pass manager. Passes are grouped based on their
17 flow controller, and coloured based on the type of pass.
18 """
19 import os
20 import inspect
21 import tempfile
22
23 try:
24 from PIL import Image
25
26 HAS_PIL = True
27 except ImportError:
28 HAS_PIL = False
29
30 from qiskit.visualization import utils
31 from qiskit.visualization.exceptions import VisualizationError
32 from qiskit.transpiler.basepasses import AnalysisPass, TransformationPass
33
34 DEFAULT_STYLE = {AnalysisPass: 'red',
35 TransformationPass: 'blue'}
36
37
38 def pass_manager_drawer(pass_manager, filename, style=None, raw=False):
39 """
40 Draws the pass manager.
41
42 This function needs `pydot <https://github.com/erocarrera/pydot>`, which in turn needs
43 Graphviz <https://www.graphviz.org/>` to be installed.
44
45 Args:
46 pass_manager (PassManager): the pass manager to be drawn
47 filename (str): file path to save image to
48 style (dict or OrderedDict): keys are the pass classes and the values are
49 the colors to make them. An example can be seen in the DEFAULT_STYLE. An ordered
50 dict can be used to ensure a priority coloring when pass falls into multiple
51 categories. Any values not included in the provided dict will be filled in from
52 the default dict
53 raw (Bool) : True if you want to save the raw Dot output not an image. The
54 default is False.
55 Returns:
56 PIL.Image or None: an in-memory representation of the pass manager. Or None if
57 no image was generated or PIL is not installed.
58 Raises:
59 ImportError: when nxpd or pydot not installed.
60 VisualizationError: If raw=True and filename=None.
61
62 Example:
63 .. code-block::
64
65 %matplotlib inline
66 from qiskit import QuantumCircuit
67 from qiskit.compiler import transpile
68 from qiskit.transpiler import PassManager
69 from qiskit.visualization import pass_manager_drawer
70 from qiskit.transpiler.passes import Unroller
71
72 circ = QuantumCircuit(3)
73 circ.ccx(0, 1, 2)
74 circ.draw()
75
76 pass_ = Unroller(['u1', 'u2', 'u3', 'cx'])
77 pm = PassManager(pass_)
78 new_circ = pm.run(circ)
79 new_circ.draw(output='mpl')
80
81 pass_manager_drawer(pm, "passmanager.jpg")
82 """
83
84 try:
85 import subprocess
86
87 _PROC = subprocess.Popen(['dot', '-V'], # pylint: disable=invalid-name
88 stdout=subprocess.PIPE,
89 stderr=subprocess.PIPE)
90 _PROC.communicate()
91 if _PROC.returncode != 0:
92 has_graphviz = False
93 else:
94 has_graphviz = True
95 except Exception: # pylint: disable=broad-except
96 # this is raised when the dot command cannot be found, which means GraphViz
97 # isn't installed
98 has_graphviz = False
99
100 HAS_GRAPHVIZ = has_graphviz # pylint: disable=invalid-name
101
102 try:
103 import pydot
104 if not HAS_GRAPHVIZ:
105 raise ImportError
106 except ImportError:
107 raise ImportError("pass_manager_drawer requires pydot and graphviz. "
108 "Run 'pip install pydot'. "
109 "Graphviz can be installed using 'brew install graphviz' on Mac"
110 " or by downloading it from the website.")
111
112 passes = pass_manager.passes()
113
114 if not style:
115 style = DEFAULT_STYLE
116
117 # create the overall graph
118 graph = pydot.Dot()
119
120 # identifiers for nodes need to be unique, so assign an id
121 # can't just use python's id in case the exact same pass was
122 # appended more than once
123 component_id = 0
124
125 prev_node = None
126
127 for index, controller_group in enumerate(passes):
128
129 # label is the name of the flow controller parameter
130 label = "[%s] %s" % (index, ', '.join(controller_group['flow_controllers']))
131
132 # create the subgraph for this controller
133 subgraph = pydot.Cluster(str(component_id), label=label, fontname='helvetica',
134 labeljust='l')
135 component_id += 1
136
137 for pass_ in controller_group['passes']:
138
139 # label is the name of the pass
140 node = pydot.Node(str(component_id),
141 label=str(type(pass_).__name__),
142 color=_get_node_color(pass_, style),
143 shape="rectangle",
144 fontname='helvetica')
145
146 subgraph.add_node(node)
147 component_id += 1
148
149 # the arguments that were provided to the pass when it was created
150 arg_spec = inspect.getfullargspec(pass_.__init__)
151 # 0 is the args, 1: to remove the self arg
152 args = arg_spec[0][1:]
153
154 num_optional = len(arg_spec[3]) if arg_spec[3] else 0
155
156 # add in the inputs to the pass
157 for arg_index, arg in enumerate(args):
158 nd_style = 'solid'
159 # any optional args are dashed
160 # the num of optional counts from the end towards the start of the list
161 if arg_index >= (len(args) - num_optional):
162 nd_style = 'dashed'
163
164 input_node = pydot.Node(component_id, label=arg,
165 color="black",
166 shape="ellipse",
167 fontsize=10,
168 style=nd_style,
169 fontname='helvetica')
170 subgraph.add_node(input_node)
171 component_id += 1
172 subgraph.add_edge(pydot.Edge(input_node, node))
173
174 # if there is a previous node, add an edge between them
175 if prev_node:
176 subgraph.add_edge(pydot.Edge(prev_node, node))
177
178 prev_node = node
179
180 graph.add_subgraph(subgraph)
181
182 if raw:
183 if filename:
184 graph.write(filename, format='raw')
185 return None
186 else:
187 raise VisualizationError("if format=raw, then a filename is required.")
188
189 if not HAS_PIL and filename:
190 # linter says this isn't a method - it is
191 graph.write_png(filename) # pylint: disable=no-member
192 return None
193
194 with tempfile.TemporaryDirectory() as tmpdirname:
195 tmppath = os.path.join(tmpdirname, 'pass_manager.png')
196
197 # linter says this isn't a method - it is
198 graph.write_png(tmppath) # pylint: disable=no-member
199
200 image = Image.open(tmppath)
201 image = utils._trim(image)
202 os.remove(tmppath)
203 if filename:
204 image.save(filename, 'PNG')
205 return image
206
207
208 def _get_node_color(pss, style):
209 # look in the user provided dict first
210 for typ, color in style.items():
211 if isinstance(pss, typ):
212 return color
213
214 # failing that, look in the default
215 for typ, color in DEFAULT_STYLE.items():
216 if isinstance(pss, typ):
217 return color
218
219 return "black"
220
[end of qiskit/visualization/pass_manager_visualization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/visualization/pass_manager_visualization.py b/qiskit/visualization/pass_manager_visualization.py
--- a/qiskit/visualization/pass_manager_visualization.py
+++ b/qiskit/visualization/pass_manager_visualization.py
@@ -35,7 +35,7 @@
TransformationPass: 'blue'}
-def pass_manager_drawer(pass_manager, filename, style=None, raw=False):
+def pass_manager_drawer(pass_manager, filename=None, style=None, raw=False):
"""
Draws the pass manager.
| {"golden_diff": "diff --git a/qiskit/visualization/pass_manager_visualization.py b/qiskit/visualization/pass_manager_visualization.py\n--- a/qiskit/visualization/pass_manager_visualization.py\n+++ b/qiskit/visualization/pass_manager_visualization.py\n@@ -35,7 +35,7 @@\n TransformationPass: 'blue'}\n \n \n-def pass_manager_drawer(pass_manager, filename, style=None, raw=False):\n+def pass_manager_drawer(pass_manager, filename=None, style=None, raw=False):\n \"\"\"\n Draws the pass manager.\n", "issue": "pass_manager_drawer requires filename to render\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: master\r\n- **Python version**:\r\n- **Operating system**:\r\n\r\n### What is the current behavior?\r\nThe `pass_manager_drawer` requires a filename in order to run. However this is not really a requirement of the code itself. Indeed, this works fine:\r\n```python\r\npass_manager_drawer(pm, '')\r\n```\r\n\r\n\r\n### Steps to reproduce the problem\r\n\r\n\r\n\r\n### What is the expected behavior?\r\n\r\n\r\n\r\n### Suggested solutions\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nVisualization function for a pass manager. Passes are grouped based on their\nflow controller, and coloured based on the type of pass.\n\"\"\"\nimport os\nimport inspect\nimport tempfile\n\ntry:\n from PIL import Image\n\n HAS_PIL = True\nexcept ImportError:\n HAS_PIL = False\n\nfrom qiskit.visualization import utils\nfrom qiskit.visualization.exceptions import VisualizationError\nfrom qiskit.transpiler.basepasses import AnalysisPass, TransformationPass\n\nDEFAULT_STYLE = {AnalysisPass: 'red',\n TransformationPass: 'blue'}\n\n\ndef pass_manager_drawer(pass_manager, filename, style=None, raw=False):\n \"\"\"\n Draws the pass manager.\n\n This function needs `pydot <https://github.com/erocarrera/pydot>`, which in turn needs\n Graphviz <https://www.graphviz.org/>` to be installed.\n\n Args:\n pass_manager (PassManager): the pass manager to be drawn\n filename (str): file path to save image to\n style (dict or OrderedDict): keys are the pass classes and the values are\n the colors to make them. An example can be seen in the DEFAULT_STYLE. An ordered\n dict can be used to ensure a priority coloring when pass falls into multiple\n categories. Any values not included in the provided dict will be filled in from\n the default dict\n raw (Bool) : True if you want to save the raw Dot output not an image. The\n default is False.\n Returns:\n PIL.Image or None: an in-memory representation of the pass manager. Or None if\n no image was generated or PIL is not installed.\n Raises:\n ImportError: when nxpd or pydot not installed.\n VisualizationError: If raw=True and filename=None.\n\n Example:\n .. 
code-block::\n\n %matplotlib inline\n from qiskit import QuantumCircuit\n from qiskit.compiler import transpile\n from qiskit.transpiler import PassManager\n from qiskit.visualization import pass_manager_drawer\n from qiskit.transpiler.passes import Unroller\n\n circ = QuantumCircuit(3)\n circ.ccx(0, 1, 2)\n circ.draw()\n\n pass_ = Unroller(['u1', 'u2', 'u3', 'cx'])\n pm = PassManager(pass_)\n new_circ = pm.run(circ)\n new_circ.draw(output='mpl')\n\n pass_manager_drawer(pm, \"passmanager.jpg\")\n \"\"\"\n\n try:\n import subprocess\n\n _PROC = subprocess.Popen(['dot', '-V'], # pylint: disable=invalid-name\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n _PROC.communicate()\n if _PROC.returncode != 0:\n has_graphviz = False\n else:\n has_graphviz = True\n except Exception: # pylint: disable=broad-except\n # this is raised when the dot command cannot be found, which means GraphViz\n # isn't installed\n has_graphviz = False\n\n HAS_GRAPHVIZ = has_graphviz # pylint: disable=invalid-name\n\n try:\n import pydot\n if not HAS_GRAPHVIZ:\n raise ImportError\n except ImportError:\n raise ImportError(\"pass_manager_drawer requires pydot and graphviz. \"\n \"Run 'pip install pydot'. \"\n \"Graphviz can be installed using 'brew install graphviz' on Mac\"\n \" or by downloading it from the website.\")\n\n passes = pass_manager.passes()\n\n if not style:\n style = DEFAULT_STYLE\n\n # create the overall graph\n graph = pydot.Dot()\n\n # identifiers for nodes need to be unique, so assign an id\n # can't just use python's id in case the exact same pass was\n # appended more than once\n component_id = 0\n\n prev_node = None\n\n for index, controller_group in enumerate(passes):\n\n # label is the name of the flow controller parameter\n label = \"[%s] %s\" % (index, ', '.join(controller_group['flow_controllers']))\n\n # create the subgraph for this controller\n subgraph = pydot.Cluster(str(component_id), label=label, fontname='helvetica',\n labeljust='l')\n component_id += 1\n\n for pass_ in controller_group['passes']:\n\n # label is the name of the pass\n node = pydot.Node(str(component_id),\n label=str(type(pass_).__name__),\n color=_get_node_color(pass_, style),\n shape=\"rectangle\",\n fontname='helvetica')\n\n subgraph.add_node(node)\n component_id += 1\n\n # the arguments that were provided to the pass when it was created\n arg_spec = inspect.getfullargspec(pass_.__init__)\n # 0 is the args, 1: to remove the self arg\n args = arg_spec[0][1:]\n\n num_optional = len(arg_spec[3]) if arg_spec[3] else 0\n\n # add in the inputs to the pass\n for arg_index, arg in enumerate(args):\n nd_style = 'solid'\n # any optional args are dashed\n # the num of optional counts from the end towards the start of the list\n if arg_index >= (len(args) - num_optional):\n nd_style = 'dashed'\n\n input_node = pydot.Node(component_id, label=arg,\n color=\"black\",\n shape=\"ellipse\",\n fontsize=10,\n style=nd_style,\n fontname='helvetica')\n subgraph.add_node(input_node)\n component_id += 1\n subgraph.add_edge(pydot.Edge(input_node, node))\n\n # if there is a previous node, add an edge between them\n if prev_node:\n subgraph.add_edge(pydot.Edge(prev_node, node))\n\n prev_node = node\n\n graph.add_subgraph(subgraph)\n\n if raw:\n if filename:\n graph.write(filename, format='raw')\n return None\n else:\n raise VisualizationError(\"if format=raw, then a filename is required.\")\n\n if not HAS_PIL and filename:\n # linter says this isn't a method - it is\n graph.write_png(filename) # pylint: disable=no-member\n return None\n\n 
with tempfile.TemporaryDirectory() as tmpdirname:\n tmppath = os.path.join(tmpdirname, 'pass_manager.png')\n\n # linter says this isn't a method - it is\n graph.write_png(tmppath) # pylint: disable=no-member\n\n image = Image.open(tmppath)\n image = utils._trim(image)\n os.remove(tmppath)\n if filename:\n image.save(filename, 'PNG')\n return image\n\n\ndef _get_node_color(pss, style):\n # look in the user provided dict first\n for typ, color in style.items():\n if isinstance(pss, typ):\n return color\n\n # failing that, look in the default\n for typ, color in DEFAULT_STYLE.items():\n if isinstance(pss, typ):\n return color\n\n return \"black\"\n", "path": "qiskit/visualization/pass_manager_visualization.py"}]} | 2,917 | 112 |
gh_patches_debug_30359 | rasdani/github-patches | git_diff | apluslms__a-plus-1293 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Course staff may create duplicate student groups
Course staff may create student groups (course/models.py class StudentGroup) that contain exactly the same group members as an existing group. Duplicate groups should not be allowed. The course staff UI for editing groups is in the URL http://localhost:8000/def/current/teachers/groups/ (in the left navigation menu, it is the "Groups" link under the heading Course staff).
Course staff may also create new groups (or edit existing groups) that are empty (no members) or only have one member. Groups should always have at least two members.
When students create groups in the "form a group" page (with user personal codes), A+ already prevents empty and duplicate groups.
</issue>
<code>
[start of course/forms.py]
1 from typing import Any
2
3 from django import forms
4 from django.contrib.humanize.templatetags.humanize import ordinal
5 from django.utils.safestring import mark_safe
6 from django.utils.text import format_lazy
7 from django.utils.translation import gettext_lazy as _
8
9 from aplus.api import api_reverse
10 from exercise.models import SubmissionDraft
11 from lib.fields import UsersSearchSelectField
12 from .models import Enrollment, StudentGroup
13 from userprofile.models import UserProfile
14
15
16 class GroupsForm(forms.Form):
17
18 def __init__(self, *args, **kwargs):
19 self.profile = kwargs.pop('profile')
20 self.instance = kwargs.pop('instance')
21 self.content = kwargs.pop('content')
22 super().__init__(*args, **kwargs)
23 total = self.content.total()
24 min_size = max(total.min_group_size, 2)
25 max_size = total.max_group_size
26
27 for n in range(2, max_size + 1):
28 widget = forms.TextInput(attrs={'class':'form-control'})
29 field = forms.CharField(widget=widget, required=(n <= min_size))
30 field.label = mark_safe(format_lazy(_('GROUP_MEMBER_LABEL -- {num}'), num=ordinal(n)))
31 self.fields['member{:d}'.format(n)] = field
32
33 def clean(self):
34 super().clean()
35
36 self.member_profiles = [self.profile]
37 for key in self.fields.keys():
38 if key in self.cleaned_data and self.cleaned_data[key]:
39 enrollment = Enrollment.objects.filter(
40 course_instance=self.instance,
41 personal_code=self.cleaned_data[key].upper()
42 ).first()
43 if not enrollment:
44 self.add_error(key, _('ERROR_CODE_NOT_RECOGNIZED'))
45 elif enrollment.user_profile in self.member_profiles:
46 self.add_error(key, _('ERROR_USER_ALREADY_IN_GROUP'))
47 else:
48 self.member_profiles.append(enrollment.user_profile)
49
50 if not self.errors and len(self.member_profiles) > 1:
51 if StudentGroup.get_exact(self.instance, self.member_profiles):
52 self.add_error(None, _('ERROR_GROUP_ALREADY_EXISTS'))
53
54 return self.cleaned_data
55
56 def save(self):
57 group = StudentGroup(course_instance=self.instance)
58 group.save()
59 group.members.add(*self.member_profiles)
60 return group
61
62
63 class GroupSelectForm(forms.Form):
64 group = forms.IntegerField(required=True)
65
66 def __init__(self, *args, **kwargs):
67 self.profile = kwargs.pop('profile')
68 self.instance = kwargs.pop('instance')
69 super().__init__(*args, **kwargs)
70
71 def clean(self):
72 super().clean()
73 self.selected_group = None
74 if 'group' in self.cleaned_data:
75 gid = self.cleaned_data['group']
76 if gid != 0:
77 group = self.profile.groups.filter(id=gid, course_instance=self.instance).first()
78 if group:
79 self.selected_group = group
80 else:
81 self.add_error('group', 'Invalid group id')
82 return self.cleaned_data
83
84 def save(self) -> Enrollment:
85 enrollment = self.instance.get_enrollment_for(self.profile.user)
86 enrollment.selected_group = self.selected_group
87 enrollment.save()
88 # Deactivate all drafts when changing groups.
89 SubmissionDraft.objects.filter(
90 exercise__course_module__course_instance=self.instance,
91 submitter=self.profile,
92 active=True,
93 ).update(active=False)
94 return enrollment
95
96
97 class GroupEditForm(forms.ModelForm):
98
99 members = UsersSearchSelectField(queryset=UserProfile.objects.none(),
100 initial_queryset=UserProfile.objects.none(),
101 label=_('LABEL_MEMBERS'),
102 )
103
104 def __init__(self, *args: Any, **kwargs: Any) -> None:
105 course_instance = kwargs.get('instance').course_instance
106 super().__init__(*args, **kwargs)
107 self.fields['members'].widget.search_api_url = api_reverse(
108 "course-students-list",
109 kwargs={'course_id': course_instance.id},
110 )
111 self.fields["members"].queryset = course_instance.get_student_profiles()
112 # Course staff may use this form for modifying and creating student groups.
113 # If an existing group is being modified, its current members must be
114 # set to the initial queryset.
115 if self.instance.id:
116 self.fields["members"].initial_queryset = self.instance.members.all()
117
118 class Meta:
119 model = StudentGroup
120 fields = ['members']
121
122
123 class EnrollStudentsForm(forms.Form):
124
125 user_profiles = UsersSearchSelectField(queryset=UserProfile.objects.all(),
126 initial_queryset=UserProfile.objects.none(),
127 label=_('LABEL_USERS'),
128 required=False,
129 )
130
131 def __init__(self, *args: Any, **kwargs: Any) -> None:
132 self.instance = kwargs.pop('instance')
133 super().__init__(*args, **kwargs)
134 self.fields['user_profiles'].widget.search_api_url = api_reverse("user-list")
135 if self.instance.sis_id:
136 self.fields['sis'] = forms.BooleanField(
137 required=False,
138 label=_('LABEL_ENROLL_FROM_SIS'),
139 )
140
[end of course/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/course/forms.py b/course/forms.py
--- a/course/forms.py
+++ b/course/forms.py
@@ -5,6 +5,7 @@
from django.utils.safestring import mark_safe
from django.utils.text import format_lazy
from django.utils.translation import gettext_lazy as _
+from django.db.models import Count
from aplus.api import api_reverse
from exercise.models import SubmissionDraft
@@ -115,6 +116,30 @@
if self.instance.id:
self.fields["members"].initial_queryset = self.instance.members.all()
+ def clean(self):
+ super().clean()
+ members = self.cleaned_data.get('members')
+ if members:
+ if len(members) == 1:
+ self.add_error('members', _('MUST_HAVE_TWO_MEMBERS'))
+ course_instance = self.instance.course_instance
+ # Filter all groups with course instance and that have one or more similar members as in the members list
+ filtered_groups = StudentGroup.objects.filter(course_instance=course_instance, members__in=members)
+ # Count number of members in each group
+ groups_with_member_count = filtered_groups.annotate(member_count=Count('members'))
+ # Filter only those groups that have same number of members
+ groups_with_exact_member_count = groups_with_member_count.filter(member_count=len(members))
+ # Loop through the returned groups and check if any group with exact same members exist
+ group_exists = False
+ for group in groups_with_exact_member_count:
+ group_members = group.members.all()
+ if list(group_members) == list(members):
+ group_exists = True
+ if group_exists:
+ self.add_error('members', _('ERROR_GROUP_ALREADY_EXISTS'))
+ return self.cleaned_data
+
+
class Meta:
model = StudentGroup
fields = ['members']
| {"golden_diff": "diff --git a/course/forms.py b/course/forms.py\n--- a/course/forms.py\n+++ b/course/forms.py\n@@ -5,6 +5,7 @@\n from django.utils.safestring import mark_safe\n from django.utils.text import format_lazy\n from django.utils.translation import gettext_lazy as _\n+from django.db.models import Count\n \n from aplus.api import api_reverse\n from exercise.models import SubmissionDraft\n@@ -115,6 +116,30 @@\n if self.instance.id:\n self.fields[\"members\"].initial_queryset = self.instance.members.all()\n \n+ def clean(self):\n+ super().clean()\n+ members = self.cleaned_data.get('members')\n+ if members:\n+ if len(members) == 1:\n+ self.add_error('members', _('MUST_HAVE_TWO_MEMBERS'))\n+ course_instance = self.instance.course_instance\n+ # Filter all groups with course instance and that have one or more similar members as in the members list\n+ filtered_groups = StudentGroup.objects.filter(course_instance=course_instance, members__in=members)\n+ # Count number of members in each group\n+ groups_with_member_count = filtered_groups.annotate(member_count=Count('members'))\n+ # Filter only those groups that have same number of members\n+ groups_with_exact_member_count = groups_with_member_count.filter(member_count=len(members))\n+ # Loop through the returned groups and check if any group with exact same members exist\n+ group_exists = False\n+ for group in groups_with_exact_member_count:\n+ group_members = group.members.all()\n+ if list(group_members) == list(members):\n+ group_exists = True\n+ if group_exists:\n+ self.add_error('members', _('ERROR_GROUP_ALREADY_EXISTS'))\n+ return self.cleaned_data\n+\n+\n class Meta:\n model = StudentGroup\n fields = ['members']\n", "issue": "Course staff may create duplicate student groups\nCourse staff may create student groups (course/models.py class StudentGroup) that contain exactly the same group members as an existing group. Duplicate groups should not be allowed. The course staff UI for editing groups is in the URL http://localhost:8000/def/current/teachers/groups/ (in the left navigation menu, it is the \"Groups\" link under the heading Course staff).\r\n\r\nCourse staff may also create new groups (or edit existing groups) that are empty (no members) or only have one member. 
Groups should always have at least two members.\r\n\r\nWhen students create groups in the \"form a group\" page (with user personal codes), A+ already prevents empty and duplicate groups.\n", "before_files": [{"content": "from typing import Any\n\nfrom django import forms\nfrom django.contrib.humanize.templatetags.humanize import ordinal\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nfrom aplus.api import api_reverse\nfrom exercise.models import SubmissionDraft\nfrom lib.fields import UsersSearchSelectField\nfrom .models import Enrollment, StudentGroup\nfrom userprofile.models import UserProfile\n\n\nclass GroupsForm(forms.Form):\n\n def __init__(self, *args, **kwargs):\n self.profile = kwargs.pop('profile')\n self.instance = kwargs.pop('instance')\n self.content = kwargs.pop('content')\n super().__init__(*args, **kwargs)\n total = self.content.total()\n min_size = max(total.min_group_size, 2)\n max_size = total.max_group_size\n\n for n in range(2, max_size + 1):\n widget = forms.TextInput(attrs={'class':'form-control'})\n field = forms.CharField(widget=widget, required=(n <= min_size))\n field.label = mark_safe(format_lazy(_('GROUP_MEMBER_LABEL -- {num}'), num=ordinal(n)))\n self.fields['member{:d}'.format(n)] = field\n\n def clean(self):\n super().clean()\n\n self.member_profiles = [self.profile]\n for key in self.fields.keys():\n if key in self.cleaned_data and self.cleaned_data[key]:\n enrollment = Enrollment.objects.filter(\n course_instance=self.instance,\n personal_code=self.cleaned_data[key].upper()\n ).first()\n if not enrollment:\n self.add_error(key, _('ERROR_CODE_NOT_RECOGNIZED'))\n elif enrollment.user_profile in self.member_profiles:\n self.add_error(key, _('ERROR_USER_ALREADY_IN_GROUP'))\n else:\n self.member_profiles.append(enrollment.user_profile)\n\n if not self.errors and len(self.member_profiles) > 1:\n if StudentGroup.get_exact(self.instance, self.member_profiles):\n self.add_error(None, _('ERROR_GROUP_ALREADY_EXISTS'))\n\n return self.cleaned_data\n\n def save(self):\n group = StudentGroup(course_instance=self.instance)\n group.save()\n group.members.add(*self.member_profiles)\n return group\n\n\nclass GroupSelectForm(forms.Form):\n group = forms.IntegerField(required=True)\n\n def __init__(self, *args, **kwargs):\n self.profile = kwargs.pop('profile')\n self.instance = kwargs.pop('instance')\n super().__init__(*args, **kwargs)\n\n def clean(self):\n super().clean()\n self.selected_group = None\n if 'group' in self.cleaned_data:\n gid = self.cleaned_data['group']\n if gid != 0:\n group = self.profile.groups.filter(id=gid, course_instance=self.instance).first()\n if group:\n self.selected_group = group\n else:\n self.add_error('group', 'Invalid group id')\n return self.cleaned_data\n\n def save(self) -> Enrollment:\n enrollment = self.instance.get_enrollment_for(self.profile.user)\n enrollment.selected_group = self.selected_group\n enrollment.save()\n # Deactivate all drafts when changing groups.\n SubmissionDraft.objects.filter(\n exercise__course_module__course_instance=self.instance,\n submitter=self.profile,\n active=True,\n ).update(active=False)\n return enrollment\n\n\nclass GroupEditForm(forms.ModelForm):\n\n members = UsersSearchSelectField(queryset=UserProfile.objects.none(),\n initial_queryset=UserProfile.objects.none(),\n label=_('LABEL_MEMBERS'),\n )\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n course_instance = 
kwargs.get('instance').course_instance\n super().__init__(*args, **kwargs)\n self.fields['members'].widget.search_api_url = api_reverse(\n \"course-students-list\",\n kwargs={'course_id': course_instance.id},\n )\n self.fields[\"members\"].queryset = course_instance.get_student_profiles()\n # Course staff may use this form for modifying and creating student groups.\n # If an existing group is being modified, its current members must be\n # set to the initial queryset.\n if self.instance.id:\n self.fields[\"members\"].initial_queryset = self.instance.members.all()\n\n class Meta:\n model = StudentGroup\n fields = ['members']\n\n\nclass EnrollStudentsForm(forms.Form):\n\n user_profiles = UsersSearchSelectField(queryset=UserProfile.objects.all(),\n initial_queryset=UserProfile.objects.none(),\n label=_('LABEL_USERS'),\n required=False,\n )\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n self.instance = kwargs.pop('instance')\n super().__init__(*args, **kwargs)\n self.fields['user_profiles'].widget.search_api_url = api_reverse(\"user-list\")\n if self.instance.sis_id:\n self.fields['sis'] = forms.BooleanField(\n required=False,\n label=_('LABEL_ENROLL_FROM_SIS'),\n )\n", "path": "course/forms.py"}]} | 2,042 | 399 |
gh_patches_debug_24936 | rasdani/github-patches | git_diff | DDMAL__CantusDB-733 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Links to unpublished sources should not appear on Provenance detail pages
Example: visit http://206.12.93.196/provenance/3665 (while logged out), click on first link. We get a 403 Forbidden error, since the source is unpublished.
Unpublished sources should not be listed on the Provenance Detail page.
Credit to @zhannaklimanova and her link checker script for catching this bug!
</issue>
<code>
[start of django/cantusdb_project/main_app/views/provenance.py]
1 from django.views.generic import DetailView
2 from main_app.models import Provenance
3
4
5 class ProvenanceDetailView(DetailView):
6 model = Provenance
7 context_object_name = "provenance"
8 template_name = "provenance_detail.html"
9
[end of django/cantusdb_project/main_app/views/provenance.py]
[start of django/cantusdb_project/main_app/views/century.py]
1 from django.views.generic import DetailView
2 from main_app.models import Century, Source
3 from typing import Any
4
5
6 class CenturyDetailView(DetailView):
7 model = Century
8 context_object_name = "century"
9 template_name = "century_detail.html"
10
11 def get_context_data(self, **kwargs: Any) -> dict[str, Any]:
12 context = super().get_context_data(**kwargs)
13 century = self.get_object()
14 user = self.request.user
15 display_unpublished = user.is_authenticated
16 sources = Source.objects.filter(century=century)
17 if not display_unpublished:
18 sources = sources.filter(published=True)
19 sources = sources.only("title", "id")
20 context["sources"] = sources
21 return context
22
[end of django/cantusdb_project/main_app/views/century.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/views/century.py b/django/cantusdb_project/main_app/views/century.py
--- a/django/cantusdb_project/main_app/views/century.py
+++ b/django/cantusdb_project/main_app/views/century.py
@@ -16,6 +16,6 @@
sources = Source.objects.filter(century=century)
if not display_unpublished:
sources = sources.filter(published=True)
- sources = sources.only("title", "id")
+ sources = sources.only("title", "id", "siglum")
context["sources"] = sources
return context
diff --git a/django/cantusdb_project/main_app/views/provenance.py b/django/cantusdb_project/main_app/views/provenance.py
--- a/django/cantusdb_project/main_app/views/provenance.py
+++ b/django/cantusdb_project/main_app/views/provenance.py
@@ -1,8 +1,21 @@
from django.views.generic import DetailView
-from main_app.models import Provenance
+from main_app.models import Provenance, Source
+from typing import Any
class ProvenanceDetailView(DetailView):
model = Provenance
context_object_name = "provenance"
template_name = "provenance_detail.html"
+
+ def get_context_data(self, **kwargs: Any) -> dict[str, Any]:
+ context = super().get_context_data(**kwargs)
+ provenance = self.get_object()
+ user = self.request.user
+ display_unpublished = user.is_authenticated
+ sources = Source.objects.filter(provenance=provenance)
+ if not display_unpublished:
+ sources = sources.filter(published=True)
+ sources = sources.only("title", "id", "siglum")
+ context["sources"] = sources
+ return context
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/century.py b/django/cantusdb_project/main_app/views/century.py\n--- a/django/cantusdb_project/main_app/views/century.py\n+++ b/django/cantusdb_project/main_app/views/century.py\n@@ -16,6 +16,6 @@\n sources = Source.objects.filter(century=century)\n if not display_unpublished:\n sources = sources.filter(published=True)\n- sources = sources.only(\"title\", \"id\")\n+ sources = sources.only(\"title\", \"id\", \"siglum\")\n context[\"sources\"] = sources\n return context\ndiff --git a/django/cantusdb_project/main_app/views/provenance.py b/django/cantusdb_project/main_app/views/provenance.py\n--- a/django/cantusdb_project/main_app/views/provenance.py\n+++ b/django/cantusdb_project/main_app/views/provenance.py\n@@ -1,8 +1,21 @@\n from django.views.generic import DetailView\n-from main_app.models import Provenance\n+from main_app.models import Provenance, Source\n+from typing import Any\n \n \n class ProvenanceDetailView(DetailView):\n model = Provenance\n context_object_name = \"provenance\"\n template_name = \"provenance_detail.html\"\n+\n+ def get_context_data(self, **kwargs: Any) -> dict[str, Any]:\n+ context = super().get_context_data(**kwargs)\n+ provenance = self.get_object()\n+ user = self.request.user\n+ display_unpublished = user.is_authenticated\n+ sources = Source.objects.filter(provenance=provenance)\n+ if not display_unpublished:\n+ sources = sources.filter(published=True)\n+ sources = sources.only(\"title\", \"id\", \"siglum\")\n+ context[\"sources\"] = sources\n+ return context\n", "issue": "Links to unpublished sources should not appear on Provenance detail pages\nExample: visit http://206.12.93.196/provenance/3665 (while logged out), click on first link. We get a 403 Forbidden error, since the source is unpublished.\r\n\r\nUnpublished sources should not be listed on the Provenance Detail page.\r\n\r\nCredit to @zhannaklimanova and her link checker script for catching this bug!\n", "before_files": [{"content": "from django.views.generic import DetailView\nfrom main_app.models import Provenance\n\n\nclass ProvenanceDetailView(DetailView):\n model = Provenance\n context_object_name = \"provenance\"\n template_name = \"provenance_detail.html\"\n", "path": "django/cantusdb_project/main_app/views/provenance.py"}, {"content": "from django.views.generic import DetailView\nfrom main_app.models import Century, Source\nfrom typing import Any\n\n\nclass CenturyDetailView(DetailView):\n model = Century\n context_object_name = \"century\"\n template_name = \"century_detail.html\"\n\n def get_context_data(self, **kwargs: Any) -> dict[str, Any]:\n context = super().get_context_data(**kwargs)\n century = self.get_object()\n user = self.request.user\n display_unpublished = user.is_authenticated\n sources = Source.objects.filter(century=century)\n if not display_unpublished:\n sources = sources.filter(published=True)\n sources = sources.only(\"title\", \"id\")\n context[\"sources\"] = sources\n return context\n", "path": "django/cantusdb_project/main_app/views/century.py"}]} | 943 | 425 |
gh_patches_debug_39481 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3317 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider lowes is broken
During the global build at 2021-06-02-14-42-40, spider **lowes** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/lowes.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson))
</issue>
<code>
[start of locations/spiders/lowes.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import re
4 import json
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 day_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',
10 'Sunday': 'Su'}
11
12
13 class LowesSpider(scrapy.Spider):
14 """"This spider scrapes Lowes retail store locations"""
15 name = "lowes"
16 item_attributes = { 'brand': "Lowe's", 'brand_wikidata': "Q1373493" }
17 allowed_domains = ["lowes.com"]
18 start_urls = ('https://www.lowes.com/Lowes-Stores',)
19 download_delay = 0.5
20
21 custom_settings = {
22 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
23 }
24
25 def parse_hours(self, store_hours):
26 opening_hours = OpeningHours()
27
28 for weekday in store_hours:
29 day = weekday.get('day').get('day')
30 open_time = weekday.get('day').get('open')
31 hour, minute, sec = open_time.split('.')
32 open_time_formatted = hour + ':' + minute
33
34 close = weekday.get('day').get('close')
35 hour, minute, sec = close.split('.')
36 close_time_formatted = hour + ':' + minute
37
38 if close_time_formatted in {'00:00', '24:00'}:
39 close_time_formatted = "23:59"
40
41 opening_hours.add_range(day=day_mapping[day],
42 open_time=open_time_formatted,
43 close_time=close_time_formatted)
44
45 return opening_hours.as_opening_hours()
46
47 def parse_store(self, response):
48 ref = re.search(r'.+/(.+)', response.url).group(1)
49
50 script_content = response.xpath('//script[contains(text(),"storeHours")]/text()').extract_first()
51 if not script_content:
52 return
53
54 # effectively strip off leading "window.__PRELOADED_STATE__ = " where
55 # the rest is a json blob
56 script_data = script_content.split(" = ", 1)[-1]
57 json_data = json.loads(script_data)
58 store_hours = json_data.get('storeHours')
59
60 state_texts = response.xpath('//span[@itemprop="addressRegion"]/text()').extract()
61 properties = {
62 'lat': float(json_data['storeDetails']['lat']),
63 'lon': float(json_data['storeDetails']['long']),
64 'ref': ref,
65 'addr_full': response.xpath('normalize-space(//span[@itemprop="streetAddress"]/text())').extract_first(),
66 'city': response.xpath('normalize-space(//span[@itemprop="addressLocality"]/text())').extract_first(),
67 'state': " ".join(text.strip() for text in state_texts if text.strip()),
68 'postcode': response.xpath('normalize-space(//span[@itemprop="postalCode"]/text())').extract_first(),
69 'phone': response.xpath('normalize-space(//meta[@itemprop="telephone"]/@content)').extract_first(),
70 'website': response.request.url,
71 'opening_hours': self.parse_hours(store_hours),
72 'extras': {
73 'amenity:toilets': True,
74 },
75 }
76
77 yield GeojsonPointItem(**properties)
78
79 def parse_state(self, response):
80 city_urls = response.xpath('//div[@class="v-spacing-small"]/a/@href').extract()
81 for path in city_urls:
82 yield scrapy.Request(response.urljoin(path), callback=self.parse_store)
83
84 def parse(self, response):
85 urls = response.xpath('//div[@id="mainContent"]//li[@role="listitem"]/a/@href').extract()
86 for path in urls:
87 yield scrapy.Request(response.urljoin(path), callback=self.parse_state)
88
[end of locations/spiders/lowes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/lowes.py b/locations/spiders/lowes.py
--- a/locations/spiders/lowes.py
+++ b/locations/spiders/lowes.py
@@ -6,16 +6,23 @@
from locations.hours import OpeningHours
-day_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',
- 'Sunday': 'Su'}
+day_mapping = {
+ 'Monday': 'Mo',
+ 'Tuesday': 'Tu',
+ 'Wednesday': 'We',
+ 'Thursday': 'Th',
+ 'Friday': 'Fr',
+ 'Saturday': 'Sa',
+ 'Sunday': 'Su',
+}
class LowesSpider(scrapy.Spider):
""""This spider scrapes Lowes retail store locations"""
name = "lowes"
- item_attributes = { 'brand': "Lowe's", 'brand_wikidata': "Q1373493" }
+ item_attributes = {'brand': "Lowe's", 'brand_wikidata': "Q1373493"}
allowed_domains = ["lowes.com"]
- start_urls = ('https://www.lowes.com/Lowes-Stores',)
+ start_urls = ('https://www.lowes.com/sitemap/store0.xml',)
download_delay = 0.5
custom_settings = {
@@ -59,14 +66,14 @@
state_texts = response.xpath('//span[@itemprop="addressRegion"]/text()').extract()
properties = {
- 'lat': float(json_data['storeDetails']['lat']),
- 'lon': float(json_data['storeDetails']['long']),
- 'ref': ref,
- 'addr_full': response.xpath('normalize-space(//span[@itemprop="streetAddress"]/text())').extract_first(),
- 'city': response.xpath('normalize-space(//span[@itemprop="addressLocality"]/text())').extract_first(),
- 'state': " ".join(text.strip() for text in state_texts if text.strip()),
- 'postcode': response.xpath('normalize-space(//span[@itemprop="postalCode"]/text())').extract_first(),
- 'phone': response.xpath('normalize-space(//meta[@itemprop="telephone"]/@content)').extract_first(),
+ 'lat': json_data['storeDetails']['lat'],
+ 'lon': json_data['storeDetails']['long'],
+ 'ref': json_data['storeDetails']['id'],
+ 'addr_full': json_data['storeDetails']['address'],
+ 'city': json_data['storeDetails']['city'],
+ 'state': json_data['storeDetails']['state'],
+ 'postcode': json_data['storeDetails']['zip'],
+ 'phone': json_data['storeDetails']['phone'],
'website': response.request.url,
'opening_hours': self.parse_hours(store_hours),
'extras': {
@@ -76,12 +83,9 @@
yield GeojsonPointItem(**properties)
- def parse_state(self, response):
- city_urls = response.xpath('//div[@class="v-spacing-small"]/a/@href').extract()
- for path in city_urls:
- yield scrapy.Request(response.urljoin(path), callback=self.parse_store)
-
def parse(self, response):
- urls = response.xpath('//div[@id="mainContent"]//li[@role="listitem"]/a/@href').extract()
- for path in urls:
- yield scrapy.Request(response.urljoin(path), callback=self.parse_state)
+ response.selector.remove_namespaces()
+ urls = response.xpath('//url/loc/text()').extract()
+
+ for url in urls:
+ yield scrapy.Request(url, callback=self.parse_store)
| {"golden_diff": "diff --git a/locations/spiders/lowes.py b/locations/spiders/lowes.py\n--- a/locations/spiders/lowes.py\n+++ b/locations/spiders/lowes.py\n@@ -6,16 +6,23 @@\n from locations.hours import OpeningHours\n \n \n-day_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',\n- 'Sunday': 'Su'}\n+day_mapping = {\n+ 'Monday': 'Mo',\n+ 'Tuesday': 'Tu',\n+ 'Wednesday': 'We',\n+ 'Thursday': 'Th',\n+ 'Friday': 'Fr',\n+ 'Saturday': 'Sa',\n+ 'Sunday': 'Su',\n+}\n \n \n class LowesSpider(scrapy.Spider):\n \"\"\"\"This spider scrapes Lowes retail store locations\"\"\"\n name = \"lowes\"\n- item_attributes = { 'brand': \"Lowe's\", 'brand_wikidata': \"Q1373493\" }\n+ item_attributes = {'brand': \"Lowe's\", 'brand_wikidata': \"Q1373493\"}\n allowed_domains = [\"lowes.com\"]\n- start_urls = ('https://www.lowes.com/Lowes-Stores',)\n+ start_urls = ('https://www.lowes.com/sitemap/store0.xml',)\n download_delay = 0.5\n \n custom_settings = {\n@@ -59,14 +66,14 @@\n \n state_texts = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract()\n properties = {\n- 'lat': float(json_data['storeDetails']['lat']),\n- 'lon': float(json_data['storeDetails']['long']),\n- 'ref': ref,\n- 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/text())').extract_first(),\n- 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first(),\n- 'state': \" \".join(text.strip() for text in state_texts if text.strip()),\n- 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first(),\n- 'phone': response.xpath('normalize-space(//meta[@itemprop=\"telephone\"]/@content)').extract_first(),\n+ 'lat': json_data['storeDetails']['lat'],\n+ 'lon': json_data['storeDetails']['long'],\n+ 'ref': json_data['storeDetails']['id'],\n+ 'addr_full': json_data['storeDetails']['address'],\n+ 'city': json_data['storeDetails']['city'],\n+ 'state': json_data['storeDetails']['state'],\n+ 'postcode': json_data['storeDetails']['zip'],\n+ 'phone': json_data['storeDetails']['phone'],\n 'website': response.request.url,\n 'opening_hours': self.parse_hours(store_hours),\n 'extras': {\n@@ -76,12 +83,9 @@\n \n yield GeojsonPointItem(**properties)\n \n- def parse_state(self, response):\n- city_urls = response.xpath('//div[@class=\"v-spacing-small\"]/a/@href').extract()\n- for path in city_urls:\n- yield scrapy.Request(response.urljoin(path), callback=self.parse_store)\n-\n def parse(self, response):\n- urls = response.xpath('//div[@id=\"mainContent\"]//li[@role=\"listitem\"]/a/@href').extract()\n- for path in urls:\n- yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n+ response.selector.remove_namespaces()\n+ urls = response.xpath('//url/loc/text()').extract()\n+\n+ for url in urls:\n+ yield scrapy.Request(url, callback=self.parse_store)\n", "issue": "Spider lowes is broken\nDuring the global build at 2021-06-02-14-42-40, spider **lowes** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/lowes.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\nimport json\nfrom locations.items import GeojsonPointItem\nfrom 
locations.hours import OpeningHours\n\n\nday_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',\n 'Sunday': 'Su'}\n\n\nclass LowesSpider(scrapy.Spider):\n \"\"\"\"This spider scrapes Lowes retail store locations\"\"\"\n name = \"lowes\"\n item_attributes = { 'brand': \"Lowe's\", 'brand_wikidata': \"Q1373493\" }\n allowed_domains = [\"lowes.com\"]\n start_urls = ('https://www.lowes.com/Lowes-Stores',)\n download_delay = 0.5\n\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n\n for weekday in store_hours:\n day = weekday.get('day').get('day')\n open_time = weekday.get('day').get('open')\n hour, minute, sec = open_time.split('.')\n open_time_formatted = hour + ':' + minute\n\n close = weekday.get('day').get('close')\n hour, minute, sec = close.split('.')\n close_time_formatted = hour + ':' + minute\n\n if close_time_formatted in {'00:00', '24:00'}:\n close_time_formatted = \"23:59\"\n\n opening_hours.add_range(day=day_mapping[day],\n open_time=open_time_formatted,\n close_time=close_time_formatted)\n\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n ref = re.search(r'.+/(.+)', response.url).group(1)\n\n script_content = response.xpath('//script[contains(text(),\"storeHours\")]/text()').extract_first()\n if not script_content:\n return\n\n # effectively strip off leading \"window.__PRELOADED_STATE__ = \" where\n # the rest is a json blob\n script_data = script_content.split(\" = \", 1)[-1]\n json_data = json.loads(script_data)\n store_hours = json_data.get('storeHours')\n\n state_texts = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract()\n properties = {\n 'lat': float(json_data['storeDetails']['lat']),\n 'lon': float(json_data['storeDetails']['long']),\n 'ref': ref,\n 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/text())').extract_first(),\n 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first(),\n 'state': \" \".join(text.strip() for text in state_texts if text.strip()),\n 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first(),\n 'phone': response.xpath('normalize-space(//meta[@itemprop=\"telephone\"]/@content)').extract_first(),\n 'website': response.request.url,\n 'opening_hours': self.parse_hours(store_hours),\n 'extras': {\n 'amenity:toilets': True,\n },\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_state(self, response):\n city_urls = response.xpath('//div[@class=\"v-spacing-small\"]/a/@href').extract()\n for path in city_urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_store)\n\n def parse(self, response):\n urls = response.xpath('//div[@id=\"mainContent\"]//li[@role=\"listitem\"]/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n", "path": "locations/spiders/lowes.py"}]} | 1,765 | 830 |
gh_patches_debug_28882 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-1391 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WYSIWYG for static pages
Client-side WYSIWYG:
- http://sofish.github.io/pen/
- https://github.com/mduvall/grande.js
- http://imperavi.com/redactor/
- https://github.com/tholman/zenpen
</issue>
<code>
[start of geotrek/flatpages/views.py]
1 from rest_framework import viewsets
2
3 from geotrek.flatpages.serializers import FlatPageSerializer
4 from geotrek.flatpages import models as flatpages_models
5
6
7 class FlatPageViewSet(viewsets.ModelViewSet):
8 """
9 A viewset for viewing and editing flat pages instances.
10 """
11 serializer_class = FlatPageSerializer
12 queryset = flatpages_models.FlatPage.objects.all()
13
[end of geotrek/flatpages/views.py]
[start of geotrek/flatpages/admin.py]
1 from django.contrib import admin
2 from django.conf import settings
3
4 from modeltranslation.admin import TranslationAdmin
5
6 from geotrek.flatpages import models as flatpages_models
7
8
9 class FlatPagesAdmin(TranslationAdmin):
10 list_display = ('title', 'published', 'publication_date', 'target')
11 search_fields = ('title', 'content')
12
13
14 if settings.FLATPAGES_ENABLED:
15 admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)
16
[end of geotrek/flatpages/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/geotrek/flatpages/admin.py b/geotrek/flatpages/admin.py
--- a/geotrek/flatpages/admin.py
+++ b/geotrek/flatpages/admin.py
@@ -2,6 +2,7 @@
from django.conf import settings
from modeltranslation.admin import TranslationAdmin
+from tinymce.widgets import TinyMCE
from geotrek.flatpages import models as flatpages_models
@@ -10,6 +11,11 @@
list_display = ('title', 'published', 'publication_date', 'target')
search_fields = ('title', 'content')
+ def formfield_for_dbfield(self, db_field, **kwargs):
+ if db_field.name[:7] == 'content':
+ return db_field.formfield(widget=TinyMCE)
+ return super(FlatPagesAdmin, self).formfield_for_dbfield(db_field, **kwargs)
+
if settings.FLATPAGES_ENABLED:
admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)
diff --git a/geotrek/flatpages/views.py b/geotrek/flatpages/views.py
--- a/geotrek/flatpages/views.py
+++ b/geotrek/flatpages/views.py
@@ -1,3 +1,4 @@
+from rest_framework import permissions as rest_permissions
from rest_framework import viewsets
from geotrek.flatpages.serializers import FlatPageSerializer
@@ -8,5 +9,9 @@
"""
A viewset for viewing and editing flat pages instances.
"""
+ model = flatpages_models.FlatPage
serializer_class = FlatPageSerializer
- queryset = flatpages_models.FlatPage.objects.all()
+ permission_classes = [rest_permissions.DjangoModelPermissionsOrAnonReadOnly]
+
+ def get_queryset(self):
+ return flatpages_models.FlatPage.objects.filter(published=True)
| {"golden_diff": "diff --git a/geotrek/flatpages/admin.py b/geotrek/flatpages/admin.py\n--- a/geotrek/flatpages/admin.py\n+++ b/geotrek/flatpages/admin.py\n@@ -2,6 +2,7 @@\n from django.conf import settings\n \n from modeltranslation.admin import TranslationAdmin\n+from tinymce.widgets import TinyMCE\n \n from geotrek.flatpages import models as flatpages_models\n \n@@ -10,6 +11,11 @@\n list_display = ('title', 'published', 'publication_date', 'target')\n search_fields = ('title', 'content')\n \n+ def formfield_for_dbfield(self, db_field, **kwargs):\n+ if db_field.name[:7] == 'content':\n+ return db_field.formfield(widget=TinyMCE)\n+ return super(FlatPagesAdmin, self).formfield_for_dbfield(db_field, **kwargs)\n+\n \n if settings.FLATPAGES_ENABLED:\n admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)\ndiff --git a/geotrek/flatpages/views.py b/geotrek/flatpages/views.py\n--- a/geotrek/flatpages/views.py\n+++ b/geotrek/flatpages/views.py\n@@ -1,3 +1,4 @@\n+from rest_framework import permissions as rest_permissions\n from rest_framework import viewsets\n \n from geotrek.flatpages.serializers import FlatPageSerializer\n@@ -8,5 +9,9 @@\n \"\"\"\n A viewset for viewing and editing flat pages instances.\n \"\"\"\n+ model = flatpages_models.FlatPage\n serializer_class = FlatPageSerializer\n- queryset = flatpages_models.FlatPage.objects.all()\n+ permission_classes = [rest_permissions.DjangoModelPermissionsOrAnonReadOnly]\n+\n+ def get_queryset(self):\n+ return flatpages_models.FlatPage.objects.filter(published=True)\n", "issue": "WYSIWYG for static pages\nClient-side WYSIWYG : \n- http://sofish.github.io/pen/\n- https://github.com/mduvall/grande.js\n- http://imperavi.com/redactor/\n- https://github.com/tholman/zenpen\n\n", "before_files": [{"content": "from rest_framework import viewsets\n\nfrom geotrek.flatpages.serializers import FlatPageSerializer\nfrom geotrek.flatpages import models as flatpages_models\n\n\nclass FlatPageViewSet(viewsets.ModelViewSet):\n \"\"\"\n A viewset for viewing and editing flat pages instances.\n \"\"\"\n serializer_class = FlatPageSerializer\n queryset = flatpages_models.FlatPage.objects.all()\n", "path": "geotrek/flatpages/views.py"}, {"content": "from django.contrib import admin\nfrom django.conf import settings\n\nfrom modeltranslation.admin import TranslationAdmin\n\nfrom geotrek.flatpages import models as flatpages_models\n\n\nclass FlatPagesAdmin(TranslationAdmin):\n list_display = ('title', 'published', 'publication_date', 'target')\n search_fields = ('title', 'content')\n\n\nif settings.FLATPAGES_ENABLED:\n admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)\n", "path": "geotrek/flatpages/admin.py"}]} | 842 | 400 |
gh_patches_debug_7795 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-4076 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
azure - unpin EventGrid SDK version
We need AdvancedFilters to be added to the stable version.
https://pypi.org/project/azure-mgmt-eventgrid/
</issue>
<code>
[start of tools/c7n_azure/setup.py]
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from io import open
16 from os import path
17 from setuptools import setup, find_packages
18
19 # read the contents of your README file
20 this_directory = path.abspath(path.dirname(__file__))
21 readme = path.join(this_directory, 'readme.md')
22 long_description = ''
23 if path.exists(readme):
24 with open(readme, encoding='utf-8') as f:
25 long_description = f.read()
26
27 setup(
28 name="c7n_azure",
29 version='0.5.3',
30 description="Cloud Custodian - Azure Support",
31 long_description=long_description,
32 long_description_content_type='text/markdown',
33 classifiers=[
34 "Topic :: System :: Systems Administration",
35 "Topic :: System :: Distributed Computing"
36 ],
37 url="https://github.com/cloud-custodian/cloud-custodian",
38 license="Apache-2.0",
39 packages=find_packages(),
40 entry_points={
41 "custodian.resources": [
42 'azure = c7n_azure.entry:initialize_azure']
43 },
44 install_requires=["azure-mgmt-authorization",
45 "azure-mgmt-applicationinsights==0.1.1",
46 "azure-mgmt-batch",
47 "azure-mgmt-cognitiveservices",
48 "azure-mgmt-cosmosdb",
49 "azure-mgmt-compute",
50 "azure-mgmt-cdn",
51 "azure-mgmt-containerregistry",
52 "azure-mgmt-containerservice",
53 "azure-mgmt-datalake-store",
54 "azure-mgmt-datafactory",
55 "azure-mgmt-iothub",
56 "azure-mgmt-keyvault",
57 "azure-mgmt-managementgroups",
58 "azure-mgmt-network",
59 "azure-mgmt-redis",
60 "azure-mgmt-resource==2.1.0",
61 "azure-mgmt-sql",
62 "azure-mgmt-storage",
63 "azure-mgmt-web",
64 "azure-mgmt-monitor",
65 "azure-mgmt-policyinsights",
66 "azure-mgmt-eventgrid==2.0.0rc2", # RC2 supports AdvancedFilters
67 "azure-graphrbac",
68 "azure-keyvault",
69 "azure-storage-blob",
70 "azure-storage-queue",
71 "distlib",
72 "requests",
73 "PyJWT",
74 "c7n",
75 "requests",
76 "azure-cli-core",
77 "adal",
78 "backports.functools_lru_cache",
79 "futures>=3.1.1",
80 "netaddr"],
81 package_data={str(''): [str('function_binding_resources/bin/*.dll'),
82 str('function_binding_resources/*.csproj'),
83 str('function_binding_resources/bin/*.json')]}
84 )
85
[end of tools/c7n_azure/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/c7n_azure/setup.py b/tools/c7n_azure/setup.py
--- a/tools/c7n_azure/setup.py
+++ b/tools/c7n_azure/setup.py
@@ -63,7 +63,7 @@
"azure-mgmt-web",
"azure-mgmt-monitor",
"azure-mgmt-policyinsights",
- "azure-mgmt-eventgrid==2.0.0rc2", # RC2 supports AdvancedFilters
+ "azure-mgmt-eventgrid",
"azure-graphrbac",
"azure-keyvault",
"azure-storage-blob",
| {"golden_diff": "diff --git a/tools/c7n_azure/setup.py b/tools/c7n_azure/setup.py\n--- a/tools/c7n_azure/setup.py\n+++ b/tools/c7n_azure/setup.py\n@@ -63,7 +63,7 @@\n \"azure-mgmt-web\",\n \"azure-mgmt-monitor\",\n \"azure-mgmt-policyinsights\",\n- \"azure-mgmt-eventgrid==2.0.0rc2\", # RC2 supports AdvancedFilters\n+ \"azure-mgmt-eventgrid\",\n \"azure-graphrbac\",\n \"azure-keyvault\",\n \"azure-storage-blob\",\n", "issue": "azure - unpinn EventGrid SDK version\nWe need AdvancedFilters to be added to the stable version.\r\n\r\nhttps://pypi.org/project/azure-mgmt-eventgrid/\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom io import open\nfrom os import path\nfrom setuptools import setup, find_packages\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nreadme = path.join(this_directory, 'readme.md')\nlong_description = ''\nif path.exists(readme):\n with open(readme, encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name=\"c7n_azure\",\n version='0.5.3',\n description=\"Cloud Custodian - Azure Support\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n classifiers=[\n \"Topic :: System :: Systems Administration\",\n \"Topic :: System :: Distributed Computing\"\n ],\n url=\"https://github.com/cloud-custodian/cloud-custodian\",\n license=\"Apache-2.0\",\n packages=find_packages(),\n entry_points={\n \"custodian.resources\": [\n 'azure = c7n_azure.entry:initialize_azure']\n },\n install_requires=[\"azure-mgmt-authorization\",\n \"azure-mgmt-applicationinsights==0.1.1\",\n \"azure-mgmt-batch\",\n \"azure-mgmt-cognitiveservices\",\n \"azure-mgmt-cosmosdb\",\n \"azure-mgmt-compute\",\n \"azure-mgmt-cdn\",\n \"azure-mgmt-containerregistry\",\n \"azure-mgmt-containerservice\",\n \"azure-mgmt-datalake-store\",\n \"azure-mgmt-datafactory\",\n \"azure-mgmt-iothub\",\n \"azure-mgmt-keyvault\",\n \"azure-mgmt-managementgroups\",\n \"azure-mgmt-network\",\n \"azure-mgmt-redis\",\n \"azure-mgmt-resource==2.1.0\",\n \"azure-mgmt-sql\",\n \"azure-mgmt-storage\",\n \"azure-mgmt-web\",\n \"azure-mgmt-monitor\",\n \"azure-mgmt-policyinsights\",\n \"azure-mgmt-eventgrid==2.0.0rc2\", # RC2 supports AdvancedFilters\n \"azure-graphrbac\",\n \"azure-keyvault\",\n \"azure-storage-blob\",\n \"azure-storage-queue\",\n \"distlib\",\n \"requests\",\n \"PyJWT\",\n \"c7n\",\n \"requests\",\n \"azure-cli-core\",\n \"adal\",\n \"backports.functools_lru_cache\",\n \"futures>=3.1.1\",\n \"netaddr\"],\n package_data={str(''): [str('function_binding_resources/bin/*.dll'),\n str('function_binding_resources/*.csproj'),\n str('function_binding_resources/bin/*.json')]}\n)\n", "path": "tools/c7n_azure/setup.py"}]} | 1,426 | 133 |
gh_patches_debug_41534 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-8855 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-3107] [Bug] nested dependencies not installed when package is a tarball
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
when running `dbt deps` to install a package specified as a tarball, dbt doesn't install nested dependencies (i.e. packages specified in the imported package's `packages.yml` file) as it does when installing a package from local, git or the dbt hub.
### Expected Behavior
consistent behaviour across import methods regarding nested dependencies. dbt should install any dependencies specified in the tarball project's packages.yml file.
### Steps To Reproduce
this can be reproduced by importing the tarball of a package with nested dependencies. In this case, importing dbt_expectations should cause dbt_date to be installed, as it's included in the package's dependencies here: https://github.com/calogica/dbt-expectations/blob/0.9.0/packages.yml
Steps:
1. create a `packages.yml` file in a project with the following structure:
``` yaml
packages:
- tarball: "https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz"
name: "dbt_expectations"
```
2. run `dbt deps`
running dbt deps will only install dbt_expectations:
```
20:08:55 Running with dbt=1.5.6
20:08:55 Installing dbt_expectations
20:08:56 Installed from tarball (url: https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz)
```
compare this to installing the same package from dbt hub, with the following `packages.yml`:
``` yaml
packages:
- package: calogica/dbt_expectations
version: "0.9.0"
```
```
20:14:24 Running with dbt=1.5.6
20:14:24 Installing calogica/dbt_expectations
20:14:25 Installed from version 0.9.0
20:14:25 Up to date!
20:14:25 Installing calogica/dbt_date
20:14:25 Installed from version 0.8.1
20:14:25 Updated version available: 0.9.1
20:14:25
20:14:25 Updates available for packages: ['calogica/dbt_date']
Update your versions in packages.yml, then run dbt deps
```
### Relevant log output
_No response_
### Environment
```markdown
- OS: Mac OS 13.5.2 (22G91)
- Python: 3.9
- dbt: 1.5.6
```
### Which database adapter are you using with dbt?
snowflake
### Additional Context
_No response_
</issue>
<code>
[start of core/dbt/deps/tarball.py]
1 from typing import Dict
2
3 from dbt.contracts.project import RegistryPackageMetadata, TarballPackage
4 from dbt.deps.base import PinnedPackage, UnpinnedPackage
5
6
7 class TarballPackageMixin:
8 def __init__(self, tarball: str) -> None:
9 super().__init__()
10 self.tarball = tarball
11
12 @property
13 def name(self):
14 return self.tarball
15
16 def source_type(self) -> str:
17 return "tarball"
18
19
20 class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):
21 def __init__(self, tarball: str, package: str) -> None:
22 super().__init__(tarball)
23 # setup to recycle RegistryPinnedPackage fns
24 self.package = package
25 self.version = "tarball"
26
27 @property
28 def name(self):
29 return self.package
30
31 def to_dict(self) -> Dict[str, str]:
32 return {
33 "tarball": self.tarball,
34 "version": self.version,
35 "package": self.package,
36 }
37
38 def get_version(self):
39 return self.version
40
41 def nice_version_name(self):
42 return f"tarball (url: {self.tarball})"
43
44 def _fetch_metadata(self, project, renderer):
45 """
46 recycle RegistryPackageMetadata so that we can use the install and
47 download_and_untar from RegistryPinnedPackage next.
48 build RegistryPackageMetadata from info passed via packages.yml since no
49 'metadata' service exists in this case.
50 """
51
52 dct = {
53 "name": self.package,
54 "packages": [], # note: required by RegistryPackageMetadata
55 "downloads": {"tarball": self.tarball},
56 }
57
58 return RegistryPackageMetadata.from_dict(dct)
59
60 def install(self, project, renderer):
61 self._install(project, renderer)
62
63
64 class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):
65 def __init__(
66 self,
67 tarball: str,
68 package: str,
69 ) -> None:
70 super().__init__(tarball)
71 # setup to recycle RegistryPinnedPackage fns
72 self.package = package
73 self.version = "tarball"
74
75 @classmethod
76 def from_contract(cls, contract: TarballPackage) -> "TarballUnpinnedPackage":
77 return cls(tarball=contract.tarball, package=contract.name)
78
79 def incorporate(self, other: "TarballUnpinnedPackage") -> "TarballUnpinnedPackage":
80 return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)
81
82 def resolved(self) -> TarballPinnedPackage:
83 return TarballPinnedPackage(tarball=self.tarball, package=self.package)
84
[end of core/dbt/deps/tarball.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py
--- a/core/dbt/deps/tarball.py
+++ b/core/dbt/deps/tarball.py
@@ -1,7 +1,14 @@
+import functools
+import os
+from pathlib import Path
from typing import Dict
-from dbt.contracts.project import RegistryPackageMetadata, TarballPackage
-from dbt.deps.base import PinnedPackage, UnpinnedPackage
+from dbt.clients import system
+from dbt.config.project import PartialProject
+from dbt.contracts.project import TarballPackage
+from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path
+from dbt.exceptions import DependencyError
+from dbt.utils import _connection_exception_retry as connection_exception_retry
class TarballPackageMixin:
@@ -20,9 +27,10 @@
class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):
def __init__(self, tarball: str, package: str) -> None:
super().__init__(tarball)
- # setup to recycle RegistryPinnedPackage fns
self.package = package
self.version = "tarball"
+ self.tar_path = os.path.join(Path(get_downloads_path()), self.package)
+ self.untarred_path = f"{self.tar_path}_untarred"
@property
def name(self):
@@ -31,8 +39,7 @@
def to_dict(self) -> Dict[str, str]:
return {
"tarball": self.tarball,
- "version": self.version,
- "package": self.package,
+ "name": self.package,
}
def get_version(self):
@@ -42,23 +49,38 @@
return f"tarball (url: {self.tarball})"
def _fetch_metadata(self, project, renderer):
- """
- recycle RegistryPackageMetadata so that we can use the install and
- download_and_untar from RegistryPinnedPackage next.
- build RegistryPackageMetadata from info passed via packages.yml since no
- 'metadata' service exists in this case.
- """
-
- dct = {
- "name": self.package,
- "packages": [], # note: required by RegistryPackageMetadata
- "downloads": {"tarball": self.tarball},
- }
-
- return RegistryPackageMetadata.from_dict(dct)
+ """Download and untar the project and parse metadata from the project folder."""
+ download_untar_fn = functools.partial(
+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name
+ )
+ connection_exception_retry(download_untar_fn, 5)
+
+ tar_contents = os.listdir(self.untarred_path)
+ if len(tar_contents) != 1:
+ raise DependencyError(
+ f"Incorrect structure for package extracted from {self.tarball}."
+ f"The extracted package needs to follow the structure {self.name}/<package_contents>."
+ )
+ child_folder = os.listdir(self.untarred_path)[0]
+
+ self.untarred_path = os.path.join(self.untarred_path, child_folder)
+ partial = PartialProject.from_project_root(self.untarred_path)
+ metadata = partial.render_package_metadata(renderer)
+ metadata.name = self.package if self.package else metadata.name
+ return metadata
def install(self, project, renderer):
- self._install(project, renderer)
+ download_untar_fn = functools.partial(
+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name
+ )
+ connection_exception_retry(download_untar_fn, 5)
+ dest_path = self.get_installation_path(project, renderer)
+ if os.path.exists(dest_path):
+ if system.path_is_symlink(dest_path):
+ system.remove_file(dest_path)
+ else:
+ system.rmdir(dest_path)
+ system.move(self.untarred_path, dest_path)
class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):
| {"golden_diff": "diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py\n--- a/core/dbt/deps/tarball.py\n+++ b/core/dbt/deps/tarball.py\n@@ -1,7 +1,14 @@\n+import functools\n+import os\n+from pathlib import Path\n from typing import Dict\n \n-from dbt.contracts.project import RegistryPackageMetadata, TarballPackage\n-from dbt.deps.base import PinnedPackage, UnpinnedPackage\n+from dbt.clients import system\n+from dbt.config.project import PartialProject\n+from dbt.contracts.project import TarballPackage\n+from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path\n+from dbt.exceptions import DependencyError\n+from dbt.utils import _connection_exception_retry as connection_exception_retry\n \n \n class TarballPackageMixin:\n@@ -20,9 +27,10 @@\n class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n- # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n+ self.tar_path = os.path.join(Path(get_downloads_path()), self.package)\n+ self.untarred_path = f\"{self.tar_path}_untarred\"\n \n @property\n def name(self):\n@@ -31,8 +39,7 @@\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n- \"version\": self.version,\n- \"package\": self.package,\n+ \"name\": self.package,\n }\n \n def get_version(self):\n@@ -42,23 +49,38 @@\n return f\"tarball (url: {self.tarball})\"\n \n def _fetch_metadata(self, project, renderer):\n- \"\"\"\n- recycle RegistryPackageMetadata so that we can use the install and\n- download_and_untar from RegistryPinnedPackage next.\n- build RegistryPackageMetadata from info passed via packages.yml since no\n- 'metadata' service exists in this case.\n- \"\"\"\n-\n- dct = {\n- \"name\": self.package,\n- \"packages\": [], # note: required by RegistryPackageMetadata\n- \"downloads\": {\"tarball\": self.tarball},\n- }\n-\n- return RegistryPackageMetadata.from_dict(dct)\n+ \"\"\"Download and untar the project and parse metadata from the project folder.\"\"\"\n+ download_untar_fn = functools.partial(\n+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n+ )\n+ connection_exception_retry(download_untar_fn, 5)\n+\n+ tar_contents = os.listdir(self.untarred_path)\n+ if len(tar_contents) != 1:\n+ raise DependencyError(\n+ f\"Incorrect structure for package extracted from {self.tarball}.\"\n+ f\"The extracted package needs to follow the structure {self.name}/<package_contents>.\"\n+ )\n+ child_folder = os.listdir(self.untarred_path)[0]\n+\n+ self.untarred_path = os.path.join(self.untarred_path, child_folder)\n+ partial = PartialProject.from_project_root(self.untarred_path)\n+ metadata = partial.render_package_metadata(renderer)\n+ metadata.name = self.package if self.package else metadata.name\n+ return metadata\n \n def install(self, project, renderer):\n- self._install(project, renderer)\n+ download_untar_fn = functools.partial(\n+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n+ )\n+ connection_exception_retry(download_untar_fn, 5)\n+ dest_path = self.get_installation_path(project, renderer)\n+ if os.path.exists(dest_path):\n+ if system.path_is_symlink(dest_path):\n+ system.remove_file(dest_path)\n+ else:\n+ system.rmdir(dest_path)\n+ system.move(self.untarred_path, dest_path)\n \n \n class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n", "issue": "[CT-3107] [Bug] nested dependencies 
not installed when package is a tarball\n### Is this a new bug in dbt-core?\r\n\r\n- [X] I believe this is a new bug in dbt-core\r\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\r\n\r\n### Current Behavior\r\n\r\nwhen running `dbt deps` to install a package specified as a tarball, dbt doesn't install nested dependencies (i.e. packages specified in the imported package's `packages.yml` file) as it does when installing a package from local, git or the dbt hub.\r\n\r\n### Expected Behavior\r\n\r\nconsistent behaviour across import methods regarding nested dependencies. dbt should install any dependencies specified in the tarball project's packages.yml file.\r\n\r\n\r\n### Steps To Reproduce\r\n\r\nthis can be reproduced by importing the tarball of a package with nested dependencies. In this case, importing dbt_expectations should cause dbt_date to be installed, as its included in the package's dependencies here: https://github.com/calogica/dbt-expectations/blob/0.9.0/packages.yml\r\n\r\nSteps:\r\n1. create a `packages.yml` file in a project with the following structure:\r\n``` yaml\r\npackages:\r\n - tarball: \"https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz\"\r\n name: \"dbt_expectations\"\r\n```\r\n2. run `dbt deps`\r\n\r\n\r\nrunning dbt deps will only install dbt_expectations:\r\n```\r\n20:08:55 Running with dbt=1.5.6\r\n20:08:55 Installing dbt_expectations\r\n20:08:56 Installed from tarball (url: https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz)\r\n```\r\ncompare this to installing the same package from dbt hub, with the following `packages.yml`:\r\n``` yaml\r\npackages:\r\n - package: calogica/dbt_expectations\r\n version: \"0.9.0\"\r\n```\r\n```\r\n20:14:24 Running with dbt=1.5.6\r\n20:14:24 Installing calogica/dbt_expectations\r\n20:14:25 Installed from version 0.9.0\r\n20:14:25 Up to date!\r\n20:14:25 Installing calogica/dbt_date\r\n20:14:25 Installed from version 0.8.1\r\n20:14:25 Updated version available: 0.9.1\r\n20:14:25 \r\n20:14:25 Updates available for packages: ['calogica/dbt_date'] \r\nUpdate your versions in packages.yml, then run dbt deps\r\n```\r\n\r\n### Relevant log output\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Mac OS 13.5.2 (22G91)\r\n- Python: 3.9\r\n- dbt: 1.5.6\r\n```\r\n\r\n\r\n### Which database adapter are you using with dbt?\r\n\r\nsnowflake\r\n\r\n### Additional Context\r\n\r\n_No response_\n", "before_files": [{"content": "from typing import Dict\n\nfrom dbt.contracts.project import RegistryPackageMetadata, TarballPackage\nfrom dbt.deps.base import PinnedPackage, UnpinnedPackage\n\n\nclass TarballPackageMixin:\n def __init__(self, tarball: str) -> None:\n super().__init__()\n self.tarball = tarball\n\n @property\n def name(self):\n return self.tarball\n\n def source_type(self) -> str:\n return \"tarball\"\n\n\nclass TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @property\n def name(self):\n return self.package\n\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n \"version\": self.version,\n \"package\": self.package,\n }\n\n def get_version(self):\n return self.version\n\n def nice_version_name(self):\n return f\"tarball (url: {self.tarball})\"\n\n def _fetch_metadata(self, project, renderer):\n 
\"\"\"\n recycle RegistryPackageMetadata so that we can use the install and\n download_and_untar from RegistryPinnedPackage next.\n build RegistryPackageMetadata from info passed via packages.yml since no\n 'metadata' service exists in this case.\n \"\"\"\n\n dct = {\n \"name\": self.package,\n \"packages\": [], # note: required by RegistryPackageMetadata\n \"downloads\": {\"tarball\": self.tarball},\n }\n\n return RegistryPackageMetadata.from_dict(dct)\n\n def install(self, project, renderer):\n self._install(project, renderer)\n\n\nclass TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n def __init__(\n self,\n tarball: str,\n package: str,\n ) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @classmethod\n def from_contract(cls, contract: TarballPackage) -> \"TarballUnpinnedPackage\":\n return cls(tarball=contract.tarball, package=contract.name)\n\n def incorporate(self, other: \"TarballUnpinnedPackage\") -> \"TarballUnpinnedPackage\":\n return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)\n\n def resolved(self) -> TarballPinnedPackage:\n return TarballPinnedPackage(tarball=self.tarball, package=self.package)\n", "path": "core/dbt/deps/tarball.py"}]} | 2,012 | 944 |
gh_patches_debug_12566 | rasdani/github-patches | git_diff | netbox-community__netbox-4303 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IP Prefix Family returned doesn't match swagger definition
### Environment
* Python version: 3.7.6
* NetBox version: v2.7.7
The Prefix.Family value returned by the API does not match the swagger definition.
### Steps to Reproduce
1. Get a prefix object `wget http://netbox/api/ipam/prefixes/210/`
2. Notice object is like
```
"family": {
"value": 4,
"label": "IPv4"
},
```
3. Notice definition is
```
"family": {
"label": "string",
"value": "string"
},
```
<!-- What did you expect to happen? -->
### Expected Behavior
Object returned matches definition. I'm not sure if the definition needs to be fixed or the returned value type needs to be changed.
<!-- What happened instead? -->
### Observed Behavior
Object doesn't match definition
</issue>
<code>
[start of netbox/utilities/custom_inspectors.py]
1 from drf_yasg import openapi
2 from drf_yasg.inspectors import FieldInspector, NotHandled, PaginatorInspector, FilterInspector, SwaggerAutoSchema
3 from drf_yasg.utils import get_serializer_ref_name
4 from rest_framework.fields import ChoiceField
5 from rest_framework.relations import ManyRelatedField
6 from taggit_serializer.serializers import TagListSerializerField
7
8 from dcim.api.serializers import InterfaceSerializer as DeviceInterfaceSerializer
9 from extras.api.customfields import CustomFieldsSerializer
10 from utilities.api import ChoiceField, SerializedPKRelatedField, WritableNestedSerializer
11 from virtualization.api.serializers import InterfaceSerializer as VirtualMachineInterfaceSerializer
12
13 # this might be ugly, but it limits drf_yasg-specific code to this file
14 DeviceInterfaceSerializer.Meta.ref_name = 'DeviceInterface'
15 VirtualMachineInterfaceSerializer.Meta.ref_name = 'VirtualMachineInterface'
16
17
18 class NetBoxSwaggerAutoSchema(SwaggerAutoSchema):
19 writable_serializers = {}
20
21 def get_request_serializer(self):
22 serializer = super().get_request_serializer()
23
24 if serializer is not None and self.method in self.implicit_body_methods:
25 properties = {}
26 for child_name, child in serializer.fields.items():
27 if isinstance(child, (ChoiceField, WritableNestedSerializer)):
28 properties[child_name] = None
29 elif isinstance(child, ManyRelatedField) and isinstance(child.child_relation, SerializedPKRelatedField):
30 properties[child_name] = None
31
32 if properties:
33 if type(serializer) not in self.writable_serializers:
34 writable_name = 'Writable' + type(serializer).__name__
35 meta_class = getattr(type(serializer), 'Meta', None)
36 if meta_class:
37 ref_name = 'Writable' + get_serializer_ref_name(serializer)
38 writable_meta = type('Meta', (meta_class,), {'ref_name': ref_name})
39 properties['Meta'] = writable_meta
40
41 self.writable_serializers[type(serializer)] = type(writable_name, (type(serializer),), properties)
42
43 writable_class = self.writable_serializers[type(serializer)]
44 serializer = writable_class()
45
46 return serializer
47
48
49 class SerializedPKRelatedFieldInspector(FieldInspector):
50 def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):
51 SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)
52 if isinstance(field, SerializedPKRelatedField):
53 return self.probe_field_inspectors(field.serializer(), ChildSwaggerType, use_references)
54
55 return NotHandled
56
57
58 class TagListFieldInspector(FieldInspector):
59 def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):
60 SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)
61 if isinstance(field, TagListSerializerField):
62 child_schema = self.probe_field_inspectors(field.child, ChildSwaggerType, use_references)
63 return SwaggerType(
64 type=openapi.TYPE_ARRAY,
65 items=child_schema,
66 )
67
68 return NotHandled
69
70
71 class CustomChoiceFieldInspector(FieldInspector):
72 def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):
73 # this returns a callable which extracts title, description and other stuff
74 # https://drf-yasg.readthedocs.io/en/stable/_modules/drf_yasg/inspectors/base.html#FieldInspector._get_partial_types
75 SwaggerType, _ = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)
76
77 if isinstance(field, ChoiceField):
78 value_schema = openapi.Schema(type=openapi.TYPE_STRING)
79
80 choices = list(field._choices.keys())
81 if set([None] + choices) == {None, True, False}:
82 # DeviceType.subdevice_role, Device.face and InterfaceConnection.connection_status all need to be
83 # differentiated since they each have subtly different values in their choice keys.
84 # - subdevice_role and connection_status are booleans, although subdevice_role includes None
85 # - face is an integer set {0, 1} which is easily confused with {False, True}
86 schema_type = openapi.TYPE_STRING
87 if all(type(x) == bool for x in [c for c in choices if c is not None]):
88 schema_type = openapi.TYPE_BOOLEAN
89 value_schema = openapi.Schema(type=schema_type)
90 value_schema['x-nullable'] = True
91
92 schema = SwaggerType(type=openapi.TYPE_OBJECT, required=["label", "value"], properties={
93 "label": openapi.Schema(type=openapi.TYPE_STRING),
94 "value": value_schema
95 })
96
97 return schema
98
99 elif isinstance(field, CustomFieldsSerializer):
100 schema = SwaggerType(type=openapi.TYPE_OBJECT)
101 return schema
102
103 return NotHandled
104
105
106 class NullableBooleanFieldInspector(FieldInspector):
107 def process_result(self, result, method_name, obj, **kwargs):
108
109 if isinstance(result, openapi.Schema) and isinstance(obj, ChoiceField) and result.type == 'boolean':
110 keys = obj.choices.keys()
111 if set(keys) == {None, True, False}:
112 result['x-nullable'] = True
113 result.type = 'boolean'
114
115 return result
116
117
118 class IdInFilterInspector(FilterInspector):
119 def process_result(self, result, method_name, obj, **kwargs):
120 if isinstance(result, list):
121 params = [p for p in result if isinstance(p, openapi.Parameter) and p.name == 'id__in']
122 for p in params:
123 p.type = 'string'
124
125 return result
126
127
128 class NullablePaginatorInspector(PaginatorInspector):
129 def process_result(self, result, method_name, obj, **kwargs):
130 if method_name == 'get_paginated_response' and isinstance(result, openapi.Schema):
131 next = result.properties['next']
132 if isinstance(next, openapi.Schema):
133 next['x-nullable'] = True
134 previous = result.properties['previous']
135 if isinstance(previous, openapi.Schema):
136 previous['x-nullable'] = True
137
138 return result
139
[end of netbox/utilities/custom_inspectors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/utilities/custom_inspectors.py b/netbox/utilities/custom_inspectors.py
--- a/netbox/utilities/custom_inspectors.py
+++ b/netbox/utilities/custom_inspectors.py
@@ -89,6 +89,10 @@
value_schema = openapi.Schema(type=schema_type)
value_schema['x-nullable'] = True
+ if isinstance(choices[0], int):
+ # Change value_schema for IPAddressFamilyChoices, RackWidthChoices
+ value_schema = openapi.Schema(type=openapi.TYPE_INTEGER)
+
schema = SwaggerType(type=openapi.TYPE_OBJECT, required=["label", "value"], properties={
"label": openapi.Schema(type=openapi.TYPE_STRING),
"value": value_schema
| {"golden_diff": "diff --git a/netbox/utilities/custom_inspectors.py b/netbox/utilities/custom_inspectors.py\n--- a/netbox/utilities/custom_inspectors.py\n+++ b/netbox/utilities/custom_inspectors.py\n@@ -89,6 +89,10 @@\n value_schema = openapi.Schema(type=schema_type)\n value_schema['x-nullable'] = True\n \n+ if isinstance(choices[0], int):\n+ # Change value_schema for IPAddressFamilyChoices, RackWidthChoices\n+ value_schema = openapi.Schema(type=openapi.TYPE_INTEGER)\n+\n schema = SwaggerType(type=openapi.TYPE_OBJECT, required=[\"label\", \"value\"], properties={\n \"label\": openapi.Schema(type=openapi.TYPE_STRING),\n \"value\": value_schema\n", "issue": "IP Prefix Family returned doesn't match swagger definition\n### Environment\r\n* Python version: 3.7.6\r\n* NetBox version: v2.7.7\r\n\r\nSwagger definition for Prefix.Family does not match the swagger definition.\r\n\r\n### Steps to Reproduce\r\n1. Get a prefix object `wget http://netbox/api/ipam/prefixes/210/`\r\n2. Notice object is like\r\n```\r\n \"family\": {\r\n \"value\": 4,\r\n \"label\": \"IPv4\"\r\n },\r\n```\r\n3. Notice definition is\r\n```\r\n \"family\": {\r\n \"label\": \"string\",\r\n \"value\": \"string\"\r\n },\r\n```\r\n\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nObject returned matches definition. I'm not sure if the definition needs to be fixed or the returned value type needs to be changed.\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\nObject doesn't match definition\n", "before_files": [{"content": "from drf_yasg import openapi\nfrom drf_yasg.inspectors import FieldInspector, NotHandled, PaginatorInspector, FilterInspector, SwaggerAutoSchema\nfrom drf_yasg.utils import get_serializer_ref_name\nfrom rest_framework.fields import ChoiceField\nfrom rest_framework.relations import ManyRelatedField\nfrom taggit_serializer.serializers import TagListSerializerField\n\nfrom dcim.api.serializers import InterfaceSerializer as DeviceInterfaceSerializer\nfrom extras.api.customfields import CustomFieldsSerializer\nfrom utilities.api import ChoiceField, SerializedPKRelatedField, WritableNestedSerializer\nfrom virtualization.api.serializers import InterfaceSerializer as VirtualMachineInterfaceSerializer\n\n# this might be ugly, but it limits drf_yasg-specific code to this file\nDeviceInterfaceSerializer.Meta.ref_name = 'DeviceInterface'\nVirtualMachineInterfaceSerializer.Meta.ref_name = 'VirtualMachineInterface'\n\n\nclass NetBoxSwaggerAutoSchema(SwaggerAutoSchema):\n writable_serializers = {}\n\n def get_request_serializer(self):\n serializer = super().get_request_serializer()\n\n if serializer is not None and self.method in self.implicit_body_methods:\n properties = {}\n for child_name, child in serializer.fields.items():\n if isinstance(child, (ChoiceField, WritableNestedSerializer)):\n properties[child_name] = None\n elif isinstance(child, ManyRelatedField) and isinstance(child.child_relation, SerializedPKRelatedField):\n properties[child_name] = None\n\n if properties:\n if type(serializer) not in self.writable_serializers:\n writable_name = 'Writable' + type(serializer).__name__\n meta_class = getattr(type(serializer), 'Meta', None)\n if meta_class:\n ref_name = 'Writable' + get_serializer_ref_name(serializer)\n writable_meta = type('Meta', (meta_class,), {'ref_name': ref_name})\n properties['Meta'] = writable_meta\n\n self.writable_serializers[type(serializer)] = type(writable_name, (type(serializer),), properties)\n\n writable_class = 
self.writable_serializers[type(serializer)]\n serializer = writable_class()\n\n return serializer\n\n\nclass SerializedPKRelatedFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n if isinstance(field, SerializedPKRelatedField):\n return self.probe_field_inspectors(field.serializer(), ChildSwaggerType, use_references)\n\n return NotHandled\n\n\nclass TagListFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n if isinstance(field, TagListSerializerField):\n child_schema = self.probe_field_inspectors(field.child, ChildSwaggerType, use_references)\n return SwaggerType(\n type=openapi.TYPE_ARRAY,\n items=child_schema,\n )\n\n return NotHandled\n\n\nclass CustomChoiceFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n # this returns a callable which extracts title, description and other stuff\n # https://drf-yasg.readthedocs.io/en/stable/_modules/drf_yasg/inspectors/base.html#FieldInspector._get_partial_types\n SwaggerType, _ = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n\n if isinstance(field, ChoiceField):\n value_schema = openapi.Schema(type=openapi.TYPE_STRING)\n\n choices = list(field._choices.keys())\n if set([None] + choices) == {None, True, False}:\n # DeviceType.subdevice_role, Device.face and InterfaceConnection.connection_status all need to be\n # differentiated since they each have subtly different values in their choice keys.\n # - subdevice_role and connection_status are booleans, although subdevice_role includes None\n # - face is an integer set {0, 1} which is easily confused with {False, True}\n schema_type = openapi.TYPE_STRING\n if all(type(x) == bool for x in [c for c in choices if c is not None]):\n schema_type = openapi.TYPE_BOOLEAN\n value_schema = openapi.Schema(type=schema_type)\n value_schema['x-nullable'] = True\n\n schema = SwaggerType(type=openapi.TYPE_OBJECT, required=[\"label\", \"value\"], properties={\n \"label\": openapi.Schema(type=openapi.TYPE_STRING),\n \"value\": value_schema\n })\n\n return schema\n\n elif isinstance(field, CustomFieldsSerializer):\n schema = SwaggerType(type=openapi.TYPE_OBJECT)\n return schema\n\n return NotHandled\n\n\nclass NullableBooleanFieldInspector(FieldInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n\n if isinstance(result, openapi.Schema) and isinstance(obj, ChoiceField) and result.type == 'boolean':\n keys = obj.choices.keys()\n if set(keys) == {None, True, False}:\n result['x-nullable'] = True\n result.type = 'boolean'\n\n return result\n\n\nclass IdInFilterInspector(FilterInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n if isinstance(result, list):\n params = [p for p in result if isinstance(p, openapi.Parameter) and p.name == 'id__in']\n for p in params:\n p.type = 'string'\n\n return result\n\n\nclass NullablePaginatorInspector(PaginatorInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n if method_name == 'get_paginated_response' and isinstance(result, openapi.Schema):\n next = result.properties['next']\n if isinstance(next, openapi.Schema):\n next['x-nullable'] = True\n 
previous = result.properties['previous']\n if isinstance(previous, openapi.Schema):\n previous['x-nullable'] = True\n\n return result\n", "path": "netbox/utilities/custom_inspectors.py"}]} | 2,346 | 164 |
gh_patches_debug_25038 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-715 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Better handle newly added ElasticSearch functions
When using the agent with an older version of ElasticSearch, the following warning is logged:
```
Failed to instrument elasticsearch.Elasticsearch.search_mvt: AttributeError("type object 'Elasticsearch' has no attribute 'search_mvt'")
```
When a client method doesn't exist, the agent should either ignore it or more quietly log that information.
</issue>
<code>
[start of src/scout_apm/instruments/elasticsearch.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import logging
5 from collections import namedtuple
6
7 import wrapt
8
9 from scout_apm.compat import get_pos_args, unwrap_decorators
10 from scout_apm.core.tracked_request import TrackedRequest
11
12 try:
13 from elasticsearch import Elasticsearch, Transport
14 except ImportError: # pragma: no cover
15 Elasticsearch = None
16 Transport = None
17
18 logger = logging.getLogger(__name__)
19
20
21 def ensure_installed():
22 logger.debug("Instrumenting elasticsearch.")
23
24 if Elasticsearch is None:
25 logger.debug(
26 "Couldn't import elasticsearch.Elasticsearch - probably not installed."
27 )
28 else:
29 ensure_client_instrumented()
30 ensure_transport_instrumented()
31
32
33 ClientMethod = namedtuple("ClientMethod", ["name", "takes_index_argument"])
34
35 CLIENT_METHODS = [
36 ClientMethod("bulk", True),
37 ClientMethod("clear_scroll", False),
38 ClientMethod("close", False),
39 ClientMethod("close_point_in_time", False),
40 ClientMethod("count", True),
41 ClientMethod("create", True),
42 ClientMethod("delete", True),
43 ClientMethod("delete_by_query", True),
44 ClientMethod("delete_by_query_rethrottle", False),
45 ClientMethod("delete_script", False),
46 ClientMethod("exists", True),
47 ClientMethod("exists_source", True),
48 ClientMethod("explain", True),
49 ClientMethod("field_caps", True),
50 ClientMethod("get", True),
51 ClientMethod("get_script", False),
52 ClientMethod("get_script_context", False),
53 ClientMethod("get_script_languages", False),
54 ClientMethod("get_source", True),
55 ClientMethod("index", True),
56 ClientMethod("info", False),
57 ClientMethod("mget", True),
58 ClientMethod("msearch", True),
59 ClientMethod("msearch_template", True),
60 ClientMethod("mtermvectors", True),
61 ClientMethod("open_point_in_time", True),
62 ClientMethod("ping", False),
63 ClientMethod("put_script", False),
64 ClientMethod("rank_eval", True),
65 ClientMethod("reindex", False),
66 ClientMethod("reindex_rethrottle", False),
67 ClientMethod("render_search_template", False),
68 ClientMethod("scripts_painless_execute", False),
69 ClientMethod("scroll", False),
70 ClientMethod("search", True),
71 ClientMethod("search_mvt", True),
72 ClientMethod("search_shards", True),
73 ClientMethod("search_template", True),
74 ClientMethod("termvectors", True),
75 ClientMethod("terms_enum", True),
76 ClientMethod("update", True),
77 ClientMethod("update_by_query", True),
78 ClientMethod("update_by_query_rethrottle", False),
79 ]
80
81
82 have_patched_client = False
83
84
85 def ensure_client_instrumented():
86 global have_patched_client
87
88 if not have_patched_client:
89 for name, takes_index_argument in CLIENT_METHODS:
90 try:
91 method = getattr(Elasticsearch, name)
92 if takes_index_argument:
93 wrapped = wrap_client_index_method(method)
94 else:
95 wrapped = wrap_client_method(method)
96 setattr(Elasticsearch, name, wrapped)
97 except Exception as exc:
98 logger.warning(
99 "Failed to instrument elasticsearch.Elasticsearch.%s: %r",
100 name,
101 exc,
102 exc_info=exc,
103 )
104
105 have_patched_client = True
106
107
108 @wrapt.decorator
109 def wrap_client_index_method(wrapped, instance, args, kwargs):
110 # elasticsearch-py 7.5.1 changed the order of arguments for client methods,
111 # so to be safe we need to inspect the wrapped method's positional
112 # arguments to see if we should pull it from there
113 if "index" in kwargs:
114 index = kwargs["index"]
115 else:
116 unwrapped = unwrap_decorators(wrapped)
117 pos_args = get_pos_args(unwrapped)
118 try:
119 index_index = pos_args.index("index")
120 except ValueError: # pragma: no cover
121 # This guards against the method not accepting an 'index' argument
122 # but they all do - for now
123 index = ""
124 else:
125 try:
126 index = args[index_index - 1] # subtract 'self'
127 except IndexError:
128 index = ""
129
130 if isinstance(index, (list, tuple)):
131 index = ",".join(index)
132 if index == "":
133 index = "Unknown"
134 index = index.title()
135
136 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
137 operation = "Elasticsearch/{}/{}".format(index, camel_name)
138 tracked_request = TrackedRequest.instance()
139 with tracked_request.span(operation=operation, ignore_children=True):
140 return wrapped(*args, **kwargs)
141
142
143 @wrapt.decorator
144 def wrap_client_method(wrapped, instance, args, kwargs):
145 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
146 operation = "Elasticsearch/{}".format(camel_name)
147 tracked_request = TrackedRequest.instance()
148 with tracked_request.span(operation=operation, ignore_children=True):
149 return wrapped(*args, **kwargs)
150
151
152 have_patched_transport = False
153
154
155 def ensure_transport_instrumented():
156 global have_patched_transport
157
158 if not have_patched_transport:
159 try:
160 Transport.perform_request = wrapped_perform_request(
161 Transport.perform_request
162 )
163 except Exception as exc:
164 logger.warning(
165 "Failed to instrument elasticsearch.Transport.perform_request: %r",
166 exc,
167 exc_info=exc,
168 )
169
170 have_patched_transport = True
171
172
173 def _sanitize_name(name):
174 try:
175 op = name.split("/")[-1]
176 op = op[1:] # chop leading '_' from op
177 known_names = (
178 "bench",
179 "bulk",
180 "count",
181 "exists",
182 "explain",
183 "field_stats",
184 "health",
185 "mget",
186 "mlt",
187 "mpercolate",
188 "msearch",
189 "mtermvectors",
190 "percolate",
191 "query",
192 "scroll",
193 "search_shards",
194 "source",
195 "suggest",
196 "template",
197 "termvectors",
198 "update",
199 "search",
200 )
201 if op in known_names:
202 return op.title()
203 return "Unknown"
204 except Exception:
205 return "Unknown"
206
207
208 @wrapt.decorator
209 def wrapped_perform_request(wrapped, instance, args, kwargs):
210 try:
211 op = _sanitize_name(args[1])
212 except IndexError:
213 op = "Unknown"
214
215 tracked_request = TrackedRequest.instance()
216 with tracked_request.span(
217 operation="Elasticsearch/{}".format(op),
218 ignore_children=True,
219 ):
220 return wrapped(*args, **kwargs)
221
[end of src/scout_apm/instruments/elasticsearch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py
--- a/src/scout_apm/instruments/elasticsearch.py
+++ b/src/scout_apm/instruments/elasticsearch.py
@@ -86,6 +86,7 @@
global have_patched_client
if not have_patched_client:
+ instrumented_count = 0
for name, takes_index_argument in CLIENT_METHODS:
try:
method = getattr(Elasticsearch, name)
@@ -94,13 +95,19 @@
else:
wrapped = wrap_client_method(method)
setattr(Elasticsearch, name, wrapped)
+ instrumented_count += 1
except Exception as exc:
- logger.warning(
+ logger.debug(
"Failed to instrument elasticsearch.Elasticsearch.%s: %r",
name,
exc,
exc_info=exc,
)
+ if instrumented_count == 0:
+ logger.warning(
+ "Failed to instrument any elasticsearch.Elasticsearch methods."
+ " Enable debug logs to view root causes."
+ )
have_patched_client = True
| {"golden_diff": "diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py\n--- a/src/scout_apm/instruments/elasticsearch.py\n+++ b/src/scout_apm/instruments/elasticsearch.py\n@@ -86,6 +86,7 @@\n global have_patched_client\n \n if not have_patched_client:\n+ instrumented_count = 0\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n@@ -94,13 +95,19 @@\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n+ instrumented_count += 1\n except Exception as exc:\n- logger.warning(\n+ logger.debug(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n+ if instrumented_count == 0:\n+ logger.warning(\n+ \"Failed to instrument any elasticsearch.Elasticsearch methods.\"\n+ \" Enable debug logs to view root causes.\"\n+ )\n \n have_patched_client = True\n", "issue": "Better handle newly added ElasticSearch functions\nWhen using the agent with an older version of ElasticSearch, the following warning is logged:\r\n\r\n```\r\nFailed to instrument elasticsearch.Elasticsearch.search_mvt: AttributeError(\"type object 'Elasticsearch' has no attribute 'search_mvt'\")\r\n```\r\n\r\nWhen a client method doesn't exist, the agent should either ignore it or more quietly log that information.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nfrom collections import namedtuple\n\nimport wrapt\n\nfrom scout_apm.compat import get_pos_args, unwrap_decorators\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ntry:\n from elasticsearch import Elasticsearch, Transport\nexcept ImportError: # pragma: no cover\n Elasticsearch = None\n Transport = None\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_installed():\n logger.debug(\"Instrumenting elasticsearch.\")\n\n if Elasticsearch is None:\n logger.debug(\n \"Couldn't import elasticsearch.Elasticsearch - probably not installed.\"\n )\n else:\n ensure_client_instrumented()\n ensure_transport_instrumented()\n\n\nClientMethod = namedtuple(\"ClientMethod\", [\"name\", \"takes_index_argument\"])\n\nCLIENT_METHODS = [\n ClientMethod(\"bulk\", True),\n ClientMethod(\"clear_scroll\", False),\n ClientMethod(\"close\", False),\n ClientMethod(\"close_point_in_time\", False),\n ClientMethod(\"count\", True),\n ClientMethod(\"create\", True),\n ClientMethod(\"delete\", True),\n ClientMethod(\"delete_by_query\", True),\n ClientMethod(\"delete_by_query_rethrottle\", False),\n ClientMethod(\"delete_script\", False),\n ClientMethod(\"exists\", True),\n ClientMethod(\"exists_source\", True),\n ClientMethod(\"explain\", True),\n ClientMethod(\"field_caps\", True),\n ClientMethod(\"get\", True),\n ClientMethod(\"get_script\", False),\n ClientMethod(\"get_script_context\", False),\n ClientMethod(\"get_script_languages\", False),\n ClientMethod(\"get_source\", True),\n ClientMethod(\"index\", True),\n ClientMethod(\"info\", False),\n ClientMethod(\"mget\", True),\n ClientMethod(\"msearch\", True),\n ClientMethod(\"msearch_template\", True),\n ClientMethod(\"mtermvectors\", True),\n ClientMethod(\"open_point_in_time\", True),\n ClientMethod(\"ping\", False),\n ClientMethod(\"put_script\", False),\n ClientMethod(\"rank_eval\", True),\n ClientMethod(\"reindex\", False),\n ClientMethod(\"reindex_rethrottle\", False),\n ClientMethod(\"render_search_template\", False),\n ClientMethod(\"scripts_painless_execute\", 
False),\n ClientMethod(\"scroll\", False),\n ClientMethod(\"search\", True),\n ClientMethod(\"search_mvt\", True),\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n ClientMethod(\"terms_enum\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n]\n\n\nhave_patched_client = False\n\n\ndef ensure_client_instrumented():\n global have_patched_client\n\n if not have_patched_client:\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n if takes_index_argument:\n wrapped = wrap_client_index_method(method)\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n\n have_patched_client = True\n\n\[email protected]\ndef wrap_client_index_method(wrapped, instance, args, kwargs):\n # elasticsearch-py 7.5.1 changed the order of arguments for client methods,\n # so to be safe we need to inspect the wrapped method's positional\n # arguments to see if we should pull it from there\n if \"index\" in kwargs:\n index = kwargs[\"index\"]\n else:\n unwrapped = unwrap_decorators(wrapped)\n pos_args = get_pos_args(unwrapped)\n try:\n index_index = pos_args.index(\"index\")\n except ValueError: # pragma: no cover\n # This guards against the method not accepting an 'index' argument\n # but they all do - for now\n index = \"\"\n else:\n try:\n index = args[index_index - 1] # subtract 'self'\n except IndexError:\n index = \"\"\n\n if isinstance(index, (list, tuple)):\n index = \",\".join(index)\n if index == \"\":\n index = \"Unknown\"\n index = index.title()\n\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}/{}\".format(index, camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\[email protected]\ndef wrap_client_method(wrapped, instance, args, kwargs):\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}\".format(camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\nhave_patched_transport = False\n\n\ndef ensure_transport_instrumented():\n global have_patched_transport\n\n if not have_patched_transport:\n try:\n Transport.perform_request = wrapped_perform_request(\n Transport.perform_request\n )\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Transport.perform_request: %r\",\n exc,\n exc_info=exc,\n )\n\n have_patched_transport = True\n\n\ndef _sanitize_name(name):\n try:\n op = name.split(\"/\")[-1]\n op = op[1:] # chop leading '_' from op\n known_names = (\n \"bench\",\n \"bulk\",\n \"count\",\n \"exists\",\n \"explain\",\n \"field_stats\",\n \"health\",\n \"mget\",\n \"mlt\",\n \"mpercolate\",\n \"msearch\",\n \"mtermvectors\",\n \"percolate\",\n \"query\",\n \"scroll\",\n \"search_shards\",\n \"source\",\n \"suggest\",\n \"template\",\n \"termvectors\",\n \"update\",\n \"search\",\n )\n if op in known_names:\n return op.title()\n return \"Unknown\"\n except Exception:\n return \"Unknown\"\n\n\[email protected]\ndef wrapped_perform_request(wrapped, instance, 
args, kwargs):\n try:\n op = _sanitize_name(args[1])\n except IndexError:\n op = \"Unknown\"\n\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(\n operation=\"Elasticsearch/{}\".format(op),\n ignore_children=True,\n ):\n return wrapped(*args, **kwargs)\n", "path": "src/scout_apm/instruments/elasticsearch.py"}]} | 2,653 | 255 |
gh_patches_debug_27699 | rasdani/github-patches | git_diff | cowrie__cowrie-638 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
csirtg plugin no longer working
I'm not sure exactly when this happened, but just happened to check the logs today, and noticed the csirtg plugin has some errors.
```
2017-11-02T17:05:41-0400 [cowrie.telnet.transport.HoneyPotTelnetFactory] New connection: 45.32.221.61:59776 (x.x.x.x:23) [session: TT0]
2017-11-02T17:05:41-0400 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.csirtg.Output object at 0x7f3a5ce9bb50>>) due to exception: [Failure instance: Traceback: <type 'exceptions.TypeError'>: string indices must be integers
/home/cowrie/cowrie/cowrie/telnet/transport.py:218:connectionMade
/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py:53:sync
/usr/local/lib/python2.7/dist-packages/twisted/python/log.py:286:msg
/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:154:publishToNewObserver
--- <exception caught here> ---
/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py:131:__call__
/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:93:__call__
/home/cowrie/cowrie/cowrie/core/output.py:190:emit
/home/cowrie/cowrie/cowrie/output/csirtg.py:82:write
]
Traceback (most recent call last):
File "/home/cowrie/cowrie/cowrie/telnet/transport.py", line 218, in connectionMade
session=self.transportId, sessionno='T'+str(sessionno))
File "/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py", line 53, in sync
return function(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/twisted/python/log.py", line 286, in msg
_publishNew(self._publishPublisher, actualEventDict, textFromEventDict)
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py", line 154, in publishToNewObserver
observer(eventDict)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py", line 131, in __call__
observer(event)
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py", line 93, in __call__
self.legacyObserver(event)
File "/home/cowrie/cowrie/cowrie/core/output.py", line 190, in emit
self.write(ev)
File "/home/cowrie/cowrie/cowrie/output/csirtg.py", line 82, in write
logger.info('logged to csirtg %s ' % ret['indicator']['location'])
exceptions.TypeError: string indices must be integers
```
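For context, the failing expression is `ret['indicator']['location']`, and "string indices must be integers" is what Python raises when one of those lookups lands on a plain `str`. A minimal, hypothetical reproduction — the dict shape below is assumed for illustration only, not taken from the csirtg SDK:

```python
# Assumed return shape: 'indicator' maps to a plain string, so the nested lookup fails.
ret = {"location": "https://csirtg.io/...", "indicator": "45.32.221.61"}

ret["indicator"]["location"]  # TypeError: string indices must be integers
```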
</issue>
<code>
[start of cowrie/output/csirtg.py]
1 from __future__ import division, absolute_import
2
3 import cowrie.core.output
4
5 from csirtgsdk.indicator import Indicator
6 from csirtgsdk.client import Client
7 from datetime import datetime
8 import logging
9 import os
10
11 logger = logging.getLogger(__name__)
12
13 USERNAME = os.environ.get('CSIRTG_USER')
14 FEED = os.environ.get('CSIRTG_FEED')
15 TOKEN = os.environ.get('CSIRG_TOKEN')
16 DESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')
17
18
19 class Output(cowrie.core.output.Output):
20 def __init__(self, cfg):
21 cowrie.core.output.Output.__init__(self, cfg)
22 self.user = cfg.get('output_csirtg', 'username') or USERNAME
23 self.feed = cfg.get('output_csirtg', 'feed') or FEED
24 self.token = cfg.get('output_csirtg', 'token') or TOKEN
25 try:
26 self.description = cfg.get('output_csirtg', 'description')
27 except Exception:
28 self.description = DESCRIPTION
29 self.context = {}
30 self.client = Client(token=self.token)
31
32 def start(self,):
33 pass
34
35 def stop(self):
36 pass
37
38 def write(self, e):
39 sid = e['session']
40 peerIP = e['src_ip']
41 ts = e['timestamp']
42 system = e['system']
43
44 if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:
45 logger.debug('skipping {}'.format(system))
46 return
47
48 today = str(datetime.now().date())
49
50 if not self.context.get(today):
51 logger.debug('resetting context for %s' % today)
52 self.context = {}
53 self.context[today] = set()
54
55 key = ','.join([peerIP, system])
56
57 if key in self.context[today]:
58 logger.debug('skipping {}'.format(key))
59 return
60
61 self.context[today].add(key)
62
63 tags = 'scanner,ssh'
64 port = 22
65 if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':
66 tags = 'scanner,telnet'
67 port = 23
68
69 i = {
70 'user': self.user,
71 'feed': self.feed,
72 'indicator': peerIP,
73 'portlist': port,
74 'protocol': 'tcp',
75 'tags': tags,
76 'firsttime': ts,
77 'lasttime': ts,
78 'description': self.description
79 }
80
81 ret = Indicator(self.client, i).submit()
82 logger.info('logged to csirtg %s ' % ret['indicator']['location'])
83
84
[end of cowrie/output/csirtg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cowrie/output/csirtg.py b/cowrie/output/csirtg.py
--- a/cowrie/output/csirtg.py
+++ b/cowrie/output/csirtg.py
@@ -7,8 +7,7 @@
from datetime import datetime
import logging
import os
-
-logger = logging.getLogger(__name__)
+from twisted.python import log
USERNAME = os.environ.get('CSIRTG_USER')
FEED = os.environ.get('CSIRTG_FEED')
@@ -42,20 +41,17 @@
system = e['system']
if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:
- logger.debug('skipping {}'.format(system))
return
today = str(datetime.now().date())
if not self.context.get(today):
- logger.debug('resetting context for %s' % today)
self.context = {}
self.context[today] = set()
key = ','.join([peerIP, system])
if key in self.context[today]:
- logger.debug('skipping {}'.format(key))
return
self.context[today].add(key)
@@ -79,5 +75,5 @@
}
ret = Indicator(self.client, i).submit()
- logger.info('logged to csirtg %s ' % ret['indicator']['location'])
+ log.msg('logged to csirtg %s ' % ret['location'])
| {"golden_diff": "diff --git a/cowrie/output/csirtg.py b/cowrie/output/csirtg.py\n--- a/cowrie/output/csirtg.py\n+++ b/cowrie/output/csirtg.py\n@@ -7,8 +7,7 @@\n from datetime import datetime\n import logging\n import os\n-\n-logger = logging.getLogger(__name__)\n+from twisted.python import log\n \n USERNAME = os.environ.get('CSIRTG_USER')\n FEED = os.environ.get('CSIRTG_FEED')\n@@ -42,20 +41,17 @@\n system = e['system']\n \n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n- logger.debug('skipping {}'.format(system))\n return\n \n today = str(datetime.now().date())\n \n if not self.context.get(today):\n- logger.debug('resetting context for %s' % today)\n self.context = {}\n self.context[today] = set()\n \n key = ','.join([peerIP, system])\n \n if key in self.context[today]:\n- logger.debug('skipping {}'.format(key))\n return\n \n self.context[today].add(key)\n@@ -79,5 +75,5 @@\n }\n \n ret = Indicator(self.client, i).submit()\n- logger.info('logged to csirtg %s ' % ret['indicator']['location'])\n+ log.msg('logged to csirtg %s ' % ret['location'])\n", "issue": "csirtg plugin no longer working\nI'm not sure exactly when this happened, but just happend to check the logs today, and noticed the csirtg plugin has some errors.\r\n\r\n```\r\n2017-11-02T17:05:41-0400 [cowrie.telnet.transport.HoneyPotTelnetFactory] New connection: 45.32.221.61:59776 (x.x.x.x:23) [session: TT0]\r\n2017-11-02T17:05:41-0400 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.csirtg.Output object at 0x7f3a5ce9bb50>>) due to exception: [Failure instance: Traceback: <type 'exceptions.TypeError'>: string indices must be integers\r\n\t/home/cowrie/cowrie/cowrie/telnet/transport.py:218:connectionMade\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py:53:sync\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/python/log.py:286:msg\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:154:publishToNewObserver\r\n\t--- <exception caught here> ---\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py:131:__call__\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:93:__call__\r\n\t/home/cowrie/cowrie/cowrie/core/output.py:190:emit\r\n\t/home/cowrie/cowrie/cowrie/output/csirtg.py:82:write\r\n\t]\r\n\tTraceback (most recent call last):\r\n\t File \"/home/cowrie/cowrie/cowrie/telnet/transport.py\", line 218, in connectionMade\r\n\t session=self.transportId, sessionno='T'+str(sessionno))\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py\", line 53, in sync\r\n\t return function(self, *args, **kwargs)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/python/log.py\", line 286, in msg\r\n\t _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py\", line 154, in publishToNewObserver\r\n\t observer(eventDict)\r\n\t--- <exception caught here> ---\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py\", line 131, in __call__\r\n\t observer(event)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py\", line 93, in __call__\r\n\t self.legacyObserver(event)\r\n\t File \"/home/cowrie/cowrie/cowrie/core/output.py\", line 190, in emit\r\n\t self.write(ev)\r\n\t File \"/home/cowrie/cowrie/cowrie/output/csirtg.py\", line 82, in write\r\n\t 
logger.info('logged to csirtg %s ' % ret['indicator']['location'])\r\n\texceptions.TypeError: string indices must be integers\r\n```\n", "before_files": [{"content": "from __future__ import division, absolute_import\n\nimport cowrie.core.output\n\nfrom csirtgsdk.indicator import Indicator\nfrom csirtgsdk.client import Client\nfrom datetime import datetime\nimport logging\nimport os\n\nlogger = logging.getLogger(__name__)\n\nUSERNAME = os.environ.get('CSIRTG_USER')\nFEED = os.environ.get('CSIRTG_FEED')\nTOKEN = os.environ.get('CSIRG_TOKEN')\nDESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')\n\n\nclass Output(cowrie.core.output.Output):\n def __init__(self, cfg):\n cowrie.core.output.Output.__init__(self, cfg)\n self.user = cfg.get('output_csirtg', 'username') or USERNAME\n self.feed = cfg.get('output_csirtg', 'feed') or FEED\n self.token = cfg.get('output_csirtg', 'token') or TOKEN\n try:\n self.description = cfg.get('output_csirtg', 'description')\n except Exception:\n self.description = DESCRIPTION\n self.context = {}\n self.client = Client(token=self.token)\n\n def start(self,):\n pass\n\n def stop(self):\n pass\n\n def write(self, e):\n sid = e['session']\n peerIP = e['src_ip']\n ts = e['timestamp']\n system = e['system']\n\n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n logger.debug('skipping {}'.format(system))\n return\n\n today = str(datetime.now().date())\n\n if not self.context.get(today):\n logger.debug('resetting context for %s' % today)\n self.context = {}\n self.context[today] = set()\n\n key = ','.join([peerIP, system])\n\n if key in self.context[today]:\n logger.debug('skipping {}'.format(key))\n return\n\n self.context[today].add(key)\n\n tags = 'scanner,ssh'\n port = 22\n if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':\n tags = 'scanner,telnet'\n port = 23\n\n i = {\n 'user': self.user,\n 'feed': self.feed,\n 'indicator': peerIP,\n 'portlist': port,\n 'protocol': 'tcp',\n 'tags': tags,\n 'firsttime': ts,\n 'lasttime': ts,\n 'description': self.description\n }\n\n ret = Indicator(self.client, i).submit()\n logger.info('logged to csirtg %s ' % ret['indicator']['location'])\n\n", "path": "cowrie/output/csirtg.py"}]} | 2,037 | 326 |
gh_patches_debug_26662 | rasdani/github-patches | git_diff | chainer__chainer-903 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stream object should have .ptr set to 0, not None.
The event object expects the stream.ptr to be an integer (size_t) here:
https://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L56
https://github.com/pfnet/chainer/blob/master/cupy/cuda/runtime.pyx#L309
In trunk at the moment, recording events with default stream fails via:
Traceback (most recent call last):
File "train_imagenet.py", line 85, in <module>
train_loop()
File "train_imagenet.py", line 67, in train_loop
start.record()
File "/home/awesomebox/anaconda/lib/python2.7/site-packages/chainer-1.5.1-py2.7-linux-x86_64.egg/cupy/cuda/stream.py", line 56, in record
runtime.eventRecord(self.ptr, stream.ptr)
File "cupy/cuda/runtime.pyx", line 309, in cupy.cuda.runtime.eventRecord (cupy/cuda/runtime.cpp:6139)
TypeError: an integer is required
The fix seems simple:
https://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L103
self.ptr = 0
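The suggested `self.ptr = 0` works on both fronts: `0` is the CUDA runtime's default-stream handle and an ordinary integer (so it satisfies the `size_t` expectation in `eventRecord`), while still being falsy, so the existing `if self.ptr:` guards keep skipping `streamDestroy`/`eventDestroy` for the null stream. A quick plain-Python illustration (no CUDA needed):

```python
ptr = 0
assert isinstance(ptr, int)  # passable where an integer (size_t) stream handle is required
assert not ptr               # falsy, so "if self.ptr:" still avoids destroying the default stream
```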
</issue>
<code>
[start of cupy/cuda/stream.py]
1 from cupy.cuda import runtime
2
3
4 class Event(object):
5
6 """CUDA event, a synchronization point of CUDA streams.
7
8 This class handles the CUDA event handle in RAII way, i.e., when an Event
9 instance is destroyed by the GC, its handle is also destroyed.
10
11 Args:
12 block (bool): If True, the event blocks on the
13 :meth:`~cupy.cuda.Event.synchronize` method.
14 disable_timing (bool): If True, the event does not prepare the timing
15 data.
16 interprocess (bool): If True, the event can be passed to other
17 processes.
18
19 Attributes:
20 ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to
21 the CUDA Runtime API via ctypes.
22
23 """
24 def __init__(self, block=False, disable_timing=False, interprocess=False):
25 self.ptr = None
26
27 if interprocess and not disable_timing:
28 raise ValueError('Timing must be disabled for interprocess events')
29 flag = ((block and runtime.eventBlockingSync) |
30 (disable_timing and runtime.eventDisableTiming) |
31 (interprocess and runtime.eventInterprocess))
32 self.ptr = runtime.eventCreateWithFlags(flag)
33
34 def __del__(self):
35 if self.ptr:
36 runtime.eventDestroy(self.ptr)
37 self.ptr = None
38
39 @property
40 def done(self):
41 """True if the event is done."""
42 return bool(runtime.eventQuery(self.ptr))
43
44 def record(self, stream=None):
45 """Records the event to a stream.
46
47 Args:
48 stream (cupy.cuda.Stream): CUDA stream to record event. The null
49 stream is used by default.
50
51 .. seealso:: :meth:`cupy.cuda.Stream.record`
52
53 """
54 if stream is None:
55 stream = Stream(null=True)
56 runtime.eventRecord(self.ptr, stream.ptr)
57
58 def synchronize(self):
59 """Synchronizes all device work to the event.
60
61 If the event is created as a blocking event, it also blocks the CPU
62 thread until the event is done.
63
64 """
65 runtime.eventSynchronize(self.ptr)
66
67
68 def get_elapsed_time(start_event, end_event):
69 """Gets the elapsed time between two events.
70
71 Args:
72 start_event (Event): Earlier event.
73 end_event (Event): Later event.
74
75 Returns:
76 float: Elapsed time in milliseconds.
77
78 """
79 return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)
80
81
82 class Stream(object):
83
84 """CUDA stream.
85
86     This class handles the CUDA stream handle in RAII way, i.e., when a Stream
87 instance is destroyed by the GC, its handle is also destroyed.
88
89 Args:
90 null (bool): If True, the stream is a null stream (i.e. the default
91 stream that synchronizes with all streams). Otherwise, a plain new
92 stream is created.
93 non_blocking (bool): If True, the stream does not synchronize with the
94 NULL stream.
95
96 Attributes:
97 ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to
98 the CUDA Runtime API via ctypes.
99
100 """
101 def __init__(self, null=False, non_blocking=False):
102 if null:
103 self.ptr = None
104 elif non_blocking:
105 self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)
106 else:
107 self.ptr = runtime.streamCreate()
108
109 def __del__(self):
110 if self.ptr:
111 runtime.streamDestroy(self.ptr)
112 self.ptr = None
113
114 @property
115 def done(self):
116 """True if all work on this stream has been done."""
117 return bool(runtime.streamQuery(self.ptr))
118
119 def synchronize(self):
120 """Waits for the stream completing all queued work."""
121 runtime.streamSynchronize(self.ptr)
122
123 def add_callback(self, callback, arg):
124 """Adds a callback that is called when all queued work is done.
125
126 Args:
127 callback (function): Callback function. It must take three
128 arguments (Stream object, int error status, and user data
129 object), and returns nothing.
130 arg (object): Argument to the callback.
131
132 """
133 runtime.streamAddCallback(self.ptr, callback, arg)
134
135 def record(self, event=None):
136 """Records an event on the stream.
137
138 Args:
139 event (None or cupy.cuda.Event): CUDA event. If None, then a new
140 plain event is created and used.
141
142 Returns:
143 cupy.cuda.Event: The recorded event.
144
145 .. seealso:: :meth:`cupy.cuda.Event.record`
146
147 """
148 if event is None:
149 event = Event()
150 runtime.eventRecord(event.ptr, self.ptr)
151 return event
152
153 def wait_event(self, event):
154 """Makes the stream wait for an event.
155
156 The future work on this stream will be done after the event.
157
158 Args:
159 event (cupy.cuda.Event): CUDA event.
160
161 """
162 runtime.streamWaitEvent(self.ptr, event)
163
[end of cupy/cuda/stream.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/cuda/stream.py b/cupy/cuda/stream.py
--- a/cupy/cuda/stream.py
+++ b/cupy/cuda/stream.py
@@ -22,7 +22,7 @@
"""
def __init__(self, block=False, disable_timing=False, interprocess=False):
- self.ptr = None
+ self.ptr = 0
if interprocess and not disable_timing:
raise ValueError('Timing must be disabled for interprocess events')
@@ -34,7 +34,6 @@
def __del__(self):
if self.ptr:
runtime.eventDestroy(self.ptr)
- self.ptr = None
@property
def done(self):
@@ -100,7 +99,7 @@
"""
def __init__(self, null=False, non_blocking=False):
if null:
- self.ptr = None
+ self.ptr = 0
elif non_blocking:
self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)
else:
@@ -109,7 +108,6 @@
def __del__(self):
if self.ptr:
runtime.streamDestroy(self.ptr)
- self.ptr = None
@property
def done(self):
| {"golden_diff": "diff --git a/cupy/cuda/stream.py b/cupy/cuda/stream.py\n--- a/cupy/cuda/stream.py\n+++ b/cupy/cuda/stream.py\n@@ -22,7 +22,7 @@\n \n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n- self.ptr = None\n+ self.ptr = 0\n \n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n@@ -34,7 +34,6 @@\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n- self.ptr = None\n \n @property\n def done(self):\n@@ -100,7 +99,7 @@\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n- self.ptr = None\n+ self.ptr = 0\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n@@ -109,7 +108,6 @@\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n- self.ptr = None\n \n @property\n def done(self):\n", "issue": "Stream object should have .ptr set to 0, not None.\nThe event object expects the stream.ptr to be an integer (size_t) here:\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L56\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/runtime.pyx#L309\n\nIn trunk at the moment, recording events with default stream fails via:\nTraceback (most recent call last):\n File \"train_imagenet.py\", line 85, in <module>\n train_loop()\n File \"train_imagenet.py\", line 67, in train_loop\n start.record()\n File \"/home/awesomebox/anaconda/lib/python2.7/site-packages/chainer-1.5.1-py2.7-linux-x86_64.egg/cupy/cuda/stream.py\", line 56, in record\n runtime.eventRecord(self.ptr, stream.ptr)\n File \"cupy/cuda/runtime.pyx\", line 309, in cupy.cuda.runtime.eventRecord (cupy/cuda/runtime.cpp:6139)\nTypeError: an integer is required\n\nThe fix seems simple:\n\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L103\nself.ptr = 0\n\n", "before_files": [{"content": "from cupy.cuda import runtime\n\n\nclass Event(object):\n\n \"\"\"CUDA event, a synchronization point of CUDA streams.\n\n This class handles the CUDA event handle in RAII way, i.e., when an Event\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n block (bool): If True, the event blocks on the\n :meth:`~cupy.cuda.Event.synchronize` method.\n disable_timing (bool): If True, the event does not prepare the timing\n data.\n interprocess (bool): If True, the event can be passed to other\n processes.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n self.ptr = None\n\n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n flag = ((block and runtime.eventBlockingSync) |\n (disable_timing and runtime.eventDisableTiming) |\n (interprocess and runtime.eventInterprocess))\n self.ptr = runtime.eventCreateWithFlags(flag)\n\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n self.ptr = None\n\n @property\n def done(self):\n \"\"\"True if the event is done.\"\"\"\n return bool(runtime.eventQuery(self.ptr))\n\n def record(self, stream=None):\n \"\"\"Records the event to a stream.\n\n Args:\n stream (cupy.cuda.Stream): CUDA stream to record event. The null\n stream is used by default.\n\n .. 
seealso:: :meth:`cupy.cuda.Stream.record`\n\n \"\"\"\n if stream is None:\n stream = Stream(null=True)\n runtime.eventRecord(self.ptr, stream.ptr)\n\n def synchronize(self):\n \"\"\"Synchronizes all device work to the event.\n\n If the event is created as a blocking event, it also blocks the CPU\n thread until the event is done.\n\n \"\"\"\n runtime.eventSynchronize(self.ptr)\n\n\ndef get_elapsed_time(start_event, end_event):\n \"\"\"Gets the elapsed time between two events.\n\n Args:\n start_event (Event): Earlier event.\n end_event (Event): Later event.\n\n Returns:\n float: Elapsed time in milliseconds.\n\n \"\"\"\n return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)\n\n\nclass Stream(object):\n\n \"\"\"CUDA stream.\n\n This class handles the CUDA stream handle in RAII way, i.e., when an Stream\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n null (bool): If True, the stream is a null stream (i.e. the default\n stream that synchronizes with all streams). Otherwise, a plain new\n stream is created.\n non_blocking (bool): If True, the stream does not synchronize with the\n NULL stream.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n self.ptr = None\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n self.ptr = runtime.streamCreate()\n\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n self.ptr = None\n\n @property\n def done(self):\n \"\"\"True if all work on this stream has been done.\"\"\"\n return bool(runtime.streamQuery(self.ptr))\n\n def synchronize(self):\n \"\"\"Waits for the stream completing all queued work.\"\"\"\n runtime.streamSynchronize(self.ptr)\n\n def add_callback(self, callback, arg):\n \"\"\"Adds a callback that is called when all queued work is done.\n\n Args:\n callback (function): Callback function. It must take three\n arguments (Stream object, int error status, and user data\n object), and returns nothing.\n arg (object): Argument to the callback.\n\n \"\"\"\n runtime.streamAddCallback(self.ptr, callback, arg)\n\n def record(self, event=None):\n \"\"\"Records an event on the stream.\n\n Args:\n event (None or cupy.cuda.Event): CUDA event. If None, then a new\n plain event is created and used.\n\n Returns:\n cupy.cuda.Event: The recorded event.\n\n .. seealso:: :meth:`cupy.cuda.Event.record`\n\n \"\"\"\n if event is None:\n event = Event()\n runtime.eventRecord(event.ptr, self.ptr)\n return event\n\n def wait_event(self, event):\n \"\"\"Makes the stream wait for an event.\n\n The future work on this stream will be done after the event.\n\n Args:\n event (cupy.cuda.Event): CUDA event.\n\n \"\"\"\n runtime.streamWaitEvent(self.ptr, event)\n", "path": "cupy/cuda/stream.py"}]} | 2,272 | 272 |
gh_patches_debug_25005 | rasdani/github-patches | git_diff | meltano__meltano-5980 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Meltano remove lock file bug
From #5977 I was removing an installed plugin and got this on the command line:
```
❯ meltano remove orchestrator airflow
2022-06-02T15:20:50.385299Z [info ] Environment 'dev' is active
Removing orchestrator 'airflow'...
Reset orchestrator 'airflow' plugin settings in the system database
Removed orchestrator 'airflow' from meltano.yml
Removed orchestrator 'airflow' from .meltano/orchestrators
Could not find orchestrator 'airflow' in /Users/taylormurphy/Documents/Projects/dev/meltano/addfromhub/plugins/orchestrators/airflow--original.lock to remove
```
In my `plugins/` folder I still have:
```
plugins
files/
airflow--meltano.lock
orchestrators/
airflow--apache.lock
```
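Judging from the output, removal looks for a single lock path built from one variant name (`airflow--original.lock`) while the file actually on disk is `plugins/orchestrators/airflow--apache.lock`, so it is never matched. A hedged sketch of a glob-based cleanup, using the paths from this report purely as an example (not Meltano's actual code):

```python
from pathlib import Path

# Hypothetical layout taken from the directory listing above.
lockfile_dir = Path("plugins") / "orchestrators"

for lockfile in lockfile_dir.glob("airflow*.lock"):
    lockfile.unlink()  # removes the lock file for every variant, not just the active one
```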
@edgarrmondragon cc @aaronsteers
</issue>
<code>
[start of src/meltano/core/plugin_location_remove.py]
1 """Defines PluginLocationRemoveStatus, PluginLocationRemoveManager, DbRemoveManager, MeltanoYmlRemoveManager and InstallationRemoveManager."""
2
3 import shutil
4 from abc import ABC, abstractmethod
5 from enum import Enum
6
7 import sqlalchemy
8
9 from meltano.core.db import project_engine
10 from meltano.core.plugin.error import PluginNotFoundError
11 from meltano.core.plugin.project_plugin import ProjectPlugin
12 from meltano.core.plugin.settings_service import PluginSettingsService
13 from meltano.core.project_plugins_service import ProjectPluginsService
14
15 from .project import Project
16 from .settings_store import SettingValueStore
17
18
19 class PluginLocationRemoveStatus(Enum):
20 """Possible remove statuses."""
21
22 REMOVED = "removed"
23 ERROR = "error"
24 NOT_FOUND = "not found"
25
26
27 class PluginLocationRemoveManager(ABC):
28 """Handle removal of a plugin from a given location."""
29
30 def __init__(self, plugin: ProjectPlugin, location):
31 """Construct a PluginLocationRemoveManager instance.
32
33 Args:
34 plugin: The plugin to remove.
35 location: The location to remove the plugin from.
36 """
37 self.plugin = plugin
38 self.plugin_descriptor = f"{plugin.type.descriptor} '{plugin.name}'"
39 self.location = location
40 self.remove_status = None
41 self.remove_message = None
42
43 @abstractmethod
44 def remove(self):
45 """Abstract remove method."""
46 pass
47
48 @property
49 def plugin_removed(self) -> bool:
50         """Whether or not the plugin was successfully removed.
51
52 Returns:
53 True if the plugin was successfully removed, False otherwise.
54 """
55 return self.remove_status is PluginLocationRemoveStatus.REMOVED
56
57 @property
58 def plugin_not_found(self) -> bool:
59         """Whether or not the plugin was not found to remove.
60
61 Returns:
62 True if the plugin was not found, False otherwise.
63 """
64 return self.remove_status is PluginLocationRemoveStatus.NOT_FOUND
65
66 @property
67 def plugin_error(self) -> bool:
68         """Whether or not an error was encountered during the plugin removal process.
69
70 Returns:
71 True if an error was encountered, False otherwise.
72 """
73 return self.remove_status is PluginLocationRemoveStatus.ERROR
74
75
76 class DbRemoveManager(PluginLocationRemoveManager):
77 """Handle removal of a plugin's settings from the system database `plugin_settings` table."""
78
79 def __init__(self, plugin, project):
80 """Construct a DbRemoveManager instance.
81
82 Args:
83 plugin: The plugin to remove.
84 project: The Meltano project.
85 """
86 super().__init__(plugin, "system database")
87 self.plugins_settings_service = PluginSettingsService(project, plugin)
88 self.session = project_engine(project)[1]
89
90 def remove(self):
91 """Remove the plugin's settings from the system database `plugin_settings` table.
92
93 Returns:
94 The remove status.
95 """
96 session = self.session()
97 try:
98 self.plugins_settings_service.reset(
99 store=SettingValueStore.DB, session=session
100 )
101 except sqlalchemy.exc.OperationalError as err:
102 self.remove_status = PluginLocationRemoveStatus.ERROR
103 self.message = err.orig
104 return
105
106 self.remove_status = PluginLocationRemoveStatus.REMOVED
107
108
109 class MeltanoYmlRemoveManager(PluginLocationRemoveManager):
110 """Handle removal of a plugin from `meltano.yml`."""
111
112 def __init__(self, plugin, project: Project):
113 """Construct a MeltanoYmlRemoveManager instance.
114
115 Args:
116 plugin: The plugin to remove.
117 project: The Meltano project.
118 """
119 super().__init__(plugin, str(project.meltanofile.relative_to(project.root)))
120 self.project_plugins_service = ProjectPluginsService(project)
121
122 def remove(self):
123 """Remove the plugin from `meltano.yml`."""
124 try:
125 self.project_plugins_service.remove_from_file(self.plugin)
126 except PluginNotFoundError:
127 self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
128 return
129 except OSError as err:
130 self.remove_status = PluginLocationRemoveStatus.ERROR
131 self.message = err.strerror
132 return
133
134 self.remove_status = PluginLocationRemoveStatus.REMOVED
135
136
137 class LockedDefinitionRemoveManager(PluginLocationRemoveManager):
138 """Handle removal of a plugin locked definition from `plugins/`."""
139
140 def __init__(self, plugin, project: Project):
141 """Construct a LockedDefinitionRemoveManager instance.
142
143 Args:
144 plugin: The plugin to remove.
145 project: The Meltano project.
146 """
147 path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)
148 super().__init__(plugin, str(path))
149 self.path = path
150
151 def remove(self):
152 """Remove the plugin from `plugins/`."""
153 try:
154 self.path.unlink()
155 except FileNotFoundError:
156 self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
157 return
158 except OSError as err:
159 self.remove_status = PluginLocationRemoveStatus.ERROR
160 self.message = err.strerror
161 return
162
163 self.remove_status = PluginLocationRemoveStatus.REMOVED
164
165
166 class InstallationRemoveManager(PluginLocationRemoveManager):
167 """Handle removal of a plugin installation from `.meltano`."""
168
169 def __init__(self, plugin, project: Project):
170 """Construct a InstallationRemoveManager instance.
171
172 Args:
173 plugin: The plugin to remove.
174 project: The Meltano project.
175 """
176 path = project.plugin_dir(plugin, make_dirs=False)
177 super().__init__(plugin, str(path.parent.relative_to(project.root)))
178 self.path = path
179
180 def remove(self):
181 """Remove the plugin installation from `.meltano`."""
182 if not self.path.exists():
183 self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
184 self.message = f"{self.plugin_descriptor} not found in {self.path.parent}"
185 return
186
187 try:
188 shutil.rmtree(self.path)
189 except OSError as err:
190 self.remove_status = PluginLocationRemoveStatus.ERROR
191 self.message = err.strerror
192 return
193
194 self.remove_status = PluginLocationRemoveStatus.REMOVED
195
[end of src/meltano/core/plugin_location_remove.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/meltano/core/plugin_location_remove.py b/src/meltano/core/plugin_location_remove.py
--- a/src/meltano/core/plugin_location_remove.py
+++ b/src/meltano/core/plugin_location_remove.py
@@ -144,21 +144,28 @@
plugin: The plugin to remove.
project: The Meltano project.
"""
- path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)
- super().__init__(plugin, str(path))
- self.path = path
+ lockfile_dir = project.root_plugins_dir(plugin.type)
+ glob_expr = f"{plugin.name}*.lock"
+ super().__init__(
+ plugin,
+ str(lockfile_dir.relative_to(project.root).joinpath(glob_expr)),
+ )
+
+ self.paths = list(lockfile_dir.glob(glob_expr))
def remove(self):
"""Remove the plugin from `plugins/`."""
- try:
- self.path.unlink()
- except FileNotFoundError:
+ if not self.paths:
self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
return
- except OSError as err:
- self.remove_status = PluginLocationRemoveStatus.ERROR
- self.message = err.strerror
- return
+
+ for path in self.paths:
+ try:
+ path.unlink()
+ except OSError as err:
+ self.remove_status = PluginLocationRemoveStatus.ERROR
+ self.message = err.strerror
+ return
self.remove_status = PluginLocationRemoveStatus.REMOVED
| {"golden_diff": "diff --git a/src/meltano/core/plugin_location_remove.py b/src/meltano/core/plugin_location_remove.py\n--- a/src/meltano/core/plugin_location_remove.py\n+++ b/src/meltano/core/plugin_location_remove.py\n@@ -144,21 +144,28 @@\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n- path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)\n- super().__init__(plugin, str(path))\n- self.path = path\n+ lockfile_dir = project.root_plugins_dir(plugin.type)\n+ glob_expr = f\"{plugin.name}*.lock\"\n+ super().__init__(\n+ plugin,\n+ str(lockfile_dir.relative_to(project.root).joinpath(glob_expr)),\n+ )\n+\n+ self.paths = list(lockfile_dir.glob(glob_expr))\n \n def remove(self):\n \"\"\"Remove the plugin from `plugins/`.\"\"\"\n- try:\n- self.path.unlink()\n- except FileNotFoundError:\n+ if not self.paths:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n- except OSError as err:\n- self.remove_status = PluginLocationRemoveStatus.ERROR\n- self.message = err.strerror\n- return\n+\n+ for path in self.paths:\n+ try:\n+ path.unlink()\n+ except OSError as err:\n+ self.remove_status = PluginLocationRemoveStatus.ERROR\n+ self.message = err.strerror\n+ return\n \n self.remove_status = PluginLocationRemoveStatus.REMOVED\n", "issue": "Meltano remove lock file bug\nFrom #5977 I was removing an installed plugin and got this on the command line:\r\n\r\n```\r\n\u276f meltano remove orchestrator airflow\r\n2022-06-02T15:20:50.385299Z [info ] Environment 'dev' is active\r\n\r\nRemoving orchestrator 'airflow'...\r\n\r\nReset orchestrator 'airflow' plugin settings in the system database\r\nRemoved orchestrator 'airflow' from meltano.yml\r\nRemoved orchestrator 'airflow' from .meltano/orchestrators\r\nCould not find orchestrator 'airflow' in /Users/taylormurphy/Documents/Projects/dev/meltano/addfromhub/plugins/orchestrators/airflow--original.lock to remove\r\n```\r\n\r\nIn my `plugins/` folder I still have:\r\n\r\n```\r\nplugins\r\n files/\r\n airflow--meltano.lock\r\n orchestrators/\r\n airflow--apache.lock\r\n```\r\n\r\n@edgarrmondragon cc @aaronsteers \n", "before_files": [{"content": "\"\"\"Defines PluginLocationRemoveStatus, PluginLocationRemoveManager, DbRemoveManager, MeltanoYmlRemoveManager and InstallationRemoveManager.\"\"\"\n\nimport shutil\nfrom abc import ABC, abstractmethod\nfrom enum import Enum\n\nimport sqlalchemy\n\nfrom meltano.core.db import project_engine\nfrom meltano.core.plugin.error import PluginNotFoundError\nfrom meltano.core.plugin.project_plugin import ProjectPlugin\nfrom meltano.core.plugin.settings_service import PluginSettingsService\nfrom meltano.core.project_plugins_service import ProjectPluginsService\n\nfrom .project import Project\nfrom .settings_store import SettingValueStore\n\n\nclass PluginLocationRemoveStatus(Enum):\n \"\"\"Possible remove statuses.\"\"\"\n\n REMOVED = \"removed\"\n ERROR = \"error\"\n NOT_FOUND = \"not found\"\n\n\nclass PluginLocationRemoveManager(ABC):\n \"\"\"Handle removal of a plugin from a given location.\"\"\"\n\n def __init__(self, plugin: ProjectPlugin, location):\n \"\"\"Construct a PluginLocationRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n location: The location to remove the plugin from.\n \"\"\"\n self.plugin = plugin\n self.plugin_descriptor = f\"{plugin.type.descriptor} '{plugin.name}'\"\n self.location = location\n self.remove_status = None\n self.remove_message = None\n\n @abstractmethod\n def remove(self):\n \"\"\"Abstract remove method.\"\"\"\n 
pass\n\n @property\n def plugin_removed(self) -> bool:\n \"\"\"Wether or not the plugin was successfully removed.\n\n Returns:\n True if the plugin was successfully removed, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.REMOVED\n\n @property\n def plugin_not_found(self) -> bool:\n \"\"\"Wether or not the plugin was not found to remove.\n\n Returns:\n True if the plugin was not found, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.NOT_FOUND\n\n @property\n def plugin_error(self) -> bool:\n \"\"\"Wether or not an error was encountered the plugin removal process.\n\n Returns:\n True if an error was encountered, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.ERROR\n\n\nclass DbRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin's settings from the system database `plugin_settings` table.\"\"\"\n\n def __init__(self, plugin, project):\n \"\"\"Construct a DbRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n super().__init__(plugin, \"system database\")\n self.plugins_settings_service = PluginSettingsService(project, plugin)\n self.session = project_engine(project)[1]\n\n def remove(self):\n \"\"\"Remove the plugin's settings from the system database `plugin_settings` table.\n\n Returns:\n The remove status.\n \"\"\"\n session = self.session()\n try:\n self.plugins_settings_service.reset(\n store=SettingValueStore.DB, session=session\n )\n except sqlalchemy.exc.OperationalError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.orig\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass MeltanoYmlRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin from `meltano.yml`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a MeltanoYmlRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n super().__init__(plugin, str(project.meltanofile.relative_to(project.root)))\n self.project_plugins_service = ProjectPluginsService(project)\n\n def remove(self):\n \"\"\"Remove the plugin from `meltano.yml`.\"\"\"\n try:\n self.project_plugins_service.remove_from_file(self.plugin)\n except PluginNotFoundError:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass LockedDefinitionRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin locked definition from `plugins/`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a LockedDefinitionRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)\n super().__init__(plugin, str(path))\n self.path = path\n\n def remove(self):\n \"\"\"Remove the plugin from `plugins/`.\"\"\"\n try:\n self.path.unlink()\n except FileNotFoundError:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass InstallationRemoveManager(PluginLocationRemoveManager):\n 
\"\"\"Handle removal of a plugin installation from `.meltano`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a InstallationRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n path = project.plugin_dir(plugin, make_dirs=False)\n super().__init__(plugin, str(path.parent.relative_to(project.root)))\n self.path = path\n\n def remove(self):\n \"\"\"Remove the plugin installation from `.meltano`.\"\"\"\n if not self.path.exists():\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n self.message = f\"{self.plugin_descriptor} not found in {self.path.parent}\"\n return\n\n try:\n shutil.rmtree(self.path)\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n", "path": "src/meltano/core/plugin_location_remove.py"}]} | 2,538 | 337 |
gh_patches_debug_18933 | rasdani/github-patches | git_diff | Qiskit__qiskit-1720 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BackendConfiguration fails validation if backend supports pulse
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Terra version**:0.8.0
- **Python version**3.6.6
- **Operating system**:OSX
### What is the current behavior?
If a backend sets `open_pulse=true` in its configuration Qiskit will raise a validation error when creating a `BackendConfigurationSchema`
### Steps to reproduce the problem
Create a backend with `open_pulse=true` set in its configuration.
### What is the expected behavior?
Should not fail.
### Suggested solutions
Allow `open_pulse=true` to be valid.
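For reference, the rejection comes from the `Equal(False)` validator attached to `open_pulse` in the schema shown below; marshmallow's `Equal` validator raises for any other value. A small illustrative sketch (the exact error message may differ between marshmallow versions):

```python
from marshmallow import ValidationError
from marshmallow.validate import Equal

try:
    Equal(False)(True)  # what validating open_pulse=true effectively does today
except ValidationError as err:
    print(err.messages)  # e.g. ['Must be equal to False.']
```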
</issue>
<code>
[start of qiskit/providers/models/backendconfiguration.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Model and schema for backend configuration."""
9
10 from marshmallow.validate import Equal, Length, OneOf, Range, Regexp
11
12 from qiskit.validation import BaseModel, BaseSchema, bind_schema
13 from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String
14
15
16 class GateConfigSchema(BaseSchema):
17 """Schema for GateConfig."""
18
19 # Required properties.
20 name = String(required=True)
21 parameters = List(String(), required=True)
22 qasm_def = String(required=True)
23
24 # Optional properties.
25 coupling_map = List(List(Integer(),
26 validate=Length(min=1)),
27 validate=Length(min=1))
28 latency_map = List(List(Integer(validate=OneOf([0, 1])),
29 validate=Length(min=1)),
30 validate=Length(min=1))
31 conditional = Boolean()
32 description = String()
33
34
35 class BackendConfigurationSchema(BaseSchema):
36 """Schema for BackendConfiguration."""
37
38 # Required properties.
39 backend_name = String(required=True)
40 backend_version = String(required=True,
41 validate=Regexp("[0-9]+.[0-9]+.[0-9]+$"))
42 n_qubits = Integer(required=True, validate=Range(min=1))
43 basis_gates = List(String(), required=True,
44 validate=Length(min=1))
45 gates = Nested(GateConfigSchema, required=True, many=True,
46 validate=Length(min=1))
47 local = Boolean(required=True)
48 simulator = Boolean(required=True)
49 conditional = Boolean(required=True)
50 open_pulse = Boolean(required=True, validate=Equal(False))
51 memory = Boolean(required=True)
52 max_shots = Integer(required=True, validate=Range(min=1))
53
54 # Optional properties.
55 max_experiments = Integer(validate=Range(min=1))
56 sample_name = String()
57 coupling_map = List(List(Integer(),
58 validate=Length(min=1)),
59 validate=Length(min=1))
60 n_registers = Integer(validate=Range(min=1))
61 register_map = List(List(Integer(validate=OneOf([0, 1])),
62 validate=Length(min=1)),
63 validate=Length(min=1))
64 configurable = Boolean()
65 credits_required = Boolean()
66 online_date = DateTime()
67 display_name = String()
68 description = String()
69 tags = List(String())
70
71
72 @bind_schema(GateConfigSchema)
73 class GateConfig(BaseModel):
74 """Model for GateConfig.
75
76 Please note that this class only describes the required fields. For the
77 full description of the model, please check ``GateConfigSchema``.
78
79 Attributes:
80 name (str): the gate name as it will be referred to in QASM.
81 parameters (list[str]): variable names for the gate parameters (if any).
82 qasm_def (str): definition of this gate in terms of QASM primitives U
83 and CX.
84 """
85
86 def __init__(self, name, parameters, qasm_def, **kwargs):
87 self.name = name
88 self.parameters = parameters
89 self.qasm_def = qasm_def
90
91 super().__init__(**kwargs)
92
93
94 @bind_schema(BackendConfigurationSchema)
95 class BackendConfiguration(BaseModel):
96 """Model for BackendConfiguration.
97
98 Please note that this class only describes the required fields. For the
99 full description of the model, please check ``BackendConfigurationSchema``.
100 Attributes:
101 backend_name (str): backend name.
102 backend_version (str): backend version in the form X.Y.Z.
103 n_qubits (int): number of qubits.
104 basis_gates (list[str]): list of basis gates names on the backend.
105 gates (GateConfig): list of basis gates on the backend.
106 local (bool): backend is local or remote.
107 simulator (bool): backend is a simulator.
108 conditional (bool): backend supports conditional operations.
109 open_pulse (bool): backend supports open pulse.
110 memory (bool): backend supports memory.
111 max_shots (int): maximum number of shots supported.
112 """
113
114 def __init__(self, backend_name, backend_version, n_qubits, basis_gates,
115 gates, local, simulator, conditional, open_pulse, memory,
116 max_shots, **kwargs):
117 self.backend_name = backend_name
118 self.backend_version = backend_version
119 self.n_qubits = n_qubits
120 self.basis_gates = basis_gates
121 self.gates = gates
122 self.local = local
123 self.simulator = simulator
124 self.conditional = conditional
125 self.open_pulse = open_pulse
126 self.memory = memory
127 self.max_shots = max_shots
128
129 super().__init__(**kwargs)
130
[end of qiskit/providers/models/backendconfiguration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/providers/models/backendconfiguration.py b/qiskit/providers/models/backendconfiguration.py
--- a/qiskit/providers/models/backendconfiguration.py
+++ b/qiskit/providers/models/backendconfiguration.py
@@ -7,7 +7,7 @@
"""Model and schema for backend configuration."""
-from marshmallow.validate import Equal, Length, OneOf, Range, Regexp
+from marshmallow.validate import Length, OneOf, Range, Regexp
from qiskit.validation import BaseModel, BaseSchema, bind_schema
from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String
@@ -47,7 +47,7 @@
local = Boolean(required=True)
simulator = Boolean(required=True)
conditional = Boolean(required=True)
- open_pulse = Boolean(required=True, validate=Equal(False))
+ open_pulse = Boolean(required=True)
memory = Boolean(required=True)
max_shots = Integer(required=True, validate=Range(min=1))
| {"golden_diff": "diff --git a/qiskit/providers/models/backendconfiguration.py b/qiskit/providers/models/backendconfiguration.py\n--- a/qiskit/providers/models/backendconfiguration.py\n+++ b/qiskit/providers/models/backendconfiguration.py\n@@ -7,7 +7,7 @@\n \n \"\"\"Model and schema for backend configuration.\"\"\"\n \n-from marshmallow.validate import Equal, Length, OneOf, Range, Regexp\n+from marshmallow.validate import Length, OneOf, Range, Regexp\n \n from qiskit.validation import BaseModel, BaseSchema, bind_schema\n from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n@@ -47,7 +47,7 @@\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n- open_pulse = Boolean(required=True, validate=Equal(False))\n+ open_pulse = Boolean(required=True)\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n", "issue": "BackendConfiguration fails validation if backend supports pulse\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Informations\r\n\r\n- **Qiskit Terra version**:0.8.0\r\n- **Python version**3.6.6\r\n- **Operating system**:OSX\r\n\r\n### What is the current behavior?\r\nIf a backend sets `open_pulse=true` in its configuration Qiskit will raise a validation error when creating a `BackendConfigurationSchema`\r\n\r\n\r\n### Steps to reproduce the problem\r\nCreate a backend with `open_pulse=true` set in its configuration.\r\n\r\n\r\n### What is the expected behavior?\r\nShould not fail.\r\n\r\n\r\n### Suggested solutions\r\nAllow `open_pulse=true` to be valid.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2018, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"Model and schema for backend configuration.\"\"\"\n\nfrom marshmallow.validate import Equal, Length, OneOf, Range, Regexp\n\nfrom qiskit.validation import BaseModel, BaseSchema, bind_schema\nfrom qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n\n\nclass GateConfigSchema(BaseSchema):\n \"\"\"Schema for GateConfig.\"\"\"\n\n # Required properties.\n name = String(required=True)\n parameters = List(String(), required=True)\n qasm_def = String(required=True)\n\n # Optional properties.\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n latency_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n conditional = Boolean()\n description = String()\n\n\nclass BackendConfigurationSchema(BaseSchema):\n \"\"\"Schema for BackendConfiguration.\"\"\"\n\n # Required properties.\n backend_name = String(required=True)\n backend_version = String(required=True,\n validate=Regexp(\"[0-9]+.[0-9]+.[0-9]+$\"))\n n_qubits = Integer(required=True, validate=Range(min=1))\n basis_gates = List(String(), required=True,\n validate=Length(min=1))\n gates = Nested(GateConfigSchema, required=True, many=True,\n validate=Length(min=1))\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n open_pulse = Boolean(required=True, validate=Equal(False))\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n\n # Optional properties.\n max_experiments = 
Integer(validate=Range(min=1))\n sample_name = String()\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n n_registers = Integer(validate=Range(min=1))\n register_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n configurable = Boolean()\n credits_required = Boolean()\n online_date = DateTime()\n display_name = String()\n description = String()\n tags = List(String())\n\n\n@bind_schema(GateConfigSchema)\nclass GateConfig(BaseModel):\n \"\"\"Model for GateConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``GateConfigSchema``.\n\n Attributes:\n name (str): the gate name as it will be referred to in QASM.\n parameters (list[str]): variable names for the gate parameters (if any).\n qasm_def (str): definition of this gate in terms of QASM primitives U\n and CX.\n \"\"\"\n\n def __init__(self, name, parameters, qasm_def, **kwargs):\n self.name = name\n self.parameters = parameters\n self.qasm_def = qasm_def\n\n super().__init__(**kwargs)\n\n\n@bind_schema(BackendConfigurationSchema)\nclass BackendConfiguration(BaseModel):\n \"\"\"Model for BackendConfiguration.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``BackendConfigurationSchema``.\n Attributes:\n backend_name (str): backend name.\n backend_version (str): backend version in the form X.Y.Z.\n n_qubits (int): number of qubits.\n basis_gates (list[str]): list of basis gates names on the backend.\n gates (GateConfig): list of basis gates on the backend.\n local (bool): backend is local or remote.\n simulator (bool): backend is a simulator.\n conditional (bool): backend supports conditional operations.\n open_pulse (bool): backend supports open pulse.\n memory (bool): backend supports memory.\n max_shots (int): maximum number of shots supported.\n \"\"\"\n\n def __init__(self, backend_name, backend_version, n_qubits, basis_gates,\n gates, local, simulator, conditional, open_pulse, memory,\n max_shots, **kwargs):\n self.backend_name = backend_name\n self.backend_version = backend_version\n self.n_qubits = n_qubits\n self.basis_gates = basis_gates\n self.gates = gates\n self.local = local\n self.simulator = simulator\n self.conditional = conditional\n self.open_pulse = open_pulse\n self.memory = memory\n self.max_shots = max_shots\n\n super().__init__(**kwargs)\n", "path": "qiskit/providers/models/backendconfiguration.py"}]} | 2,024 | 209 |
gh_patches_debug_23149 | rasdani/github-patches | git_diff | frappe__frappe-26301 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typing validations should be ignored for tests
## Description of the issue
https://github.com/frappe/frappe/blob/010aa4636ace30a9df4c09f0ca991169f34274b9/frappe/utils/typing_validations.py#L164
If you're writing Frappe tests using the `unittest.mock` module, there might be cases where the argument object is replaced with a `Mock` or `MagicMock` object. This breaks typing validations when running CI tests using the `develop` branch.
I think a reasonable approach could be to either ignore all validations during tests, and/or allow configuring this behaviour per-test (with the default being "ignore").
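To make the failure mode concrete, here is a minimal sketch — the decorated function below is made up, but any whitelisted function whose argument gets replaced by a `MagicMock` in a test hits the same code path:
```python
from unittest import mock

from frappe.utils.typing_validations import validate_argument_types


@validate_argument_types
def rename_doc(doctype: str, old: str, new: str):
    ...


# typical test setup: a real value is swapped out for a mock
rename_doc(mock.MagicMock(), "old-name", "new-name")
# raises FrappeTypeError, because a MagicMock is not a str
```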
## Context
**Output of `bench version`**
```
frappe 14.14.2
```
</issue>
<code>
[start of frappe/utils/typing_validations.py]
1 from collections.abc import Callable
2 from functools import lru_cache, wraps
3 from inspect import _empty, isclass, signature
4 from types import EllipsisType
5 from typing import ForwardRef, TypeVar, Union
6
7 from pydantic import ConfigDict
8
9 from frappe.exceptions import FrappeTypeError
10
11 SLACK_DICT = {
12 bool: (int, bool, float),
13 }
14 T = TypeVar("T")
15
16
17 FrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)
18
19
20 def validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):
21 @wraps(func)
22 def wrapper(*args, **kwargs):
23 """Validate argument types of whitelisted functions.
24
25 :param args: Function arguments.
26 :param kwargs: Function keyword arguments."""
27
28 if apply_condition():
29 args, kwargs = transform_parameter_types(func, args, kwargs)
30
31 return func(*args, **kwargs)
32
33 return wrapper
34
35
36 def qualified_name(obj) -> str:
37 """
38 Return the qualified name (e.g. package.module.Type) for the given object.
39
40 Builtins and types from the :mod:typing package get special treatment by having the module
41 name stripped from the generated name.
42
43 """
44 discovered_type = obj if isclass(obj) else type(obj)
45 module, qualname = discovered_type.__module__, discovered_type.__qualname__
46
47 if module in {"typing", "types"}:
48 return obj
49 elif module in {"builtins"}:
50 return qualname
51 else:
52 return f"{module}.{qualname}"
53
54
55 def raise_type_error(
56 arg_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None
57 ):
58 """
59 Raise a TypeError with a message that includes the name of the argument, the expected type
60 and the actual type of the value passed.
61
62 """
63 raise FrappeTypeError(
64 f"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got "
65 f"'{qualified_name(arg_value)}' instead."
66 ) from current_exception
67
68
69 @lru_cache(maxsize=2048)
70 def TypeAdapter(type_):
71 from pydantic import TypeAdapter as PyTypeAdapter
72
73 return PyTypeAdapter(type_, config=FrappePydanticConfig)
74
75
76 def transform_parameter_types(func: Callable, args: tuple, kwargs: dict):
77 """
78 Validate the types of the arguments passed to a function with the type annotations
79 defined on the function.
80
81 """
82 if not (args or kwargs) or not func.__annotations__:
83 return args, kwargs
84
85 from pydantic import ValidationError as PyValidationError
86
87 annotations = func.__annotations__
88 new_args, new_kwargs = list(args), kwargs
89
90 # generate kwargs dict from args
91 arg_names = func.__code__.co_varnames[: func.__code__.co_argcount]
92
93 if not args:
94 prepared_args = kwargs
95
96 elif kwargs:
97 arg_values = args or func.__defaults__ or []
98 prepared_args = dict(zip(arg_names, arg_values, strict=False))
99 prepared_args.update(kwargs)
100
101 else:
102 prepared_args = dict(zip(arg_names, args, strict=False))
103
104 # check if type hints dont match the default values
105 func_signature = signature(func)
106 func_params = dict(func_signature.parameters)
107
108 # check if the argument types are correct
109 for current_arg, current_arg_type in annotations.items():
110 if current_arg not in prepared_args:
111 continue
112
113 current_arg_value = prepared_args[current_arg]
114
115 # if the type is a ForwardRef or str, ignore it
116 if isinstance(current_arg_type, ForwardRef | str):
117 continue
118 elif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, "__args__", [])):
119 continue
120
121 # allow slack for Frappe types
122 if current_arg_type in SLACK_DICT:
123 current_arg_type = SLACK_DICT[current_arg_type]
124
125 param_def = func_params.get(current_arg)
126
127 # add default value's type in acceptable types
128 if param_def.default is not _empty:
129 if isinstance(current_arg_type, tuple):
130 if type(param_def.default) not in current_arg_type:
131 current_arg_type += (type(param_def.default),)
132 current_arg_type = Union[current_arg_type] # noqa: UP007
133
134 elif param_def.default != current_arg_type:
135 current_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007
136 elif isinstance(current_arg_type, tuple):
137 current_arg_type = Union[current_arg_type] # noqa: UP007
138
139 # validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned
140 try:
141 current_arg_value_after = TypeAdapter(current_arg_type).validate_python(current_arg_value)
142 except (TypeError, PyValidationError) as e:
143 raise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)
144
145 if isinstance(current_arg_value_after, EllipsisType):
146 raise_type_error(current_arg, current_arg_type, current_arg_value)
147
148 # update the args and kwargs with possibly casted value
149 if current_arg in kwargs:
150 new_kwargs[current_arg] = current_arg_value_after
151 else:
152 new_args[arg_names.index(current_arg)] = current_arg_value_after
153
154 return new_args, new_kwargs
155
[end of frappe/utils/typing_validations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/frappe/utils/typing_validations.py b/frappe/utils/typing_validations.py
--- a/frappe/utils/typing_validations.py
+++ b/frappe/utils/typing_validations.py
@@ -3,6 +3,7 @@
from inspect import _empty, isclass, signature
from types import EllipsisType
from typing import ForwardRef, TypeVar, Union
+from unittest import mock
from pydantic import ConfigDict
@@ -77,8 +78,8 @@
"""
Validate the types of the arguments passed to a function with the type annotations
defined on the function.
-
"""
+
if not (args or kwargs) or not func.__annotations__:
return args, kwargs
@@ -117,6 +118,9 @@
continue
elif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, "__args__", [])):
continue
+ # ignore unittest.mock objects
+ elif isinstance(current_arg_value, mock.Mock):
+ continue
# allow slack for Frappe types
if current_arg_type in SLACK_DICT:
| {"golden_diff": "diff --git a/frappe/utils/typing_validations.py b/frappe/utils/typing_validations.py\n--- a/frappe/utils/typing_validations.py\n+++ b/frappe/utils/typing_validations.py\n@@ -3,6 +3,7 @@\n from inspect import _empty, isclass, signature\n from types import EllipsisType\n from typing import ForwardRef, TypeVar, Union\n+from unittest import mock\n \n from pydantic import ConfigDict\n \n@@ -77,8 +78,8 @@\n \t\"\"\"\n \tValidate the types of the arguments passed to a function with the type annotations\n \tdefined on the function.\n-\n \t\"\"\"\n+\n \tif not (args or kwargs) or not func.__annotations__:\n \t\treturn args, kwargs\n \n@@ -117,6 +118,9 @@\n \t\t\tcontinue\n \t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n \t\t\tcontinue\n+\t\t# ignore unittest.mock objects\n+\t\telif isinstance(current_arg_value, mock.Mock):\n+\t\t\tcontinue\n \n \t\t# allow slack for Frappe types\n \t\tif current_arg_type in SLACK_DICT:\n", "issue": "Typing validations should be ignored for tests\n## Description of the issue\r\nhttps://github.com/frappe/frappe/blob/010aa4636ace30a9df4c09f0ca991169f34274b9/frappe/utils/typing_validations.py#L164\r\n\r\nIf you're writing Frappe tests using the `unittest.mock` module, there might be cases where the argument object is replaced with a `Mock` or `MagicMock` object. This breaks typing validations when running CI tests using the `develop` branch.\r\n\r\nI think a reasonable approach could be to either ignore all validations during tests, and/or allow configuring this behaviour per-test (with the default being \"ignore\").\r\n\r\n## Context\r\n\r\n**Output of `bench version`**\r\n```\r\nfrappe 14.14.2\r\n```\r\n\n", "before_files": [{"content": "from collections.abc import Callable\nfrom functools import lru_cache, wraps\nfrom inspect import _empty, isclass, signature\nfrom types import EllipsisType\nfrom typing import ForwardRef, TypeVar, Union\n\nfrom pydantic import ConfigDict\n\nfrom frappe.exceptions import FrappeTypeError\n\nSLACK_DICT = {\n\tbool: (int, bool, float),\n}\nT = TypeVar(\"T\")\n\n\nFrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)\n\n\ndef validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):\n\t@wraps(func)\n\tdef wrapper(*args, **kwargs):\n\t\t\"\"\"Validate argument types of whitelisted functions.\n\n\t\t:param args: Function arguments.\n\t\t:param kwargs: Function keyword arguments.\"\"\"\n\n\t\tif apply_condition():\n\t\t\targs, kwargs = transform_parameter_types(func, args, kwargs)\n\n\t\treturn func(*args, **kwargs)\n\n\treturn wrapper\n\n\ndef qualified_name(obj) -> str:\n\t\"\"\"\n\tReturn the qualified name (e.g. 
package.module.Type) for the given object.\n\n\tBuiltins and types from the :mod:typing package get special treatment by having the module\n\tname stripped from the generated name.\n\n\t\"\"\"\n\tdiscovered_type = obj if isclass(obj) else type(obj)\n\tmodule, qualname = discovered_type.__module__, discovered_type.__qualname__\n\n\tif module in {\"typing\", \"types\"}:\n\t\treturn obj\n\telif module in {\"builtins\"}:\n\t\treturn qualname\n\telse:\n\t\treturn f\"{module}.{qualname}\"\n\n\ndef raise_type_error(\n\targ_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None\n):\n\t\"\"\"\n\tRaise a TypeError with a message that includes the name of the argument, the expected type\n\tand the actual type of the value passed.\n\n\t\"\"\"\n\traise FrappeTypeError(\n\t\tf\"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got \"\n\t\tf\"'{qualified_name(arg_value)}' instead.\"\n\t) from current_exception\n\n\n@lru_cache(maxsize=2048)\ndef TypeAdapter(type_):\n\tfrom pydantic import TypeAdapter as PyTypeAdapter\n\n\treturn PyTypeAdapter(type_, config=FrappePydanticConfig)\n\n\ndef transform_parameter_types(func: Callable, args: tuple, kwargs: dict):\n\t\"\"\"\n\tValidate the types of the arguments passed to a function with the type annotations\n\tdefined on the function.\n\n\t\"\"\"\n\tif not (args or kwargs) or not func.__annotations__:\n\t\treturn args, kwargs\n\n\tfrom pydantic import ValidationError as PyValidationError\n\n\tannotations = func.__annotations__\n\tnew_args, new_kwargs = list(args), kwargs\n\n\t# generate kwargs dict from args\n\targ_names = func.__code__.co_varnames[: func.__code__.co_argcount]\n\n\tif not args:\n\t\tprepared_args = kwargs\n\n\telif kwargs:\n\t\targ_values = args or func.__defaults__ or []\n\t\tprepared_args = dict(zip(arg_names, arg_values, strict=False))\n\t\tprepared_args.update(kwargs)\n\n\telse:\n\t\tprepared_args = dict(zip(arg_names, args, strict=False))\n\n\t# check if type hints dont match the default values\n\tfunc_signature = signature(func)\n\tfunc_params = dict(func_signature.parameters)\n\n\t# check if the argument types are correct\n\tfor current_arg, current_arg_type in annotations.items():\n\t\tif current_arg not in prepared_args:\n\t\t\tcontinue\n\n\t\tcurrent_arg_value = prepared_args[current_arg]\n\n\t\t# if the type is a ForwardRef or str, ignore it\n\t\tif isinstance(current_arg_type, ForwardRef | str):\n\t\t\tcontinue\n\t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n\t\t\tcontinue\n\n\t\t# allow slack for Frappe types\n\t\tif current_arg_type in SLACK_DICT:\n\t\t\tcurrent_arg_type = SLACK_DICT[current_arg_type]\n\n\t\tparam_def = func_params.get(current_arg)\n\n\t\t# add default value's type in acceptable types\n\t\tif param_def.default is not _empty:\n\t\t\tif isinstance(current_arg_type, tuple):\n\t\t\t\tif type(param_def.default) not in current_arg_type:\n\t\t\t\t\tcurrent_arg_type += (type(param_def.default),)\n\t\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t\telif param_def.default != current_arg_type:\n\t\t\t\tcurrent_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007\n\t\telif isinstance(current_arg_type, tuple):\n\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t# validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned\n\t\ttry:\n\t\t\tcurrent_arg_value_after = 
TypeAdapter(current_arg_type).validate_python(current_arg_value)\n\t\texcept (TypeError, PyValidationError) as e:\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)\n\n\t\tif isinstance(current_arg_value_after, EllipsisType):\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value)\n\n\t\t# update the args and kwargs with possibly casted value\n\t\tif current_arg in kwargs:\n\t\t\tnew_kwargs[current_arg] = current_arg_value_after\n\t\telse:\n\t\t\tnew_args[arg_names.index(current_arg)] = current_arg_value_after\n\n\treturn new_args, new_kwargs\n", "path": "frappe/utils/typing_validations.py"}]} | 2,285 | 251 |
gh_patches_debug_16021 | rasdani/github-patches | git_diff | wagtail__wagtail-8270 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ThumbnailMixin does not display the value defined under thumb_col_header_text in the list header
<!--
Found a bug? Please fill out the sections below. 👍
-->
### Issue Summary
When adding ThumbnailMixin to a ModelAdmin and setting its `thumb_col_header_text` attribute, the list header for the thumbnail column should display that value, but it always uses the default text 'image'.


### Steps to Reproduce
1. (for example) Start a new project with `wagtail start myproject`
2. in models.py, add a new model (non-page) with a ForeignKey to wagtailimages.Image
3. add a model admin definition in wagtail_hooks.py (a minimal sketch follows these steps)
4. add ThumbnailMixin to model admin super classes
5. add some value to thumb_col_header_text
6. register new model admin
7. load app
8. add new instance of your new model with an image
9. the list header for your image column will say 'image', not what you defined in thumb_col_header_text
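A minimal `wagtail_hooks.py` along the lines of steps 3–6 (the `Person` model and `photo` field are made-up names, not taken from an actual project):
```python
# wagtail_hooks.py — reproduction sketch
from wagtail.contrib.modeladmin.mixins import ThumbnailMixin
from wagtail.contrib.modeladmin.options import ModelAdmin, modeladmin_register

from .models import Person


class PersonAdmin(ThumbnailMixin, ModelAdmin):
    model = Person
    menu_label = "People"
    list_display = ("name", "admin_thumb")
    thumb_image_field_name = "photo"
    thumb_col_header_text = "photo"  # expected column header, but it still reads "image"


modeladmin_register(PersonAdmin)
```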
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes)
* I already know why this is happening and will submit a pull request shortly
### Technical details
* Python version: 3.9.7
* Django version: 4.0.3
* Wagtail version: 2.16.1
* Browser version: Chrome Version 100.0.4896.60 (Official Build) (x86_64)
</issue>
<code>
[start of wagtail/contrib/modeladmin/mixins.py]
1 from django.conf import settings
2 from django.core.exceptions import ImproperlyConfigured
3 from django.forms.utils import flatatt
4 from django.utils.safestring import mark_safe
5 from django.utils.translation import gettext_lazy as _
6
7
8 class ThumbnailMixin:
9 """
10 Mixin class to help display thumbnail images in ModelAdmin listing results.
11 `thumb_image_field_name` must be overridden to name a ForeignKey field on
12 your model, linking to `wagtailimages.Image`.
13 """
14
15 thumb_image_field_name = "image"
16 thumb_image_filter_spec = "fill-100x100"
17 thumb_image_width = 50
18 thumb_classname = "admin-thumb"
19 thumb_col_header_text = _("image")
20 thumb_default = None
21
22 def __init__(self, *args, **kwargs):
23 if "wagtail.images" not in settings.INSTALLED_APPS:
24 raise ImproperlyConfigured(
25 "The `wagtail.images` app must be installed in order "
26 "to use the `ThumbnailMixin` class."
27 )
28 super().__init__(*args, **kwargs)
29
30 def admin_thumb(self, obj):
31 try:
32 image = getattr(obj, self.thumb_image_field_name, None)
33 except AttributeError:
34 raise ImproperlyConfigured(
35 "The `thumb_image_field_name` attribute on your `%s` class "
36 "must name a field on your model." % self.__class__.__name__
37 )
38
39 img_attrs = {
40 "src": self.thumb_default,
41 "width": self.thumb_image_width,
42 "class": self.thumb_classname,
43 }
44 if not image:
45 if self.thumb_default:
46 return mark_safe("<img{}>".format(flatatt(img_attrs)))
47 return ""
48
49 # try to get a rendition of the image to use
50 from wagtail.images.shortcuts import get_rendition_or_not_found
51
52 spec = self.thumb_image_filter_spec
53 rendition = get_rendition_or_not_found(image, spec)
54 img_attrs.update({"src": rendition.url})
55 return mark_safe("<img{}>".format(flatatt(img_attrs)))
56
57 admin_thumb.short_description = thumb_col_header_text
58
[end of wagtail/contrib/modeladmin/mixins.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/contrib/modeladmin/mixins.py b/wagtail/contrib/modeladmin/mixins.py
--- a/wagtail/contrib/modeladmin/mixins.py
+++ b/wagtail/contrib/modeladmin/mixins.py
@@ -25,6 +25,7 @@
"The `wagtail.images` app must be installed in order "
"to use the `ThumbnailMixin` class."
)
+ self.__class__.admin_thumb.short_description = self.thumb_col_header_text
super().__init__(*args, **kwargs)
def admin_thumb(self, obj):
@@ -53,5 +54,3 @@
rendition = get_rendition_or_not_found(image, spec)
img_attrs.update({"src": rendition.url})
return mark_safe("<img{}>".format(flatatt(img_attrs)))
-
- admin_thumb.short_description = thumb_col_header_text
| {"golden_diff": "diff --git a/wagtail/contrib/modeladmin/mixins.py b/wagtail/contrib/modeladmin/mixins.py\n--- a/wagtail/contrib/modeladmin/mixins.py\n+++ b/wagtail/contrib/modeladmin/mixins.py\n@@ -25,6 +25,7 @@\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n+ self.__class__.admin_thumb.short_description = self.thumb_col_header_text\n super().__init__(*args, **kwargs)\n \n def admin_thumb(self, obj):\n@@ -53,5 +54,3 @@\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n-\n- admin_thumb.short_description = thumb_col_header_text\n", "issue": "ThumbnailMixin does not display in header the value defined under thumb_col_header_text \n<!--\r\nFound a bug? Please fill out the sections below. \ud83d\udc4d\r\n-->\r\n\r\n### Issue Summary\r\n\r\nWhen adding ThumbnailMixin to a ModelAdmin, and giving it the `thumb_col_header_text` attribute, should display that on the list header for the thumbnail. but it always uses the default defined 'image' \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n1. (for example) Start a new project with `wagtail start myproject`\r\n2. in models.py add a new model (non page) with a forignkey to wagtailimages.Image \r\n3. add model admin definition in wagtail_hooks.py\r\n4. add ThumbnailMixin to model admin super classes\r\n5. add some value to thumb_col_header_text\r\n6. register new model admin\r\n7. load app\r\n8. add new instance of your new model with an image\r\n9. in list header for your image it will say 'image' not what you defined in thumb_col_header_text\r\n\r\nAny other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes)\r\n* i already know why this is happening and will submit a pull request shortly\r\n\r\n\r\n### Technical details\r\n\r\n* Python version: 3.9.7\r\n* Django version: 4.0.3\r\n* Wagtail version: 2.16.1\r\n* Browser version: Chrome Version 100.0.4896.60 (Official Build) (x86_64)\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.forms.utils import flatatt\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass ThumbnailMixin:\n \"\"\"\n Mixin class to help display thumbnail images in ModelAdmin listing results.\n `thumb_image_field_name` must be overridden to name a ForeignKey field on\n your model, linking to `wagtailimages.Image`.\n \"\"\"\n\n thumb_image_field_name = \"image\"\n thumb_image_filter_spec = \"fill-100x100\"\n thumb_image_width = 50\n thumb_classname = \"admin-thumb\"\n thumb_col_header_text = _(\"image\")\n thumb_default = None\n\n def __init__(self, *args, **kwargs):\n if \"wagtail.images\" not in settings.INSTALLED_APPS:\n raise ImproperlyConfigured(\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n super().__init__(*args, **kwargs)\n\n def admin_thumb(self, obj):\n try:\n image = getattr(obj, self.thumb_image_field_name, None)\n except AttributeError:\n raise ImproperlyConfigured(\n \"The `thumb_image_field_name` attribute on your `%s` class \"\n \"must name a field on your model.\" % self.__class__.__name__\n )\n\n img_attrs = {\n \"src\": self.thumb_default,\n \"width\": 
self.thumb_image_width,\n \"class\": self.thumb_classname,\n }\n if not image:\n if self.thumb_default:\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n return \"\"\n\n # try to get a rendition of the image to use\n from wagtail.images.shortcuts import get_rendition_or_not_found\n\n spec = self.thumb_image_filter_spec\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n\n admin_thumb.short_description = thumb_col_header_text\n", "path": "wagtail/contrib/modeladmin/mixins.py"}]} | 1,587 | 197 |
gh_patches_debug_1095 | rasdani/github-patches | git_diff | python-poetry__poetry-277 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Discrepancy regarding license between doc and poetry init
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **OS version and name**: Manjaro Linux
- **Poetry version**: 0.11.1
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**:
## Issue
<!-- Now feel free to write your issue, but please be descriptive! Thanks again 🙌 ❤️ -->
During the `license` prompt of `poetry init`, a valid license is required as input. According to the documentation, a license is highly recommended, but not actually required. This discrepancy should be removed by updating either the documentation or the code.
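For reference, the code-side option boils down to letting the validator accept a blank answer — a rough sketch only, since whether the code or the documentation should change is exactly the open question:
```python
# sketch: skip the SPDX lookup when the prompt is left blank
def _validate_license(self, license):
    from poetry.spdx import license_by_id

    if license:
        license_by_id(license)

    return license
```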
</issue>
<code>
[start of poetry/console/commands/init.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 import re
5
6 from typing import List
7 from typing import Tuple
8
9 from .command import Command
10 from .venv_command import VenvCommand
11
12
13 class InitCommand(Command):
14 """
15 Creates a basic <comment>pyproject.toml</> file in the current directory.
16
17 init
18 {--name= : Name of the package}
19 {--description= : Description of the package}
20 {--author= : Author name of the package}
21 {--dependency=* : Package to require with an optional version constraint,
22 e.g. requests:^2.10.0 or requests=2.11.1}
23 {--dev-dependency=* : Package to require for development with an optional version constraint,
24 e.g. requests:^2.10.0 or requests=2.11.1}
25 {--l|license= : License of the package}
26 """
27
28 help = """\
29 The <info>init</info> command creates a basic <comment>pyproject.toml</> file in the current directory.
30 """
31
32 def __init__(self):
33 super(InitCommand, self).__init__()
34
35 self._pool = None
36
37 def handle(self):
38 from poetry.layouts import layout
39 from poetry.utils._compat import Path
40 from poetry.vcs.git import GitConfig
41
42 if (Path.cwd() / "pyproject.toml").exists():
43 self.error("A pyproject.toml file already exists.")
44 return 1
45
46 vcs_config = GitConfig()
47
48 self.line(
49 [
50 "",
51 "This command will guide you through creating your <info>poetry.toml</> config.",
52 "",
53 ]
54 )
55
56 name = self.option("name")
57 if not name:
58 name = Path.cwd().name.lower()
59
60 question = self.create_question(
61 "Package name [<comment>{}</comment>]: ".format(name), default=name
62 )
63 name = self.ask(question)
64
65 version = "0.1.0"
66 question = self.create_question(
67 "Version [<comment>{}</comment>]: ".format(version), default=version
68 )
69 version = self.ask(question)
70
71 description = self.option("description") or ""
72 question = self.create_question(
73 "Description [<comment>{}</comment>]: ".format(description),
74 default=description,
75 )
76 description = self.ask(question)
77
78 author = self.option("author")
79 if not author and vcs_config and vcs_config.get("user.name"):
80 author = vcs_config["user.name"]
81 author_email = vcs_config.get("user.email")
82 if author_email:
83 author += " <{}>".format(author_email)
84
85 question = self.create_question(
86 "Author [<comment>{}</comment>, n to skip]: ".format(author), default=author
87 )
88 question.validator = lambda v: self._validate_author(v, author)
89 author = self.ask(question)
90
91 if not author:
92 authors = []
93 else:
94 authors = [author]
95
96 license = self.option("license") or ""
97
98 question = self.create_question(
99 "License [<comment>{}</comment>]: ".format(license), default=license
100 )
101 question.validator = self._validate_license
102 license = self.ask(question)
103
104 question = self.create_question("Compatible Python versions [*]: ", default="*")
105 python = self.ask(question)
106
107 self.line("")
108
109 requirements = {}
110
111 question = "Would you like to define your dependencies" " (require) interactively?"
112 if self.confirm(question, True):
113 requirements = self._format_requirements(
114 self._determine_requirements(self.option("dependency"))
115 )
116
117 dev_requirements = {}
118
119 question = "Would you like to define your dev dependencies" " (require-dev) interactively"
120 if self.confirm(question, True):
121 dev_requirements = self._format_requirements(
122 self._determine_requirements(self.option("dev-dependency"))
123 )
124
125 layout_ = layout("standard")(
126 name,
127 version,
128 description=description,
129 author=authors[0] if authors else None,
130 license=license,
131 python=python,
132 dependencies=requirements,
133 dev_dependencies=dev_requirements,
134 )
135
136 content = layout_.generate_poetry_content()
137 if self.input.is_interactive():
138 self.line("<info>Generated file</info>")
139 self.line(["", content, ""])
140
141 if not self.confirm("Do you confirm generation?", True):
142 self.line("<error>Command aborted</error>")
143
144 return 1
145
146 with (Path.cwd() / "pyproject.toml").open("w") as f:
147 f.write(content)
148
149 def _determine_requirements(
150 self, requires, allow_prereleases=False # type: List[str] # type: bool
151 ): # type: (...) -> List[str]
152 if not requires:
153 requires = []
154
155 package = self.ask("Search for package:")
156 while package is not None:
157 matches = self._get_pool().search(package)
158
159 if not matches:
160 self.line("<error>Unable to find package</error>")
161 package = False
162 else:
163 choices = []
164
165 for found_package in matches:
166 choices.append(found_package.pretty_name)
167
168 self.line(
169 "Found <info>{}</info> packages matching <info>{}</info>".format(
170 len(matches), package
171 )
172 )
173
174 package = self.choice(
175 "\nEnter package # to add, or the complete package name if it is not listed",
176 choices,
177 attempts=3,
178 )
179
180 # no constraint yet, determine the best version automatically
181 if package is not False and " " not in package:
182 question = self.create_question(
183 "Enter the version constraint to require "
184 "(or leave blank to use the latest version):"
185 )
186 question.attempts = 3
187 question.validator = lambda x: (x or "").strip() or False
188
189 constraint = self.ask(question)
190
191 if constraint is False:
192 _, constraint = self._find_best_version_for_package(package)
193
194 self.line(
195 "Using version <info>{}</info> for <info>{}</info>".format(
196 constraint, package
197 )
198 )
199
200 package += " {}".format(constraint)
201
202 if package is not False:
203 requires.append(package)
204
205 package = self.ask("\nSearch for a package:")
206
207 return requires
208
209 requires = self._parse_name_version_pairs(requires)
210 result = []
211 for requirement in requires:
212 if "version" not in requirement:
213 # determine the best version automatically
214 name, version = self._find_best_version_for_package(
215 requirement["name"], allow_prereleases=allow_prereleases
216 )
217 requirement["version"] = version
218 requirement["name"] = name
219
220 self.line(
221 "Using version <info>{}</> for <info>{}</>".format(version, name)
222 )
223 else:
224 # check that the specified version/constraint exists
225 # before we proceed
226 name, _ = self._find_best_version_for_package(
227 requirement["name"],
228 requirement["version"],
229 allow_prereleases=allow_prereleases,
230 )
231
232 requirement["name"] = name
233
234 result.append("{} {}".format(requirement["name"], requirement["version"]))
235
236 return result
237
238 def _find_best_version_for_package(
239 self, name, required_version=None, allow_prereleases=False
240 ): # type: (...) -> Tuple[str, str]
241 from poetry.version.version_selector import VersionSelector
242
243 selector = VersionSelector(self._get_pool())
244 package = selector.find_best_candidate(
245 name, required_version, allow_prereleases=allow_prereleases
246 )
247
248 if not package:
249 # TODO: find similar
250 raise ValueError(
251 "Could not find a matching version of package {}".format(name)
252 )
253
254 return (package.pretty_name, selector.find_recommended_require_version(package))
255
256 def _parse_name_version_pairs(self, pairs): # type: (list) -> list
257 result = []
258
259 for i in range(len(pairs)):
260 pair = re.sub("^([^=: ]+)[=: ](.*)$", "\\1 \\2", pairs[i].strip())
261 pair = pair.strip()
262
263 if " " in pair:
264 name, version = pair.split(" ", 2)
265 result.append({"name": name, "version": version})
266 else:
267 result.append({"name": pair})
268
269 return result
270
271 def _format_requirements(self, requirements): # type: (List[str]) -> dict
272 requires = {}
273 requirements = self._parse_name_version_pairs(requirements)
274 for requirement in requirements:
275 requires[requirement["name"]] = requirement["version"]
276
277 return requires
278
279 def _validate_author(self, author, default):
280 from poetry.packages.package import AUTHOR_REGEX
281
282 author = author or default
283
284 if author in ["n", "no"]:
285 return
286
287 m = AUTHOR_REGEX.match(author)
288 if not m:
289 raise ValueError(
290 "Invalid author string. Must be in the format: "
291 "John Smith <[email protected]>"
292 )
293
294 return author
295
296 def _validate_license(self, license):
297 from poetry.spdx import license_by_id
298
299 license_by_id(license)
300
301 return license
302
303 def _get_pool(self):
304 from poetry.repositories import Pool
305 from poetry.repositories.pypi_repository import PyPiRepository
306
307 if isinstance(self, VenvCommand):
308 return self.poetry.pool
309
310 if self._pool is None:
311 self._pool = Pool()
312 self._pool.add_repository(PyPiRepository())
313
314 return self._pool
315
[end of poetry/console/commands/init.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/poetry/console/commands/init.py b/poetry/console/commands/init.py
--- a/poetry/console/commands/init.py
+++ b/poetry/console/commands/init.py
@@ -296,7 +296,8 @@
def _validate_license(self, license):
from poetry.spdx import license_by_id
- license_by_id(license)
+ if license:
+ license_by_id(license)
return license
| {"golden_diff": "diff --git a/poetry/console/commands/init.py b/poetry/console/commands/init.py\n--- a/poetry/console/commands/init.py\n+++ b/poetry/console/commands/init.py\n@@ -296,7 +296,8 @@\n def _validate_license(self, license):\n from poetry.spdx import license_by_id\n \n- license_by_id(license)\n+ if license:\n+ license_by_id(license)\n \n return license\n", "issue": "Discrepancy regarding license between doc and poetry init\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **OS version and name**: Manjaro Linux\r\n- **Poetry version**: 0.11.1\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: \r\n\r\n## Issue\r\n<!-- Now feel free to write your issue, but please be descriptive! Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\nDuring the `license` prompt of `poetry init`, a valid license is required as input. Acording to the documentation, a license is highly recommended, but not actually required. This descrepancy should be removed by updating either the documentation or the code.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom typing import List\nfrom typing import Tuple\n\nfrom .command import Command\nfrom .venv_command import VenvCommand\n\n\nclass InitCommand(Command):\n \"\"\"\n Creates a basic <comment>pyproject.toml</> file in the current directory.\n\n init\n {--name= : Name of the package}\n {--description= : Description of the package}\n {--author= : Author name of the package}\n {--dependency=* : Package to require with an optional version constraint,\n e.g. requests:^2.10.0 or requests=2.11.1}\n {--dev-dependency=* : Package to require for development with an optional version constraint,\n e.g. 
requests:^2.10.0 or requests=2.11.1}\n {--l|license= : License of the package}\n \"\"\"\n\n help = \"\"\"\\\nThe <info>init</info> command creates a basic <comment>pyproject.toml</> file in the current directory.\n\"\"\"\n\n def __init__(self):\n super(InitCommand, self).__init__()\n\n self._pool = None\n\n def handle(self):\n from poetry.layouts import layout\n from poetry.utils._compat import Path\n from poetry.vcs.git import GitConfig\n\n if (Path.cwd() / \"pyproject.toml\").exists():\n self.error(\"A pyproject.toml file already exists.\")\n return 1\n\n vcs_config = GitConfig()\n\n self.line(\n [\n \"\",\n \"This command will guide you through creating your <info>poetry.toml</> config.\",\n \"\",\n ]\n )\n\n name = self.option(\"name\")\n if not name:\n name = Path.cwd().name.lower()\n\n question = self.create_question(\n \"Package name [<comment>{}</comment>]: \".format(name), default=name\n )\n name = self.ask(question)\n\n version = \"0.1.0\"\n question = self.create_question(\n \"Version [<comment>{}</comment>]: \".format(version), default=version\n )\n version = self.ask(question)\n\n description = self.option(\"description\") or \"\"\n question = self.create_question(\n \"Description [<comment>{}</comment>]: \".format(description),\n default=description,\n )\n description = self.ask(question)\n\n author = self.option(\"author\")\n if not author and vcs_config and vcs_config.get(\"user.name\"):\n author = vcs_config[\"user.name\"]\n author_email = vcs_config.get(\"user.email\")\n if author_email:\n author += \" <{}>\".format(author_email)\n\n question = self.create_question(\n \"Author [<comment>{}</comment>, n to skip]: \".format(author), default=author\n )\n question.validator = lambda v: self._validate_author(v, author)\n author = self.ask(question)\n\n if not author:\n authors = []\n else:\n authors = [author]\n\n license = self.option(\"license\") or \"\"\n\n question = self.create_question(\n \"License [<comment>{}</comment>]: \".format(license), default=license\n )\n question.validator = self._validate_license\n license = self.ask(question)\n\n question = self.create_question(\"Compatible Python versions [*]: \", default=\"*\")\n python = self.ask(question)\n\n self.line(\"\")\n\n requirements = {}\n\n question = \"Would you like to define your dependencies\" \" (require) interactively?\"\n if self.confirm(question, True):\n requirements = self._format_requirements(\n self._determine_requirements(self.option(\"dependency\"))\n )\n\n dev_requirements = {}\n\n question = \"Would you like to define your dev dependencies\" \" (require-dev) interactively\"\n if self.confirm(question, True):\n dev_requirements = self._format_requirements(\n self._determine_requirements(self.option(\"dev-dependency\"))\n )\n\n layout_ = layout(\"standard\")(\n name,\n version,\n description=description,\n author=authors[0] if authors else None,\n license=license,\n python=python,\n dependencies=requirements,\n dev_dependencies=dev_requirements,\n )\n\n content = layout_.generate_poetry_content()\n if self.input.is_interactive():\n self.line(\"<info>Generated file</info>\")\n self.line([\"\", content, \"\"])\n\n if not self.confirm(\"Do you confirm generation?\", True):\n self.line(\"<error>Command aborted</error>\")\n\n return 1\n\n with (Path.cwd() / \"pyproject.toml\").open(\"w\") as f:\n f.write(content)\n\n def _determine_requirements(\n self, requires, allow_prereleases=False # type: List[str] # type: bool\n ): # type: (...) 
-> List[str]\n if not requires:\n requires = []\n\n package = self.ask(\"Search for package:\")\n while package is not None:\n matches = self._get_pool().search(package)\n\n if not matches:\n self.line(\"<error>Unable to find package</error>\")\n package = False\n else:\n choices = []\n\n for found_package in matches:\n choices.append(found_package.pretty_name)\n\n self.line(\n \"Found <info>{}</info> packages matching <info>{}</info>\".format(\n len(matches), package\n )\n )\n\n package = self.choice(\n \"\\nEnter package # to add, or the complete package name if it is not listed\",\n choices,\n attempts=3,\n )\n\n # no constraint yet, determine the best version automatically\n if package is not False and \" \" not in package:\n question = self.create_question(\n \"Enter the version constraint to require \"\n \"(or leave blank to use the latest version):\"\n )\n question.attempts = 3\n question.validator = lambda x: (x or \"\").strip() or False\n\n constraint = self.ask(question)\n\n if constraint is False:\n _, constraint = self._find_best_version_for_package(package)\n\n self.line(\n \"Using version <info>{}</info> for <info>{}</info>\".format(\n constraint, package\n )\n )\n\n package += \" {}\".format(constraint)\n\n if package is not False:\n requires.append(package)\n\n package = self.ask(\"\\nSearch for a package:\")\n\n return requires\n\n requires = self._parse_name_version_pairs(requires)\n result = []\n for requirement in requires:\n if \"version\" not in requirement:\n # determine the best version automatically\n name, version = self._find_best_version_for_package(\n requirement[\"name\"], allow_prereleases=allow_prereleases\n )\n requirement[\"version\"] = version\n requirement[\"name\"] = name\n\n self.line(\n \"Using version <info>{}</> for <info>{}</>\".format(version, name)\n )\n else:\n # check that the specified version/constraint exists\n # before we proceed\n name, _ = self._find_best_version_for_package(\n requirement[\"name\"],\n requirement[\"version\"],\n allow_prereleases=allow_prereleases,\n )\n\n requirement[\"name\"] = name\n\n result.append(\"{} {}\".format(requirement[\"name\"], requirement[\"version\"]))\n\n return result\n\n def _find_best_version_for_package(\n self, name, required_version=None, allow_prereleases=False\n ): # type: (...) 
-> Tuple[str, str]\n from poetry.version.version_selector import VersionSelector\n\n selector = VersionSelector(self._get_pool())\n package = selector.find_best_candidate(\n name, required_version, allow_prereleases=allow_prereleases\n )\n\n if not package:\n # TODO: find similar\n raise ValueError(\n \"Could not find a matching version of package {}\".format(name)\n )\n\n return (package.pretty_name, selector.find_recommended_require_version(package))\n\n def _parse_name_version_pairs(self, pairs): # type: (list) -> list\n result = []\n\n for i in range(len(pairs)):\n pair = re.sub(\"^([^=: ]+)[=: ](.*)$\", \"\\\\1 \\\\2\", pairs[i].strip())\n pair = pair.strip()\n\n if \" \" in pair:\n name, version = pair.split(\" \", 2)\n result.append({\"name\": name, \"version\": version})\n else:\n result.append({\"name\": pair})\n\n return result\n\n def _format_requirements(self, requirements): # type: (List[str]) -> dict\n requires = {}\n requirements = self._parse_name_version_pairs(requirements)\n for requirement in requirements:\n requires[requirement[\"name\"]] = requirement[\"version\"]\n\n return requires\n\n def _validate_author(self, author, default):\n from poetry.packages.package import AUTHOR_REGEX\n\n author = author or default\n\n if author in [\"n\", \"no\"]:\n return\n\n m = AUTHOR_REGEX.match(author)\n if not m:\n raise ValueError(\n \"Invalid author string. Must be in the format: \"\n \"John Smith <[email protected]>\"\n )\n\n return author\n\n def _validate_license(self, license):\n from poetry.spdx import license_by_id\n\n license_by_id(license)\n\n return license\n\n def _get_pool(self):\n from poetry.repositories import Pool\n from poetry.repositories.pypi_repository import PyPiRepository\n\n if isinstance(self, VenvCommand):\n return self.poetry.pool\n\n if self._pool is None:\n self._pool = Pool()\n self._pool.add_repository(PyPiRepository())\n\n return self._pool\n", "path": "poetry/console/commands/init.py"}]} | 3,823 | 103 |
gh_patches_debug_39253 | rasdani/github-patches | git_diff | lightly-ai__lightly-1531 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in `GatherLayer.backward`
Hi,
We've been implementing a model at [cellarium-ml](https://github.com/cellarium-ai/cellarium-ml) using your `NTXentLoss`. Comparing model training with a single GPU and with two GPUs, we noticed that the results do not match. Investigating this, we found an apparent bug in `GatherLayer.backward`, where gradients are not sum-reduced over GPUs. Here is our fixed version (https://github.com/cellarium-ai/cellarium-ml/blob/main/cellarium/ml/distributed/gather.py#L17-L21):
```py
@staticmethod
def backward(ctx, *grads) -> torch.Tensor:
grad_out = grads[dist.get_rank()].contiguous()
dist.all_reduce(grad_out, op=dist.ReduceOp.SUM)
return grad_out
```
together with the [test](https://github.com/cellarium-ai/cellarium-ml/blob/main/tests/distributed/test_gather.py) we wrote for it. Would you agree that this is indeed a bug? I would be happy to contribute a PR with the fix.
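For context, the pattern that exercises `GatherLayer` during training looks roughly like this (toy tensors and a stand-in loss, assuming `torch.distributed` is already initialised — this is not the actual `NTXentLoss` code):
```py
import torch

from lightly.utils import dist as lightly_dist

# stand-in for the embeddings a model produces on this rank
z = torch.randn(8, 128, requires_grad=True)

# gather embeddings from every rank; the result stays differentiable
z_all = torch.cat(lightly_dist.gather(z))

# stand-in contrastive-style loss over local vs. gathered embeddings
loss = (z @ z_all.t()).logsumexp(dim=1).mean()
loss.backward()  # GatherLayer.backward runs here on every rank
```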
</issue>
<code>
[start of lightly/utils/dist.py]
1 from typing import Optional, Tuple
2
3 import torch
4 import torch.distributed as dist
5
6
7 class GatherLayer(torch.autograd.Function):
8 """Gather tensors from all processes, supporting backward propagation.
9
10 This code was taken and adapted from here:
11 https://github.com/Spijkervet/SimCLR
12
13 """
14
15 @staticmethod
16 def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:
17 ctx.save_for_backward(input)
18 output = [torch.empty_like(input) for _ in range(dist.get_world_size())]
19 dist.all_gather(output, input)
20 return tuple(output)
21
22 @staticmethod
23 def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:
24 (input,) = ctx.saved_tensors
25 grad_out = torch.empty_like(input)
26 grad_out[:] = grads[dist.get_rank()]
27 return grad_out
28
29
30 def rank() -> int:
31 """Returns the rank of the current process."""
32 return dist.get_rank() if dist.is_initialized() else 0
33
34
35 def world_size() -> int:
36 """Returns the current world size (number of distributed processes)."""
37 return dist.get_world_size() if dist.is_initialized() else 1
38
39
40 def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:
41 """Gathers this tensor from all processes. Supports backprop."""
42 return GatherLayer.apply(input)
43
44
45 def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:
46 """Returns an (n, n * world_size) zero matrix with the diagonal for the rank
47 of this process set to 1.
48
49 Example output where n=3, the current process has rank 1, and there are
50 4 processes in total:
51
52 rank0 rank1 rank2 rank3
53 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0
54 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0
55 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0
56
57 Equivalent to torch.eye for undistributed settings or if world size == 1.
58
59 Args:
60 n:
61 Size of the square matrix on a single process.
62 device:
63 Device on which the matrix should be created.
64
65 """
66 rows = torch.arange(n, device=device, dtype=torch.long)
67 cols = rows + rank() * n
68 diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)
69 diag_mask[(rows, cols)] = True
70 return diag_mask
71
72
73 def rank_zero_only(fn):
74 """Decorator that only runs the function on the process with rank 0.
75
76 Example:
77 >>> @rank_zero_only
78 >>> def print_rank_zero(message: str):
79 >>> print(message)
80 >>>
81 >>> print_rank_zero("Hello from rank 0!")
82
83 """
84
85 def wrapped(*args, **kwargs):
86 if rank() == 0:
87 return fn(*args, **kwargs)
88
89 return wrapped
90
91
92 @rank_zero_only
93 def print_rank_zero(*args, **kwargs) -> None:
94 """Equivalent to print, but only runs on the process with rank 0."""
95 print(*args, **kwargs)
96
[end of lightly/utils/dist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lightly/utils/dist.py b/lightly/utils/dist.py
--- a/lightly/utils/dist.py
+++ b/lightly/utils/dist.py
@@ -1,29 +1,29 @@
-from typing import Optional, Tuple
+from typing import Any, Callable, Optional, Tuple, TypeVar
import torch
import torch.distributed as dist
+from torch.autograd.function import FunctionCtx
class GatherLayer(torch.autograd.Function):
"""Gather tensors from all processes, supporting backward propagation.
This code was taken and adapted from here:
- https://github.com/Spijkervet/SimCLR
+ https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187
"""
@staticmethod
- def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:
- ctx.save_for_backward(input)
+ def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore
output = [torch.empty_like(input) for _ in range(dist.get_world_size())]
dist.all_gather(output, input)
return tuple(output)
@staticmethod
- def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:
- (input,) = ctx.saved_tensors
- grad_out = torch.empty_like(input)
- grad_out[:] = grads[dist.get_rank()]
+ def backward(ctx, *grads) -> torch.Tensor: # type: ignore
+ all_gradients = torch.stack(grads)
+ dist.all_reduce(all_gradients)
+ grad_out = all_gradients[dist.get_rank()]
return grad_out
@@ -39,7 +39,7 @@
def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:
"""Gathers this tensor from all processes. Supports backprop."""
- return GatherLayer.apply(input)
+ return GatherLayer.apply(input) # type: ignore[no-any-return]
def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:
@@ -70,7 +70,10 @@
return diag_mask
-def rank_zero_only(fn):
+R = TypeVar("R")
+
+
+def rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:
"""Decorator that only runs the function on the process with rank 0.
Example:
@@ -79,17 +82,17 @@
>>> print(message)
>>>
>>> print_rank_zero("Hello from rank 0!")
-
"""
- def wrapped(*args, **kwargs):
+ def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:
if rank() == 0:
return fn(*args, **kwargs)
+ return None
return wrapped
@rank_zero_only
-def print_rank_zero(*args, **kwargs) -> None:
+def print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]
"""Equivalent to print, but only runs on the process with rank 0."""
print(*args, **kwargs)
| {"golden_diff": "diff --git a/lightly/utils/dist.py b/lightly/utils/dist.py\n--- a/lightly/utils/dist.py\n+++ b/lightly/utils/dist.py\n@@ -1,29 +1,29 @@\n-from typing import Optional, Tuple\n+from typing import Any, Callable, Optional, Tuple, TypeVar\n \n import torch\n import torch.distributed as dist\n+from torch.autograd.function import FunctionCtx\n \n \n class GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n \n This code was taken and adapted from here:\n- https://github.com/Spijkervet/SimCLR\n+ https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187\n \n \"\"\"\n \n @staticmethod\n- def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:\n- ctx.save_for_backward(input)\n+ def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n \n @staticmethod\n- def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:\n- (input,) = ctx.saved_tensors\n- grad_out = torch.empty_like(input)\n- grad_out[:] = grads[dist.get_rank()]\n+ def backward(ctx, *grads) -> torch.Tensor: # type: ignore\n+ all_gradients = torch.stack(grads)\n+ dist.all_reduce(all_gradients)\n+ grad_out = all_gradients[dist.get_rank()]\n return grad_out\n \n \n@@ -39,7 +39,7 @@\n \n def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n- return GatherLayer.apply(input)\n+ return GatherLayer.apply(input) # type: ignore[no-any-return]\n \n \n def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n@@ -70,7 +70,10 @@\n return diag_mask\n \n \n-def rank_zero_only(fn):\n+R = TypeVar(\"R\")\n+\n+\n+def rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:\n \"\"\"Decorator that only runs the function on the process with rank 0.\n \n Example:\n@@ -79,17 +82,17 @@\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n-\n \"\"\"\n \n- def wrapped(*args, **kwargs):\n+ def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:\n if rank() == 0:\n return fn(*args, **kwargs)\n+ return None\n \n return wrapped\n \n \n @rank_zero_only\n-def print_rank_zero(*args, **kwargs) -> None:\n+def print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "issue": "Bug in `GatherLayer.backward`\nHi,\r\n\r\nWe've been implementing a model at [cellarium-ml](https://github.com/cellarium-ai/cellarium-ml) using your `NTXentLoss`. Comparing the model training with a single GPU and two GPUs we noticed that they do not match. By investigating it we found an apparent bug in the `GatherLayer.backward` where gradients are not sum-reduced over GPUs. Here is our fixed version (https://github.com/cellarium-ai/cellarium-ml/blob/main/cellarium/ml/distributed/gather.py#L17-L21):\r\n\r\n```py\r\n @staticmethod\r\n def backward(ctx, *grads) -> torch.Tensor:\r\n grad_out = grads[dist.get_rank()].contiguous()\r\n dist.all_reduce(grad_out, op=dist.ReduceOp.SUM)\r\n return grad_out\r\n```\r\n\r\nand the [test](https://github.com/cellarium-ai/cellarium-ml/blob/main/tests/distributed/test_gather.py) we wrote. Would you agree that this is indeed a bug? 
I would be happy to contribute a PR with the fix.\n", "before_files": [{"content": "from typing import Optional, Tuple\n\nimport torch\nimport torch.distributed as dist\n\n\nclass GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n\n This code was taken and adapted from here:\n https://github.com/Spijkervet/SimCLR\n\n \"\"\"\n\n @staticmethod\n def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:\n ctx.save_for_backward(input)\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n\n @staticmethod\n def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:\n (input,) = ctx.saved_tensors\n grad_out = torch.empty_like(input)\n grad_out[:] = grads[dist.get_rank()]\n return grad_out\n\n\ndef rank() -> int:\n \"\"\"Returns the rank of the current process.\"\"\"\n return dist.get_rank() if dist.is_initialized() else 0\n\n\ndef world_size() -> int:\n \"\"\"Returns the current world size (number of distributed processes).\"\"\"\n return dist.get_world_size() if dist.is_initialized() else 1\n\n\ndef gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n return GatherLayer.apply(input)\n\n\ndef eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n \"\"\"Returns an (n, n * world_size) zero matrix with the diagonal for the rank\n of this process set to 1.\n\n Example output where n=3, the current process has rank 1, and there are\n 4 processes in total:\n\n rank0 rank1 rank2 rank3\n 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0\n\n Equivalent to torch.eye for undistributed settings or if world size == 1.\n\n Args:\n n:\n Size of the square matrix on a single process.\n device:\n Device on which the matrix should be created.\n\n \"\"\"\n rows = torch.arange(n, device=device, dtype=torch.long)\n cols = rows + rank() * n\n diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)\n diag_mask[(rows, cols)] = True\n return diag_mask\n\n\ndef rank_zero_only(fn):\n \"\"\"Decorator that only runs the function on the process with rank 0.\n\n Example:\n >>> @rank_zero_only\n >>> def print_rank_zero(message: str):\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n\n \"\"\"\n\n def wrapped(*args, **kwargs):\n if rank() == 0:\n return fn(*args, **kwargs)\n\n return wrapped\n\n\n@rank_zero_only\ndef print_rank_zero(*args, **kwargs) -> None:\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "path": "lightly/utils/dist.py"}]} | 1,690 | 723 |
gh_patches_debug_23382 | rasdani/github-patches | git_diff | coala__coala-1290 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`DocstyleDefinition`: Accept a single marker set also
Via the normal constructor or a class method.
</issue>
<code>
[start of coalib/bearlib/languages/documentation/DocstyleDefinition.py]
1 import os.path
2
3 from coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature
4 from coalib.parsing.ConfParser import ConfParser
5
6
7 @generate_repr()
8 @generate_eq("language", "docstyle", "markers")
9 class DocstyleDefinition:
10 """
11 The DocstyleDefinition class holds values that identify a certain type of
12 documentation comment (for which language, documentation style/tool used
13 etc.).
14 """
15
16 @enforce_signature
17 def __init__(self, language: str, docstyle: str, markers):
18 """
19 Instantiates a new DocstyleDefinition.
20
21 :param language: The case insensitive programming language of the
22 documentation comment, e.g. `"CPP"` for C++ or
23 `"PYTHON3"`.
24 :param docstyle: The case insensitive documentation style/tool used
25 to document code, e.g. `"default"` or `"doxygen"`.
26 :param markers: An iterable of marker/delimiter string iterables that
27 identify a documentation comment. See `markers`
28 property for more details on markers.
29 """
30 self._language = language.lower()
31 self._docstyle = docstyle.lower()
32 self._markers = tuple(tuple(marker_set) for marker_set in markers)
33
34 # Check marker set dimensions.
35 for marker_set in self._markers:
36 length = len(marker_set)
37 if length != 3:
38 raise ValueError("Length of a given marker set was not 3 (was "
39 "actually {}).".format(length))
40
41 @property
42 def language(self):
43 """
44 The programming language.
45
46 :return: A lower-case string defining the programming language (i.e.
47 "cpp" or "python").
48 """
49 return self._language
50
51 @property
52 def docstyle(self):
53 """
54 The documentation style/tool used to document code.
55
56 :return: A lower-case string defining the docstyle (i.e. "default" or
57 "doxygen").
58 """
59 return self._docstyle
60
61 @property
62 def markers(self):
63 """
64 A tuple of marker sets that identify a documentation comment.
65
66 Marker sets consist of 3 entries where the first is the start-marker,
67 the second one the each-line marker and the last one the end-marker.
68 For example a marker tuple with a single marker set
69 `(("/**", "*", "*/"),)` would match following documentation comment:
70
71 ```
72 /**
73 * This is documentation.
74 */
75 ```
76
77 It's also possible to supply an empty each-line marker
78 (`("/**", "", "*/")`):
79
80 ```
81 /**
82 This is more documentation.
83 */
84 ```
85
86 Markers are matched "greedy", that means it will match as many
87 each-line markers as possible. I.e. for `("///", "///", "///")`):
88
89 ```
90 /// Brief documentation.
91 ///
92 /// Detailed documentation.
93 ```
94
95 :return: A tuple of marker/delimiter string tuples that identify a
96 documentation comment.
97 """
98 return self._markers
99
100 @classmethod
101 @enforce_signature
102 def load(cls, language: str, docstyle: str):
103 """
104 Loads a `DocstyleDefinition` from the coala docstyle definition files.
105
106 This function considers all settings inside the according coalang-files
107 as markers.
108
109 :param language: The case insensitive programming language of
110 the documentation comment as a string.
111 :param docstyle: The case insensitive documentation
112 style/tool used to document code, e.g.
113 `"default"` or `"doxygen"`.
114 :raises FileNotFoundError: Raised when the given docstyle was not
115 found.
116 :raises KeyError: Raised when the given language is not
117 defined for given docstyle.
118 :return: The `DocstyleDefinition` for given language
119 and docstyle.
120 """
121
122 docstyle = docstyle.lower()
123
124 language_config_parser = ConfParser(remove_empty_iter_elements=False)
125 try:
126 docstyle_settings = language_config_parser.parse(
127 os.path.dirname(__file__) + "/" + docstyle + ".coalang")
128 except FileNotFoundError:
129 raise FileNotFoundError("Docstyle definition " + repr(docstyle) +
130 " not found.")
131
132 language = language.lower()
133
134 try:
135 docstyle_settings = docstyle_settings[language]
136 except KeyError:
137 raise KeyError("Language {} is not defined for docstyle {}."
138 .format(repr(language), repr(docstyle)))
139
140 marker_sets = (tuple(value)
141 for key, value in
142 filter(lambda kv: not kv[0].startswith("comment"),
143 docstyle_settings.contents.items()))
144
145 return cls(language, docstyle, marker_sets)
146
[end of coalib/bearlib/languages/documentation/DocstyleDefinition.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/coalib/bearlib/languages/documentation/DocstyleDefinition.py b/coalib/bearlib/languages/documentation/DocstyleDefinition.py
--- a/coalib/bearlib/languages/documentation/DocstyleDefinition.py
+++ b/coalib/bearlib/languages/documentation/DocstyleDefinition.py
@@ -23,12 +23,19 @@
`"PYTHON3"`.
:param docstyle: The case insensitive documentation style/tool used
to document code, e.g. `"default"` or `"doxygen"`.
- :param markers: An iterable of marker/delimiter string iterables that
+ :param markers: An iterable of marker/delimiter string iterables
+ or a single marker/delimiter string iterable that
identify a documentation comment. See `markers`
property for more details on markers.
"""
self._language = language.lower()
self._docstyle = docstyle.lower()
+
+ # Check and modify tuple if only one marker_set exists.
+ markers = tuple(markers)
+ if len(markers) == 3 and all(isinstance(x, str) for x in markers):
+ markers = (markers,)
+
self._markers = tuple(tuple(marker_set) for marker_set in markers)
# Check marker set dimensions.
| {"golden_diff": "diff --git a/coalib/bearlib/languages/documentation/DocstyleDefinition.py b/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n--- a/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n+++ b/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n@@ -23,12 +23,19 @@\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. `\"default\"` or `\"doxygen\"`.\n- :param markers: An iterable of marker/delimiter string iterables that\n+ :param markers: An iterable of marker/delimiter string iterables\n+ or a single marker/delimiter string iterable that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n+\n+ # Check and modify tuple if only one marker_set exists.\n+ markers = tuple(markers)\n+ if len(markers) == 3 and all(isinstance(x, str) for x in markers):\n+ markers = (markers,)\n+\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n \n # Check marker set dimensions.\n", "issue": "`DocstyleDefinition`: Accept a single marker set also\nVia the normal constructor or a class method.\n\n", "before_files": [{"content": "import os.path\n\nfrom coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature\nfrom coalib.parsing.ConfParser import ConfParser\n\n\n@generate_repr()\n@generate_eq(\"language\", \"docstyle\", \"markers\")\nclass DocstyleDefinition:\n \"\"\"\n The DocstyleDefinition class holds values that identify a certain type of\n documentation comment (for which language, documentation style/tool used\n etc.).\n \"\"\"\n\n @enforce_signature\n def __init__(self, language: str, docstyle: str, markers):\n \"\"\"\n Instantiates a new DocstyleDefinition.\n\n :param language: The case insensitive programming language of the\n documentation comment, e.g. `\"CPP\"` for C++ or\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. `\"default\"` or `\"doxygen\"`.\n :param markers: An iterable of marker/delimiter string iterables that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n\n # Check marker set dimensions.\n for marker_set in self._markers:\n length = len(marker_set)\n if length != 3:\n raise ValueError(\"Length of a given marker set was not 3 (was \"\n \"actually {}).\".format(length))\n\n @property\n def language(self):\n \"\"\"\n The programming language.\n\n :return: A lower-case string defining the programming language (i.e.\n \"cpp\" or \"python\").\n \"\"\"\n return self._language\n\n @property\n def docstyle(self):\n \"\"\"\n The documentation style/tool used to document code.\n\n :return: A lower-case string defining the docstyle (i.e. 
\"default\" or\n \"doxygen\").\n \"\"\"\n return self._docstyle\n\n @property\n def markers(self):\n \"\"\"\n A tuple of marker sets that identify a documentation comment.\n\n Marker sets consist of 3 entries where the first is the start-marker,\n the second one the each-line marker and the last one the end-marker.\n For example a marker tuple with a single marker set\n `((\"/**\", \"*\", \"*/\"),)` would match following documentation comment:\n\n ```\n /**\n * This is documentation.\n */\n ```\n\n It's also possible to supply an empty each-line marker\n (`(\"/**\", \"\", \"*/\")`):\n\n ```\n /**\n This is more documentation.\n */\n ```\n\n Markers are matched \"greedy\", that means it will match as many\n each-line markers as possible. I.e. for `(\"///\", \"///\", \"///\")`):\n\n ```\n /// Brief documentation.\n ///\n /// Detailed documentation.\n ```\n\n :return: A tuple of marker/delimiter string tuples that identify a\n documentation comment.\n \"\"\"\n return self._markers\n\n @classmethod\n @enforce_signature\n def load(cls, language: str, docstyle: str):\n \"\"\"\n Loads a `DocstyleDefinition` from the coala docstyle definition files.\n\n This function considers all settings inside the according coalang-files\n as markers.\n\n :param language: The case insensitive programming language of\n the documentation comment as a string.\n :param docstyle: The case insensitive documentation\n style/tool used to document code, e.g.\n `\"default\"` or `\"doxygen\"`.\n :raises FileNotFoundError: Raised when the given docstyle was not\n found.\n :raises KeyError: Raised when the given language is not\n defined for given docstyle.\n :return: The `DocstyleDefinition` for given language\n and docstyle.\n \"\"\"\n\n docstyle = docstyle.lower()\n\n language_config_parser = ConfParser(remove_empty_iter_elements=False)\n try:\n docstyle_settings = language_config_parser.parse(\n os.path.dirname(__file__) + \"/\" + docstyle + \".coalang\")\n except FileNotFoundError:\n raise FileNotFoundError(\"Docstyle definition \" + repr(docstyle) +\n \" not found.\")\n\n language = language.lower()\n\n try:\n docstyle_settings = docstyle_settings[language]\n except KeyError:\n raise KeyError(\"Language {} is not defined for docstyle {}.\"\n .format(repr(language), repr(docstyle)))\n\n marker_sets = (tuple(value)\n for key, value in\n filter(lambda kv: not kv[0].startswith(\"comment\"),\n docstyle_settings.contents.items()))\n\n return cls(language, docstyle, marker_sets)\n", "path": "coalib/bearlib/languages/documentation/DocstyleDefinition.py"}]} | 1,916 | 284 |
gh_patches_debug_33205 | rasdani/github-patches | git_diff | CTFd__CTFd-1589 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Review usage of error components
Looks like there needs to be more usage of the error components Jinja snippet; it appears to be missing in core/teams/public and core/teams/private at least.
</issue>
<code>
[start of CTFd/teams.py]
1 from flask import Blueprint, redirect, render_template, request, url_for
2
3 from CTFd.cache import clear_team_session, clear_user_session
4 from CTFd.models import Teams, db
5 from CTFd.utils import config, get_config
6 from CTFd.utils.crypto import verify_password
7 from CTFd.utils.decorators import authed_only, ratelimit
8 from CTFd.utils.decorators.modes import require_team_mode
9 from CTFd.utils.decorators.visibility import (
10 check_account_visibility,
11 check_score_visibility,
12 )
13 from CTFd.utils.helpers import get_errors, get_infos
14 from CTFd.utils.user import get_current_user
15
16 teams = Blueprint("teams", __name__)
17
18
19 @teams.route("/teams")
20 @check_account_visibility
21 @require_team_mode
22 def listing():
23 q = request.args.get("q")
24 field = request.args.get("field", "name")
25 filters = []
26
27 if field not in ("name", "affiliation", "website"):
28 field = "name"
29
30 if q:
31 filters.append(getattr(Teams, field).like("%{}%".format(q)))
32
33 teams = (
34 Teams.query.filter_by(hidden=False, banned=False)
35 .filter(*filters)
36 .order_by(Teams.id.asc())
37 .paginate(per_page=50)
38 )
39
40 args = dict(request.args)
41 args.pop("page", 1)
42
43 return render_template(
44 "teams/teams.html",
45 teams=teams,
46 prev_page=url_for(request.endpoint, page=teams.prev_num, **args),
47 next_page=url_for(request.endpoint, page=teams.next_num, **args),
48 q=q,
49 field=field,
50 )
51
52
53 @teams.route("/teams/join", methods=["GET", "POST"])
54 @authed_only
55 @require_team_mode
56 @ratelimit(method="POST", limit=10, interval=5)
57 def join():
58 infos = get_infos()
59 errors = get_errors()
60 if request.method == "GET":
61 team_size_limit = get_config("team_size", default=0)
62 if team_size_limit:
63 plural = "" if team_size_limit == 1 else "s"
64 infos.append(
65 "Teams are limited to {limit} member{plural}".format(
66 limit=team_size_limit, plural=plural
67 )
68 )
69 return render_template("teams/join_team.html", infos=infos, errors=errors)
70
71 if request.method == "POST":
72 teamname = request.form.get("name")
73 passphrase = request.form.get("password", "").strip()
74
75 team = Teams.query.filter_by(name=teamname).first()
76
77 if team and verify_password(passphrase, team.password):
78 team_size_limit = get_config("team_size", default=0)
79 if team_size_limit and len(team.members) >= team_size_limit:
80 errors.append(
81 "{name} has already reached the team size limit of {limit}".format(
82 name=team.name, limit=team_size_limit
83 )
84 )
85 return render_template(
86 "teams/join_team.html", infos=infos, errors=errors
87 )
88
89 user = get_current_user()
90 user.team_id = team.id
91 db.session.commit()
92
93 if len(team.members) == 1:
94 team.captain_id = user.id
95 db.session.commit()
96
97 clear_user_session(user_id=user.id)
98 clear_team_session(team_id=team.id)
99
100 return redirect(url_for("challenges.listing"))
101 else:
102 errors.append("That information is incorrect")
103 return render_template("teams/join_team.html", infos=infos, errors=errors)
104
105
106 @teams.route("/teams/new", methods=["GET", "POST"])
107 @authed_only
108 @require_team_mode
109 def new():
110 infos = get_infos()
111 errors = get_errors()
112 if request.method == "GET":
113 team_size_limit = get_config("team_size", default=0)
114 if team_size_limit:
115 plural = "" if team_size_limit == 1 else "s"
116 infos.append(
117 "Teams are limited to {limit} member{plural}".format(
118 limit=team_size_limit, plural=plural
119 )
120 )
121
122 return render_template("teams/new_team.html", infos=infos, errors=errors)
123 elif request.method == "POST":
124 teamname = request.form.get("name", "").strip()
125 passphrase = request.form.get("password", "").strip()
126 errors = get_errors()
127
128 user = get_current_user()
129
130 existing_team = Teams.query.filter_by(name=teamname).first()
131 if existing_team:
132 errors.append("That team name is already taken")
133 if not teamname:
134 errors.append("That team name is invalid")
135
136 if errors:
137 return render_template("teams/new_team.html", errors=errors)
138
139 team = Teams(name=teamname, password=passphrase, captain_id=user.id)
140
141 db.session.add(team)
142 db.session.commit()
143
144 user.team_id = team.id
145 db.session.commit()
146
147 clear_user_session(user_id=user.id)
148 clear_team_session(team_id=team.id)
149
150 return redirect(url_for("challenges.listing"))
151
152
153 @teams.route("/team")
154 @authed_only
155 @require_team_mode
156 def private():
157 user = get_current_user()
158 if not user.team_id:
159 return render_template("teams/team_enrollment.html")
160
161 team_id = user.team_id
162
163 team = Teams.query.filter_by(id=team_id).first_or_404()
164 solves = team.get_solves()
165 awards = team.get_awards()
166
167 place = team.place
168 score = team.score
169
170 return render_template(
171 "teams/private.html",
172 solves=solves,
173 awards=awards,
174 user=user,
175 team=team,
176 score=score,
177 place=place,
178 score_frozen=config.is_scoreboard_frozen(),
179 )
180
181
182 @teams.route("/teams/<int:team_id>")
183 @check_account_visibility
184 @check_score_visibility
185 @require_team_mode
186 def public(team_id):
187 errors = get_errors()
188 team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()
189 solves = team.get_solves()
190 awards = team.get_awards()
191
192 place = team.place
193 score = team.score
194
195 if errors:
196 return render_template("teams/public.html", team=team, errors=errors)
197
198 return render_template(
199 "teams/public.html",
200 solves=solves,
201 awards=awards,
202 team=team,
203 score=score,
204 place=place,
205 score_frozen=config.is_scoreboard_frozen(),
206 )
207
[end of CTFd/teams.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/teams.py b/CTFd/teams.py
--- a/CTFd/teams.py
+++ b/CTFd/teams.py
@@ -154,6 +154,9 @@
@authed_only
@require_team_mode
def private():
+ infos = get_infos()
+ errors = get_errors()
+
user = get_current_user()
if not user.team_id:
return render_template("teams/team_enrollment.html")
@@ -167,6 +170,9 @@
place = team.place
score = team.score
+ if config.is_scoreboard_frozen():
+ infos.append("Scoreboard has been frozen")
+
return render_template(
"teams/private.html",
solves=solves,
@@ -176,6 +182,8 @@
score=score,
place=place,
score_frozen=config.is_scoreboard_frozen(),
+ infos=infos,
+ errors=errors,
)
@@ -184,6 +192,7 @@
@check_score_visibility
@require_team_mode
def public(team_id):
+ infos = get_infos()
errors = get_errors()
team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()
solves = team.get_solves()
@@ -195,6 +204,9 @@
if errors:
return render_template("teams/public.html", team=team, errors=errors)
+ if config.is_scoreboard_frozen():
+ infos.append("Scoreboard has been frozen")
+
return render_template(
"teams/public.html",
solves=solves,
@@ -203,4 +215,6 @@
score=score,
place=place,
score_frozen=config.is_scoreboard_frozen(),
+ infos=infos,
+ errors=errors,
)
| {"golden_diff": "diff --git a/CTFd/teams.py b/CTFd/teams.py\n--- a/CTFd/teams.py\n+++ b/CTFd/teams.py\n@@ -154,6 +154,9 @@\n @authed_only\n @require_team_mode\n def private():\n+ infos = get_infos()\n+ errors = get_errors()\n+\n user = get_current_user()\n if not user.team_id:\n return render_template(\"teams/team_enrollment.html\")\n@@ -167,6 +170,9 @@\n place = team.place\n score = team.score\n \n+ if config.is_scoreboard_frozen():\n+ infos.append(\"Scoreboard has been frozen\")\n+\n return render_template(\n \"teams/private.html\",\n solves=solves,\n@@ -176,6 +182,8 @@\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n+ infos=infos,\n+ errors=errors,\n )\n \n \n@@ -184,6 +192,7 @@\n @check_score_visibility\n @require_team_mode\n def public(team_id):\n+ infos = get_infos()\n errors = get_errors()\n team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()\n solves = team.get_solves()\n@@ -195,6 +204,9 @@\n if errors:\n return render_template(\"teams/public.html\", team=team, errors=errors)\n \n+ if config.is_scoreboard_frozen():\n+ infos.append(\"Scoreboard has been frozen\")\n+\n return render_template(\n \"teams/public.html\",\n solves=solves,\n@@ -203,4 +215,6 @@\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n+ infos=infos,\n+ errors=errors,\n )\n", "issue": "Review usage of error components\nLooks like there needs to be more usage of the error components jinja snippet. It looks like it's missing in core/teams/public and core/teams/private at least. \n", "before_files": [{"content": "from flask import Blueprint, redirect, render_template, request, url_for\n\nfrom CTFd.cache import clear_team_session, clear_user_session\nfrom CTFd.models import Teams, db\nfrom CTFd.utils import config, get_config\nfrom CTFd.utils.crypto import verify_password\nfrom CTFd.utils.decorators import authed_only, ratelimit\nfrom CTFd.utils.decorators.modes import require_team_mode\nfrom CTFd.utils.decorators.visibility import (\n check_account_visibility,\n check_score_visibility,\n)\nfrom CTFd.utils.helpers import get_errors, get_infos\nfrom CTFd.utils.user import get_current_user\n\nteams = Blueprint(\"teams\", __name__)\n\n\[email protected](\"/teams\")\n@check_account_visibility\n@require_team_mode\ndef listing():\n q = request.args.get(\"q\")\n field = request.args.get(\"field\", \"name\")\n filters = []\n\n if field not in (\"name\", \"affiliation\", \"website\"):\n field = \"name\"\n\n if q:\n filters.append(getattr(Teams, field).like(\"%{}%\".format(q)))\n\n teams = (\n Teams.query.filter_by(hidden=False, banned=False)\n .filter(*filters)\n .order_by(Teams.id.asc())\n .paginate(per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"teams/teams.html\",\n teams=teams,\n prev_page=url_for(request.endpoint, page=teams.prev_num, **args),\n next_page=url_for(request.endpoint, page=teams.next_num, **args),\n q=q,\n field=field,\n )\n\n\[email protected](\"/teams/join\", methods=[\"GET\", \"POST\"])\n@authed_only\n@require_team_mode\n@ratelimit(method=\"POST\", limit=10, interval=5)\ndef join():\n infos = get_infos()\n errors = get_errors()\n if request.method == \"GET\":\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit:\n plural = \"\" if team_size_limit == 1 else \"s\"\n infos.append(\n \"Teams are limited to {limit} member{plural}\".format(\n limit=team_size_limit, plural=plural\n )\n )\n return render_template(\"teams/join_team.html\", 
infos=infos, errors=errors)\n\n if request.method == \"POST\":\n teamname = request.form.get(\"name\")\n passphrase = request.form.get(\"password\", \"\").strip()\n\n team = Teams.query.filter_by(name=teamname).first()\n\n if team and verify_password(passphrase, team.password):\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit and len(team.members) >= team_size_limit:\n errors.append(\n \"{name} has already reached the team size limit of {limit}\".format(\n name=team.name, limit=team_size_limit\n )\n )\n return render_template(\n \"teams/join_team.html\", infos=infos, errors=errors\n )\n\n user = get_current_user()\n user.team_id = team.id\n db.session.commit()\n\n if len(team.members) == 1:\n team.captain_id = user.id\n db.session.commit()\n\n clear_user_session(user_id=user.id)\n clear_team_session(team_id=team.id)\n\n return redirect(url_for(\"challenges.listing\"))\n else:\n errors.append(\"That information is incorrect\")\n return render_template(\"teams/join_team.html\", infos=infos, errors=errors)\n\n\[email protected](\"/teams/new\", methods=[\"GET\", \"POST\"])\n@authed_only\n@require_team_mode\ndef new():\n infos = get_infos()\n errors = get_errors()\n if request.method == \"GET\":\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit:\n plural = \"\" if team_size_limit == 1 else \"s\"\n infos.append(\n \"Teams are limited to {limit} member{plural}\".format(\n limit=team_size_limit, plural=plural\n )\n )\n\n return render_template(\"teams/new_team.html\", infos=infos, errors=errors)\n elif request.method == \"POST\":\n teamname = request.form.get(\"name\", \"\").strip()\n passphrase = request.form.get(\"password\", \"\").strip()\n errors = get_errors()\n\n user = get_current_user()\n\n existing_team = Teams.query.filter_by(name=teamname).first()\n if existing_team:\n errors.append(\"That team name is already taken\")\n if not teamname:\n errors.append(\"That team name is invalid\")\n\n if errors:\n return render_template(\"teams/new_team.html\", errors=errors)\n\n team = Teams(name=teamname, password=passphrase, captain_id=user.id)\n\n db.session.add(team)\n db.session.commit()\n\n user.team_id = team.id\n db.session.commit()\n\n clear_user_session(user_id=user.id)\n clear_team_session(team_id=team.id)\n\n return redirect(url_for(\"challenges.listing\"))\n\n\[email protected](\"/team\")\n@authed_only\n@require_team_mode\ndef private():\n user = get_current_user()\n if not user.team_id:\n return render_template(\"teams/team_enrollment.html\")\n\n team_id = user.team_id\n\n team = Teams.query.filter_by(id=team_id).first_or_404()\n solves = team.get_solves()\n awards = team.get_awards()\n\n place = team.place\n score = team.score\n\n return render_template(\n \"teams/private.html\",\n solves=solves,\n awards=awards,\n user=user,\n team=team,\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n )\n\n\[email protected](\"/teams/<int:team_id>\")\n@check_account_visibility\n@check_score_visibility\n@require_team_mode\ndef public(team_id):\n errors = get_errors()\n team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()\n solves = team.get_solves()\n awards = team.get_awards()\n\n place = team.place\n score = team.score\n\n if errors:\n return render_template(\"teams/public.html\", team=team, errors=errors)\n\n return render_template(\n \"teams/public.html\",\n solves=solves,\n awards=awards,\n team=team,\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n 
)\n", "path": "CTFd/teams.py"}]} | 2,520 | 415 |
gh_patches_debug_4297 | rasdani/github-patches | git_diff | NVIDIA-Merlin__NVTabular-1312 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Getting error when loading the TF4Rec PyTorch model to the TIS
**Describe the bug**
I am getting the following error when I load a trained TF4Rec PyTorch model to TIS:
```
| t4r_pytorch_pt | 1 | UNAVAILABLE: Internal: ImportError: cannot import name '_convert_string2pytorch_dty |
| | | pe' from 'nvtabular.inference.triton' (/nvtabular/nvtabular/inference/triton/__init |
| | | __.py) |
| | | |
| | | At: |
| | | /workspace/models/t4r_pytorch_pt/1/model.py(42): <module> |
| | | <frozen importlib._bootstrap>(219): _call_with_frames_removed |
| | | <frozen importlib._bootstrap_external>(848): exec_module |
| | | <frozen importlib._bootstrap>(686): _load_unlocked |
| | | <frozen importlib._bootstrap>(975): _find_and_load_unlocked |
| | | <frozen importlib._bootstrap>(991): _find_and_load |
+-----------------+---------+---------------------------------------------------------
```
**Steps/Code to reproduce bug**
Run the 02 and 03 Transformers4Rec tutorial [notebooks](https://github.com/NVIDIA-Merlin/Transformers4Rec/tree/main/examples/tutorial) to train the model. Then serve the model to TIS based on the instructions given in the [inference notebook](https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/examples/tutorial/04-Inference-with-Triton.ipynb).
The `Oct-2019.parquet` dataset can be downloaded from here: https://drive.google.com/drive/u/0/folders/1nTuG6UHWOEaZnBJj7YSIVvnphE1zGc1h
**Expected behavior**
Model should be loaded to the TIS without issue.
**Environment details (please complete the following information):**
- Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)] : Docker
 - Method of NVTabular install: [conda, Docker, or from source]: Docker `merlin-inference:21.11` and `merlin-pytorch-training:21.11`
Please do `git pull origin main` && `pip install -e .` to pull the latest main branch.
- If method of install is [Docker], provide `docker pull` & `docker run` commands used
This issue was also submitted by a user on the TF4Rec GH repo: https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/339
</issue>
<code>
[start of nvtabular/inference/triton/__init__.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import json
16 import os
17
18 import pandas as pd
19
20 # this needs to be before any modules that import protobuf
21 os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
22
23 import tritonclient.grpc as grpcclient # noqa
24 from tritonclient.utils import np_to_triton_dtype # noqa
25
26 from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa
27 from nvtabular.inference.triton.ensemble import ( # noqa
28 export_hugectr_ensemble,
29 export_pytorch_ensemble,
30 export_tensorflow_ensemble,
31 generate_hugectr_model,
32 generate_nvtabular_model,
33 )
34
35
36 def convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):
37 columns = [(col, batch[col]) for col in column_names]
38 inputs = []
39 for i, (name, col) in enumerate(columns):
40 if _is_list_dtype(col):
41 if isinstance(col, pd.Series):
42 raise ValueError("this function doesn't support CPU list values yet")
43 inputs.append(
44 _convert_column_to_triton_input(
45 col._column.offsets.values_host.astype("int64"), name + "__nnzs", input_class
46 )
47 )
48 inputs.append(
49 _convert_column_to_triton_input(
50 col.list.leaves.values_host.astype("int64"), name + "__values", input_class
51 )
52 )
53 else:
54 values = col.values if isinstance(col, pd.Series) else col.values_host
55 inputs.append(_convert_column_to_triton_input(values, name, input_class))
56 return inputs
57
58
59 def _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):
60 col = col.reshape(len(col), 1)
61 input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))
62 input_tensor.set_data_from_numpy(col)
63 return input_tensor
64
65
66 def convert_triton_output_to_df(columns, response):
67 return _make_df({col: response.as_numpy(col) for col in columns})
68
69
70 def get_column_types(path):
71 return json.load(open(os.path.join(path, "column_types.json")))
72
73
74 def _convert_tensor(t):
75 out = t.as_numpy()
76 if len(out.shape) == 2:
77 out = out[:, 0]
78 # cudf doesn't seem to handle dtypes like |S15 or object that well
79 if _is_string_dtype(out.dtype):
80 out = out.astype("str")
81 return out
82
[end of nvtabular/inference/triton/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nvtabular/inference/triton/__init__.py b/nvtabular/inference/triton/__init__.py
--- a/nvtabular/inference/triton/__init__.py
+++ b/nvtabular/inference/triton/__init__.py
@@ -25,6 +25,7 @@
from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa
from nvtabular.inference.triton.ensemble import ( # noqa
+ _convert_string2pytorch_dtype,
export_hugectr_ensemble,
export_pytorch_ensemble,
export_tensorflow_ensemble,
| {"golden_diff": "diff --git a/nvtabular/inference/triton/__init__.py b/nvtabular/inference/triton/__init__.py\n--- a/nvtabular/inference/triton/__init__.py\n+++ b/nvtabular/inference/triton/__init__.py\n@@ -25,6 +25,7 @@\n \n from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\n from nvtabular.inference.triton.ensemble import ( # noqa\n+ _convert_string2pytorch_dtype,\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n", "issue": "[BUG] Getting error when loading the TF4Rec PyTorch model to the TIS\n**Describe the bug**\r\nI am getting the following error when I load a trained TF4Rec PyTorch to TIS:\r\n\r\n```\r\n | t4r_pytorch_pt | 1 | UNAVAILABLE: Internal: ImportError: cannot import name '_convert_string2pytorch_dty |\r\n| | | pe' from 'nvtabular.inference.triton' (/nvtabular/nvtabular/inference/triton/__init |\r\n| | | __.py) |\r\n| | | |\r\n| | | At: |\r\n| | | /workspace/models/t4r_pytorch_pt/1/model.py(42): <module> |\r\n| | | <frozen importlib._bootstrap>(219): _call_with_frames_removed |\r\n| | | <frozen importlib._bootstrap_external>(848): exec_module |\r\n| | | <frozen importlib._bootstrap>(686): _load_unlocked |\r\n| | | <frozen importlib._bootstrap>(975): _find_and_load_unlocked |\r\n| | | <frozen importlib._bootstrap>(991): _find_and_load |\r\n+-----------------+---------+---------------------------------------------------------\r\n```\r\n\r\n**Steps/Code to reproduce bug**\r\n\r\nRun the 02 and 03 notebooks Transformers4Rec tutorial [notebooks](https://github.com/NVIDIA-Merlin/Transformers4Rec/tree/main/examples/tutorial) to train the model. Then serve the model to TIS based on the instructions given on the [inference notebook](https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/examples/tutorial/04-Inference-with-Triton.ipynb).\r\n\r\n`Oct-2019.parquet` Dataset can be downloaded from here: https://drive.google.com/drive/u/0/folders/1nTuG6UHWOEaZnBJj7YSIVvnphE1zGc1h\r\n\r\n**Expected behavior**\r\nModel should be loaded to the TIS without issue.\r\n\r\n**Environment details (please complete the following information):**\r\n - Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)] : Docker\r\n - Method of NVTabular install: [conda, Docker, or from source]: Docker `merlin-inference:21.11` and `merlin-pytoch-training:21.11` `\r\n Please do `git pull origin main` && `pip install -e .` to pull the latest main branch.\r\n - If method of install is [Docker], provide `docker pull` & `docker run` commands used\r\n \r\nThis issue was also submitted by a user on TF4Rec GH repo- https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/339\r\n\n", "before_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport json\nimport os\n\nimport pandas as pd\n\n# this needs to be before any modules that import protobuf\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n\nimport tritonclient.grpc as grpcclient # noqa\nfrom 
tritonclient.utils import np_to_triton_dtype # noqa\n\nfrom nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\nfrom nvtabular.inference.triton.ensemble import ( # noqa\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n generate_hugectr_model,\n generate_nvtabular_model,\n)\n\n\ndef convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):\n columns = [(col, batch[col]) for col in column_names]\n inputs = []\n for i, (name, col) in enumerate(columns):\n if _is_list_dtype(col):\n if isinstance(col, pd.Series):\n raise ValueError(\"this function doesn't support CPU list values yet\")\n inputs.append(\n _convert_column_to_triton_input(\n col._column.offsets.values_host.astype(\"int64\"), name + \"__nnzs\", input_class\n )\n )\n inputs.append(\n _convert_column_to_triton_input(\n col.list.leaves.values_host.astype(\"int64\"), name + \"__values\", input_class\n )\n )\n else:\n values = col.values if isinstance(col, pd.Series) else col.values_host\n inputs.append(_convert_column_to_triton_input(values, name, input_class))\n return inputs\n\n\ndef _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):\n col = col.reshape(len(col), 1)\n input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))\n input_tensor.set_data_from_numpy(col)\n return input_tensor\n\n\ndef convert_triton_output_to_df(columns, response):\n return _make_df({col: response.as_numpy(col) for col in columns})\n\n\ndef get_column_types(path):\n return json.load(open(os.path.join(path, \"column_types.json\")))\n\n\ndef _convert_tensor(t):\n out = t.as_numpy()\n if len(out.shape) == 2:\n out = out[:, 0]\n # cudf doesn't seem to handle dtypes like |S15 or object that well\n if _is_string_dtype(out.dtype):\n out = out.astype(\"str\")\n return out\n", "path": "nvtabular/inference/triton/__init__.py"}]} | 2,026 | 145 |
gh_patches_debug_28929 | rasdani/github-patches | git_diff | iterative__dvc-7729 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dvc list: Error on empty directory.
# Bug Report
Got an error message on an empty directory; shouldn't it show nothing, like the `ls` command?
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description

Error when listing an empty path; strange behavior.
Might relate to https://github.com/iterative/dvc/blob/daf07451f8e8f3e76a791c696b0ea175e8ed3ac1/dvc/repo/ls.py#L40-L41
<!--
A clear and concise description of what the bug is.
-->
### Reproduce
1. git init
2. dvc init
3. mkdir empty
4. dvc list . empty
<!--
Step list of how to reproduce the bug
-->
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
### Expected
Show nothing like ls command

<!--
A clear and concise description of what you expect to happen.
-->
### Environment information
DVC version: 2.0.17+7e4851
---------------------------------
Platform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit
Supports: All remotes
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: None
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
$ dvc doctor
```
**Additional Information (if any):**
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
</issue>
<code>
[start of dvc/repo/ls.py]
1 import os
2 from itertools import chain
3
4 from dvc.exceptions import PathMissingError
5
6
7 def ls(url, path=None, rev=None, recursive=None, dvc_only=False):
8 """Methods for getting files and outputs for the repo.
9
10 Args:
11 url (str): the repo url
12 path (str, optional): relative path into the repo
13 rev (str, optional): SHA commit, branch or tag name
14 recursive (bool, optional): recursively walk the repo
15 dvc_only (bool, optional): show only DVC-artifacts
16
17 Returns:
18 list of `entry`
19
20 Notes:
21 `entry` is a dictionary with structure
22 {
23 "path": str,
24 "isout": bool,
25 "isdir": bool,
26 "isexec": bool,
27 }
28 """
29 from . import Repo
30
31 with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:
32 path = path or ""
33
34 ret = _ls(repo.repo_fs, path, recursive, dvc_only)
35
36 if path and not ret:
37 raise PathMissingError(path, repo, dvc_only=dvc_only)
38
39 ret_list = []
40 for path, info in ret.items():
41 info["path"] = path
42 ret_list.append(info)
43 ret_list.sort(key=lambda f: f["path"])
44 return ret_list
45
46
47 def _ls(fs, path, recursive=None, dvc_only=False):
48 fs_path = fs.from_os_path(path)
49
50 try:
51 fs_path = fs.info(fs_path)["name"]
52 except FileNotFoundError:
53 return {}
54
55 infos = {}
56 for root, dirs, files in fs.walk(
57 fs_path, dvcfiles=True, dvc_only=dvc_only
58 ):
59 entries = chain(files, dirs) if not recursive else files
60
61 for entry in entries:
62 entry_fs_path = fs.path.join(root, entry)
63 relparts = fs.path.relparts(entry_fs_path, fs_path)
64 name = os.path.join(*relparts)
65 infos[name] = fs.info(entry_fs_path)
66
67 if not recursive:
68 break
69
70 if not infos and fs.isfile(fs_path):
71 infos[os.path.basename(path)] = fs.info(fs_path)
72
73 ret = {}
74 for name, info in infos.items():
75 dvc_info = info.get("dvc_info", {})
76 if dvc_info.get("outs") or not dvc_only:
77 ret[name] = {
78 "isout": dvc_info.get("isout", False),
79 "isdir": info["type"] == "directory",
80 "isexec": info.get("isexec", False),
81 }
82
83 return ret
84
[end of dvc/repo/ls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/repo/ls.py b/dvc/repo/ls.py
--- a/dvc/repo/ls.py
+++ b/dvc/repo/ls.py
@@ -1,10 +1,22 @@
import os
from itertools import chain
+from typing import TYPE_CHECKING, Optional
from dvc.exceptions import PathMissingError
+if TYPE_CHECKING:
+ from dvc.fs.repo import RepoFileSystem
-def ls(url, path=None, rev=None, recursive=None, dvc_only=False):
+ from . import Repo
+
+
+def ls(
+ url: str,
+ path: Optional[str] = None,
+ rev: str = None,
+ recursive: bool = None,
+ dvc_only: bool = False,
+):
"""Methods for getting files and outputs for the repo.
Args:
@@ -31,10 +43,7 @@
with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:
path = path or ""
- ret = _ls(repo.repo_fs, path, recursive, dvc_only)
-
- if path and not ret:
- raise PathMissingError(path, repo, dvc_only=dvc_only)
+ ret = _ls(repo, path, recursive, dvc_only)
ret_list = []
for path, info in ret.items():
@@ -44,13 +53,16 @@
return ret_list
-def _ls(fs, path, recursive=None, dvc_only=False):
+def _ls(
+ repo: "Repo", path: str, recursive: bool = None, dvc_only: bool = False
+):
+ fs: "RepoFileSystem" = repo.repo_fs
fs_path = fs.from_os_path(path)
try:
fs_path = fs.info(fs_path)["name"]
except FileNotFoundError:
- return {}
+ raise PathMissingError(path, repo, dvc_only=dvc_only)
infos = {}
for root, dirs, files in fs.walk(
| {"golden_diff": "diff --git a/dvc/repo/ls.py b/dvc/repo/ls.py\n--- a/dvc/repo/ls.py\n+++ b/dvc/repo/ls.py\n@@ -1,10 +1,22 @@\n import os\n from itertools import chain\n+from typing import TYPE_CHECKING, Optional\n \n from dvc.exceptions import PathMissingError\n \n+if TYPE_CHECKING:\n+ from dvc.fs.repo import RepoFileSystem\n \n-def ls(url, path=None, rev=None, recursive=None, dvc_only=False):\n+ from . import Repo\n+\n+\n+def ls(\n+ url: str,\n+ path: Optional[str] = None,\n+ rev: str = None,\n+ recursive: bool = None,\n+ dvc_only: bool = False,\n+):\n \"\"\"Methods for getting files and outputs for the repo.\n \n Args:\n@@ -31,10 +43,7 @@\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n \n- ret = _ls(repo.repo_fs, path, recursive, dvc_only)\n-\n- if path and not ret:\n- raise PathMissingError(path, repo, dvc_only=dvc_only)\n+ ret = _ls(repo, path, recursive, dvc_only)\n \n ret_list = []\n for path, info in ret.items():\n@@ -44,13 +53,16 @@\n return ret_list\n \n \n-def _ls(fs, path, recursive=None, dvc_only=False):\n+def _ls(\n+ repo: \"Repo\", path: str, recursive: bool = None, dvc_only: bool = False\n+):\n+ fs: \"RepoFileSystem\" = repo.repo_fs\n fs_path = fs.from_os_path(path)\n \n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n- return {}\n+ raise PathMissingError(path, repo, dvc_only=dvc_only)\n \n infos = {}\n for root, dirs, files in fs.walk(\n", "issue": "dvc list: Error on empty directory. \n# Bug Report\r\n\r\nGot error message on an empty directory, shouldn't it show nothing? like ls command.\r\n\r\n\r\n<!--\r\n## Issue name\r\n\r\nIssue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug. \r\n\r\nExample: `repro: doesn't detect input changes`\r\n-->\r\n\r\n## Description\r\n\r\nError when list a empty path, strange behavior.\r\nMight relate to https://github.com/iterative/dvc/blob/daf07451f8e8f3e76a791c696b0ea175e8ed3ac1/dvc/repo/ls.py#L40-L41\r\n\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\n\r\n### Reproduce\r\n\r\n1. git init\r\n2. dvc init\r\n3. mkdir empty\r\n4. dvc list . empty\r\n\r\n<!--\r\nStep list of how to reproduce the bug\r\n-->\r\n\r\n<!--\r\nExample:\r\n\r\n1. dvc init\r\n2. Copy dataset.zip to the directory\r\n3. dvc add dataset.zip\r\n4. dvc run -d dataset.zip -o model ./train.sh\r\n5. modify dataset.zip\r\n6. 
dvc repro\r\n-->\r\n\r\n### Expected\r\nShow nothing like ls command\r\n\r\n\r\n<!--\r\nA clear and concise description of what you expect to happen.\r\n-->\r\n\r\n### Environment information\r\nDVC version: 2.0.17+7e4851\r\n---------------------------------\r\nPlatform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit\r\nSupports: All remotes\r\nCache types: <https://error.dvc.org/no-dvc-cache>\r\nCaches: local\r\nRemotes: None\r\nWorkspace directory: apfs on /dev/disk3s1s1\r\nRepo: dvc, git\r\n<!--\r\nThis is required to ensure that we can reproduce the bug.\r\n-->\r\n\r\n**Output of `dvc doctor`:**\r\n\r\n```console\r\n$ dvc doctor\r\n```\r\n\r\n**Additional Information (if any):**\r\n\r\n<!--\r\nPlease check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.\r\n\r\nIf applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.\r\nIf the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.\r\n-->\r\n\n", "before_files": [{"content": "import os\nfrom itertools import chain\n\nfrom dvc.exceptions import PathMissingError\n\n\ndef ls(url, path=None, rev=None, recursive=None, dvc_only=False):\n \"\"\"Methods for getting files and outputs for the repo.\n\n Args:\n url (str): the repo url\n path (str, optional): relative path into the repo\n rev (str, optional): SHA commit, branch or tag name\n recursive (bool, optional): recursively walk the repo\n dvc_only (bool, optional): show only DVC-artifacts\n\n Returns:\n list of `entry`\n\n Notes:\n `entry` is a dictionary with structure\n {\n \"path\": str,\n \"isout\": bool,\n \"isdir\": bool,\n \"isexec\": bool,\n }\n \"\"\"\n from . import Repo\n\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n\n ret = _ls(repo.repo_fs, path, recursive, dvc_only)\n\n if path and not ret:\n raise PathMissingError(path, repo, dvc_only=dvc_only)\n\n ret_list = []\n for path, info in ret.items():\n info[\"path\"] = path\n ret_list.append(info)\n ret_list.sort(key=lambda f: f[\"path\"])\n return ret_list\n\n\ndef _ls(fs, path, recursive=None, dvc_only=False):\n fs_path = fs.from_os_path(path)\n\n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n return {}\n\n infos = {}\n for root, dirs, files in fs.walk(\n fs_path, dvcfiles=True, dvc_only=dvc_only\n ):\n entries = chain(files, dirs) if not recursive else files\n\n for entry in entries:\n entry_fs_path = fs.path.join(root, entry)\n relparts = fs.path.relparts(entry_fs_path, fs_path)\n name = os.path.join(*relparts)\n infos[name] = fs.info(entry_fs_path)\n\n if not recursive:\n break\n\n if not infos and fs.isfile(fs_path):\n infos[os.path.basename(path)] = fs.info(fs_path)\n\n ret = {}\n for name, info in infos.items():\n dvc_info = info.get(\"dvc_info\", {})\n if dvc_info.get(\"outs\") or not dvc_only:\n ret[name] = {\n \"isout\": dvc_info.get(\"isout\", False),\n \"isdir\": info[\"type\"] == \"directory\",\n \"isexec\": info.get(\"isexec\", False),\n }\n\n return ret\n", "path": "dvc/repo/ls.py"}]} | 1,940 | 448 |
gh_patches_debug_14771 | rasdani/github-patches | git_diff | litestar-org__litestar-992 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Running `starlite run` after installing starlite[cli] gives error about missing cryptography package
The error is here:
```
Traceback (most recent call last):
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\cookie_backend.py", line 20,
in <module>
from cryptography.exceptions import InvalidTag
ModuleNotFoundError: No module named 'cryptography'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\Scripts\starlite.exe\__main__.py", line 4, in <module>
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\cli.py", line 41, in <module>
from starlite.middleware.session import SessionMiddleware
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\__init__.py", line 2, in <module>
from .cookie_backend import (
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\cookie_backend.py", line 23,
in <module>
raise MissingDependencyException("cryptography is not installed") from e
starlite.exceptions.base_exceptions.MissingDependencyException: cryptography is not installed
```
I thought it might be a good idea to install the package automatically with the CLI extra. (Or to update the [docs](https://starlite-api.github.io/starlite/usage/19-cli/?h=uvicorn) if I'm missing something).
My versions: Windows, Python 3.10, starlite 1.46.0
PS: Thank you all for the great amount of effort you spend on this project!
</issue>
<code>
[start of starlite/middleware/session/__init__.py]
1 from .base import SessionMiddleware
2 from .cookie_backend import (
3 CookieBackendConfig as SessionCookieConfig, # backwards compatible export
4 )
5
6 __all__ = [
7 "SessionMiddleware",
8 "SessionCookieConfig",
9 ]
10
[end of starlite/middleware/session/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py
--- a/starlite/middleware/session/__init__.py
+++ b/starlite/middleware/session/__init__.py
@@ -1,9 +1,27 @@
+from typing import Any
+
+from starlite.utils import warn_deprecation
+
from .base import SessionMiddleware
-from .cookie_backend import (
- CookieBackendConfig as SessionCookieConfig, # backwards compatible export
-)
-
-__all__ = [
- "SessionMiddleware",
- "SessionCookieConfig",
-]
+
+
+def __getattr__(name: str) -> Any:
+ """Provide lazy importing as per https://peps.python.org/pep-0562/"""
+
+ if name != "SessionCookieConfig":
+ raise AttributeError(f"Module {__package__} has no attribute {name}")
+
+ from .cookie_backend import CookieBackendConfig
+
+ warn_deprecation(
+ deprecated_name=f"{name} from {__package__}",
+ kind="import",
+ alternative="'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'",
+ version="1.47.0",
+ )
+
+ globals()[name] = CookieBackendConfig
+ return CookieBackendConfig
+
+
+__all__ = ["SessionMiddleware"]
| {"golden_diff": "diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py\n--- a/starlite/middleware/session/__init__.py\n+++ b/starlite/middleware/session/__init__.py\n@@ -1,9 +1,27 @@\n+from typing import Any\n+\n+from starlite.utils import warn_deprecation\n+\n from .base import SessionMiddleware\n-from .cookie_backend import (\n- CookieBackendConfig as SessionCookieConfig, # backwards compatible export\n-)\n-\n-__all__ = [\n- \"SessionMiddleware\",\n- \"SessionCookieConfig\",\n-]\n+\n+\n+def __getattr__(name: str) -> Any:\n+ \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n+\n+ if name != \"SessionCookieConfig\":\n+ raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n+\n+ from .cookie_backend import CookieBackendConfig\n+\n+ warn_deprecation(\n+ deprecated_name=f\"{name} from {__package__}\",\n+ kind=\"import\",\n+ alternative=\"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\",\n+ version=\"1.47.0\",\n+ )\n+\n+ globals()[name] = CookieBackendConfig\n+ return CookieBackendConfig\n+\n+\n+__all__ = [\"SessionMiddleware\"]\n", "issue": "Bug: Running `starlite run` after installing starlite[cli] gives error about missing cryptography package\nThe error is here:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\cookie_backend.py\", line 20, \r\nin <module>\r\n from cryptography.exceptions import InvalidTag\r\nModuleNotFoundError: No module named 'cryptography'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\Scripts\\starlite.exe\\__main__.py\", line 4, in <module>\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\cli.py\", line 41, in <module>\r\n from starlite.middleware.session import SessionMiddleware\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\__init__.py\", line 2, in <module>\r\n from .cookie_backend import (\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\cookie_backend.py\", line 23, \r\nin <module>\r\n raise MissingDependencyException(\"cryptography is not installed\") from e\r\nstarlite.exceptions.base_exceptions.MissingDependencyException: cryptography is not installed\r\n```\r\n\r\nI thought it might be a good idea to install the package automatically with the CLI extra. (Or to update the [docs](https://starlite-api.github.io/starlite/usage/19-cli/?h=uvicorn) if I'm missing something).\r\n\r\nMy versions: Windows, Python 3.10, starlite 1.46.0 \r\n\r\nPS: Thank you all for the great amount of effort you spend on this project!\n", "before_files": [{"content": "from .base import SessionMiddleware\nfrom .cookie_backend import (\n CookieBackendConfig as SessionCookieConfig, # backwards compatible export\n)\n\n__all__ = [\n \"SessionMiddleware\",\n \"SessionCookieConfig\",\n]\n", "path": "starlite/middleware/session/__init__.py"}]} | 1,128 | 296 |
gh_patches_debug_19426 | rasdani/github-patches | git_diff | nautobot__nautobot-975 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`::1/128` is not a valid prefix
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: 3.6
* Nautobot version: 1.1.3
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
When trying to create the prefix `::1/128` I get the following error:
```no-highlight
<class 'netaddr.core.AddrFormatError'>
invalid IPNetwork 0.0.0.1/128
```
Both Python netaddr and ipaddress modules see this as a valid IPNetwork.
### Steps to Reproduce
1. Create a prefix or aggregate using the prefix `::1/128`
<!-- What did you expect to happen? -->
### Expected Behavior
Prefix created
<!-- What happened instead? -->
### Observed Behavior
```
invalid IPNetwork 0.0.0.1/128
```
</issue>
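The error message points at the root cause: before being handed to `netaddr`, the stored varbinary value is converted to a bare integer, and an integer carries no address family, so small IPv6 addresses collapse into IPv4 ones. A quick illustration of the behaviour involved (assuming a standard `netaddr` install):

```python
import netaddr

packed = netaddr.IPAddress("::1").packed      # 16 bytes for the IPv6 loopback
as_int = int.from_bytes(packed, "big")        # == 1, the family information is gone
print(netaddr.IPAddress(as_int))              # 0.0.0.1  (bare integers default to IPv4)
print(netaddr.IPAddress(as_int, version=6))   # ::1      (an explicit version restores it)
```

Once the address is read back as `0.0.0.1`, the original `/128` mask no longer fits, which is exactly the `invalid IPNetwork 0.0.0.1/128` error reported above. The field therefore needs to recover the IP version, for example from the length of the stored byte string, before constructing the `IPAddress`.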
<code>
[start of nautobot/ipam/fields.py]
1 from django.core.exceptions import ValidationError
2 from django.db import models
3 from django.utils.datastructures import DictWrapper
4 import netaddr
5
6 from .formfields import IPNetworkFormField
7
8
9 class VarbinaryIPField(models.BinaryField):
10 """
11 IP network address
12 """
13
14 description = "IP network address"
15
16 def __init__(self, **kwargs):
17 super().__init__(**kwargs)
18
19 def db_type(self, connection):
20 """Returns the correct field type for a given database vendor."""
21
22 # Use 'bytea' type for PostgreSQL.
23 if connection.vendor == "postgresql":
24 return "bytea"
25
26 # Or 'varbinary' for everyone else.
27 return "varbinary(16)"
28
29 def value_to_string(self, obj):
30 """IPField is serialized as str(IPAddress())"""
31 value = self.value_from_object(obj)
32 if not value:
33 return value
34
35 return str(self._parse_address(value))
36
37 def _parse_address(self, value):
38 """
39 Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.
40 """
41 try:
42 value = int.from_bytes(value, "big")
43 except TypeError:
44 pass # It's a string
45
46 try:
47 return netaddr.IPAddress(value)
48 except netaddr.AddrFormatError:
49 raise ValidationError("Invalid IP address format: {}".format(value))
50 except (TypeError, ValueError) as e:
51 raise ValidationError(e)
52
53 def from_db_value(self, value, expression, connection):
54 """Converts DB (varbinary) to Python (str)."""
55 return self.to_python(value)
56
57 def to_python(self, value):
58 """Converts `value` to Python (str)."""
59 if isinstance(value, netaddr.IPAddress):
60 return str(value)
61
62 if value is None:
63 return value
64
65 return str(self._parse_address(value))
66
67 def get_db_prep_value(self, value, connection, prepared=False):
68 """Converts Python (str) to DB (varbinary)."""
69 if value is None:
70 return value
71
72 # Parse the address and then pack it to binary.
73 value = self._parse_address(value).packed
74
75 # Use defaults for PostgreSQL
76 if connection.vendor == "postgresql":
77 return super().get_db_prep_value(value, connection, prepared)
78
79 return value
80
81 def form_class(self):
82 return IPNetworkFormField
83
84 def formfield(self, **kwargs):
85 defaults = {"form_class": self.form_class()}
86 defaults.update(kwargs)
87 return super().formfield(**defaults)
88
[end of nautobot/ipam/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nautobot/ipam/fields.py b/nautobot/ipam/fields.py
--- a/nautobot/ipam/fields.py
+++ b/nautobot/ipam/fields.py
@@ -39,12 +39,17 @@
Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.
"""
try:
- value = int.from_bytes(value, "big")
+ int_value = int.from_bytes(value, "big")
+ # Distinguish between
+ # \x00\x00\x00\x01 (IPv4 0.0.0.1) and
+ # \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01 (IPv6 ::1), among other cases
+ version = 4 if len(value) == 4 else 6
+ value = int_value
except TypeError:
- pass # It's a string
+ version = None # It's a string, IP version should be self-evident
try:
- return netaddr.IPAddress(value)
+ return netaddr.IPAddress(value, version=version)
except netaddr.AddrFormatError:
raise ValidationError("Invalid IP address format: {}".format(value))
except (TypeError, ValueError) as e:
| {"golden_diff": "diff --git a/nautobot/ipam/fields.py b/nautobot/ipam/fields.py\n--- a/nautobot/ipam/fields.py\n+++ b/nautobot/ipam/fields.py\n@@ -39,12 +39,17 @@\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n- value = int.from_bytes(value, \"big\")\n+ int_value = int.from_bytes(value, \"big\")\n+ # Distinguish between\n+ # \\x00\\x00\\x00\\x01 (IPv4 0.0.0.1) and\n+ # \\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01 (IPv6 ::1), among other cases\n+ version = 4 if len(value) == 4 else 6\n+ value = int_value\n except TypeError:\n- pass # It's a string\n+ version = None # It's a string, IP version should be self-evident\n \n try:\n- return netaddr.IPAddress(value)\n+ return netaddr.IPAddress(value, version=version)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n", "issue": "`::1/128` is not a valid prefix\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.6\r\n* Nautobot version: 1.1.3\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n\r\nWhen trying to create the prefix `::1/128` I get the following error:\r\n\r\n```no-highlight\r\n<class 'netaddr.core.AddrFormatError'>\r\n\r\ninvalid IPNetwork 0.0.0.1/128\r\n```\r\n\r\nBoth Python netaddr and ipaddress modules see this as a valid IPNetwork. \r\n\r\n### Steps to Reproduce\r\n1. Create a prefix or aggregate using the prefix `::1/128`\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\n\r\nPrefix created\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\n\r\n```\r\ninvalid IPNetwork 0.0.0.1/128\r\n```\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.datastructures import DictWrapper\nimport netaddr\n\nfrom .formfields import IPNetworkFormField\n\n\nclass VarbinaryIPField(models.BinaryField):\n \"\"\"\n IP network address\n \"\"\"\n\n description = \"IP network address\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n def db_type(self, connection):\n \"\"\"Returns the correct field type for a given database vendor.\"\"\"\n\n # Use 'bytea' type for PostgreSQL.\n if connection.vendor == \"postgresql\":\n return \"bytea\"\n\n # Or 'varbinary' for everyone else.\n return \"varbinary(16)\"\n\n def value_to_string(self, obj):\n \"\"\"IPField is serialized as str(IPAddress())\"\"\"\n value = self.value_from_object(obj)\n if not value:\n return value\n\n return str(self._parse_address(value))\n\n def _parse_address(self, value):\n \"\"\"\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n value = int.from_bytes(value, \"big\")\n except TypeError:\n pass # It's a string\n\n try:\n return netaddr.IPAddress(value)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n raise ValidationError(e)\n\n def from_db_value(self, value, expression, connection):\n \"\"\"Converts DB (varbinary) to Python (str).\"\"\"\n return self.to_python(value)\n\n def to_python(self, value):\n \"\"\"Converts `value` to Python (str).\"\"\"\n if isinstance(value, netaddr.IPAddress):\n return str(value)\n\n if value is None:\n return value\n\n return str(self._parse_address(value))\n\n def get_db_prep_value(self, value, connection, prepared=False):\n \"\"\"Converts Python (str) to DB (varbinary).\"\"\"\n if value is None:\n return value\n\n # Parse the address and then pack it to binary.\n value = self._parse_address(value).packed\n\n # Use defaults for PostgreSQL\n if connection.vendor == \"postgresql\":\n return super().get_db_prep_value(value, connection, prepared)\n\n return value\n\n def form_class(self):\n return IPNetworkFormField\n\n def formfield(self, **kwargs):\n defaults = {\"form_class\": self.form_class()}\n defaults.update(kwargs)\n return super().formfield(**defaults)\n", "path": "nautobot/ipam/fields.py"}]} | 1,661 | 327 |
gh_patches_debug_28563 | rasdani/github-patches | git_diff | talonhub__community-479 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Number small can go larger than 100
If you say "ten five" number small will be 105.
</issue>
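The behaviour follows directly from the grammar and parser below: the `number_small` capture rule is `({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])`, and since "ten" is part of `tens_map`, the utterance "ten five" matches the `{alt_tens} {alt_digits}` branch. `scan_small_numbers` deliberately refuses to fuse a trailing digit onto "ten", so the two words stay separate numbers and are then concatenated as strings. A short trace using the helpers defined in `code/numbers.py`:

```python
words = ["ten", "five"]
list(scan_small_numbers(words))   # -> [10, 5]   ("ten" is excluded from digit fusing)
parse_number(words)               # -> "105"     (the pieces are joined as strings)
int(parse_number(words))          # -> 105, even though number_small is meant to stay below 100
```

So either the capture rule has to stop offering a trailing digit after "ten", or "ten" has to be treated like the other teens rather than like the real tens ("twenty" through "ninety").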
<code>
[start of code/numbers.py]
1 from talon import Context, Module, actions
2 from typing import List, Optional, Union, Iterator
3
4 mod = Module()
5 ctx = Context()
6
7 digits = "zero one two three four five six seven eight nine".split()
8 teens = "eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split()
9 tens = "ten twenty thirty forty fifty sixty seventy eighty ninety".split()
10 scales = "hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion".split()
11
12 digits_map = {n: i for i, n in enumerate(digits)}
13 digits_map["oh"] = 0
14 teens_map = {n: i + 11 for i, n in enumerate(teens)}
15 tens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}
16 scales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}
17 scales_map["hundred"] = 100
18
19 numbers_map = digits_map.copy()
20 numbers_map.update(teens_map)
21 numbers_map.update(tens_map)
22 numbers_map.update(scales_map)
23
24 def parse_number(l: List[str]) -> str:
25 """Parses a list of words into a number/digit string."""
26 l = list(scan_small_numbers(l))
27 for scale in scales:
28 l = parse_scale(scale, l)
29 return "".join(str(n) for n in l)
30
31 def scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:
32 """
33 Takes a list of number words, yields a generator of mixed numbers & strings.
34 Translates small number terms (<100) into corresponding numbers.
35 Drops all occurrences of "and".
36 Smashes digits onto tens words, eg. ["twenty", "one"] -> [21].
37 But note that "ten" and "zero" are excluded, ie:
38 ["ten", "three"] -> [10, 3]
39 ["fifty", "zero"] -> [50, 0]
40 Does nothing to scale words ("hundred", "thousand", "million", etc).
41 """
42 # reversed so that repeated pop() visits in left-to-right order
43 l = [x for x in reversed(l) if x != "and"]
44 while l:
45 n = l.pop()
46 # fuse tens onto digits, eg. "twenty", "one" -> 21
47 if n in tens_map and n != "ten" and l and digits_map.get(l[-1], 0) != 0:
48 d = l.pop()
49 yield numbers_map[n] + numbers_map[d]
50 # turn small number terms into corresponding numbers
51 elif n not in scales_map:
52 yield numbers_map[n]
53 else:
54 yield n
55
56 def parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:
57 """Parses a list of mixed numbers & strings for occurrences of the following
58 pattern:
59
60 <multiplier> <scale> <remainder>
61
62 where <scale> is a scale word like "hundred", "thousand", "million", etc and
63 multiplier and remainder are numbers or strings of numbers of the
64 appropriate size. For example:
65
66 parse_scale("hundred", [1, "hundred", 2]) -> [102]
67 parse_scale("thousand", [12, "thousand", 3, 45]) -> [12345]
68
69 We assume that all scales of lower magnitude have already been parsed; don't
70 call parse_scale("thousand") until you've called parse_scale("hundred").
71 """
72 scale_value = scales_map[scale]
73 scale_digits = len(str(scale_value))
74
75 # Split the list on the desired scale word, then parse from left to right.
76 left, *splits = split_list(scale, l)
77 for right in splits:
78 # (1) Figure out the multiplier by looking to the left of the scale
79 # word. We ignore non-integers because they are scale words that we
80 # haven't processed yet; this strategy means that "thousand hundred"
81 # gets parsed as 1,100 instead of 100,000, but "hundred thousand" is
82 # parsed correctly as 100,000.
83 before = 1 # default multiplier
84 if left and isinstance(left[-1], int) and left[-1] != 0:
85 before = left.pop()
86
87 # (2) Absorb numbers to the right, eg. in [1, "thousand", 1, 26], "1
88 # thousand" absorbs ["1", "26"] to make 1,126. We pull numbers off
89 # `right` until we fill up the desired number of digits.
90 after = ""
91 while right and isinstance(right[0], int):
92 next = after + str(right[0])
93 if len(next) >= scale_digits: break
94 after = next
95 right.pop(0)
96 after = int(after) if after else 0
97
98 # (3) Push the parsed number into place, append whatever was left
99 # unparsed, and continue.
100 left.append(before * scale_value + after)
101 left.extend(right)
102
103 return left
104
105 def split_list(value, l: list) -> Iterator:
106 """Splits a list by occurrences of a given value."""
107 start = 0
108 while True:
109 try: i = l.index(value, start)
110 except ValueError: break
111 yield l[start:i]
112 start = i+1
113 yield l[start:]
114
115
116 # # ---------- TESTS (uncomment to run) ----------
117 # def test_number(expected, string):
118 # print('testing:', string)
119 # l = list(scan_small_numbers(string.split()))
120 # print(" scan --->", l)
121 # for scale in scales:
122 # old = l
123 # l = parse_scale(scale, l)
124 # if scale in old: print(" parse -->", l)
125 # else: assert old == l, "parse_scale should do nothing if the scale does not occur in the list"
126 # result = "".join(str(n) for n in l)
127 # assert result == parse_number(string.split())
128 # assert str(expected) == result, f"parsing {string!r}, expected {expected}, got {result}"
129
130 # test_number(105000, "one hundred and five thousand")
131 # test_number(1000000, "one thousand thousand")
132 # test_number(1501000, "one million five hundred one thousand")
133 # test_number(1501106, "one million five hundred and one thousand one hundred and six")
134 # test_number(123, "one two three")
135 # test_number(123, "one twenty three")
136 # test_number(104, "ten four") # borderline, but valid in some dialects
137 # test_number(1066, "ten sixty six") # a common way of saying years
138 # test_number(1906, "nineteen oh six") # year
139 # test_number(2001, "twenty oh one") # year
140 # test_number(2020, "twenty twenty")
141 # test_number(1001, "one thousand one")
142 # test_number(1010, "one thousand ten")
143 # test_number(123456, "one hundred and twenty three thousand and four hundred and fifty six")
144 # test_number(123456, "one twenty three thousand four fifty six")
145
146 # ## failing (and somewhat debatable) tests from old numbers.py
147 # #test_number(10000011, "one million one one")
148 # #test_number(100001010, "one million ten ten")
149 # #test_number(1050006000, "one hundred thousand and five thousand and six thousand")
150
151
152 # ---------- CAPTURES ----------
153 alt_digits = "(" + ("|".join(digits_map.keys())) + ")"
154 alt_teens = "(" + ("|".join(teens_map.keys())) + ")"
155 alt_tens = "(" + ("|".join(tens_map.keys())) + ")"
156 alt_scales = "(" + ("|".join(scales_map.keys())) + ")"
157 number_word = "(" + "|".join(numbers_map.keys()) + ")"
158
159 # TODO: allow things like "double eight" for 88
160 @ctx.capture("digit_string", rule=f"({alt_digits} | {alt_teens} | {alt_tens})+")
161 def digit_string(m) -> str: return parse_number(list(m))
162
163 @ctx.capture("digits", rule="<digit_string>")
164 def digits(m) -> int:
165 """Parses a phrase representing a digit sequence, returning it as an integer."""
166 return int(m.digit_string)
167
168 @mod.capture(rule=f"{number_word}+ (and {number_word}+)*")
169 def number_string(m) -> str:
170 """Parses a number phrase, returning that number as a string."""
171 return parse_number(list(m))
172
173 @ctx.capture("number", rule="<user.number_string>")
174 def number(m) -> int:
175 """Parses a number phrase, returning it as an integer."""
176 return int(m.number_string)
177
178 @ctx.capture("number_signed", rule=f"[negative|minus] <number>")
179 def number_signed(m):
180 number = m[-1]
181 return -number if (m[0] in ["negative", "minus"]) else number
182
183 @ctx.capture(
184 "number_small", rule=f"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])"
185 )
186 def number_small(m): return int(parse_number(list(m)))
187
[end of code/numbers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/code/numbers.py b/code/numbers.py
--- a/code/numbers.py
+++ b/code/numbers.py
@@ -5,14 +5,14 @@
ctx = Context()
digits = "zero one two three four five six seven eight nine".split()
-teens = "eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split()
-tens = "ten twenty thirty forty fifty sixty seventy eighty ninety".split()
+teens = "ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split()
+tens = "twenty thirty forty fifty sixty seventy eighty ninety".split()
scales = "hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion".split()
digits_map = {n: i for i, n in enumerate(digits)}
digits_map["oh"] = 0
-teens_map = {n: i + 11 for i, n in enumerate(teens)}
-tens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}
+teens_map = {n: i + 10 for i, n in enumerate(teens)}
+tens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}
scales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}
scales_map["hundred"] = 100
@@ -44,7 +44,7 @@
while l:
n = l.pop()
# fuse tens onto digits, eg. "twenty", "one" -> 21
- if n in tens_map and n != "ten" and l and digits_map.get(l[-1], 0) != 0:
+ if n in tens_map and l and digits_map.get(l[-1], 0) != 0:
d = l.pop()
yield numbers_map[n] + numbers_map[d]
# turn small number terms into corresponding numbers
| {"golden_diff": "diff --git a/code/numbers.py b/code/numbers.py\n--- a/code/numbers.py\n+++ b/code/numbers.py\n@@ -5,14 +5,14 @@\n ctx = Context()\n \n digits = \"zero one two three four five six seven eight nine\".split()\n-teens = \"eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\n-tens = \"ten twenty thirty forty fifty sixty seventy eighty ninety\".split()\n+teens = \"ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\n+tens = \"twenty thirty forty fifty sixty seventy eighty ninety\".split()\n scales = \"hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion\".split()\n \n digits_map = {n: i for i, n in enumerate(digits)}\n digits_map[\"oh\"] = 0\n-teens_map = {n: i + 11 for i, n in enumerate(teens)}\n-tens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}\n+teens_map = {n: i + 10 for i, n in enumerate(teens)}\n+tens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}\n scales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}\n scales_map[\"hundred\"] = 100\n \n@@ -44,7 +44,7 @@\n while l:\n n = l.pop()\n # fuse tens onto digits, eg. \"twenty\", \"one\" -> 21\n- if n in tens_map and n != \"ten\" and l and digits_map.get(l[-1], 0) != 0:\n+ if n in tens_map and l and digits_map.get(l[-1], 0) != 0:\n d = l.pop()\n yield numbers_map[n] + numbers_map[d]\n # turn small number terms into corresponding numbers\n", "issue": "Number small can go larger than 100\nIf you say \"ten five\" number small will be 105.\nNumber small can go larger than 100\nIf you say \"ten five\" number small will be 105.\n", "before_files": [{"content": "from talon import Context, Module, actions\nfrom typing import List, Optional, Union, Iterator\n\nmod = Module()\nctx = Context()\n\ndigits = \"zero one two three four five six seven eight nine\".split()\nteens = \"eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\ntens = \"ten twenty thirty forty fifty sixty seventy eighty ninety\".split()\nscales = \"hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion\".split()\n\ndigits_map = {n: i for i, n in enumerate(digits)}\ndigits_map[\"oh\"] = 0\nteens_map = {n: i + 11 for i, n in enumerate(teens)}\ntens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}\nscales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}\nscales_map[\"hundred\"] = 100\n\nnumbers_map = digits_map.copy()\nnumbers_map.update(teens_map)\nnumbers_map.update(tens_map)\nnumbers_map.update(scales_map)\n\ndef parse_number(l: List[str]) -> str:\n \"\"\"Parses a list of words into a number/digit string.\"\"\"\n l = list(scan_small_numbers(l))\n for scale in scales:\n l = parse_scale(scale, l)\n return \"\".join(str(n) for n in l)\n\ndef scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:\n \"\"\"\n Takes a list of number words, yields a generator of mixed numbers & strings.\n Translates small number terms (<100) into corresponding numbers.\n Drops all occurrences of \"and\".\n Smashes digits onto tens words, eg. 
[\"twenty\", \"one\"] -> [21].\n But note that \"ten\" and \"zero\" are excluded, ie:\n [\"ten\", \"three\"] -> [10, 3]\n [\"fifty\", \"zero\"] -> [50, 0]\n Does nothing to scale words (\"hundred\", \"thousand\", \"million\", etc).\n \"\"\"\n # reversed so that repeated pop() visits in left-to-right order\n l = [x for x in reversed(l) if x != \"and\"]\n while l:\n n = l.pop()\n # fuse tens onto digits, eg. \"twenty\", \"one\" -> 21\n if n in tens_map and n != \"ten\" and l and digits_map.get(l[-1], 0) != 0:\n d = l.pop()\n yield numbers_map[n] + numbers_map[d]\n # turn small number terms into corresponding numbers\n elif n not in scales_map:\n yield numbers_map[n]\n else:\n yield n\n\ndef parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:\n \"\"\"Parses a list of mixed numbers & strings for occurrences of the following\n pattern:\n\n <multiplier> <scale> <remainder>\n\n where <scale> is a scale word like \"hundred\", \"thousand\", \"million\", etc and\n multiplier and remainder are numbers or strings of numbers of the\n appropriate size. For example:\n\n parse_scale(\"hundred\", [1, \"hundred\", 2]) -> [102]\n parse_scale(\"thousand\", [12, \"thousand\", 3, 45]) -> [12345]\n\n We assume that all scales of lower magnitude have already been parsed; don't\n call parse_scale(\"thousand\") until you've called parse_scale(\"hundred\").\n \"\"\"\n scale_value = scales_map[scale]\n scale_digits = len(str(scale_value))\n\n # Split the list on the desired scale word, then parse from left to right.\n left, *splits = split_list(scale, l)\n for right in splits:\n # (1) Figure out the multiplier by looking to the left of the scale\n # word. We ignore non-integers because they are scale words that we\n # haven't processed yet; this strategy means that \"thousand hundred\"\n # gets parsed as 1,100 instead of 100,000, but \"hundred thousand\" is\n # parsed correctly as 100,000.\n before = 1 # default multiplier\n if left and isinstance(left[-1], int) and left[-1] != 0:\n before = left.pop()\n\n # (2) Absorb numbers to the right, eg. in [1, \"thousand\", 1, 26], \"1\n # thousand\" absorbs [\"1\", \"26\"] to make 1,126. 
We pull numbers off\n # `right` until we fill up the desired number of digits.\n after = \"\"\n while right and isinstance(right[0], int):\n next = after + str(right[0])\n if len(next) >= scale_digits: break\n after = next\n right.pop(0)\n after = int(after) if after else 0\n\n # (3) Push the parsed number into place, append whatever was left\n # unparsed, and continue.\n left.append(before * scale_value + after)\n left.extend(right)\n\n return left\n\ndef split_list(value, l: list) -> Iterator:\n \"\"\"Splits a list by occurrences of a given value.\"\"\"\n start = 0\n while True:\n try: i = l.index(value, start)\n except ValueError: break\n yield l[start:i]\n start = i+1\n yield l[start:]\n\n\f\n# # ---------- TESTS (uncomment to run) ----------\n# def test_number(expected, string):\n# print('testing:', string)\n# l = list(scan_small_numbers(string.split()))\n# print(\" scan --->\", l)\n# for scale in scales:\n# old = l\n# l = parse_scale(scale, l)\n# if scale in old: print(\" parse -->\", l)\n# else: assert old == l, \"parse_scale should do nothing if the scale does not occur in the list\"\n# result = \"\".join(str(n) for n in l)\n# assert result == parse_number(string.split())\n# assert str(expected) == result, f\"parsing {string!r}, expected {expected}, got {result}\"\n\n# test_number(105000, \"one hundred and five thousand\")\n# test_number(1000000, \"one thousand thousand\")\n# test_number(1501000, \"one million five hundred one thousand\")\n# test_number(1501106, \"one million five hundred and one thousand one hundred and six\")\n# test_number(123, \"one two three\")\n# test_number(123, \"one twenty three\")\n# test_number(104, \"ten four\") # borderline, but valid in some dialects\n# test_number(1066, \"ten sixty six\") # a common way of saying years\n# test_number(1906, \"nineteen oh six\") # year\n# test_number(2001, \"twenty oh one\") # year\n# test_number(2020, \"twenty twenty\")\n# test_number(1001, \"one thousand one\")\n# test_number(1010, \"one thousand ten\")\n# test_number(123456, \"one hundred and twenty three thousand and four hundred and fifty six\")\n# test_number(123456, \"one twenty three thousand four fifty six\")\n\n# ## failing (and somewhat debatable) tests from old numbers.py\n# #test_number(10000011, \"one million one one\")\n# #test_number(100001010, \"one million ten ten\")\n# #test_number(1050006000, \"one hundred thousand and five thousand and six thousand\")\n\n\f\n# ---------- CAPTURES ----------\nalt_digits = \"(\" + (\"|\".join(digits_map.keys())) + \")\"\nalt_teens = \"(\" + (\"|\".join(teens_map.keys())) + \")\"\nalt_tens = \"(\" + (\"|\".join(tens_map.keys())) + \")\"\nalt_scales = \"(\" + (\"|\".join(scales_map.keys())) + \")\"\nnumber_word = \"(\" + \"|\".join(numbers_map.keys()) + \")\"\n\n# TODO: allow things like \"double eight\" for 88\[email protected](\"digit_string\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens})+\")\ndef digit_string(m) -> str: return parse_number(list(m))\n\[email protected](\"digits\", rule=\"<digit_string>\")\ndef digits(m) -> int:\n \"\"\"Parses a phrase representing a digit sequence, returning it as an integer.\"\"\"\n return int(m.digit_string)\n\[email protected](rule=f\"{number_word}+ (and {number_word}+)*\")\ndef number_string(m) -> str:\n \"\"\"Parses a number phrase, returning that number as a string.\"\"\"\n return parse_number(list(m))\n\[email protected](\"number\", rule=\"<user.number_string>\")\ndef number(m) -> int:\n \"\"\"Parses a number phrase, returning it as an integer.\"\"\"\n return 
int(m.number_string)\n\[email protected](\"number_signed\", rule=f\"[negative|minus] <number>\")\ndef number_signed(m):\n number = m[-1]\n return -number if (m[0] in [\"negative\", \"minus\"]) else number\n\[email protected](\n \"number_small\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])\"\n)\ndef number_small(m): return int(parse_number(list(m)))\n", "path": "code/numbers.py"}]} | 3,170 | 438 |
gh_patches_debug_5419 | rasdani/github-patches | git_diff | scrapy__scrapy-475 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ability to not send specific headers in HTTP requests
Some web servers behave differently when they receive or don't receive specific headers.
For example FeedBurner (http://feeds.feedburner.com/someblog) sends out XML RSS feeds **only if you do not set the "Referer" header.**
The idea would be to use the `headers` dict with some keys with a `None` value, and skip these headers when sending the HTTP request.
Currently, for the "Referer" example:
- `headers={"Referer": None}` sends "Referer: None"
- `headers={"Referer": ""}` sends "Referer: " (which works for the FeedBurner case, but is not satisfactory)
- disable `RefererMiddleware` but that feels a bit heavy
(for this FeedBurner thing, apparently adding `?format=xml` also does the trick)
</issue>
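Concretely, the proposal is that `None` should mean "do not send this header at all", while an empty string keeps sending the header with an empty value. Because `Headers.normvalue` (shown below) already normalises every value into a list, it is enough for it to map `None` to an empty list; a header with no values then never gets serialised. A sketch of the intended semantics once such a change is in place:

```python
h = Headers({"Referer": None, "User-Agent": "my-bot"})
h.getlist("Referer")      # -> []         no values stored, so no "Referer" line is written out
h["Referer"]              # -> None       (getlist is empty, __getitem__ falls back to None)
h.getlist("User-Agent")   # -> ['my-bot']
```

This keeps the explicit-empty case (`headers={"Referer": ""}`) unchanged, so both behaviours from the list above remain expressible.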
<code>
[start of scrapy/http/headers.py]
1 from w3lib.http import headers_dict_to_raw
2 from scrapy.utils.datatypes import CaselessDict
3
4
5 class Headers(CaselessDict):
6 """Case insensitive http headers dictionary"""
7
8 def __init__(self, seq=None, encoding='utf-8'):
9 self.encoding = encoding
10 super(Headers, self).__init__(seq)
11
12 def normkey(self, key):
13 """Headers must not be unicode"""
14 if isinstance(key, unicode):
15 return key.title().encode(self.encoding)
16 return key.title()
17
18 def normvalue(self, value):
19 """Headers must not be unicode"""
20 if not hasattr(value, '__iter__'):
21 value = [value]
22 return [x.encode(self.encoding) if isinstance(x, unicode) else x \
23 for x in value]
24
25 def __getitem__(self, key):
26 try:
27 return super(Headers, self).__getitem__(key)[-1]
28 except IndexError:
29 return None
30
31 def get(self, key, def_val=None):
32 try:
33 return super(Headers, self).get(key, def_val)[-1]
34 except IndexError:
35 return None
36
37 def getlist(self, key, def_val=None):
38 try:
39 return super(Headers, self).__getitem__(key)
40 except KeyError:
41 if def_val is not None:
42 return self.normvalue(def_val)
43 return []
44
45 def setlist(self, key, list_):
46 self[key] = list_
47
48 def setlistdefault(self, key, default_list=()):
49 return self.setdefault(key, default_list)
50
51 def appendlist(self, key, value):
52 lst = self.getlist(key)
53 lst.extend(self.normvalue(value))
54 self[key] = lst
55
56 def items(self):
57 return list(self.iteritems())
58
59 def iteritems(self):
60 return ((k, self.getlist(k)) for k in self.keys())
61
62 def values(self):
63 return [self[k] for k in self.keys()]
64
65 def to_string(self):
66 return headers_dict_to_raw(self)
67
68 def __copy__(self):
69 return self.__class__(self)
70 copy = __copy__
71
72
73
[end of scrapy/http/headers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py
--- a/scrapy/http/headers.py
+++ b/scrapy/http/headers.py
@@ -17,7 +17,9 @@
def normvalue(self, value):
"""Headers must not be unicode"""
- if not hasattr(value, '__iter__'):
+ if value is None:
+ value = []
+ elif not hasattr(value, '__iter__'):
value = [value]
return [x.encode(self.encoding) if isinstance(x, unicode) else x \
for x in value]
| {"golden_diff": "diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py\n--- a/scrapy/http/headers.py\n+++ b/scrapy/http/headers.py\n@@ -17,7 +17,9 @@\n \n def normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n- if not hasattr(value, '__iter__'):\n+ if value is None:\n+ value = []\n+ elif not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n", "issue": "Ability to not send specific headers in HTTP requests\nSome web servers behave differently when they are receive or don't receive specific headers.\n\nFor example FeedBurner (http://feeds.feedburner.com/someblog) sends out XML RSS feeds **only is you do not set the \"Referer\" header.**\n\nThe idea would be to use the `headers` dict with some keys with a `None` value, and skip these headers when sending the HTTP request.\n\nCurrently, for the \"Referer\" example:\n- `headers={\"Referer\": None}` sends \"Referer: None\"\n- `headers={\"Referer\": \"\"}` sends \"Referer: \" (which works for the FeedBurner case, but is not satisfactory)\n- disable `RefererMiddleware` but that feels a bit heavy\n\n(for this FeedBurner thing, apparently adding `?format=xml` also does the trick)\n\n", "before_files": [{"content": "from w3lib.http import headers_dict_to_raw\nfrom scrapy.utils.datatypes import CaselessDict\n\n\nclass Headers(CaselessDict):\n \"\"\"Case insensitive http headers dictionary\"\"\"\n\n def __init__(self, seq=None, encoding='utf-8'):\n self.encoding = encoding\n super(Headers, self).__init__(seq)\n\n def normkey(self, key):\n \"\"\"Headers must not be unicode\"\"\"\n if isinstance(key, unicode):\n return key.title().encode(self.encoding)\n return key.title()\n\n def normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n if not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n\n def __getitem__(self, key):\n try:\n return super(Headers, self).__getitem__(key)[-1]\n except IndexError:\n return None\n\n def get(self, key, def_val=None):\n try:\n return super(Headers, self).get(key, def_val)[-1]\n except IndexError:\n return None\n\n def getlist(self, key, def_val=None):\n try:\n return super(Headers, self).__getitem__(key)\n except KeyError:\n if def_val is not None:\n return self.normvalue(def_val)\n return []\n\n def setlist(self, key, list_):\n self[key] = list_\n\n def setlistdefault(self, key, default_list=()):\n return self.setdefault(key, default_list)\n\n def appendlist(self, key, value):\n lst = self.getlist(key)\n lst.extend(self.normvalue(value))\n self[key] = lst\n\n def items(self):\n return list(self.iteritems())\n\n def iteritems(self):\n return ((k, self.getlist(k)) for k in self.keys())\n\n def values(self):\n return [self[k] for k in self.keys()]\n\n def to_string(self):\n return headers_dict_to_raw(self)\n\n def __copy__(self):\n return self.__class__(self)\n copy = __copy__\n\n\n", "path": "scrapy/http/headers.py"}]} | 1,315 | 128 |
gh_patches_debug_6813 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-436 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make master exit when there are no tasks left.
Currently, the master exits only when there are no tasks left AND all workers are gone. It might be left hanging if a worker got preempted.
</issue>
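In the training loop below, the master already breaks out of its `while True` as soon as `task_q.finished()` returns true; what keeps it around afterwards is the shutdown path, which polls `worker_manager.get_counters()` and waits for the worker pods to be reported gone before stopping the gRPC server. Decoupling the master's exit from the workers' state reduces the teardown to, roughly:

```python
# sketch: exit as soon as the task queue is finished, without waiting on worker pods
if args.num_worker:
    worker_manager.remove_workers()   # ask Kubernetes to delete the worker pods
server.stop(0)                        # stop the gRPC server immediately
```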
<code>
[start of elasticdl/python/elasticdl/master/main.py]
1 import logging
2 import time
3 import argparse
4 import os
5
6 import grpc
7 import tensorflow as tf
8
9 tf.enable_eager_execution()
10
11 from concurrent import futures
12 from recordio import File
13 from elasticdl.proto import master_pb2_grpc
14 from elasticdl.master.servicer import MasterServicer
15 from elasticdl.master.task_queue import _TaskQueue
16 from elasticdl.master.k8s_worker_manager import WorkerManager
17 from elasticdl.common.model_helper import load_user_model, build_model
18
19
20 def _make_task_queue(data_dir, record_per_task, num_epoch):
21 f_records = {}
22 for f in os.listdir(data_dir):
23 p = os.path.join(data_dir, f)
24 with File(p, "r") as rio:
25 f_records[p] = rio.count()
26 return _TaskQueue(f_records, record_per_task, num_epoch)
27
28
29 def _parse_args():
30 parser = argparse.ArgumentParser(description="ElasticDL Master")
31 parser.add_argument(
32 "--model_file",
33 help="Full file path of user defined neural model",
34 required=True,
35 )
36 parser.add_argument(
37 "--train_data_dir",
38 help="Training data directory. Files should be in RecordIO format",
39 required=True,
40 )
41 parser.add_argument("--record_per_task", type=int, required=True)
42 parser.add_argument("--num_epoch", type=int, required=True)
43 parser.add_argument(
44 "--grads_to_wait",
45 type=int,
46 help="Number of gradients to wait before updating model",
47 required=True,
48 )
49 parser.add_argument(
50 "--minibatch_size",
51 type=int,
52 help="Minibatch size used by workers to compute gradients",
53 required=True,
54 )
55 parser.add_argument(
56 "--num_worker",
57 type=int,
58 help="the number of workers used in training",
59 default=0,
60 )
61 parser.add_argument(
62 "--worker_cpu_request",
63 help="the minimal cpu required by worker in training",
64 default="1000m",
65 )
66 parser.add_argument(
67 "--worker_cpu_limit",
68 help="the maximal cpu used by worker in training",
69 default="1000m",
70 )
71 parser.add_argument(
72 "--worker_memory_request",
73 help="the minimal memory required by worker in training",
74 default="4096Mi",
75 )
76 parser.add_argument(
77 "--worker_memory_limit",
78 help="the maximal memory used by worker in training",
79 default="4096Mi",
80 )
81 parser.add_argument(
82 "--worker_pod_priority",
83 help="the requested priority of worker pod")
84 parser.add_argument(
85 "--worker_image", help="docker image for worker", default=None
86 )
87 parser.add_argument("--job_name", help="job name", required=True)
88 parser.add_argument(
89 "--codec_type",
90 default="bytes",
91 choices=["tf_example", "bytes"],
92 help="Type of codec(tf_example or bytes)",
93 )
94 return parser.parse_args()
95
96
97 def main():
98 # TODO: pass port via flags.
99 PORT = 50001
100 logger = logging.getLogger("master")
101 args = _parse_args()
102 task_q = _make_task_queue(
103 args.train_data_dir, args.record_per_task, args.num_epoch
104 )
105 model_module = load_user_model(args.model_file)
106 model_inst = model_module.model
107 build_model(model_inst, model_module.feature_columns())
108 optimizer = model_module.optimizer()
109
110 server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))
111 master_pb2_grpc.add_MasterServicer_to_server(
112 MasterServicer(
113 logger,
114 args.grads_to_wait,
115 args.minibatch_size,
116 optimizer,
117 task_q,
118 init_var=model_inst.trainable_variables,
119 ),
120 server,
121 )
122 server.add_insecure_port("[::]:{}".format(PORT))
123 server.start()
124 logger.warning("Server started at port: %d", PORT)
125
126 if args.num_worker:
127 master_addr = "%s:%d" % (os.getenv("MY_POD_IP", "localhost"), PORT)
128 worker_command = ["python"]
129 worker_args = [
130 "-m",
131 "elasticdl.worker.main",
132 "--model_file",
133 args.model_file,
134 "--master_addr",
135 master_addr,
136 "--codec_type",
137 args.codec_type
138 ]
139
140 worker_manager = WorkerManager(
141 job_name=args.job_name,
142 worker_image=args.worker_image,
143 command=worker_command,
144 args=worker_args,
145 namespace="default",
146 num_worker=args.num_worker,
147 cpu_request=args.worker_cpu_request,
148 cpu_limit=args.worker_cpu_limit,
149 memory_request=args.worker_memory_request,
150 memory_limit=args.worker_memory_limit,
151 pod_priority=args.worker_pod_priority,
152 )
153 worker_manager.start_workers(restart_policy="Never")
154
155 try:
156 while True:
157 if task_q.finished():
158 break
159 time.sleep(30)
160 except KeyboardInterrupt:
161 logger.warning("Server stopping")
162
163 if args.num_worker:
164 # TODO: worker_manager.remove_workers supports synchronized call
165 worker_manager.remove_workers()
166 # wait for worker pod to be deleted
167 max_check_num = 10
168 for _ in range(max_check_num):
169 time.sleep(3)
170 counters = worker_manager.get_counters()
171 if not counters:
172 break
173 server.stop(0)
174
175
176 if __name__ == "__main__":
177 logging.basicConfig()
178 main()
179
[end of elasticdl/python/elasticdl/master/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/python/elasticdl/master/main.py b/elasticdl/python/elasticdl/master/main.py
--- a/elasticdl/python/elasticdl/master/main.py
+++ b/elasticdl/python/elasticdl/master/main.py
@@ -163,13 +163,7 @@
if args.num_worker:
# TODO: worker_manager.remove_workers supports synchronized call
worker_manager.remove_workers()
- # wait for worker pod to be deleted
- max_check_num = 10
- for _ in range(max_check_num):
- time.sleep(3)
- counters = worker_manager.get_counters()
- if not counters:
- break
+
server.stop(0)
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/master/main.py b/elasticdl/python/elasticdl/master/main.py\n--- a/elasticdl/python/elasticdl/master/main.py\n+++ b/elasticdl/python/elasticdl/master/main.py\n@@ -163,13 +163,7 @@\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n- # wait for worker pod to be deleted\n- max_check_num = 10\n- for _ in range(max_check_num):\n- time.sleep(3)\n- counters = worker_manager.get_counters()\n- if not counters:\n- break\n+\n server.stop(0)\n", "issue": "Make master exist when there are no tasks left.\nCurrently, master exists when there are no tasks left AND all workers are gone. It might left hanging if a worker got preempted.\n", "before_files": [{"content": "import logging\nimport time\nimport argparse\nimport os\n\nimport grpc\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nfrom concurrent import futures\nfrom recordio import File\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.master.servicer import MasterServicer\nfrom elasticdl.master.task_queue import _TaskQueue\nfrom elasticdl.master.k8s_worker_manager import WorkerManager\nfrom elasticdl.common.model_helper import load_user_model, build_model\n\n\ndef _make_task_queue(data_dir, record_per_task, num_epoch):\n f_records = {}\n for f in os.listdir(data_dir):\n p = os.path.join(data_dir, f)\n with File(p, \"r\") as rio:\n f_records[p] = rio.count()\n return _TaskQueue(f_records, record_per_task, num_epoch)\n\n\ndef _parse_args():\n parser = argparse.ArgumentParser(description=\"ElasticDL Master\")\n parser.add_argument(\n \"--model_file\",\n help=\"Full file path of user defined neural model\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_dir\",\n help=\"Training data directory. 
Files should be in RecordIO format\",\n required=True,\n )\n parser.add_argument(\"--record_per_task\", type=int, required=True)\n parser.add_argument(\"--num_epoch\", type=int, required=True)\n parser.add_argument(\n \"--grads_to_wait\",\n type=int,\n help=\"Number of gradients to wait before updating model\",\n required=True,\n )\n parser.add_argument(\n \"--minibatch_size\",\n type=int,\n help=\"Minibatch size used by workers to compute gradients\",\n required=True,\n )\n parser.add_argument(\n \"--num_worker\",\n type=int,\n help=\"the number of workers used in training\",\n default=0,\n )\n parser.add_argument(\n \"--worker_cpu_request\",\n help=\"the minimal cpu required by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_cpu_limit\",\n help=\"the maximal cpu used by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_memory_request\",\n help=\"the minimal memory required by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_memory_limit\",\n help=\"the maximal memory used by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_pod_priority\",\n help=\"the requested priority of worker pod\")\n parser.add_argument(\n \"--worker_image\", help=\"docker image for worker\", default=None\n )\n parser.add_argument(\"--job_name\", help=\"job name\", required=True)\n parser.add_argument(\n \"--codec_type\",\n default=\"bytes\",\n choices=[\"tf_example\", \"bytes\"],\n help=\"Type of codec(tf_example or bytes)\",\n )\n return parser.parse_args()\n\n\ndef main():\n # TODO: pass port via flags.\n PORT = 50001\n logger = logging.getLogger(\"master\")\n args = _parse_args()\n task_q = _make_task_queue(\n args.train_data_dir, args.record_per_task, args.num_epoch\n )\n model_module = load_user_model(args.model_file)\n model_inst = model_module.model\n build_model(model_inst, model_module.feature_columns())\n optimizer = model_module.optimizer()\n\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))\n master_pb2_grpc.add_MasterServicer_to_server(\n MasterServicer(\n logger,\n args.grads_to_wait,\n args.minibatch_size,\n optimizer,\n task_q,\n init_var=model_inst.trainable_variables,\n ),\n server,\n )\n server.add_insecure_port(\"[::]:{}\".format(PORT))\n server.start()\n logger.warning(\"Server started at port: %d\", PORT)\n\n if args.num_worker:\n master_addr = \"%s:%d\" % (os.getenv(\"MY_POD_IP\", \"localhost\"), PORT)\n worker_command = [\"python\"]\n worker_args = [\n \"-m\",\n \"elasticdl.worker.main\",\n \"--model_file\",\n args.model_file,\n \"--master_addr\",\n master_addr,\n \"--codec_type\",\n args.codec_type\n ]\n\n worker_manager = WorkerManager(\n job_name=args.job_name,\n worker_image=args.worker_image,\n command=worker_command,\n args=worker_args,\n namespace=\"default\",\n num_worker=args.num_worker,\n cpu_request=args.worker_cpu_request,\n cpu_limit=args.worker_cpu_limit,\n memory_request=args.worker_memory_request,\n memory_limit=args.worker_memory_limit,\n pod_priority=args.worker_pod_priority,\n )\n worker_manager.start_workers(restart_policy=\"Never\")\n\n try:\n while True:\n if task_q.finished():\n break\n time.sleep(30)\n except KeyboardInterrupt:\n logger.warning(\"Server stopping\")\n\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n # wait for worker pod to be deleted\n max_check_num = 10\n for _ in range(max_check_num):\n time.sleep(3)\n counters = 
worker_manager.get_counters()\n if not counters:\n break\n server.stop(0)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig()\n main()\n", "path": "elasticdl/python/elasticdl/master/main.py"}]} | 2,159 | 155 |
gh_patches_debug_38762 | rasdani/github-patches | git_diff | nilearn__nilearn-1225 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remove examples/03_connectivity/plot_power_connectome.py?
- Signal extraction from spheres placed on Power coordinates is already done in `examples/03_connectivity/plot_seed_based_connectome.py`
- Sparse inverse covariance estimation is already explained in `examples/03_connectivity/plot_inverse_covariance_connectome.py` for the MSDL atlas. For me, it doesn't really make a difference whether it is estimated on time series extracted from probabilistic maps or from spherical ROIs.
</issue>
<code>
[start of examples/03_connectivity/plot_power_connectome.py]
1 """
2 Extracting signals and plotting a connectome for the Power-264 seed-region atlas
3 ================================================================================
4
5 This example shows how to extract signals from spherical seed-regions based
6 on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse
7 inverse covariance.
8
9 Power, Jonathan D., et al. "Functional network organization of the
10 human brain." Neuron 72.4 (2011): 665-678.
11
12 """
13
14 import numpy as np
15 import matplotlib.pyplot as plt
16 from nilearn import datasets, connectome, plotting, input_data
17
18
19 ###############################################################################
20 # Atlas and dataset fetching
21
22 # Fetch the coordinates of power atlas
23 power = datasets.fetch_coords_power_2011()
24 power_coords = np.vstack((
25 power.rois['x'],
26 power.rois['y'],
27 power.rois['z'],
28 )).T
29
30 # Fetch the first subject of ADHD dataset
31 adhd = datasets.fetch_adhd(n_subjects=1)
32
33
34 ###############################################################################
35 # Masking: taking the signal in a sphere of radius 5mm around Power coords
36
37 masker = input_data.NiftiSpheresMasker(seeds=power_coords,
38 smoothing_fwhm=4,
39 radius=5.,
40 standardize=True,
41 detrend=True,
42 low_pass=0.1,
43 high_pass=0.01,
44 t_r=2.5)
45
46 timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])
47
48 ###############################################################################
49 # Extract and plot correlation matrix
50
51 # calculate connectivity and plot Power-264 correlation matrix
52 connectivity = connectome.ConnectivityMeasure(kind='correlation')
53 corr_matrix = connectivity.fit_transform([timeseries])[0]
54 np.fill_diagonal(corr_matrix, 0)
55 plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')
56 plt.colorbar()
57 plt.title('Power 264 Connectivity')
58
59 # Plot the connectome
60
61 plotting.plot_connectome(corr_matrix,
62 power_coords,
63 edge_threshold='99.8%',
64 node_size=20)
65
66
67 ###############################################################################
68 # Extract and plot covariance and sparse covariance
69
70 # Compute the sparse inverse covariance
71 from sklearn.covariance import GraphLassoCV
72
73 estimator = GraphLassoCV()
74 estimator.fit(timeseries)
75
76 # Display the covariance
77 plt.figure(figsize=(5, 5))
78 plt.imshow(estimator.covariance_, interpolation="nearest",
79 vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
80 plt.title('Covariance matrix')
81
82 # display the corresponding graph
83 plotting.plot_connectome(estimator.covariance_,
84 power_coords,
85 title='Covariance connectome',
86 edge_threshold='99.8%',
87 node_size=20)
88
89 # Display the sparse inverse covariance
90 plt.figure(figsize=(5, 5))
91 plt.imshow(estimator.precision_, interpolation="nearest",
92 vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
93 plt.title('Precision matrix')
94
95 # And now display the corresponding graph
96 plotting.plot_connectome(estimator.precision_, power_coords,
97 title='Precision connectome',
98 edge_threshold="99.8%",
99 node_size=20)
100 plotting.show()
101
[end of examples/03_connectivity/plot_power_connectome.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/03_connectivity/plot_power_connectome.py b/examples/03_connectivity/plot_power_connectome.py
deleted file mode 100644
--- a/examples/03_connectivity/plot_power_connectome.py
+++ /dev/null
@@ -1,100 +0,0 @@
-"""
-Extracting signals and plotting a connectome for the Power-264 seed-region atlas
-================================================================================
-
-This example shows how to extract signals from spherical seed-regions based
-on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse
-inverse covariance.
-
-Power, Jonathan D., et al. "Functional network organization of the
-human brain." Neuron 72.4 (2011): 665-678.
-
-"""
-
-import numpy as np
-import matplotlib.pyplot as plt
-from nilearn import datasets, connectome, plotting, input_data
-
-
-###############################################################################
-# Atlas and dataset fetching
-
-# Fetch the coordinates of power atlas
-power = datasets.fetch_coords_power_2011()
-power_coords = np.vstack((
- power.rois['x'],
- power.rois['y'],
- power.rois['z'],
-)).T
-
-# Fetch the first subject of ADHD dataset
-adhd = datasets.fetch_adhd(n_subjects=1)
-
-
-###############################################################################
-# Masking: taking the signal in a sphere of radius 5mm around Power coords
-
-masker = input_data.NiftiSpheresMasker(seeds=power_coords,
- smoothing_fwhm=4,
- radius=5.,
- standardize=True,
- detrend=True,
- low_pass=0.1,
- high_pass=0.01,
- t_r=2.5)
-
-timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])
-
-###############################################################################
-# Extract and plot correlation matrix
-
-# calculate connectivity and plot Power-264 correlation matrix
-connectivity = connectome.ConnectivityMeasure(kind='correlation')
-corr_matrix = connectivity.fit_transform([timeseries])[0]
-np.fill_diagonal(corr_matrix, 0)
-plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')
-plt.colorbar()
-plt.title('Power 264 Connectivity')
-
-# Plot the connectome
-
-plotting.plot_connectome(corr_matrix,
- power_coords,
- edge_threshold='99.8%',
- node_size=20)
-
-
-###############################################################################
-# Extract and plot covariance and sparse covariance
-
-# Compute the sparse inverse covariance
-from sklearn.covariance import GraphLassoCV
-
-estimator = GraphLassoCV()
-estimator.fit(timeseries)
-
-# Display the covariance
-plt.figure(figsize=(5, 5))
-plt.imshow(estimator.covariance_, interpolation="nearest",
- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
-plt.title('Covariance matrix')
-
-# display the corresponding graph
-plotting.plot_connectome(estimator.covariance_,
- power_coords,
- title='Covariance connectome',
- edge_threshold='99.8%',
- node_size=20)
-
-# Display the sparse inverse covariance
-plt.figure(figsize=(5, 5))
-plt.imshow(estimator.precision_, interpolation="nearest",
- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
-plt.title('Precision matrix')
-
-# And now display the corresponding graph
-plotting.plot_connectome(estimator.precision_, power_coords,
- title='Precision connectome',
- edge_threshold="99.8%",
- node_size=20)
-plotting.show()
| {"golden_diff": "diff --git a/examples/03_connectivity/plot_power_connectome.py b/examples/03_connectivity/plot_power_connectome.py\ndeleted file mode 100644\n--- a/examples/03_connectivity/plot_power_connectome.py\n+++ /dev/null\n@@ -1,100 +0,0 @@\n-\"\"\"\n-Extracting signals and plotting a connectome for the Power-264 seed-region atlas\n-================================================================================\n-\n-This example shows how to extract signals from spherical seed-regions based\n-on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse\n-inverse covariance.\n-\n-Power, Jonathan D., et al. \"Functional network organization of the\n-human brain.\" Neuron 72.4 (2011): 665-678.\n-\n-\"\"\"\n-\n-import numpy as np\n-import matplotlib.pyplot as plt\n-from nilearn import datasets, connectome, plotting, input_data\n-\n-\n-###############################################################################\n-# Atlas and dataset fetching\n-\n-# Fetch the coordinates of power atlas\n-power = datasets.fetch_coords_power_2011()\n-power_coords = np.vstack((\n- power.rois['x'],\n- power.rois['y'],\n- power.rois['z'],\n-)).T\n-\n-# Fetch the first subject of ADHD dataset\n-adhd = datasets.fetch_adhd(n_subjects=1)\n-\n-\n-###############################################################################\n-# Masking: taking the signal in a sphere of radius 5mm around Power coords\n-\n-masker = input_data.NiftiSpheresMasker(seeds=power_coords,\n- smoothing_fwhm=4,\n- radius=5.,\n- standardize=True,\n- detrend=True,\n- low_pass=0.1,\n- high_pass=0.01,\n- t_r=2.5)\n-\n-timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])\n-\n-###############################################################################\n-# Extract and plot correlation matrix\n-\n-# calculate connectivity and plot Power-264 correlation matrix\n-connectivity = connectome.ConnectivityMeasure(kind='correlation')\n-corr_matrix = connectivity.fit_transform([timeseries])[0]\n-np.fill_diagonal(corr_matrix, 0)\n-plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')\n-plt.colorbar()\n-plt.title('Power 264 Connectivity')\n-\n-# Plot the connectome\n-\n-plotting.plot_connectome(corr_matrix,\n- power_coords,\n- edge_threshold='99.8%',\n- node_size=20)\n-\n-\n-###############################################################################\n-# Extract and plot covariance and sparse covariance\n-\n-# Compute the sparse inverse covariance\n-from sklearn.covariance import GraphLassoCV\n-\n-estimator = GraphLassoCV()\n-estimator.fit(timeseries)\n-\n-# Display the covariance\n-plt.figure(figsize=(5, 5))\n-plt.imshow(estimator.covariance_, interpolation=\"nearest\",\n- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\n-plt.title('Covariance matrix')\n-\n-# display the corresponding graph\n-plotting.plot_connectome(estimator.covariance_,\n- power_coords,\n- title='Covariance connectome',\n- edge_threshold='99.8%',\n- node_size=20)\n-\n-# Display the sparse inverse covariance\n-plt.figure(figsize=(5, 5))\n-plt.imshow(estimator.precision_, interpolation=\"nearest\",\n- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\n-plt.title('Precision matrix')\n-\n-# And now display the corresponding graph\n-plotting.plot_connectome(estimator.precision_, power_coords,\n- title='Precision connectome',\n- edge_threshold=\"99.8%\",\n- node_size=20)\n-plotting.show()\n", "issue": "remove examples/03_connectivity/plot_power_connectome.py ?\n- Signal extraction from spheres placed on Power coordinates is already done in 
`examples/03_connectivity/plot_seed_based_connectome.py`\n- Sparse inverse covariance estimation is already explained in `examples/03_connectivity/plot_inverse_covariance_connectome.py` for MSDL atlas. For me, it doesn't really make a difference estimating it on timeseries extracted from probabilistic maps or spheric ROIs.\n\n", "before_files": [{"content": "\"\"\"\nExtracting signals and plotting a connectome for the Power-264 seed-region atlas\n================================================================================\n\nThis example shows how to extract signals from spherical seed-regions based\non the Power-264 atlas (Power, 2011) and estimating a connectome using sparse\ninverse covariance.\n\nPower, Jonathan D., et al. \"Functional network organization of the\nhuman brain.\" Neuron 72.4 (2011): 665-678.\n\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom nilearn import datasets, connectome, plotting, input_data\n\n\n###############################################################################\n# Atlas and dataset fetching\n\n# Fetch the coordinates of power atlas\npower = datasets.fetch_coords_power_2011()\npower_coords = np.vstack((\n power.rois['x'],\n power.rois['y'],\n power.rois['z'],\n)).T\n\n# Fetch the first subject of ADHD dataset\nadhd = datasets.fetch_adhd(n_subjects=1)\n\n\n###############################################################################\n# Masking: taking the signal in a sphere of radius 5mm around Power coords\n\nmasker = input_data.NiftiSpheresMasker(seeds=power_coords,\n smoothing_fwhm=4,\n radius=5.,\n standardize=True,\n detrend=True,\n low_pass=0.1,\n high_pass=0.01,\n t_r=2.5)\n\ntimeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])\n\n###############################################################################\n# Extract and plot correlation matrix\n\n# calculate connectivity and plot Power-264 correlation matrix\nconnectivity = connectome.ConnectivityMeasure(kind='correlation')\ncorr_matrix = connectivity.fit_transform([timeseries])[0]\nnp.fill_diagonal(corr_matrix, 0)\nplt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')\nplt.colorbar()\nplt.title('Power 264 Connectivity')\n\n# Plot the connectome\n\nplotting.plot_connectome(corr_matrix,\n power_coords,\n edge_threshold='99.8%',\n node_size=20)\n\n\n###############################################################################\n# Extract and plot covariance and sparse covariance\n\n# Compute the sparse inverse covariance\nfrom sklearn.covariance import GraphLassoCV\n\nestimator = GraphLassoCV()\nestimator.fit(timeseries)\n\n# Display the covariance\nplt.figure(figsize=(5, 5))\nplt.imshow(estimator.covariance_, interpolation=\"nearest\",\n vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\nplt.title('Covariance matrix')\n\n# display the corresponding graph\nplotting.plot_connectome(estimator.covariance_,\n power_coords,\n title='Covariance connectome',\n edge_threshold='99.8%',\n node_size=20)\n\n# Display the sparse inverse covariance\nplt.figure(figsize=(5, 5))\nplt.imshow(estimator.precision_, interpolation=\"nearest\",\n vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\nplt.title('Precision matrix')\n\n# And now display the corresponding graph\nplotting.plot_connectome(estimator.precision_, power_coords,\n title='Precision connectome',\n edge_threshold=\"99.8%\",\n node_size=20)\nplotting.show()\n", "path": "examples/03_connectivity/plot_power_connectome.py"}]} | 1,544 | 840 |
gh_patches_debug_4734 | rasdani/github-patches | git_diff | DDMAL__CantusDB-848 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Source.number_of_melodies is not updating properly
The `number_of_melodies` of a source should correspond to the number of chants in the source that contain a volpiano entry. When I check my test sources on the database that do not have any chants containing a volpiano, the `number_of_melodies` field matches the total number of chants. I suspect the `update_source_melody_count()` function in `signals.py` is not working as expected.
</issue>
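
For context, counting only chants that actually carry a melody means ignoring both `NULL` and empty-string `volpiano` values; a plain `filter(volpiano__isnull=False)` still counts rows whose volpiano is `""`. Below is a minimal Django ORM sketch of that intent, assuming the `Source`/`Chant` models and the `source.chant_set` relation used in `signals.py`; it is an illustration, not the project's shipped code:

```python
def count_melodies(source):
    """Sketch: number of chants in `source` that actually contain a volpiano melody."""
    return (
        source.chant_set
        .exclude(volpiano__isnull=True)  # chants with no volpiano value at all
        .exclude(volpiano__exact="")     # chants whose volpiano is an empty string
        .count()
    )
```

The golden diff below applies the same pair of `exclude()` calls inside `update_source_melody_count()`.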
<code>
[start of django/cantusdb_project/main_app/signals.py]
1 import operator
2 from functools import reduce
3
4 from django.contrib.postgres.search import SearchVector
5 from django.db import models
6 from django.db.models import Value
7 from django.db.models.signals import post_save, post_delete
8 from django.dispatch import receiver
9
10 import re
11
12 from main_app.models import Chant
13 from main_app.models import Sequence
14 from main_app.models import Feast
15 from main_app.models import Source
16
17
18 @receiver(post_save, sender=Chant)
19 def on_chant_save(instance, **kwargs):
20 update_source_chant_count(instance)
21 update_source_melody_count(instance)
22
23 update_chant_search_vector(instance)
24 update_volpiano_fields(instance)
25
26
27 @receiver(post_delete, sender=Chant)
28 def on_chant_delete(instance, **kwargs):
29 update_source_chant_count(instance)
30 update_source_melody_count(instance)
31
32
33 @receiver(post_save, sender=Sequence)
34 def on_sequence_save(instance, **kwargs):
35 update_source_chant_count(instance)
36
37
38 @receiver(post_delete, sender=Sequence)
39 def on_sequence_delete(instance, **kwargs):
40 update_source_chant_count(instance)
41
42
43 @receiver(post_save, sender=Feast)
44 def on_feast_save(instance, **kwargs):
45 update_prefix_field(instance)
46
47
48 def update_chant_search_vector(instance):
49 """When saving an instance of Chant, update its search vector field.
50
51 Called in on_chant_save()
52 """
53 index_components = instance.index_components()
54 pk = instance.pk
55 search_vectors = []
56
57 for weight, data in index_components.items():
58 search_vectors.append(
59 SearchVector(Value(data, output_field=models.TextField()), weight=weight)
60 )
61 instance.__class__.objects.filter(pk=pk).update(
62 search_vector=reduce(operator.add, search_vectors)
63 )
64
65
66 def update_source_chant_count(instance):
67 """When saving or deleting a Chant or Sequence, update its Source's number_of_chants field
68
69 Called in on_chant_save(), on_chant_delete(), on_sequence_save() and on_sequence_delete()
70 """
71
72 # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist
73 try:
74 source = instance.source
75 except Source.DoesNotExist:
76 source = None
77 if source is not None:
78 source.number_of_chants = source.chant_set.count() + source.sequence_set.count()
79 source.save()
80
81
82 def update_source_melody_count(instance):
83 """When saving or deleting a Chant, update its Source's number_of_melodies field
84
85 Called in on_chant_save() and on_chant_delete()
86 """
87
88 # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist
89 try:
90 source = instance.source
91 except Source.DoesNotExist:
92 source = None
93 if source is not None:
94 source.number_of_melodies = source.chant_set.filter(
95 volpiano__isnull=False
96 ).count()
97 source.save()
98
99
100 def update_volpiano_fields(instance):
101 """When saving a Chant, make sure the chant's volpiano_notes and volpiano_intervals are up-to-date
102
103 Called in on_chant_save()
104 """
105
106 def generate_volpiano_notes(volpiano):
107 """
108 Populate the ``volpiano_notes`` field of the ``Chant`` model
109
110 This field is used for melody search
111
112 Args:
113 volpiano (str): The content of ``chant.volpiano``
114
115 Returns:
116 str: Volpiano str with non-note chars and duplicate consecutive notes removed
117 """
118 # unwanted_chars are non-note chars, including the clefs, barlines, and accidentals etc.
119 # the `searchMelody.js` on old cantus makes no reference to the b-flat accidentals ("y", "i", "z")
120 # so put them in unwanted chars for now
121 unwanted_chars = [
122 "-",
123 "1",
124 "2",
125 "3",
126 "4",
127 "5",
128 "6",
129 "7",
130 "?",
131 ".",
132 " ",
133 "y",
134 "i",
135 "z",
136 ]
137 # convert all charactors to lower-case, upper-case letters stand for liquescent of the same pitch
138 volpiano_lower = volpiano.lower()
139 # `)` stands for the lowest `g` note liquescent in volpiano, its 'lower case' is `9`
140 volpiano_notes = volpiano_lower.replace(")", "9")
141 # remove none-note charactors
142 for unwanted_char in unwanted_chars:
143 volpiano_notes = volpiano_notes.replace(unwanted_char, "")
144 # remove duplicate consecutive chars
145 volpiano_notes = re.sub(r"(.)\1+", r"\1", volpiano_notes)
146 return volpiano_notes
147
148 def generate_volpiano_intervals(volpiano_notes):
149 """
150 Populate the ``volpiano_intervals`` field of the ``Chant`` model
151
152 This field is used for melody search when searching for transpositions
153
154 Args:
155 volpiano_notes (str): The content of ``chant.volpiano_notes``,
156 populated by the ``generate_volpiano_notes`` function
157
158 Returns:
159 str: A str of digits, recording the intervals between adjacent notes
160 """
161 # replace '9' (the note G) with the char corresponding to (ASCII(a) - 1), because 'a' denotes the note A
162 volpiano_notes = volpiano_notes.replace("9", chr(ord("a") - 1))
163 # we model the interval between notes using the difference between the ASCII codes of corresponding letters
164 # the letter for the note B is "j" (106), note A is "h" (104), the letter "i" (105) is skipped
165 # move all notes above A down by one letter
166 volpiano_notes = list(volpiano_notes)
167 for j, note in enumerate(volpiano_notes):
168 if ord(note) >= 106:
169 volpiano_notes[j] = chr(ord(note) - 1)
170
171 # `intervals` records the difference between two adjacent notes.
172 # Note that intervals are encoded by counting the number of scale
173 # steps between adjacent notes: an ascending second is thus encoded
174 # as "1"; a descending third is encoded "-2", and so on.
175 intervals = []
176 for j in range(1, len(volpiano_notes)):
177 intervals.append(ord(volpiano_notes[j]) - ord(volpiano_notes[j - 1]))
178 # convert `intervals` to str
179 volpiano_intervals = "".join([str(interval) for interval in intervals])
180 return volpiano_intervals
181
182 if instance.volpiano is None:
183 return
184
185 volpiano_notes = generate_volpiano_notes(instance.volpiano)
186 volpiano_intervals = generate_volpiano_intervals(volpiano_notes)
187
188 Chant.objects.filter(id=instance.id).update(
189 volpiano_notes=volpiano_notes,
190 volpiano_intervals=volpiano_intervals,
191 )
192
193
194 def update_prefix_field(instance):
195 pk = instance.pk
196
197 if instance.feast_code:
198 prefix = str(instance.feast_code)[0:2]
199 instance.__class__.objects.filter(pk=pk).update(prefix=prefix)
200 else: # feast_code is None, ""
201 instance.__class__.objects.filter(pk=pk).update(prefix="")
202
[end of django/cantusdb_project/main_app/signals.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/signals.py b/django/cantusdb_project/main_app/signals.py
--- a/django/cantusdb_project/main_app/signals.py
+++ b/django/cantusdb_project/main_app/signals.py
@@ -91,9 +91,11 @@
except Source.DoesNotExist:
source = None
if source is not None:
- source.number_of_melodies = source.chant_set.filter(
- volpiano__isnull=False
- ).count()
+ source.number_of_melodies = (
+ source.chant_set.exclude(volpiano__isnull=True)
+ .exclude(volpiano__exact="")
+ .count()
+ )
source.save()
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/signals.py b/django/cantusdb_project/main_app/signals.py\n--- a/django/cantusdb_project/main_app/signals.py\n+++ b/django/cantusdb_project/main_app/signals.py\n@@ -91,9 +91,11 @@\n except Source.DoesNotExist:\n source = None\n if source is not None:\n- source.number_of_melodies = source.chant_set.filter(\n- volpiano__isnull=False\n- ).count()\n+ source.number_of_melodies = (\n+ source.chant_set.exclude(volpiano__isnull=True)\n+ .exclude(volpiano__exact=\"\")\n+ .count()\n+ )\n source.save()\n", "issue": "Source.number_of_melodies is not updating properly\nThe `number_of_melodies` of a source should correspond to the number of chants in the source that contain a volpiano entry. When I check my test sources on the database that do not have any chants that contain a volpaino, the `number_of_melodies` field matches the total number of chants. I suspect the `update_source_melody_count()` function in `signals.py` is not working as expected.\n", "before_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.contrib.postgres.search import SearchVector\nfrom django.db import models\nfrom django.db.models import Value\nfrom django.db.models.signals import post_save, post_delete\nfrom django.dispatch import receiver\n\nimport re\n\nfrom main_app.models import Chant\nfrom main_app.models import Sequence\nfrom main_app.models import Feast\nfrom main_app.models import Source\n\n\n@receiver(post_save, sender=Chant)\ndef on_chant_save(instance, **kwargs):\n update_source_chant_count(instance)\n update_source_melody_count(instance)\n\n update_chant_search_vector(instance)\n update_volpiano_fields(instance)\n\n\n@receiver(post_delete, sender=Chant)\ndef on_chant_delete(instance, **kwargs):\n update_source_chant_count(instance)\n update_source_melody_count(instance)\n\n\n@receiver(post_save, sender=Sequence)\ndef on_sequence_save(instance, **kwargs):\n update_source_chant_count(instance)\n\n\n@receiver(post_delete, sender=Sequence)\ndef on_sequence_delete(instance, **kwargs):\n update_source_chant_count(instance)\n\n\n@receiver(post_save, sender=Feast)\ndef on_feast_save(instance, **kwargs):\n update_prefix_field(instance)\n\n\ndef update_chant_search_vector(instance):\n \"\"\"When saving an instance of Chant, update its search vector field.\n\n Called in on_chant_save()\n \"\"\"\n index_components = instance.index_components()\n pk = instance.pk\n search_vectors = []\n\n for weight, data in index_components.items():\n search_vectors.append(\n SearchVector(Value(data, output_field=models.TextField()), weight=weight)\n )\n instance.__class__.objects.filter(pk=pk).update(\n search_vector=reduce(operator.add, search_vectors)\n )\n\n\ndef update_source_chant_count(instance):\n \"\"\"When saving or deleting a Chant or Sequence, update its Source's number_of_chants field\n\n Called in on_chant_save(), on_chant_delete(), on_sequence_save() and on_sequence_delete()\n \"\"\"\n\n # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist\n try:\n source = instance.source\n except Source.DoesNotExist:\n source = None\n if source is not None:\n source.number_of_chants = source.chant_set.count() + source.sequence_set.count()\n source.save()\n\n\ndef update_source_melody_count(instance):\n \"\"\"When saving or deleting a Chant, update its Source's number_of_melodies field\n\n Called in on_chant_save() and on_chant_delete()\n \"\"\"\n\n # When a source is deleted (which in 
turn calls on_chant_delete() on all of its chants) instance.source does not exist\n try:\n source = instance.source\n except Source.DoesNotExist:\n source = None\n if source is not None:\n source.number_of_melodies = source.chant_set.filter(\n volpiano__isnull=False\n ).count()\n source.save()\n\n\ndef update_volpiano_fields(instance):\n \"\"\"When saving a Chant, make sure the chant's volpiano_notes and volpiano_intervals are up-to-date\n\n Called in on_chant_save()\n \"\"\"\n\n def generate_volpiano_notes(volpiano):\n \"\"\"\n Populate the ``volpiano_notes`` field of the ``Chant`` model\n\n This field is used for melody search\n\n Args:\n volpiano (str): The content of ``chant.volpiano``\n\n Returns:\n str: Volpiano str with non-note chars and duplicate consecutive notes removed\n \"\"\"\n # unwanted_chars are non-note chars, including the clefs, barlines, and accidentals etc.\n # the `searchMelody.js` on old cantus makes no reference to the b-flat accidentals (\"y\", \"i\", \"z\")\n # so put them in unwanted chars for now\n unwanted_chars = [\n \"-\",\n \"1\",\n \"2\",\n \"3\",\n \"4\",\n \"5\",\n \"6\",\n \"7\",\n \"?\",\n \".\",\n \" \",\n \"y\",\n \"i\",\n \"z\",\n ]\n # convert all charactors to lower-case, upper-case letters stand for liquescent of the same pitch\n volpiano_lower = volpiano.lower()\n # `)` stands for the lowest `g` note liquescent in volpiano, its 'lower case' is `9`\n volpiano_notes = volpiano_lower.replace(\")\", \"9\")\n # remove none-note charactors\n for unwanted_char in unwanted_chars:\n volpiano_notes = volpiano_notes.replace(unwanted_char, \"\")\n # remove duplicate consecutive chars\n volpiano_notes = re.sub(r\"(.)\\1+\", r\"\\1\", volpiano_notes)\n return volpiano_notes\n\n def generate_volpiano_intervals(volpiano_notes):\n \"\"\"\n Populate the ``volpiano_intervals`` field of the ``Chant`` model\n\n This field is used for melody search when searching for transpositions\n\n Args:\n volpiano_notes (str): The content of ``chant.volpiano_notes``,\n populated by the ``generate_volpiano_notes`` function\n\n Returns:\n str: A str of digits, recording the intervals between adjacent notes\n \"\"\"\n # replace '9' (the note G) with the char corresponding to (ASCII(a) - 1), because 'a' denotes the note A\n volpiano_notes = volpiano_notes.replace(\"9\", chr(ord(\"a\") - 1))\n # we model the interval between notes using the difference between the ASCII codes of corresponding letters\n # the letter for the note B is \"j\" (106), note A is \"h\" (104), the letter \"i\" (105) is skipped\n # move all notes above A down by one letter\n volpiano_notes = list(volpiano_notes)\n for j, note in enumerate(volpiano_notes):\n if ord(note) >= 106:\n volpiano_notes[j] = chr(ord(note) - 1)\n\n # `intervals` records the difference between two adjacent notes.\n # Note that intervals are encoded by counting the number of scale\n # steps between adjacent notes: an ascending second is thus encoded\n # as \"1\"; a descending third is encoded \"-2\", and so on.\n intervals = []\n for j in range(1, len(volpiano_notes)):\n intervals.append(ord(volpiano_notes[j]) - ord(volpiano_notes[j - 1]))\n # convert `intervals` to str\n volpiano_intervals = \"\".join([str(interval) for interval in intervals])\n return volpiano_intervals\n\n if instance.volpiano is None:\n return\n\n volpiano_notes = generate_volpiano_notes(instance.volpiano)\n volpiano_intervals = generate_volpiano_intervals(volpiano_notes)\n\n Chant.objects.filter(id=instance.id).update(\n volpiano_notes=volpiano_notes,\n 
volpiano_intervals=volpiano_intervals,\n )\n\n\ndef update_prefix_field(instance):\n pk = instance.pk\n\n if instance.feast_code:\n prefix = str(instance.feast_code)[0:2]\n instance.__class__.objects.filter(pk=pk).update(prefix=prefix)\n else: # feast_code is None, \"\"\n instance.__class__.objects.filter(pk=pk).update(prefix=\"\")\n", "path": "django/cantusdb_project/main_app/signals.py"}]} | 2,774 | 167 |
gh_patches_debug_30188 | rasdani/github-patches | git_diff | internetarchive__openlibrary-8966 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support different seeds for random.hourly sort
These carousels are all sorted by random.hourly, but we want them to have a different random subset!

### Proposal & Constraints
Expand `random.hourly` sorting to support a custom seed like `random`
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
### Stakeholders
@RayBB
</issue>
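
To make the request concrete: a carousel could pass something like `sort=random.hourly_trending`, and each distinct seed would get its own hourly-stable ordering. A rough Python sketch of that mapping follows; the `trending` seed and the `random_12345` field name are illustrative assumptions, since the real solr sort string would come from the scheme's `sorts` table for `random.hourly`:

```python
def seeded_hourly_sort(user_sort: str, hourly_solr_sort: str = "random_12345 asc") -> str:
    """Sketch: turn 'random.hourly_<seed>' into a per-seed solr random sort.

    `hourly_solr_sort` stands in for whatever sorts['random.hourly'] resolves to,
    i.e. a random_* field whose number changes every hour.
    """
    _sort_type, seed = user_sort.split("_", 1)     # 'random.hourly', 'trending'
    field, order = hourly_solr_sort.split(" ", 1)  # 'random_12345', 'asc'
    return f"{field}_{seed} {order}"

print(seeded_hourly_sort("random.hourly_trending"))  # random_12345_trending asc
```

Two carousels built from overlapping collections could then ask for different seeds and get different, but still hourly-refreshed, orderings.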
<code>
[start of openlibrary/plugins/worksearch/schemes/__init__.py]
1 import logging
2 from collections.abc import Callable
3
4 import luqum.tree
5 from luqum.exceptions import ParseError
6 from openlibrary.solr.query_utils import (
7 escape_unknown_fields,
8 fully_escape_query,
9 luqum_parser,
10 )
11
12 logger = logging.getLogger("openlibrary.worksearch")
13
14
15 class SearchScheme:
16 # Set of queries that define the universe of this scheme
17 universe: list[str]
18 # All actual solr fields that can be in a user query
19 all_fields: set[str]
20 # These fields are fetched for facets and can also be url params
21 facet_fields: set[str]
22 # Mapping of user-only fields to solr fields
23 field_name_map: dict[str, str]
24 # Mapping of user sort to solr sort
25 sorts: dict[str, str | Callable[[], str]]
26 # Default
27 default_fetched_fields: set[str]
28 # Fields that should be rewritten
29 facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]
30
31 def is_search_field(self, field: str):
32 return field in self.all_fields or field in self.field_name_map
33
34 def process_user_sort(self, user_sort: str) -> str:
35 """
36 Convert a user-provided sort to a solr sort
37
38 >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme
39 >>> scheme = WorkSearchScheme()
40 >>> scheme.process_user_sort('editions')
41 'edition_count desc'
42 >>> scheme.process_user_sort('editions, new')
43 'edition_count desc,first_publish_year desc'
44 >>> scheme.process_user_sort('random')
45 'random_1 asc'
46 >>> scheme.process_user_sort('random_custom_seed')
47 'random_custom_seed asc'
48 >>> scheme.process_user_sort('random_custom_seed desc')
49 'random_custom_seed desc'
50 >>> scheme.process_user_sort('random_custom_seed asc')
51 'random_custom_seed asc'
52 """
53
54 def process_individual_sort(sort: str):
55 if sort.startswith('random_'):
56 # Allow custom randoms; so anything random_* is allowed
57 return sort if ' ' in sort else f'{sort} asc'
58 else:
59 solr_sort = self.sorts[sort]
60 return solr_sort() if callable(solr_sort) else solr_sort
61
62 return ','.join(
63 process_individual_sort(s.strip()) for s in user_sort.split(',')
64 )
65
66 def process_user_query(self, q_param: str) -> str:
67 if q_param == '*:*':
68 # This is a special solr syntax; don't process
69 return q_param
70
71 try:
72 q_param = escape_unknown_fields(
73 (
74 # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,
75 # let's not expose that and escape all '/'. Otherwise
76 # `key:/works/OL1W` is interpreted as a regex.
77 q_param.strip()
78 .replace('/', '\\/')
79 # Also escape unexposed lucene features
80 .replace('?', '\\?')
81 .replace('~', '\\~')
82 ),
83 self.is_search_field,
84 lower=True,
85 )
86 q_tree = luqum_parser(q_param)
87 except ParseError:
88 # This isn't a syntactically valid lucene query
89 logger.warning("Invalid lucene query", exc_info=True)
90 # Escape everything we can
91 q_tree = luqum_parser(fully_escape_query(q_param))
92
93 q_tree = self.transform_user_query(q_param, q_tree)
94 return str(q_tree)
95
96 def transform_user_query(
97 self,
98 user_query: str,
99 q_tree: luqum.tree.Item,
100 ) -> luqum.tree.Item:
101 return q_tree
102
103 def build_q_from_params(self, params: dict) -> str | None:
104 return None
105
106 def q_to_solr_params(
107 self,
108 q: str,
109 solr_fields: set[str],
110 cur_solr_params: list[tuple[str, str]],
111 ) -> list[tuple[str, str]]:
112 return [('q', q)]
113
[end of openlibrary/plugins/worksearch/schemes/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openlibrary/plugins/worksearch/schemes/__init__.py b/openlibrary/plugins/worksearch/schemes/__init__.py
--- a/openlibrary/plugins/worksearch/schemes/__init__.py
+++ b/openlibrary/plugins/worksearch/schemes/__init__.py
@@ -44,17 +44,27 @@
>>> scheme.process_user_sort('random')
'random_1 asc'
>>> scheme.process_user_sort('random_custom_seed')
- 'random_custom_seed asc'
+ 'random_1_custom_seed asc'
>>> scheme.process_user_sort('random_custom_seed desc')
- 'random_custom_seed desc'
+ 'random_1_custom_seed desc'
>>> scheme.process_user_sort('random_custom_seed asc')
- 'random_custom_seed asc'
+ 'random_1_custom_seed asc'
"""
- def process_individual_sort(sort: str):
- if sort.startswith('random_'):
+ def process_individual_sort(sort: str) -> str:
+ if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):
# Allow custom randoms; so anything random_* is allowed
- return sort if ' ' in sort else f'{sort} asc'
+ # Also Allow custom time randoms to allow carousels with overlapping
+ # books to have a fresh ordering when on the same collection
+ sort_order: str | None = None
+ if ' ' in sort:
+ sort, sort_order = sort.split(' ', 1)
+ random_type, random_seed = sort.split('_', 1)
+ solr_sort = self.sorts[random_type]
+ solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort
+ solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)
+ sort_order = sort_order or solr_sort_order
+ return f'{solr_sort_field}_{random_seed} {sort_order}'
else:
solr_sort = self.sorts[sort]
return solr_sort() if callable(solr_sort) else solr_sort
| {"golden_diff": "diff --git a/openlibrary/plugins/worksearch/schemes/__init__.py b/openlibrary/plugins/worksearch/schemes/__init__.py\n--- a/openlibrary/plugins/worksearch/schemes/__init__.py\n+++ b/openlibrary/plugins/worksearch/schemes/__init__.py\n@@ -44,17 +44,27 @@\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n- 'random_custom_seed asc'\n+ 'random_1_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n- 'random_custom_seed desc'\n+ 'random_1_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n- 'random_custom_seed asc'\n+ 'random_1_custom_seed asc'\n \"\"\"\n \n- def process_individual_sort(sort: str):\n- if sort.startswith('random_'):\n+ def process_individual_sort(sort: str) -> str:\n+ if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):\n # Allow custom randoms; so anything random_* is allowed\n- return sort if ' ' in sort else f'{sort} asc'\n+ # Also Allow custom time randoms to allow carousels with overlapping\n+ # books to have a fresh ordering when on the same collection\n+ sort_order: str | None = None\n+ if ' ' in sort:\n+ sort, sort_order = sort.split(' ', 1)\n+ random_type, random_seed = sort.split('_', 1)\n+ solr_sort = self.sorts[random_type]\n+ solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort\n+ solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)\n+ sort_order = sort_order or solr_sort_order\n+ return f'{solr_sort_field}_{random_seed} {sort_order}'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n", "issue": "Support different seeds for random.hourly sort\nThese carousels are all sorted by random.hourly, but we want them to have a different random subset!\r\n\r\n\r\n\r\n\r\n### Proposal & Constraints\r\nExpand `random.hourly` sorting to support a custom seed like `random`\r\n\r\n### Additional context\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\n\r\n### Stakeholders\r\n@RayBB \n", "before_files": [{"content": "import logging\nfrom collections.abc import Callable\n\nimport luqum.tree\nfrom luqum.exceptions import ParseError\nfrom openlibrary.solr.query_utils import (\n escape_unknown_fields,\n fully_escape_query,\n luqum_parser,\n)\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\n\nclass SearchScheme:\n # Set of queries that define the universe of this scheme\n universe: list[str]\n # All actual solr fields that can be in a user query\n all_fields: set[str]\n # These fields are fetched for facets and can also be url params\n facet_fields: set[str]\n # Mapping of user-only fields to solr fields\n field_name_map: dict[str, str]\n # Mapping of user sort to solr sort\n sorts: dict[str, str | Callable[[], str]]\n # Default\n default_fetched_fields: set[str]\n # Fields that should be rewritten\n facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]\n\n def is_search_field(self, field: str):\n return field in self.all_fields or field in self.field_name_map\n\n def process_user_sort(self, user_sort: str) -> str:\n \"\"\"\n Convert a user-provided sort to a solr sort\n\n >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme\n >>> scheme = WorkSearchScheme()\n >>> scheme.process_user_sort('editions')\n 'edition_count desc'\n >>> scheme.process_user_sort('editions, new')\n 'edition_count desc,first_publish_year desc'\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n 'random_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n 'random_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n 'random_custom_seed asc'\n \"\"\"\n\n def process_individual_sort(sort: str):\n if sort.startswith('random_'):\n # Allow custom randoms; so anything random_* is allowed\n return sort if ' ' in sort else f'{sort} asc'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n\n return ','.join(\n process_individual_sort(s.strip()) for s in user_sort.split(',')\n )\n\n def process_user_query(self, q_param: str) -> str:\n if q_param == '*:*':\n # This is a special solr syntax; don't process\n return q_param\n\n try:\n q_param = escape_unknown_fields(\n (\n # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,\n # let's not expose that and escape all '/'. Otherwise\n # `key:/works/OL1W` is interpreted as a regex.\n q_param.strip()\n .replace('/', '\\\\/')\n # Also escape unexposed lucene features\n .replace('?', '\\\\?')\n .replace('~', '\\\\~')\n ),\n self.is_search_field,\n lower=True,\n )\n q_tree = luqum_parser(q_param)\n except ParseError:\n # This isn't a syntactically valid lucene query\n logger.warning(\"Invalid lucene query\", exc_info=True)\n # Escape everything we can\n q_tree = luqum_parser(fully_escape_query(q_param))\n\n q_tree = self.transform_user_query(q_param, q_tree)\n return str(q_tree)\n\n def transform_user_query(\n self,\n user_query: str,\n q_tree: luqum.tree.Item,\n ) -> luqum.tree.Item:\n return q_tree\n\n def build_q_from_params(self, params: dict) -> str | None:\n return None\n\n def q_to_solr_params(\n self,\n q: str,\n solr_fields: set[str],\n cur_solr_params: list[tuple[str, str]],\n ) -> list[tuple[str, str]]:\n return [('q', q)]\n", "path": "openlibrary/plugins/worksearch/schemes/__init__.py"}]} | 1,797 | 454 |
gh_patches_debug_31570 | rasdani/github-patches | git_diff | bridgecrewio__checkov-854 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature Request: another flag to display failed checks
**Is your feature request related to a problem? Please describe.**
It seems that -o github_failed_only only returns failed checks, but in plain text. If I use -o json then I get all checks (failed and successful).
**Describe the solution you'd like**
Apart from the json and github_failed_only parameters, it might be good to have another flag to display failed-only reports. It could be used with the json output, for example to see failed checks in JSON format:
```
$ checkov --display-failed-checks -o json -d .
```
</issue>
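
A sketch of what a failed-only JSON payload could look like is below; `failed_only` and `report_to_dict` are made-up names for illustration, not checkov's actual API (checkov's report object exposes `passed_checks`, `failed_checks`, `skipped_checks` and `parsing_errors`, as shown in `report.py` further down):

```python
def report_to_dict(report, failed_only: bool = False) -> dict:
    """Sketch: JSON-ready dict; with failed_only=True keep only the failed checks."""
    results = {"failed_checks": [check.__dict__ for check in report.failed_checks]}
    if not failed_only:
        results["passed_checks"] = [check.__dict__ for check in report.passed_checks]
        results["skipped_checks"] = [check.__dict__ for check in report.skipped_checks]
        results["parsing_errors"] = list(report.parsing_errors)
    return {
        "check_type": report.check_type,
        "results": results,
        "summary": report.get_summary(),
    }
```

The golden diff below reaches a similar result by threading the existing `--quiet` flag into `Report.get_dict(is_quiet=...)` so the JSON output lists only failed checks.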
<code>
[start of checkov/common/output/report.py]
1 import json
2 from collections import defaultdict
3
4 from colorama import init
5 from junit_xml import TestCase, TestSuite
6 from termcolor import colored
7
8 from checkov.common.models.enums import CheckResult
9 from checkov.version import version
10 from tabulate import tabulate
11
12 init(autoreset=True)
13
14
15 class Report:
16
17 def __init__(self, check_type):
18 self.check_type = check_type
19 self.passed_checks = []
20 self.failed_checks = []
21 self.skipped_checks = []
22 self.parsing_errors = []
23
24 def add_parsing_errors(self, errors):
25 for file in errors:
26 self.add_parsing_error(file)
27
28 def add_parsing_error(self, file):
29 if file:
30 self.parsing_errors.append(file)
31
32 def add_record(self, record):
33 if record.check_result['result'] == CheckResult.PASSED:
34 self.passed_checks.append(record)
35 if record.check_result['result'] == CheckResult.FAILED:
36 self.failed_checks.append(record)
37 if record.check_result['result'] == CheckResult.SKIPPED:
38 self.skipped_checks.append(record)
39
40 def get_summary(self):
41 return {
42 "passed": len(self.passed_checks),
43 "failed": len(self.failed_checks),
44 "skipped": len(self.skipped_checks),
45 "parsing_errors": len(self.parsing_errors),
46 "checkov_version": version
47 }
48
49 def get_json(self):
50 return json.dumps(self.get_dict(), indent=4)
51
52 def get_dict(self):
53 return {
54 "check_type": self.check_type,
55 "results": {
56 "passed_checks": [check.__dict__ for check in self.passed_checks],
57 "failed_checks": [check.__dict__ for check in self.failed_checks],
58 "skipped_checks": [check.__dict__ for check in self.skipped_checks],
59 "parsing_errors": list(self.parsing_errors)
60 },
61 "summary": self.get_summary()
62 }
63
64 def get_exit_code(self, soft_fail):
65 if soft_fail:
66 return 0
67 elif len(self.failed_checks) > 0:
68 return 1
69 return 0
70
71 def is_empty(self):
72 return len(self.passed_checks) + len(self.failed_checks) + len(self.skipped_checks) + len(self.parsing_errors) == 0
73
74 def print_console(self, is_quiet=False, is_compact=False):
75 summary = self.get_summary()
76 print(colored(f"{self.check_type} scan results:", "blue"))
77 if self.parsing_errors:
78 message = "\nPassed checks: {}, Failed checks: {}, Skipped checks: {}, Parsing errors: {}\n".format(
79 summary["passed"], summary["failed"], summary["skipped"], summary["parsing_errors"])
80 else:
81 message = "\nPassed checks: {}, Failed checks: {}, Skipped checks: {}\n".format(
82 summary["passed"], summary["failed"], summary["skipped"])
83 print(colored(message, "cyan"))
84 if not is_quiet:
85 for record in self.passed_checks:
86 print(record.to_string(compact=is_compact))
87 for record in self.failed_checks:
88 print(record.to_string(compact=is_compact))
89 if not is_quiet:
90 for record in self.skipped_checks:
91 print(record.to_string(compact=is_compact))
92
93 if not is_quiet:
94 for file in self.parsing_errors:
95 Report._print_parsing_error_console(file)
96
97 @staticmethod
98 def _print_parsing_error_console(file):
99 print(colored(f'Error parsing file {file}', 'red'))
100
101 def print_junit_xml(self):
102 ts = self.get_test_suites()
103 print(TestSuite.to_xml_string(ts))
104
105 def print_failed_github_md(self):
106 result = []
107 for record in self.failed_checks:
108 result.append([record.check_id, record.file_path ,record.resource, record.check_name, record.guideline])
109 print(tabulate(result, headers=["check_id", "file" ,"resource", "check_name", "guideline"], tablefmt="github", showindex=True))
110 print("\n\n---\n\n")
111
112 def get_test_suites(self):
113 test_cases = defaultdict(list)
114 test_suites = []
115 records = self.passed_checks + self.failed_checks + self.skipped_checks
116 for record in records:
117 check_name = record.check_name
118
119 test_name = "{} {} {}".format(self.check_type, check_name, record.resource)
120 test_case = TestCase(name=test_name, file=record.file_path, classname=record.check_class)
121 if record.check_result['result'] == CheckResult.FAILED:
122 test_case.add_failure_info(
123 "Resource \"{}\" failed in check \"{}\"".format(record.resource, check_name))
124 if record.check_result['result'] == CheckResult.SKIPPED:
125 test_case.add_skipped_info(
126 "Resource \"{}\" skipped in check \"{}\"\n Suppress comment: {}".format(record.resource, check_name,
127 record.check_result[
128 'suppress_comment']))
129 test_cases[check_name].append(test_case)
130 for key in test_cases.keys():
131 test_suites.append(
132 TestSuite(name=key, test_cases=test_cases[key], package=test_cases[key][0].classname))
133 return test_suites
134
135 def print_json(self):
136 print(self.get_json())
137
138
[end of checkov/common/output/report.py]
[start of checkov/common/runners/runner_registry.py]
1 import json
2 import logging
3 from abc import abstractmethod
4
5 from checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry
6 from checkov.common.output.report import Report
7
8 OUTPUT_CHOICES = ['cli', 'json', 'junitxml', 'github_failed_only']
9
10 from checkov.common.bridgecrew.platform_integration import BcPlatformIntegration
11
12
13 class RunnerRegistry(object):
14 runners = []
15 scan_reports = []
16 banner = ""
17
18 def __init__(self, banner, runner_filter, *runners):
19 self.logger = logging.getLogger(__name__)
20 self.runner_filter = runner_filter
21 self.runners = runners
22 self.banner = banner
23 self.scan_reports = []
24 self.filter_runner_framework()
25 self.bc_platform = BcPlatformIntegration()
26
27 @abstractmethod
28 def extract_entity_details(self, entity):
29 raise NotImplementedError()
30
31 def run(self, root_folder=None, external_checks_dir=None, files=None, guidelines=None, collect_skip_comments=True, bc_integration=None):
32 for runner in self.runners:
33 integration_feature_registry.run_pre_scan()
34 scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
35 runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)
36 integration_feature_registry.run_post_scan(scan_report)
37 if guidelines:
38 RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)
39 self.scan_reports.append(scan_report)
40 return self.scan_reports
41
42 def print_reports(self, scan_reports, args, url=None):
43 if args.output == 'cli':
44 print(f"{self.banner}\n")
45 exit_codes = []
46 report_jsons = []
47 junit_reports = []
48 for report in scan_reports:
49 if not report.is_empty():
50 if args.output == "json":
51 report_jsons.append(report.get_dict())
52 elif args.output == "junitxml":
53 junit_reports.append(report)
54 # report.print_junit_xml()
55 elif args.output == 'github_failed_only':
56 report.print_failed_github_md()
57 else:
58 report.print_console(is_quiet=args.quiet, is_compact=args.compact)
59 if url:
60 print("More details: {}".format(url))
61 exit_codes.append(report.get_exit_code(args.soft_fail))
62 if args.output == "junitxml":
63 if len(junit_reports) == 1:
64 junit_reports[0].print_junit_xml()
65 else:
66 master_report = Report(None)
67 for report in junit_reports:
68 master_report.skipped_checks += report.skipped_checks
69 master_report.passed_checks += report.passed_checks
70 master_report.failed_checks += report.failed_checks
71 master_report.print_junit_xml()
72 if args.output == "json":
73 if len(report_jsons) == 1:
74 print(json.dumps(report_jsons[0], indent=4))
75 else:
76 print(json.dumps(report_jsons, indent=4))
77 if args.output == "cli":
78 self.bc_platform.get_report_to_platform(args,scan_reports)
79
80 exit_code = 1 if 1 in exit_codes else 0
81 exit(exit_code)
82
83 def filter_runner_framework(self):
84 if not self.runner_filter:
85 return
86 if self.runner_filter.framework == 'all':
87 return
88 for runner in self.runners:
89 if runner.check_type == self.runner_filter.framework:
90 self.runners = [runner]
91 return
92
93 @staticmethod
94 def enrich_report_with_guidelines(scan_report, guidelines):
95 for record in scan_report.failed_checks + scan_report.passed_checks + scan_report.skipped_checks:
96 if record.check_id in guidelines:
97 record.set_guideline(guidelines[record.check_id])
98
[end of checkov/common/runners/runner_registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/common/output/report.py b/checkov/common/output/report.py
--- a/checkov/common/output/report.py
+++ b/checkov/common/output/report.py
@@ -49,17 +49,26 @@
def get_json(self):
return json.dumps(self.get_dict(), indent=4)
- def get_dict(self):
- return {
- "check_type": self.check_type,
- "results": {
- "passed_checks": [check.__dict__ for check in self.passed_checks],
- "failed_checks": [check.__dict__ for check in self.failed_checks],
- "skipped_checks": [check.__dict__ for check in self.skipped_checks],
- "parsing_errors": list(self.parsing_errors)
- },
- "summary": self.get_summary()
+ def get_dict(self, is_quiet=False):
+ if is_quiet:
+ return {
+ "check_type": self.check_type,
+ "results": {
+ "failed_checks": [check.__dict__ for check in self.failed_checks]
+ },
+ "summary": self.get_summary()
}
+ else:
+ return {
+ "check_type": self.check_type,
+ "results": {
+ "passed_checks": [check.__dict__ for check in self.passed_checks],
+ "failed_checks": [check.__dict__ for check in self.failed_checks],
+ "skipped_checks": [check.__dict__ for check in self.skipped_checks],
+ "parsing_errors": list(self.parsing_errors)
+ },
+ "summary": self.get_summary()
+ }
def get_exit_code(self, soft_fail):
if soft_fail:
diff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py
--- a/checkov/common/runners/runner_registry.py
+++ b/checkov/common/runners/runner_registry.py
@@ -48,7 +48,7 @@
for report in scan_reports:
if not report.is_empty():
if args.output == "json":
- report_jsons.append(report.get_dict())
+ report_jsons.append(report.get_dict(is_quiet=args.quiet))
elif args.output == "junitxml":
junit_reports.append(report)
# report.print_junit_xml()
| {"golden_diff": "diff --git a/checkov/common/output/report.py b/checkov/common/output/report.py\n--- a/checkov/common/output/report.py\n+++ b/checkov/common/output/report.py\n@@ -49,17 +49,26 @@\n def get_json(self):\n return json.dumps(self.get_dict(), indent=4)\n \n- def get_dict(self):\n- return {\n- \"check_type\": self.check_type,\n- \"results\": {\n- \"passed_checks\": [check.__dict__ for check in self.passed_checks],\n- \"failed_checks\": [check.__dict__ for check in self.failed_checks],\n- \"skipped_checks\": [check.__dict__ for check in self.skipped_checks],\n- \"parsing_errors\": list(self.parsing_errors)\n- },\n- \"summary\": self.get_summary()\n+ def get_dict(self, is_quiet=False):\n+ if is_quiet:\n+ return {\n+ \"check_type\": self.check_type,\n+ \"results\": {\n+ \"failed_checks\": [check.__dict__ for check in self.failed_checks]\n+ },\n+ \"summary\": self.get_summary()\n }\n+ else: \n+ return {\n+ \"check_type\": self.check_type,\n+ \"results\": {\n+ \"passed_checks\": [check.__dict__ for check in self.passed_checks],\n+ \"failed_checks\": [check.__dict__ for check in self.failed_checks],\n+ \"skipped_checks\": [check.__dict__ for check in self.skipped_checks],\n+ \"parsing_errors\": list(self.parsing_errors)\n+ },\n+ \"summary\": self.get_summary()\n+ }\n \n def get_exit_code(self, soft_fail):\n if soft_fail:\ndiff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py\n--- a/checkov/common/runners/runner_registry.py\n+++ b/checkov/common/runners/runner_registry.py\n@@ -48,7 +48,7 @@\n for report in scan_reports:\n if not report.is_empty():\n if args.output == \"json\":\n- report_jsons.append(report.get_dict())\n+ report_jsons.append(report.get_dict(is_quiet=args.quiet))\n elif args.output == \"junitxml\":\n junit_reports.append(report)\n # report.print_junit_xml()\n", "issue": "Feature Request: another flag to display failed checks\n**Is your feature request related to a problem? Please describe.**\r\n\r\nit seems that -o github_failed_only only returns failed but with plain text. if I use -o json then I get all checks(failed and success). \r\n**Describe the solution you'd like**\r\n\r\nApart from json and github_failed_only parameters. It might be good to have to another flag to display failed only reports. It can be used with json output. 
Something like to see failed checks in json format.\r\n```\r\n$ checkov --display-failed-checks -o json -d .\r\n```\r\n\r\n\n", "before_files": [{"content": "import json\nfrom collections import defaultdict\n\nfrom colorama import init\nfrom junit_xml import TestCase, TestSuite\nfrom termcolor import colored\n\nfrom checkov.common.models.enums import CheckResult\nfrom checkov.version import version\nfrom tabulate import tabulate\n\ninit(autoreset=True)\n\n\nclass Report:\n\n def __init__(self, check_type):\n self.check_type = check_type\n self.passed_checks = []\n self.failed_checks = []\n self.skipped_checks = []\n self.parsing_errors = []\n\n def add_parsing_errors(self, errors):\n for file in errors:\n self.add_parsing_error(file)\n\n def add_parsing_error(self, file):\n if file:\n self.parsing_errors.append(file)\n\n def add_record(self, record):\n if record.check_result['result'] == CheckResult.PASSED:\n self.passed_checks.append(record)\n if record.check_result['result'] == CheckResult.FAILED:\n self.failed_checks.append(record)\n if record.check_result['result'] == CheckResult.SKIPPED:\n self.skipped_checks.append(record)\n\n def get_summary(self):\n return {\n \"passed\": len(self.passed_checks),\n \"failed\": len(self.failed_checks),\n \"skipped\": len(self.skipped_checks),\n \"parsing_errors\": len(self.parsing_errors),\n \"checkov_version\": version\n }\n\n def get_json(self):\n return json.dumps(self.get_dict(), indent=4)\n\n def get_dict(self):\n return {\n \"check_type\": self.check_type,\n \"results\": {\n \"passed_checks\": [check.__dict__ for check in self.passed_checks],\n \"failed_checks\": [check.__dict__ for check in self.failed_checks],\n \"skipped_checks\": [check.__dict__ for check in self.skipped_checks],\n \"parsing_errors\": list(self.parsing_errors)\n },\n \"summary\": self.get_summary()\n }\n\n def get_exit_code(self, soft_fail):\n if soft_fail:\n return 0\n elif len(self.failed_checks) > 0:\n return 1\n return 0\n\n def is_empty(self):\n return len(self.passed_checks) + len(self.failed_checks) + len(self.skipped_checks) + len(self.parsing_errors) == 0\n\n def print_console(self, is_quiet=False, is_compact=False):\n summary = self.get_summary()\n print(colored(f\"{self.check_type} scan results:\", \"blue\"))\n if self.parsing_errors:\n message = \"\\nPassed checks: {}, Failed checks: {}, Skipped checks: {}, Parsing errors: {}\\n\".format(\n summary[\"passed\"], summary[\"failed\"], summary[\"skipped\"], summary[\"parsing_errors\"])\n else:\n message = \"\\nPassed checks: {}, Failed checks: {}, Skipped checks: {}\\n\".format(\n summary[\"passed\"], summary[\"failed\"], summary[\"skipped\"])\n print(colored(message, \"cyan\"))\n if not is_quiet:\n for record in self.passed_checks:\n print(record.to_string(compact=is_compact))\n for record in self.failed_checks:\n print(record.to_string(compact=is_compact))\n if not is_quiet:\n for record in self.skipped_checks:\n print(record.to_string(compact=is_compact))\n\n if not is_quiet:\n for file in self.parsing_errors:\n Report._print_parsing_error_console(file)\n\n @staticmethod\n def _print_parsing_error_console(file):\n print(colored(f'Error parsing file {file}', 'red'))\n\n def print_junit_xml(self):\n ts = self.get_test_suites()\n print(TestSuite.to_xml_string(ts))\n\n def print_failed_github_md(self):\n result = []\n for record in self.failed_checks:\n result.append([record.check_id, record.file_path ,record.resource, record.check_name, record.guideline])\n print(tabulate(result, headers=[\"check_id\", \"file\" 
,\"resource\", \"check_name\", \"guideline\"], tablefmt=\"github\", showindex=True))\n print(\"\\n\\n---\\n\\n\")\n\n def get_test_suites(self):\n test_cases = defaultdict(list)\n test_suites = []\n records = self.passed_checks + self.failed_checks + self.skipped_checks\n for record in records:\n check_name = record.check_name\n\n test_name = \"{} {} {}\".format(self.check_type, check_name, record.resource)\n test_case = TestCase(name=test_name, file=record.file_path, classname=record.check_class)\n if record.check_result['result'] == CheckResult.FAILED:\n test_case.add_failure_info(\n \"Resource \\\"{}\\\" failed in check \\\"{}\\\"\".format(record.resource, check_name))\n if record.check_result['result'] == CheckResult.SKIPPED:\n test_case.add_skipped_info(\n \"Resource \\\"{}\\\" skipped in check \\\"{}\\\"\\n Suppress comment: {}\".format(record.resource, check_name,\n record.check_result[\n 'suppress_comment']))\n test_cases[check_name].append(test_case)\n for key in test_cases.keys():\n test_suites.append(\n TestSuite(name=key, test_cases=test_cases[key], package=test_cases[key][0].classname))\n return test_suites\n\n def print_json(self):\n print(self.get_json())\n\n", "path": "checkov/common/output/report.py"}, {"content": "import json\nimport logging\nfrom abc import abstractmethod\n\nfrom checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry\nfrom checkov.common.output.report import Report\n\nOUTPUT_CHOICES = ['cli', 'json', 'junitxml', 'github_failed_only']\n\nfrom checkov.common.bridgecrew.platform_integration import BcPlatformIntegration\n\n\nclass RunnerRegistry(object):\n runners = []\n scan_reports = []\n banner = \"\"\n\n def __init__(self, banner, runner_filter, *runners):\n self.logger = logging.getLogger(__name__)\n self.runner_filter = runner_filter\n self.runners = runners\n self.banner = banner\n self.scan_reports = []\n self.filter_runner_framework()\n self.bc_platform = BcPlatformIntegration()\n\n @abstractmethod\n def extract_entity_details(self, entity):\n raise NotImplementedError()\n\n def run(self, root_folder=None, external_checks_dir=None, files=None, guidelines=None, collect_skip_comments=True, bc_integration=None):\n for runner in self.runners:\n integration_feature_registry.run_pre_scan()\n scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)\n integration_feature_registry.run_post_scan(scan_report)\n if guidelines:\n RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)\n self.scan_reports.append(scan_report)\n return self.scan_reports\n\n def print_reports(self, scan_reports, args, url=None):\n if args.output == 'cli':\n print(f\"{self.banner}\\n\")\n exit_codes = []\n report_jsons = []\n junit_reports = []\n for report in scan_reports:\n if not report.is_empty():\n if args.output == \"json\":\n report_jsons.append(report.get_dict())\n elif args.output == \"junitxml\":\n junit_reports.append(report)\n # report.print_junit_xml()\n elif args.output == 'github_failed_only':\n report.print_failed_github_md()\n else:\n report.print_console(is_quiet=args.quiet, is_compact=args.compact)\n if url:\n print(\"More details: {}\".format(url))\n exit_codes.append(report.get_exit_code(args.soft_fail))\n if args.output == \"junitxml\":\n if len(junit_reports) == 1:\n junit_reports[0].print_junit_xml()\n else:\n master_report = Report(None)\n for report in junit_reports:\n 
master_report.skipped_checks += report.skipped_checks\n master_report.passed_checks += report.passed_checks\n master_report.failed_checks += report.failed_checks\n master_report.print_junit_xml()\n if args.output == \"json\":\n if len(report_jsons) == 1:\n print(json.dumps(report_jsons[0], indent=4))\n else:\n print(json.dumps(report_jsons, indent=4))\n if args.output == \"cli\":\n self.bc_platform.get_report_to_platform(args,scan_reports)\n\n exit_code = 1 if 1 in exit_codes else 0\n exit(exit_code)\n\n def filter_runner_framework(self):\n if not self.runner_filter:\n return\n if self.runner_filter.framework == 'all':\n return\n for runner in self.runners:\n if runner.check_type == self.runner_filter.framework:\n self.runners = [runner]\n return\n\n @staticmethod\n def enrich_report_with_guidelines(scan_report, guidelines):\n for record in scan_report.failed_checks + scan_report.passed_checks + scan_report.skipped_checks:\n if record.check_id in guidelines:\n record.set_guideline(guidelines[record.check_id])\n", "path": "checkov/common/runners/runner_registry.py"}]} | 3,110 | 507 |
gh_patches_debug_11300 | rasdani/github-patches | git_diff | pypa__setuptools-1986 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecated distutils bdist_wininst is going to be removed
I proposed to remove the bdist_wininst command from distutils in Python 3.9:
* https://bugs.python.org/issue39541
* https://discuss.python.org/t/remove-distutils-bdist-wininst-command/3115
* https://github.com/python/cpython/pull/18329
Problem: setuptools always uses it on all platforms at: setuptools/command/install_scripts.py, line 35:
```
bw_cmd = self.get_finalized_command("bdist_wininst")
```
See #857, which is a closed duplicate that proposed different options to fix the issue.
</issue>
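One way to make that lookup tolerant of the removal is sketched below. This mirrors the guard idea rather than prescribing the project's final fix, and it assumes the failed command lookup inside `install_scripts.run()` surfaces as `ImportError` on Pythons where distutils no longer ships the command:

```python
# Illustrative sketch only, meant to live inside install_scripts.run().
# Assumption: looking up the dropped command raises ImportError.
try:
    bw_cmd = self.get_finalized_command("bdist_wininst")
    is_wininst = getattr(bw_cmd, '_is_running', False)
except ImportError:
    is_wininst = False
```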
<code>
[start of setuptools/command/install_scripts.py]
1 from distutils import log
2 import distutils.command.install_scripts as orig
3 import os
4 import sys
5
6 from pkg_resources import Distribution, PathMetadata, ensure_directory
7
8
9 class install_scripts(orig.install_scripts):
10 """Do normal script install, plus any egg_info wrapper scripts"""
11
12 def initialize_options(self):
13 orig.install_scripts.initialize_options(self)
14 self.no_ep = False
15
16 def run(self):
17 import setuptools.command.easy_install as ei
18
19 self.run_command("egg_info")
20 if self.distribution.scripts:
21 orig.install_scripts.run(self) # run first to set up self.outfiles
22 else:
23 self.outfiles = []
24 if self.no_ep:
25 # don't install entry point scripts into .egg file!
26 return
27
28 ei_cmd = self.get_finalized_command("egg_info")
29 dist = Distribution(
30 ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),
31 ei_cmd.egg_name, ei_cmd.egg_version,
32 )
33 bs_cmd = self.get_finalized_command('build_scripts')
34 exec_param = getattr(bs_cmd, 'executable', None)
35 bw_cmd = self.get_finalized_command("bdist_wininst")
36 is_wininst = getattr(bw_cmd, '_is_running', False)
37 writer = ei.ScriptWriter
38 if is_wininst:
39 exec_param = "python.exe"
40 writer = ei.WindowsScriptWriter
41 if exec_param == sys.executable:
42 # In case the path to the Python executable contains a space, wrap
43 # it so it's not split up.
44 exec_param = [exec_param]
45 # resolve the writer to the environment
46 writer = writer.best()
47 cmd = writer.command_spec_class.best().from_param(exec_param)
48 for args in writer.get_args(dist, cmd.as_header()):
49 self.write_script(*args)
50
51 def write_script(self, script_name, contents, mode="t", *ignored):
52 """Write an executable file to the scripts directory"""
53 from setuptools.command.easy_install import chmod, current_umask
54
55 log.info("Installing %s script to %s", script_name, self.install_dir)
56 target = os.path.join(self.install_dir, script_name)
57 self.outfiles.append(target)
58
59 mask = current_umask()
60 if not self.dry_run:
61 ensure_directory(target)
62 f = open(target, "w" + mode)
63 f.write(contents)
64 f.close()
65 chmod(target, 0o777 - mask)
66
[end of setuptools/command/install_scripts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/command/install_scripts.py b/setuptools/command/install_scripts.py
--- a/setuptools/command/install_scripts.py
+++ b/setuptools/command/install_scripts.py
@@ -32,8 +32,11 @@
)
bs_cmd = self.get_finalized_command('build_scripts')
exec_param = getattr(bs_cmd, 'executable', None)
- bw_cmd = self.get_finalized_command("bdist_wininst")
- is_wininst = getattr(bw_cmd, '_is_running', False)
+ try:
+ bw_cmd = self.get_finalized_command("bdist_wininst")
+ is_wininst = getattr(bw_cmd, '_is_running', False)
+ except ImportError:
+ is_wininst = False
writer = ei.ScriptWriter
if is_wininst:
exec_param = "python.exe"
| {"golden_diff": "diff --git a/setuptools/command/install_scripts.py b/setuptools/command/install_scripts.py\n--- a/setuptools/command/install_scripts.py\n+++ b/setuptools/command/install_scripts.py\n@@ -32,8 +32,11 @@\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n- bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n- is_wininst = getattr(bw_cmd, '_is_running', False)\n+ try:\n+ bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n+ is_wininst = getattr(bw_cmd, '_is_running', False)\n+ except ImportError:\n+ is_wininst = False\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n", "issue": "Deprecated distutils bdist_wininst is going to be removed\nI proposed to remove the bdist_winstinst command from distutils in Python 3.9:\r\n\r\n* https://bugs.python.org/issue39541\r\n* https://discuss.python.org/t/remove-distutils-bdist-wininst-command/3115\r\n* https://github.com/python/cpython/pull/18329\r\n\r\nProblem: setuptools always uses it on all platforms at: setuptools/command/install_scripts.py, line 35:\r\n\r\n```\r\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\r\n```\r\n\r\nSee #857 which is a closed duplicated which proposed different options to fix the issue.\n", "before_files": [{"content": "from distutils import log\nimport distutils.command.install_scripts as orig\nimport os\nimport sys\n\nfrom pkg_resources import Distribution, PathMetadata, ensure_directory\n\n\nclass install_scripts(orig.install_scripts):\n \"\"\"Do normal script install, plus any egg_info wrapper scripts\"\"\"\n\n def initialize_options(self):\n orig.install_scripts.initialize_options(self)\n self.no_ep = False\n\n def run(self):\n import setuptools.command.easy_install as ei\n\n self.run_command(\"egg_info\")\n if self.distribution.scripts:\n orig.install_scripts.run(self) # run first to set up self.outfiles\n else:\n self.outfiles = []\n if self.no_ep:\n # don't install entry point scripts into .egg file!\n return\n\n ei_cmd = self.get_finalized_command(\"egg_info\")\n dist = Distribution(\n ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),\n ei_cmd.egg_name, ei_cmd.egg_version,\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n is_wininst = getattr(bw_cmd, '_is_running', False)\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n writer = ei.WindowsScriptWriter\n if exec_param == sys.executable:\n # In case the path to the Python executable contains a space, wrap\n # it so it's not split up.\n exec_param = [exec_param]\n # resolve the writer to the environment\n writer = writer.best()\n cmd = writer.command_spec_class.best().from_param(exec_param)\n for args in writer.get_args(dist, cmd.as_header()):\n self.write_script(*args)\n\n def write_script(self, script_name, contents, mode=\"t\", *ignored):\n \"\"\"Write an executable file to the scripts directory\"\"\"\n from setuptools.command.easy_install import chmod, current_umask\n\n log.info(\"Installing %s script to %s\", script_name, self.install_dir)\n target = os.path.join(self.install_dir, script_name)\n self.outfiles.append(target)\n\n mask = current_umask()\n if not self.dry_run:\n ensure_directory(target)\n f = open(target, \"w\" + mode)\n f.write(contents)\n f.close()\n chmod(target, 0o777 - mask)\n", "path": "setuptools/command/install_scripts.py"}]} | 1,337 | 181 |
gh_patches_debug_39724 | rasdani/github-patches | git_diff | ephios-dev__ephios-178 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Event creation mails do not include event description
</issue>
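As a hedged illustration of what including the description could look like inside `new_event()` (names such as `event.description`, `event.location`, and `SITE_URL` come from the module listed below; this is a sketch, not necessarily the exact wording the project shipped):

```python
# Sketch: build the plain-text notification so it carries the location, the
# description, and an absolute link instead of a bare relative URL.
from urllib.parse import urljoin

text_content = _(
    "A new {type} ({title}, {location}) has been added.\n"
    "Further information: {description}\n"
    "You can view the event here: {url}"
).format(
    type=event.type,
    title=event.title,
    location=event.location,
    description=event.description,
    url=urljoin(SITE_URL, event.get_absolute_url()),
)
```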
<code>
[start of ephios/event_management/mail.py]
1 from django.core import mail
2 from django.core.mail import EmailMultiAlternatives
3 from django.template.loader import render_to_string
4 from django.utils.translation import gettext as _
5 from guardian.shortcuts import get_users_with_perms
6
7 from ephios.event_management.models import AbstractParticipation
8 from ephios.extra.permissions import get_groups_with_perms
9 from ephios.settings import SITE_URL
10 from ephios.user_management.models import UserProfile
11
12
13 def new_event(event):
14 messages = []
15 users = UserProfile.objects.filter(
16 groups__in=get_groups_with_perms(event, only_with_perms_in=["view_event"]), is_active=True
17 ).distinct()
18 responsible_users = get_users_with_perms(event, only_with_perms_in=["change_event"]).distinct()
19 responsible_persons_mails = list(responsible_users.values_list("email", flat=True))
20
21 subject = _("New {type}: {title}").format(type=event.type, title=event.title)
22 text_content = _(
23 "A new {type} ({title}) has been added. \n You can view it here: {link}"
24 ).format(type=event.type, title=event.title, link=event.get_absolute_url())
25 html_content = render_to_string(
26 "event_management/mails/new_event.html", {"event": event, "site_url": SITE_URL}
27 )
28
29 for user in users:
30 message = EmailMultiAlternatives(
31 to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails
32 )
33 message.attach_alternative(html_content, "text/html")
34 messages.append(message)
35 mail.get_connection().send_messages(messages)
36
37
38 def participation_state_changed(participation: AbstractParticipation):
39 if participation.state != AbstractParticipation.States.USER_DECLINED:
40 messages = []
41
42 # send mail to the participant whose participation has been changed
43 if participation.participant.email is not None:
44 text_content = _(
45 "The status for your participation for {shift} has changed. It is now {status}."
46 ).format(shift=participation.shift, status=participation.get_state_display())
47 html_content = render_to_string("email_base.html", {"message_text": text_content})
48 message = EmailMultiAlternatives(
49 to=[participation.participant.email],
50 subject=_("Your participation state changed"),
51 body=text_content,
52 )
53 message.attach_alternative(html_content, "text/html")
54 messages.append(message)
55
56 # send mail to responsible users
57 responsible_users = get_users_with_perms(
58 participation.shift.event, only_with_perms_in=["change_event"]
59 ).distinct()
60 subject = _("Participation was changed for your event")
61 text_content = _(
62 "The participation of {participant} for {shift} was changed. The status is now {status}"
63 ).format(
64 participant=participation.participant,
65 shift=participation.shift,
66 status=participation.get_state_display(),
67 )
68 html_content = render_to_string("email_base.html", {"message_text": text_content})
69 for user in responsible_users:
70 message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)
71 message.attach_alternative(html_content, "text/html")
72 messages.append(message)
73
74 mail.get_connection().send_messages(messages)
75
[end of ephios/event_management/mail.py]
[start of ephios/user_management/mail.py]
1 from django.contrib.auth.tokens import default_token_generator
2 from django.core.mail import EmailMultiAlternatives
3 from django.template.loader import render_to_string
4 from django.urls import reverse
5 from django.utils.encoding import force_bytes
6 from django.utils.http import urlsafe_base64_encode
7 from django.utils.translation import gettext as _
8
9 from ephios.settings import SITE_URL
10
11
12 def send_account_creation_info(userprofile):
13 subject = _("Welcome to ephios!")
14 uid = urlsafe_base64_encode(force_bytes(userprofile.id))
15 token = default_token_generator.make_token(userprofile)
16 reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
17 text_content = _(
18 "You're receiving this email because a new account has been created for you at ephios.\n"
19 "Please go to the following page and choose a password: {url}{reset_link}\n"
20 "Your username is your email address: {email}\n"
21 ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
22
23 html_content = render_to_string(
24 "user_management/new_account_email.html",
25 {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email},
26 )
27 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
28 message.attach_alternative(html_content, "text/html")
29 message.send()
30
31
32 def send_account_update_info(userprofile):
33 subject = _("ephios account updated")
34 url = reverse("user_management:profile")
35 text_content = _(
36 "You're receiving this email because your account at ephios has been updated.\n"
37 "You can see the changes in your profile: {site_url}{url}\n"
38 "Your username is your email address: {email}\n"
39 ).format(site_url=SITE_URL, url=url, email=userprofile.email)
40
41 html_content = render_to_string(
42 "user_management/account_updated_email.html",
43 {"site_url": SITE_URL, "url": url, "email": userprofile.email},
44 )
45 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
46 message.attach_alternative(html_content, "text/html")
47 message.send()
48
[end of ephios/user_management/mail.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ephios/event_management/mail.py b/ephios/event_management/mail.py
--- a/ephios/event_management/mail.py
+++ b/ephios/event_management/mail.py
@@ -1,3 +1,5 @@
+from urllib.parse import urljoin
+
from django.core import mail
from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string
@@ -20,8 +22,16 @@
subject = _("New {type}: {title}").format(type=event.type, title=event.title)
text_content = _(
- "A new {type} ({title}) has been added. \n You can view it here: {link}"
- ).format(type=event.type, title=event.title, link=event.get_absolute_url())
+ "A new {type} ({title}, {location}) has been added.\n"
+ "Further information: {description}\n"
+ "You can view the event here: {url}"
+ ).format(
+ type=event.type,
+ title=event.title,
+ location=event.location,
+ description=event.description,
+ url=urljoin(SITE_URL, event.get_absolute_url()),
+ )
html_content = render_to_string(
"event_management/mails/new_event.html", {"event": event, "site_url": SITE_URL}
)
diff --git a/ephios/user_management/mail.py b/ephios/user_management/mail.py
--- a/ephios/user_management/mail.py
+++ b/ephios/user_management/mail.py
@@ -1,3 +1,5 @@
+from urllib.parse import urljoin
+
from django.contrib.auth.tokens import default_token_generator
from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string
@@ -16,9 +18,9 @@
reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
text_content = _(
"You're receiving this email because a new account has been created for you at ephios.\n"
- "Please go to the following page and choose a password: {url}{reset_link}\n"
+ "Please go to the following page and choose a password: {url}\n"
"Your username is your email address: {email}\n"
- ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
+ ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)
html_content = render_to_string(
"user_management/new_account_email.html",
@@ -34,9 +36,9 @@
url = reverse("user_management:profile")
text_content = _(
"You're receiving this email because your account at ephios has been updated.\n"
- "You can see the changes in your profile: {site_url}{url}\n"
+ "You can see the changes in your profile: {url}\n"
"Your username is your email address: {email}\n"
- ).format(site_url=SITE_URL, url=url, email=userprofile.email)
+ ).format(url=urljoin(SITE_URL, url), email=userprofile.email)
html_content = render_to_string(
"user_management/account_updated_email.html",
| {"golden_diff": "diff --git a/ephios/event_management/mail.py b/ephios/event_management/mail.py\n--- a/ephios/event_management/mail.py\n+++ b/ephios/event_management/mail.py\n@@ -1,3 +1,5 @@\n+from urllib.parse import urljoin\n+\n from django.core import mail\n from django.core.mail import EmailMultiAlternatives\n from django.template.loader import render_to_string\n@@ -20,8 +22,16 @@\n \n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n- \"A new {type} ({title}) has been added. \\n You can view it here: {link}\"\n- ).format(type=event.type, title=event.title, link=event.get_absolute_url())\n+ \"A new {type} ({title}, {location}) has been added.\\n\"\n+ \"Further information: {description}\\n\"\n+ \"You can view the event here: {url}\"\n+ ).format(\n+ type=event.type,\n+ title=event.title,\n+ location=event.location,\n+ description=event.description,\n+ url=urljoin(SITE_URL, event.get_absolute_url()),\n+ )\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\ndiff --git a/ephios/user_management/mail.py b/ephios/user_management/mail.py\n--- a/ephios/user_management/mail.py\n+++ b/ephios/user_management/mail.py\n@@ -1,3 +1,5 @@\n+from urllib.parse import urljoin\n+\n from django.contrib.auth.tokens import default_token_generator\n from django.core.mail import EmailMultiAlternatives\n from django.template.loader import render_to_string\n@@ -16,9 +18,9 @@\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n- \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n+ \"Please go to the following page and choose a password: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n- ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n+ ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)\n \n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n@@ -34,9 +36,9 @@\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n- \"You can see the changes in your profile: {site_url}{url}\\n\"\n+ \"You can see the changes in your profile: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n- ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n+ ).format(url=urljoin(SITE_URL, url), email=userprofile.email)\n \n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n", "issue": "Event creation mails do not include event description\n\n", "before_files": [{"content": "from django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext as _\nfrom guardian.shortcuts import get_users_with_perms\n\nfrom ephios.event_management.models import AbstractParticipation\nfrom ephios.extra.permissions import get_groups_with_perms\nfrom ephios.settings import SITE_URL\nfrom ephios.user_management.models import UserProfile\n\n\ndef new_event(event):\n messages = []\n users = UserProfile.objects.filter(\n groups__in=get_groups_with_perms(event, only_with_perms_in=[\"view_event\"]), is_active=True\n ).distinct()\n responsible_users = get_users_with_perms(event, 
only_with_perms_in=[\"change_event\"]).distinct()\n responsible_persons_mails = list(responsible_users.values_list(\"email\", flat=True))\n\n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n \"A new {type} ({title}) has been added. \\n You can view it here: {link}\"\n ).format(type=event.type, title=event.title, link=event.get_absolute_url())\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\n\n for user in users:\n message = EmailMultiAlternatives(\n to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n mail.get_connection().send_messages(messages)\n\n\ndef participation_state_changed(participation: AbstractParticipation):\n if participation.state != AbstractParticipation.States.USER_DECLINED:\n messages = []\n\n # send mail to the participant whose participation has been changed\n if participation.participant.email is not None:\n text_content = _(\n \"The status for your participation for {shift} has changed. It is now {status}.\"\n ).format(shift=participation.shift, status=participation.get_state_display())\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n message = EmailMultiAlternatives(\n to=[participation.participant.email],\n subject=_(\"Your participation state changed\"),\n body=text_content,\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n # send mail to responsible users\n responsible_users = get_users_with_perms(\n participation.shift.event, only_with_perms_in=[\"change_event\"]\n ).distinct()\n subject = _(\"Participation was changed for your event\")\n text_content = _(\n \"The participation of {participant} for {shift} was changed. 
The status is now {status}\"\n ).format(\n participant=participation.participant,\n shift=participation.shift,\n status=participation.get_state_display(),\n )\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n for user in responsible_users:\n message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n mail.get_connection().send_messages(messages)\n", "path": "ephios/event_management/mail.py"}, {"content": "from django.contrib.auth.tokens import default_token_generator\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom ephios.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n subject = _(\"Welcome to ephios!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n\n\ndef send_account_update_info(userprofile):\n subject = _(\"ephios account updated\")\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n \"You can see the changes in your profile: {site_url}{url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n", "path": "ephios/user_management/mail.py"}]} | 1,969 | 709 |
gh_patches_debug_535 | rasdani/github-patches | git_diff | neptune-ai__neptune-client-155 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
create_experiment() fails on windows 10
Hi there,
I enjoy neptune very much and on my macbook everything works fine. But when I run the same code on my Windows 10 machine, I get an error when calling create_experiment().
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\neptune\__init__.py", line 177, in create_experiment
notebook_id=notebook_id
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\neptune\projects.py", line 400, in create_experiment
click.echo(str(experiment.id))
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\utils.py", line 218, in echo
file = _default_text_stdout()
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_compat.py", line 675, in func
rv = wrapper_func()
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_compat.py", line 436, in get_text_stdout
rv = _get_windows_console_stream(sys.stdout, encoding, errors)
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_winconsole.py", line 295, in _get_windows_console_stream
func = _stream_factories.get(f.fileno())
AttributeError: 'StdOutWithUpload' object has no attribute 'fileno'`
It happens when I run:
`import neptune `
`import cfg`
`neptune.init(api_token=cfg.neptune_token, project_qualified_name=cfg.neptune_project_name) `
`neptune.create_experiment()`
I run it in conda environments both times.
</issue>
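The traceback points at the missing method directly: click's Windows console detection calls `sys.stdout.fileno()`, and the wrapper does not provide it. A minimal sketch of the missing piece, assuming that delegating to the wrapped stream is sufficient:

```python
# Sketch for StdStreamWithUpload: forward fileno() to the real stream so that
# callers such as click's _get_windows_console_stream(sys.stdout, ...) succeed.
def fileno(self):
    return self._stream.fileno()
```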
<code>
[start of neptune/internal/streams/stdstream_uploader.py]
1 #
2 # Copyright (c) 2019, Neptune Labs Sp. z o.o.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 import sys
17
18 from neptune.internal.channels.channels import ChannelNamespace
19 from neptune.internal.streams.channel_writer import ChannelWriter
20
21
22 class StdStreamWithUpload(object):
23
24 def __init__(self, experiment, channel_name, stream):
25 # pylint:disable=protected-access
26 self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)
27 self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)
28 self._stream = stream
29
30 def write(self, data):
31 self._stream.write(data)
32 try:
33 self._channel_writer.write(data)
34 # pylint:disable=bare-except
35 except:
36 pass
37
38 def isatty(self):
39 return hasattr(self._stream, 'isatty') and self._stream.isatty()
40
41 def flush(self):
42 self._stream.flush()
43
44
45 class StdOutWithUpload(StdStreamWithUpload):
46
47 def __init__(self, experiment):
48 super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)
49 sys.stdout = self
50
51 def close(self):
52 sys.stdout = sys.__stdout__
53
54
55 class StdErrWithUpload(StdStreamWithUpload):
56
57 def __init__(self, experiment):
58 super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)
59 sys.stderr = self
60
61 def close(self):
62 sys.stderr = sys.__stderr__
63
[end of neptune/internal/streams/stdstream_uploader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/neptune/internal/streams/stdstream_uploader.py b/neptune/internal/streams/stdstream_uploader.py
--- a/neptune/internal/streams/stdstream_uploader.py
+++ b/neptune/internal/streams/stdstream_uploader.py
@@ -41,6 +41,9 @@
def flush(self):
self._stream.flush()
+ def fileno(self):
+ return self._stream.fileno()
+
class StdOutWithUpload(StdStreamWithUpload):
| {"golden_diff": "diff --git a/neptune/internal/streams/stdstream_uploader.py b/neptune/internal/streams/stdstream_uploader.py\n--- a/neptune/internal/streams/stdstream_uploader.py\n+++ b/neptune/internal/streams/stdstream_uploader.py\n@@ -41,6 +41,9 @@\n def flush(self):\n self._stream.flush()\n \n+ def fileno(self):\n+ return self._stream.fileno()\n+\n \n class StdOutWithUpload(StdStreamWithUpload):\n", "issue": "create_experiment() fails on windows 10\nHi there, \r\n\r\nI enjoy neptune very much and on my macbook everything works fine. But when I run the same code on my Windows 10 machine, I get an error when calling create_experiment().\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\neptune\\__init__.py\", line 177, in create_experiment\r\n notebook_id=notebook_id\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\neptune\\projects.py\", line 400, in create_experiment\r\n click.echo(str(experiment.id))\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\utils.py\", line 218, in echo\r\n file = _default_text_stdout()\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_compat.py\", line 675, in func\r\n rv = wrapper_func()\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_compat.py\", line 436, in get_text_stdout\r\n rv = _get_windows_console_stream(sys.stdout, encoding, errors)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_winconsole.py\", line 295, in _get_windows_console_stream\r\n func = _stream_factories.get(f.fileno())\r\nAttributeError: 'StdOutWithUpload' object has no attribute 'fileno'`\r\n\r\nIt happens when I run:\r\n\r\n`import neptune `\r\n`import cfg`\r\n`neptune.init(api_token=cfg.neptune_token, project_qualified_name=cfg.neptune_project_name) `\r\n`neptune.create_experiment()`\r\n\r\nI run it in conda environments both times.\r\n\n", "before_files": [{"content": "#\n# Copyright (c) 2019, Neptune Labs Sp. 
z o.o.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport sys\n\nfrom neptune.internal.channels.channels import ChannelNamespace\nfrom neptune.internal.streams.channel_writer import ChannelWriter\n\n\nclass StdStreamWithUpload(object):\n\n def __init__(self, experiment, channel_name, stream):\n # pylint:disable=protected-access\n self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)\n self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)\n self._stream = stream\n\n def write(self, data):\n self._stream.write(data)\n try:\n self._channel_writer.write(data)\n # pylint:disable=bare-except\n except:\n pass\n\n def isatty(self):\n return hasattr(self._stream, 'isatty') and self._stream.isatty()\n\n def flush(self):\n self._stream.flush()\n\n\nclass StdOutWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)\n sys.stdout = self\n\n def close(self):\n sys.stdout = sys.__stdout__\n\n\nclass StdErrWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)\n sys.stderr = self\n\n def close(self):\n sys.stderr = sys.__stderr__\n", "path": "neptune/internal/streams/stdstream_uploader.py"}]} | 1,564 | 105 |
gh_patches_debug_31116 | rasdani/github-patches | git_diff | gratipay__gratipay.com-870 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Profiles generated with fake_data.py 500 when viewed
Introduced in a4b904f
This is due to `user_info` being set to an empty string instead of a dict/json, causing `escape` to be passed `None` (from `user_info.get`) instead of a string.
```
Traceback (most recent call last):
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py", line 66, in handle_safely
response = self.handle(request)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py", line 99, in handle
response = request.resource.respond(request)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py", line 57, in respond
response = self.get_response(context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/negotiated_resource.py", line 100, in get_response
response.body = render(context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/__init__.py", line 113, in __call__
return self.render_content(context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/tornado.py", line 14, in render_content
return self.compiled.generate(**context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/template.py", line 129, in generate
return execute()
File "/home/joe/git/www.gittip.com/www/%username/index.html", line 786, in _execute
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/escape.py", line 52, in xhtml_escape
return xml.sax.saxutils.escape(value, {'"': "&quot;"})
File "/usr/lib/python2.7/xml/sax/saxutils.py", line 39, in escape
data = data.replace("&", "&amp;")
AttributeError: 'NoneType' object has no attribute 'replace'
```
</issue>
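To make the failure concrete: `fake_elsewhere()` stores `user_info=''`, so the profile template's `user_info.get(...)` lookups hand `None` to `escape()`. A hedged sketch of seeding a minimal dict instead (the keys shown are illustrative, chosen to match what GitHub-style profile templates appear to read):

```python
# Sketch for fake_elsewhere(): hand the template a small dict so that
# user_info.get(...) returns strings rather than None.
user_info = {
    "name": participant.username,
    "html_url": "https://github.com/" + participant.username,
    "login": participant.username,
}
return Elsewhere(
    id=fake_int_id(),
    platform=platform,
    user_id=fake_text_id(),
    is_locked=False,
    participant=participant.username,
    user_info=user_info,
)
```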
<code>
[start of gittip/fake_data.py]
1 from faker import Factory
2 from gittip import orm
3 from gittip.models.tip import Tip
4 from gittip.models.participant import Participant
5 from gittip.models.elsewhere import Elsewhere
6 from gittip import AMOUNTS
7 import string
8 import random
9
10 faker = Factory.create()
11
12 platforms = ['github', 'twitter']
13
14
15 def fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):
16 """
17 Create a random text id
18 """
19 return ''.join(random.choice(chars) for x in range(size))
20
21
22 def fake_balance(max_amount=100):
23 """
24 Return a random amount between 0 and max_amount
25 """
26 return random.random() * max_amount
27
28 def fake_int_id(nmax=2 ** 31 -1):
29 """
30 Create a random int id
31 """
32 return random.randint(0, nmax)
33
34
35 def fake_participant(is_admin=False, anonymous=False):
36 """
37 Create a fake User
38 """
39 username = faker.firstName() + fake_text_id(3)
40 return Participant(
41 id=fake_int_id(),
42 username=username,
43 username_lower=username.lower(),
44 statement=faker.sentence(),
45 ctime=faker.dateTimeThisYear(),
46 is_admin=is_admin,
47 balance=fake_balance(),
48 anonymous=anonymous,
49 goal=fake_balance(),
50 balanced_account_uri=faker.uri(),
51 last_ach_result='',
52 is_suspicious=False,
53 last_bill_result='', # Needed to not be suspicious
54 claimed_time=faker.dateTimeThisYear(),
55 type="individual"
56 )
57
58
59 def fake_tip(tipper, tippee):
60 """
61 Create a fake tip
62 """
63 return Tip(
64 id=fake_int_id(),
65 ctime=faker.dateTimeThisYear(),
66 mtime=faker.dateTimeThisMonth(),
67 tipper=tipper.username,
68 tippee=tippee.username,
69 amount=random.choice(AMOUNTS)
70 )
71
72
73 def fake_elsewhere(participant, platform=None):
74 """
75 Create a fake elsewhere
76 """
77 if platform is None:
78 platform = random.choice(platforms)
79
80 return Elsewhere(
81 id=fake_int_id(),
82 platform=platform,
83 user_id=fake_text_id(),
84 is_locked=False,
85 participant=participant.username,
86 user_info=''
87 )
88
89
90 def populate_db(session, num_participants=100, num_tips=50):
91 """
92 Populate DB with fake data
93 """
94 #Make the participants
95 participants = []
96 for i in xrange(num_participants):
97 p = fake_participant()
98 session.add(p)
99 participants.append(p)
100
101 #Make the "Elsewhere's"
102 for p in participants:
103 #All participants get 1 or 2 elsewheres
104 num_elsewheres = random.randint(1, 2)
105 for platform_name in platforms[:num_elsewheres]:
106 e = fake_elsewhere(p, platform_name)
107 session.add(e)
108
109 #Make the tips
110 tips = []
111 for i in xrange(num_tips):
112 tipper, tippee = random.sample(participants, 2)
113 t = fake_tip(tipper, tippee)
114 tips.append(t)
115 session.add(t)
116 session.commit()
117
118
119 def main():
120 db = orm.db
121 dbsession = db.session
122 populate_db(dbsession)
123
124 if __name__ == '__main__':
125 main()
126
[end of gittip/fake_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gittip/fake_data.py b/gittip/fake_data.py
--- a/gittip/fake_data.py
+++ b/gittip/fake_data.py
@@ -9,7 +9,7 @@
faker = Factory.create()
-platforms = ['github', 'twitter']
+platforms = ['github', 'twitter', 'bitbucket']
def fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):
@@ -77,13 +77,33 @@
if platform is None:
platform = random.choice(platforms)
+ info_templates = {
+ "github": {
+ "name": participant.username,
+ "html_url": "https://github.com/" + participant.username,
+ "type": "User",
+ "login": participant.username
+ },
+ "twitter": {
+ "name": participant.username,
+ "html_url": "https://twitter.com/" + participant.username,
+ "screen_name": participant.username
+ },
+ "bitbucket": {
+ "display_name": participant.username,
+ "username": participant.username,
+ "is_team": "False",
+ "html_url": "https://bitbucket.org/" + participant.username,
+ }
+ }
+
return Elsewhere(
id=fake_int_id(),
platform=platform,
user_id=fake_text_id(),
is_locked=False,
participant=participant.username,
- user_info=''
+ user_info=info_templates[platform]
)
@@ -100,8 +120,8 @@
#Make the "Elsewhere's"
for p in participants:
- #All participants get 1 or 2 elsewheres
- num_elsewheres = random.randint(1, 2)
+ #All participants get between 1 and 3 elsewheres
+ num_elsewheres = random.randint(1, 3)
for platform_name in platforms[:num_elsewheres]:
e = fake_elsewhere(p, platform_name)
session.add(e)
| {"golden_diff": "diff --git a/gittip/fake_data.py b/gittip/fake_data.py\n--- a/gittip/fake_data.py\n+++ b/gittip/fake_data.py\n@@ -9,7 +9,7 @@\n \n faker = Factory.create()\n \n-platforms = ['github', 'twitter']\n+platforms = ['github', 'twitter', 'bitbucket']\n \n \n def fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):\n@@ -77,13 +77,33 @@\n if platform is None:\n platform = random.choice(platforms)\n \n+ info_templates = {\n+ \"github\": {\n+ \"name\": participant.username,\n+ \"html_url\": \"https://github.com/\" + participant.username,\n+ \"type\": \"User\",\n+ \"login\": participant.username\n+ },\n+ \"twitter\": {\n+ \"name\": participant.username,\n+ \"html_url\": \"https://twitter.com/\" + participant.username,\n+ \"screen_name\": participant.username\n+ },\n+ \"bitbucket\": {\n+ \"display_name\": participant.username,\n+ \"username\": participant.username,\n+ \"is_team\": \"False\",\n+ \"html_url\": \"https://bitbucket.org/\" + participant.username,\n+ }\n+ }\n+\n return Elsewhere(\n id=fake_int_id(),\n platform=platform,\n user_id=fake_text_id(),\n is_locked=False,\n participant=participant.username,\n- user_info=''\n+ user_info=info_templates[platform]\n )\n \n \n@@ -100,8 +120,8 @@\n \n #Make the \"Elsewhere's\"\n for p in participants:\n- #All participants get 1 or 2 elsewheres\n- num_elsewheres = random.randint(1, 2)\n+ #All participants get between 1 and 3 elsewheres\n+ num_elsewheres = random.randint(1, 3)\n for platform_name in platforms[:num_elsewheres]:\n e = fake_elsewhere(p, platform_name)\n session.add(e)\n", "issue": "Profiles generated with fake_data.py 500 when viewed\nIntroduced in a4b904f\n\nThis is due to `user_info` being set to an empty string instead of a dict/json, causing `escape` to be passed `None` (from `user_info.get`) instead of a string.\n\n```\nTraceback (most recent call last):\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 66, in handle_safely\n response = self.handle(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 99, in handle\n response = request.resource.respond(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py\", line 57, in respond\n response = self.get_response(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/negotiated_resource.py\", line 100, in get_response\n response.body = render(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/__init__.py\", line 113, in __call__\n return self.render_content(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/tornado.py\", line 14, in render_content\n return self.compiled.generate(**context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/template.py\", line 129, in generate\n return execute()\n File \"/home/joe/git/www.gittip.com/www/%username/index.html\", line 786, in _execute\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/escape.py\", line 52, in xhtml_escape\n return xml.sax.saxutils.escape(value, {'\"': \"\"\"})\n File \"/usr/lib/python2.7/xml/sax/saxutils.py\", line 39, in escape\n data = data.replace(\"&\", \"&\")\nAttributeError: 'NoneType' object has no attribute 'replace'\n```\n\nProfiles generated with fake_data.py 500 when viewed\nIntroduced in a4b904f\n\nThis is 
due to `user_info` being set to an empty string instead of a dict/json, causing `escape` to be passed `None` (from `user_info.get`) instead of a string.\n\n```\nTraceback (most recent call last):\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 66, in handle_safely\n response = self.handle(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 99, in handle\n response = request.resource.respond(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py\", line 57, in respond\n response = self.get_response(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/negotiated_resource.py\", line 100, in get_response\n response.body = render(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/__init__.py\", line 113, in __call__\n return self.render_content(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/tornado.py\", line 14, in render_content\n return self.compiled.generate(**context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/template.py\", line 129, in generate\n return execute()\n File \"/home/joe/git/www.gittip.com/www/%username/index.html\", line 786, in _execute\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/escape.py\", line 52, in xhtml_escape\n return xml.sax.saxutils.escape(value, {'\"': \"\"\"})\n File \"/usr/lib/python2.7/xml/sax/saxutils.py\", line 39, in escape\n data = data.replace(\"&\", \"&\")\nAttributeError: 'NoneType' object has no attribute 'replace'\n```\n\n", "before_files": [{"content": "from faker import Factory\nfrom gittip import orm\nfrom gittip.models.tip import Tip\nfrom gittip.models.participant import Participant\nfrom gittip.models.elsewhere import Elsewhere\nfrom gittip import AMOUNTS\nimport string\nimport random\n\nfaker = Factory.create()\n\nplatforms = ['github', 'twitter']\n\n\ndef fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):\n \"\"\"\n Create a random text id\n \"\"\"\n return ''.join(random.choice(chars) for x in range(size))\n\n\ndef fake_balance(max_amount=100):\n \"\"\"\n Return a random amount between 0 and max_amount\n \"\"\"\n return random.random() * max_amount\n\ndef fake_int_id(nmax=2 ** 31 -1):\n \"\"\"\n Create a random int id\n \"\"\"\n return random.randint(0, nmax)\n\n\ndef fake_participant(is_admin=False, anonymous=False):\n \"\"\"\n Create a fake User\n \"\"\"\n username = faker.firstName() + fake_text_id(3)\n return Participant(\n id=fake_int_id(),\n username=username,\n username_lower=username.lower(),\n statement=faker.sentence(),\n ctime=faker.dateTimeThisYear(),\n is_admin=is_admin,\n balance=fake_balance(),\n anonymous=anonymous,\n goal=fake_balance(),\n balanced_account_uri=faker.uri(),\n last_ach_result='',\n is_suspicious=False,\n last_bill_result='', # Needed to not be suspicious\n claimed_time=faker.dateTimeThisYear(),\n type=\"individual\"\n )\n\n\ndef fake_tip(tipper, tippee):\n \"\"\"\n Create a fake tip\n \"\"\"\n return Tip(\n id=fake_int_id(),\n ctime=faker.dateTimeThisYear(),\n mtime=faker.dateTimeThisMonth(),\n tipper=tipper.username,\n tippee=tippee.username,\n amount=random.choice(AMOUNTS)\n )\n\n\ndef fake_elsewhere(participant, platform=None):\n \"\"\"\n Create a fake elsewhere\n \"\"\"\n if platform is 
None:\n platform = random.choice(platforms)\n\n return Elsewhere(\n id=fake_int_id(),\n platform=platform,\n user_id=fake_text_id(),\n is_locked=False,\n participant=participant.username,\n user_info=''\n )\n\n\ndef populate_db(session, num_participants=100, num_tips=50):\n \"\"\"\n Populate DB with fake data\n \"\"\"\n #Make the participants\n participants = []\n for i in xrange(num_participants):\n p = fake_participant()\n session.add(p)\n participants.append(p)\n\n #Make the \"Elsewhere's\"\n for p in participants:\n #All participants get 1 or 2 elsewheres\n num_elsewheres = random.randint(1, 2)\n for platform_name in platforms[:num_elsewheres]:\n e = fake_elsewhere(p, platform_name)\n session.add(e)\n\n #Make the tips\n tips = []\n for i in xrange(num_tips):\n tipper, tippee = random.sample(participants, 2)\n t = fake_tip(tipper, tippee)\n tips.append(t)\n session.add(t)\n session.commit()\n\n\ndef main():\n db = orm.db\n dbsession = db.session\n populate_db(dbsession)\n\nif __name__ == '__main__':\n main()\n", "path": "gittip/fake_data.py"}]} | 2,577 | 455 |
gh_patches_debug_32741 | rasdani/github-patches | git_diff | WeblateOrg__weblate-9260 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document weblate migrate command
### Describe the problem
`weblate migrate` command is mentioned in the docs, but not actually documented.
It is also used inconsistently:
1. https://docs.weblate.org/en/latest/admin/languages.html#built-in-language-definitions
2. https://docs.weblate.org/en/latest/admin/install.html#filling-up-the-database
### Describe the solution you'd like
Document the usage and link it in the mentioned occurrences.
### Describe alternatives you've considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
</issue>
<code>
[start of docs/_ext/djangodocs.py]
1 """Sphinx plugins for Weblate documentation."""
2 import re
3
4 from sphinx import addnodes
5 from sphinx.domains.std import Cmdoption
6
7 # RE for option descriptions without a '--' prefix
8 simple_option_desc_re = re.compile(r"([-_a-zA-Z0-9]+)(\s*.*?)(?=,\s+(?:/|-|--)|$)")
9
10
11 def setup(app):
12 app.add_crossref_type(
13 directivename="setting", rolename="setting", indextemplate="pair: %s; setting"
14 )
15 app.add_object_type(
16 directivename="django-admin",
17 rolename="djadmin",
18 indextemplate="pair: %s; weblate admin command",
19 parse_node=parse_django_admin_node,
20 )
21 app.add_directive("django-admin-option", Cmdoption)
22
23
24 def parse_django_admin_node(env, sig, signode):
25 command = sig.split(" ")[0]
26 env.ref_context["std:program"] = command
27 title = f"weblate {sig}"
28 signode += addnodes.desc_name(title, title)
29 return command
30
[end of docs/_ext/djangodocs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/_ext/djangodocs.py b/docs/_ext/djangodocs.py
--- a/docs/_ext/djangodocs.py
+++ b/docs/_ext/djangodocs.py
@@ -1,6 +1,7 @@
"""Sphinx plugins for Weblate documentation."""
import re
+from docutils.nodes import literal
from sphinx import addnodes
from sphinx.domains.std import Cmdoption
@@ -8,22 +9,45 @@
simple_option_desc_re = re.compile(r"([-_a-zA-Z0-9]+)(\s*.*?)(?=,\s+(?:/|-|--)|$)")
+class WeblateCommandLiteral(literal):
+ def __init__(self, rawsource="", text="", *children, **attributes):
+ if not text:
+ text = "weblate "
+ super().__init__(rawsource, text, *children, **attributes)
+
+
def setup(app):
app.add_crossref_type(
directivename="setting", rolename="setting", indextemplate="pair: %s; setting"
)
+ app.add_object_type(
+ directivename="weblate-admin",
+ rolename="wladmin",
+ indextemplate="pair: %s; weblate admin command",
+ parse_node=parse_weblate_admin_node,
+ ref_nodeclass=WeblateCommandLiteral,
+ )
+ app.add_directive("weblate-admin-option", Cmdoption)
app.add_object_type(
directivename="django-admin",
rolename="djadmin",
- indextemplate="pair: %s; weblate admin command",
+ indextemplate="pair: %s; django-admin command",
parse_node=parse_django_admin_node,
)
- app.add_directive("django-admin-option", Cmdoption)
-def parse_django_admin_node(env, sig, signode):
+def parse_weblate_admin_node(env, sig, signode):
command = sig.split(" ")[0]
+ # Context for options
env.ref_context["std:program"] = command
title = f"weblate {sig}"
signode += addnodes.desc_name(title, title)
return command
+
+
+def parse_django_admin_node(env, sig, signode):
+ command = sig.split(" ")[0]
+ env.ref_context["std:program"] = command
+ title = "django-admin %s" % sig
+ signode += addnodes.desc_name(title, title)
+ return command
| {"golden_diff": "diff --git a/docs/_ext/djangodocs.py b/docs/_ext/djangodocs.py\n--- a/docs/_ext/djangodocs.py\n+++ b/docs/_ext/djangodocs.py\n@@ -1,6 +1,7 @@\n \"\"\"Sphinx plugins for Weblate documentation.\"\"\"\n import re\n \n+from docutils.nodes import literal\n from sphinx import addnodes\n from sphinx.domains.std import Cmdoption\n \n@@ -8,22 +9,45 @@\n simple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n \n \n+class WeblateCommandLiteral(literal):\n+ def __init__(self, rawsource=\"\", text=\"\", *children, **attributes):\n+ if not text:\n+ text = \"weblate \"\n+ super().__init__(rawsource, text, *children, **attributes)\n+\n+\n def setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n+ app.add_object_type(\n+ directivename=\"weblate-admin\",\n+ rolename=\"wladmin\",\n+ indextemplate=\"pair: %s; weblate admin command\",\n+ parse_node=parse_weblate_admin_node,\n+ ref_nodeclass=WeblateCommandLiteral,\n+ )\n+ app.add_directive(\"weblate-admin-option\", Cmdoption)\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n- indextemplate=\"pair: %s; weblate admin command\",\n+ indextemplate=\"pair: %s; django-admin command\",\n parse_node=parse_django_admin_node,\n )\n- app.add_directive(\"django-admin-option\", Cmdoption)\n \n \n-def parse_django_admin_node(env, sig, signode):\n+def parse_weblate_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n+ # Context for options\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n+\n+\n+def parse_django_admin_node(env, sig, signode):\n+ command = sig.split(\" \")[0]\n+ env.ref_context[\"std:program\"] = command\n+ title = \"django-admin %s\" % sig\n+ signode += addnodes.desc_name(title, title)\n+ return command\n", "issue": "Document weblate migrate command\n### Describe the problem\n\n`weblate migrate` command is mentioned in the docs, but not actually documented. \r\n\r\nIt is also used inconsistently:\r\n1. https://docs.weblate.org/en/latest/admin/languages.html#built-in-language-definitions\r\n2. https://docs.weblate.org/en/latest/admin/install.html#filling-up-the-database\n\n### Describe the solution you'd like\n\ndocument the usage and link it in mentioned occurrences.\n\n### Describe alternatives you've considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "\"\"\"Sphinx plugins for Weblate documentation.\"\"\"\nimport re\n\nfrom sphinx import addnodes\nfrom sphinx.domains.std import Cmdoption\n\n# RE for option descriptions without a '--' prefix\nsimple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n\n\ndef setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n indextemplate=\"pair: %s; weblate admin command\",\n parse_node=parse_django_admin_node,\n )\n app.add_directive(\"django-admin-option\", Cmdoption)\n\n\ndef parse_django_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n", "path": "docs/_ext/djangodocs.py"}]} | 958 | 559 |
gh_patches_debug_34798 | rasdani/github-patches | git_diff | sopel-irc__sopel-625 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
.dice breaks if you try to roll a die with no sides
Repro:
```
(8:55:43 PM) creftos: .roll 1d0
(8:55:43 PM) Willie: ValueError: empty range for randrange() (1,1, 0) (file "/usr/lib64/python2.7/random.py", line 217, in randrange)
```
Logs:
```
Traceback (most recent call last):
File "/home/bdc/workspace/willie/willie/bot.py", line 741, in call
exit_code = func(willie, trigger)
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 180, in roll
dice = list(map(_roll_dice, dice_expressions))
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 140, in _roll_dice
dice = DicePouch(dice_num, dice_type, 0)
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 34, in __init__
self.roll_dice()
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 41, in roll_dice
number = random.randint(1, self.type)
File "/usr/lib64/python2.7/random.py", line 241, in randint
return self.randrange(a, b+1)
File "/usr/lib64/python2.7/random.py", line 217, in randrange
raise ValueError, "empty range for randrange() (%d,%d, %d)" % (istart, istop, width)
ValueError: empty range for randrange() (1,1, 0)
```
Proposed Solution:
Needs a catch to make sure the user can't do this.
Additional Information:
Willie v. 4.5.1
OS: Fedora release 20 (Heisenbug)
Kernel Version: 3.16.2-201.fc20.x86_64
</issue>
<code>
[start of willie/modules/dice.py]
1 # coding=utf8
2 """
3 dice.py - Dice Module
4 Copyright 2010-2013, Dimitri "Tyrope" Molenaars, TyRope.nl
5 Copyright 2013, Ari Koivula, <[email protected]>
6 Licensed under the Eiffel Forum License 2.
7
8 http://willie.dftba.net/
9 """
10 from __future__ import unicode_literals
11 import random
12 import re
13
14 import willie.module
15 from willie.tools import eval_equation
16
17
18 class DicePouch:
19 def __init__(self, num_of_die, type_of_die, addition):
20 """Initialize dice pouch and roll the dice.
21
22 Args:
23 num_of_die: number of dice in the pouch.
24 type_of_die: how many faces the dice have.
25 addition: how much is added to the result of the dice.
26 """
27 self.num = num_of_die
28 self.type = type_of_die
29 self.addition = addition
30
31 self.dice = {}
32 self.dropped = {}
33
34 self.roll_dice()
35
36 def roll_dice(self):
37 """Roll all the dice in the pouch."""
38 self.dice = {}
39 self.dropped = {}
40 for __ in range(self.num):
41 number = random.randint(1, self.type)
42 count = self.dice.setdefault(number, 0)
43 self.dice[number] = count + 1
44
45 def drop_lowest(self, n):
46 """Drop n lowest dice from the result.
47
48 Args:
49 n: the number of dice to drop.
50 """
51 for i, count in self.dice.items():
52 count = self.dice[i]
53 if n == 0:
54 break
55 elif n < count:
56 self.dice[i] = count - n
57 self.dropped[i] = n
58 break
59 else:
60 self.dice[i] = 0
61 self.dropped[i] = count
62 n = n - count
63
64 for i, count in self.dropped.items():
65 if self.dice[i] == 0:
66 del self.dice[i]
67
68 def get_simple_string(self):
69 """Return the values of the dice like (2+2+2[+1+1])+1."""
70 dice = self.dice.items()
71 faces = ("+".join([str(face)] * times) for face, times in dice)
72 dice_str = "+".join(faces)
73
74 dropped_str = ""
75 if self.dropped:
76 dropped = self.dropped.items()
77 dfaces = ("+".join([str(face)] * times) for face, times in dropped)
78 dropped_str = "[+%s]" % ("+".join(dfaces),)
79
80 plus_str = ""
81 if self.addition:
82 plus_str = "{:+d}".format(self.addition)
83
84 return "(%s%s)%s" % (dice_str, dropped_str, plus_str)
85
86 def get_compressed_string(self):
87 """Return the values of the dice like (3x2[+2x1])+1."""
88 dice = self.dice.items()
89 faces = ("%dx%d" % (times, face) for face, times in dice)
90 dice_str = "+".join(faces)
91
92 dropped_str = ""
93 if self.dropped:
94 dropped = self.dropped.items()
95 dfaces = ("%dx%d" % (times, face) for face, times in dropped)
96 dropped_str = "[+%s]" % ("+".join(dfaces),)
97
98 plus_str = ""
99 if self.addition:
100 plus_str = "{:+d}".format(self.addition)
101
102 return "(%s%s)%s" % (dice_str, dropped_str, plus_str)
103
104 def get_sum(self):
105 """Get the sum of non-dropped dice and the addition."""
106 result = self.addition
107 for face, times in self.dice.items():
108 result += face * times
109 return result
110
111 def get_number_of_faces(self):
112 """Returns sum of different faces for dropped and not dropped dice
113
114 This can be used to estimate, whether the result can be shown in
115 compressed form in a reasonable amount of space.
116 """
117 return len(self.dice) + len(self.dropped)
118
119
120 def _roll_dice(dice_expression):
121 result = re.search(
122 r"""
123 (?P<dice_num>\d*)
124 d
125 (?P<dice_type>\d+)
126 (v(?P<drop_lowest>\d+))?
127 $""",
128 dice_expression,
129 re.IGNORECASE | re.VERBOSE)
130
131 dice_num = int(result.group('dice_num') or 1)
132 dice_type = int(result.group('dice_type'))
133
134 # Upper limit for dice should be at most a million. Creating a dict with
135 # more than a million elements already takes a noticeable amount of time
136 # on a fast computer and ~55kB of memory.
137 if dice_num > 1000:
138 return None
139
140 dice = DicePouch(dice_num, dice_type, 0)
141
142 if result.group('drop_lowest'):
143 drop = int(result.group('drop_lowest'))
144 dice.drop_lowest(drop)
145
146 return dice
147
148
149 @willie.module.commands("roll")
150 @willie.module.commands("dice")
151 @willie.module.commands("d")
152 @willie.module.priority("medium")
153 @willie.module.example(".roll 3d1+1", 'You roll 3d1+1: (1+1+1)+1 = 4')
154 @willie.module.example(".roll 3d1v2+1", 'You roll 3d1v2+1: (1[+1+1])+1 = 2')
155 @willie.module.example(".roll 2d4", 'You roll 2d4: \(\d\+\d\) = \d', re=True)
156 @willie.module.example(".roll 100d1", '[^:]*: \(100x1\) = 100', re=True)
157 @willie.module.example(".roll 1001d1", 'I only have 1000 dice. =(')
158 @willie.module.example(".roll 1d1 + 1d1", 'You roll 1d1 + 1d1: (1) + (1) = 2')
159 @willie.module.example(".roll 1d1+1d1", 'You roll 1d1+1d1: (1)+(1) = 2')
160 def roll(bot, trigger):
161 """.dice XdY[vZ][+N], rolls dice and reports the result.
162
163 X is the number of dice. Y is the number of faces in the dice. Z is the
164 number of lowest dice to be dropped from the result. N is the constant to
165 be applied to the end result.
166 """
167 # This regexp is only allowed to have one captured group, because having
168 # more would alter the output of re.findall.
169 dice_regexp = r"\d*d\d+(?:v\d+)?"
170
171 # Get a list of all dice expressions, evaluate them and then replace the
172 # expressions in the original string with the results. Replacing is done
173 # using string formatting, so %-characters must be escaped.
174 arg_str = trigger.group(2)
175 dice_expressions = re.findall(dice_regexp, arg_str)
176 arg_str = arg_str.replace("%", "%%")
177 arg_str = re.sub(dice_regexp, "%s", arg_str)
178 dice = list(map(_roll_dice, dice_expressions))
179 if None in dice:
180 bot.reply("I only have 1000 dice. =(")
181 return
182
183 def _get_eval_str(dice):
184 return "(%d)" % (dice.get_sum(),)
185
186 def _get_pretty_str(dice):
187 if dice.num <= 10:
188 return dice.get_simple_string()
189 elif dice.get_number_of_faces() <= 10:
190 return dice.get_compressed_string()
191 else:
192 return "(...)"
193
194 eval_str = arg_str % (tuple(map(_get_eval_str, dice)))
195 pretty_str = arg_str % (tuple(map(_get_pretty_str, dice)))
196
197 # Showing the actual error will hopefully give a better hint of what is
198 # wrong with the syntax than a generic error message.
199 try:
200 result = eval_equation(eval_str)
201 except Exception as e:
202 bot.reply("SyntaxError, eval(%s), %s" % (eval_str, e))
203 return
204
205 bot.reply("You roll %s: %s = %d" % (
206 trigger.group(2), pretty_str, result))
207
208
209 @willie.module.commands("choice")
210 @willie.module.commands("ch")
211 @willie.module.commands("choose")
212 @willie.module.priority("medium")
213 def choose(bot, trigger):
214 """
215 .choice option1|option2|option3 - Makes a difficult choice easy.
216 """
217 if not trigger.group(2):
218 return bot.reply('I\'d choose an option, but you didn\'t give me any.')
219 choices = re.split('[\|\\\\\/]', trigger.group(2))
220 pick = random.choice(choices)
221 return bot.reply('Your options: %s. My choice: %s' % (', '.join(choices), pick))
222
223
224 if __name__ == "__main__":
225 from willie.test_tools import run_example_tests
226 run_example_tests(__file__)
227
[end of willie/modules/dice.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/willie/modules/dice.py b/willie/modules/dice.py
--- a/willie/modules/dice.py
+++ b/willie/modules/dice.py
@@ -117,7 +117,7 @@
return len(self.dice) + len(self.dropped)
-def _roll_dice(dice_expression):
+def _roll_dice(bot, dice_expression):
result = re.search(
r"""
(?P<dice_num>\d*)
@@ -131,11 +131,17 @@
dice_num = int(result.group('dice_num') or 1)
dice_type = int(result.group('dice_type'))
+ # Dice can't have zero or a negative number of sides.
+ if dice_type <= 0:
+ bot.reply("I don't have any dice with %d sides. =(" % dice_type)
+ return None # Signal there was a problem
+
# Upper limit for dice should be at most a million. Creating a dict with
# more than a million elements already takes a noticeable amount of time
# on a fast computer and ~55kB of memory.
if dice_num > 1000:
- return None
+ bot.reply('I only have 1000 dice. =(')
+ return None # Signal there was a problem
dice = DicePouch(dice_num, dice_type, 0)
@@ -171,13 +177,18 @@
# Get a list of all dice expressions, evaluate them and then replace the
# expressions in the original string with the results. Replacing is done
# using string formatting, so %-characters must be escaped.
+ if not trigger.group(2):
+ return bot.reply("No dice to roll.")
arg_str = trigger.group(2)
dice_expressions = re.findall(dice_regexp, arg_str)
arg_str = arg_str.replace("%", "%%")
arg_str = re.sub(dice_regexp, "%s", arg_str)
- dice = list(map(_roll_dice, dice_expressions))
+
+ f = lambda dice_expr: _roll_dice (bot, dice_expr)
+ dice = list(map(f, dice_expressions))
+
if None in dice:
- bot.reply("I only have 1000 dice. =(")
+ # Stop computing roll if there was a problem rolling dice.
return
def _get_eval_str(dice):
| {"golden_diff": "diff --git a/willie/modules/dice.py b/willie/modules/dice.py\n--- a/willie/modules/dice.py\n+++ b/willie/modules/dice.py\n@@ -117,7 +117,7 @@\n return len(self.dice) + len(self.dropped)\n \n \n-def _roll_dice(dice_expression):\n+def _roll_dice(bot, dice_expression):\n result = re.search(\n r\"\"\"\n (?P<dice_num>\\d*)\n@@ -131,11 +131,17 @@\n dice_num = int(result.group('dice_num') or 1)\n dice_type = int(result.group('dice_type'))\n \n+ # Dice can't have zero or a negative number of sides.\n+ if dice_type <= 0:\n+ bot.reply(\"I don't have any dice with %d sides. =(\" % dice_type)\n+ return None # Signal there was a problem\n+\n # Upper limit for dice should be at most a million. Creating a dict with\n # more than a million elements already takes a noticeable amount of time\n # on a fast computer and ~55kB of memory.\n if dice_num > 1000:\n- return None\n+ bot.reply('I only have 1000 dice. =(')\n+ return None # Signal there was a problem\n \n dice = DicePouch(dice_num, dice_type, 0)\n \n@@ -171,13 +177,18 @@\n # Get a list of all dice expressions, evaluate them and then replace the\n # expressions in the original string with the results. Replacing is done\n # using string formatting, so %-characters must be escaped.\n+ if not trigger.group(2):\n+ return bot.reply(\"No dice to roll.\")\n arg_str = trigger.group(2)\n dice_expressions = re.findall(dice_regexp, arg_str)\n arg_str = arg_str.replace(\"%\", \"%%\")\n arg_str = re.sub(dice_regexp, \"%s\", arg_str)\n- dice = list(map(_roll_dice, dice_expressions))\n+\n+ f = lambda dice_expr: _roll_dice (bot, dice_expr)\n+ dice = list(map(f, dice_expressions))\n+\n if None in dice:\n- bot.reply(\"I only have 1000 dice. =(\")\n+ # Stop computing roll if there was a problem rolling dice.\n return\n \n def _get_eval_str(dice):\n", "issue": ".dice breaks if you try to roll a die with no sides\nRepro:\n\n```\n(8:55:43 PM) creftos: .roll 1d0\n(8:55:43 PM) Willie: ValueError: empty range for randrange() (1,1, 0) (file \"/usr/lib64/python2.7/random.py\", line 217, in randrange)\n```\n\nLogs:\n\n```\n Traceback (most recent call last):\n File \"/home/bdc/workspace/willie/willie/bot.py\", line 741, in call\n exit_code = func(willie, trigger)\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 180, in roll\n dice = list(map(_roll_dice, dice_expressions))\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 140, in _roll_dice\n dice = DicePouch(dice_num, dice_type, 0)\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 34, in __init__\n self.roll_dice()\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 41, in roll_dice\n number = random.randint(1, self.type)\n File \"/usr/lib64/python2.7/random.py\", line 241, in randint\n return self.randrange(a, b+1)\n File \"/usr/lib64/python2.7/random.py\", line 217, in randrange\n raise ValueError, \"empty range for randrange() (%d,%d, %d)\" % (istart, istop, width)\n ValueError: empty range for randrange() (1,1, 0)\n```\n\nProposed Solution:\nNeeds a catch to make sure user can't do this.\n\nAdditional Information:\nWillie v. 
4.5.1\nOS: Fedora release 20 (Heisenbug)\nKernel Version: 3.16.2-201.fc20.x86_64\n\n", "before_files": [{"content": "# coding=utf8\n\"\"\"\ndice.py - Dice Module\nCopyright 2010-2013, Dimitri \"Tyrope\" Molenaars, TyRope.nl\nCopyright 2013, Ari Koivula, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttp://willie.dftba.net/\n\"\"\"\nfrom __future__ import unicode_literals\nimport random\nimport re\n\nimport willie.module\nfrom willie.tools import eval_equation\n\n\nclass DicePouch:\n def __init__(self, num_of_die, type_of_die, addition):\n \"\"\"Initialize dice pouch and roll the dice.\n\n Args:\n num_of_die: number of dice in the pouch.\n type_of_die: how many faces the dice have.\n addition: how much is added to the result of the dice.\n \"\"\"\n self.num = num_of_die\n self.type = type_of_die\n self.addition = addition\n\n self.dice = {}\n self.dropped = {}\n\n self.roll_dice()\n\n def roll_dice(self):\n \"\"\"Roll all the dice in the pouch.\"\"\"\n self.dice = {}\n self.dropped = {}\n for __ in range(self.num):\n number = random.randint(1, self.type)\n count = self.dice.setdefault(number, 0)\n self.dice[number] = count + 1\n\n def drop_lowest(self, n):\n \"\"\"Drop n lowest dice from the result.\n\n Args:\n n: the number of dice to drop.\n \"\"\"\n for i, count in self.dice.items():\n count = self.dice[i]\n if n == 0:\n break\n elif n < count:\n self.dice[i] = count - n\n self.dropped[i] = n\n break\n else:\n self.dice[i] = 0\n self.dropped[i] = count\n n = n - count\n\n for i, count in self.dropped.items():\n if self.dice[i] == 0:\n del self.dice[i]\n\n def get_simple_string(self):\n \"\"\"Return the values of the dice like (2+2+2[+1+1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"+\".join([str(face)] * times) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"+\".join([str(face)] * times) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_compressed_string(self):\n \"\"\"Return the values of the dice like (3x2[+2x1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"%dx%d\" % (times, face) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"%dx%d\" % (times, face) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_sum(self):\n \"\"\"Get the sum of non-dropped dice and the addition.\"\"\"\n result = self.addition\n for face, times in self.dice.items():\n result += face * times\n return result\n\n def get_number_of_faces(self):\n \"\"\"Returns sum of different faces for dropped and not dropped dice\n\n This can be used to estimate, whether the result can be shown in\n compressed form in a reasonable amount of space.\n \"\"\"\n return len(self.dice) + len(self.dropped)\n\n\ndef _roll_dice(dice_expression):\n result = re.search(\n r\"\"\"\n (?P<dice_num>\\d*)\n d\n (?P<dice_type>\\d+)\n (v(?P<drop_lowest>\\d+))?\n $\"\"\",\n dice_expression,\n re.IGNORECASE | re.VERBOSE)\n\n dice_num = int(result.group('dice_num') or 1)\n dice_type = int(result.group('dice_type'))\n\n # Upper limit for dice should be at most a million. 
Creating a dict with\n # more than a million elements already takes a noticeable amount of time\n # on a fast computer and ~55kB of memory.\n if dice_num > 1000:\n return None\n\n dice = DicePouch(dice_num, dice_type, 0)\n\n if result.group('drop_lowest'):\n drop = int(result.group('drop_lowest'))\n dice.drop_lowest(drop)\n\n return dice\n\n\[email protected](\"roll\")\[email protected](\"dice\")\[email protected](\"d\")\[email protected](\"medium\")\[email protected](\".roll 3d1+1\", 'You roll 3d1+1: (1+1+1)+1 = 4')\[email protected](\".roll 3d1v2+1\", 'You roll 3d1v2+1: (1[+1+1])+1 = 2')\[email protected](\".roll 2d4\", 'You roll 2d4: \\(\\d\\+\\d\\) = \\d', re=True)\[email protected](\".roll 100d1\", '[^:]*: \\(100x1\\) = 100', re=True)\[email protected](\".roll 1001d1\", 'I only have 1000 dice. =(')\[email protected](\".roll 1d1 + 1d1\", 'You roll 1d1 + 1d1: (1) + (1) = 2')\[email protected](\".roll 1d1+1d1\", 'You roll 1d1+1d1: (1)+(1) = 2')\ndef roll(bot, trigger):\n \"\"\".dice XdY[vZ][+N], rolls dice and reports the result.\n\n X is the number of dice. Y is the number of faces in the dice. Z is the\n number of lowest dice to be dropped from the result. N is the constant to\n be applied to the end result.\n \"\"\"\n # This regexp is only allowed to have one captured group, because having\n # more would alter the output of re.findall.\n dice_regexp = r\"\\d*d\\d+(?:v\\d+)?\"\n\n # Get a list of all dice expressions, evaluate them and then replace the\n # expressions in the original string with the results. Replacing is done\n # using string formatting, so %-characters must be escaped.\n arg_str = trigger.group(2)\n dice_expressions = re.findall(dice_regexp, arg_str)\n arg_str = arg_str.replace(\"%\", \"%%\")\n arg_str = re.sub(dice_regexp, \"%s\", arg_str)\n dice = list(map(_roll_dice, dice_expressions))\n if None in dice:\n bot.reply(\"I only have 1000 dice. =(\")\n return\n\n def _get_eval_str(dice):\n return \"(%d)\" % (dice.get_sum(),)\n\n def _get_pretty_str(dice):\n if dice.num <= 10:\n return dice.get_simple_string()\n elif dice.get_number_of_faces() <= 10:\n return dice.get_compressed_string()\n else:\n return \"(...)\"\n\n eval_str = arg_str % (tuple(map(_get_eval_str, dice)))\n pretty_str = arg_str % (tuple(map(_get_pretty_str, dice)))\n\n # Showing the actual error will hopefully give a better hint of what is\n # wrong with the syntax than a generic error message.\n try:\n result = eval_equation(eval_str)\n except Exception as e:\n bot.reply(\"SyntaxError, eval(%s), %s\" % (eval_str, e))\n return\n\n bot.reply(\"You roll %s: %s = %d\" % (\n trigger.group(2), pretty_str, result))\n\n\[email protected](\"choice\")\[email protected](\"ch\")\[email protected](\"choose\")\[email protected](\"medium\")\ndef choose(bot, trigger):\n \"\"\"\n .choice option1|option2|option3 - Makes a difficult choice easy.\n \"\"\"\n if not trigger.group(2):\n return bot.reply('I\\'d choose an option, but you didn\\'t give me any.')\n choices = re.split('[\\|\\\\\\\\\\/]', trigger.group(2))\n pick = random.choice(choices)\n return bot.reply('Your options: %s. My choice: %s' % (', '.join(choices), pick))\n\n\nif __name__ == \"__main__\":\n from willie.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "willie/modules/dice.py"}]} | 3,635 | 545 |
gh_patches_debug_404 | rasdani/github-patches | git_diff | pytorch__rl-1536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] TruncatedNormal crashing when computing entropy
## Describe the bug
Calling `.entropy()` on a `TruncatedNormal` distribution causes the code to crash.
## To Reproduce
The first crash happened when using a PPO agent with the entropy bonus turned on and an actor parametrized with a `TruncatedNormal`.
A simple snippet to reproduce is the following:
```python
import torch
from torchrl.modules.distributions import IndependentNormal, TruncatedNormal
if __name__ == '__main__':
loc, scale = torch.zeros(1), torch.ones(1)
d1 = IndependentNormal(loc, scale)
print(d1.entropy())
d2 = TruncatedNormal(loc, scale)
print(d2.entropy())
```
```bash
tensor(1.4189)
Traceback (most recent call last):
File "/home/diego/Desktop/test.py", line 10, in <module>
print(d2.entropy())
File "/home/diego/miniconda3/envs/pytorch/lib/python3.10/site-packages/torch/distributions/independent.py", line 103, in entropy
entropy = self.base_dist.entropy()
TypeError: 'Tensor' object is not callable
```
## Expected behavior
The entropy value should be returned.
## System info
* Python 3.10.12
* torch 2.0.1
```python
import torchrl, numpy, sys
print(torchrl.__version__, numpy.__version__, sys.version, sys.platform)
```
```
0.1.1 1.25.1 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] linux
```
## Reason and Possible fixes
In the `TruncatedStandardNormal` class, the `self._entropy` attribute is a constant tensor computed at initialization. For some reason, calling `TruncatedStandardNormal.entropy` returns the `self._entropy` attribute, rather than the `entropy()` property:
```python
import torch
from torchrl.modules.distributions.truncated_normal import TruncatedStandardNormal
loc, scale = torch.zeros(1), torch.ones(1)
print(TruncatedStandardNormal(loc, scale).entropy)
print(TruncatedStandardNormal(loc, scale).entropy())
```
```bash
tensor([-0.0104])
Traceback (most recent call last):
File "/home/diego/Desktop/test.py", line 5, in <module>
print(TruncatedStandardNormal(loc, scale).entropy())
TypeError: 'Tensor' object is not callable
```
## Checklist
- [x] I have checked that there is no similar issue in the repo (**required**)
- [x] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)
- [x] I have provided a minimal working example to reproduce the bug (**required**)
</issue>
<code>
[start of torchrl/modules/distributions/truncated_normal.py]
1 # Copyright (c) Meta Platforms, Inc. and affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # from https://github.com/toshas/torch_truncnorm
8
9 import math
10 from numbers import Number
11
12 import torch
13 from torch.distributions import constraints, Distribution
14 from torch.distributions.utils import broadcast_all
15
16 CONST_SQRT_2 = math.sqrt(2)
17 CONST_INV_SQRT_2PI = 1 / math.sqrt(2 * math.pi)
18 CONST_INV_SQRT_2 = 1 / math.sqrt(2)
19 CONST_LOG_INV_SQRT_2PI = math.log(CONST_INV_SQRT_2PI)
20 CONST_LOG_SQRT_2PI_E = 0.5 * math.log(2 * math.pi * math.e)
21
22
23 class TruncatedStandardNormal(Distribution):
24 """Truncated Standard Normal distribution.
25
26 Source: https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
27 """
28
29 arg_constraints = {
30 "a": constraints.real,
31 "b": constraints.real,
32 }
33 has_rsample = True
34 eps = 1e-6
35
36 def __init__(self, a, b, validate_args=None):
37 self.a, self.b = broadcast_all(a, b)
38 if isinstance(a, Number) and isinstance(b, Number):
39 batch_shape = torch.Size()
40 else:
41 batch_shape = self.a.size()
42 super(TruncatedStandardNormal, self).__init__(
43 batch_shape, validate_args=validate_args
44 )
45 if self.a.dtype != self.b.dtype:
46 raise ValueError("Truncation bounds types are different")
47 if any(
48 (self.a >= self.b)
49 .view(
50 -1,
51 )
52 .tolist()
53 ):
54 raise ValueError("Incorrect truncation range")
55 eps = self.eps
56 self._dtype_min_gt_0 = eps
57 self._dtype_max_lt_1 = 1 - eps
58 self._little_phi_a = self._little_phi(self.a)
59 self._little_phi_b = self._little_phi(self.b)
60 self._big_phi_a = self._big_phi(self.a)
61 self._big_phi_b = self._big_phi(self.b)
62 self._Z = (self._big_phi_b - self._big_phi_a).clamp(eps, 1 - eps)
63 self._log_Z = self._Z.log()
64 little_phi_coeff_a = torch.nan_to_num(self.a, nan=math.nan)
65 little_phi_coeff_b = torch.nan_to_num(self.b, nan=math.nan)
66 self._lpbb_m_lpaa_d_Z = (
67 self._little_phi_b * little_phi_coeff_b
68 - self._little_phi_a * little_phi_coeff_a
69 ) / self._Z
70 self._mean = -(self._little_phi_b - self._little_phi_a) / self._Z
71 self._variance = (
72 1
73 - self._lpbb_m_lpaa_d_Z
74 - ((self._little_phi_b - self._little_phi_a) / self._Z) ** 2
75 )
76 self._entropy = CONST_LOG_SQRT_2PI_E + self._log_Z - 0.5 * self._lpbb_m_lpaa_d_Z
77
78 @constraints.dependent_property
79 def support(self):
80 return constraints.interval(self.a, self.b)
81
82 @property
83 def mean(self):
84 return self._mean
85
86 @property
87 def variance(self):
88 return self._variance
89
90 @property
91 def entropy(self):
92 return self._entropy
93
94 @property
95 def auc(self):
96 return self._Z
97
98 @staticmethod
99 def _little_phi(x):
100 return (-(x**2) * 0.5).exp() * CONST_INV_SQRT_2PI
101
102 def _big_phi(self, x):
103 phi = 0.5 * (1 + (x * CONST_INV_SQRT_2).erf())
104 return phi.clamp(self.eps, 1 - self.eps)
105
106 @staticmethod
107 def _inv_big_phi(x):
108 return CONST_SQRT_2 * (2 * x - 1).erfinv()
109
110 def cdf(self, value):
111 if self._validate_args:
112 self._validate_sample(value)
113 return ((self._big_phi(value) - self._big_phi_a) / self._Z).clamp(0, 1)
114
115 def icdf(self, value):
116 y = self._big_phi_a + value * self._Z
117 y = y.clamp(self.eps, 1 - self.eps)
118 return self._inv_big_phi(y)
119
120 def log_prob(self, value):
121 if self._validate_args:
122 self._validate_sample(value)
123 return CONST_LOG_INV_SQRT_2PI - self._log_Z - (value**2) * 0.5
124
125 def rsample(self, sample_shape=None):
126 if sample_shape is None:
127 sample_shape = torch.Size([])
128 shape = self._extended_shape(sample_shape)
129 p = torch.empty(shape, device=self.a.device).uniform_(
130 self._dtype_min_gt_0, self._dtype_max_lt_1
131 )
132 return self.icdf(p)
133
134
135 class TruncatedNormal(TruncatedStandardNormal):
136 """Truncated Normal distribution.
137
138 https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
139 """
140
141 has_rsample = True
142
143 def __init__(self, loc, scale, a, b, validate_args=None):
144 scale = scale.clamp_min(self.eps)
145 self.loc, self.scale, a, b = broadcast_all(loc, scale, a, b)
146 self._non_std_a = a
147 self._non_std_b = b
148 a = (a - self.loc) / self.scale
149 b = (b - self.loc) / self.scale
150 super(TruncatedNormal, self).__init__(a, b, validate_args=validate_args)
151 self._log_scale = self.scale.log()
152 self._mean = self._mean * self.scale + self.loc
153 self._variance = self._variance * self.scale**2
154 self._entropy += self._log_scale
155
156 def _to_std_rv(self, value):
157 return (value - self.loc) / self.scale
158
159 def _from_std_rv(self, value):
160 return value * self.scale + self.loc
161
162 def cdf(self, value):
163 return super(TruncatedNormal, self).cdf(self._to_std_rv(value))
164
165 def icdf(self, value):
166 sample = self._from_std_rv(super().icdf(value))
167
168 # clamp data but keep gradients
169 sample_clip = torch.stack(
170 [sample.detach(), self._non_std_a.detach().expand_as(sample)], 0
171 ).max(0)[0]
172 sample_clip = torch.stack(
173 [sample_clip, self._non_std_b.detach().expand_as(sample)], 0
174 ).min(0)[0]
175 sample.data.copy_(sample_clip)
176 return sample
177
178 def log_prob(self, value):
179 value = self._to_std_rv(value)
180 return super(TruncatedNormal, self).log_prob(value) - self._log_scale
181
[end of torchrl/modules/distributions/truncated_normal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchrl/modules/distributions/truncated_normal.py b/torchrl/modules/distributions/truncated_normal.py
--- a/torchrl/modules/distributions/truncated_normal.py
+++ b/torchrl/modules/distributions/truncated_normal.py
@@ -87,7 +87,6 @@
def variance(self):
return self._variance
- @property
def entropy(self):
return self._entropy
| {"golden_diff": "diff --git a/torchrl/modules/distributions/truncated_normal.py b/torchrl/modules/distributions/truncated_normal.py\n--- a/torchrl/modules/distributions/truncated_normal.py\n+++ b/torchrl/modules/distributions/truncated_normal.py\n@@ -87,7 +87,6 @@\n def variance(self):\n return self._variance\n \n- @property\n def entropy(self):\n return self._entropy\n", "issue": "[BUG] TruncatedNormal crashing when computing entropy\n## Describe the bug\r\n\r\nCalling `.entropy()` on a `TruncatedNormal` distribution causes the code to crash.\r\n\r\n## To Reproduce\r\n\r\nFirst crash happened using a PPO agent with entropy bonus turned on and actor parametrized with a `TruncatedNormal`.\r\nA simple snippet to reproduce is the following:\r\n\r\n```python\r\nimport torch\r\nfrom torchrl.modules.distributions import IndependentNormal, TruncatedNormal\r\n\r\nif __name__ == '__main__':\r\n\tloc, scale = torch.zeros(1), torch.ones(1)\r\n\td1 = IndependentNormal(loc, scale)\r\n\tprint(d1.entropy())\r\n\t\r\n\td2 = TruncatedNormal(loc, scale)\r\n\tprint(d2.entropy())\r\n```\r\n\r\n```bash\r\ntensor(1.4189)\r\nTraceback (most recent call last):\r\n File \"/home/diego/Desktop/test.py\", line 10, in <module>\r\n print(d2.entropy())\r\n File \"/home/diego/miniconda3/envs/pytorch/lib/python3.10/site-packages/torch/distributions/independent.py\", line 103, in entropy\r\n entropy = self.base_dist.entropy()\r\nTypeError: 'Tensor' object is not callable\r\n\r\n```\r\n\r\n## Expected behavior\r\n\r\nThe entropy value should be returned.\r\n\r\n## System info\r\n* Python 3.10.12\r\n* torch 2.0.1\r\n\r\n```python\r\nimport torchrl, numpy, sys\r\nprint(torchrl.__version__, numpy.__version__, sys.version, sys.platform)\r\n```\r\n```\r\n0.1.1 1.25.1 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] linux\r\n```\r\n## Reason and Possible fixes\r\n\r\nIn the `TruncatedStandardNormal` class, the `self._entropy` attribute is a constant tensor computed at initialization. For some reason, calling `TruncatedStandardNormal.entropy` returns the `self._entropy` attribute, rather than the `entropy()` property:\r\n\r\n```python\r\nimport torch\r\nfrom torchrl.modules.distributions.truncated_normal import TruncatedStandardNormal\r\nloc, scale = torch.zeros(1), torch.ones(1)\r\nprint(TruncatedStandardNormal(loc, scale).entropy)\r\nprint(TruncatedStandardNormal(loc, scale).entropy())\r\n```\r\n\r\n```bash\r\ntensor([-0.0104])\r\nTraceback (most recent call last):\r\n File \"/home/diego/Desktop/test.py\", line 5, in <module>\r\n print(TruncatedStandardNormal(loc, scale).entropy())\r\nTypeError: 'Tensor' object is not callable\r\n\r\n```\r\n\r\n## Checklist\r\n\r\n- [x] I have checked that there is no similar issue in the repo (**required**)\r\n- [x] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)\r\n- [x] I have provided a minimal working example to reproduce the bug (**required**)\r\n\n", "before_files": [{"content": "# Copyright (c) Meta Platforms, Inc. 
and affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# from https://github.com/toshas/torch_truncnorm\n\nimport math\nfrom numbers import Number\n\nimport torch\nfrom torch.distributions import constraints, Distribution\nfrom torch.distributions.utils import broadcast_all\n\nCONST_SQRT_2 = math.sqrt(2)\nCONST_INV_SQRT_2PI = 1 / math.sqrt(2 * math.pi)\nCONST_INV_SQRT_2 = 1 / math.sqrt(2)\nCONST_LOG_INV_SQRT_2PI = math.log(CONST_INV_SQRT_2PI)\nCONST_LOG_SQRT_2PI_E = 0.5 * math.log(2 * math.pi * math.e)\n\n\nclass TruncatedStandardNormal(Distribution):\n \"\"\"Truncated Standard Normal distribution.\n\n Source: https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf\n \"\"\"\n\n arg_constraints = {\n \"a\": constraints.real,\n \"b\": constraints.real,\n }\n has_rsample = True\n eps = 1e-6\n\n def __init__(self, a, b, validate_args=None):\n self.a, self.b = broadcast_all(a, b)\n if isinstance(a, Number) and isinstance(b, Number):\n batch_shape = torch.Size()\n else:\n batch_shape = self.a.size()\n super(TruncatedStandardNormal, self).__init__(\n batch_shape, validate_args=validate_args\n )\n if self.a.dtype != self.b.dtype:\n raise ValueError(\"Truncation bounds types are different\")\n if any(\n (self.a >= self.b)\n .view(\n -1,\n )\n .tolist()\n ):\n raise ValueError(\"Incorrect truncation range\")\n eps = self.eps\n self._dtype_min_gt_0 = eps\n self._dtype_max_lt_1 = 1 - eps\n self._little_phi_a = self._little_phi(self.a)\n self._little_phi_b = self._little_phi(self.b)\n self._big_phi_a = self._big_phi(self.a)\n self._big_phi_b = self._big_phi(self.b)\n self._Z = (self._big_phi_b - self._big_phi_a).clamp(eps, 1 - eps)\n self._log_Z = self._Z.log()\n little_phi_coeff_a = torch.nan_to_num(self.a, nan=math.nan)\n little_phi_coeff_b = torch.nan_to_num(self.b, nan=math.nan)\n self._lpbb_m_lpaa_d_Z = (\n self._little_phi_b * little_phi_coeff_b\n - self._little_phi_a * little_phi_coeff_a\n ) / self._Z\n self._mean = -(self._little_phi_b - self._little_phi_a) / self._Z\n self._variance = (\n 1\n - self._lpbb_m_lpaa_d_Z\n - ((self._little_phi_b - self._little_phi_a) / self._Z) ** 2\n )\n self._entropy = CONST_LOG_SQRT_2PI_E + self._log_Z - 0.5 * self._lpbb_m_lpaa_d_Z\n\n @constraints.dependent_property\n def support(self):\n return constraints.interval(self.a, self.b)\n\n @property\n def mean(self):\n return self._mean\n\n @property\n def variance(self):\n return self._variance\n\n @property\n def entropy(self):\n return self._entropy\n\n @property\n def auc(self):\n return self._Z\n\n @staticmethod\n def _little_phi(x):\n return (-(x**2) * 0.5).exp() * CONST_INV_SQRT_2PI\n\n def _big_phi(self, x):\n phi = 0.5 * (1 + (x * CONST_INV_SQRT_2).erf())\n return phi.clamp(self.eps, 1 - self.eps)\n\n @staticmethod\n def _inv_big_phi(x):\n return CONST_SQRT_2 * (2 * x - 1).erfinv()\n\n def cdf(self, value):\n if self._validate_args:\n self._validate_sample(value)\n return ((self._big_phi(value) - self._big_phi_a) / self._Z).clamp(0, 1)\n\n def icdf(self, value):\n y = self._big_phi_a + value * self._Z\n y = y.clamp(self.eps, 1 - self.eps)\n return self._inv_big_phi(y)\n\n def log_prob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n return CONST_LOG_INV_SQRT_2PI - self._log_Z - (value**2) * 0.5\n\n def rsample(self, sample_shape=None):\n if sample_shape is None:\n sample_shape = torch.Size([])\n shape = self._extended_shape(sample_shape)\n p = torch.empty(shape, 
device=self.a.device).uniform_(\n self._dtype_min_gt_0, self._dtype_max_lt_1\n )\n return self.icdf(p)\n\n\nclass TruncatedNormal(TruncatedStandardNormal):\n \"\"\"Truncated Normal distribution.\n\n https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf\n \"\"\"\n\n has_rsample = True\n\n def __init__(self, loc, scale, a, b, validate_args=None):\n scale = scale.clamp_min(self.eps)\n self.loc, self.scale, a, b = broadcast_all(loc, scale, a, b)\n self._non_std_a = a\n self._non_std_b = b\n a = (a - self.loc) / self.scale\n b = (b - self.loc) / self.scale\n super(TruncatedNormal, self).__init__(a, b, validate_args=validate_args)\n self._log_scale = self.scale.log()\n self._mean = self._mean * self.scale + self.loc\n self._variance = self._variance * self.scale**2\n self._entropy += self._log_scale\n\n def _to_std_rv(self, value):\n return (value - self.loc) / self.scale\n\n def _from_std_rv(self, value):\n return value * self.scale + self.loc\n\n def cdf(self, value):\n return super(TruncatedNormal, self).cdf(self._to_std_rv(value))\n\n def icdf(self, value):\n sample = self._from_std_rv(super().icdf(value))\n\n # clamp data but keep gradients\n sample_clip = torch.stack(\n [sample.detach(), self._non_std_a.detach().expand_as(sample)], 0\n ).max(0)[0]\n sample_clip = torch.stack(\n [sample_clip, self._non_std_b.detach().expand_as(sample)], 0\n ).min(0)[0]\n sample.data.copy_(sample_clip)\n return sample\n\n def log_prob(self, value):\n value = self._to_std_rv(value)\n return super(TruncatedNormal, self).log_prob(value) - self._log_scale\n", "path": "torchrl/modules/distributions/truncated_normal.py"}]} | 3,215 | 92 |
gh_patches_debug_32632 | rasdani/github-patches | git_diff | docker__docker-py-727 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
split_port() does not handle ":80" or "127.0.0.1:" properly
Initially reported as https://github.com/docker/compose/issues/1887
Example:
``` python
def test_port_only_with_colon(self):
self.assertRaises(ValueError,
lambda: split_port(":80"))
def test_host_only_with_colon(self):
self.assertRaises(ValueError,
lambda: split_port("localhost:"))
```
Results:
```
======================================================================
ERROR: test_host_only_with_colon (__main__.UtilsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/utils_test.py", line 428, in test_host_only_with_colon
lambda: split_port("localhost:"))
File "/usr/lib/python2.7/unittest/case.py", line 473, in assertRaises
callableObj(*args, **kwargs)
File "tests/utils_test.py", line 428, in <lambda>
lambda: split_port("localhost:"))
File "/home/mark/Projects/docker-py/docker/utils/ports/ports.py", line 77, in split_port
if len(internal_range) != len(external_range):
TypeError: object of type 'NoneType' has no len()
======================================================================
ERROR: test_port_only_with_colon (__main__.UtilsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/utils_test.py", line 424, in test_port_only_with_colon
lambda: split_port(":80"))
File "/usr/lib/python2.7/unittest/case.py", line 473, in assertRaises
callableObj(*args, **kwargs)
File "tests/utils_test.py", line 424, in <lambda>
lambda: split_port(":80"))
File "/home/mark/Projects/docker-py/docker/utils/ports/ports.py", line 77, in split_port
if len(internal_range) != len(external_range):
TypeError: object of type 'NoneType' has no len()
```
</issue>
<code>
[start of docker/utils/ports/ports.py]
1
2
3 def add_port_mapping(port_bindings, internal_port, external):
4 if internal_port in port_bindings:
5 port_bindings[internal_port].append(external)
6 else:
7 port_bindings[internal_port] = [external]
8
9
10 def add_port(port_bindings, internal_port_range, external_range):
11 if external_range is None:
12 for internal_port in internal_port_range:
13 add_port_mapping(port_bindings, internal_port, None)
14 else:
15 ports = zip(internal_port_range, external_range)
16 for internal_port, external_port in ports:
17 add_port_mapping(port_bindings, internal_port, external_port)
18
19
20 def build_port_bindings(ports):
21 port_bindings = {}
22 for port in ports:
23 internal_port_range, external_range = split_port(port)
24 add_port(port_bindings, internal_port_range, external_range)
25 return port_bindings
26
27
28 def to_port_range(port):
29 if not port:
30 return None
31
32 protocol = ""
33 if "/" in port:
34 parts = port.split("/")
35 if len(parts) != 2:
36 raise ValueError('Invalid port "%s", should be '
37 '[[remote_ip:]remote_port[-remote_port]:]'
38 'port[/protocol]' % port)
39 port, protocol = parts
40 protocol = "/" + protocol
41
42 parts = str(port).split('-')
43
44 if len(parts) == 1:
45 return ["%s%s" % (port, protocol)]
46
47 if len(parts) == 2:
48 full_port_range = range(int(parts[0]), int(parts[1]) + 1)
49 return ["%s%s" % (p, protocol) for p in full_port_range]
50
51 raise ValueError('Invalid port range "%s", should be '
52 'port or startport-endport' % port)
53
54
55 def split_port(port):
56 parts = str(port).split(':')
57 if not 1 <= len(parts) <= 3:
58 raise ValueError('Invalid port "%s", should be '
59 '[[remote_ip:]remote_port:]port[/protocol]' % port)
60
61 if len(parts) == 1:
62 internal_port, = parts
63 return to_port_range(internal_port), None
64 if len(parts) == 2:
65 external_port, internal_port = parts
66
67 internal_range = to_port_range(internal_port)
68 external_range = to_port_range(external_port)
69 if len(internal_range) != len(external_range):
70 raise ValueError('Port ranges don\'t match in length')
71
72 return internal_range, external_range
73
74 external_ip, external_port, internal_port = parts
75 internal_range = to_port_range(internal_port)
76 external_range = to_port_range(external_port)
77 if not external_range:
78 external_range = [None] * len(internal_range)
79
80 if len(internal_range) != len(external_range):
81 raise ValueError('Port ranges don\'t match in length')
82
83 return internal_range, [(external_ip, ex_port or None)
84 for ex_port in external_range]
85
[end of docker/utils/ports/ports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/utils/ports/ports.py b/docker/utils/ports/ports.py
--- a/docker/utils/ports/ports.py
+++ b/docker/utils/ports/ports.py
@@ -1,5 +1,4 @@
-
def add_port_mapping(port_bindings, internal_port, external):
if internal_port in port_bindings:
port_bindings[internal_port].append(external)
@@ -33,9 +32,8 @@
if "/" in port:
parts = port.split("/")
if len(parts) != 2:
- raise ValueError('Invalid port "%s", should be '
- '[[remote_ip:]remote_port[-remote_port]:]'
- 'port[/protocol]' % port)
+ _raise_invalid_port(port)
+
port, protocol = parts
protocol = "/" + protocol
@@ -52,11 +50,17 @@
'port or startport-endport' % port)
+def _raise_invalid_port(port):
+ raise ValueError('Invalid port "%s", should be '
+ '[[remote_ip:]remote_port[-remote_port]:]'
+ 'port[/protocol]' % port)
+
+
def split_port(port):
parts = str(port).split(':')
+
if not 1 <= len(parts) <= 3:
- raise ValueError('Invalid port "%s", should be '
- '[[remote_ip:]remote_port:]port[/protocol]' % port)
+ _raise_invalid_port(port)
if len(parts) == 1:
internal_port, = parts
@@ -66,6 +70,10 @@
internal_range = to_port_range(internal_port)
external_range = to_port_range(external_port)
+
+ if internal_range is None or external_range is None:
+ _raise_invalid_port(port)
+
if len(internal_range) != len(external_range):
raise ValueError('Port ranges don\'t match in length')
| {"golden_diff": "diff --git a/docker/utils/ports/ports.py b/docker/utils/ports/ports.py\n--- a/docker/utils/ports/ports.py\n+++ b/docker/utils/ports/ports.py\n@@ -1,5 +1,4 @@\n \n-\n def add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n@@ -33,9 +32,8 @@\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n- raise ValueError('Invalid port \"%s\", should be '\n- '[[remote_ip:]remote_port[-remote_port]:]'\n- 'port[/protocol]' % port)\n+ _raise_invalid_port(port)\n+\n port, protocol = parts\n protocol = \"/\" + protocol\n \n@@ -52,11 +50,17 @@\n 'port or startport-endport' % port)\n \n \n+def _raise_invalid_port(port):\n+ raise ValueError('Invalid port \"%s\", should be '\n+ '[[remote_ip:]remote_port[-remote_port]:]'\n+ 'port[/protocol]' % port)\n+\n+\n def split_port(port):\n parts = str(port).split(':')\n+\n if not 1 <= len(parts) <= 3:\n- raise ValueError('Invalid port \"%s\", should be '\n- '[[remote_ip:]remote_port:]port[/protocol]' % port)\n+ _raise_invalid_port(port)\n \n if len(parts) == 1:\n internal_port, = parts\n@@ -66,6 +70,10 @@\n \n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n+\n+ if internal_range is None or external_range is None:\n+ _raise_invalid_port(port)\n+\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n", "issue": "split_port() does not properly handle \":80\" or \"127.0.0.1:\" properly\nInitially reported as https://github.com/docker/compose/issues/1887 \n\nExample:\n\n``` python\n def test_port_only_with_colon(self):\n self.assertRaises(ValueError,\n lambda: split_port(\":80\"))\n\n def test_host_only_with_colon(self):\n self.assertRaises(ValueError,\n lambda: split_port(\"localhost:\"))\n```\n\nResults:\n\n```\n======================================================================\nERROR: test_host_only_with_colon (__main__.UtilsTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"tests/utils_test.py\", line 428, in test_host_only_with_colon\n lambda: split_port(\"localhost:\"))\n File \"/usr/lib/python2.7/unittest/case.py\", line 473, in assertRaises\n callableObj(*args, **kwargs)\n File \"tests/utils_test.py\", line 428, in <lambda>\n lambda: split_port(\"localhost:\"))\n File \"/home/mark/Projects/docker-py/docker/utils/ports/ports.py\", line 77, in split_port\n if len(internal_range) != len(external_range):\nTypeError: object of type 'NoneType' has no len()\n\n======================================================================\nERROR: test_port_only_with_colon (__main__.UtilsTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"tests/utils_test.py\", line 424, in test_port_only_with_colon\n lambda: split_port(\":80\"))\n File \"/usr/lib/python2.7/unittest/case.py\", line 473, in assertRaises\n callableObj(*args, **kwargs)\n File \"tests/utils_test.py\", line 424, in <lambda>\n lambda: split_port(\":80\"))\n File \"/home/mark/Projects/docker-py/docker/utils/ports/ports.py\", line 77, in split_port\n if len(internal_range) != len(external_range):\nTypeError: object of type 'NoneType' has no len()\n```\n\n", "before_files": [{"content": "\n\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n 
port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef to_port_range(port):\n if not port:\n return None\n\n protocol = \"\"\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n port, protocol = parts\n protocol = \"/\" + protocol\n\n parts = str(port).split('-')\n\n if len(parts) == 1:\n return [\"%s%s\" % (port, protocol)]\n\n if len(parts) == 2:\n full_port_range = range(int(parts[0]), int(parts[1]) + 1)\n return [\"%s%s\" % (p, protocol) for p in full_port_range]\n\n raise ValueError('Invalid port range \"%s\", should be '\n 'port or startport-endport' % port)\n\n\ndef split_port(port):\n parts = str(port).split(':')\n if not 1 <= len(parts) <= 3:\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port:]port[/protocol]' % port)\n\n if len(parts) == 1:\n internal_port, = parts\n return to_port_range(internal_port), None\n if len(parts) == 2:\n external_port, internal_port = parts\n\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, external_range\n\n external_ip, external_port, internal_port = parts\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if not external_range:\n external_range = [None] * len(internal_range)\n\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, [(external_ip, ex_port or None)\n for ex_port in external_range]\n", "path": "docker/utils/ports/ports.py"}]} | 1,777 | 416 |
gh_patches_debug_10847 | rasdani/github-patches | git_diff | translate__translate-3709 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
YAML serialization raises an exception when a node disappears
https://github.com/translate/translate/blob/e5d4d38fbcc7fb310683e7b12f9ae7deab9d7788/translate/storage/yaml.py#L142
```Exception Value: 'str' object does not support item assignment```
In my case (through weblate), the existing file has:
```
it:
base:
path: Italian path
```
And a new node is used in the source translation (`base->path->sublevel`):
```
en:
base:
path:
sublevel: Now I want these nested actually
```
The code in `serialize` will raise an exception on line 154 (and make the file empty).
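(Editorial note, not part of the original report: a minimal sketch of the failure, using the `nested_set` helper from `serialize` in the listing below with the list-index branch omitted; the key names come from the example above.)

```python
from collections import OrderedDict

def nested_set(target, path, value):
    # same shape as the helper inside YAMLFile.serialize (list handling omitted)
    if len(path) > 1:
        if path[0] not in target:
            target[path[0]] = OrderedDict()
        nested_set(target[path[0]], path[1:], value)
    else:
        target[path[0]] = value

units = OrderedDict()
nested_set(units, ["base", "path"], "Italian path")             # unit from the existing file
nested_set(units, ["base", "path", "sublevel"], "nested text")  # new nested unit from the source
# TypeError: 'str' object does not support item assignment,
# because target["path"] already holds the string "Italian path"
```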
</issue>
<code>
[start of translate/storage/yaml.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2016 Michal Čihař
4 #
5 # This file is part of the Translate Toolkit.
6 #
7 # This program is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 r"""Class that manages YAML data files for translation
21 """
22
23 from __future__ import absolute_import
24 from __future__ import unicode_literals
25
26 import uuid
27 from collections import OrderedDict
28
29 import six
30 import yaml
31 import yaml.constructor
32
33 from translate.storage import base
34
35
36 class OrderedDictYAMLLoader(yaml.SafeLoader):
37 """
38 A YAML loader that loads mappings into ordered dictionaries.
39 """
40
41 def __init__(self, *args, **kwargs):
42 yaml.SafeLoader.__init__(self, *args, **kwargs)
43
44 self.add_constructor(u'tag:yaml.org,2002:map', type(self).construct_yaml_map)
45 self.add_constructor(u'tag:yaml.org,2002:omap', type(self).construct_yaml_map)
46
47 def construct_yaml_map(self, node):
48 data = OrderedDict()
49 yield data
50 value = self.construct_mapping(node)
51 data.update(value)
52
53 def construct_mapping(self, node, deep=False):
54 if isinstance(node, yaml.MappingNode):
55 self.flatten_mapping(node)
56 else:
57 raise yaml.constructor.ConstructorError(
58 None, None,
59 'expected a mapping node, but found %s' % node.id, node.start_mark
60 )
61
62 mapping = OrderedDict()
63 for key_node, value_node in node.value:
64 key = self.construct_object(key_node, deep=deep)
65 try:
66 hash(key)
67 except TypeError as exc:
68 raise yaml.constructor.ConstructorError(
69 'while constructing a mapping',
70 node.start_mark,
71 'found unacceptable key (%s)' % exc, key_node.start_mark
72 )
73 value = self.construct_object(value_node, deep=deep)
74 mapping[key] = value
75 return mapping
76
77
78 class UnsortableList(list):
79 def sort(self, *args, **kwargs):
80 pass
81
82
83 class UnsortableOrderedDict(OrderedDict):
84 def items(self, *args, **kwargs):
85 return UnsortableList(OrderedDict.items(self, *args, **kwargs))
86
87
88 class YAMLDumper(yaml.SafeDumper):
89 def represent_unsorted(self, data):
90 return self.represent_dict(data.items())
91
92
93 YAMLDumper.add_representer(UnsortableOrderedDict, YAMLDumper.represent_unsorted)
94
95
96 class YAMLUnit(base.TranslationUnit):
97 """A YAML entry"""
98
99 def __init__(self, source=None, **kwargs):
100 self._id = None
101 if source:
102 self.source = source
103 super(YAMLUnit, self).__init__(source)
104
105 def getsource(self):
106 return self.target
107
108 def setsource(self, source):
109 self.target = source
110 source = property(getsource, setsource)
111
112 def setid(self, value):
113 self._id = value
114
115 def getid(self):
116 # Ensure we have ID (for serialization)
117 if self._id is None:
118 self._id = str(uuid.uuid4())
119 return self._id
120
121 def getlocations(self):
122 return [self.getid()]
123
124
125 class YAMLFile(base.TranslationStore):
126 """A YAML file"""
127
128 UnitClass = YAMLUnit
129
130 def __init__(self, inputfile=None, **kwargs):
131 """construct a YAML file, optionally reading in from inputfile."""
132 super(YAMLFile, self).__init__(**kwargs)
133 self.filename = ''
134 self._file = u''
135 if inputfile is not None:
136 self.parse(inputfile)
137
138 def get_root_node(self, node):
139 """Returns root node for serialize"""
140 return node
141
142 def serialize(self, out):
143 def nested_set(target, path, value):
144 if len(path) > 1:
145 if len(path) == 2 and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():
146 if path[0] not in target:
147 target[path[0]] = []
148 target[path[0]].append(value)
149 else:
150 if path[0] not in target:
151 target[path[0]] = UnsortableOrderedDict()
152 nested_set(target[path[0]], path[1:], value)
153 else:
154 target[path[0]] = value
155
156 units = UnsortableOrderedDict()
157 for unit in self.unit_iter():
158 nested_set(units, unit.getid().split('->'), unit.target)
159 out.write(yaml.dump_all(
160 [self.get_root_node(units)],
161 Dumper=YAMLDumper,
162 default_flow_style=False, encoding='utf-8', allow_unicode=True
163 ))
164
165 def _flatten(self, data, prev=""):
166 """Flatten YAML dictionary.
167 """
168 if isinstance(data, dict):
169 for k, v in six.iteritems(data):
170 if not isinstance(k, six.string_types):
171 raise base.ParseError(
172 'Key not string: {0}/{1} ({2})'.format(prev, k, type(k))
173 )
174
175 for x in self._flatten(v, '->'.join((prev, k)) if prev else k):
176 yield x
177 else:
178 if isinstance(data, six.string_types):
179 yield (prev, data)
180 elif isinstance(data, bool):
181 yield (prev, str(data))
182 elif isinstance(data, list):
183 for k, v in enumerate(data):
184 key = '[{0}]'.format(k)
185 yield ('->'.join((prev, key)), six.text_type(v))
186 elif data is None:
187 pass
188 else:
189 raise ValueError("We don't handle these values:\n"
190 "Type: %s\n"
191 "Data: %s\n"
192 "Previous: %s" % (type(data), data, prev))
193
194 def preprocess(self, data):
195 """Preprocess hook for child formats"""
196 return data
197
198 def parse(self, input):
199 """parse the given file or file source string"""
200 if hasattr(input, 'name'):
201 self.filename = input.name
202 elif not getattr(self, 'filename', ''):
203 self.filename = ''
204 if hasattr(input, "read"):
205 src = input.read()
206 input.close()
207 input = src
208 if isinstance(input, bytes):
209 input = input.decode('utf-8')
210 try:
211 self._file = yaml.load(input, OrderedDictYAMLLoader)
212 except yaml.YAMLError as e:
213 message = e.problem if hasattr(e, 'problem') else e.message
214 if hasattr(e, 'problem_mark'):
215 message += ' {0}'.format(e.problem_mark)
216 raise base.ParseError(message)
217
218 self._file = self.preprocess(self._file)
219
220 for k, data in self._flatten(self._file):
221 unit = self.UnitClass(data)
222 unit.setid(k)
223 self.addunit(unit)
224
225
226 class RubyYAMLFile(YAMLFile):
227 """Ruby YAML file, it has language code as first node."""
228
229 def preprocess(self, data):
230 if isinstance(data, OrderedDict) and len(data) == 1:
231 lang = list(data.keys())[0]
232 self.settargetlanguage(lang)
233 return data[lang]
234 return data
235
236 def get_root_node(self, node):
237 """Returns root node for serialize"""
238 if self.targetlanguage is not None:
239 result = UnsortableOrderedDict()
240 result[self.targetlanguage] = node
241 return result
242 return node
243
[end of translate/storage/yaml.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/translate/storage/yaml.py b/translate/storage/yaml.py
--- a/translate/storage/yaml.py
+++ b/translate/storage/yaml.py
@@ -147,7 +147,9 @@
target[path[0]] = []
target[path[0]].append(value)
else:
- if path[0] not in target:
+ # Add empty dict in case there is value and we
+ # expect dict
+ if path[0] not in target or not isinstance(target[path[0]], dict):
target[path[0]] = UnsortableOrderedDict()
nested_set(target[path[0]], path[1:], value)
else:
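(Editorial note: with the added isinstance() check, the scenario from the issue no longer crashes — the flat string value is simply replaced by a dict so the nested unit can be stored. Condensed illustration of the guarded branch:)

```python
target = {"base": {"path": "Italian path"}}   # state built from the old flat unit
node, key = target["base"], "path"

# guarded branch from the patch above
if key not in node or not isinstance(node[key], dict):
    node[key] = {}
node[key]["sublevel"] = "Now I want these nested actually"

print(target)
# {'base': {'path': {'sublevel': 'Now I want these nested actually'}}}
```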
| {"golden_diff": "diff --git a/translate/storage/yaml.py b/translate/storage/yaml.py\n--- a/translate/storage/yaml.py\n+++ b/translate/storage/yaml.py\n@@ -147,7 +147,9 @@\n target[path[0]] = []\n target[path[0]].append(value)\n else:\n- if path[0] not in target:\n+ # Add empty dict in case there is value and we\n+ # expect dict\n+ if path[0] not in target or not isinstance(target[path[0]], dict):\n target[path[0]] = UnsortableOrderedDict()\n nested_set(target[path[0]], path[1:], value)\n else:\n", "issue": "YAML serialization raises an exception when a node disappears\nhttps://github.com/translate/translate/blob/e5d4d38fbcc7fb310683e7b12f9ae7deab9d7788/translate/storage/yaml.py#L142\r\n\r\n```Exception Value:\t'str' object does not support item assignment```\r\n\r\nIn my case (through weblate), the existing file has:\r\n```\r\nit:\r\n base:\r\n path: Italian path\r\n```\r\n\r\nAnd a new node is used in the source translation (`base->path->sublevel`):\r\n```\r\nen:\r\n base:\r\n path:\r\n sublevel: Now I want these nested actually\r\n```\r\n\r\nThe code in `serialize` will raise an exception on line 154 (and make the file empty). \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2016 Michal \u010ciha\u0159\n#\n# This file is part of the Translate Toolkit.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nr\"\"\"Class that manages YAML data files for translation\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport uuid\nfrom collections import OrderedDict\n\nimport six\nimport yaml\nimport yaml.constructor\n\nfrom translate.storage import base\n\n\nclass OrderedDictYAMLLoader(yaml.SafeLoader):\n \"\"\"\n A YAML loader that loads mappings into ordered dictionaries.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n yaml.SafeLoader.__init__(self, *args, **kwargs)\n\n self.add_constructor(u'tag:yaml.org,2002:map', type(self).construct_yaml_map)\n self.add_constructor(u'tag:yaml.org,2002:omap', type(self).construct_yaml_map)\n\n def construct_yaml_map(self, node):\n data = OrderedDict()\n yield data\n value = self.construct_mapping(node)\n data.update(value)\n\n def construct_mapping(self, node, deep=False):\n if isinstance(node, yaml.MappingNode):\n self.flatten_mapping(node)\n else:\n raise yaml.constructor.ConstructorError(\n None, None,\n 'expected a mapping node, but found %s' % node.id, node.start_mark\n )\n\n mapping = OrderedDict()\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, deep=deep)\n try:\n hash(key)\n except TypeError as exc:\n raise yaml.constructor.ConstructorError(\n 'while constructing a mapping',\n node.start_mark,\n 'found unacceptable key (%s)' % exc, key_node.start_mark\n )\n value = self.construct_object(value_node, deep=deep)\n mapping[key] = value\n return mapping\n\n\nclass UnsortableList(list):\n def sort(self, *args, **kwargs):\n pass\n\n\nclass 
UnsortableOrderedDict(OrderedDict):\n def items(self, *args, **kwargs):\n return UnsortableList(OrderedDict.items(self, *args, **kwargs))\n\n\nclass YAMLDumper(yaml.SafeDumper):\n def represent_unsorted(self, data):\n return self.represent_dict(data.items())\n\n\nYAMLDumper.add_representer(UnsortableOrderedDict, YAMLDumper.represent_unsorted)\n\n\nclass YAMLUnit(base.TranslationUnit):\n \"\"\"A YAML entry\"\"\"\n\n def __init__(self, source=None, **kwargs):\n self._id = None\n if source:\n self.source = source\n super(YAMLUnit, self).__init__(source)\n\n def getsource(self):\n return self.target\n\n def setsource(self, source):\n self.target = source\n source = property(getsource, setsource)\n\n def setid(self, value):\n self._id = value\n\n def getid(self):\n # Ensure we have ID (for serialization)\n if self._id is None:\n self._id = str(uuid.uuid4())\n return self._id\n\n def getlocations(self):\n return [self.getid()]\n\n\nclass YAMLFile(base.TranslationStore):\n \"\"\"A YAML file\"\"\"\n\n UnitClass = YAMLUnit\n\n def __init__(self, inputfile=None, **kwargs):\n \"\"\"construct a YAML file, optionally reading in from inputfile.\"\"\"\n super(YAMLFile, self).__init__(**kwargs)\n self.filename = ''\n self._file = u''\n if inputfile is not None:\n self.parse(inputfile)\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n return node\n\n def serialize(self, out):\n def nested_set(target, path, value):\n if len(path) > 1:\n if len(path) == 2 and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():\n if path[0] not in target:\n target[path[0]] = []\n target[path[0]].append(value)\n else:\n if path[0] not in target:\n target[path[0]] = UnsortableOrderedDict()\n nested_set(target[path[0]], path[1:], value)\n else:\n target[path[0]] = value\n\n units = UnsortableOrderedDict()\n for unit in self.unit_iter():\n nested_set(units, unit.getid().split('->'), unit.target)\n out.write(yaml.dump_all(\n [self.get_root_node(units)],\n Dumper=YAMLDumper,\n default_flow_style=False, encoding='utf-8', allow_unicode=True\n ))\n\n def _flatten(self, data, prev=\"\"):\n \"\"\"Flatten YAML dictionary.\n \"\"\"\n if isinstance(data, dict):\n for k, v in six.iteritems(data):\n if not isinstance(k, six.string_types):\n raise base.ParseError(\n 'Key not string: {0}/{1} ({2})'.format(prev, k, type(k))\n )\n\n for x in self._flatten(v, '->'.join((prev, k)) if prev else k):\n yield x\n else:\n if isinstance(data, six.string_types):\n yield (prev, data)\n elif isinstance(data, bool):\n yield (prev, str(data))\n elif isinstance(data, list):\n for k, v in enumerate(data):\n key = '[{0}]'.format(k)\n yield ('->'.join((prev, key)), six.text_type(v))\n elif data is None:\n pass\n else:\n raise ValueError(\"We don't handle these values:\\n\"\n \"Type: %s\\n\"\n \"Data: %s\\n\"\n \"Previous: %s\" % (type(data), data, prev))\n\n def preprocess(self, data):\n \"\"\"Preprocess hook for child formats\"\"\"\n return data\n\n def parse(self, input):\n \"\"\"parse the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n src = input.read()\n input.close()\n input = src\n if isinstance(input, bytes):\n input = input.decode('utf-8')\n try:\n self._file = yaml.load(input, OrderedDictYAMLLoader)\n except yaml.YAMLError as e:\n message = e.problem if hasattr(e, 'problem') else e.message\n if hasattr(e, 'problem_mark'):\n message += ' 
{0}'.format(e.problem_mark)\n raise base.ParseError(message)\n\n self._file = self.preprocess(self._file)\n\n for k, data in self._flatten(self._file):\n unit = self.UnitClass(data)\n unit.setid(k)\n self.addunit(unit)\n\n\nclass RubyYAMLFile(YAMLFile):\n \"\"\"Ruby YAML file, it has language code as first node.\"\"\"\n\n def preprocess(self, data):\n if isinstance(data, OrderedDict) and len(data) == 1:\n lang = list(data.keys())[0]\n self.settargetlanguage(lang)\n return data[lang]\n return data\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n if self.targetlanguage is not None:\n result = UnsortableOrderedDict()\n result[self.targetlanguage] = node\n return result\n return node\n", "path": "translate/storage/yaml.py"}]} | 3,086 | 150 |
gh_patches_debug_10166 | rasdani/github-patches | git_diff | numba__numba-777 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dispatcher bug
``` python
from numba import jit
@jit
def foo(a, b):
return 1
# <class 'int'> <class 'float'>
# <class 'float'> <class 'int'>
# <class 'float'> <class 'float'>
# <class 'int'> <class 'int'>
foo(1, 0.1)
foo(0.1, 1)
foo(0.1, 0.1)
foo(1, 1)
```
```
Traceback (most recent call last):
File "test_disp.py", line 15, in <module>
foo(1, 1)
File "/Users/sklam/dev/numba/numba/dispatcher.py", line 161, in _explain_ambiguous
tuple(self.overloads.keys()), args, kws)
File "/Users/sklam/dev/numba/numba/typing/templates.py", line 84, in resolve_overload
if len(args) == len(case.args):
AttributeError: 'tuple' object has no attribute 'args'
```
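(Editorial note, not in the original report: the traceback comes from the ambiguity-explanation fallback that runs once several compiled overloads could accept the call, as with `(int64, int64)` after the float variants above. A condensed sketch of the type mismatch, with plain strings standing in for `numba.types` instances:)

```python
from collections import namedtuple

# Stand-in for numba's signature objects (the real ones live in numba.typing).
Signature = namedtuple("Signature", "args return_type")

def resolve_overload(cases, args):
    """Reduced to the line that fails in numba.typing.templates.resolve_overload."""
    for case in cases:
        if len(args) == len(case.args):   # each case is expected to expose .args
            return case

call_args = ("int64", "int64")

# _explain_ambiguous passes self.overloads.keys(): plain tuples of argument types.
try:
    resolve_overload([("int64", "float64"), ("float64", "int64")], call_args)
except AttributeError as exc:
    print(exc)                            # 'tuple' object has no attribute 'args'

# Objects that really carry .args (e.g. the stored compile results' signatures)
# let the arity check run as intended.
sigs = [Signature(("int64", "float64"), "int64"),
        Signature(("float64", "int64"), "int64")]
print(resolve_overload(sigs, call_args))  # no AttributeError
```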
</issue>
<code>
[start of numba/dispatcher.py]
1 from __future__ import print_function, division, absolute_import
2 import contextlib
3 import functools
4 import inspect
5 import sys
6
7 from numba import _dispatcher, compiler, utils
8 from numba.typeconv.rules import default_type_manager
9 from numba import typing
10 from numba.typing.templates import resolve_overload
11 from numba import types, sigutils
12 from numba.bytecode import get_code_object
13
14
15 class _OverloadedBase(_dispatcher.Dispatcher):
16 """
17 Common base class for dispatcher Implementations.
18 """
19
20 __numba__ = "py_func"
21
22 def __init__(self, arg_count, py_func):
23 self.tm = default_type_manager
24 _dispatcher.Dispatcher.__init__(self, self.tm.get_pointer(), arg_count)
25
26 # A mapping of signatures to entry points
27 self.overloads = {}
28 # A mapping of signatures to types.Function objects
29 self._function_types = {}
30 # A mapping of signatures to compile results
31 self._compileinfos = {}
32
33 self.py_func = py_func
34 # other parts of Numba assume the old Python 2 name for code object
35 self.func_code = get_code_object(py_func)
36 # but newer python uses a different name
37 self.__code__ = self.func_code
38
39 self.doc = py_func.__doc__
40 self._compiling = False
41
42 utils.finalize(self, self._make_finalizer())
43
44 def _make_finalizer(self):
45 """
46 Return a finalizer function that will release references to
47 related compiled functions.
48 """
49 overloads = self.overloads
50 targetctx = self.targetctx
51 # Early-bind utils.shutting_down() into the function's local namespace
52 # (see issue #689)
53 def finalizer(shutting_down=utils.shutting_down):
54 # The finalizer may crash at shutdown, skip it (resources
55 # will be cleared by the process exiting, anyway).
56 if shutting_down():
57 return
58 # This function must *not* hold any reference to self:
59 # we take care to bind the necessary objects in the closure.
60 for func in overloads.values():
61 try:
62 targetctx.remove_user_function(func)
63 targetctx.remove_native_function(func)
64 except KeyError:
65 # Not a native function (object mode presumably)
66 pass
67
68 return finalizer
69
70 @property
71 def signatures(self):
72 """
73 Returns a list of compiled function signatures.
74 """
75 return list(self.overloads)
76
77 def disable_compile(self, val=True):
78 """Disable the compilation of new signatures at call time.
79 """
80 self._disable_compile(int(val))
81
82 def add_overload(self, cres):
83 args = tuple(cres.signature.args)
84 sig = [a._code for a in args]
85 self._insert(sig, cres.entry_point, cres.objectmode)
86 self.overloads[args] = cres.entry_point
87 self._compileinfos[args] = cres
88
89 # Add native function for correct typing the code generation
90 target = cres.target_context
91 cfunc = cres.entry_point
92 if cfunc in target.native_funcs:
93 target.dynamic_map_function(cfunc)
94 # Create function type for typing
95 func_name = cres.fndesc.mangled_name
96 name = "CallTemplate(%s)" % cres.fndesc.mangled_name
97 # The `key` isn't really used except for diagnosis here,
98 # so avoid keeping a reference to `cfunc`.
99 call_template = typing.make_concrete_template(
100 name, key=func_name, signatures=[cres.signature])
101 self._function_types[args] = call_template
102
103 def get_call_template(self, args, kws):
104 """
105 Get a typing.ConcreteTemplate for this dispatcher and the given *args*
106 and *kws*. This allows to resolve the return type.
107 """
108 if kws:
109 raise TypeError("kwargs not supported")
110 # Ensure an overload is available, but avoid compiler re-entrance
111 if not self.is_compiling:
112 self.compile(tuple(args))
113 return self._function_types[args]
114
115 def get_overload(self, sig):
116 args, return_type = sigutils.normalize_signature(sig)
117 return self.overloads[tuple(args)]
118
119 @contextlib.contextmanager
120 def _compile_lock(self):
121 if self._compiling:
122 raise RuntimeError("Compiler re-entrant")
123 self._compiling = True
124 try:
125 yield
126 finally:
127 self._compiling = False
128
129 @property
130 def is_compiling(self):
131 return self._compiling
132
133 def jit(self, sig, **kws):
134 """Alias of compile(sig, **kws)
135 """
136 return self.compile(sig, **kws)
137
138 def _compile_for_args(self, *args, **kws):
139 """
140 For internal use. Compile a specialized version of the function
141 for the given *args* and *kws*, and return the resulting callable.
142 """
143 assert not kws
144 sig = tuple([self.typeof_pyval(a) for a in args])
145 return self.jit(sig)
146
147 def inspect_types(self, file=None):
148 if file is None:
149 file = sys.stdout
150
151 for ver, res in utils.iteritems(self._compileinfos):
152 print("%s %s" % (self.py_func.__name__, ver), file=file)
153 print('-' * 80, file=file)
154 print(res.type_annotation, file=file)
155 print('=' * 80, file=file)
156
157 def _explain_ambiguous(self, *args, **kws):
158 assert not kws, "kwargs not handled"
159 args = tuple([self.typeof_pyval(a) for a in args])
160 resolve_overload(self.typingctx, self.py_func,
161 tuple(self.overloads.keys()), args, kws)
162
163 def __repr__(self):
164 return "%s(%s)" % (type(self).__name__, self.py_func)
165
166 def typeof_pyval(self, val):
167 """
168 Resolve the Numba type of Python value *val*.
169 This is called from numba._dispatcher as a fallback if the native code
170 cannot decide the type.
171 """
172 if isinstance(val, utils.INT_TYPES):
173 # Ensure no autoscaling of integer type, to match the
174 # typecode() function in _dispatcher.c.
175 return types.int64
176
177 tp = self.typingctx.resolve_data_type(val)
178 if tp is None:
179 tp = types.pyobject
180 return tp
181
182
183 class Overloaded(_OverloadedBase):
184 """
185 Implementation of user-facing dispatcher objects (i.e. created using
186 the @jit decorator).
187 This is an abstract base class. Subclasses should define the targetdescr
188 class attribute.
189 """
190
191 def __init__(self, py_func, locals={}, targetoptions={}):
192 """
193 Parameters
194 ----------
195 py_func: function object to be compiled
196 locals: dict, optional
197 Mapping of local variable names to Numba types. Used to override
198 the types deduced by the type inference engine.
199 targetoptions: dict, optional
200 Target-specific config options.
201 """
202 self.typingctx = self.targetdescr.typing_context
203 self.targetctx = self.targetdescr.target_context
204
205 argspec = inspect.getargspec(py_func)
206 argct = len(argspec.args)
207
208 _OverloadedBase.__init__(self, argct, py_func)
209
210 functools.update_wrapper(self, py_func)
211
212 self.targetoptions = targetoptions
213 self.locals = locals
214
215 self.typingctx.insert_overloaded(self)
216
217 def compile(self, sig, locals={}, **targetoptions):
218 with self._compile_lock():
219 locs = self.locals.copy()
220 locs.update(locals)
221
222 topt = self.targetoptions.copy()
223 topt.update(targetoptions)
224
225 flags = compiler.Flags()
226 self.targetdescr.options.parse_as_flags(flags, topt)
227
228 args, return_type = sigutils.normalize_signature(sig)
229
230 # Don't recompile if signature already exist.
231 existing = self.overloads.get(tuple(args))
232 if existing is not None:
233 return existing
234
235 cres = compiler.compile_extra(self.typingctx, self.targetctx,
236 self.py_func,
237 args=args, return_type=return_type,
238 flags=flags, locals=locs)
239
240 # Check typing error if object mode is used
241 if cres.typing_error is not None and not flags.enable_pyobject:
242 raise cres.typing_error
243
244 self.add_overload(cres)
245 return cres.entry_point
246
247
248 class LiftedLoop(_OverloadedBase):
249 """
250 Implementation of the hidden dispatcher objects used for lifted loop
251 (a lifted loop is really compiled as a separate function).
252 """
253
254 def __init__(self, bytecode, typingctx, targetctx, locals, flags):
255 self.typingctx = typingctx
256 self.targetctx = targetctx
257
258 argspec = bytecode.argspec
259 argct = len(argspec.args)
260
261 _OverloadedBase.__init__(self, argct, bytecode.func)
262
263 self.locals = locals
264 self.flags = flags
265 self.bytecode = bytecode
266
267 def compile(self, sig):
268 with self._compile_lock():
269 # FIXME this is mostly duplicated from Overloaded
270 flags = self.flags
271 args, return_type = sigutils.normalize_signature(sig)
272
273 # Don't recompile if signature already exist.
274 existing = self.overloads.get(tuple(args))
275 if existing is not None:
276 return existing.entry_point
277
278 assert not flags.enable_looplift, "Enable looplift flags is on"
279 cres = compiler.compile_bytecode(typingctx=self.typingctx,
280 targetctx=self.targetctx,
281 bc=self.bytecode,
282 args=args,
283 return_type=return_type,
284 flags=flags,
285 locals=self.locals)
286
287 # Check typing error if object mode is used
288 if cres.typing_error is not None and not flags.enable_pyobject:
289 raise cres.typing_error
290
291 self.add_overload(cres)
292 return cres.entry_point
293
294
295 # Initialize dispatcher
296 _dispatcher.init_types(dict((str(t), t._code) for t in types.number_domain))
297
[end of numba/dispatcher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/numba/dispatcher.py b/numba/dispatcher.py
--- a/numba/dispatcher.py
+++ b/numba/dispatcher.py
@@ -157,8 +157,8 @@
def _explain_ambiguous(self, *args, **kws):
assert not kws, "kwargs not handled"
args = tuple([self.typeof_pyval(a) for a in args])
- resolve_overload(self.typingctx, self.py_func,
- tuple(self.overloads.keys()), args, kws)
+ sigs = [cr.signature for cr in self._compileinfos.values()]
+ resolve_overload(self.typingctx, self.py_func, sigs, args, kws)
def __repr__(self):
return "%s(%s)" % (type(self).__name__, self.py_func)
| {"golden_diff": "diff --git a/numba/dispatcher.py b/numba/dispatcher.py\n--- a/numba/dispatcher.py\n+++ b/numba/dispatcher.py\n@@ -157,8 +157,8 @@\n def _explain_ambiguous(self, *args, **kws):\n assert not kws, \"kwargs not handled\"\n args = tuple([self.typeof_pyval(a) for a in args])\n- resolve_overload(self.typingctx, self.py_func,\n- tuple(self.overloads.keys()), args, kws)\n+ sigs = [cr.signature for cr in self._compileinfos.values()]\n+ resolve_overload(self.typingctx, self.py_func, sigs, args, kws)\n \n def __repr__(self):\n return \"%s(%s)\" % (type(self).__name__, self.py_func)\n", "issue": "Dispatcher bug \n``` python\nfrom numba import jit\n\n@jit\ndef foo(a, b):\n return 1\n\n# <class 'int'> <class 'float'>\n# <class 'float'> <class 'int'>\n# <class 'float'> <class 'float'>\n# <class 'int'> <class 'int'>\n\nfoo(1, 0.1)\nfoo(0.1, 1)\nfoo(0.1, 0.1)\nfoo(1, 1)\n```\n\n```\nTraceback (most recent call last):\n File \"test_disp.py\", line 15, in <module>\n foo(1, 1)\n File \"/Users/sklam/dev/numba/numba/dispatcher.py\", line 161, in _explain_ambiguous\n tuple(self.overloads.keys()), args, kws)\n File \"/Users/sklam/dev/numba/numba/typing/templates.py\", line 84, in resolve_overload\n if len(args) == len(case.args):\nAttributeError: 'tuple' object has no attribute 'args'\n```\n\nDispatcher bug \n``` python\nfrom numba import jit\n\n@jit\ndef foo(a, b):\n return 1\n\n# <class 'int'> <class 'float'>\n# <class 'float'> <class 'int'>\n# <class 'float'> <class 'float'>\n# <class 'int'> <class 'int'>\n\nfoo(1, 0.1)\nfoo(0.1, 1)\nfoo(0.1, 0.1)\nfoo(1, 1)\n```\n\n```\nTraceback (most recent call last):\n File \"test_disp.py\", line 15, in <module>\n foo(1, 1)\n File \"/Users/sklam/dev/numba/numba/dispatcher.py\", line 161, in _explain_ambiguous\n tuple(self.overloads.keys()), args, kws)\n File \"/Users/sklam/dev/numba/numba/typing/templates.py\", line 84, in resolve_overload\n if len(args) == len(case.args):\nAttributeError: 'tuple' object has no attribute 'args'\n```\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\nimport contextlib\nimport functools\nimport inspect\nimport sys\n\nfrom numba import _dispatcher, compiler, utils\nfrom numba.typeconv.rules import default_type_manager\nfrom numba import typing\nfrom numba.typing.templates import resolve_overload\nfrom numba import types, sigutils\nfrom numba.bytecode import get_code_object\n\n\nclass _OverloadedBase(_dispatcher.Dispatcher):\n \"\"\"\n Common base class for dispatcher Implementations.\n \"\"\"\n\n __numba__ = \"py_func\"\n\n def __init__(self, arg_count, py_func):\n self.tm = default_type_manager\n _dispatcher.Dispatcher.__init__(self, self.tm.get_pointer(), arg_count)\n\n # A mapping of signatures to entry points\n self.overloads = {}\n # A mapping of signatures to types.Function objects\n self._function_types = {}\n # A mapping of signatures to compile results\n self._compileinfos = {}\n\n self.py_func = py_func\n # other parts of Numba assume the old Python 2 name for code object\n self.func_code = get_code_object(py_func)\n # but newer python uses a different name\n self.__code__ = self.func_code\n\n self.doc = py_func.__doc__\n self._compiling = False\n\n utils.finalize(self, self._make_finalizer())\n\n def _make_finalizer(self):\n \"\"\"\n Return a finalizer function that will release references to\n related compiled functions.\n \"\"\"\n overloads = self.overloads\n targetctx = self.targetctx\n # Early-bind utils.shutting_down() into the function's local namespace\n # 
(see issue #689)\n def finalizer(shutting_down=utils.shutting_down):\n # The finalizer may crash at shutdown, skip it (resources\n # will be cleared by the process exiting, anyway).\n if shutting_down():\n return\n # This function must *not* hold any reference to self:\n # we take care to bind the necessary objects in the closure.\n for func in overloads.values():\n try:\n targetctx.remove_user_function(func)\n targetctx.remove_native_function(func)\n except KeyError:\n # Not a native function (object mode presumably)\n pass\n\n return finalizer\n\n @property\n def signatures(self):\n \"\"\"\n Returns a list of compiled function signatures.\n \"\"\"\n return list(self.overloads)\n\n def disable_compile(self, val=True):\n \"\"\"Disable the compilation of new signatures at call time.\n \"\"\"\n self._disable_compile(int(val))\n\n def add_overload(self, cres):\n args = tuple(cres.signature.args)\n sig = [a._code for a in args]\n self._insert(sig, cres.entry_point, cres.objectmode)\n self.overloads[args] = cres.entry_point\n self._compileinfos[args] = cres\n\n # Add native function for correct typing the code generation\n target = cres.target_context\n cfunc = cres.entry_point\n if cfunc in target.native_funcs:\n target.dynamic_map_function(cfunc)\n # Create function type for typing\n func_name = cres.fndesc.mangled_name\n name = \"CallTemplate(%s)\" % cres.fndesc.mangled_name\n # The `key` isn't really used except for diagnosis here,\n # so avoid keeping a reference to `cfunc`.\n call_template = typing.make_concrete_template(\n name, key=func_name, signatures=[cres.signature])\n self._function_types[args] = call_template\n\n def get_call_template(self, args, kws):\n \"\"\"\n Get a typing.ConcreteTemplate for this dispatcher and the given *args*\n and *kws*. This allows to resolve the return type.\n \"\"\"\n if kws:\n raise TypeError(\"kwargs not supported\")\n # Ensure an overload is available, but avoid compiler re-entrance\n if not self.is_compiling:\n self.compile(tuple(args))\n return self._function_types[args]\n\n def get_overload(self, sig):\n args, return_type = sigutils.normalize_signature(sig)\n return self.overloads[tuple(args)]\n\n @contextlib.contextmanager\n def _compile_lock(self):\n if self._compiling:\n raise RuntimeError(\"Compiler re-entrant\")\n self._compiling = True\n try:\n yield\n finally:\n self._compiling = False\n\n @property\n def is_compiling(self):\n return self._compiling\n\n def jit(self, sig, **kws):\n \"\"\"Alias of compile(sig, **kws)\n \"\"\"\n return self.compile(sig, **kws)\n\n def _compile_for_args(self, *args, **kws):\n \"\"\"\n For internal use. 
Compile a specialized version of the function\n for the given *args* and *kws*, and return the resulting callable.\n \"\"\"\n assert not kws\n sig = tuple([self.typeof_pyval(a) for a in args])\n return self.jit(sig)\n\n def inspect_types(self, file=None):\n if file is None:\n file = sys.stdout\n\n for ver, res in utils.iteritems(self._compileinfos):\n print(\"%s %s\" % (self.py_func.__name__, ver), file=file)\n print('-' * 80, file=file)\n print(res.type_annotation, file=file)\n print('=' * 80, file=file)\n\n def _explain_ambiguous(self, *args, **kws):\n assert not kws, \"kwargs not handled\"\n args = tuple([self.typeof_pyval(a) for a in args])\n resolve_overload(self.typingctx, self.py_func,\n tuple(self.overloads.keys()), args, kws)\n\n def __repr__(self):\n return \"%s(%s)\" % (type(self).__name__, self.py_func)\n\n def typeof_pyval(self, val):\n \"\"\"\n Resolve the Numba type of Python value *val*.\n This is called from numba._dispatcher as a fallback if the native code\n cannot decide the type.\n \"\"\"\n if isinstance(val, utils.INT_TYPES):\n # Ensure no autoscaling of integer type, to match the\n # typecode() function in _dispatcher.c.\n return types.int64\n\n tp = self.typingctx.resolve_data_type(val)\n if tp is None:\n tp = types.pyobject\n return tp\n\n\nclass Overloaded(_OverloadedBase):\n \"\"\"\n Implementation of user-facing dispatcher objects (i.e. created using\n the @jit decorator).\n This is an abstract base class. Subclasses should define the targetdescr\n class attribute.\n \"\"\"\n\n def __init__(self, py_func, locals={}, targetoptions={}):\n \"\"\"\n Parameters\n ----------\n py_func: function object to be compiled\n locals: dict, optional\n Mapping of local variable names to Numba types. Used to override\n the types deduced by the type inference engine.\n targetoptions: dict, optional\n Target-specific config options.\n \"\"\"\n self.typingctx = self.targetdescr.typing_context\n self.targetctx = self.targetdescr.target_context\n\n argspec = inspect.getargspec(py_func)\n argct = len(argspec.args)\n\n _OverloadedBase.__init__(self, argct, py_func)\n\n functools.update_wrapper(self, py_func)\n\n self.targetoptions = targetoptions\n self.locals = locals\n\n self.typingctx.insert_overloaded(self)\n\n def compile(self, sig, locals={}, **targetoptions):\n with self._compile_lock():\n locs = self.locals.copy()\n locs.update(locals)\n\n topt = self.targetoptions.copy()\n topt.update(targetoptions)\n\n flags = compiler.Flags()\n self.targetdescr.options.parse_as_flags(flags, topt)\n\n args, return_type = sigutils.normalize_signature(sig)\n\n # Don't recompile if signature already exist.\n existing = self.overloads.get(tuple(args))\n if existing is not None:\n return existing\n\n cres = compiler.compile_extra(self.typingctx, self.targetctx,\n self.py_func,\n args=args, return_type=return_type,\n flags=flags, locals=locs)\n\n # Check typing error if object mode is used\n if cres.typing_error is not None and not flags.enable_pyobject:\n raise cres.typing_error\n\n self.add_overload(cres)\n return cres.entry_point\n\n\nclass LiftedLoop(_OverloadedBase):\n \"\"\"\n Implementation of the hidden dispatcher objects used for lifted loop\n (a lifted loop is really compiled as a separate function).\n \"\"\"\n\n def __init__(self, bytecode, typingctx, targetctx, locals, flags):\n self.typingctx = typingctx\n self.targetctx = targetctx\n\n argspec = bytecode.argspec\n argct = len(argspec.args)\n\n _OverloadedBase.__init__(self, argct, bytecode.func)\n\n self.locals = locals\n self.flags = 
flags\n self.bytecode = bytecode\n\n def compile(self, sig):\n with self._compile_lock():\n # FIXME this is mostly duplicated from Overloaded\n flags = self.flags\n args, return_type = sigutils.normalize_signature(sig)\n\n # Don't recompile if signature already exist.\n existing = self.overloads.get(tuple(args))\n if existing is not None:\n return existing.entry_point\n\n assert not flags.enable_looplift, \"Enable looplift flags is on\"\n cres = compiler.compile_bytecode(typingctx=self.typingctx,\n targetctx=self.targetctx,\n bc=self.bytecode,\n args=args,\n return_type=return_type,\n flags=flags,\n locals=self.locals)\n\n # Check typing error if object mode is used\n if cres.typing_error is not None and not flags.enable_pyobject:\n raise cres.typing_error\n\n self.add_overload(cres)\n return cres.entry_point\n\n\n# Initialize dispatcher\n_dispatcher.init_types(dict((str(t), t._code) for t in types.number_domain))\n", "path": "numba/dispatcher.py"}]} | 3,994 | 189 |
gh_patches_debug_19442 | rasdani/github-patches | git_diff | scrapy__scrapy-6098 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Re-enable deps on 3.12 when ready
- [x] bpython (requires greenlet)
- [x] uvloop
- [x] pyftpdlib
Related: https://github.com/scrapy/scrapy/pull/6083
</issue>
<code>
[start of scrapy/contracts/__init__.py]
1 import re
2 import sys
3 from functools import wraps
4 from inspect import getmembers
5 from types import CoroutineType
6 from typing import AsyncGenerator, Dict
7 from unittest import TestCase
8
9 from scrapy.http import Request
10 from scrapy.utils.python import get_spec
11 from scrapy.utils.spider import iterate_spider_output
12
13
14 class Contract:
15 """Abstract class for contracts"""
16
17 request_cls = None
18
19 def __init__(self, method, *args):
20 self.testcase_pre = _create_testcase(method, f"@{self.name} pre-hook")
21 self.testcase_post = _create_testcase(method, f"@{self.name} post-hook")
22 self.args = args
23
24 def add_pre_hook(self, request, results):
25 if hasattr(self, "pre_process"):
26 cb = request.callback
27
28 @wraps(cb)
29 def wrapper(response, **cb_kwargs):
30 try:
31 results.startTest(self.testcase_pre)
32 self.pre_process(response)
33 results.stopTest(self.testcase_pre)
34 except AssertionError:
35 results.addFailure(self.testcase_pre, sys.exc_info())
36 except Exception:
37 results.addError(self.testcase_pre, sys.exc_info())
38 else:
39 results.addSuccess(self.testcase_pre)
40 finally:
41 cb_result = cb(response, **cb_kwargs)
42 if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
43 raise TypeError("Contracts don't support async callbacks")
44 return list(iterate_spider_output(cb_result))
45
46 request.callback = wrapper
47
48 return request
49
50 def add_post_hook(self, request, results):
51 if hasattr(self, "post_process"):
52 cb = request.callback
53
54 @wraps(cb)
55 def wrapper(response, **cb_kwargs):
56 cb_result = cb(response, **cb_kwargs)
57 if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
58 raise TypeError("Contracts don't support async callbacks")
59 output = list(iterate_spider_output(cb_result))
60 try:
61 results.startTest(self.testcase_post)
62 self.post_process(output)
63 results.stopTest(self.testcase_post)
64 except AssertionError:
65 results.addFailure(self.testcase_post, sys.exc_info())
66 except Exception:
67 results.addError(self.testcase_post, sys.exc_info())
68 else:
69 results.addSuccess(self.testcase_post)
70 finally:
71 return output
72
73 request.callback = wrapper
74
75 return request
76
77 def adjust_request_args(self, args):
78 return args
79
80
81 class ContractsManager:
82 contracts: Dict[str, Contract] = {}
83
84 def __init__(self, contracts):
85 for contract in contracts:
86 self.contracts[contract.name] = contract
87
88 def tested_methods_from_spidercls(self, spidercls):
89 is_method = re.compile(r"^\s*@", re.MULTILINE).search
90 methods = []
91 for key, value in getmembers(spidercls):
92 if callable(value) and value.__doc__ and is_method(value.__doc__):
93 methods.append(key)
94
95 return methods
96
97 def extract_contracts(self, method):
98 contracts = []
99 for line in method.__doc__.split("\n"):
100 line = line.strip()
101
102 if line.startswith("@"):
103 name, args = re.match(r"@(\w+)\s*(.*)", line).groups()
104 args = re.split(r"\s+", args)
105
106 contracts.append(self.contracts[name](method, *args))
107
108 return contracts
109
110 def from_spider(self, spider, results):
111 requests = []
112 for method in self.tested_methods_from_spidercls(type(spider)):
113 bound_method = spider.__getattribute__(method)
114 try:
115 requests.append(self.from_method(bound_method, results))
116 except Exception:
117 case = _create_testcase(bound_method, "contract")
118 results.addError(case, sys.exc_info())
119
120 return requests
121
122 def from_method(self, method, results):
123 contracts = self.extract_contracts(method)
124 if contracts:
125 request_cls = Request
126 for contract in contracts:
127 if contract.request_cls is not None:
128 request_cls = contract.request_cls
129
130 # calculate request args
131 args, kwargs = get_spec(request_cls.__init__)
132
133 # Don't filter requests to allow
134 # testing different callbacks on the same URL.
135 kwargs["dont_filter"] = True
136 kwargs["callback"] = method
137
138 for contract in contracts:
139 kwargs = contract.adjust_request_args(kwargs)
140
141 args.remove("self")
142
143 # check if all positional arguments are defined in kwargs
144 if set(args).issubset(set(kwargs)):
145 request = request_cls(**kwargs)
146
147 # execute pre and post hooks in order
148 for contract in reversed(contracts):
149 request = contract.add_pre_hook(request, results)
150 for contract in contracts:
151 request = contract.add_post_hook(request, results)
152
153 self._clean_req(request, method, results)
154 return request
155
156 def _clean_req(self, request, method, results):
157 """stop the request from returning objects and records any errors"""
158
159 cb = request.callback
160
161 @wraps(cb)
162 def cb_wrapper(response, **cb_kwargs):
163 try:
164 output = cb(response, **cb_kwargs)
165 output = list(iterate_spider_output(output))
166 except Exception:
167 case = _create_testcase(method, "callback")
168 results.addError(case, sys.exc_info())
169
170 def eb_wrapper(failure):
171 case = _create_testcase(method, "errback")
172 exc_info = failure.type, failure.value, failure.getTracebackObject()
173 results.addError(case, exc_info)
174
175 request.callback = cb_wrapper
176 request.errback = eb_wrapper
177
178
179 def _create_testcase(method, desc):
180 spider = method.__self__.name
181
182 class ContractTestCase(TestCase):
183 def __str__(_self):
184 return f"[{spider}] {method.__name__} ({desc})"
185
186 name = f"{spider}_{method.__name__}"
187 setattr(ContractTestCase, name, lambda x: x)
188 return ContractTestCase(name)
189
[end of scrapy/contracts/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py
--- a/scrapy/contracts/__init__.py
+++ b/scrapy/contracts/__init__.py
@@ -41,7 +41,9 @@
cb_result = cb(response, **cb_kwargs)
if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
raise TypeError("Contracts don't support async callbacks")
- return list(iterate_spider_output(cb_result))
+ return list( # pylint: disable=return-in-finally
+ iterate_spider_output(cb_result)
+ )
request.callback = wrapper
@@ -68,7 +70,7 @@
else:
results.addSuccess(self.testcase_post)
finally:
- return output
+ return output # pylint: disable=return-in-finally
request.callback = wrapper
| {"golden_diff": "diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py\n--- a/scrapy/contracts/__init__.py\n+++ b/scrapy/contracts/__init__.py\n@@ -41,7 +41,9 @@\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n- return list(iterate_spider_output(cb_result))\n+ return list( # pylint: disable=return-in-finally\n+ iterate_spider_output(cb_result)\n+ )\n \n request.callback = wrapper\n \n@@ -68,7 +70,7 @@\n else:\n results.addSuccess(self.testcase_post)\n finally:\n- return output\n+ return output # pylint: disable=return-in-finally\n \n request.callback = wrapper\n", "issue": "Re-enable deps on 3.12 when ready\n- [x] bpython (requires greenlet)\r\n- [x] uvloop\r\n- [x] pyftpdlib\r\n\r\nRelated: https://github.com/scrapy/scrapy/pull/6083\n", "before_files": [{"content": "import re\nimport sys\nfrom functools import wraps\nfrom inspect import getmembers\nfrom types import CoroutineType\nfrom typing import AsyncGenerator, Dict\nfrom unittest import TestCase\n\nfrom scrapy.http import Request\nfrom scrapy.utils.python import get_spec\nfrom scrapy.utils.spider import iterate_spider_output\n\n\nclass Contract:\n \"\"\"Abstract class for contracts\"\"\"\n\n request_cls = None\n\n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, f\"@{self.name} pre-hook\")\n self.testcase_post = _create_testcase(method, f\"@{self.name} post-hook\")\n self.args = args\n\n def add_pre_hook(self, request, results):\n if hasattr(self, \"pre_process\"):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response, **cb_kwargs):\n try:\n results.startTest(self.testcase_pre)\n self.pre_process(response)\n results.stopTest(self.testcase_pre)\n except AssertionError:\n results.addFailure(self.testcase_pre, sys.exc_info())\n except Exception:\n results.addError(self.testcase_pre, sys.exc_info())\n else:\n results.addSuccess(self.testcase_pre)\n finally:\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n return list(iterate_spider_output(cb_result))\n\n request.callback = wrapper\n\n return request\n\n def add_post_hook(self, request, results):\n if hasattr(self, \"post_process\"):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response, **cb_kwargs):\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n output = list(iterate_spider_output(cb_result))\n try:\n results.startTest(self.testcase_post)\n self.post_process(output)\n results.stopTest(self.testcase_post)\n except AssertionError:\n results.addFailure(self.testcase_post, sys.exc_info())\n except Exception:\n results.addError(self.testcase_post, sys.exc_info())\n else:\n results.addSuccess(self.testcase_post)\n finally:\n return output\n\n request.callback = wrapper\n\n return request\n\n def adjust_request_args(self, args):\n return args\n\n\nclass ContractsManager:\n contracts: Dict[str, Contract] = {}\n\n def __init__(self, contracts):\n for contract in contracts:\n self.contracts[contract.name] = contract\n\n def tested_methods_from_spidercls(self, spidercls):\n is_method = re.compile(r\"^\\s*@\", re.MULTILINE).search\n methods = []\n for key, value in getmembers(spidercls):\n if callable(value) and value.__doc__ and is_method(value.__doc__):\n methods.append(key)\n\n 
return methods\n\n def extract_contracts(self, method):\n contracts = []\n for line in method.__doc__.split(\"\\n\"):\n line = line.strip()\n\n if line.startswith(\"@\"):\n name, args = re.match(r\"@(\\w+)\\s*(.*)\", line).groups()\n args = re.split(r\"\\s+\", args)\n\n contracts.append(self.contracts[name](method, *args))\n\n return contracts\n\n def from_spider(self, spider, results):\n requests = []\n for method in self.tested_methods_from_spidercls(type(spider)):\n bound_method = spider.__getattribute__(method)\n try:\n requests.append(self.from_method(bound_method, results))\n except Exception:\n case = _create_testcase(bound_method, \"contract\")\n results.addError(case, sys.exc_info())\n\n return requests\n\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n request_cls = Request\n for contract in contracts:\n if contract.request_cls is not None:\n request_cls = contract.request_cls\n\n # calculate request args\n args, kwargs = get_spec(request_cls.__init__)\n\n # Don't filter requests to allow\n # testing different callbacks on the same URL.\n kwargs[\"dont_filter\"] = True\n kwargs[\"callback\"] = method\n\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n\n args.remove(\"self\")\n\n # check if all positional arguments are defined in kwargs\n if set(args).issubset(set(kwargs)):\n request = request_cls(**kwargs)\n\n # execute pre and post hooks in order\n for contract in reversed(contracts):\n request = contract.add_pre_hook(request, results)\n for contract in contracts:\n request = contract.add_post_hook(request, results)\n\n self._clean_req(request, method, results)\n return request\n\n def _clean_req(self, request, method, results):\n \"\"\"stop the request from returning objects and records any errors\"\"\"\n\n cb = request.callback\n\n @wraps(cb)\n def cb_wrapper(response, **cb_kwargs):\n try:\n output = cb(response, **cb_kwargs)\n output = list(iterate_spider_output(output))\n except Exception:\n case = _create_testcase(method, \"callback\")\n results.addError(case, sys.exc_info())\n\n def eb_wrapper(failure):\n case = _create_testcase(method, \"errback\")\n exc_info = failure.type, failure.value, failure.getTracebackObject()\n results.addError(case, exc_info)\n\n request.callback = cb_wrapper\n request.errback = eb_wrapper\n\n\ndef _create_testcase(method, desc):\n spider = method.__self__.name\n\n class ContractTestCase(TestCase):\n def __str__(_self):\n return f\"[{spider}] {method.__name__} ({desc})\"\n\n name = f\"{spider}_{method.__name__}\"\n setattr(ContractTestCase, name, lambda x: x)\n return ContractTestCase(name)\n", "path": "scrapy/contracts/__init__.py"}]} | 2,344 | 191 |
gh_patches_debug_4154 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1216 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
full links not recognized
**Describe the bug**
Only the first part of a link containing /@ is being linked: https://mastodon.social/@username only auto-links https://mastodon.social, and '@username' is stripped from the link.
**To Reproduce**
here is an example https://bookwyrm.social/user/wakest/comment/30867
**Expected behavior**
When a link is in a comment, the whole link should be turned into a link, not just part of it.
**Screenshots**
<img width="858" alt="image" src="https://user-images.githubusercontent.com/7890201/124171841-ae45cd80-da6e-11eb-9071-a74596df184a.png">
**Instance**
https://bookwyrm.social
</issue>
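
The break happens in `format_links`: the character class that consumes the rest of the URL after the domain contains neither `@` nor `#`, so matching stops at the first `@` in a path. A minimal illustration with Python's `re`, using a simplified stand-in for `regex.DOMAIN` (the real pattern lives in `bookwyrm.utils.regex`, so the exact groups may differ):

```python
import re

# Simplified stand-in for bookwyrm.utils.regex.DOMAIN, for illustration only.
DOMAIN = r"[\w\-\.]+"

OLD = r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % DOMAIN
NEW = r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,@#])*))' % DOMAIN

text = "here is an example https://mastodon.social/@username in a comment"

print(re.search(OLD, text).group(2))  # https://mastodon.social/  (stops at the '@')
print(re.search(NEW, text).group(2))  # https://mastodon.social/@username
```
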
<code>
[start of bookwyrm/views/status.py]
1 """ what are we here for if not for posting """
2 import re
3 from django.contrib.auth.decorators import login_required
4 from django.http import HttpResponseBadRequest
5 from django.shortcuts import get_object_or_404, redirect
6 from django.template.response import TemplateResponse
7 from django.utils.decorators import method_decorator
8 from django.views import View
9 from markdown import markdown
10
11 from bookwyrm import forms, models
12 from bookwyrm.sanitize_html import InputHtmlParser
13 from bookwyrm.settings import DOMAIN
14 from bookwyrm.utils import regex
15 from .helpers import handle_remote_webfinger
16 from .reading import edit_readthrough
17
18
19 # pylint: disable= no-self-use
20 @method_decorator(login_required, name="dispatch")
21 class CreateStatus(View):
22 """the view for *posting*"""
23
24 def get(self, request, status_type): # pylint: disable=unused-argument
25 """compose view (used for delete-and-redraft"""
26 book = get_object_or_404(models.Edition, id=request.GET.get("book"))
27 data = {"book": book}
28 return TemplateResponse(request, "compose.html", data)
29
30 def post(self, request, status_type):
31 """create status of whatever type"""
32 status_type = status_type[0].upper() + status_type[1:]
33
34 try:
35 form = getattr(forms, "%sForm" % status_type)(request.POST)
36 except AttributeError:
37 return HttpResponseBadRequest()
38 if not form.is_valid():
39 return redirect(request.headers.get("Referer", "/"))
40
41 status = form.save(commit=False)
42 if not status.sensitive and status.content_warning:
43 # the cw text field remains populated when you click "remove"
44 status.content_warning = None
45 status.save(broadcast=False)
46
47 # inspect the text for user tags
48 content = status.content
49 for (mention_text, mention_user) in find_mentions(content):
50 # add them to status mentions fk
51 status.mention_users.add(mention_user)
52
53 # turn the mention into a link
54 content = re.sub(
55 r"%s([^@]|$)" % mention_text,
56 r'<a href="%s">%s</a>\g<1>' % (mention_user.remote_id, mention_text),
57 content,
58 )
59 # add reply parent to mentions
60 if status.reply_parent:
61 status.mention_users.add(status.reply_parent.user)
62
63 # deduplicate mentions
64 status.mention_users.set(set(status.mention_users.all()))
65
66 # don't apply formatting to generated notes
67 if not isinstance(status, models.GeneratedNote) and content:
68 status.content = to_markdown(content)
69 # do apply formatting to quotes
70 if hasattr(status, "quote"):
71 status.quote = to_markdown(status.quote)
72
73 status.save(created=True)
74
75 # update a readthorugh, if needed
76 edit_readthrough(request)
77
78 return redirect("/")
79
80
81 @method_decorator(login_required, name="dispatch")
82 class DeleteStatus(View):
83 """tombstone that bad boy"""
84
85 def post(self, request, status_id):
86 """delete and tombstone a status"""
87 status = get_object_or_404(models.Status, id=status_id)
88
89 # don't let people delete other people's statuses
90 if status.user != request.user and not request.user.has_perm("moderate_post"):
91 return HttpResponseBadRequest()
92
93 # perform deletion
94 status.delete()
95 return redirect(request.headers.get("Referer", "/"))
96
97
98 @method_decorator(login_required, name="dispatch")
99 class DeleteAndRedraft(View):
100 """delete a status but let the user re-create it"""
101
102 def post(self, request, status_id):
103 """delete and tombstone a status"""
104 status = get_object_or_404(
105 models.Status.objects.select_subclasses(), id=status_id
106 )
107 if isinstance(status, (models.GeneratedNote, models.ReviewRating)):
108 return HttpResponseBadRequest()
109
110 # don't let people redraft other people's statuses
111 if status.user != request.user:
112 return HttpResponseBadRequest()
113
114 status_type = status.status_type.lower()
115 if status.reply_parent:
116 status_type = "reply"
117
118 data = {
119 "draft": status,
120 "type": status_type,
121 }
122 if hasattr(status, "book"):
123 data["book"] = status.book
124 elif status.mention_books:
125 data["book"] = status.mention_books.first()
126
127 # perform deletion
128 status.delete()
129 return TemplateResponse(request, "compose.html", data)
130
131
132 def find_mentions(content):
133 """detect @mentions in raw status content"""
134 if not content:
135 return
136 for match in re.finditer(regex.STRICT_USERNAME, content):
137 username = match.group().strip().split("@")[1:]
138 if len(username) == 1:
139 # this looks like a local user (@user), fill in the domain
140 username.append(DOMAIN)
141 username = "@".join(username)
142
143 mention_user = handle_remote_webfinger(username)
144 if not mention_user:
145 # we can ignore users we don't know about
146 continue
147 yield (match.group(), mention_user)
148
149
150 def format_links(content):
151 """detect and format links"""
152 return re.sub(
153 r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % regex.DOMAIN,
154 r'\g<1><a href="\g<2>">\g<3></a>',
155 content,
156 )
157
158
159 def to_markdown(content):
160 """catch links and convert to markdown"""
161 content = markdown(content)
162 content = format_links(content)
163 # sanitize resulting html
164 sanitizer = InputHtmlParser()
165 sanitizer.feed(content)
166 return sanitizer.get_output()
167
[end of bookwyrm/views/status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py
--- a/bookwyrm/views/status.py
+++ b/bookwyrm/views/status.py
@@ -150,7 +150,7 @@
def format_links(content):
"""detect and format links"""
return re.sub(
- r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % regex.DOMAIN,
+ r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,@#])*))' % regex.DOMAIN,
r'\g<1><a href="\g<2>">\g<3></a>',
content,
)
| {"golden_diff": "diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py\n--- a/bookwyrm/views/status.py\n+++ b/bookwyrm/views/status.py\n@@ -150,7 +150,7 @@\n def format_links(content):\n \"\"\"detect and format links\"\"\"\n return re.sub(\n- r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % regex.DOMAIN,\n+ r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,@#])*))' % regex.DOMAIN,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content,\n )\n", "issue": "full links not recognized\n**Describe the bug**\r\nOnly the first part of a link with a /@ in it is being linked so https://mastodon.social/@username only auto links https://mastodon.social and strips '@username' from being a link\r\n\r\n**To Reproduce**\r\nhere is an example https://bookwyrm.social/user/wakest/comment/30867\r\n\r\n**Expected behavior**\r\nWhen a link is in a comment, the whole link should be turned into a link not just part of it\r\n\r\n**Screenshots**\r\n<img width=\"858\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7890201/124171841-ae45cd80-da6e-11eb-9071-a74596df184a.png\">\r\n\r\n**Instance**\r\nhttps://bookwyrm.social\n", "before_files": [{"content": "\"\"\" what are we here for if not for posting \"\"\"\nimport re\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom markdown import markdown\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.settings import DOMAIN\nfrom bookwyrm.utils import regex\nfrom .helpers import handle_remote_webfinger\nfrom .reading import edit_readthrough\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass CreateStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_type): # pylint: disable=unused-argument\n \"\"\"compose view (used for delete-and-redraft\"\"\"\n book = get_object_or_404(models.Edition, id=request.GET.get(\"book\"))\n data = {\"book\": book}\n return TemplateResponse(request, \"compose.html\", data)\n\n def post(self, request, status_type):\n \"\"\"create status of whatever type\"\"\"\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, \"%sForm\" % status_type)(request.POST)\n except AttributeError:\n return HttpResponseBadRequest()\n if not form.is_valid():\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n status = form.save(commit=False)\n if not status.sensitive and status.content_warning:\n # the cw text field remains populated when you click \"remove\"\n status.content_warning = None\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n r\"%s([^@]|$)\" % mention_text,\n r'<a href=\"%s\">%s</a>\\g<1>' % (mention_user.remote_id, mention_text),\n content,\n )\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n\n # don't apply formatting to generated notes\n if not isinstance(status, 
models.GeneratedNote) and content:\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, \"quote\"):\n status.quote = to_markdown(status.quote)\n\n status.save(created=True)\n\n # update a readthorugh, if needed\n edit_readthrough(request)\n\n return redirect(\"/\")\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n if status.user != request.user and not request.user.has_perm(\"moderate_post\"):\n return HttpResponseBadRequest()\n\n # perform deletion\n status.delete()\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteAndRedraft(View):\n \"\"\"delete a status but let the user re-create it\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=status_id\n )\n if isinstance(status, (models.GeneratedNote, models.ReviewRating)):\n return HttpResponseBadRequest()\n\n # don't let people redraft other people's statuses\n if status.user != request.user:\n return HttpResponseBadRequest()\n\n status_type = status.status_type.lower()\n if status.reply_parent:\n status_type = \"reply\"\n\n data = {\n \"draft\": status,\n \"type\": status_type,\n }\n if hasattr(status, \"book\"):\n data[\"book\"] = status.book\n elif status.mention_books:\n data[\"book\"] = status.mention_books.first()\n\n # perform deletion\n status.delete()\n return TemplateResponse(request, \"compose.html\", data)\n\n\ndef find_mentions(content):\n \"\"\"detect @mentions in raw status content\"\"\"\n if not content:\n return\n for match in re.finditer(regex.STRICT_USERNAME, content):\n username = match.group().strip().split(\"@\")[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = \"@\".join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n \"\"\"detect and format links\"\"\"\n return re.sub(\n r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % regex.DOMAIN,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content,\n )\n\n\ndef to_markdown(content):\n \"\"\"catch links and convert to markdown\"\"\"\n content = markdown(content)\n content = format_links(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n", "path": "bookwyrm/views/status.py"}]} | 2,345 | 169 |
gh_patches_debug_4651 | rasdani/github-patches | git_diff | svthalia__concrexit-1782 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Shift product sales are not calculated correctly
### Describe the bug
For some reason, the product sales for shifts in the sales app are not calculated properly:
### How to reproduce
1. Check staging.thalia.nu, shift 1
2.
<img width="453" alt="image" src="https://user-images.githubusercontent.com/7915741/123234193-06af2500-d4db-11eb-99b9-a8be74602c1a.png">
3. It should be waaaaay more as there are 200+ orders
### Expected behaviour
Correct calculation
</issue>
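
The patch later in this row removes `.distinct()` and appends an empty `.order_by()` to the aggregation queryset. The likely mechanism is Django's documented interaction between `values()` + `annotate()` and a default `Meta.ordering`: the ordering field is silently added to the `GROUP BY`, so each order becomes its own group, and because the result dict is keyed by product name only the last small per-order sum survives. A sketch of the pitfall with hypothetical models (illustrative names, assumed to live inside a configured Django app):

```python
from django.db import models
from django.db.models import Sum

class Order(models.Model):            # hypothetical stand-in for the sales order model
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ["created_at"]      # a default ordering like this leaks into GROUP BY

class OrderItem(models.Model):         # hypothetical stand-in
    order = models.ForeignKey(Order, related_name="order_items", on_delete=models.CASCADE)
    product = models.CharField(max_length=50)
    amount = models.PositiveIntegerField()

# Groups by (product, created_at): one row per order, so each sum covers a single order.
Order.objects.values("order_items__product").annotate(sold=Sum("order_items__amount"))

# An empty order_by() clears the inherited ordering and groups by product alone.
Order.objects.values("order_items__product").annotate(sold=Sum("order_items__amount")).order_by()
```
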
<code>
[start of website/sales/models/shift.py]
1 from django.core.exceptions import ValidationError
2 from django.db import models
3 from django.db.models import (
4 Sum,
5 Q,
6 Count,
7 )
8 from django.db.models.expressions import RawSQL
9 from django.utils import timezone
10 from django.utils.translation import gettext_lazy as _
11 from queryable_properties.managers import QueryablePropertiesManager
12 from queryable_properties.properties import (
13 RangeCheckProperty,
14 queryable_property,
15 )
16
17 from activemembers.models import MemberGroup
18 from sales.models.product import ProductList
19
20
21 class Shift(models.Model):
22 class Meta:
23 permissions = [
24 ("override_manager", _("Can access all shifts as manager")),
25 ]
26
27 objects = QueryablePropertiesManager()
28
29 start = models.DateTimeField(verbose_name=_("start"), blank=False, null=False,)
30 end = models.DateTimeField(
31 verbose_name=_("end"),
32 blank=False,
33 null=False,
34 help_text=_(
35 "The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift."
36 ),
37 )
38
39 title = models.CharField(
40 verbose_name=_("title"), blank=True, null=True, max_length=100
41 )
42
43 product_list = models.ForeignKey(
44 ProductList,
45 verbose_name=_("product list"),
46 blank=False,
47 null=False,
48 on_delete=models.PROTECT,
49 )
50
51 managers = models.ManyToManyField(
52 MemberGroup, verbose_name=_("managers"), related_name="manager_shifts"
53 )
54
55 locked = models.BooleanField(
56 verbose_name=_("locked"),
57 blank=False,
58 null=False,
59 default=False,
60 help_text=_(
61 "Prevent orders being changed or created for this shift. This will also clean up all unpaid orders in this shift."
62 ),
63 )
64
65 def clean(self):
66 super().clean()
67 errors = {}
68
69 if self.orders.filter(created_at__lt=self.start):
70 errors.update(
71 {
72 "start": _(
73 "There are already orders created in this shift before this start time."
74 )
75 }
76 )
77
78 if self.end and self.start and self.end <= self.start:
79 errors.update({"end": _("End cannot be before start.")})
80
81 if errors:
82 raise ValidationError(errors)
83
84 def save(
85 self, force_insert=False, force_update=False, using=None, update_fields=None
86 ):
87 if self.locked:
88 self.orders.filter(
89 (Q(payment__isnull=True) & Q(total_amount__gt=0))
90 | Q(order_items__isnull=True)
91 ).delete()
92
93 return super(Shift, self).save(force_insert, force_update, using, update_fields)
94
95 active = RangeCheckProperty("start", "end", timezone.now)
96
97 @queryable_property(annotation_based=True)
98 @classmethod
99 def total_revenue(cls):
100 return RawSQL(
101 """(SELECT CAST(COALESCE(SUM("__orders"."total__"), 0) AS NUMERIC) AS "shift_revenue__"
102 FROM (
103 SELECT "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount", "sales_order"."payment_id", CAST(SUM("sales_orderitem"."total") AS NUMERIC) AS "subtotal__", CAST((SUM("sales_orderitem"."total") - COALESCE("sales_order"."discount", 0)) AS NUMERIC) AS "total__", SUM("sales_orderitem"."amount") AS "num_items__"
104 FROM "sales_order" LEFT JOIN "sales_orderitem" ON "sales_orderitem"."order_id" = "sales_order"."id"
105 GROUP BY "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount"
106 ) AS "__orders"
107 WHERE "__orders"."shift_id"="sales_shift"."id"
108 )""",
109 [],
110 )
111
112 @queryable_property(annotation_based=True)
113 @classmethod
114 def total_revenue_paid(cls):
115 return RawSQL(
116 """(SELECT CAST(COALESCE(SUM("__orders"."total__"), 0) AS NUMERIC) AS "shift_revenue__"
117 FROM (
118 SELECT "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount", "sales_order"."payment_id", CAST(SUM("sales_orderitem"."total") AS NUMERIC) AS "subtotal__", CAST((SUM("sales_orderitem"."total") - COALESCE("sales_order"."discount", 0)) AS NUMERIC) AS "total__", SUM("sales_orderitem"."amount") AS "num_items__"
119 FROM "sales_order" LEFT JOIN "sales_orderitem" ON "sales_orderitem"."order_id" = "sales_order"."id"
120 GROUP BY "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount"
121 ) AS "__orders"
122 WHERE "__orders"."shift_id"="sales_shift"."id"
123 AND ("__orders"."payment_id" IS NOT NULL OR ("__orders"."payment_id" IS NULL AND "__orders"."total__"=0))
124 )""",
125 [],
126 )
127
128 @queryable_property(annotation_based=True)
129 @classmethod
130 def num_orders(cls):
131 return Count("orders")
132
133 @queryable_property(annotation_based=True)
134 @classmethod
135 def num_orders_paid(cls):
136 return RawSQL(
137 """(SELECT COUNT(*) AS "num_orders__"
138 FROM (
139 SELECT "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount", "sales_order"."payment_id", CAST(SUM("sales_orderitem"."total") AS NUMERIC) AS "subtotal__", CAST((SUM("sales_orderitem"."total") - COALESCE("sales_order"."discount", 0)) AS NUMERIC) AS "total__", SUM("sales_orderitem"."amount") AS "num_items__"
140 FROM "sales_order" LEFT JOIN "sales_orderitem" ON "sales_orderitem"."order_id" = "sales_order"."id"
141 GROUP BY "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount"
142 ) AS "__orders"
143 WHERE "__orders"."shift_id"="sales_shift"."id"
144 AND ("__orders"."payment_id" IS NOT NULL OR ("__orders"."payment_id" IS NULL AND "__orders"."total__"=0))
145 )""",
146 [],
147 )
148
149 @property
150 def product_sales(self):
151 qs = (
152 self.orders.exclude(order_items__isnull=True)
153 .values("order_items__product")
154 .distinct()
155 .annotate(sold=Sum("order_items__amount"))
156 )
157 return {
158 item[0]: item[1]
159 for item in qs.values_list("order_items__product__product__name", "sold")
160 }
161
162 def __str__(self):
163 if self.title and self.title != "":
164 return f"Shift {self.pk} - {self.title}"
165 return f"Shift {self.pk}"
166
[end of website/sales/models/shift.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/sales/models/shift.py b/website/sales/models/shift.py
--- a/website/sales/models/shift.py
+++ b/website/sales/models/shift.py
@@ -151,8 +151,8 @@
qs = (
self.orders.exclude(order_items__isnull=True)
.values("order_items__product")
- .distinct()
.annotate(sold=Sum("order_items__amount"))
+ .order_by()
)
return {
item[0]: item[1]
| {"golden_diff": "diff --git a/website/sales/models/shift.py b/website/sales/models/shift.py\n--- a/website/sales/models/shift.py\n+++ b/website/sales/models/shift.py\n@@ -151,8 +151,8 @@\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n- .distinct()\n .annotate(sold=Sum(\"order_items__amount\"))\n+ .order_by()\n )\n return {\n item[0]: item[1]\n", "issue": "Shift product sales are not calculated correctly\n### Describe the bug\r\nFor some reason, the product sales for shifts in the sales app are not calculated properly:\r\n\r\n### How to reproduce\r\n1. Check staging.thalia.nu, shift 1\r\n2. \r\n<img width=\"453\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7915741/123234193-06af2500-d4db-11eb-99b9-a8be74602c1a.png\">\r\n3. It should be waaaaay more as there are 200+ orders\r\n\r\n### Expected behaviour\r\nCorrect calculation\r\n\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import (\n Sum,\n Q,\n Count,\n)\nfrom django.db.models.expressions import RawSQL\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import (\n RangeCheckProperty,\n queryable_property,\n)\n\nfrom activemembers.models import MemberGroup\nfrom sales.models.product import ProductList\n\n\nclass Shift(models.Model):\n class Meta:\n permissions = [\n (\"override_manager\", _(\"Can access all shifts as manager\")),\n ]\n\n objects = QueryablePropertiesManager()\n\n start = models.DateTimeField(verbose_name=_(\"start\"), blank=False, null=False,)\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift.\"\n ),\n )\n\n title = models.CharField(\n verbose_name=_(\"title\"), blank=True, null=True, max_length=100\n )\n\n product_list = models.ForeignKey(\n ProductList,\n verbose_name=_(\"product list\"),\n blank=False,\n null=False,\n on_delete=models.PROTECT,\n )\n\n managers = models.ManyToManyField(\n MemberGroup, verbose_name=_(\"managers\"), related_name=\"manager_shifts\"\n )\n\n locked = models.BooleanField(\n verbose_name=_(\"locked\"),\n blank=False,\n null=False,\n default=False,\n help_text=_(\n \"Prevent orders being changed or created for this shift. 
This will also clean up all unpaid orders in this shift.\"\n ),\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n \"There are already orders created in this shift before this start time.\"\n )\n }\n )\n\n if self.end and self.start and self.end <= self.start:\n errors.update({\"end\": _(\"End cannot be before start.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.locked:\n self.orders.filter(\n (Q(payment__isnull=True) & Q(total_amount__gt=0))\n | Q(order_items__isnull=True)\n ).delete()\n\n return super(Shift, self).save(force_insert, force_update, using, update_fields)\n\n active = RangeCheckProperty(\"start\", \"end\", timezone.now)\n\n @queryable_property(annotation_based=True)\n @classmethod\n def total_revenue(cls):\n return RawSQL(\n \"\"\"(SELECT CAST(COALESCE(SUM(\"__orders\".\"total__\"), 0) AS NUMERIC) AS \"shift_revenue__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n )\"\"\",\n [],\n )\n\n @queryable_property(annotation_based=True)\n @classmethod\n def total_revenue_paid(cls):\n return RawSQL(\n \"\"\"(SELECT CAST(COALESCE(SUM(\"__orders\".\"total__\"), 0) AS NUMERIC) AS \"shift_revenue__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n AND (\"__orders\".\"payment_id\" IS NOT NULL OR (\"__orders\".\"payment_id\" IS NULL AND \"__orders\".\"total__\"=0))\n )\"\"\",\n [],\n )\n\n @queryable_property(annotation_based=True)\n @classmethod\n def num_orders(cls):\n return Count(\"orders\")\n\n @queryable_property(annotation_based=True)\n @classmethod\n def num_orders_paid(cls):\n return RawSQL(\n \"\"\"(SELECT COUNT(*) AS \"num_orders__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", 
\"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n AND (\"__orders\".\"payment_id\" IS NOT NULL OR (\"__orders\".\"payment_id\" IS NULL AND \"__orders\".\"total__\"=0))\n )\"\"\",\n [],\n )\n\n @property\n def product_sales(self):\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n .distinct()\n .annotate(sold=Sum(\"order_items__amount\"))\n )\n return {\n item[0]: item[1]\n for item in qs.values_list(\"order_items__product__product__name\", \"sold\")\n }\n\n def __str__(self):\n if self.title and self.title != \"\":\n return f\"Shift {self.pk} - {self.title}\"\n return f\"Shift {self.pk}\"\n", "path": "website/sales/models/shift.py"}]} | 2,541 | 120 |
gh_patches_debug_22640 | rasdani/github-patches | git_diff | netket__netket-1086 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TensorBoardLog with MPI
Not sure this can be solved in netket, but when using TensorBoardLog with MPI, I'm getting errors.
MWE (mpitest.py):
```python
import netket as nk
logger = nk.logging.TensorBoardLog("test_mpi_log")
```
When you run the above with
```console
mpirun -np 4 python ./mpitest.py
```
You'll see messages like:
```console
KeyError: 'test_mpi_log'
FileExistsError: [Errno 17] File exists: 'test_mpi_log'
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/localjw/lib/python3.8/site-packages/tensorboardX/record_writer.py", line 47, in directory_check
factory = REGISTERED_FACTORIES[prefix]
```
</issue>
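
Every MPI rank executes the script, so each rank builds its own `SummaryWriter("test_mpi_log")`, and tensorboardX appears to touch the log directory during construction — the ranks then race on the same path, which matches the `FileExistsError` and the `KeyError` raised from `record_writer.directory_check`. The fix shown at the end of this row defers construction of the writer until the first logging call, so only the process that actually logs (typically the root rank in netket drivers) ever creates the directory. A minimal sketch of that pattern, with an illustrative class name:

```python
# Minimal sketch of the lazy-construction pattern from the fix; the class name and
# the logging body are simplified, not the full netket implementation.
class LazyTensorBoardLog:
    def __init__(self, *args, **kwargs):
        self._init_args = args        # nothing touches the log directory yet
        self._init_kwargs = kwargs
        self._writer = None

    def __call__(self, step, item, machine=None):
        if self._writer is None:      # built only by the process that actually logs
            from tensorboardX import SummaryWriter
            self._writer = SummaryWriter(*self._init_args, **self._init_kwargs)
        # ... add scalars and flush, as in TensorBoardLog.__call__ ...
```
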
<code>
[start of netket/logging/tensorboard.py]
1 # Copyright 2021 The NetKet Authors - All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from numbers import Number
16
17 from netket.utils import deprecated
18
19
20 def tree_log(tree, root, data):
21 """
22 Maps all elements in tree, recursively calling tree_log with a new root string,
23 and when it reaches leaves pushes (string, leave) tuples to data.
24
25 Args:
26 tree: a pytree where the leaf nodes contain data
27 root: the root of the tags used to log to tensorboard
28 data: a container modified in place
29
30 """
31 if tree is None:
32 return
33 elif isinstance(tree, list):
34 for (i, val) in enumerate(tree):
35 tree_log(val, root + f"/{i}", data)
36
37 elif isinstance(tree, list) and hasattr(tree, "_fields"):
38 for key in tree._fields:
39 tree_log(getattr(tree, key), root + f"/{key}", data)
40
41 elif isinstance(tree, tuple):
42 for (i, val) in enumerate(tree):
43 tree_log(val, root + f"/{i}", data)
44
45 elif isinstance(tree, dict):
46 for key, value in tree.items():
47 tree_log(value, root + f"/{key}", data) # noqa: F722
48
49 elif hasattr(tree, "to_compound"):
50 tree_log(tree.to_compound()[1], root, data) # noqa: F722
51
52 elif hasattr(tree, "to_dict"):
53 tree_log(tree.to_dict(), root, data) # noqa: F722
54
55 elif isinstance(tree, complex):
56 tree_log(tree.real, root + "/re", data) # noqa: F722
57 tree_log(tree.imag, root + "/im", data) # noqa: F722
58
59 else:
60 data.append((root, tree))
61
62
63 class TensorBoardLog:
64 """
65 Creates a tensorboard logger using tensorboardX's summarywriter.
66 Refer to its documentation for further details
67
68 https://tensorboardx.readthedocs.io/en/latest/tensorboard.html
69
70 TensorBoardX must be installed.
71
72 Args:
73 logdir (string): Save directory location. Default is
74 runs/**CURRENT_DATETIME_HOSTNAME**, which changes after each run.
75 Use hierarchical folder structure to compare
76 between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2', etc.
77 for each new experiment to compare across them.
78 comment (string): Comment logdir suffix appended to the default
79 ``logdir``. If ``logdir`` is assigned, this argument has no effect.
80 purge_step (int):
81 When logging crashes at step :math:`T+X` and restarts at step :math:`T`,
82 any events whose global_step larger or equal to :math:`T` will be
83 purged and hidden from TensorBoard.
84 Note that crashed and resumed experiments should have the same ``logdir``.
85 max_queue (int): Size of the queue for pending events and
86 summaries before one of the 'add' calls forces a flush to disk.
87 Default is ten items.
88 flush_secs (int): How often, in seconds, to flush the
89 pending events and summaries to disk. Default is every two minutes.
90 filename_suffix (string): Suffix added to all event filenames in
91 the logdir directory. More details on filename construction in
92 tensorboard.summary.writer.event_file_writer.EventFileWriter.
93 write_to_disk (boolean):
94 If pass `False`, TensorBoardLog will not write to disk.
95 Examples:
96 Logging optimisation to tensorboard.
97
98 >>> import pytest; pytest.skip("skip automated test of this docstring")
99 >>>
100 >>> import netket as nk
101 >>> # create a summary writer with automatically generated folder name.
102 >>> writer = nk.logging.TensorBoardLog()
103 >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.local/
104 >>> # create a summary writer using the specified folder name.
105 >>> writer = nk.logging.TensorBoardLog("my_experiment")
106 >>> # folder location: my_experiment
107 >>> # create a summary writer with comment appended.
108 >>> writer = nk.logging.TensorBoardLog(comment="LR_0.1_BATCH_16")
109 >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/
110 """
111
112 def __init__(
113 self,
114 *args,
115 **kwargs,
116 ):
117 from tensorboardX import SummaryWriter
118
119 self._writer = SummaryWriter(*args, **kwargs)
120
121 self._old_step = 0
122
123 def __call__(self, step, item, machine):
124
125 data = []
126 tree_log(item, "", data)
127
128 for key, val in data:
129 if isinstance(val, Number):
130 self._writer.add_scalar(key[1:], val, step)
131
132 self._writer.flush()
133 self._old_step = step
134
135 def _flush_log(self):
136 self._writer.flush()
137
138 def _flush_params(self, _):
139 return None
140
141 def flush(self, machine=None):
142 """
143 Writes to file the content of this logger.
144
145 :param machine: optionally also writes the parameters of the machine.
146 """
147 self._flush_log()
148
149 if machine is not None:
150 self._flush_params(machine)
151
152
153 # TODO: deprecate in 3.1
154 @deprecated(
155 "TBLog has been renamed to `TensorBoardLog` and will be removed in the next"
156 "minor release. Please update your usages."
157 )
158 def TBLog(*args, **kwargs):
159 return TensorBoardLog(*args, **kwargs)
160
[end of netket/logging/tensorboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netket/logging/tensorboard.py b/netket/logging/tensorboard.py
--- a/netket/logging/tensorboard.py
+++ b/netket/logging/tensorboard.py
@@ -114,13 +114,24 @@
*args,
**kwargs,
):
- from tensorboardX import SummaryWriter
+ self._init_args = args
+ """Store the args for the lazily initialized SummaryWriter's constructor."""
+ self._init_kwargs = kwargs
+ """Store the kwargs for the lazily initialized SummaryWriter's constructor."""
- self._writer = SummaryWriter(*args, **kwargs)
+ self._writer = None
+ """Lazily initialized summarywriter constructor"""
self._old_step = 0
+ def _init_tensoboard(self):
+ from tensorboardX import SummaryWriter
+
+ self._writer = SummaryWriter(*self._init_args, **self._init_kwargs)
+
def __call__(self, step, item, machine):
+ if self._writer is None:
+ self._init_tensoboard()
data = []
tree_log(item, "", data)
@@ -133,7 +144,8 @@
self._old_step = step
def _flush_log(self):
- self._writer.flush()
+ if self._writer is not None:
+ self._writer.flush()
def _flush_params(self, _):
return None
| {"golden_diff": "diff --git a/netket/logging/tensorboard.py b/netket/logging/tensorboard.py\n--- a/netket/logging/tensorboard.py\n+++ b/netket/logging/tensorboard.py\n@@ -114,13 +114,24 @@\n *args,\n **kwargs,\n ):\n- from tensorboardX import SummaryWriter\n+ self._init_args = args\n+ \"\"\"Store the args for the lazily initialized SummaryWriter's constructor.\"\"\"\n+ self._init_kwargs = kwargs\n+ \"\"\"Store the kwargs for the lazily initialized SummaryWriter's constructor.\"\"\"\n \n- self._writer = SummaryWriter(*args, **kwargs)\n+ self._writer = None\n+ \"\"\"Lazily initialized summarywriter constructor\"\"\"\n \n self._old_step = 0\n \n+ def _init_tensoboard(self):\n+ from tensorboardX import SummaryWriter\n+\n+ self._writer = SummaryWriter(*self._init_args, **self._init_kwargs)\n+\n def __call__(self, step, item, machine):\n+ if self._writer is None:\n+ self._init_tensoboard()\n \n data = []\n tree_log(item, \"\", data)\n@@ -133,7 +144,8 @@\n self._old_step = step\n \n def _flush_log(self):\n- self._writer.flush()\n+ if self._writer is not None:\n+ self._writer.flush()\n \n def _flush_params(self, _):\n return None\n", "issue": "TensorBoardLog with MPI\nNot sure this can be solved in netket, but when using TensorBoardLog with MPI, I'm getting errors.\r\n\r\nMWE (mpitest.py):\r\n```python\r\nimport netket as nk\r\nlogger = nk.logging.TensorBoardLog(\"test_mpi_log\")\r\n```\r\n\r\nWhen you run the above with\r\n```console\r\nmpirun -np 4 python ./mpitest.py\r\n```\r\n\r\nYou'll see messages like:\r\n```console\r\nKeyError: 'test_mpi_log'\r\n\r\nFileExistsError: [Errno 17] File exists: 'test_mpi_log'\r\nTraceback (most recent call last):\r\n File \"/usr/local/anaconda3/envs/localjw/lib/python3.8/site-packages/tensorboardX/record_writer.py\", line 47, in directory_check\r\n factory = REGISTERED_FACTORIES[prefix]\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 The NetKet Authors - All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom numbers import Number\n\nfrom netket.utils import deprecated\n\n\ndef tree_log(tree, root, data):\n \"\"\"\n Maps all elements in tree, recursively calling tree_log with a new root string,\n and when it reaches leaves pushes (string, leave) tuples to data.\n\n Args:\n tree: a pytree where the leaf nodes contain data\n root: the root of the tags used to log to tensorboard\n data: a container modified in place\n\n \"\"\"\n if tree is None:\n return\n elif isinstance(tree, list):\n for (i, val) in enumerate(tree):\n tree_log(val, root + f\"/{i}\", data)\n\n elif isinstance(tree, list) and hasattr(tree, \"_fields\"):\n for key in tree._fields:\n tree_log(getattr(tree, key), root + f\"/{key}\", data)\n\n elif isinstance(tree, tuple):\n for (i, val) in enumerate(tree):\n tree_log(val, root + f\"/{i}\", data)\n\n elif isinstance(tree, dict):\n for key, value in tree.items():\n tree_log(value, root + f\"/{key}\", data) # noqa: F722\n\n elif hasattr(tree, \"to_compound\"):\n tree_log(tree.to_compound()[1], root, 
data) # noqa: F722\n\n elif hasattr(tree, \"to_dict\"):\n tree_log(tree.to_dict(), root, data) # noqa: F722\n\n elif isinstance(tree, complex):\n tree_log(tree.real, root + \"/re\", data) # noqa: F722\n tree_log(tree.imag, root + \"/im\", data) # noqa: F722\n\n else:\n data.append((root, tree))\n\n\nclass TensorBoardLog:\n \"\"\"\n Creates a tensorboard logger using tensorboardX's summarywriter.\n Refer to its documentation for further details\n\n https://tensorboardx.readthedocs.io/en/latest/tensorboard.html\n\n TensorBoardX must be installed.\n\n Args:\n logdir (string): Save directory location. Default is\n runs/**CURRENT_DATETIME_HOSTNAME**, which changes after each run.\n Use hierarchical folder structure to compare\n between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2', etc.\n for each new experiment to compare across them.\n comment (string): Comment logdir suffix appended to the default\n ``logdir``. If ``logdir`` is assigned, this argument has no effect.\n purge_step (int):\n When logging crashes at step :math:`T+X` and restarts at step :math:`T`,\n any events whose global_step larger or equal to :math:`T` will be\n purged and hidden from TensorBoard.\n Note that crashed and resumed experiments should have the same ``logdir``.\n max_queue (int): Size of the queue for pending events and\n summaries before one of the 'add' calls forces a flush to disk.\n Default is ten items.\n flush_secs (int): How often, in seconds, to flush the\n pending events and summaries to disk. Default is every two minutes.\n filename_suffix (string): Suffix added to all event filenames in\n the logdir directory. More details on filename construction in\n tensorboard.summary.writer.event_file_writer.EventFileWriter.\n write_to_disk (boolean):\n If pass `False`, TensorBoardLog will not write to disk.\n Examples:\n Logging optimisation to tensorboard.\n\n >>> import pytest; pytest.skip(\"skip automated test of this docstring\")\n >>>\n >>> import netket as nk\n >>> # create a summary writer with automatically generated folder name.\n >>> writer = nk.logging.TensorBoardLog()\n >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.local/\n >>> # create a summary writer using the specified folder name.\n >>> writer = nk.logging.TensorBoardLog(\"my_experiment\")\n >>> # folder location: my_experiment\n >>> # create a summary writer with comment appended.\n >>> writer = nk.logging.TensorBoardLog(comment=\"LR_0.1_BATCH_16\")\n >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/\n \"\"\"\n\n def __init__(\n self,\n *args,\n **kwargs,\n ):\n from tensorboardX import SummaryWriter\n\n self._writer = SummaryWriter(*args, **kwargs)\n\n self._old_step = 0\n\n def __call__(self, step, item, machine):\n\n data = []\n tree_log(item, \"\", data)\n\n for key, val in data:\n if isinstance(val, Number):\n self._writer.add_scalar(key[1:], val, step)\n\n self._writer.flush()\n self._old_step = step\n\n def _flush_log(self):\n self._writer.flush()\n\n def _flush_params(self, _):\n return None\n\n def flush(self, machine=None):\n \"\"\"\n Writes to file the content of this logger.\n\n :param machine: optionally also writes the parameters of the machine.\n \"\"\"\n self._flush_log()\n\n if machine is not None:\n self._flush_params(machine)\n\n\n# TODO: deprecate in 3.1\n@deprecated(\n \"TBLog has been renamed to `TensorBoardLog` and will be removed in the next\"\n \"minor release. 
Please update your usages.\"\n)\ndef TBLog(*args, **kwargs):\n return TensorBoardLog(*args, **kwargs)\n", "path": "netket/logging/tensorboard.py"}]} | 2,458 | 325 |
gh_patches_debug_11884 | rasdani/github-patches | git_diff | redis__redis-py-396 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SentinelConnectionPool. Error when working under uwsgi
It raises the following exception:
```
redis.sentinel in get_master_address
AttributeError: 'int' object has no attribute 'discover_master'
```
The reason is `ConnectionPool._checkpid()` calls `__init__()` ([connection.py#L397](https://github.com/andymccurdy/redis-py/blob/7210906be09b969e5833de5a967178c4d6989f14/redis/connection.py#L397)) with
`self.connection_class, self.max_connections`
instead of
`self.service_name, self.sentinel_manager`.
</issue>
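
Concretely: under uwsgi the pool is created before the workers fork, so the first command in a worker triggers `_checkpid()`, which re-runs `__init__()` positionally with the base class's `(connection_class, max_connections)` arguments. In `SentinelConnectionPool` those positions mean `(service_name, sentinel_manager)`, so `sentinel_manager` ends up holding an int and `discover_master` is looked up on it. The patch at the end of this row overrides `_checkpid` in the subclass; a simplified sketch of that override:

```python
import os
from redis.connection import ConnectionPool

class SentinelConnectionPool(ConnectionPool):
    # ... __init__(self, service_name, sentinel_manager, **kwargs) as in redis/sentinel.py ...

    def _checkpid(self):
        if self.pid != os.getpid():
            self.disconnect()
            # Re-init with this subclass's own signature so service_name and
            # sentinel_manager keep their meaning after the fork.
            self.__init__(self.service_name, self.sentinel_manager,
                          connection_class=self.connection_class,
                          max_connections=self.max_connections,
                          **self.connection_kwargs)
```
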
<code>
[start of redis/sentinel.py]
1 import random
2
3 from redis.client import StrictRedis
4 from redis.connection import ConnectionPool, Connection
5 from redis.exceptions import ConnectionError, ResponseError
6 from redis._compat import xrange, nativestr
7
8
9 class MasterNotFoundError(ConnectionError):
10 pass
11
12
13 class SlaveNotFoundError(ConnectionError):
14 pass
15
16
17 class SentinelManagedConnection(Connection):
18 def __init__(self, **kwargs):
19 self.connection_pool = kwargs.pop('connection_pool')
20 super(SentinelManagedConnection, self).__init__(**kwargs)
21
22 def connect_to(self, address):
23 self.host, self.port = address
24 super(SentinelManagedConnection, self).connect()
25 if self.connection_pool.check_connection:
26 self.send_command('PING')
27 if nativestr(self.read_response()) != 'PONG':
28 raise ConnectionError('PING failed')
29
30 def connect(self):
31 if self._sock:
32 return # already connected
33 if self.connection_pool.is_master:
34 self.connect_to(self.connection_pool.get_master_address())
35 else:
36 for slave in self.connection_pool.rotate_slaves():
37 try:
38 return self.connect_to(slave)
39 except ConnectionError:
40 continue
41 raise SlaveNotFoundError # Never be here
42
43
44 class SentinelConnectionPool(ConnectionPool):
45 """
46 Sentinel backed connection pool.
47
48 If ``check_connection`` flag is set to True, SentinelManagedConnection
49 sends a PING command right after establishing the connection.
50 """
51
52 def __init__(self, service_name, sentinel_manager, **kwargs):
53 kwargs['connection_class'] = kwargs.get(
54 'connection_class', SentinelManagedConnection)
55 self.is_master = kwargs.pop('is_master', True)
56 self.check_connection = kwargs.pop('check_connection', False)
57 super(SentinelConnectionPool, self).__init__(**kwargs)
58 self.connection_kwargs['connection_pool'] = self
59 self.service_name = service_name
60 self.sentinel_manager = sentinel_manager
61 self.master_address = None
62 self.slave_rr_counter = None
63
64 def get_master_address(self):
65 master_address = self.sentinel_manager.discover_master(
66 self.service_name)
67 if not self.is_master:
68 pass
69 elif self.master_address is None:
70 self.master_address = master_address
71 elif master_address != self.master_address:
72 self.disconnect() # Master address changed
73 return master_address
74
75 def rotate_slaves(self):
76 "Round-robin slave balancer"
77 slaves = self.sentinel_manager.discover_slaves(self.service_name)
78 if slaves:
79 if self.slave_rr_counter is None:
80 self.slave_rr_counter = random.randint(0, len(slaves) - 1)
81 for _ in xrange(len(slaves)):
82 self.slave_rr_counter = (
83 self.slave_rr_counter + 1) % len(slaves)
84 slave = slaves[self.slave_rr_counter]
85 yield slave
86 # Fallback to the master connection
87 try:
88 yield self.get_master_address()
89 except MasterNotFoundError:
90 pass
91 raise SlaveNotFoundError('No slave found for %r' % (self.service_name))
92
93
94 class Sentinel(object):
95 """
96 Redis Sentinel cluster client
97
98 >>> from redis.sentinel import Sentinel
99 >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
100 >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
101 >>> master.set('foo', 'bar')
102 >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
103 >>> slave.get('foo')
104 'bar'
105
106 ``sentinels`` is a list of sentinel nodes. Each node is represented by
107 a pair (hostname, port).
108
109 Use ``socket_timeout`` to specify a timeout for sentinel clients.
110 It's recommended to use short timeouts.
111
112 Use ``min_other_sentinels`` to filter out sentinels with not enough peers.
113 """
114
115 def __init__(self, sentinels, password=None, socket_timeout=None,
116 min_other_sentinels=0):
117 self.sentinels = [StrictRedis(hostname, port, password=password,
118 socket_timeout=socket_timeout)
119 for hostname, port in sentinels]
120 self.min_other_sentinels = min_other_sentinels
121
122 def check_master_state(self, state, service_name):
123 if not state['is_master'] or state['is_sdown'] or state['is_odown']:
124 return False
125 # Check if our sentinel doesn't see other nodes
126 if state['num-other-sentinels'] < self.min_other_sentinels:
127 return False
128 return True
129
130 def discover_master(self, service_name):
131 """
132 Asks sentinel servers for the Redis master's address corresponding
133 to the service labeled ``service_name``.
134
135 Returns a pair (address, port) or raises MasterNotFoundError if no
136 master is found.
137 """
138 for sentinel_no, sentinel in enumerate(self.sentinels):
139 try:
140 masters = sentinel.sentinel_masters()
141 except ConnectionError:
142 continue
143 state = masters.get(service_name)
144 if state and self.check_master_state(state, service_name):
145 # Put this sentinel at the top of the list
146 self.sentinels[0], self.sentinels[sentinel_no] = (
147 sentinel, self.sentinels[0])
148 return state['ip'], state['port']
149 raise MasterNotFoundError("No master found for %r" % (service_name,))
150
151 def filter_slaves(self, slaves):
152 "Remove slaves that are in an ODOWN or SDOWN state"
153 slaves_alive = []
154 for slave in slaves:
155 if slave['is_odown'] or slave['is_sdown']:
156 continue
157 slaves_alive.append((slave['ip'], slave['port']))
158 return slaves_alive
159
160 def discover_slaves(self, service_name):
161 "Returns a list of alive slaves for service ``service_name``"
162 for sentinel in self.sentinels:
163 try:
164 slaves = sentinel.sentinel_slaves(service_name)
165 except (ConnectionError, ResponseError):
166 continue
167 slaves = self.filter_slaves(slaves)
168 if slaves:
169 return slaves
170 return []
171
172 def master_for(self, service_name, redis_class=StrictRedis,
173 connection_pool_class=SentinelConnectionPool, **kwargs):
174 """
175 Returns a redis client instance for the ``service_name`` master.
176
177 A SentinelConnectionPool class is used to retrive the master's
178 address before establishing a new connection.
179
180 NOTE: If the master's address has changed, any cached connections to
181 the old master are closed.
182
183 By default clients will be a redis.StrictRedis instance. Specify a
184 different class to the ``redis_class`` argument if you desire
185 something different.
186
187 The ``connection_pool_class`` specifies the connection pool to use.
188 The SentinelConnectionPool will be used by default.
189
190 All other arguments are passed directly to the SentinelConnectionPool.
191 """
192 kwargs['is_master'] = True
193 return redis_class(connection_pool=connection_pool_class(
194 service_name, self, **kwargs))
195
196 def slave_for(self, service_name, redis_class=StrictRedis,
197 connection_pool_class=SentinelConnectionPool, **kwargs):
198 """
199 Returns redis client instance for the ``service_name`` slave(s).
200
201 A SentinelConnectionPool class is used to retrive the slave's
202 address before establishing a new connection.
203
204 By default clients will be a redis.StrictRedis instance. Specify a
205 different class to the ``redis_class`` argument if you desire
206 something different.
207
208 The ``connection_pool_class`` specifies the connection pool to use.
209 The SentinelConnectionPool will be used by default.
210
211 All other arguments are passed directly to the SentinelConnectionPool.
212 """
213 kwargs['is_master'] = False
214 return redis_class(connection_pool=connection_pool_class(
215 service_name, self, **kwargs))
216
[end of redis/sentinel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redis/sentinel.py b/redis/sentinel.py
--- a/redis/sentinel.py
+++ b/redis/sentinel.py
@@ -1,3 +1,4 @@
+import os
import random
from redis.client import StrictRedis
@@ -90,6 +91,14 @@
pass
raise SlaveNotFoundError('No slave found for %r' % (self.service_name))
+ def _checkpid(self):
+ if self.pid != os.getpid():
+ self.disconnect()
+ self.__init__(self.service_name, self.sentinel_manager,
+ connection_class=self.connection_class,
+ max_connections=self.max_connections,
+ **self.connection_kwargs)
+
class Sentinel(object):
"""
| {"golden_diff": "diff --git a/redis/sentinel.py b/redis/sentinel.py\n--- a/redis/sentinel.py\n+++ b/redis/sentinel.py\n@@ -1,3 +1,4 @@\n+import os\n import random\n \n from redis.client import StrictRedis\n@@ -90,6 +91,14 @@\n pass\n raise SlaveNotFoundError('No slave found for %r' % (self.service_name))\n \n+ def _checkpid(self):\n+ if self.pid != os.getpid():\n+ self.disconnect()\n+ self.__init__(self.service_name, self.sentinel_manager,\n+ connection_class=self.connection_class,\n+ max_connections=self.max_connections,\n+ **self.connection_kwargs)\n+\n \n class Sentinel(object):\n \"\"\"\n", "issue": "SentinelConnectionPool. Error when working under uwsgi\nIt raises following exception:\n\n```\nredis.sentinel in get_master_address\nAttributeError: 'int' object has no attribute 'discover_master'\n```\n\nThe reason is `ConnectionPool._checkpid()` calls `__init__()` ([connection.py#L397](https://github.com/andymccurdy/redis-py/blob/7210906be09b969e5833de5a967178c4d6989f14/redis/connection.py#L397)) with \n`self.connection_class, self.max_connections`\ninstead\n`self.service_name, self.sentinel_manager`.\n\n", "before_files": [{"content": "import random\n\nfrom redis.client import StrictRedis\nfrom redis.connection import ConnectionPool, Connection\nfrom redis.exceptions import ConnectionError, ResponseError\nfrom redis._compat import xrange, nativestr\n\n\nclass MasterNotFoundError(ConnectionError):\n pass\n\n\nclass SlaveNotFoundError(ConnectionError):\n pass\n\n\nclass SentinelManagedConnection(Connection):\n def __init__(self, **kwargs):\n self.connection_pool = kwargs.pop('connection_pool')\n super(SentinelManagedConnection, self).__init__(**kwargs)\n\n def connect_to(self, address):\n self.host, self.port = address\n super(SentinelManagedConnection, self).connect()\n if self.connection_pool.check_connection:\n self.send_command('PING')\n if nativestr(self.read_response()) != 'PONG':\n raise ConnectionError('PING failed')\n\n def connect(self):\n if self._sock:\n return # already connected\n if self.connection_pool.is_master:\n self.connect_to(self.connection_pool.get_master_address())\n else:\n for slave in self.connection_pool.rotate_slaves():\n try:\n return self.connect_to(slave)\n except ConnectionError:\n continue\n raise SlaveNotFoundError # Never be here\n\n\nclass SentinelConnectionPool(ConnectionPool):\n \"\"\"\n Sentinel backed connection pool.\n\n If ``check_connection`` flag is set to True, SentinelManagedConnection\n sends a PING command right after establishing the connection.\n \"\"\"\n\n def __init__(self, service_name, sentinel_manager, **kwargs):\n kwargs['connection_class'] = kwargs.get(\n 'connection_class', SentinelManagedConnection)\n self.is_master = kwargs.pop('is_master', True)\n self.check_connection = kwargs.pop('check_connection', False)\n super(SentinelConnectionPool, self).__init__(**kwargs)\n self.connection_kwargs['connection_pool'] = self\n self.service_name = service_name\n self.sentinel_manager = sentinel_manager\n self.master_address = None\n self.slave_rr_counter = None\n\n def get_master_address(self):\n master_address = self.sentinel_manager.discover_master(\n self.service_name)\n if not self.is_master:\n pass\n elif self.master_address is None:\n self.master_address = master_address\n elif master_address != self.master_address:\n self.disconnect() # Master address changed\n return master_address\n\n def rotate_slaves(self):\n \"Round-robin slave balancer\"\n slaves = self.sentinel_manager.discover_slaves(self.service_name)\n if slaves:\n if 
self.slave_rr_counter is None:\n self.slave_rr_counter = random.randint(0, len(slaves) - 1)\n for _ in xrange(len(slaves)):\n self.slave_rr_counter = (\n self.slave_rr_counter + 1) % len(slaves)\n slave = slaves[self.slave_rr_counter]\n yield slave\n # Fallback to the master connection\n try:\n yield self.get_master_address()\n except MasterNotFoundError:\n pass\n raise SlaveNotFoundError('No slave found for %r' % (self.service_name))\n\n\nclass Sentinel(object):\n \"\"\"\n Redis Sentinel cluster client\n\n >>> from redis.sentinel import Sentinel\n >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)\n >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)\n >>> master.set('foo', 'bar')\n >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)\n >>> slave.get('foo')\n 'bar'\n\n ``sentinels`` is a list of sentinel nodes. Each node is represented by\n a pair (hostname, port).\n\n Use ``socket_timeout`` to specify a timeout for sentinel clients.\n It's recommended to use short timeouts.\n\n Use ``min_other_sentinels`` to filter out sentinels with not enough peers.\n \"\"\"\n\n def __init__(self, sentinels, password=None, socket_timeout=None,\n min_other_sentinels=0):\n self.sentinels = [StrictRedis(hostname, port, password=password,\n socket_timeout=socket_timeout)\n for hostname, port in sentinels]\n self.min_other_sentinels = min_other_sentinels\n\n def check_master_state(self, state, service_name):\n if not state['is_master'] or state['is_sdown'] or state['is_odown']:\n return False\n # Check if our sentinel doesn't see other nodes\n if state['num-other-sentinels'] < self.min_other_sentinels:\n return False\n return True\n\n def discover_master(self, service_name):\n \"\"\"\n Asks sentinel servers for the Redis master's address corresponding\n to the service labeled ``service_name``.\n\n Returns a pair (address, port) or raises MasterNotFoundError if no\n master is found.\n \"\"\"\n for sentinel_no, sentinel in enumerate(self.sentinels):\n try:\n masters = sentinel.sentinel_masters()\n except ConnectionError:\n continue\n state = masters.get(service_name)\n if state and self.check_master_state(state, service_name):\n # Put this sentinel at the top of the list\n self.sentinels[0], self.sentinels[sentinel_no] = (\n sentinel, self.sentinels[0])\n return state['ip'], state['port']\n raise MasterNotFoundError(\"No master found for %r\" % (service_name,))\n\n def filter_slaves(self, slaves):\n \"Remove slaves that are in an ODOWN or SDOWN state\"\n slaves_alive = []\n for slave in slaves:\n if slave['is_odown'] or slave['is_sdown']:\n continue\n slaves_alive.append((slave['ip'], slave['port']))\n return slaves_alive\n\n def discover_slaves(self, service_name):\n \"Returns a list of alive slaves for service ``service_name``\"\n for sentinel in self.sentinels:\n try:\n slaves = sentinel.sentinel_slaves(service_name)\n except (ConnectionError, ResponseError):\n continue\n slaves = self.filter_slaves(slaves)\n if slaves:\n return slaves\n return []\n\n def master_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns a redis client instance for the ``service_name`` master.\n\n A SentinelConnectionPool class is used to retrive the master's\n address before establishing a new connection.\n\n NOTE: If the master's address has changed, any cached connections to\n the old master are closed.\n\n By default clients will be a redis.StrictRedis instance. 
Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other arguments are passed directly to the SentinelConnectionPool.\n \"\"\"\n kwargs['is_master'] = True\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **kwargs))\n\n def slave_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns redis client instance for the ``service_name`` slave(s).\n\n A SentinelConnectionPool class is used to retrive the slave's\n address before establishing a new connection.\n\n By default clients will be a redis.StrictRedis instance. Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other arguments are passed directly to the SentinelConnectionPool.\n \"\"\"\n kwargs['is_master'] = False\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **kwargs))\n", "path": "redis/sentinel.py"}]} | 2,926 | 164 |
gh_patches_debug_35677 | rasdani/github-patches | git_diff | medtagger__MedTagger-519 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Log in user after registration
## Current Behavior
The user needs to log in after registering for the first time.
## Expected Behavior
The user should be logged into MedTagger right after filling in the registration form.
## Steps to Reproduce the Problem
1. Register new user.
2. You will be redirected to the login page.
3. Type your login once again...
</issue>
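A minimal sketch (not MedTagger's actual implementation) of what the report asks for: reuse the helpers quoted in `backend/medtagger/api/auth/business.py` below so that registering also yields an auth token the frontend can store immediately.

```python
# Sketch only: assumes the create_user/sign_in_user helpers from the file below.
from medtagger.api.auth.business import create_user, sign_in_user


def register_and_sign_in(email: str, password: str, first_name: str, last_name: str) -> dict:
    user_id = create_user(email, password, first_name, last_name)
    token = sign_in_user(email, password)  # sign the fresh account in right away
    return {'id': user_id, 'token': token}
```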
<code>
[start of backend/medtagger/api/auth/business.py]
1 """Module responsible for business logic in all Auth endpoint."""
2 from medtagger.api import InvalidArgumentsException
3 from medtagger.api.security import hash_password, verify_user_password, generate_auth_token
4 from medtagger.database.models import User
5 from medtagger.repositories import roles as RolesRepository, users as UsersRepository
6
7
8 def create_user(email: str, password: str, first_name: str, last_name: str) -> int:
9 """Create user with the given user information. Password is being hashed.
10
11 :param email: user email in string format
12 :param password: user password in string format
13 :param first_name: user first name in string format
14 :param last_name: user last name in string format
15
16 :return: id of the new user
17 """
18 user = UsersRepository.get_user_by_email(email)
19 if user:
20 raise InvalidArgumentsException('User with this email already exists')
21 password_hash = hash_password(password)
22 new_user = User(email, password_hash, first_name, last_name)
23 role = RolesRepository.get_role_with_name('volunteer')
24 if not role:
25 raise InvalidArgumentsException('Role does not exist.')
26 new_user.roles.append(role)
27 return UsersRepository.add_new_user(new_user)
28
29
30 def sign_in_user(email: str, password: str) -> str:
31 """Sign in user using given username and password.
32
33 :param email: user email in string format
34 :param password: user password in string format
35
36 :return: authentication token
37 """
38 user = UsersRepository.get_user_by_email(email)
39 if not user:
40 raise InvalidArgumentsException('User does not exist.')
41 if not verify_user_password(user, password):
42 raise InvalidArgumentsException('Password does not match.')
43 return generate_auth_token(user)
44
[end of backend/medtagger/api/auth/business.py]
[start of backend/medtagger/api/auth/service.py]
1 """Module responsible for definition of Auth service."""
2 from typing import Any
3
4 from flask import request
5 from flask_restplus import Resource
6
7 from medtagger.api import api
8 from medtagger.api.auth.business import create_user, sign_in_user
9 from medtagger.api.auth import serializers
10
11 auth_ns = api.namespace('auth', 'Auth methods')
12
13
14 @auth_ns.route('/register')
15 class Register(Resource):
16 """Register user endpoint."""
17
18 @staticmethod
19 @api.expect(serializers.new_user)
20 @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})
21 def post() -> Any:
22 """Register the user."""
23 user = request.json
24 user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
25 return {'id': user_id}, 201
26
27
28 @auth_ns.route('/sign-in')
29 class SignIn(Resource):
30 """Sign in endpoint."""
31
32 @staticmethod
33 @api.expect(serializers.sign_in)
34 @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})
35 def post() -> Any:
36 """Sign in the user."""
37 sign_in = request.json
38 token = sign_in_user(sign_in['email'], sign_in['password'])
39 return {"token": token}, 200
40
[end of backend/medtagger/api/auth/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py
--- a/backend/medtagger/api/auth/business.py
+++ b/backend/medtagger/api/auth/business.py
@@ -1,11 +1,12 @@
"""Module responsible for business logic in all Auth endpoint."""
+from typing import Tuple
from medtagger.api import InvalidArgumentsException
from medtagger.api.security import hash_password, verify_user_password, generate_auth_token
from medtagger.database.models import User
from medtagger.repositories import roles as RolesRepository, users as UsersRepository
-def create_user(email: str, password: str, first_name: str, last_name: str) -> int:
+def create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:
"""Create user with the given user information. Password is being hashed.
:param email: user email in string format
@@ -13,7 +14,7 @@
:param first_name: user first name in string format
:param last_name: user last name in string format
- :return: id of the new user
+ :return: tuple with user id and authentication token
"""
user = UsersRepository.get_user_by_email(email)
if user:
@@ -24,7 +25,9 @@
if not role:
raise InvalidArgumentsException('Role does not exist.')
new_user.roles.append(role)
- return UsersRepository.add_new_user(new_user)
+ user_id = UsersRepository.add_new_user(new_user)
+ user_token = generate_auth_token(new_user)
+ return user_id, user_token
def sign_in_user(email: str, password: str) -> str:
diff --git a/backend/medtagger/api/auth/service.py b/backend/medtagger/api/auth/service.py
--- a/backend/medtagger/api/auth/service.py
+++ b/backend/medtagger/api/auth/service.py
@@ -21,8 +21,8 @@
def post() -> Any:
"""Register the user."""
user = request.json
- user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
- return {'id': user_id}, 201
+ user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
+ return {'id': user_id, 'token': user_token}, 201
@auth_ns.route('/sign-in')
| {"golden_diff": "diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py\n--- a/backend/medtagger/api/auth/business.py\n+++ b/backend/medtagger/api/auth/business.py\n@@ -1,11 +1,12 @@\n \"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\n+from typing import Tuple\n from medtagger.api import InvalidArgumentsException\n from medtagger.api.security import hash_password, verify_user_password, generate_auth_token\n from medtagger.database.models import User\n from medtagger.repositories import roles as RolesRepository, users as UsersRepository\n \n \n-def create_user(email: str, password: str, first_name: str, last_name: str) -> int:\n+def create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:\n \"\"\"Create user with the given user information. Password is being hashed.\n \n :param email: user email in string format\n@@ -13,7 +14,7 @@\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n \n- :return: id of the new user\n+ :return: tuple with user id and authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n@@ -24,7 +25,9 @@\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n- return UsersRepository.add_new_user(new_user)\n+ user_id = UsersRepository.add_new_user(new_user)\n+ user_token = generate_auth_token(new_user)\n+ return user_id, user_token\n \n \n def sign_in_user(email: str, password: str) -> str:\ndiff --git a/backend/medtagger/api/auth/service.py b/backend/medtagger/api/auth/service.py\n--- a/backend/medtagger/api/auth/service.py\n+++ b/backend/medtagger/api/auth/service.py\n@@ -21,8 +21,8 @@\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n- user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n- return {'id': user_id}, 201\n+ user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n+ return {'id': user_id, 'token': user_token}, 201\n \n \n @auth_ns.route('/sign-in')\n", "issue": "Log in user after registration\n## Current Behavior\r\n\r\nUser needs to log in after first registration.\r\n\r\n## Expected Behavior\r\n\r\nUser should be logged into MedTagger right after filling registration form.\r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. Register new user.\r\n 2. You will be redirected to the login page.\r\n 3. Type your login once again...\r\n\n", "before_files": [{"content": "\"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\nfrom medtagger.api import InvalidArgumentsException\nfrom medtagger.api.security import hash_password, verify_user_password, generate_auth_token\nfrom medtagger.database.models import User\nfrom medtagger.repositories import roles as RolesRepository, users as UsersRepository\n\n\ndef create_user(email: str, password: str, first_name: str, last_name: str) -> int:\n \"\"\"Create user with the given user information. 
Password is being hashed.\n\n :param email: user email in string format\n :param password: user password in string format\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n\n :return: id of the new user\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n raise InvalidArgumentsException('User with this email already exists')\n password_hash = hash_password(password)\n new_user = User(email, password_hash, first_name, last_name)\n role = RolesRepository.get_role_with_name('volunteer')\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n return UsersRepository.add_new_user(new_user)\n\n\ndef sign_in_user(email: str, password: str) -> str:\n \"\"\"Sign in user using given username and password.\n\n :param email: user email in string format\n :param password: user password in string format\n\n :return: authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if not user:\n raise InvalidArgumentsException('User does not exist.')\n if not verify_user_password(user, password):\n raise InvalidArgumentsException('Password does not match.')\n return generate_auth_token(user)\n", "path": "backend/medtagger/api/auth/business.py"}, {"content": "\"\"\"Module responsible for definition of Auth service.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.auth.business import create_user, sign_in_user\nfrom medtagger.api.auth import serializers\n\nauth_ns = api.namespace('auth', 'Auth methods')\n\n\n@auth_ns.route('/register')\nclass Register(Resource):\n \"\"\"Register user endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.new_user)\n @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n return {'id': user_id}, 201\n\n\n@auth_ns.route('/sign-in')\nclass SignIn(Resource):\n \"\"\"Sign in endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.sign_in)\n @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})\n def post() -> Any:\n \"\"\"Sign in the user.\"\"\"\n sign_in = request.json\n token = sign_in_user(sign_in['email'], sign_in['password'])\n return {\"token\": token}, 200\n", "path": "backend/medtagger/api/auth/service.py"}]} | 1,473 | 555 |
gh_patches_debug_23773 | rasdani/github-patches | git_diff | mirumee__ariadne-481 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unexpected Snake Case for Acronyms
The snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.
- `getHTTPResponse` is converted to `get_h_t_t_p_response`, or
- `externalID` is converted to `external_i_d`.
These are unlikely names for Python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.
Possible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case
</issue>
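For reference, a minimal sketch of the regex-based conversion discussed in the linked Stack Overflow thread (not Ariadne's current code), which keeps acronym runs together:

```python
import re


def to_snake_case(graphql_name: str) -> str:
    # insert "_" before an Uppercase+lowercase run: "getHTTPResponse" -> "getHTTP_Response"
    name = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", graphql_name)
    # insert "_" between a lowercase letter/digit and an uppercase letter: "externalID" -> "external_ID"
    name = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)
    return name.lower()


assert to_snake_case("getHTTPResponse") == "get_http_response"
assert to_snake_case("externalID") == "external_id"
```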
<code>
[start of ariadne/utils.py]
1 import asyncio
2 from functools import wraps
3 from typing import Optional, Union, Callable, Dict, Any
4
5 from graphql import GraphQLError, parse
6
7
8 def convert_camel_case_to_snake(graphql_name: str) -> str:
9 python_name = ""
10 for i, c in enumerate(graphql_name.lower()):
11 if (
12 i > 0
13 and (
14 all(
15 (
16 c != graphql_name[i],
17 graphql_name[i - 1] != "_",
18 graphql_name[i - 1] == python_name[-1],
19 )
20 )
21 )
22 or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))
23 ):
24 python_name += "_"
25 python_name += c
26 return python_name
27
28
29 def gql(value: str) -> str:
30 parse(value)
31 return value
32
33
34 def unwrap_graphql_error(
35 error: Union[GraphQLError, Optional[Exception]]
36 ) -> Optional[Exception]:
37 if isinstance(error, GraphQLError):
38 return unwrap_graphql_error(error.original_error)
39 return error
40
41
42 def convert_kwargs_to_snake_case(func: Callable) -> Callable:
43 def convert_to_snake_case(d: Dict) -> Dict:
44 converted: Dict = {}
45 for k, v in d.items():
46 if isinstance(v, dict):
47 v = convert_to_snake_case(v)
48 if isinstance(v, list):
49 v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]
50 converted[convert_camel_case_to_snake(k)] = v
51 return converted
52
53 if asyncio.iscoroutinefunction(func):
54
55 @wraps(func)
56 async def async_wrapper(*args: Any, **kwargs: Any) -> Any:
57 return await func(*args, **convert_to_snake_case(kwargs))
58
59 return async_wrapper
60
61 @wraps(func)
62 def wrapper(*args: Any, **kwargs: Any) -> Any:
63 return func(*args, **convert_to_snake_case(kwargs))
64
65 return wrapper
66
[end of ariadne/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/utils.py b/ariadne/utils.py
--- a/ariadne/utils.py
+++ b/ariadne/utils.py
@@ -6,20 +6,29 @@
def convert_camel_case_to_snake(graphql_name: str) -> str:
+ # pylint: disable=too-many-boolean-expressions
+ max_index = len(graphql_name) - 1
+ lowered_name = graphql_name.lower()
+
python_name = ""
- for i, c in enumerate(graphql_name.lower()):
- if (
- i > 0
- and (
- all(
- (
- c != graphql_name[i],
- graphql_name[i - 1] != "_",
- graphql_name[i - 1] == python_name[-1],
- )
- )
+ for i, c in enumerate(lowered_name):
+ if i > 0 and (
+ # testWord -> test_word
+ (
+ c != graphql_name[i]
+ and graphql_name[i - 1] != "_"
+ and graphql_name[i - 1] == python_name[-1]
+ )
+ # TESTWord -> test_word
+ or (
+ i < max_index
+ and graphql_name[i] != lowered_name[i]
+ and graphql_name[i + 1] == lowered_name[i + 1]
)
- or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))
+ # test134 -> test_134
+ or (c.isdigit() and not graphql_name[i - 1].isdigit())
+ # 134test -> 134_test
+ or (not c.isdigit() and graphql_name[i - 1].isdigit())
):
python_name += "_"
python_name += c
| {"golden_diff": "diff --git a/ariadne/utils.py b/ariadne/utils.py\n--- a/ariadne/utils.py\n+++ b/ariadne/utils.py\n@@ -6,20 +6,29 @@\n \n \n def convert_camel_case_to_snake(graphql_name: str) -> str:\n+ # pylint: disable=too-many-boolean-expressions\n+ max_index = len(graphql_name) - 1\n+ lowered_name = graphql_name.lower()\n+\n python_name = \"\"\n- for i, c in enumerate(graphql_name.lower()):\n- if (\n- i > 0\n- and (\n- all(\n- (\n- c != graphql_name[i],\n- graphql_name[i - 1] != \"_\",\n- graphql_name[i - 1] == python_name[-1],\n- )\n- )\n+ for i, c in enumerate(lowered_name):\n+ if i > 0 and (\n+ # testWord -> test_word\n+ (\n+ c != graphql_name[i]\n+ and graphql_name[i - 1] != \"_\"\n+ and graphql_name[i - 1] == python_name[-1]\n+ )\n+ # TESTWord -> test_word\n+ or (\n+ i < max_index\n+ and graphql_name[i] != lowered_name[i]\n+ and graphql_name[i + 1] == lowered_name[i + 1]\n )\n- or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))\n+ # test134 -> test_134\n+ or (c.isdigit() and not graphql_name[i - 1].isdigit())\n+ # 134test -> 134_test\n+ or (not c.isdigit() and graphql_name[i - 1].isdigit())\n ):\n python_name += \"_\"\n python_name += c\n", "issue": "Unexpected Snake Case for Acronyms\nThe snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.\r\n - `getHTTPResponse` is converted to `get_h_t_t_p_response`, or\r\n - `externalID` is converted to \"external_i_d`. \r\n\r\nThese are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.\r\n\r\nPossible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case\nUnexpected Snake Case for Acronyms\nThe snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.\r\n - `getHTTPResponse` is converted to `get_h_t_t_p_response`, or\r\n - `externalID` is converted to \"external_i_d`. 
\r\n\r\nThese are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.\r\n\r\nPossible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case\n", "before_files": [{"content": "import asyncio\nfrom functools import wraps\nfrom typing import Optional, Union, Callable, Dict, Any\n\nfrom graphql import GraphQLError, parse\n\n\ndef convert_camel_case_to_snake(graphql_name: str) -> str:\n python_name = \"\"\n for i, c in enumerate(graphql_name.lower()):\n if (\n i > 0\n and (\n all(\n (\n c != graphql_name[i],\n graphql_name[i - 1] != \"_\",\n graphql_name[i - 1] == python_name[-1],\n )\n )\n )\n or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))\n ):\n python_name += \"_\"\n python_name += c\n return python_name\n\n\ndef gql(value: str) -> str:\n parse(value)\n return value\n\n\ndef unwrap_graphql_error(\n error: Union[GraphQLError, Optional[Exception]]\n) -> Optional[Exception]:\n if isinstance(error, GraphQLError):\n return unwrap_graphql_error(error.original_error)\n return error\n\n\ndef convert_kwargs_to_snake_case(func: Callable) -> Callable:\n def convert_to_snake_case(d: Dict) -> Dict:\n converted: Dict = {}\n for k, v in d.items():\n if isinstance(v, dict):\n v = convert_to_snake_case(v)\n if isinstance(v, list):\n v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]\n converted[convert_camel_case_to_snake(k)] = v\n return converted\n\n if asyncio.iscoroutinefunction(func):\n\n @wraps(func)\n async def async_wrapper(*args: Any, **kwargs: Any) -> Any:\n return await func(*args, **convert_to_snake_case(kwargs))\n\n return async_wrapper\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **convert_to_snake_case(kwargs))\n\n return wrapper\n", "path": "ariadne/utils.py"}]} | 1,377 | 409 |
gh_patches_debug_10822 | rasdani/github-patches | git_diff | translate__pootle-5932 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IntegrityError: (1062, "Duplicate entry 'xxx-stats' for key > 'pootle_revision_content_type_id_xxx_uniq'")
This error has been spotted in the wild.
From code review it's hard to see how it's happening, and I ~~haven't~~ managed to reproduce it - *by clicking very fast on editor buttons*
~~It may be related to update_stores~~
</issue>
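One way to read the race, sketched against `RevisionUpdater.update()` from the file below (illustration only; it needs a configured Django/Pootle environment to run): materialising the ids before deleting keeps one updater from deleting rows that a concurrent updater has just re-created for the same (content type, object, key), which is one way to trip the `_uniq` index in the error above.

```python
def update(self, keys=None):
    parents = list(self.parents.values_list("id", flat=True))
    revisions = self.get_revisions(parents, keys=keys)
    # Pin the delete to ids fetched up front instead of deleting the live queryset.
    revision_ids = list(revisions.values_list("id", flat=True))
    revisions.filter(id__in=revision_ids).delete()
    Revision.objects.bulk_create(self.create_revisions(parents, keys=keys))
```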
<code>
[start of pootle/apps/pootle_revision/utils.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import uuid
10
11 from django.contrib.contenttypes.models import ContentType
12 from django.utils.functional import cached_property
13
14 from pootle.core.url_helpers import split_pootle_path
15 from pootle_app.models import Directory
16
17 from .models import Revision
18
19
20 class RevisionContext(object):
21
22 def __init__(self, context):
23 self.context = context
24
25 @cached_property
26 def content_type_id(self):
27 return ContentType.objects.get_for_model(
28 self.context._meta.model).id
29
30 @property
31 def revision_context(self):
32 return self.context.revisions
33
34 def get(self, key=None):
35 """get a revision from db or set one if not set"""
36 if not self.revision_context:
37 return ""
38 return self.revision_context.filter(
39 key=key).values_list("value", flat=True).first() or ""
40
41 def set(self, keys=None, value=None):
42 """get a revision from db or set one if not set"""
43 self.revision_context.filter(key__in=keys).delete()
44 if value:
45 revisions = []
46 for k in keys:
47 revisions.append(
48 Revision(
49 content_type_id=self.content_type_id,
50 object_id=self.context.pk,
51 key=k,
52 value=value))
53 Revision.objects.bulk_create(revisions)
54
55
56 class DirectoryRevision(RevisionContext):
57 pass
58
59
60 class LanguageRevision(RevisionContext):
61
62 @property
63 def revision_context(self):
64 return self.context.directory.revisions
65
66
67 class ProjectRevision(RevisionContext):
68 pass
69
70
71 class ProjectResourceRevision(RevisionContext):
72
73 @property
74 def revision_context(self):
75 first_child = self.context.children.first()
76 if not first_child:
77 return
78 return Directory.objects.get(
79 pootle_path="/projects/%s/"
80 % split_pootle_path(first_child.pootle_path)[1]).revisions
81
82
83 class ProjectSetRevision(RevisionContext):
84
85 @property
86 def revision_context(self):
87 first_project = self.context.children.first()
88 if not first_project:
89 return
90 return first_project.directory.parent.revisions
91
92
93 class TPRevision(RevisionContext):
94 pass
95
96
97 class RevisionUpdater(object):
98
99 def __init__(self, context=None, object_list=None):
100 self.context = context
101 self.object_list = object_list
102
103 @property
104 def object_list_paths(self):
105 return set(
106 self.object_list.values_list(
107 self.related_pootle_path,
108 flat=True))
109
110 @property
111 def all_pootle_paths(self):
112 if self.context and not self.object_list:
113 return set([self.context_path])
114 elif self.object_list:
115 parents = self.object_list_paths
116 if self.context:
117 parents.add(self.context_path)
118 return parents
119 return []
120
121 @property
122 def parents(self):
123 """calculate unit parents for cache update"""
124 return Directory.objects.filter(
125 pootle_path__in=self.get_parent_paths(self.all_pootle_paths))
126
127 def get_parent_paths(self, pootle_paths):
128 paths = set(["/projects/"])
129 for pootle_path in pootle_paths:
130 lang_code, proj_code, dir_path, __ = split_pootle_path(pootle_path)
131 paths.add("/projects/%s/" % proj_code)
132 paths.add("/%s/" % lang_code)
133 paths.add("/%s/%s/" % (lang_code, proj_code))
134 dir_path_parts = dir_path.split("/")
135 for i, name in enumerate(dir_path_parts):
136 if not name:
137 continue
138 paths.add(
139 "/%s/%s/%s/"
140 % (lang_code,
141 proj_code,
142 "/".join(dir_path_parts[:i + 1])))
143 return paths
144
145 @property
146 def new_revision(self):
147 return uuid.uuid4().hex
148
149 @cached_property
150 def content_type_id(self):
151 return ContentType.objects.get_for_model(Directory).id
152
153 def get_revisions(self, parents, keys=None):
154 return Revision.objects.filter(
155 content_type_id=self.content_type_id,
156 key__in=keys or [""],
157 object_id__in=parents)
158
159 def create_revisions(self, parents, keys=None):
160 new_revision = self.new_revision
161 for parent in parents:
162 for key in keys or [""]:
163 yield Revision(
164 content_type_id=self.content_type_id,
165 object_id=parent,
166 key=key,
167 value=new_revision)
168
169 def update(self, keys=None):
170 parents = list(self.parents.values_list("id", flat=True))
171 revisions = self.get_revisions(parents, keys=keys)
172 revisions.delete()
173 Revision.objects.bulk_create(
174 self.create_revisions(parents, keys=keys))
175
176
177 class UnitRevisionUpdater(RevisionUpdater):
178 related_pootle_path = "store__parent__pootle_path"
179
180 @property
181 def context_path(self):
182 return self.context.store.parent.pootle_path
183
184
185 class StoreRevisionUpdater(RevisionUpdater):
186 related_pootle_path = "parent__pootle_path"
187
188 @property
189 def context_path(self):
190 return self.context.parent.pootle_path
191
192
193 class DirectoryRevisionUpdater(RevisionUpdater):
194 related_pootle_path = "pootle_path"
195
196 @property
197 def context_path(self):
198 return self.context.pootle_path
199
[end of pootle/apps/pootle_revision/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/apps/pootle_revision/utils.py b/pootle/apps/pootle_revision/utils.py
--- a/pootle/apps/pootle_revision/utils.py
+++ b/pootle/apps/pootle_revision/utils.py
@@ -169,7 +169,10 @@
def update(self, keys=None):
parents = list(self.parents.values_list("id", flat=True))
revisions = self.get_revisions(parents, keys=keys)
- revisions.delete()
+ # manually get the list of ids and delete those to prevent
+ # django race condition
+ revision_ids = list(revisions.values_list("id", flat=True))
+ revisions.filter(id__in=revision_ids).delete()
Revision.objects.bulk_create(
self.create_revisions(parents, keys=keys))
| {"golden_diff": "diff --git a/pootle/apps/pootle_revision/utils.py b/pootle/apps/pootle_revision/utils.py\n--- a/pootle/apps/pootle_revision/utils.py\n+++ b/pootle/apps/pootle_revision/utils.py\n@@ -169,7 +169,10 @@\n def update(self, keys=None):\n parents = list(self.parents.values_list(\"id\", flat=True))\n revisions = self.get_revisions(parents, keys=keys)\n- revisions.delete()\n+ # manually get the list of ids and delete those to prevent\n+ # django race condition\n+ revision_ids = list(revisions.values_list(\"id\", flat=True))\n+ revisions.filter(id__in=revision_ids).delete()\n Revision.objects.bulk_create(\n self.create_revisions(parents, keys=keys))\n", "issue": "IntegrityError: (1062, \"Duplicate entry 'xxx-stats' for key > 'pootle_revision_content_type_id_xxx_uniq'\")\nThis error has been spotted in the wild.\r\n\r\nFrom code review its hard to see how its happening, and i ~~havent~~ managed to reproduce - *by clicking very fast on editor buttons*\r\n\r\n~~It may be related to update_stores~~\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport uuid\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.url_helpers import split_pootle_path\nfrom pootle_app.models import Directory\n\nfrom .models import Revision\n\n\nclass RevisionContext(object):\n\n def __init__(self, context):\n self.context = context\n\n @cached_property\n def content_type_id(self):\n return ContentType.objects.get_for_model(\n self.context._meta.model).id\n\n @property\n def revision_context(self):\n return self.context.revisions\n\n def get(self, key=None):\n \"\"\"get a revision from db or set one if not set\"\"\"\n if not self.revision_context:\n return \"\"\n return self.revision_context.filter(\n key=key).values_list(\"value\", flat=True).first() or \"\"\n\n def set(self, keys=None, value=None):\n \"\"\"get a revision from db or set one if not set\"\"\"\n self.revision_context.filter(key__in=keys).delete()\n if value:\n revisions = []\n for k in keys:\n revisions.append(\n Revision(\n content_type_id=self.content_type_id,\n object_id=self.context.pk,\n key=k,\n value=value))\n Revision.objects.bulk_create(revisions)\n\n\nclass DirectoryRevision(RevisionContext):\n pass\n\n\nclass LanguageRevision(RevisionContext):\n\n @property\n def revision_context(self):\n return self.context.directory.revisions\n\n\nclass ProjectRevision(RevisionContext):\n pass\n\n\nclass ProjectResourceRevision(RevisionContext):\n\n @property\n def revision_context(self):\n first_child = self.context.children.first()\n if not first_child:\n return\n return Directory.objects.get(\n pootle_path=\"/projects/%s/\"\n % split_pootle_path(first_child.pootle_path)[1]).revisions\n\n\nclass ProjectSetRevision(RevisionContext):\n\n @property\n def revision_context(self):\n first_project = self.context.children.first()\n if not first_project:\n return\n return first_project.directory.parent.revisions\n\n\nclass TPRevision(RevisionContext):\n pass\n\n\nclass RevisionUpdater(object):\n\n def __init__(self, context=None, object_list=None):\n self.context = context\n self.object_list = object_list\n\n @property\n def object_list_paths(self):\n return set(\n self.object_list.values_list(\n 
self.related_pootle_path,\n flat=True))\n\n @property\n def all_pootle_paths(self):\n if self.context and not self.object_list:\n return set([self.context_path])\n elif self.object_list:\n parents = self.object_list_paths\n if self.context:\n parents.add(self.context_path)\n return parents\n return []\n\n @property\n def parents(self):\n \"\"\"calculate unit parents for cache update\"\"\"\n return Directory.objects.filter(\n pootle_path__in=self.get_parent_paths(self.all_pootle_paths))\n\n def get_parent_paths(self, pootle_paths):\n paths = set([\"/projects/\"])\n for pootle_path in pootle_paths:\n lang_code, proj_code, dir_path, __ = split_pootle_path(pootle_path)\n paths.add(\"/projects/%s/\" % proj_code)\n paths.add(\"/%s/\" % lang_code)\n paths.add(\"/%s/%s/\" % (lang_code, proj_code))\n dir_path_parts = dir_path.split(\"/\")\n for i, name in enumerate(dir_path_parts):\n if not name:\n continue\n paths.add(\n \"/%s/%s/%s/\"\n % (lang_code,\n proj_code,\n \"/\".join(dir_path_parts[:i + 1])))\n return paths\n\n @property\n def new_revision(self):\n return uuid.uuid4().hex\n\n @cached_property\n def content_type_id(self):\n return ContentType.objects.get_for_model(Directory).id\n\n def get_revisions(self, parents, keys=None):\n return Revision.objects.filter(\n content_type_id=self.content_type_id,\n key__in=keys or [\"\"],\n object_id__in=parents)\n\n def create_revisions(self, parents, keys=None):\n new_revision = self.new_revision\n for parent in parents:\n for key in keys or [\"\"]:\n yield Revision(\n content_type_id=self.content_type_id,\n object_id=parent,\n key=key,\n value=new_revision)\n\n def update(self, keys=None):\n parents = list(self.parents.values_list(\"id\", flat=True))\n revisions = self.get_revisions(parents, keys=keys)\n revisions.delete()\n Revision.objects.bulk_create(\n self.create_revisions(parents, keys=keys))\n\n\nclass UnitRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"store__parent__pootle_path\"\n\n @property\n def context_path(self):\n return self.context.store.parent.pootle_path\n\n\nclass StoreRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"parent__pootle_path\"\n\n @property\n def context_path(self):\n return self.context.parent.pootle_path\n\n\nclass DirectoryRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"pootle_path\"\n\n @property\n def context_path(self):\n return self.context.pootle_path\n", "path": "pootle/apps/pootle_revision/utils.py"}]} | 2,342 | 177 |
gh_patches_debug_17526 | rasdani/github-patches | git_diff | nonebot__nonebot2-1968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: run_sync ignores context variables
### Operating System
Windows
### Python Version
3.11.0
### NoneBot Version
2.0.0rc4
### Adapter
-
### Protocol Side
-
### Problem Description
[run_sync](https://github.com/nonebot/nonebot2/blob/e98d28f3b4fdda2504ecc07318563ce202464b96/nonebot/utils.py#L114) ignores context variables, which can lead to exceptions such as https://github.com/nonebot/nonebot2/issues/1966
### Steps to Reproduce
-
### Expected Result
Use ```copy_context``` and then ```ctx.run``` the call inside the executor
### Screenshots or Logs
-
</issue>
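A minimal sketch of the suggested fix — copy the caller's context and run the synchronous call through it inside the executor; names mirror `nonebot/utils.py` below:

```python
import asyncio
from contextvars import copy_context
from functools import partial, wraps


def run_sync(call):
    @wraps(call)
    async def _wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        pfunc = partial(call, *args, **kwargs)
        ctx = copy_context()  # snapshot of the caller's ContextVar values
        # ctx.run(pfunc) executes the sync function with that snapshot visible
        return await loop.run_in_executor(None, ctx.run, pfunc)

    return _wrapper
```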
<code>
[start of nonebot/utils.py]
1 """本模块包含了 NoneBot 的一些工具函数
2
3 FrontMatter:
4 sidebar_position: 8
5 description: nonebot.utils 模块
6 """
7
8 import re
9 import json
10 import asyncio
11 import inspect
12 import importlib
13 import dataclasses
14 from pathlib import Path
15 from functools import wraps, partial
16 from contextlib import asynccontextmanager
17 from typing_extensions import ParamSpec, get_args, get_origin
18 from typing import (
19 Any,
20 Type,
21 Tuple,
22 Union,
23 TypeVar,
24 Callable,
25 Optional,
26 Coroutine,
27 AsyncGenerator,
28 ContextManager,
29 overload,
30 )
31
32 from pydantic.typing import is_union, is_none_type
33
34 from nonebot.log import logger
35 from nonebot.typing import overrides
36
37 P = ParamSpec("P")
38 R = TypeVar("R")
39 T = TypeVar("T")
40 K = TypeVar("K")
41 V = TypeVar("V")
42
43
44 def escape_tag(s: str) -> str:
45 """用于记录带颜色日志时转义 `<tag>` 类型特殊标签
46
47 参考: [loguru color 标签](https://loguru.readthedocs.io/en/stable/api/logger.html#color)
48
49 参数:
50 s: 需要转义的字符串
51 """
52 return re.sub(r"</?((?:[fb]g\s)?[^<>\s]*)>", r"\\\g<0>", s)
53
54
55 def generic_check_issubclass(
56 cls: Any, class_or_tuple: Union[Type[Any], Tuple[Type[Any], ...]]
57 ) -> bool:
58 """检查 cls 是否是 class_or_tuple 中的一个类型子类。
59
60 特别的,如果 cls 是 `typing.Union` 或 `types.UnionType` 类型,
61 则会检查其中的类型是否是 class_or_tuple 中的一个类型子类。(None 会被忽略)
62 """
63 try:
64 return issubclass(cls, class_or_tuple)
65 except TypeError:
66 origin = get_origin(cls)
67 if is_union(origin):
68 return all(
69 is_none_type(type_) or generic_check_issubclass(type_, class_or_tuple)
70 for type_ in get_args(cls)
71 )
72 elif origin:
73 return issubclass(origin, class_or_tuple)
74 return False
75
76
77 def is_coroutine_callable(call: Callable[..., Any]) -> bool:
78 """检查 call 是否是一个 callable 协程函数"""
79 if inspect.isroutine(call):
80 return inspect.iscoroutinefunction(call)
81 if inspect.isclass(call):
82 return False
83 func_ = getattr(call, "__call__", None)
84 return inspect.iscoroutinefunction(func_)
85
86
87 def is_gen_callable(call: Callable[..., Any]) -> bool:
88 """检查 call 是否是一个生成器函数"""
89 if inspect.isgeneratorfunction(call):
90 return True
91 func_ = getattr(call, "__call__", None)
92 return inspect.isgeneratorfunction(func_)
93
94
95 def is_async_gen_callable(call: Callable[..., Any]) -> bool:
96 """检查 call 是否是一个异步生成器函数"""
97 if inspect.isasyncgenfunction(call):
98 return True
99 func_ = getattr(call, "__call__", None)
100 return inspect.isasyncgenfunction(func_)
101
102
103 def run_sync(call: Callable[P, R]) -> Callable[P, Coroutine[None, None, R]]:
104 """一个用于包装 sync function 为 async function 的装饰器
105
106 参数:
107 call: 被装饰的同步函数
108 """
109
110 @wraps(call)
111 async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
112 loop = asyncio.get_running_loop()
113 pfunc = partial(call, *args, **kwargs)
114 result = await loop.run_in_executor(None, pfunc)
115 return result
116
117 return _wrapper
118
119
120 @asynccontextmanager
121 async def run_sync_ctx_manager(
122 cm: ContextManager[T],
123 ) -> AsyncGenerator[T, None]:
124 """一个用于包装 sync context manager 为 async context manager 的执行函数"""
125 try:
126 yield await run_sync(cm.__enter__)()
127 except Exception as e:
128 ok = await run_sync(cm.__exit__)(type(e), e, None)
129 if not ok:
130 raise e
131 else:
132 await run_sync(cm.__exit__)(None, None, None)
133
134
135 @overload
136 async def run_coro_with_catch(
137 coro: Coroutine[Any, Any, T],
138 exc: Tuple[Type[Exception], ...],
139 ) -> Union[T, None]:
140 ...
141
142
143 @overload
144 async def run_coro_with_catch(
145 coro: Coroutine[Any, Any, T],
146 exc: Tuple[Type[Exception], ...],
147 return_on_err: R,
148 ) -> Union[T, R]:
149 ...
150
151
152 async def run_coro_with_catch(
153 coro: Coroutine[Any, Any, T],
154 exc: Tuple[Type[Exception], ...],
155 return_on_err: Optional[R] = None,
156 ) -> Optional[Union[T, R]]:
157 try:
158 return await coro
159 except exc:
160 return return_on_err
161
162
163 def get_name(obj: Any) -> str:
164 """获取对象的名称"""
165 if inspect.isfunction(obj) or inspect.isclass(obj):
166 return obj.__name__
167 return obj.__class__.__name__
168
169
170 def path_to_module_name(path: Path) -> str:
171 """转换路径为模块名"""
172 rel_path = path.resolve().relative_to(Path.cwd().resolve())
173 if rel_path.stem == "__init__":
174 return ".".join(rel_path.parts[:-1])
175 else:
176 return ".".join(rel_path.parts[:-1] + (rel_path.stem,))
177
178
179 def resolve_dot_notation(
180 obj_str: str, default_attr: str, default_prefix: Optional[str] = None
181 ) -> Any:
182 """解析并导入点分表示法的对象"""
183 modulename, _, cls = obj_str.partition(":")
184 if default_prefix is not None and modulename.startswith("~"):
185 modulename = default_prefix + modulename[1:]
186 module = importlib.import_module(modulename)
187 if not cls:
188 return getattr(module, default_attr)
189 instance = module
190 for attr_str in cls.split("."):
191 instance = getattr(instance, attr_str)
192 return instance
193
194
195 class DataclassEncoder(json.JSONEncoder):
196 """在JSON序列化 {re}`nonebot.adapters._message.Message` (List[Dataclass]) 时使用的 `JSONEncoder`"""
197
198 @overrides(json.JSONEncoder)
199 def default(self, o):
200 if dataclasses.is_dataclass(o):
201 return {f.name: getattr(o, f.name) for f in dataclasses.fields(o)}
202 return super().default(o)
203
204
205 def logger_wrapper(logger_name: str):
206 """用于打印 adapter 的日志。
207
208 参数:
209 logger_name: adapter 的名称
210
211 返回:
212 日志记录函数
213
214 - level: 日志等级
215 - message: 日志信息
216 - exception: 异常信息
217 """
218
219 def log(level: str, message: str, exception: Optional[Exception] = None):
220 logger.opt(colors=True, exception=exception).log(
221 level, f"<m>{escape_tag(logger_name)}</m> | {message}"
222 )
223
224 return log
225
[end of nonebot/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nonebot/utils.py b/nonebot/utils.py
--- a/nonebot/utils.py
+++ b/nonebot/utils.py
@@ -12,6 +12,7 @@
import importlib
import dataclasses
from pathlib import Path
+from contextvars import copy_context
from functools import wraps, partial
from contextlib import asynccontextmanager
from typing_extensions import ParamSpec, get_args, get_origin
@@ -111,7 +112,9 @@
async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
loop = asyncio.get_running_loop()
pfunc = partial(call, *args, **kwargs)
- result = await loop.run_in_executor(None, pfunc)
+ context = copy_context()
+ context_run = context.run
+ result = await loop.run_in_executor(None, context_run, pfunc)
return result
return _wrapper
| {"golden_diff": "diff --git a/nonebot/utils.py b/nonebot/utils.py\n--- a/nonebot/utils.py\n+++ b/nonebot/utils.py\n@@ -12,6 +12,7 @@\n import importlib\n import dataclasses\n from pathlib import Path\n+from contextvars import copy_context\n from functools import wraps, partial\n from contextlib import asynccontextmanager\n from typing_extensions import ParamSpec, get_args, get_origin\n@@ -111,7 +112,9 @@\n async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:\n loop = asyncio.get_running_loop()\n pfunc = partial(call, *args, **kwargs)\n- result = await loop.run_in_executor(None, pfunc)\n+ context = copy_context()\n+ context_run = context.run\n+ result = await loop.run_in_executor(None, context_run, pfunc)\n return result\n \n return _wrapper\n", "issue": "Bug: run_sync\u5ffd\u7565\u4e86\u4e0a\u4e0b\u6587\u53d8\u91cf\n### \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Python \u7248\u672c\n\n3.11.0\n\n### NoneBot \u7248\u672c\n\n2.0.0rc4\n\n### \u9002\u914d\u5668\n\n-\n\n### \u534f\u8bae\u7aef\n\n-\n\n### \u63cf\u8ff0\u95ee\u9898\n\n[run_sync](https://github.com/nonebot/nonebot2/blob/e98d28f3b4fdda2504ecc07318563ce202464b96/nonebot/utils.py#L114)\u5ffd\u7565\u4e86\u4e0a\u4e0b\u6587\u53d8\u91cf\uff0c\u53ef\u80fd\u4f1a\u5bfc\u81f4\u5f02\u5e38\uff0c\u6bd4\u5982https://github.com/nonebot/nonebot2/issues/1966\n\n### \u590d\u73b0\u6b65\u9aa4\n\n-\n\n### \u671f\u671b\u7684\u7ed3\u679c\n\n\u4f7f\u7528```copy_context```\u7136\u540e```ctx.run```\u8fdbexecutor\n\n### \u622a\u56fe\u6216\u65e5\u5fd7\n\n-\n", "before_files": [{"content": "\"\"\"\u672c\u6a21\u5757\u5305\u542b\u4e86 NoneBot \u7684\u4e00\u4e9b\u5de5\u5177\u51fd\u6570\n\nFrontMatter:\n sidebar_position: 8\n description: nonebot.utils \u6a21\u5757\n\"\"\"\n\nimport re\nimport json\nimport asyncio\nimport inspect\nimport importlib\nimport dataclasses\nfrom pathlib import Path\nfrom functools import wraps, partial\nfrom contextlib import asynccontextmanager\nfrom typing_extensions import ParamSpec, get_args, get_origin\nfrom typing import (\n Any,\n Type,\n Tuple,\n Union,\n TypeVar,\n Callable,\n Optional,\n Coroutine,\n AsyncGenerator,\n ContextManager,\n overload,\n)\n\nfrom pydantic.typing import is_union, is_none_type\n\nfrom nonebot.log import logger\nfrom nonebot.typing import overrides\n\nP = ParamSpec(\"P\")\nR = TypeVar(\"R\")\nT = TypeVar(\"T\")\nK = TypeVar(\"K\")\nV = TypeVar(\"V\")\n\n\ndef escape_tag(s: str) -> str:\n \"\"\"\u7528\u4e8e\u8bb0\u5f55\u5e26\u989c\u8272\u65e5\u5fd7\u65f6\u8f6c\u4e49 `<tag>` \u7c7b\u578b\u7279\u6b8a\u6807\u7b7e\n\n \u53c2\u8003: [loguru color \u6807\u7b7e](https://loguru.readthedocs.io/en/stable/api/logger.html#color)\n\n \u53c2\u6570:\n s: \u9700\u8981\u8f6c\u4e49\u7684\u5b57\u7b26\u4e32\n \"\"\"\n return re.sub(r\"</?((?:[fb]g\\s)?[^<>\\s]*)>\", r\"\\\\\\g<0>\", s)\n\n\ndef generic_check_issubclass(\n cls: Any, class_or_tuple: Union[Type[Any], Tuple[Type[Any], ...]]\n) -> bool:\n \"\"\"\u68c0\u67e5 cls \u662f\u5426\u662f class_or_tuple \u4e2d\u7684\u4e00\u4e2a\u7c7b\u578b\u5b50\u7c7b\u3002\n\n \u7279\u522b\u7684\uff0c\u5982\u679c cls \u662f `typing.Union` \u6216 `types.UnionType` \u7c7b\u578b\uff0c\n \u5219\u4f1a\u68c0\u67e5\u5176\u4e2d\u7684\u7c7b\u578b\u662f\u5426\u662f class_or_tuple \u4e2d\u7684\u4e00\u4e2a\u7c7b\u578b\u5b50\u7c7b\u3002\uff08None \u4f1a\u88ab\u5ffd\u7565\uff09\n \"\"\"\n try:\n return issubclass(cls, class_or_tuple)\n except TypeError:\n origin = get_origin(cls)\n if is_union(origin):\n return all(\n is_none_type(type_) or generic_check_issubclass(type_, class_or_tuple)\n for 
type_ in get_args(cls)\n )\n elif origin:\n return issubclass(origin, class_or_tuple)\n return False\n\n\ndef is_coroutine_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a callable \u534f\u7a0b\u51fd\u6570\"\"\"\n if inspect.isroutine(call):\n return inspect.iscoroutinefunction(call)\n if inspect.isclass(call):\n return False\n func_ = getattr(call, \"__call__\", None)\n return inspect.iscoroutinefunction(func_)\n\n\ndef is_gen_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a\u751f\u6210\u5668\u51fd\u6570\"\"\"\n if inspect.isgeneratorfunction(call):\n return True\n func_ = getattr(call, \"__call__\", None)\n return inspect.isgeneratorfunction(func_)\n\n\ndef is_async_gen_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a\u5f02\u6b65\u751f\u6210\u5668\u51fd\u6570\"\"\"\n if inspect.isasyncgenfunction(call):\n return True\n func_ = getattr(call, \"__call__\", None)\n return inspect.isasyncgenfunction(func_)\n\n\ndef run_sync(call: Callable[P, R]) -> Callable[P, Coroutine[None, None, R]]:\n \"\"\"\u4e00\u4e2a\u7528\u4e8e\u5305\u88c5 sync function \u4e3a async function \u7684\u88c5\u9970\u5668\n\n \u53c2\u6570:\n call: \u88ab\u88c5\u9970\u7684\u540c\u6b65\u51fd\u6570\n \"\"\"\n\n @wraps(call)\n async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:\n loop = asyncio.get_running_loop()\n pfunc = partial(call, *args, **kwargs)\n result = await loop.run_in_executor(None, pfunc)\n return result\n\n return _wrapper\n\n\n@asynccontextmanager\nasync def run_sync_ctx_manager(\n cm: ContextManager[T],\n) -> AsyncGenerator[T, None]:\n \"\"\"\u4e00\u4e2a\u7528\u4e8e\u5305\u88c5 sync context manager \u4e3a async context manager \u7684\u6267\u884c\u51fd\u6570\"\"\"\n try:\n yield await run_sync(cm.__enter__)()\n except Exception as e:\n ok = await run_sync(cm.__exit__)(type(e), e, None)\n if not ok:\n raise e\n else:\n await run_sync(cm.__exit__)(None, None, None)\n\n\n@overload\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n) -> Union[T, None]:\n ...\n\n\n@overload\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n return_on_err: R,\n) -> Union[T, R]:\n ...\n\n\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n return_on_err: Optional[R] = None,\n) -> Optional[Union[T, R]]:\n try:\n return await coro\n except exc:\n return return_on_err\n\n\ndef get_name(obj: Any) -> str:\n \"\"\"\u83b7\u53d6\u5bf9\u8c61\u7684\u540d\u79f0\"\"\"\n if inspect.isfunction(obj) or inspect.isclass(obj):\n return obj.__name__\n return obj.__class__.__name__\n\n\ndef path_to_module_name(path: Path) -> str:\n \"\"\"\u8f6c\u6362\u8def\u5f84\u4e3a\u6a21\u5757\u540d\"\"\"\n rel_path = path.resolve().relative_to(Path.cwd().resolve())\n if rel_path.stem == \"__init__\":\n return \".\".join(rel_path.parts[:-1])\n else:\n return \".\".join(rel_path.parts[:-1] + (rel_path.stem,))\n\n\ndef resolve_dot_notation(\n obj_str: str, default_attr: str, default_prefix: Optional[str] = None\n) -> Any:\n \"\"\"\u89e3\u6790\u5e76\u5bfc\u5165\u70b9\u5206\u8868\u793a\u6cd5\u7684\u5bf9\u8c61\"\"\"\n modulename, _, cls = obj_str.partition(\":\")\n if default_prefix is not None and modulename.startswith(\"~\"):\n modulename = default_prefix + modulename[1:]\n module = importlib.import_module(modulename)\n if not cls:\n return getattr(module, 
default_attr)\n instance = module\n for attr_str in cls.split(\".\"):\n instance = getattr(instance, attr_str)\n return instance\n\n\nclass DataclassEncoder(json.JSONEncoder):\n \"\"\"\u5728JSON\u5e8f\u5217\u5316 {re}`nonebot.adapters._message.Message` (List[Dataclass]) \u65f6\u4f7f\u7528\u7684 `JSONEncoder`\"\"\"\n\n @overrides(json.JSONEncoder)\n def default(self, o):\n if dataclasses.is_dataclass(o):\n return {f.name: getattr(o, f.name) for f in dataclasses.fields(o)}\n return super().default(o)\n\n\ndef logger_wrapper(logger_name: str):\n \"\"\"\u7528\u4e8e\u6253\u5370 adapter \u7684\u65e5\u5fd7\u3002\n\n \u53c2\u6570:\n logger_name: adapter \u7684\u540d\u79f0\n\n \u8fd4\u56de:\n \u65e5\u5fd7\u8bb0\u5f55\u51fd\u6570\n\n - level: \u65e5\u5fd7\u7b49\u7ea7\n - message: \u65e5\u5fd7\u4fe1\u606f\n - exception: \u5f02\u5e38\u4fe1\u606f\n \"\"\"\n\n def log(level: str, message: str, exception: Optional[Exception] = None):\n logger.opt(colors=True, exception=exception).log(\n level, f\"<m>{escape_tag(logger_name)}</m> | {message}\"\n )\n\n return log\n", "path": "nonebot/utils.py"}]} | 2,856 | 203 |
gh_patches_debug_14058 | rasdani/github-patches | git_diff | pyg-team__pytorch_geometric-8519 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CycleMotif lacks labels and therefore does not support GNNExplainer.
### 🐛 Describe the bug
When running `./examples/explain/gnn_explainer_ba_shapes.py`, if I replace the dataset:
```
dataset = ExplainerDataset(
graph_generator=BAGraph(num_nodes=300, num_edges=5),
motif_generator='house',
num_motifs=80,
transform=T.Constant(),
)
```
with
```
dataset = ExplainerDataset(
graph_generator=BAGraph(num_nodes=300, num_edges=5),
motif_generator=CycleMotif(num_nodes=6),
num_motifs=80,
transform=T.Constant(),
)
```
There is an error:
```
Traceback (most recent call last):
File "/home/stt/py_github_repo_read/pytorch_geometric/examples/explain/gnn_explainer_ba_shapes.py", line 46, in <module>
out_channels=dataset.num_classes).to(device)
File "/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/in_memory_dataset.py", line 90, in num_classes
return super().num_classes
File "/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/dataset.py", line 173, in num_classes
y = torch.cat([data.y for data in data_list if 'y' in data], dim=0)
RuntimeError: torch.cat(): expected a non-empty list of Tensors
```
The reason behind this is located at line 23 in `./torch_geometric/datasets/motif_generator/cycle.py`:
```
structure = Data(
num_nodes=num_nodes,
edge_index=torch.stack([row, col], dim=0),
# TODO: lack of y label
)
```
i.e. it lacks the y label that is present in `./torch_geometric/datasets/motif_generator/house.py`:
```
structure = Data(
num_nodes=5,
edge_index=torch.tensor([
[0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4],
[1, 3, 4, 4, 2, 0, 1, 3, 2, 0, 0, 1],
]),
y=torch.tensor([0, 0, 1, 1, 2]),
)
```
According to the original GNNExplainer repository, the node labels are all the same for the cycle motif. Therefore, we only need to add `y=torch.tensor([0]*num_nodes)`.
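A minimal standalone sketch of what that change could look like, for illustration only (the real `cycle.py` builds `row`/`col` differently):
```python
import torch
from torch_geometric.data import Data

num_nodes = 6
row = torch.arange(num_nodes)
col = (row + 1) % num_nodes  # simple directed cycle, for illustration only
structure = Data(
    num_nodes=num_nodes,
    edge_index=torch.stack([row, col], dim=0),
    # every node in the cycle gets the same label, as suggested above
    y=torch.tensor([0] * num_nodes),
)
```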
### Versions
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:49:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.113.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
架构: x86_64
CPU 运行模式: 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
字节序: Little Endian
CPU: 24
在线 CPU 列表: 0-23
厂商 ID: GenuineIntel
型号名称: 13th Gen Intel(R) Core(TM) i7-13700KF
CPU 系列: 6
型号: 183
每个核的线程数: 2
每个座的核数: 16
座: 1
步进: 1
CPU 最大 MHz: 5400.0000
CPU 最小 MHz: 800.0000
BogoMIPS: 6835.20
标记: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
虚拟化: VT-x
L1d 缓存: 640 KiB (16 instances)
L1i 缓存: 768 KiB (16 instances)
L2 缓存: 24 MiB (10 instances)
L3 缓存: 30 MiB (1 instance)
NUMA 节点: 1
NUMA 节点0 CPU: 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.1.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torchvision 0.16.0 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
</issue>
<code>
[start of torch_geometric/datasets/explainer_dataset.py]
1 from typing import Any, Callable, Dict, Optional, Union
2
3 import torch
4
5 from torch_geometric.data import InMemoryDataset
6 from torch_geometric.datasets.graph_generator import GraphGenerator
7 from torch_geometric.datasets.motif_generator import MotifGenerator
8 from torch_geometric.explain import Explanation
9
10
11 class ExplainerDataset(InMemoryDataset):
12 r"""Generates a synthetic dataset for evaluating explainabilty algorithms,
13 as described in the `"GNNExplainer: Generating Explanations for Graph
14 Neural Networks" <https://arxiv.org/abs/1903.03894>`__ paper.
15 The :class:`~torch_geometric.datasets.ExplainerDataset` creates synthetic
16 graphs coming from a
17 :class:`~torch_geometric.datasets.graph_generator.GraphGenerator`, and
18 randomly attaches :obj:`num_motifs` many motifs to it coming from a
19 :class:`~torch_geometric.datasets.graph_generator.MotifGenerator`.
20 Ground-truth node-level and edge-level explainabilty masks are given based
21 on whether nodes and edges are part of a certain motif or not.
22
23 For example, to generate a random Barabasi-Albert (BA) graph with 300
24 nodes, in which we want to randomly attach 80 :obj:`"house"` motifs, write:
25
26 .. code-block:: python
27
28 from torch_geometric.datasets import ExplainerDataset
29 from torch_geometric.datasets.graph_generator import BAGraph
30
31 dataset = ExplainerDataset(
32 graph_generator=BAGraph(num_nodes=300, num_edges=5),
33 motif_generator='house',
34 num_motifs=80,
35 )
36
37 .. note::
38
39 For an example of using :class:`ExplainerDataset`, see
40 `examples/explain/gnn_explainer_ba_shapes.py
41 <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/
42 /explain/gnn_explainer_ba_shapes.py>`_.
43
44 Args:
45 graph_generator (GraphGenerator or str): The graph generator to be
46 used, *e.g.*,
47 :class:`torch.geometric.datasets.graph_generator.BAGraph`
48 (or any string that automatically resolves to it).
49 motif_generator (MotifGenerator): The motif generator to be used,
50 *e.g.*,
51 :class:`torch_geometric.datasets.motif_generator.HouseMotif`
52 (or any string that automatically resolves to it).
53 num_motifs (int): The number of motifs to attach to the graph.
54 num_graphs (int, optional): The number of graphs to generate.
55 (default: :obj:`1`)
56 graph_generator_kwargs (Dict[str, Any], optional): Arguments passed to
57 the respective graph generator module in case it gets automatically
58 resolved. (default: :obj:`None`)
59 motif_generator_kwargs (Dict[str, Any], optional): Arguments passed to
60 the respective motif generator module in case it gets automatically
61 resolved. (default: :obj:`None`)
62 transform (callable, optional): A function/transform that takes in an
63 :obj:`torch_geometric.data.Data` object and returns a transformed
64 version. The data object will be transformed before every access.
65 (default: :obj:`None`)
66 """
67 def __init__(
68 self,
69 graph_generator: Union[GraphGenerator, str],
70 motif_generator: Union[MotifGenerator, str],
71 num_motifs: int,
72 num_graphs: int = 1,
73 graph_generator_kwargs: Optional[Dict[str, Any]] = None,
74 motif_generator_kwargs: Optional[Dict[str, Any]] = None,
75 transform: Optional[Callable] = None,
76 ):
77 super().__init__(root=None, transform=transform)
78
79 if num_motifs <= 0:
80 raise ValueError(f"At least one motif needs to be attached to the "
81 f"graph (got {num_motifs})")
82
83 self.graph_generator = GraphGenerator.resolve(
84 graph_generator,
85 **(graph_generator_kwargs or {}),
86 )
87 self.motif_generator = MotifGenerator.resolve(
88 motif_generator,
89 **(motif_generator_kwargs or {}),
90 )
91 self.num_motifs = num_motifs
92
93 # TODO (matthias) support on-the-fly graph generation.
94 data_list = [self.get_graph() for _ in range(num_graphs)]
95 self.data, self.slices = self.collate(data_list)
96
97 def get_graph(self) -> Explanation:
98 data = self.graph_generator()
99
100 edge_indices = [data.edge_index]
101 num_nodes = data.num_nodes
102 node_masks = [torch.zeros(data.num_nodes)]
103 edge_masks = [torch.zeros(data.num_edges)]
104 ys = [torch.zeros(num_nodes, dtype=torch.long)]
105
106 connecting_nodes = torch.randperm(num_nodes)[:self.num_motifs]
107 for i in connecting_nodes.tolist():
108 motif = self.motif_generator()
109
110 # Add motif to the graph.
111 edge_indices.append(motif.edge_index + num_nodes)
112 node_masks.append(torch.ones(motif.num_nodes))
113 edge_masks.append(torch.ones(motif.num_edges))
114
115 # Add random motif connection to the graph.
116 j = int(torch.randint(0, motif.num_nodes, (1, ))) + num_nodes
117 edge_indices.append(torch.tensor([[i, j], [j, i]]))
118 edge_masks.append(torch.zeros(2))
119
120 if 'y' in motif:
121 ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)
122
123 num_nodes += motif.num_nodes
124
125 return Explanation(
126 edge_index=torch.cat(edge_indices, dim=1),
127 y=torch.cat(ys, dim=0) if len(ys) > 1 else None,
128 edge_mask=torch.cat(edge_masks, dim=0),
129 node_mask=torch.cat(node_masks, dim=0),
130 )
131
132 def __repr__(self) -> str:
133 return (f'{self.__class__.__name__}({len(self)}, '
134 f'graph_generator={self.graph_generator}, '
135 f'motif_generator={self.motif_generator}, '
136 f'num_motifs={self.num_motifs})')
137
[end of torch_geometric/datasets/explainer_dataset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torch_geometric/datasets/explainer_dataset.py b/torch_geometric/datasets/explainer_dataset.py
--- a/torch_geometric/datasets/explainer_dataset.py
+++ b/torch_geometric/datasets/explainer_dataset.py
@@ -119,12 +119,14 @@
if 'y' in motif:
ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)
+ else:
+ ys.append(torch.ones(motif.num_nodes, dtype=torch.long))
num_nodes += motif.num_nodes
return Explanation(
edge_index=torch.cat(edge_indices, dim=1),
- y=torch.cat(ys, dim=0) if len(ys) > 1 else None,
+ y=torch.cat(ys, dim=0),
edge_mask=torch.cat(edge_masks, dim=0),
node_mask=torch.cat(node_masks, dim=0),
)
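As a sanity check after this change, the reproduction from the issue should no longer raise — a hypothetical usage sketch, assuming `CycleMotif` is importable from `torch_geometric.datasets.motif_generator`:
```python
from torch_geometric.datasets import ExplainerDataset
from torch_geometric.datasets.graph_generator import BAGraph
from torch_geometric.datasets.motif_generator import CycleMotif

dataset = ExplainerDataset(
    graph_generator=BAGraph(num_nodes=300, num_edges=5),
    motif_generator=CycleMotif(num_nodes=6),
    num_motifs=80,
)
# Previously this raised "torch.cat(): expected a non-empty list of Tensors".
print(dataset.num_classes)
```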
| {"golden_diff": "diff --git a/torch_geometric/datasets/explainer_dataset.py b/torch_geometric/datasets/explainer_dataset.py\n--- a/torch_geometric/datasets/explainer_dataset.py\n+++ b/torch_geometric/datasets/explainer_dataset.py\n@@ -119,12 +119,14 @@\n \n if 'y' in motif:\n ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)\n+ else:\n+ ys.append(torch.ones(motif.num_nodes, dtype=torch.long))\n \n num_nodes += motif.num_nodes\n \n return Explanation(\n edge_index=torch.cat(edge_indices, dim=1),\n- y=torch.cat(ys, dim=0) if len(ys) > 1 else None,\n+ y=torch.cat(ys, dim=0),\n edge_mask=torch.cat(edge_masks, dim=0),\n node_mask=torch.cat(node_masks, dim=0),\n )\n", "issue": "CycleMotif lack of label, therefore do not support GNNExplainer.\n### \ud83d\udc1b Describe the bug\n\nwhen running ./examples/explain/gnn_explainer_ba_shapes.py, when I replace the dataset:\r\n```\r\ndataset = ExplainerDataset(\r\n graph_generator=BAGraph(num_nodes=300, num_edges=5),\r\n motif_generator='house',\r\n num_motifs=80,\r\n transform=T.Constant(),\r\n) \r\n```\r\nwith\r\n```\r\ndataset = ExplainerDataset(\r\n graph_generator=BAGraph(num_nodes=300, num_edges=5),\r\n motif_generator=CycleMotif(num_nodes=6), \r\n num_motifs=80,\r\n transform=T.Constant(),\r\n)\r\n```\r\nThere is an error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/stt/py_github_repo_read/pytorch_geometric/examples/explain/gnn_explainer_ba_shapes.py\", line 46, in <module>\r\n out_channels=dataset.num_classes).to(device)\r\n File \"/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/in_memory_dataset.py\", line 90, in num_classes\r\n return super().num_classes\r\n File \"/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/dataset.py\", line 173, in num_classes\r\n y = torch.cat([data.y for data in data_list if 'y' in data], dim=0)\r\nRuntimeError: torch.cat(): expected a non-empty list of Tensors\r\n```\r\nThe reason behind locate at line 23 in `./torch_geometric/datasets/motif_generator/cycle.py`\r\n```\r\nstructure = Data(\r\n num_nodes=num_nodes,\r\n edge_index=torch.stack([row, col], dim=0),\r\n# TODO: lack of y label\r\n )\r\n```\r\nlack of y label as in `./torch_geometric/datasets/motif_generator/house.py`\r\n```\r\nstructure = Data(\r\n num_nodes=5,\r\n edge_index=torch.tensor([\r\n [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4],\r\n [1, 3, 4, 4, 2, 0, 1, 3, 2, 0, 0, 1],\r\n ]),\r\n y=torch.tensor([0, 0, 1, 1, 2]),\r\n )\r\n```\r\nAccording to GNNExplainer original repository, for the cycle motif, the node labels are the same. 
Therefore, we only need to add `y=torch.tensor([0]*num_nodes)`\n\n### Versions\n\nPyTorch version: 2.1.0+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:49:32) [GCC 12.3.0] (64-bit runtime)\r\nPython platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 11.8.89\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: NVIDIA GeForce RTX 3090\r\nGPU 1: NVIDIA GeForce RTX 3090\r\n\r\nNvidia driver version: 535.113.01\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\n\u67b6\u6784\uff1a x86_64\r\nCPU \u8fd0\u884c\u6a21\u5f0f\uff1a 32-bit, 64-bit\r\nAddress sizes: 39 bits physical, 48 bits virtual\r\n\u5b57\u8282\u5e8f\uff1a Little Endian\r\nCPU: 24\r\n\u5728\u7ebf CPU \u5217\u8868\uff1a 0-23\r\n\u5382\u5546 ID\uff1a GenuineIntel\r\n\u578b\u53f7\u540d\u79f0\uff1a 13th Gen Intel(R) Core(TM) i7-13700KF\r\nCPU \u7cfb\u5217\uff1a 6\r\n\u578b\u53f7\uff1a 183\r\n\u6bcf\u4e2a\u6838\u7684\u7ebf\u7a0b\u6570\uff1a 2\r\n\u6bcf\u4e2a\u5ea7\u7684\u6838\u6570\uff1a 16\r\n\u5ea7\uff1a 1\r\n\u6b65\u8fdb\uff1a 1\r\nCPU \u6700\u5927 MHz\uff1a 5400.0000\r\nCPU \u6700\u5c0f MHz\uff1a 800.0000\r\nBogoMIPS\uff1a 6835.20\r\n\u6807\u8bb0\uff1a fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\r\n\u865a\u62df\u5316\uff1a VT-x\r\nL1d \u7f13\u5b58\uff1a 640 KiB (16 instances)\r\nL1i \u7f13\u5b58\uff1a 768 KiB (16 instances)\r\nL2 \u7f13\u5b58\uff1a 24 MiB (10 instances)\r\nL3 \u7f13\u5b58\uff1a 30 MiB (1 instance)\r\nNUMA \u8282\u70b9\uff1a 1\r\nNUMA \u8282\u70b90 CPU\uff1a 0-23\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not 
affected\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.0\r\n[pip3] torch==2.1.0\r\n[pip3] torchvision==0.16.0\r\n[pip3] triton==2.1.0\r\n[conda] numpy 1.26.0 pypi_0 pypi\r\n[conda] torch 2.1.0 pypi_0 pypi\r\n[conda] torchvision 0.16.0 pypi_0 pypi\r\n[conda] triton 2.1.0 pypi_0 pypi\n", "before_files": [{"content": "from typing import Any, Callable, Dict, Optional, Union\n\nimport torch\n\nfrom torch_geometric.data import InMemoryDataset\nfrom torch_geometric.datasets.graph_generator import GraphGenerator\nfrom torch_geometric.datasets.motif_generator import MotifGenerator\nfrom torch_geometric.explain import Explanation\n\n\nclass ExplainerDataset(InMemoryDataset):\n r\"\"\"Generates a synthetic dataset for evaluating explainabilty algorithms,\n as described in the `\"GNNExplainer: Generating Explanations for Graph\n Neural Networks\" <https://arxiv.org/abs/1903.03894>`__ paper.\n The :class:`~torch_geometric.datasets.ExplainerDataset` creates synthetic\n graphs coming from a\n :class:`~torch_geometric.datasets.graph_generator.GraphGenerator`, and\n randomly attaches :obj:`num_motifs` many motifs to it coming from a\n :class:`~torch_geometric.datasets.graph_generator.MotifGenerator`.\n Ground-truth node-level and edge-level explainabilty masks are given based\n on whether nodes and edges are part of a certain motif or not.\n\n For example, to generate a random Barabasi-Albert (BA) graph with 300\n nodes, in which we want to randomly attach 80 :obj:`\"house\"` motifs, write:\n\n .. code-block:: python\n\n from torch_geometric.datasets import ExplainerDataset\n from torch_geometric.datasets.graph_generator import BAGraph\n\n dataset = ExplainerDataset(\n graph_generator=BAGraph(num_nodes=300, num_edges=5),\n motif_generator='house',\n num_motifs=80,\n )\n\n .. note::\n\n For an example of using :class:`ExplainerDataset`, see\n `examples/explain/gnn_explainer_ba_shapes.py\n <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/\n /explain/gnn_explainer_ba_shapes.py>`_.\n\n Args:\n graph_generator (GraphGenerator or str): The graph generator to be\n used, *e.g.*,\n :class:`torch.geometric.datasets.graph_generator.BAGraph`\n (or any string that automatically resolves to it).\n motif_generator (MotifGenerator): The motif generator to be used,\n *e.g.*,\n :class:`torch_geometric.datasets.motif_generator.HouseMotif`\n (or any string that automatically resolves to it).\n num_motifs (int): The number of motifs to attach to the graph.\n num_graphs (int, optional): The number of graphs to generate.\n (default: :obj:`1`)\n graph_generator_kwargs (Dict[str, Any], optional): Arguments passed to\n the respective graph generator module in case it gets automatically\n resolved. (default: :obj:`None`)\n motif_generator_kwargs (Dict[str, Any], optional): Arguments passed to\n the respective motif generator module in case it gets automatically\n resolved. (default: :obj:`None`)\n transform (callable, optional): A function/transform that takes in an\n :obj:`torch_geometric.data.Data` object and returns a transformed\n version. 
The data object will be transformed before every access.\n (default: :obj:`None`)\n \"\"\"\n def __init__(\n self,\n graph_generator: Union[GraphGenerator, str],\n motif_generator: Union[MotifGenerator, str],\n num_motifs: int,\n num_graphs: int = 1,\n graph_generator_kwargs: Optional[Dict[str, Any]] = None,\n motif_generator_kwargs: Optional[Dict[str, Any]] = None,\n transform: Optional[Callable] = None,\n ):\n super().__init__(root=None, transform=transform)\n\n if num_motifs <= 0:\n raise ValueError(f\"At least one motif needs to be attached to the \"\n f\"graph (got {num_motifs})\")\n\n self.graph_generator = GraphGenerator.resolve(\n graph_generator,\n **(graph_generator_kwargs or {}),\n )\n self.motif_generator = MotifGenerator.resolve(\n motif_generator,\n **(motif_generator_kwargs or {}),\n )\n self.num_motifs = num_motifs\n\n # TODO (matthias) support on-the-fly graph generation.\n data_list = [self.get_graph() for _ in range(num_graphs)]\n self.data, self.slices = self.collate(data_list)\n\n def get_graph(self) -> Explanation:\n data = self.graph_generator()\n\n edge_indices = [data.edge_index]\n num_nodes = data.num_nodes\n node_masks = [torch.zeros(data.num_nodes)]\n edge_masks = [torch.zeros(data.num_edges)]\n ys = [torch.zeros(num_nodes, dtype=torch.long)]\n\n connecting_nodes = torch.randperm(num_nodes)[:self.num_motifs]\n for i in connecting_nodes.tolist():\n motif = self.motif_generator()\n\n # Add motif to the graph.\n edge_indices.append(motif.edge_index + num_nodes)\n node_masks.append(torch.ones(motif.num_nodes))\n edge_masks.append(torch.ones(motif.num_edges))\n\n # Add random motif connection to the graph.\n j = int(torch.randint(0, motif.num_nodes, (1, ))) + num_nodes\n edge_indices.append(torch.tensor([[i, j], [j, i]]))\n edge_masks.append(torch.zeros(2))\n\n if 'y' in motif:\n ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)\n\n num_nodes += motif.num_nodes\n\n return Explanation(\n edge_index=torch.cat(edge_indices, dim=1),\n y=torch.cat(ys, dim=0) if len(ys) > 1 else None,\n edge_mask=torch.cat(edge_masks, dim=0),\n node_mask=torch.cat(node_masks, dim=0),\n )\n\n def __repr__(self) -> str:\n return (f'{self.__class__.__name__}({len(self)}, '\n f'graph_generator={self.graph_generator}, '\n f'motif_generator={self.motif_generator}, '\n f'num_motifs={self.num_motifs})')\n", "path": "torch_geometric/datasets/explainer_dataset.py"}]} | 4,065 | 209 |
gh_patches_debug_23742 | rasdani/github-patches | git_diff | python-discord__bot-1556 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use embed timestamp in mod pings off
When a mod turns off mod pings, a confirmation is sent to inform the user that their pings have successfully been turned off.
In this confirmation, we currently include the time at which it is due to be sent; this time is in UTC.
I propose we refactor this part of the code to instead use an Embed with the timestamp field.
https://github.com/python-discord/bot/blob/ce819ade482e82ecbc474bce5fb8ac9dd8b37b40/bot/exts/moderation/modpings.py#L107
This would mean that the time would automatically get converted to the user's current time zone by Discord.
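A rough sketch of the idea as a fragment of `off_command` (illustrative only; the exact embed layout is up to the maintainers — `duration` is the datetime the command already computes):
```python
from discord import Embed

# Inside ModPings.off_command(), replacing the plain-text confirmation:
embed = Embed(
    description=f"{Emojis.check_mark} Moderators role has been removed.",
    timestamp=duration,  # Discord renders this in each viewer's local time zone
)
await ctx.send(embed=embed)
```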
</issue>
<code>
[start of bot/exts/moderation/modpings.py]
1 import datetime
2 import logging
3
4 from async_rediscache import RedisCache
5 from dateutil.parser import isoparse
6 from discord import Member
7 from discord.ext.commands import Cog, Context, group, has_any_role
8
9 from bot.bot import Bot
10 from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles
11 from bot.converters import Expiry
12 from bot.utils.scheduling import Scheduler
13
14 log = logging.getLogger(__name__)
15
16
17 class ModPings(Cog):
18 """Commands for a moderator to turn moderator pings on and off."""
19
20 # RedisCache[discord.Member.id, 'Naïve ISO 8601 string']
21 # The cache's keys are mods who have pings off.
22 # The cache's values are the times when the role should be re-applied to them, stored in ISO format.
23 pings_off_mods = RedisCache()
24
25 def __init__(self, bot: Bot):
26 self.bot = bot
27 self._role_scheduler = Scheduler(self.__class__.__name__)
28
29 self.guild = None
30 self.moderators_role = None
31
32 self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name="mod-pings-reschedule")
33
34 async def reschedule_roles(self) -> None:
35 """Reschedule moderators role re-apply times."""
36 await self.bot.wait_until_guild_available()
37 self.guild = self.bot.get_guild(Guild.id)
38 self.moderators_role = self.guild.get_role(Roles.moderators)
39
40 mod_team = self.guild.get_role(Roles.mod_team)
41 pings_on = self.moderators_role.members
42 pings_off = await self.pings_off_mods.to_dict()
43
44 log.trace("Applying the moderators role to the mod team where necessary.")
45 for mod in mod_team.members:
46 if mod in pings_on: # Make sure that on-duty mods aren't in the cache.
47 if mod in pings_off:
48 await self.pings_off_mods.delete(mod.id)
49 continue
50
51 # Keep the role off only for those in the cache.
52 if mod.id not in pings_off:
53 await self.reapply_role(mod)
54 else:
55 expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)
56 self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))
57
58 async def reapply_role(self, mod: Member) -> None:
59 """Reapply the moderator's role to the given moderator."""
60 log.trace(f"Re-applying role to mod with ID {mod.id}.")
61 await mod.add_roles(self.moderators_role, reason="Pings off period expired.")
62
63 @group(name='modpings', aliases=('modping',), invoke_without_command=True)
64 @has_any_role(*MODERATION_ROLES)
65 async def modpings_group(self, ctx: Context) -> None:
66 """Allow the removal and re-addition of the pingable moderators role."""
67 await ctx.send_help(ctx.command)
68
69 @modpings_group.command(name='off')
70 @has_any_role(*MODERATION_ROLES)
71 async def off_command(self, ctx: Context, duration: Expiry) -> None:
72 """
73 Temporarily removes the pingable moderators role for a set amount of time.
74
75 A unit of time should be appended to the duration.
76 Units (∗case-sensitive):
77 \u2003`y` - years
78 \u2003`m` - months∗
79 \u2003`w` - weeks
80 \u2003`d` - days
81 \u2003`h` - hours
82 \u2003`M` - minutes∗
83 \u2003`s` - seconds
84
85 Alternatively, an ISO 8601 timestamp can be provided for the duration.
86
87 The duration cannot be longer than 30 days.
88 """
89 duration: datetime.datetime
90 delta = duration - datetime.datetime.utcnow()
91 if delta > datetime.timedelta(days=30):
92 await ctx.send(":x: Cannot remove the role for longer than 30 days.")
93 return
94
95 mod = ctx.author
96
97 until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.
98 await mod.remove_roles(self.moderators_role, reason=f"Turned pings off until {until_date}.")
99
100 await self.pings_off_mods.set(mod.id, duration.isoformat())
101
102 # Allow rescheduling the task without cancelling it separately via the `on` command.
103 if mod.id in self._role_scheduler:
104 self._role_scheduler.cancel(mod.id)
105 self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))
106
107 await ctx.send(f"{Emojis.check_mark} Moderators role has been removed until {until_date}.")
108
109 @modpings_group.command(name='on')
110 @has_any_role(*MODERATION_ROLES)
111 async def on_command(self, ctx: Context) -> None:
112 """Re-apply the pingable moderators role."""
113 mod = ctx.author
114 if mod in self.moderators_role.members:
115 await ctx.send(":question: You already have the role.")
116 return
117
118 await mod.add_roles(self.moderators_role, reason="Pings off period canceled.")
119
120 await self.pings_off_mods.delete(mod.id)
121
122 # We assume the task exists. Lack of it may indicate a bug.
123 self._role_scheduler.cancel(mod.id)
124
125 await ctx.send(f"{Emojis.check_mark} Moderators role has been re-applied.")
126
127 def cog_unload(self) -> None:
128 """Cancel role tasks when the cog unloads."""
129 log.trace("Cog unload: canceling role tasks.")
130 self.reschedule_task.cancel()
131 self._role_scheduler.cancel_all()
132
133
134 def setup(bot: Bot) -> None:
135 """Load the ModPings cog."""
136 bot.add_cog(ModPings(bot))
137
[end of bot/exts/moderation/modpings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bot/exts/moderation/modpings.py b/bot/exts/moderation/modpings.py
--- a/bot/exts/moderation/modpings.py
+++ b/bot/exts/moderation/modpings.py
@@ -3,11 +3,11 @@
from async_rediscache import RedisCache
from dateutil.parser import isoparse
-from discord import Member
+from discord import Embed, Member
from discord.ext.commands import Cog, Context, group, has_any_role
from bot.bot import Bot
-from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles
+from bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles
from bot.converters import Expiry
from bot.utils.scheduling import Scheduler
@@ -104,7 +104,9 @@
self._role_scheduler.cancel(mod.id)
self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))
- await ctx.send(f"{Emojis.check_mark} Moderators role has been removed until {until_date}.")
+ embed = Embed(timestamp=duration, colour=Colours.bright_green)
+ embed.set_footer(text="Moderators role has been removed until", icon_url=Icons.green_checkmark)
+ await ctx.send(embed=embed)
@modpings_group.command(name='on')
@has_any_role(*MODERATION_ROLES)
| {"golden_diff": "diff --git a/bot/exts/moderation/modpings.py b/bot/exts/moderation/modpings.py\n--- a/bot/exts/moderation/modpings.py\n+++ b/bot/exts/moderation/modpings.py\n@@ -3,11 +3,11 @@\n \n from async_rediscache import RedisCache\n from dateutil.parser import isoparse\n-from discord import Member\n+from discord import Embed, Member\n from discord.ext.commands import Cog, Context, group, has_any_role\n \n from bot.bot import Bot\n-from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles\n+from bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles\n from bot.converters import Expiry\n from bot.utils.scheduling import Scheduler\n \n@@ -104,7 +104,9 @@\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n \n- await ctx.send(f\"{Emojis.check_mark} Moderators role has been removed until {until_date}.\")\n+ embed = Embed(timestamp=duration, colour=Colours.bright_green)\n+ embed.set_footer(text=\"Moderators role has been removed until\", icon_url=Icons.green_checkmark)\n+ await ctx.send(embed=embed)\n \n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n", "issue": "Use embed timestamp in mod pings off\nWhen a mods turns off mod pings, a confirmation is sent to inform the user that their pings have successfully been turned off.\r\n\r\nIn this confirmation, we currently include the time at which it is due to be sent, this time is in UTC.\r\n\r\nI propose we refactor this part of the code to instead use an Embed, with a the timestamp field.\r\nhttps://github.com/python-discord/bot/blob/ce819ade482e82ecbc474bce5fb8ac9dd8b37b40/bot/exts/moderation/modpings.py#L107\r\nThis would mean that the time would automatically get converted to the user's current time zone by Discord.\n", "before_files": [{"content": "import datetime\nimport logging\n\nfrom async_rediscache import RedisCache\nfrom dateutil.parser import isoparse\nfrom discord import Member\nfrom discord.ext.commands import Cog, Context, group, has_any_role\n\nfrom bot.bot import Bot\nfrom bot.constants import Emojis, Guild, MODERATION_ROLES, Roles\nfrom bot.converters import Expiry\nfrom bot.utils.scheduling import Scheduler\n\nlog = logging.getLogger(__name__)\n\n\nclass ModPings(Cog):\n \"\"\"Commands for a moderator to turn moderator pings on and off.\"\"\"\n\n # RedisCache[discord.Member.id, 'Na\u00efve ISO 8601 string']\n # The cache's keys are mods who have pings off.\n # The cache's values are the times when the role should be re-applied to them, stored in ISO format.\n pings_off_mods = RedisCache()\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self._role_scheduler = Scheduler(self.__class__.__name__)\n\n self.guild = None\n self.moderators_role = None\n\n self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name=\"mod-pings-reschedule\")\n\n async def reschedule_roles(self) -> None:\n \"\"\"Reschedule moderators role re-apply times.\"\"\"\n await self.bot.wait_until_guild_available()\n self.guild = self.bot.get_guild(Guild.id)\n self.moderators_role = self.guild.get_role(Roles.moderators)\n\n mod_team = self.guild.get_role(Roles.mod_team)\n pings_on = self.moderators_role.members\n pings_off = await self.pings_off_mods.to_dict()\n\n log.trace(\"Applying the moderators role to the mod team where necessary.\")\n for mod in mod_team.members:\n if mod in pings_on: # Make sure that on-duty mods aren't in the cache.\n if mod in pings_off:\n await self.pings_off_mods.delete(mod.id)\n continue\n\n 
# Keep the role off only for those in the cache.\n if mod.id not in pings_off:\n await self.reapply_role(mod)\n else:\n expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)\n self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))\n\n async def reapply_role(self, mod: Member) -> None:\n \"\"\"Reapply the moderator's role to the given moderator.\"\"\"\n log.trace(f\"Re-applying role to mod with ID {mod.id}.\")\n await mod.add_roles(self.moderators_role, reason=\"Pings off period expired.\")\n\n @group(name='modpings', aliases=('modping',), invoke_without_command=True)\n @has_any_role(*MODERATION_ROLES)\n async def modpings_group(self, ctx: Context) -> None:\n \"\"\"Allow the removal and re-addition of the pingable moderators role.\"\"\"\n await ctx.send_help(ctx.command)\n\n @modpings_group.command(name='off')\n @has_any_role(*MODERATION_ROLES)\n async def off_command(self, ctx: Context, duration: Expiry) -> None:\n \"\"\"\n Temporarily removes the pingable moderators role for a set amount of time.\n\n A unit of time should be appended to the duration.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n Alternatively, an ISO 8601 timestamp can be provided for the duration.\n\n The duration cannot be longer than 30 days.\n \"\"\"\n duration: datetime.datetime\n delta = duration - datetime.datetime.utcnow()\n if delta > datetime.timedelta(days=30):\n await ctx.send(\":x: Cannot remove the role for longer than 30 days.\")\n return\n\n mod = ctx.author\n\n until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.\n await mod.remove_roles(self.moderators_role, reason=f\"Turned pings off until {until_date}.\")\n\n await self.pings_off_mods.set(mod.id, duration.isoformat())\n\n # Allow rescheduling the task without cancelling it separately via the `on` command.\n if mod.id in self._role_scheduler:\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been removed until {until_date}.\")\n\n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n async def on_command(self, ctx: Context) -> None:\n \"\"\"Re-apply the pingable moderators role.\"\"\"\n mod = ctx.author\n if mod in self.moderators_role.members:\n await ctx.send(\":question: You already have the role.\")\n return\n\n await mod.add_roles(self.moderators_role, reason=\"Pings off period canceled.\")\n\n await self.pings_off_mods.delete(mod.id)\n\n # We assume the task exists. Lack of it may indicate a bug.\n self._role_scheduler.cancel(mod.id)\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been re-applied.\")\n\n def cog_unload(self) -> None:\n \"\"\"Cancel role tasks when the cog unloads.\"\"\"\n log.trace(\"Cog unload: canceling role tasks.\")\n self.reschedule_task.cancel()\n self._role_scheduler.cancel_all()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the ModPings cog.\"\"\"\n bot.add_cog(ModPings(bot))\n", "path": "bot/exts/moderation/modpings.py"}]} | 2,298 | 313 |
gh_patches_debug_17596 | rasdani/github-patches | git_diff | streamlink__streamlink-4517 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SegmentedStreamWriter.close() does not reliably finish before CLI exits (race condition)
### Checklist
- [X] This is a bug report and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
### Background
I was doing some work on the latest `master` branch, adding a feature to Segmented/HLS streams that requires some cleanup, so I added the cleanup at the end of `SegmentedStreamWriter.close()`, like
```python
self.closed = True
self.reader.close()
self.executor.shutdown(wait=True, cancel_futures=True)
__my_extra_cleanup()
```
And I found that my cleanup code is not always executed when HLS streams end normally.
### Problem
When an HLS stream ends normally, the whole shutdown process is triggered by the last line of `SegmentedStreamWriter.run()`, `self.close()`. `SegmentedStreamWriter.close()` then calls `SegmentedStreamReader.close()`, the iteration loop of `stream_cli.main:read_stream()` reaches its end, and `main()` exits.
The problem is that `SegmentedStreamWriter.run() -> SegmentedStreamWriter.close()` runs in a separate thread, which means `SegmentedStreamWriter.close()` cannot reliably finish its work before the main thread exits.
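One way to remove the race (a sketch, not necessarily the final fix) is to have `SegmentedStreamReader.close()` wait for the worker and writer threads before returning, taking care not to join the thread it is currently running on:
```python
from threading import current_thread

# Sketch of SegmentedStreamReader.close() in src/streamlink/stream/segmented.py:
def close(self):
    self.worker.close()
    self.writer.close()
    self.buffer.close()

    # Wait for both threads to finish their cleanup before returning to the
    # caller (normally the main thread), but never join the current thread.
    current = current_thread()
    if current is not self.worker:
        self.worker.join(timeout=self.timeout)
    if current is not self.writer:
        self.writer.join(timeout=self.timeout)
```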
### To reproduce
To reliably trigger the problem, add a sleep to the original `SegmentedStreamWriter.close()`, like
```python
self.closed = True
self.reader.close()
self.executor.shutdown(wait=True, cancel_futures=True)
time.sleep(3)
log.debug("SegmentedStreamWriter.close() ends")
```
Then run the CLI with a short HLS stream; the `SegmentedStreamWriter.close() ends` message never appears.
### Debug log
```text
......
[cli][info] Stream ended
[cli][info] Closing currently open stream...
```
### Expected result
```text
......
[cli][info] Stream ended
[cli][info] Closing currently open stream...
[stream.segmented][debug] SegmentedStreamWriter.close() ends
```
</issue>
<code>
[start of src/streamlink/stream/segmented.py]
1 import logging
2 import queue
3 from concurrent import futures
4 from concurrent.futures import Future, ThreadPoolExecutor
5 from sys import version_info
6 from threading import Event, Thread
7 from typing import Any, Optional
8
9 from streamlink.buffers import RingBuffer
10 from streamlink.stream.stream import StreamIO
11
12 log = logging.getLogger(__name__)
13
14
15 class CompatThreadPoolExecutor(ThreadPoolExecutor):
16 if version_info < (3, 9):
17 def shutdown(self, wait=True, cancel_futures=False): # pragma: no cover
18 with self._shutdown_lock:
19 self._shutdown = True
20 if cancel_futures:
21 # Drain all work items from the queue, and then cancel their
22 # associated futures.
23 while True:
24 try:
25 work_item = self._work_queue.get_nowait()
26 except queue.Empty:
27 break
28 if work_item is not None:
29 work_item.future.cancel()
30
31 # Send a wake-up to prevent threads calling
32 # _work_queue.get(block=True) from permanently blocking.
33 self._work_queue.put(None)
34 if wait:
35 for t in self._threads:
36 t.join()
37
38
39 class SegmentedStreamWorker(Thread):
40 """The general worker thread.
41
42 This thread is responsible for queueing up segments in the
43 writer thread.
44 """
45
46 def __init__(self, reader, **kwargs):
47 self.closed = False
48 self.reader = reader
49 self.writer = reader.writer
50 self.stream = reader.stream
51 self.session = reader.session
52
53 self._wait = Event()
54
55 super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}")
56
57 def close(self):
58 """Shuts down the thread."""
59 if self.closed: # pragma: no cover
60 return
61
62 log.debug("Closing worker thread")
63
64 self.closed = True
65 self._wait.set()
66
67 def wait(self, time):
68 """Pauses the thread for a specified time.
69
70 Returns False if interrupted by another thread and True if the
71 time runs out normally.
72 """
73 return not self._wait.wait(time)
74
75 def iter_segments(self):
76 """The iterator that generates segments for the worker thread.
77
78 Should be overridden by the inheriting class.
79 """
80 return
81 yield
82
83 def run(self):
84 for segment in self.iter_segments():
85 if self.closed: # pragma: no cover
86 break
87 self.writer.put(segment)
88
89 # End of stream, tells the writer to exit
90 self.writer.put(None)
91 self.close()
92
93
94 class SegmentedStreamWriter(Thread):
95 """The writer thread.
96
97 This thread is responsible for fetching segments, processing them
98 and finally writing the data to the buffer.
99 """
100
101 def __init__(self, reader, size=20, retries=None, threads=None, timeout=None):
102 self.closed = False
103 self.reader = reader
104 self.stream = reader.stream
105 self.session = reader.session
106
107 if not retries:
108 retries = self.session.options.get("stream-segment-attempts")
109
110 if not threads:
111 threads = self.session.options.get("stream-segment-threads")
112
113 if not timeout:
114 timeout = self.session.options.get("stream-segment-timeout")
115
116 self.retries = retries
117 self.timeout = timeout
118 self.threads = threads
119 self.executor = CompatThreadPoolExecutor(max_workers=self.threads)
120 self.futures = queue.Queue(size)
121
122 super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}")
123
124 def close(self):
125 """Shuts down the thread, its executor and closes the reader (worker thread and buffer)."""
126 if self.closed: # pragma: no cover
127 return
128
129 log.debug("Closing writer thread")
130
131 self.closed = True
132 self.reader.close()
133 self.executor.shutdown(wait=True, cancel_futures=True)
134
135 def put(self, segment):
136 """Adds a segment to the download pool and write queue."""
137 if self.closed: # pragma: no cover
138 return
139
140 if segment is None:
141 future = None
142 else:
143 future = self.executor.submit(self.fetch, segment, retries=self.retries)
144
145 self.queue(segment, future)
146
147 def queue(self, segment: Any, future: Optional[Future], *data):
148 """Puts values into a queue but aborts if this thread is closed."""
149 while not self.closed: # pragma: no branch
150 try:
151 self._futures_put((segment, future, *data))
152 return
153 except queue.Full: # pragma: no cover
154 continue
155
156 def _futures_put(self, item):
157 self.futures.put(item, block=True, timeout=1)
158
159 def _futures_get(self):
160 return self.futures.get(block=True, timeout=0.5)
161
162 @staticmethod
163 def _future_result(future: Future):
164 return future.result(timeout=0.5)
165
166 def fetch(self, segment):
167 """Fetches a segment.
168
169 Should be overridden by the inheriting class.
170 """
171 pass
172
173 def write(self, segment, result, *data):
174 """Writes a segment to the buffer.
175
176 Should be overridden by the inheriting class.
177 """
178 pass
179
180 def run(self):
181 while not self.closed:
182 try:
183 segment, future, *data = self._futures_get()
184 except queue.Empty: # pragma: no cover
185 continue
186
187 # End of stream
188 if future is None:
189 break
190
191 while not self.closed: # pragma: no branch
192 try:
193 result = self._future_result(future)
194 except futures.TimeoutError: # pragma: no cover
195 continue
196 except futures.CancelledError: # pragma: no cover
197 break
198
199 if result is not None: # pragma: no branch
200 self.write(segment, result, *data)
201
202 break
203
204 self.close()
205
206
207 class SegmentedStreamReader(StreamIO):
208 __worker__ = SegmentedStreamWorker
209 __writer__ = SegmentedStreamWriter
210
211 def __init__(self, stream, timeout=None):
212 super().__init__()
213 self.session = stream.session
214 self.stream = stream
215 self.timeout = timeout or self.session.options.get("stream-timeout")
216
217 buffer_size = self.session.get_option("ringbuffer-size")
218 self.buffer = RingBuffer(buffer_size)
219 self.writer = self.__writer__(self)
220 self.worker = self.__worker__(self)
221
222 def open(self):
223 self.writer.start()
224 self.worker.start()
225
226 def close(self):
227 self.worker.close()
228 self.writer.close()
229 self.buffer.close()
230
231 def read(self, size):
232 return self.buffer.read(
233 size,
234 block=self.writer.is_alive(),
235 timeout=self.timeout
236 )
237
[end of src/streamlink/stream/segmented.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/stream/segmented.py b/src/streamlink/stream/segmented.py
--- a/src/streamlink/stream/segmented.py
+++ b/src/streamlink/stream/segmented.py
@@ -3,7 +3,7 @@
from concurrent import futures
from concurrent.futures import Future, ThreadPoolExecutor
from sys import version_info
-from threading import Event, Thread
+from threading import Event, Thread, current_thread
from typing import Any, Optional
from streamlink.buffers import RingBuffer
@@ -228,6 +228,12 @@
self.writer.close()
self.buffer.close()
+ current = current_thread()
+ if current is not self.worker: # pragma: no branch
+ self.worker.join(timeout=self.timeout)
+ if current is not self.writer: # pragma: no branch
+ self.writer.join(timeout=self.timeout)
+
def read(self, size):
return self.buffer.read(
size,
| {"golden_diff": "diff --git a/src/streamlink/stream/segmented.py b/src/streamlink/stream/segmented.py\n--- a/src/streamlink/stream/segmented.py\n+++ b/src/streamlink/stream/segmented.py\n@@ -3,7 +3,7 @@\n from concurrent import futures\n from concurrent.futures import Future, ThreadPoolExecutor\n from sys import version_info\n-from threading import Event, Thread\n+from threading import Event, Thread, current_thread\n from typing import Any, Optional\n \n from streamlink.buffers import RingBuffer\n@@ -228,6 +228,12 @@\n self.writer.close()\n self.buffer.close()\n \n+ current = current_thread()\n+ if current is not self.worker: # pragma: no branch\n+ self.worker.join(timeout=self.timeout)\n+ if current is not self.writer: # pragma: no branch\n+ self.writer.join(timeout=self.timeout)\n+\n def read(self, size):\n return self.buffer.read(\n size,\n", "issue": "SegmentedStreamWriter.close() does not reliably finish before CLI exits (race condition)\n### Checklist\r\n\r\n- [X] This is a bug report and not a different kind of issue\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\nLatest build from the master branch\r\n\r\n### Description\r\n\r\n### Background\r\nI was doing some works on latest `master` branch, adding some feature on Segmented/HLS streams, and it requires some cleanup, so I added them in the end of `SegmentedStreamWriter.close()`, like\r\n\r\n```python\r\n self.closed = True\r\n self.reader.close()\r\n self.executor.shutdown(wait=True, cancel_futures=True)\r\n\r\n __my_extra_cleanup()\r\n```\r\n\r\nAnd I found my cleanup code does not always being executed, when HLS streams end normally.\r\n\r\n### Problem\r\nWhen an HLS stream ends normally, the whole shutdown process is triggered by the last line of `SegmentedStreamWriter.run()`, `self.close()`. 
Then `SegmentedStreamWriter.close()` calls `SegmentedStreamReader.close()`, then the iteration loop of `stream_cli.main:read_stream()` will reach its end, then `main()` exits.\r\n\r\nThe problem is, `SegmentedStreamWriter.run() -> SegmentedStreamWriter.close()` was run in a separated thread, which means `SegmentedStreamWriter.close()` cannot reliably finish its work before main thread exits.\r\n\r\n### To reproduce\r\nTo reliably trigger the problem, adding a sleep to original `SegmentedStreamWriter.close()`, like\r\n\r\n```python\r\n self.closed = True\r\n self.reader.close()\r\n self.executor.shutdown(wait=True, cancel_futures=True)\r\n\r\n time.sleep(3)\r\n log.debug(\"SegmentedStreamWriter.close() ends\")\r\n```\r\n\r\nThen run the CLI with a short HLS stream, the `SegmentedStreamWriter.close() ends` message never appears.\r\n\r\n\r\n### Debug log\r\n\r\n```text\r\n......\r\n\r\n[cli][info] Stream ended\r\n[cli][info] Closing currently open stream...\r\n```\r\n\r\n### Expected result\r\n```text\r\n......\r\n\r\n[cli][info] Stream ended\r\n[cli][info] Closing currently open stream...\r\n[stream.segmented][debug] SegmentedStreamWriter.close() ends\r\n```\r\n\n", "before_files": [{"content": "import logging\nimport queue\nfrom concurrent import futures\nfrom concurrent.futures import Future, ThreadPoolExecutor\nfrom sys import version_info\nfrom threading import Event, Thread\nfrom typing import Any, Optional\n\nfrom streamlink.buffers import RingBuffer\nfrom streamlink.stream.stream import StreamIO\n\nlog = logging.getLogger(__name__)\n\n\nclass CompatThreadPoolExecutor(ThreadPoolExecutor):\n if version_info < (3, 9):\n def shutdown(self, wait=True, cancel_futures=False): # pragma: no cover\n with self._shutdown_lock:\n self._shutdown = True\n if cancel_futures:\n # Drain all work items from the queue, and then cancel their\n # associated futures.\n while True:\n try:\n work_item = self._work_queue.get_nowait()\n except queue.Empty:\n break\n if work_item is not None:\n work_item.future.cancel()\n\n # Send a wake-up to prevent threads calling\n # _work_queue.get(block=True) from permanently blocking.\n self._work_queue.put(None)\n if wait:\n for t in self._threads:\n t.join()\n\n\nclass SegmentedStreamWorker(Thread):\n \"\"\"The general worker thread.\n\n This thread is responsible for queueing up segments in the\n writer thread.\n \"\"\"\n\n def __init__(self, reader, **kwargs):\n self.closed = False\n self.reader = reader\n self.writer = reader.writer\n self.stream = reader.stream\n self.session = reader.session\n\n self._wait = Event()\n\n super().__init__(daemon=True, name=f\"Thread-{self.__class__.__name__}\")\n\n def close(self):\n \"\"\"Shuts down the thread.\"\"\"\n if self.closed: # pragma: no cover\n return\n\n log.debug(\"Closing worker thread\")\n\n self.closed = True\n self._wait.set()\n\n def wait(self, time):\n \"\"\"Pauses the thread for a specified time.\n\n Returns False if interrupted by another thread and True if the\n time runs out normally.\n \"\"\"\n return not self._wait.wait(time)\n\n def iter_segments(self):\n \"\"\"The iterator that generates segments for the worker thread.\n\n Should be overridden by the inheriting class.\n \"\"\"\n return\n yield\n\n def run(self):\n for segment in self.iter_segments():\n if self.closed: # pragma: no cover\n break\n self.writer.put(segment)\n\n # End of stream, tells the writer to exit\n self.writer.put(None)\n self.close()\n\n\nclass SegmentedStreamWriter(Thread):\n \"\"\"The writer thread.\n\n This thread is responsible for 
fetching segments, processing them\n and finally writing the data to the buffer.\n \"\"\"\n\n def __init__(self, reader, size=20, retries=None, threads=None, timeout=None):\n self.closed = False\n self.reader = reader\n self.stream = reader.stream\n self.session = reader.session\n\n if not retries:\n retries = self.session.options.get(\"stream-segment-attempts\")\n\n if not threads:\n threads = self.session.options.get(\"stream-segment-threads\")\n\n if not timeout:\n timeout = self.session.options.get(\"stream-segment-timeout\")\n\n self.retries = retries\n self.timeout = timeout\n self.threads = threads\n self.executor = CompatThreadPoolExecutor(max_workers=self.threads)\n self.futures = queue.Queue(size)\n\n super().__init__(daemon=True, name=f\"Thread-{self.__class__.__name__}\")\n\n def close(self):\n \"\"\"Shuts down the thread, its executor and closes the reader (worker thread and buffer).\"\"\"\n if self.closed: # pragma: no cover\n return\n\n log.debug(\"Closing writer thread\")\n\n self.closed = True\n self.reader.close()\n self.executor.shutdown(wait=True, cancel_futures=True)\n\n def put(self, segment):\n \"\"\"Adds a segment to the download pool and write queue.\"\"\"\n if self.closed: # pragma: no cover\n return\n\n if segment is None:\n future = None\n else:\n future = self.executor.submit(self.fetch, segment, retries=self.retries)\n\n self.queue(segment, future)\n\n def queue(self, segment: Any, future: Optional[Future], *data):\n \"\"\"Puts values into a queue but aborts if this thread is closed.\"\"\"\n while not self.closed: # pragma: no branch\n try:\n self._futures_put((segment, future, *data))\n return\n except queue.Full: # pragma: no cover\n continue\n\n def _futures_put(self, item):\n self.futures.put(item, block=True, timeout=1)\n\n def _futures_get(self):\n return self.futures.get(block=True, timeout=0.5)\n\n @staticmethod\n def _future_result(future: Future):\n return future.result(timeout=0.5)\n\n def fetch(self, segment):\n \"\"\"Fetches a segment.\n\n Should be overridden by the inheriting class.\n \"\"\"\n pass\n\n def write(self, segment, result, *data):\n \"\"\"Writes a segment to the buffer.\n\n Should be overridden by the inheriting class.\n \"\"\"\n pass\n\n def run(self):\n while not self.closed:\n try:\n segment, future, *data = self._futures_get()\n except queue.Empty: # pragma: no cover\n continue\n\n # End of stream\n if future is None:\n break\n\n while not self.closed: # pragma: no branch\n try:\n result = self._future_result(future)\n except futures.TimeoutError: # pragma: no cover\n continue\n except futures.CancelledError: # pragma: no cover\n break\n\n if result is not None: # pragma: no branch\n self.write(segment, result, *data)\n\n break\n\n self.close()\n\n\nclass SegmentedStreamReader(StreamIO):\n __worker__ = SegmentedStreamWorker\n __writer__ = SegmentedStreamWriter\n\n def __init__(self, stream, timeout=None):\n super().__init__()\n self.session = stream.session\n self.stream = stream\n self.timeout = timeout or self.session.options.get(\"stream-timeout\")\n\n buffer_size = self.session.get_option(\"ringbuffer-size\")\n self.buffer = RingBuffer(buffer_size)\n self.writer = self.__writer__(self)\n self.worker = self.__worker__(self)\n\n def open(self):\n self.writer.start()\n self.worker.start()\n\n def close(self):\n self.worker.close()\n self.writer.close()\n self.buffer.close()\n\n def read(self, size):\n return self.buffer.read(\n size,\n block=self.writer.is_alive(),\n timeout=self.timeout\n )\n", "path": 
"src/streamlink/stream/segmented.py"}]} | 3,162 | 209 |
gh_patches_debug_18668 | rasdani/github-patches | git_diff | celery__celery-7246 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Duplicate param 'uid' in CeleryDaemonCommand
# Checklist
- [ x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)
for similar or identical bug reports.
- [x ] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [ x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [ x] I have included all related issues and possible duplicate issues in this issue
(If there are none, check this box anyway).
## Related Issues and Possible Duplicates
#### Related Issues
- None
#### Possible Duplicates
- None
# Description
CeleryDaemonCommand has a duplicated CeleryOption for --uid, which leads to all docs that use it (beat, multi, worker, etc.) containing a duplicated line.
# Suggestions
Remove the duplicate in CeleryDaemonCommand in celery/celery/bin/base.py
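
For illustration, a minimal sketch of `CeleryDaemonCommand.__init__` after the suggested change (based on the listing below; the only difference is that `--uid` is registered once):

```python
# celery/bin/base.py -- sketch with the duplicate --uid registration removed
def __init__(self, *args, **kwargs):
    """Initialize a Celery command with common daemon options."""
    super().__init__(*args, **kwargs)
    self.params.append(CeleryOption(('-f', '--logfile'), help_group="Daemonization Options"))
    self.params.append(CeleryOption(('--pidfile',), help_group="Daemonization Options"))
    self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))  # kept once
    self.params.append(CeleryOption(('--gid',), help_group="Daemonization Options"))
    self.params.append(CeleryOption(('--umask',), help_group="Daemonization Options"))
    self.params.append(CeleryOption(('--executable',), help_group="Daemonization Options"))
```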
</issue>
<code>
[start of celery/bin/base.py]
1 """Click customizations for Celery."""
2 import json
3 import numbers
4 from collections import OrderedDict
5 from functools import update_wrapper
6 from pprint import pformat
7
8 import click
9 from click import ParamType
10 from kombu.utils.objects import cached_property
11
12 from celery._state import get_current_app
13 from celery.signals import user_preload_options
14 from celery.utils import text
15 from celery.utils.log import mlevel
16 from celery.utils.time import maybe_iso8601
17
18 try:
19 from pygments import highlight
20 from pygments.formatters import Terminal256Formatter
21 from pygments.lexers import PythonLexer
22 except ImportError:
23 def highlight(s, *args, **kwargs):
24 """Place holder function in case pygments is missing."""
25 return s
26 LEXER = None
27 FORMATTER = None
28 else:
29 LEXER = PythonLexer()
30 FORMATTER = Terminal256Formatter()
31
32
33 class CLIContext:
34 """Context Object for the CLI."""
35
36 def __init__(self, app, no_color, workdir, quiet=False):
37 """Initialize the CLI context."""
38 self.app = app or get_current_app()
39 self.no_color = no_color
40 self.quiet = quiet
41 self.workdir = workdir
42
43 @cached_property
44 def OK(self):
45 return self.style("OK", fg="green", bold=True)
46
47 @cached_property
48 def ERROR(self):
49 return self.style("ERROR", fg="red", bold=True)
50
51 def style(self, message=None, **kwargs):
52 if self.no_color:
53 return message
54 else:
55 return click.style(message, **kwargs)
56
57 def secho(self, message=None, **kwargs):
58 if self.no_color:
59 kwargs['color'] = False
60 click.echo(message, **kwargs)
61 else:
62 click.secho(message, **kwargs)
63
64 def echo(self, message=None, **kwargs):
65 if self.no_color:
66 kwargs['color'] = False
67 click.echo(message, **kwargs)
68 else:
69 click.echo(message, **kwargs)
70
71 def error(self, message=None, **kwargs):
72 kwargs['err'] = True
73 if self.no_color:
74 kwargs['color'] = False
75 click.echo(message, **kwargs)
76 else:
77 click.secho(message, **kwargs)
78
79 def pretty(self, n):
80 if isinstance(n, list):
81 return self.OK, self.pretty_list(n)
82 if isinstance(n, dict):
83 if 'ok' in n or 'error' in n:
84 return self.pretty_dict_ok_error(n)
85 else:
86 s = json.dumps(n, sort_keys=True, indent=4)
87 if not self.no_color:
88 s = highlight(s, LEXER, FORMATTER)
89 return self.OK, s
90 if isinstance(n, str):
91 return self.OK, n
92 return self.OK, pformat(n)
93
94 def pretty_list(self, n):
95 if not n:
96 return '- empty -'
97 return '\n'.join(
98 f'{self.style("*", fg="white")} {item}' for item in n
99 )
100
101 def pretty_dict_ok_error(self, n):
102 try:
103 return (self.OK,
104 text.indent(self.pretty(n['ok'])[1], 4))
105 except KeyError:
106 pass
107 return (self.ERROR,
108 text.indent(self.pretty(n['error'])[1], 4))
109
110 def say_chat(self, direction, title, body='', show_body=False):
111 if direction == '<-' and self.quiet:
112 return
113 dirstr = not self.quiet and f'{self.style(direction, fg="white", bold=True)} ' or ''
114 self.echo(f'{dirstr} {title}')
115 if body and show_body:
116 self.echo(body)
117
118
119 def handle_preload_options(f):
120 """Extract preload options and return a wrapped callable."""
121 def caller(ctx, *args, **kwargs):
122 app = ctx.obj.app
123
124 preload_options = [o.name for o in app.user_options.get('preload', [])]
125
126 if preload_options:
127 user_options = {
128 preload_option: kwargs[preload_option]
129 for preload_option in preload_options
130 }
131
132 user_preload_options.send(sender=f, app=app, options=user_options)
133
134 return f(ctx, *args, **kwargs)
135
136 return update_wrapper(caller, f)
137
138
139 class CeleryOption(click.Option):
140 """Customized option for Celery."""
141
142 def get_default(self, ctx, *args, **kwargs):
143 if self.default_value_from_context:
144 self.default = ctx.obj[self.default_value_from_context]
145 return super().get_default(ctx, *args, **kwargs)
146
147 def __init__(self, *args, **kwargs):
148 """Initialize a Celery option."""
149 self.help_group = kwargs.pop('help_group', None)
150 self.default_value_from_context = kwargs.pop('default_value_from_context', None)
151 super().__init__(*args, **kwargs)
152
153
154 class CeleryCommand(click.Command):
155 """Customized command for Celery."""
156
157 def format_options(self, ctx, formatter):
158 """Write all the options into the formatter if they exist."""
159 opts = OrderedDict()
160 for param in self.get_params(ctx):
161 rv = param.get_help_record(ctx)
162 if rv is not None:
163 if hasattr(param, 'help_group') and param.help_group:
164 opts.setdefault(str(param.help_group), []).append(rv)
165 else:
166 opts.setdefault('Options', []).append(rv)
167
168 for name, opts_group in opts.items():
169 with formatter.section(name):
170 formatter.write_dl(opts_group)
171
172
173 class CeleryDaemonCommand(CeleryCommand):
174 """Daemon commands."""
175
176 def __init__(self, *args, **kwargs):
177 """Initialize a Celery command with common daemon options."""
178 super().__init__(*args, **kwargs)
179 self.params.append(CeleryOption(('-f', '--logfile'), help_group="Daemonization Options"))
180 self.params.append(CeleryOption(('--pidfile',), help_group="Daemonization Options"))
181 self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
182 self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
183 self.params.append(CeleryOption(('--gid',), help_group="Daemonization Options"))
184 self.params.append(CeleryOption(('--umask',), help_group="Daemonization Options"))
185 self.params.append(CeleryOption(('--executable',), help_group="Daemonization Options"))
186
187
188 class CommaSeparatedList(ParamType):
189 """Comma separated list argument."""
190
191 name = "comma separated list"
192
193 def convert(self, value, param, ctx):
194 return text.str_to_list(value)
195
196
197 class JsonArray(ParamType):
198 """JSON formatted array argument."""
199
200 name = "json array"
201
202 def convert(self, value, param, ctx):
203 if isinstance(value, list):
204 return value
205
206 try:
207 v = json.loads(value)
208 except ValueError as e:
209 self.fail(str(e))
210
211 if not isinstance(v, list):
212 self.fail(f"{value} was not an array")
213
214 return v
215
216
217 class JsonObject(ParamType):
218 """JSON formatted object argument."""
219
220 name = "json object"
221
222 def convert(self, value, param, ctx):
223 if isinstance(value, dict):
224 return value
225
226 try:
227 v = json.loads(value)
228 except ValueError as e:
229 self.fail(str(e))
230
231 if not isinstance(v, dict):
232 self.fail(f"{value} was not an object")
233
234 return v
235
236
237 class ISO8601DateTime(ParamType):
238 """ISO 8601 Date Time argument."""
239
240 name = "iso-86091"
241
242 def convert(self, value, param, ctx):
243 try:
244 return maybe_iso8601(value)
245 except (TypeError, ValueError) as e:
246 self.fail(e)
247
248
249 class ISO8601DateTimeOrFloat(ParamType):
250 """ISO 8601 Date Time or float argument."""
251
252 name = "iso-86091 or float"
253
254 def convert(self, value, param, ctx):
255 try:
256 return float(value)
257 except (TypeError, ValueError):
258 pass
259
260 try:
261 return maybe_iso8601(value)
262 except (TypeError, ValueError) as e:
263 self.fail(e)
264
265
266 class LogLevel(click.Choice):
267 """Log level option."""
268
269 def __init__(self):
270 """Initialize the log level option with the relevant choices."""
271 super().__init__(('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'FATAL'))
272
273 def convert(self, value, param, ctx):
274 if isinstance(value, numbers.Integral):
275 return value
276
277 value = value.upper()
278 value = super().convert(value, param, ctx)
279 return mlevel(value)
280
281
282 JSON_ARRAY = JsonArray()
283 JSON_OBJECT = JsonObject()
284 ISO8601 = ISO8601DateTime()
285 ISO8601_OR_FLOAT = ISO8601DateTimeOrFloat()
286 LOG_LEVEL = LogLevel()
287 COMMA_SEPARATED_LIST = CommaSeparatedList()
288
[end of celery/bin/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/celery/bin/base.py b/celery/bin/base.py
--- a/celery/bin/base.py
+++ b/celery/bin/base.py
@@ -179,7 +179,6 @@
self.params.append(CeleryOption(('-f', '--logfile'), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--pidfile',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
- self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--gid',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--umask',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--executable',), help_group="Daemonization Options"))
| {"golden_diff": "diff --git a/celery/bin/base.py b/celery/bin/base.py\n--- a/celery/bin/base.py\n+++ b/celery/bin/base.py\n@@ -179,7 +179,6 @@\n self.params.append(CeleryOption(('-f', '--logfile'), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--pidfile',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--uid',), help_group=\"Daemonization Options\"))\n- self.params.append(CeleryOption(('--uid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--gid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--umask',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--executable',), help_group=\"Daemonization Options\"))\n", "issue": "Duplicate param 'uid' in CeleryDaemonCommand\n\r\n# Checklist\r\n\r\n- [ x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)\r\n for similar or identical bug reports.\r\n- [x ] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)\r\n for existing proposed fixes.\r\n- [ x] I have checked the [commit log](https://github.com/celery/celery/commits/master)\r\n to find out if the bug was already fixed in the master branch.\r\n- [ x] I have included all related issues and possible duplicate issues in this issue\r\n (If there are none, check this box anyway).\r\n\r\n## Related Issues and Possible Duplicates\r\n\r\n#### Related Issues\r\n\r\n- None\r\n\r\n#### Possible Duplicates\r\n\r\n- None\r\n\r\n# Description\r\n\r\nCeleryDaemonCommand have duplicated CeleryOption for --uid which leads to all docs using it to have duplicated line (beat, multi, worker, etc).\r\n\r\n# Suggestions\r\n\r\nRemove the duplicate in CeleryDaemonCommand in celery/celery/bin/base.py \r\n\n", "before_files": [{"content": "\"\"\"Click customizations for Celery.\"\"\"\nimport json\nimport numbers\nfrom collections import OrderedDict\nfrom functools import update_wrapper\nfrom pprint import pformat\n\nimport click\nfrom click import ParamType\nfrom kombu.utils.objects import cached_property\n\nfrom celery._state import get_current_app\nfrom celery.signals import user_preload_options\nfrom celery.utils import text\nfrom celery.utils.log import mlevel\nfrom celery.utils.time import maybe_iso8601\n\ntry:\n from pygments import highlight\n from pygments.formatters import Terminal256Formatter\n from pygments.lexers import PythonLexer\nexcept ImportError:\n def highlight(s, *args, **kwargs):\n \"\"\"Place holder function in case pygments is missing.\"\"\"\n return s\n LEXER = None\n FORMATTER = None\nelse:\n LEXER = PythonLexer()\n FORMATTER = Terminal256Formatter()\n\n\nclass CLIContext:\n \"\"\"Context Object for the CLI.\"\"\"\n\n def __init__(self, app, no_color, workdir, quiet=False):\n \"\"\"Initialize the CLI context.\"\"\"\n self.app = app or get_current_app()\n self.no_color = no_color\n self.quiet = quiet\n self.workdir = workdir\n\n @cached_property\n def OK(self):\n return self.style(\"OK\", fg=\"green\", bold=True)\n\n @cached_property\n def ERROR(self):\n return self.style(\"ERROR\", fg=\"red\", bold=True)\n\n def style(self, message=None, **kwargs):\n if self.no_color:\n return message\n else:\n return click.style(message, **kwargs)\n\n def secho(self, message=None, **kwargs):\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.secho(message, 
**kwargs)\n\n def echo(self, message=None, **kwargs):\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.echo(message, **kwargs)\n\n def error(self, message=None, **kwargs):\n kwargs['err'] = True\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.secho(message, **kwargs)\n\n def pretty(self, n):\n if isinstance(n, list):\n return self.OK, self.pretty_list(n)\n if isinstance(n, dict):\n if 'ok' in n or 'error' in n:\n return self.pretty_dict_ok_error(n)\n else:\n s = json.dumps(n, sort_keys=True, indent=4)\n if not self.no_color:\n s = highlight(s, LEXER, FORMATTER)\n return self.OK, s\n if isinstance(n, str):\n return self.OK, n\n return self.OK, pformat(n)\n\n def pretty_list(self, n):\n if not n:\n return '- empty -'\n return '\\n'.join(\n f'{self.style(\"*\", fg=\"white\")} {item}' for item in n\n )\n\n def pretty_dict_ok_error(self, n):\n try:\n return (self.OK,\n text.indent(self.pretty(n['ok'])[1], 4))\n except KeyError:\n pass\n return (self.ERROR,\n text.indent(self.pretty(n['error'])[1], 4))\n\n def say_chat(self, direction, title, body='', show_body=False):\n if direction == '<-' and self.quiet:\n return\n dirstr = not self.quiet and f'{self.style(direction, fg=\"white\", bold=True)} ' or ''\n self.echo(f'{dirstr} {title}')\n if body and show_body:\n self.echo(body)\n\n\ndef handle_preload_options(f):\n \"\"\"Extract preload options and return a wrapped callable.\"\"\"\n def caller(ctx, *args, **kwargs):\n app = ctx.obj.app\n\n preload_options = [o.name for o in app.user_options.get('preload', [])]\n\n if preload_options:\n user_options = {\n preload_option: kwargs[preload_option]\n for preload_option in preload_options\n }\n\n user_preload_options.send(sender=f, app=app, options=user_options)\n\n return f(ctx, *args, **kwargs)\n\n return update_wrapper(caller, f)\n\n\nclass CeleryOption(click.Option):\n \"\"\"Customized option for Celery.\"\"\"\n\n def get_default(self, ctx, *args, **kwargs):\n if self.default_value_from_context:\n self.default = ctx.obj[self.default_value_from_context]\n return super().get_default(ctx, *args, **kwargs)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize a Celery option.\"\"\"\n self.help_group = kwargs.pop('help_group', None)\n self.default_value_from_context = kwargs.pop('default_value_from_context', None)\n super().__init__(*args, **kwargs)\n\n\nclass CeleryCommand(click.Command):\n \"\"\"Customized command for Celery.\"\"\"\n\n def format_options(self, ctx, formatter):\n \"\"\"Write all the options into the formatter if they exist.\"\"\"\n opts = OrderedDict()\n for param in self.get_params(ctx):\n rv = param.get_help_record(ctx)\n if rv is not None:\n if hasattr(param, 'help_group') and param.help_group:\n opts.setdefault(str(param.help_group), []).append(rv)\n else:\n opts.setdefault('Options', []).append(rv)\n\n for name, opts_group in opts.items():\n with formatter.section(name):\n formatter.write_dl(opts_group)\n\n\nclass CeleryDaemonCommand(CeleryCommand):\n \"\"\"Daemon commands.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize a Celery command with common daemon options.\"\"\"\n super().__init__(*args, **kwargs)\n self.params.append(CeleryOption(('-f', '--logfile'), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--pidfile',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--uid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--uid',), 
help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--gid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--umask',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--executable',), help_group=\"Daemonization Options\"))\n\n\nclass CommaSeparatedList(ParamType):\n \"\"\"Comma separated list argument.\"\"\"\n\n name = \"comma separated list\"\n\n def convert(self, value, param, ctx):\n return text.str_to_list(value)\n\n\nclass JsonArray(ParamType):\n \"\"\"JSON formatted array argument.\"\"\"\n\n name = \"json array\"\n\n def convert(self, value, param, ctx):\n if isinstance(value, list):\n return value\n\n try:\n v = json.loads(value)\n except ValueError as e:\n self.fail(str(e))\n\n if not isinstance(v, list):\n self.fail(f\"{value} was not an array\")\n\n return v\n\n\nclass JsonObject(ParamType):\n \"\"\"JSON formatted object argument.\"\"\"\n\n name = \"json object\"\n\n def convert(self, value, param, ctx):\n if isinstance(value, dict):\n return value\n\n try:\n v = json.loads(value)\n except ValueError as e:\n self.fail(str(e))\n\n if not isinstance(v, dict):\n self.fail(f\"{value} was not an object\")\n\n return v\n\n\nclass ISO8601DateTime(ParamType):\n \"\"\"ISO 8601 Date Time argument.\"\"\"\n\n name = \"iso-86091\"\n\n def convert(self, value, param, ctx):\n try:\n return maybe_iso8601(value)\n except (TypeError, ValueError) as e:\n self.fail(e)\n\n\nclass ISO8601DateTimeOrFloat(ParamType):\n \"\"\"ISO 8601 Date Time or float argument.\"\"\"\n\n name = \"iso-86091 or float\"\n\n def convert(self, value, param, ctx):\n try:\n return float(value)\n except (TypeError, ValueError):\n pass\n\n try:\n return maybe_iso8601(value)\n except (TypeError, ValueError) as e:\n self.fail(e)\n\n\nclass LogLevel(click.Choice):\n \"\"\"Log level option.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the log level option with the relevant choices.\"\"\"\n super().__init__(('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'FATAL'))\n\n def convert(self, value, param, ctx):\n if isinstance(value, numbers.Integral):\n return value\n\n value = value.upper()\n value = super().convert(value, param, ctx)\n return mlevel(value)\n\n\nJSON_ARRAY = JsonArray()\nJSON_OBJECT = JsonObject()\nISO8601 = ISO8601DateTime()\nISO8601_OR_FLOAT = ISO8601DateTimeOrFloat()\nLOG_LEVEL = LogLevel()\nCOMMA_SEPARATED_LIST = CommaSeparatedList()\n", "path": "celery/bin/base.py"}]} | 3,563 | 195 |
gh_patches_debug_6113 | rasdani/github-patches | git_diff | hi-primus__optimus-782 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix simple typo: ouput -> output
# Issue Type
[x] Bug (Typo)
# Steps to Replicate
1. Examine optimus/ml/encoding.py.
2. Search for ouput.
# Expected Behaviour
1. Should read output.
</issue>
<code>
[start of optimus/ml/encoding.py]
1 from pyspark.ml import feature, Pipeline
2 from pyspark.ml.feature import StringIndexer, IndexToString, OneHotEncoder, VectorAssembler, Normalizer
3
4 from optimus.helpers.check import is_dataframe, is_, is_str
5 from optimus.helpers.columns import parse_columns, name_col, get_output_cols
6 from optimus.helpers.constants import Actions
7 from optimus.helpers.raiseit import RaiseIt
8
9
10 def n_gram(df, input_col, n=2):
11 """
12 Converts the input array of strings inside of a Spark DF into an array of n-grams.
13 :param df: Pyspark dataframe to analyze
14 :param input_col: Column to analyzer.
15 :param n: number of elements per n-gram >=1.
16 :return: Spark DataFrame with n-grams calculated.
17 """
18
19 is_dataframe(df)
20
21 tokenizer = feature.Tokenizer().setInputCol(input_col) | feature.StopWordsRemover()
22 count = feature.CountVectorizer()
23 gram = feature.NGram(n=n) | feature.CountVectorizer()
24 tf = tokenizer | (count, gram) | feature.VectorAssembler()
25 tfidf = tf | feature.IDF().setOutputCol('features')
26
27 tfidf_model = tfidf.fit(df)
28 df_model = tfidf_model.transform(df)
29 return df_model, tfidf_model
30
31
32 def string_to_index(df, input_cols, output_cols=None, columns=None, **kargs):
33 """
34 Maps a string column of labels to an ML column of label indices. If the input column is
35 numeric, we cast it to string and index the string values.
36 :param df: Dataframe to be transformed
37 :param input_cols: Columns to be indexed.
38 :param output_cols:Column where the ouput is going to be saved
39 :return: Dataframe with indexed columns.
40 """
41 df_actual = df
42
43 if columns is None:
44 input_cols = parse_columns(df, input_cols)
45 if output_cols is None:
46 output_cols = [name_col(input_col, "index_to_string") for input_col in input_cols]
47 output_cols = get_output_cols(input_cols, output_cols)
48 else:
49 input_cols, output_cols = zip(*columns)
50
51 indexers = [StringIndexer(inputCol=input_col, outputCol=output_col, **kargs).fit(df) for input_col, output_col
52 in zip(list(set(input_cols)), list(set(output_cols)))]
53
54 pipeline = Pipeline(stages=indexers)
55 df = pipeline.fit(df).transform(df)
56
57 df = df.preserve_meta(df_actual, Actions.STRING_TO_INDEX.value, output_cols)
58
59 return df
60
61
62 def index_to_string(df, input_cols, output_col=None, **kargs):
63 """
64 Maps a column of indices back to a new column of corresponding string values. The index-string mapping is
65 either from the ML attributes of the input column, or from user-supplied labels (which take precedence over
66 ML attributes).
67 :param df: Dataframe to be transformed.
68 :param input_cols: Columns to be indexed.
69 :param output_col: Column where the output is going to be saved.
70 :return: Dataframe with indexed columns.
71 """
72
73 input_cols = parse_columns(df, input_cols)
74 if output_col is None:
75 output_col = name_col(input_cols, "index_to_string")
76
77 indexers = [IndexToString(inputCol=column, outputCol=output_col, **kargs) for column in
78 list(set(input_cols))]
79
80 pipeline = Pipeline(stages=indexers)
81 df = pipeline.fit(df).transform(df)
82
83 return df
84
85
86 def one_hot_encoder(df, input_cols, output_col=None, **kargs):
87 """
88 Maps a column of label indices to a column of binary vectors, with at most a single one-value.
89 :param df: Dataframe to be transformed.
90 :param input_cols: Columns to be encoded.
91 :param output_col: Column where the output is going to be saved.
92 :return: Dataframe with encoded columns.
93 """
94
95 input_cols = parse_columns(df, input_cols)
96
97 if output_col is None:
98 output_col = name_col(input_cols, "one_hot_encoder")
99
100 encode = [OneHotEncoder(inputCol=column, outputCol=output_col, **kargs) for column in
101 list(set(input_cols))]
102
103 pipeline = Pipeline(stages=encode)
104 df = pipeline.fit(df).transform(df)
105
106 return df
107
108
109 # TODO: Must we use the pipeline version?
110 def vector_assembler(df, input_cols, output_col=None):
111 """
112 Combines a given list of columns into a single vector column.
113 :param df: Dataframe to be transformed.
114 :param input_cols: Columns to be assembled.
115 :param output_col: Column where the output is going to be saved.
116 :return: Dataframe with assembled column.
117 """
118
119 input_cols = parse_columns(df, input_cols)
120
121 if output_col is None:
122 output_col = name_col(input_cols, "vector_assembler")
123
124 assembler = [VectorAssembler(inputCols=input_cols, outputCol=output_col)]
125
126 pipeline = Pipeline(stages=assembler)
127 df = pipeline.fit(df).transform(df)
128
129 return df
130
131
132 def normalizer(df, input_cols, output_col=None, p=2.0):
133 """
134 Transforms a dataset of Vector rows, normalizing each Vector to have unit norm. It takes parameter p, which
135 specifies the p-norm used for normalization. (p=2) by default.
136 :param df: Dataframe to be transformed
137 :param input_cols: Columns to be normalized.
138 :param output_col: Column where the output is going to be saved.
139 :param p: p-norm used for normalization.
140 :return: Dataframe with normalized columns.
141 """
142
143 # Check if columns argument must be a string or list datat ype:
144 if not is_(input_cols, (str, list)):
145 RaiseIt.type_error(input_cols, ["str", "list"])
146
147 if is_str(input_cols):
148 input_cols = [input_cols]
149
150 if is_(input_cols, (float, int)):
151 RaiseIt.type_error(input_cols, ["float", "int"])
152
153 # Try to create a vector
154 if len(input_cols) > 1:
155 df = df.cols.cast(input_cols, "vector")
156
157 if output_col is None:
158 output_col = name_col(input_cols, "normalizer")
159
160 # TODO https://developer.ibm.com/code/2018/04/10/improve-performance-ml-pipelines-wide-dataframes-apache-spark-2-3/
161 normal = [Normalizer(inputCol=col_name, outputCol=output_col, p=p) for col_name in
162 list(set(input_cols))]
163
164 pipeline = Pipeline(stages=normal)
165
166 df = pipeline.fit(df).transform(df)
167
168 return df
169
[end of optimus/ml/encoding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/optimus/ml/encoding.py b/optimus/ml/encoding.py
--- a/optimus/ml/encoding.py
+++ b/optimus/ml/encoding.py
@@ -35,7 +35,7 @@
numeric, we cast it to string and index the string values.
:param df: Dataframe to be transformed
:param input_cols: Columns to be indexed.
- :param output_cols:Column where the ouput is going to be saved
+ :param output_cols:Column where the output is going to be saved
:return: Dataframe with indexed columns.
"""
df_actual = df
| {"golden_diff": "diff --git a/optimus/ml/encoding.py b/optimus/ml/encoding.py\n--- a/optimus/ml/encoding.py\n+++ b/optimus/ml/encoding.py\n@@ -35,7 +35,7 @@\n numeric, we cast it to string and index the string values.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be indexed.\n- :param output_cols:Column where the ouput is going to be saved\n+ :param output_cols:Column where the output is going to be saved\n :return: Dataframe with indexed columns.\n \"\"\"\n df_actual = df\n", "issue": "Fix simple typo: ouput -> output\n# Issue Type\n\n[x] Bug (Typo)\n\n# Steps to Replicate\n\n1. Examine optimus/ml/encoding.py.\n2. Search for ouput.\n\n# Expected Behaviour\n\n1. Should read output.\n\n\n", "before_files": [{"content": "from pyspark.ml import feature, Pipeline\nfrom pyspark.ml.feature import StringIndexer, IndexToString, OneHotEncoder, VectorAssembler, Normalizer\n\nfrom optimus.helpers.check import is_dataframe, is_, is_str\nfrom optimus.helpers.columns import parse_columns, name_col, get_output_cols\nfrom optimus.helpers.constants import Actions\nfrom optimus.helpers.raiseit import RaiseIt\n\n\ndef n_gram(df, input_col, n=2):\n \"\"\"\n Converts the input array of strings inside of a Spark DF into an array of n-grams.\n :param df: Pyspark dataframe to analyze\n :param input_col: Column to analyzer.\n :param n: number of elements per n-gram >=1.\n :return: Spark DataFrame with n-grams calculated.\n \"\"\"\n\n is_dataframe(df)\n\n tokenizer = feature.Tokenizer().setInputCol(input_col) | feature.StopWordsRemover()\n count = feature.CountVectorizer()\n gram = feature.NGram(n=n) | feature.CountVectorizer()\n tf = tokenizer | (count, gram) | feature.VectorAssembler()\n tfidf = tf | feature.IDF().setOutputCol('features')\n\n tfidf_model = tfidf.fit(df)\n df_model = tfidf_model.transform(df)\n return df_model, tfidf_model\n\n\ndef string_to_index(df, input_cols, output_cols=None, columns=None, **kargs):\n \"\"\"\n Maps a string column of labels to an ML column of label indices. If the input column is\n numeric, we cast it to string and index the string values.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be indexed.\n :param output_cols:Column where the ouput is going to be saved\n :return: Dataframe with indexed columns.\n \"\"\"\n df_actual = df\n\n if columns is None:\n input_cols = parse_columns(df, input_cols)\n if output_cols is None:\n output_cols = [name_col(input_col, \"index_to_string\") for input_col in input_cols]\n output_cols = get_output_cols(input_cols, output_cols)\n else:\n input_cols, output_cols = zip(*columns)\n\n indexers = [StringIndexer(inputCol=input_col, outputCol=output_col, **kargs).fit(df) for input_col, output_col\n in zip(list(set(input_cols)), list(set(output_cols)))]\n\n pipeline = Pipeline(stages=indexers)\n df = pipeline.fit(df).transform(df)\n\n df = df.preserve_meta(df_actual, Actions.STRING_TO_INDEX.value, output_cols)\n\n return df\n\n\ndef index_to_string(df, input_cols, output_col=None, **kargs):\n \"\"\"\n Maps a column of indices back to a new column of corresponding string values. 
The index-string mapping is\n either from the ML attributes of the input column, or from user-supplied labels (which take precedence over\n ML attributes).\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be indexed.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with indexed columns.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n if output_col is None:\n output_col = name_col(input_cols, \"index_to_string\")\n\n indexers = [IndexToString(inputCol=column, outputCol=output_col, **kargs) for column in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=indexers)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\ndef one_hot_encoder(df, input_cols, output_col=None, **kargs):\n \"\"\"\n Maps a column of label indices to a column of binary vectors, with at most a single one-value.\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be encoded.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with encoded columns.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n\n if output_col is None:\n output_col = name_col(input_cols, \"one_hot_encoder\")\n\n encode = [OneHotEncoder(inputCol=column, outputCol=output_col, **kargs) for column in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=encode)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\n# TODO: Must we use the pipeline version?\ndef vector_assembler(df, input_cols, output_col=None):\n \"\"\"\n Combines a given list of columns into a single vector column.\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be assembled.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with assembled column.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n\n if output_col is None:\n output_col = name_col(input_cols, \"vector_assembler\")\n\n assembler = [VectorAssembler(inputCols=input_cols, outputCol=output_col)]\n\n pipeline = Pipeline(stages=assembler)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\ndef normalizer(df, input_cols, output_col=None, p=2.0):\n \"\"\"\n Transforms a dataset of Vector rows, normalizing each Vector to have unit norm. It takes parameter p, which\n specifies the p-norm used for normalization. (p=2) by default.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be normalized.\n :param output_col: Column where the output is going to be saved.\n :param p: p-norm used for normalization.\n :return: Dataframe with normalized columns.\n \"\"\"\n\n # Check if columns argument must be a string or list datat ype:\n if not is_(input_cols, (str, list)):\n RaiseIt.type_error(input_cols, [\"str\", \"list\"])\n\n if is_str(input_cols):\n input_cols = [input_cols]\n\n if is_(input_cols, (float, int)):\n RaiseIt.type_error(input_cols, [\"float\", \"int\"])\n\n # Try to create a vector\n if len(input_cols) > 1:\n df = df.cols.cast(input_cols, \"vector\")\n\n if output_col is None:\n output_col = name_col(input_cols, \"normalizer\")\n\n # TODO https://developer.ibm.com/code/2018/04/10/improve-performance-ml-pipelines-wide-dataframes-apache-spark-2-3/\n normal = [Normalizer(inputCol=col_name, outputCol=output_col, p=p) for col_name in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=normal)\n\n df = pipeline.fit(df).transform(df)\n\n return df\n", "path": "optimus/ml/encoding.py"}]} | 2,477 | 141 |
gh_patches_debug_8649 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-390 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GaNDLF is not running on macOS
**Describe the bug**
Currently, we are requiring `torch==1.8.2`:
https://github.com/CBICA/GaNDLF/blob/e8f922266ec7af1c3fac36439290d22a5e63866d/setup.py#L56
which is not supported by PyTorch on macOS [[ref](https://pytorch.org/get-started/locally/)].
**To Reproduce**
Steps to reproduce the behavior: https://cbica.github.io/GaNDLF/setup
**Expected behavior**
The only reason for us to drop support of an OS should be if something major is breaking.
**Screenshots**
N.A.
**GaNDLF Version**
<!-- Put the output of the following command:
python -c 'import GANDLF as g;print(g.__version__)'
-->
0.0.14-dev
**Desktop (please complete the following information):**
- OS: macOS
- Version: N.A.
**Additional context**
Reported by @Sofia-Mouchtaris
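
For illustration, one possible shape of the fix is to make the `torch` pin conditional on the platform in `setup.py`. This is a sketch only; the exact macOS-compatible version (e.g. `torch==1.9.0`) is an assumption and would need to be confirmed against PyTorch's support matrix:

```python
# setup.py -- sketch: platform-conditional torch requirement (1.9.0 on macOS is assumed)
import sys

if sys.platform == "darwin":
    requirements.append("torch==1.9.0")  # the LTS pin (1.8.2) has no published macOS wheels
else:
    requirements.append("torch==1.8.2")
```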
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 with open("README.md") as readme_file:
13 readme = readme_file.read()
14
15
16 def git_submodule_update():
17 ## submodule update
18 os.system("git submodule update --init --recursive")
19
20
21 class CustomInstallCommand(install):
22 def run(self):
23 install.run(self)
24 git_submodule_update()
25
26
27 class CustomDevelopCommand(develop):
28 def run(self):
29 develop.run(self)
30 git_submodule_update()
31
32
33 class CustomEggInfoCommand(egg_info):
34 def run(self):
35 egg_info.run(self)
36 git_submodule_update()
37
38
39 # read version.py
40 import sys, re
41
42 try:
43 filepath = "GANDLF/version.py"
44 version_file = open(filepath)
45 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
46
47 except Exception as error:
48 __version__ = "0.0.1"
49 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
50
51 requirements = [
52 "black",
53 "numpy==1.21.0",
54 "scipy",
55 "SimpleITK!=2.0.*",
56 "torch==1.8.2",
57 "torchvision",
58 "tqdm",
59 "torchio==0.18.57",
60 "pandas",
61 "pylint",
62 "scikit-learn>=0.23.2",
63 "pickle5>=0.0.11",
64 "setuptools",
65 "seaborn",
66 "pyyaml",
67 "tiffslide",
68 "scikit-image",
69 "matplotlib",
70 "requests>=2.25.0",
71 "pyvips",
72 "pytest",
73 "coverage",
74 "pytest-cov",
75 "psutil",
76 "medcam",
77 "opencv-python",
78 "torchmetrics",
79 "OpenPatchMiner==0.1.6",
80 "zarr==2.10.3",
81 "pydicom",
82 "onnx",
83 ]
84
85 setup(
86 name="GANDLF",
87 version=__version__,
88 author="Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun Güley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos", # alphabetical order
89 author_email="[email protected]",
90 python_requires=">=3.7",
91 packages=find_packages(),
92 cmdclass={ # this ensures git_submodule_update is called during install
93 "install": CustomInstallCommand,
94 "develop": CustomDevelopCommand,
95 "egg_info": CustomEggInfoCommand,
96 },
97 scripts=[
98 "gandlf_run",
99 "gandlf_constructCSV",
100 "gandlf_collectStats",
101 "gandlf_patchMiner",
102 "gandlf_preprocess",
103 "gandlf_anonymizer",
104 "gandlf_verifyInstall",
105 ],
106 classifiers=[
107 "Development Status :: 3 - Alpha",
108 "Intended Audience :: Science/Research",
109 "License :: OSI Approved :: BSD License",
110 "Natural Language :: English",
111 "Operating System :: OS Independent",
112 "Programming Language :: Python :: 3.7",
113 "Programming Language :: Python :: 3.8",
114 "Programming Language :: Python :: 3.9",
115 "Topic :: Scientific/Engineering :: Medical Science Apps",
116 ],
117 description=(
118 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
119 ),
120 install_requires=requirements,
121 license="BSD-3-Clause License",
122 long_description=readme,
123 long_description_content_type="text/markdown",
124 include_package_data=True,
125 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging",
126 zip_safe=False,
127 )
128
129 ## windows vips installation
130 if os.name == "nt": # proceed for windows
131 from pathlib import Path
132
133 # download and extract if main dll is absent
134 if not Path("./vips/vips-dev-8.10/bin/libvips-42.dll").exists():
135 print("Downloading and extracting VIPS for Windows")
136 url = "https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip"
137 zip_to_extract = "./vips.zip"
138 import urllib.request, zipfile
139
140 urllib.request.urlretrieve(url, zip_to_extract)
141 z = zipfile.ZipFile(zip_to_extract)
142 z.extractall("./vips")
143 z.close()
144 os.remove(zip_to_extract)
145
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,7 +53,6 @@
"numpy==1.21.0",
"scipy",
"SimpleITK!=2.0.*",
- "torch==1.8.2",
"torchvision",
"tqdm",
"torchio==0.18.57",
@@ -82,6 +81,12 @@
"onnx",
]
+# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389
+if sys.platform == "darwin":
+ requirements.append("torch==1.9.0")
+else:
+ requirements.append("torch==1.8.2")
+
setup(
name="GANDLF",
version=__version__,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,7 +53,6 @@\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n- \"torch==1.8.2\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n@@ -82,6 +81,12 @@\n \"onnx\",\n ]\n \n+# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389\n+if sys.platform == \"darwin\":\n+ requirements.append(\"torch==1.9.0\")\n+else:\n+ requirements.append(\"torch==1.8.2\")\n+\n setup(\n name=\"GANDLF\",\n version=__version__,\n", "issue": "GaNDLF is not running on macOS\n**Describe the bug**\r\nCurrently, we are requiring `torch==1.8.2`:\r\nhttps://github.com/CBICA/GaNDLF/blob/e8f922266ec7af1c3fac36439290d22a5e63866d/setup.py#L56\r\nWhich is not supported by PyTorch on macOS[[ref](https://pytorch.org/get-started/locally/)].\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior: https://cbica.github.io/GaNDLF/setup\r\n\r\n**Expected behavior**\r\nThe only reason for us to drop support of an OS should be if something major is breaking.\r\n\r\n**Screenshots**\r\nN.A.\r\n\r\n**GaNDLF Version**\r\n<!-- Put the output of the following command:\r\npython -c 'import GANDLF as g;print(g.__version__)'\r\n-->\r\n0.0.14-dev\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS\r\n - Version: N.A.\r\n\r\n**Additional context**\r\nReported by @Sofia-Mouchtaris \r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"torch==1.8.2\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n \"pandas\",\n \"pylint\",\n \"scikit-learn>=0.23.2\",\n \"pickle5>=0.0.11\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"scikit-image\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pyvips\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics\",\n \"OpenPatchMiner==0.1.6\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n]\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun G\u00fcley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos\", # alphabetical order\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n 
packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"BSD-3-Clause License\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging\",\n zip_safe=False,\n)\n\n## windows vips installation\nif os.name == \"nt\": # proceed for windows\n from pathlib import Path\n\n # download and extract if main dll is absent\n if not Path(\"./vips/vips-dev-8.10/bin/libvips-42.dll\").exists():\n print(\"Downloading and extracting VIPS for Windows\")\n url = \"https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip\"\n zip_to_extract = \"./vips.zip\"\n import urllib.request, zipfile\n\n urllib.request.urlretrieve(url, zip_to_extract)\n z = zipfile.ZipFile(zip_to_extract)\n z.extractall(\"./vips\")\n z.close()\n os.remove(zip_to_extract)\n", "path": "setup.py"}]} | 2,200 | 194 |
gh_patches_debug_38715 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-75 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'
When trying to upgrade to 1.1.8, I ran into this exception:
```
File "celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "myapp/celery/worker.py", line 49, in __call__
return super().__call__(*args, **kwargs)
File "celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "myapp/celery/tasks.py", line 129, in sync_backend_wagers
retry = sync.race_wagers_sync(race_id, backend)
File "myapp/libs/sync.py", line 52, in race_wagers_sync
race = ents.Race.query.get(race_id)
File "sqlalchemy/orm/query.py", line 871, in get
ident, loading.load_on_ident)
File "sqlalchemy/orm/query.py", line 905, in _get_impl
return fallback_fn(self, key)
File "sqlalchemy/orm/loading.py", line 231, in load_on_ident
return q.one()
File "sqlalchemy/orm/query.py", line 2837, in one
ret = self.one_or_none()
File "sqlalchemy/orm/query.py", line 2807, in one_or_none
ret = list(self)
File "sqlalchemy/orm/query.py", line 2878, in __iter__
return self._execute_and_instances(context)
File "sqlalchemy/orm/query.py", line 2901, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "sqlalchemy/engine/base.py", line 1207, in _execute_context
context.executemany)
File "sqlalchemy/event/attr.py", line 256, in __call__
fn(*args, **kw)
File "scout_apm/sqlalchemy/__init__.py", line 16, in after_cursor_execute
if tr.callset.should_capture_bracktrace(statement) is True:
AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'
```
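
The traceback points at what looks like a one-character method-name typo in the SQLAlchemy instrumentation. A minimal sketch of the corrected handler (assuming the callset method is actually named `should_capture_backtrace`):

```python
# src/scout_apm/sqlalchemy/__init__.py -- sketch of after_cursor_execute with the corrected name
def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):
    tr = TrackedRequest.instance()
    span = tr.current_span()
    if span is not None:
        tr.callset.update(statement, 1, span.duration())
        if tr.callset.should_capture_backtrace(statement) is True:  # was: should_capture_bracktrace
            span.capture_backtrace()
    tr.stop_span()
```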
</issue>
<code>
[start of src/scout_apm/sqlalchemy/__init__.py]
1 from scout_apm.core.tracked_request import TrackedRequest
2
3 from sqlalchemy import event
4
5 def instrument_sqlalchemy(engine):
6 def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
7 tr = TrackedRequest.instance()
8 span = tr.start_span(operation='SQL/Query')
9 span.tag('db.statement', statement)
10
11 def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):
12 tr = TrackedRequest.instance()
13 span = tr.current_span()
14 if span is not None:
15 tr.callset.update(statement, 1, span.duration())
16 if tr.callset.should_capture_bracktrace(statement) is True:
17 span.capture_backtrace()
18 tr.stop_span()
19
20 if getattr(engine, "_scout_instrumented", False) != True:
21 event.listen(engine, 'before_cursor_execute', before_cursor_execute)
22 event.listen(engine, 'after_cursor_execute', after_cursor_execute)
23 setattr(engine, "_scout_instrumented", True)
24
[end of src/scout_apm/sqlalchemy/__init__.py]
[start of setup.py]
1 from glob import glob
2 from os.path import basename, splitext
3
4 from setuptools import find_packages, setup
5
6 setup(name='scout_apm',
7 version='1.1.8',
8 description='Scout Application Performance Monitoring Agent',
9 long_description='Scout Application Performance Monitoring Agent',
10 url='https://github.com/scoutapp/scout_apm_python',
11 author='Scout',
12 author_email='[email protected]',
13 license='MIT',
14 zip_safe=False,
15 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',
16 packages=find_packages('src'),
17 package_dir={'': 'src'},
18 py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],
19 entry_points={
20 'console_scripts': [
21 'core-agent-manager = scout_apm.core.cli.core_agent_manager:main'
22 ]
23 },
24 install_requires=['psutil', 'PyYAML', 'requests'],
25 keywords='apm performance monitoring development',
26 classifiers=[
27 'Development Status :: 3 - Alpha',
28 'Intended Audience :: Developers',
29 'Topic :: System :: Monitoring',
30 'License :: Other/Proprietary License',
31 'Operating System :: MacOS',
32 'Operating System :: POSIX',
33 'Operating System :: POSIX :: Linux',
34 'Programming Language :: Python :: 2',
35 'Programming Language :: Python :: 2.7',
36 'Programming Language :: Python :: 3',
37 'Programming Language :: Python :: 3.4',
38 'Programming Language :: Python :: 3.5',
39 'Programming Language :: Python :: 3.6',
40 'Programming Language :: Python :: 3.7',
41 ])
42
[end of setup.py]
[start of src/scout_apm/core/tracked_request.py]
1 from __future__ import absolute_import
2
3 import logging
4 from datetime import datetime
5 from uuid import uuid4
6
7 from scout_apm.core.samplers import Samplers
8 from scout_apm.core.request_manager import RequestManager
9 from scout_apm.core.thread_local import ThreadLocalSingleton
10 from scout_apm.core.n_plus_one_call_set import NPlusOneCallSet
11 import scout_apm.core.backtrace
12
13 # Logging
14 logger = logging.getLogger(__name__)
15
16
17 class TrackedRequest(ThreadLocalSingleton):
18 """
19 This is a container which keeps track of all module instances for a single
20 request. For convenience they are made available as attributes based on
21 their keyname
22 """
23 def __init__(self, *args, **kwargs):
24 self.req_id = 'req-' + str(uuid4())
25 self.start_time = kwargs.get('start_time', datetime.utcnow())
26 self.end_time = kwargs.get('end_time', None)
27 self.active_spans = kwargs.get('active_spans', [])
28 self.complete_spans = kwargs.get('complete_spans', [])
29 self.tags = kwargs.get('tags', {})
30 self.real_request = kwargs.get('real_request', False)
31 self.callset = NPlusOneCallSet()
32 logger.debug('Starting request: %s', self.req_id)
33
34 def mark_real_request(self):
35 self.real_request = True
36
37 def is_real_request(self):
38 return self.real_request
39
40 def tag(self, key, value):
41 if key in self.tags:
42 logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)
43 self.tags[key] = value
44
45 def start_span(self, operation=None):
46 maybe_parent = self.current_span()
47
48 if maybe_parent is not None:
49 parent_id = maybe_parent.span_id
50 else:
51 parent_id = None
52
53 new_span = Span(
54 request_id=self.req_id,
55 operation=operation,
56 parent=parent_id)
57 self.active_spans.append(new_span)
58 return new_span
59
60 def stop_span(self):
61 stopping_span = None
62 try:
63 stopping_span = self.active_spans.pop()
64 except IndexError as e:
65 logger.debug('Exception when stopping span: %s' % repr(e))
66
67 if stopping_span is not None:
68 stopping_span.stop()
69 stopping_span.annotate()
70 self.complete_spans.append(stopping_span)
71
72 if len(self.active_spans) == 0:
73 self.finish()
74
75 def current_span(self):
76 if len(self.active_spans) > 0:
77 return self.active_spans[-1]
78 else:
79 return None
80
81 # Request is done, release any info we have about it.
82 def finish(self):
83 logger.debug('Stopping request: %s', self.req_id)
84 if self.end_time is None:
85 self.end_time = datetime.utcnow()
86 RequestManager.instance().add_request(self)
87 if self.is_real_request():
88 Samplers.ensure_running()
89
90 # This can fail if the Tracked Request was created directly, not through instance()
91 try:
92 self.release()
93 except:
94 pass
95
96
97 class Span:
98 def __init__(self, *args, **kwargs):
99 self.span_id = kwargs.get('span_id', 'span-' + str(uuid4()))
100 self.start_time = kwargs.get('start_time', datetime.utcnow())
101 self.end_time = kwargs.get('end_time', None)
102 self.request_id = kwargs.get('request_id', None)
103 self.operation = kwargs.get('operation', None)
104 self.parent = kwargs.get('parent', None)
105 self.tags = kwargs.get('tags', {})
106
107 def stop(self):
108 self.end_time = datetime.utcnow()
109
110 def tag(self, key, value):
111 if key in self.tags:
112 logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)
113 self.tags[key] = value
114
115 # In seconds
116 def duration(self):
117 if self.end_time is not None:
118 return (self.end_time - self.start_time).total_seconds()
119 else:
120 # Current, running duration
121 return (datetime.utcnow() - self.start_time).total_seconds()
122
123 def duration_in_ms(self):
124 return self.duration() / 1000
125
126 # Add any interesting annotations to the span. Assumes that we are in the
127 # process of stopping this span.
128 def annotate(self):
129 slow_threshold = 0.500
130 if self.duration() > slow_threshold:
131 self.capture_backtrace()
132
133 def capture_backtrace(self):
134 stack = scout_apm.core.backtrace.capture()
135 self.tag('stack', stack)
136
[end of src/scout_apm/core/tracked_request.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages, setup
setup(name='scout_apm',
- version='1.1.8',
+ version='1.1.9',
description='Scout Application Performance Monitoring Agent',
long_description='Scout Application Performance Monitoring Agent',
url='https://github.com/scoutapp/scout_apm_python',
@@ -12,7 +12,7 @@
author_email='[email protected]',
license='MIT',
zip_safe=False,
- python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',
+ python_requires='>=3.4, <4',
packages=find_packages('src'),
package_dir={'': 'src'},
py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],
@@ -31,8 +31,6 @@
'Operating System :: MacOS',
'Operating System :: POSIX',
'Operating System :: POSIX :: Linux',
- 'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
diff --git a/src/scout_apm/core/tracked_request.py b/src/scout_apm/core/tracked_request.py
--- a/src/scout_apm/core/tracked_request.py
+++ b/src/scout_apm/core/tracked_request.py
@@ -39,7 +39,7 @@
def tag(self, key, value):
if key in self.tags:
- logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)
+ logger.debug('Overwriting previously set tag for request %s: %s' % (self.req_id, key))
self.tags[key] = value
def start_span(self, operation=None):
@@ -109,7 +109,7 @@
def tag(self, key, value):
if key in self.tags:
- logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)
+ logger.debug('Overwriting previously set tag for span %s: %s' % (self.span_id, key))
self.tags[key] = value
# In seconds
diff --git a/src/scout_apm/sqlalchemy/__init__.py b/src/scout_apm/sqlalchemy/__init__.py
--- a/src/scout_apm/sqlalchemy/__init__.py
+++ b/src/scout_apm/sqlalchemy/__init__.py
@@ -13,7 +13,7 @@
span = tr.current_span()
if span is not None:
tr.callset.update(statement, 1, span.duration())
- if tr.callset.should_capture_bracktrace(statement) is True:
+ if tr.callset.should_capture_backtrace(statement) is True:
span.capture_backtrace()
tr.stop_span()
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,7 +4,7 @@\n from setuptools import find_packages, setup\n \n setup(name='scout_apm',\n- version='1.1.8',\n+ version='1.1.9',\n description='Scout Application Performance Monitoring Agent',\n long_description='Scout Application Performance Monitoring Agent',\n url='https://github.com/scoutapp/scout_apm_python',\n@@ -12,7 +12,7 @@\n author_email='[email protected]',\n license='MIT',\n zip_safe=False,\n- python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',\n+ python_requires='>=3.4, <4',\n packages=find_packages('src'),\n package_dir={'': 'src'},\n py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],\n@@ -31,8 +31,6 @@\n 'Operating System :: MacOS',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: Linux',\n- 'Programming Language :: Python :: 2',\n- 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\ndiff --git a/src/scout_apm/core/tracked_request.py b/src/scout_apm/core/tracked_request.py\n--- a/src/scout_apm/core/tracked_request.py\n+++ b/src/scout_apm/core/tracked_request.py\n@@ -39,7 +39,7 @@\n \n def tag(self, key, value):\n if key in self.tags:\n- logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)\n+ logger.debug('Overwriting previously set tag for request %s: %s' % (self.req_id, key))\n self.tags[key] = value\n \n def start_span(self, operation=None):\n@@ -109,7 +109,7 @@\n \n def tag(self, key, value):\n if key in self.tags:\n- logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)\n+ logger.debug('Overwriting previously set tag for span %s: %s' % (self.span_id, key))\n self.tags[key] = value\n \n # In seconds\ndiff --git a/src/scout_apm/sqlalchemy/__init__.py b/src/scout_apm/sqlalchemy/__init__.py\n--- a/src/scout_apm/sqlalchemy/__init__.py\n+++ b/src/scout_apm/sqlalchemy/__init__.py\n@@ -13,7 +13,7 @@\n span = tr.current_span()\n if span is not None:\n tr.callset.update(statement, 1, span.duration())\n- if tr.callset.should_capture_bracktrace(statement) is True:\n+ if tr.callset.should_capture_backtrace(statement) is True:\n span.capture_backtrace()\n tr.stop_span()\n", "issue": "AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'\nWhen trying to upgrade to 1.1.8, I ran into this exception:\r\n\r\n```\r\n File \"celery/app/trace.py\", line 374, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"myapp/celery/worker.py\", line 49, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"celery/app/trace.py\", line 629, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n File \"myapp/celery/tasks.py\", line 129, in sync_backend_wagers\r\n retry = sync.race_wagers_sync(race_id, backend)\r\n File \"myapp/libs/sync.py\", line 52, in race_wagers_sync\r\n race = ents.Race.query.get(race_id)\r\n File \"sqlalchemy/orm/query.py\", line 871, in get\r\n ident, loading.load_on_ident)\r\n File \"sqlalchemy/orm/query.py\", line 905, in _get_impl\r\n return fallback_fn(self, key)\r\n File \"sqlalchemy/orm/loading.py\", line 231, in load_on_ident\r\n return q.one()\r\n File \"sqlalchemy/orm/query.py\", line 2837, in one\r\n ret = self.one_or_none()\r\n File \"sqlalchemy/orm/query.py\", line 2807, in one_or_none\r\n ret = list(self)\r\n File \"sqlalchemy/orm/query.py\", line 2878, in __iter__\r\n return 
self._execute_and_instances(context)\r\n File \"sqlalchemy/orm/query.py\", line 2901, in _execute_and_instances\r\n result = conn.execute(querycontext.statement, self._params)\r\n File \"sqlalchemy/engine/base.py\", line 948, in execute\r\n return meth(self, multiparams, params)\r\n File \"sqlalchemy/sql/elements.py\", line 269, in _execute_on_connection\r\n return connection._execute_clauseelement(self, multiparams, params)\r\n File \"sqlalchemy/engine/base.py\", line 1060, in _execute_clauseelement\r\n compiled_sql, distilled_params\r\n File \"sqlalchemy/engine/base.py\", line 1207, in _execute_context\r\n context.executemany)\r\n File \"sqlalchemy/event/attr.py\", line 256, in __call__\r\n fn(*args, **kw)\r\n File \"scout_apm/sqlalchemy/__init__.py\", line 16, in after_cursor_execute\r\n if tr.callset.should_capture_bracktrace(statement) is True:\r\n AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'\r\n```\n", "before_files": [{"content": "from scout_apm.core.tracked_request import TrackedRequest\n\nfrom sqlalchemy import event\n\ndef instrument_sqlalchemy(engine):\n def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tr = TrackedRequest.instance()\n span = tr.start_span(operation='SQL/Query')\n span.tag('db.statement', statement)\n\n def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tr = TrackedRequest.instance()\n span = tr.current_span()\n if span is not None:\n tr.callset.update(statement, 1, span.duration())\n if tr.callset.should_capture_bracktrace(statement) is True:\n span.capture_backtrace()\n tr.stop_span()\n\n if getattr(engine, \"_scout_instrumented\", False) != True:\n event.listen(engine, 'before_cursor_execute', before_cursor_execute)\n event.listen(engine, 'after_cursor_execute', after_cursor_execute)\n setattr(engine, \"_scout_instrumented\", True)\n", "path": "src/scout_apm/sqlalchemy/__init__.py"}, {"content": "from glob import glob\nfrom os.path import basename, splitext\n\nfrom setuptools import find_packages, setup\n\nsetup(name='scout_apm',\n version='1.1.8',\n description='Scout Application Performance Monitoring Agent',\n long_description='Scout Application Performance Monitoring Agent',\n url='https://github.com/scoutapp/scout_apm_python',\n author='Scout',\n author_email='[email protected]',\n license='MIT',\n zip_safe=False,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',\n packages=find_packages('src'),\n package_dir={'': 'src'},\n py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],\n entry_points={\n 'console_scripts': [\n 'core-agent-manager = scout_apm.core.cli.core_agent_manager:main'\n ]\n },\n install_requires=['psutil', 'PyYAML', 'requests'],\n keywords='apm performance monitoring development',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'Topic :: System :: Monitoring',\n 'License :: Other/Proprietary License',\n 'Operating System :: MacOS',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ])\n", "path": "setup.py"}, {"content": "from __future__ import absolute_import\n\nimport logging\nfrom datetime import datetime\nfrom uuid import 
uuid4\n\nfrom scout_apm.core.samplers import Samplers\nfrom scout_apm.core.request_manager import RequestManager\nfrom scout_apm.core.thread_local import ThreadLocalSingleton\nfrom scout_apm.core.n_plus_one_call_set import NPlusOneCallSet\nimport scout_apm.core.backtrace\n\n# Logging\nlogger = logging.getLogger(__name__)\n\n\nclass TrackedRequest(ThreadLocalSingleton):\n \"\"\"\n This is a container which keeps track of all module instances for a single\n request. For convenience they are made available as attributes based on\n their keyname\n \"\"\"\n def __init__(self, *args, **kwargs):\n self.req_id = 'req-' + str(uuid4())\n self.start_time = kwargs.get('start_time', datetime.utcnow())\n self.end_time = kwargs.get('end_time', None)\n self.active_spans = kwargs.get('active_spans', [])\n self.complete_spans = kwargs.get('complete_spans', [])\n self.tags = kwargs.get('tags', {})\n self.real_request = kwargs.get('real_request', False)\n self.callset = NPlusOneCallSet()\n logger.debug('Starting request: %s', self.req_id)\n\n def mark_real_request(self):\n self.real_request = True\n\n def is_real_request(self):\n return self.real_request\n\n def tag(self, key, value):\n if key in self.tags:\n logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)\n self.tags[key] = value\n\n def start_span(self, operation=None):\n maybe_parent = self.current_span()\n\n if maybe_parent is not None:\n parent_id = maybe_parent.span_id\n else:\n parent_id = None\n\n new_span = Span(\n request_id=self.req_id,\n operation=operation,\n parent=parent_id)\n self.active_spans.append(new_span)\n return new_span\n\n def stop_span(self):\n stopping_span = None\n try:\n stopping_span = self.active_spans.pop()\n except IndexError as e:\n logger.debug('Exception when stopping span: %s' % repr(e))\n\n if stopping_span is not None:\n stopping_span.stop()\n stopping_span.annotate()\n self.complete_spans.append(stopping_span)\n\n if len(self.active_spans) == 0:\n self.finish()\n\n def current_span(self):\n if len(self.active_spans) > 0:\n return self.active_spans[-1]\n else:\n return None\n\n # Request is done, release any info we have about it.\n def finish(self):\n logger.debug('Stopping request: %s', self.req_id)\n if self.end_time is None:\n self.end_time = datetime.utcnow()\n RequestManager.instance().add_request(self)\n if self.is_real_request():\n Samplers.ensure_running()\n\n # This can fail if the Tracked Request was created directly, not through instance()\n try:\n self.release()\n except:\n pass\n\n\nclass Span:\n def __init__(self, *args, **kwargs):\n self.span_id = kwargs.get('span_id', 'span-' + str(uuid4()))\n self.start_time = kwargs.get('start_time', datetime.utcnow())\n self.end_time = kwargs.get('end_time', None)\n self.request_id = kwargs.get('request_id', None)\n self.operation = kwargs.get('operation', None)\n self.parent = kwargs.get('parent', None)\n self.tags = kwargs.get('tags', {})\n\n def stop(self):\n self.end_time = datetime.utcnow()\n\n def tag(self, key, value):\n if key in self.tags:\n logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)\n self.tags[key] = value\n\n # In seconds\n def duration(self):\n if self.end_time is not None:\n return (self.end_time - self.start_time).total_seconds()\n else:\n # Current, running duration\n return (datetime.utcnow() - self.start_time).total_seconds()\n\n def duration_in_ms(self):\n return self.duration() / 1000\n\n # Add any interesting annotations to the span. 
Assumes that we are in the\n # process of stopping this span.\n def annotate(self):\n slow_threshold = 0.500\n if self.duration() > slow_threshold:\n self.capture_backtrace()\n\n def capture_backtrace(self):\n stack = scout_apm.core.backtrace.capture()\n self.tag('stack', stack)\n", "path": "src/scout_apm/core/tracked_request.py"}]} | 3,213 | 692 |
gh_patches_debug_22726 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1582 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
README.md -> index.md should be optional
This is a really useful feature, but it should be optional.
I have a `README.md` file in my repository exclusively to render it on GitHub, indicating that the documentation uses a format not compatible with GitHub, and that it should be viewed on the generated site.
See: https://github.com/Galarzaa90/NabBot/tree/master/docs
Now, I also have an `index.md` file in the `docs` folder, and that is the actual homepage of my site, and now it is getting overridden by `README.md` and there's no way to see the homepage anymore.
See:
- https://galarzaa90.github.io/NabBot/ - Generated Site
- https://github.com/Galarzaa90/NabBot/blob/master/docs/index.md - My index.md file
And this is my `pages`/`nav` section:
```yml
nav:
- Home: index.md
- Changelog: changelog.md
- Install Guide: install.md
- Permissions: permissions.md
- FAQ: faq.md
- Features:
- Overview: features/index.md
- Autoroles: features/autoroles.md
- Groups: features/groups.md
- Commands:
- Overview: commands/index.md
- Admin commands: commands/admin.md
- General commands: commands/general.md
- Info commands: commands/info.md
- Loot commands: commands/loot.md
- Mod commands: commands/mod.md
- Owner commands: commands/owner.md
- Roles commands: commands/roles.md
- Settings commands: commands/settings.md
- Tibia commands: commands/tibia.md
- TibiaWiki commands: commands/tibiawiki.md
- Tracking commands: commands/tracking.md
- Hosting:
- Configuration: hosting/config.md
- Messages: hosting/messages.md
- Cogs: hosting/cogs.md
```
Full file: https://github.com/Galarzaa90/NabBot/blob/master/mkdocs.yml
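
What is being asked for boils down to a precedence rule when both files sit in the same docs directory. A minimal sketch of that rule (the file names follow this report; where MkDocs would actually hook it in is left open):

```python
def pick_homepage_source(filenames):
    """Prefer index.md over README.md when both exist in the same directory."""
    names = {name.lower() for name in filenames}
    if 'index.md' in names:
        # README.md stays GitHub-only and must not shadow the real homepage.
        return 'index.md'
    return 'README.md' if 'readme.md' in names else None
```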
</issue>
<code>
[start of mkdocs/structure/files.py]
1 # coding: utf-8
2
3 from __future__ import unicode_literals
4 import fnmatch
5 import os
6 import logging
7 from functools import cmp_to_key
8
9 from mkdocs import utils
10
11
12 log = logging.getLogger(__name__)
13
14
15 class Files(object):
16 """ A collection of File objects. """
17 def __init__(self, files):
18 self._files = files
19 self.src_paths = {file.src_path: file for file in files}
20
21 def __iter__(self):
22 return iter(self._files)
23
24 def __len__(self):
25 return len(self._files)
26
27 def __contains__(self, path):
28 return path in self.src_paths
29
30 def get_file_from_path(self, path):
31 """ Return a File instance with File.src_path equal to path. """
32 return self.src_paths.get(os.path.normpath(path))
33
34 def append(self, file):
35 """ Append file to Files collection. """
36 self._files.append(file)
37 self.src_paths[file.src_path] = file
38
39 def copy_static_files(self, dirty=False):
40 """ Copy static files from source to destination. """
41 for file in self:
42 if not file.is_documentation_page():
43 file.copy_file(dirty)
44
45 def documentation_pages(self):
46 """ Return iterable of all Markdown page file objects. """
47 return [file for file in self if file.is_documentation_page()]
48
49 def static_pages(self):
50 """ Return iterable of all static page file objects. """
51 return [file for file in self if file.is_static_page()]
52
53 def media_files(self):
54 """ Return iterable of all file objects which are not documentation or static pages. """
55 return [file for file in self if file.is_media_file()]
56
57 def javascript_files(self):
58 """ Return iterable of all javascript file objects. """
59 return [file for file in self if file.is_javascript()]
60
61 def css_files(self):
62 """ Return iterable of all CSS file objects. """
63 return [file for file in self if file.is_css()]
64
65 def add_files_from_theme(self, env, config):
66 """ Retrieve static files from Jinja environment and add to collection. """
67 def filter(name):
68 patterns = ['.*', '*.py', '*.pyc', '*.html', 'mkdocs_theme.yml']
69 patterns.extend(config['theme'].static_templates)
70 for pattern in patterns:
71 if fnmatch.fnmatch(name, pattern):
72 return False
73 return True
74 for path in env.list_templates(filter_func=filter):
75 for dir in config['theme'].dirs:
76 # Find the first theme dir which contains path
77 if os.path.isfile(os.path.join(dir, path)):
78 self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))
79 break
80
81
82 class File(object):
83 """
84 A MkDocs File object.
85
86 Points to the source and destination locations of a file.
87
88 The `path` argument must be a path that exists relative to `src_dir`.
89
90 The `src_dir` and `dest_dir` must be absolute paths on the local file system.
91
92 The `use_directory_urls` argument controls how destination paths are generated. If `False`, a Markdown file is
93 mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is
94 mapped to an HTML index file (`index.html`) nested in a directory using the "name" of the file in `path`. The
95 `use_directory_urls` argument has no effect on non-Markdown files.
96
97 File objects have the following properties, which are Unicode strings:
98
99 File.src_path
100 The pure path of the source file relative to the source directory.
101
102 File.abs_src_path
103 The absolute concrete path of the source file.
104
105 File.dest_path
106 The pure path of the destination file relative to the destination directory.
107
108 File.abs_dest_path
109 The absolute concrete path of the destination file.
110
111 File.url
112 The url of the destination file relative to the destination directory as a string.
113 """
114 def __init__(self, path, src_dir, dest_dir, use_directory_urls):
115 self.page = None
116 self.src_path = os.path.normpath(path)
117 self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))
118 self.name = self._get_stem()
119 self.dest_path = self._get_dest_path(use_directory_urls)
120 self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))
121 self.url = self._get_url(use_directory_urls)
122
123 def __eq__(self, other):
124
125 def sub_dict(d):
126 return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])
127
128 return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))
129
130 def __ne__(self, other):
131 return not self.__eq__(other)
132
133 def _get_stem(self):
134 """ Return the name of the file without it's extension. """
135 filename = os.path.basename(self.src_path)
136 stem, ext = os.path.splitext(filename)
137 return 'index' if stem in ('index', 'README') else stem
138
139 def _get_dest_path(self, use_directory_urls):
140 """ Return destination path based on source path. """
141 if self.is_documentation_page():
142 if use_directory_urls:
143 parent, filename = os.path.split(self.src_path)
144 if self.name == 'index':
145 # index.md or README.md => index.html
146 return os.path.join(parent, 'index.html')
147 else:
148 # foo.md => foo/index.html
149 return os.path.join(parent, self.name, 'index.html')
150 else:
151 # foo.md => foo.html
152 root, ext = os.path.splitext(self.src_path)
153 return root + '.html'
154 return self.src_path
155
156 def _get_url(self, use_directory_urls):
157 """ Return url based in destination path. """
158 url = self.dest_path.replace(os.path.sep, '/')
159 dirname, filename = os.path.split(url)
160 if use_directory_urls and filename == 'index.html':
161 if dirname == '':
162 url = '.'
163 else:
164 url = dirname + '/'
165 return url
166
167 def url_relative_to(self, other):
168 """ Return url for file relative to other file. """
169 return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)
170
171 def copy_file(self, dirty=False):
172 """ Copy source file to destination, ensuring parent directories exist. """
173 if dirty and not self.is_modified():
174 log.debug("Skip copying unmodified file: '{}'".format(self.src_path))
175 else:
176 log.debug("Copying media file: '{}'".format(self.src_path))
177 utils.copy_file(self.abs_src_path, self.abs_dest_path)
178
179 def is_modified(self):
180 if os.path.isfile(self.abs_dest_path):
181 return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)
182 return True
183
184 def is_documentation_page(self):
185 """ Return True if file is a Markdown page. """
186 return os.path.splitext(self.src_path)[1] in utils.markdown_extensions
187
188 def is_static_page(self):
189 """ Return True if file is a static page (html, xml, json). """
190 return os.path.splitext(self.src_path)[1] in (
191 '.html',
192 '.htm',
193 '.xml',
194 '.json',
195 )
196
197 def is_media_file(self):
198 """ Return True if file is not a documentation or static page. """
199 return not (self.is_documentation_page() or self.is_static_page())
200
201 def is_javascript(self):
202 """ Return True if file is a JavaScript file. """
203 return os.path.splitext(self.src_path)[1] in (
204 '.js',
205 '.javascript',
206 )
207
208 def is_css(self):
209 """ Return True if file is a CSS file. """
210 return os.path.splitext(self.src_path)[1] in (
211 '.css',
212 )
213
214
215 def get_files(config):
216 """ Walk the `docs_dir` and return a Files collection. """
217 files = []
218 exclude = ['.*', '/templates']
219
220 for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):
221 relative_dir = os.path.relpath(source_dir, config['docs_dir'])
222
223 for dirname in list(dirnames):
224 path = os.path.normpath(os.path.join(relative_dir, dirname))
225 # Skip any excluded directories
226 if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):
227 dirnames.remove(dirname)
228 dirnames.sort()
229
230 for filename in _sort_files(filenames):
231 path = os.path.normpath(os.path.join(relative_dir, filename))
232 # Skip any excluded files
233 if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):
234 continue
235 files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))
236
237 return Files(files)
238
239
240 def _sort_files(filenames):
241 """ Always sort `index` as first filename in list. """
242
243 def compare(x, y):
244 if x == y:
245 return 0
246 if os.path.splitext(y)[0] == 'index':
247 return 1
248 if os.path.splitext(x)[0] == 'index' or x < y:
249 return -1
250 return 1
251
252 return sorted(filenames, key=cmp_to_key(compare))
253
254
255 def _filter_paths(basename, path, is_dir, exclude):
256 """ .gitignore style file filtering. """
257 for item in exclude:
258 # Items ending in '/' apply only to directories.
259 if item.endswith('/') and not is_dir:
260 continue
261 # Items starting with '/' apply to the whole path.
262 # In any other cases just the basename is used.
263 match = path if item.startswith('/') else basename
264 if fnmatch.fnmatch(match, item.strip('/')):
265 return True
266 return False
267
[end of mkdocs/structure/files.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/structure/files.py b/mkdocs/structure/files.py
--- a/mkdocs/structure/files.py
+++ b/mkdocs/structure/files.py
@@ -232,20 +232,23 @@
# Skip any excluded files
if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):
continue
+ # Skip README.md is an index file also exists in dir
+ if filename.lower() == 'readme.md' and 'index.md' in filenames:
+ continue
files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))
return Files(files)
def _sort_files(filenames):
- """ Always sort `index` as first filename in list. """
+ """ Always sort `index` or `README` as first filename in list. """
def compare(x, y):
if x == y:
return 0
- if os.path.splitext(y)[0] == 'index':
+ if os.path.splitext(y)[0] in ['index', 'README']:
return 1
- if os.path.splitext(x)[0] == 'index' or x < y:
+ if os.path.splitext(x)[0] in ['index', 'README'] or x < y:
return -1
return 1
| {"golden_diff": "diff --git a/mkdocs/structure/files.py b/mkdocs/structure/files.py\n--- a/mkdocs/structure/files.py\n+++ b/mkdocs/structure/files.py\n@@ -232,20 +232,23 @@\n # Skip any excluded files\n if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):\n continue\n+ # Skip README.md is an index file also exists in dir\n+ if filename.lower() == 'readme.md' and 'index.md' in filenames:\n+ continue\n files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))\n \n return Files(files)\n \n \n def _sort_files(filenames):\n- \"\"\" Always sort `index` as first filename in list. \"\"\"\n+ \"\"\" Always sort `index` or `README` as first filename in list. \"\"\"\n \n def compare(x, y):\n if x == y:\n return 0\n- if os.path.splitext(y)[0] == 'index':\n+ if os.path.splitext(y)[0] in ['index', 'README']:\n return 1\n- if os.path.splitext(x)[0] == 'index' or x < y:\n+ if os.path.splitext(x)[0] in ['index', 'README'] or x < y:\n return -1\n return 1\n", "issue": "README.md -> index.md should be optional\nThis is a really useful feature, but it should be optional.\r\n\r\nI have a `README.md` file in my repository exclusively to render it on GitHub, indicating that the documentation uses format not compatible with GitHub, and that it should be viewed on the generated site.\r\n\r\nSee: https://github.com/Galarzaa90/NabBot/tree/master/docs\r\n\r\nNow, I also have an `index.md` file in the `docs` folder, and that is the actual homepage of my site, and now it is getting overriden by `README.md` and there's no way to see the homepage anymore.\r\n\r\nSee:\r\n- https://galarzaa90.github.io/NabBot/ - Generated Site\r\n- https://github.com/Galarzaa90/NabBot/blob/master/docs/index.md - My index.md file\r\n\r\nAnd this is my `pages`/`nav` section:\r\n```yml\r\nnav:\r\n - Home: index.md\r\n - Changelog: changelog.md\r\n - Install Guide: install.md\r\n - Permissions: permissions.md\r\n - FAQ: faq.md\r\n - Features:\r\n - Overview: features/index.md\r\n - Autoroles: features/autoroles.md\r\n - Groups: features/groups.md\r\n - Commands:\r\n - Overview: commands/index.md\r\n - Admin commands: commands/admin.md\r\n - General commands: commands/general.md\r\n - Info commands: commands/info.md\r\n - Loot commands: commands/loot.md\r\n - Mod commands: commands/mod.md\r\n - Owner commands: commands/owner.md\r\n - Roles commands: commands/roles.md\r\n - Settings commands: commands/settings.md\r\n - Tibia commands: commands/tibia.md\r\n - TibiaWiki commands: commands/tibiawiki.md\r\n - Tracking commands: commands/tracking.md\r\n - Hosting:\r\n - Configuration: hosting/config.md\r\n - Messages: hosting/messages.md\r\n - Cogs: hosting/cogs.md\r\n```\r\n\r\nFull file: https://github.com/Galarzaa90/NabBot/blob/master/mkdocs.yml\n", "before_files": [{"content": "# coding: utf-8\n\nfrom __future__ import unicode_literals\nimport fnmatch\nimport os\nimport logging\nfrom functools import cmp_to_key\n\nfrom mkdocs import utils\n\n\nlog = logging.getLogger(__name__)\n\n\nclass Files(object):\n \"\"\" A collection of File objects. \"\"\"\n def __init__(self, files):\n self._files = files\n self.src_paths = {file.src_path: file for file in files}\n\n def __iter__(self):\n return iter(self._files)\n\n def __len__(self):\n return len(self._files)\n\n def __contains__(self, path):\n return path in self.src_paths\n\n def get_file_from_path(self, path):\n \"\"\" Return a File instance with File.src_path equal to path. 
\"\"\"\n return self.src_paths.get(os.path.normpath(path))\n\n def append(self, file):\n \"\"\" Append file to Files collection. \"\"\"\n self._files.append(file)\n self.src_paths[file.src_path] = file\n\n def copy_static_files(self, dirty=False):\n \"\"\" Copy static files from source to destination. \"\"\"\n for file in self:\n if not file.is_documentation_page():\n file.copy_file(dirty)\n\n def documentation_pages(self):\n \"\"\" Return iterable of all Markdown page file objects. \"\"\"\n return [file for file in self if file.is_documentation_page()]\n\n def static_pages(self):\n \"\"\" Return iterable of all static page file objects. \"\"\"\n return [file for file in self if file.is_static_page()]\n\n def media_files(self):\n \"\"\" Return iterable of all file objects which are not documentation or static pages. \"\"\"\n return [file for file in self if file.is_media_file()]\n\n def javascript_files(self):\n \"\"\" Return iterable of all javascript file objects. \"\"\"\n return [file for file in self if file.is_javascript()]\n\n def css_files(self):\n \"\"\" Return iterable of all CSS file objects. \"\"\"\n return [file for file in self if file.is_css()]\n\n def add_files_from_theme(self, env, config):\n \"\"\" Retrieve static files from Jinja environment and add to collection. \"\"\"\n def filter(name):\n patterns = ['.*', '*.py', '*.pyc', '*.html', 'mkdocs_theme.yml']\n patterns.extend(config['theme'].static_templates)\n for pattern in patterns:\n if fnmatch.fnmatch(name, pattern):\n return False\n return True\n for path in env.list_templates(filter_func=filter):\n for dir in config['theme'].dirs:\n # Find the first theme dir which contains path\n if os.path.isfile(os.path.join(dir, path)):\n self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))\n break\n\n\nclass File(object):\n \"\"\"\n A MkDocs File object.\n\n Points to the source and destination locations of a file.\n\n The `path` argument must be a path that exists relative to `src_dir`.\n\n The `src_dir` and `dest_dir` must be absolute paths on the local file system.\n\n The `use_directory_urls` argument controls how destination paths are generated. If `False`, a Markdown file is\n mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is\n mapped to an HTML index file (`index.html`) nested in a directory using the \"name\" of the file in `path`. 
The\n `use_directory_urls` argument has no effect on non-Markdown files.\n\n File objects have the following properties, which are Unicode strings:\n\n File.src_path\n The pure path of the source file relative to the source directory.\n\n File.abs_src_path\n The absolute concrete path of the source file.\n\n File.dest_path\n The pure path of the destination file relative to the destination directory.\n\n File.abs_dest_path\n The absolute concrete path of the destination file.\n\n File.url\n The url of the destination file relative to the destination directory as a string.\n \"\"\"\n def __init__(self, path, src_dir, dest_dir, use_directory_urls):\n self.page = None\n self.src_path = os.path.normpath(path)\n self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))\n self.name = self._get_stem()\n self.dest_path = self._get_dest_path(use_directory_urls)\n self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))\n self.url = self._get_url(use_directory_urls)\n\n def __eq__(self, other):\n\n def sub_dict(d):\n return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])\n\n return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _get_stem(self):\n \"\"\" Return the name of the file without it's extension. \"\"\"\n filename = os.path.basename(self.src_path)\n stem, ext = os.path.splitext(filename)\n return 'index' if stem in ('index', 'README') else stem\n\n def _get_dest_path(self, use_directory_urls):\n \"\"\" Return destination path based on source path. \"\"\"\n if self.is_documentation_page():\n if use_directory_urls:\n parent, filename = os.path.split(self.src_path)\n if self.name == 'index':\n # index.md or README.md => index.html\n return os.path.join(parent, 'index.html')\n else:\n # foo.md => foo/index.html\n return os.path.join(parent, self.name, 'index.html')\n else:\n # foo.md => foo.html\n root, ext = os.path.splitext(self.src_path)\n return root + '.html'\n return self.src_path\n\n def _get_url(self, use_directory_urls):\n \"\"\" Return url based in destination path. \"\"\"\n url = self.dest_path.replace(os.path.sep, '/')\n dirname, filename = os.path.split(url)\n if use_directory_urls and filename == 'index.html':\n if dirname == '':\n url = '.'\n else:\n url = dirname + '/'\n return url\n\n def url_relative_to(self, other):\n \"\"\" Return url for file relative to other file. \"\"\"\n return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)\n\n def copy_file(self, dirty=False):\n \"\"\" Copy source file to destination, ensuring parent directories exist. \"\"\"\n if dirty and not self.is_modified():\n log.debug(\"Skip copying unmodified file: '{}'\".format(self.src_path))\n else:\n log.debug(\"Copying media file: '{}'\".format(self.src_path))\n utils.copy_file(self.abs_src_path, self.abs_dest_path)\n\n def is_modified(self):\n if os.path.isfile(self.abs_dest_path):\n return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)\n return True\n\n def is_documentation_page(self):\n \"\"\" Return True if file is a Markdown page. \"\"\"\n return os.path.splitext(self.src_path)[1] in utils.markdown_extensions\n\n def is_static_page(self):\n \"\"\" Return True if file is a static page (html, xml, json). 
\"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.html',\n '.htm',\n '.xml',\n '.json',\n )\n\n def is_media_file(self):\n \"\"\" Return True if file is not a documentation or static page. \"\"\"\n return not (self.is_documentation_page() or self.is_static_page())\n\n def is_javascript(self):\n \"\"\" Return True if file is a JavaScript file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.js',\n '.javascript',\n )\n\n def is_css(self):\n \"\"\" Return True if file is a CSS file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.css',\n )\n\n\ndef get_files(config):\n \"\"\" Walk the `docs_dir` and return a Files collection. \"\"\"\n files = []\n exclude = ['.*', '/templates']\n\n for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):\n relative_dir = os.path.relpath(source_dir, config['docs_dir'])\n\n for dirname in list(dirnames):\n path = os.path.normpath(os.path.join(relative_dir, dirname))\n # Skip any excluded directories\n if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):\n dirnames.remove(dirname)\n dirnames.sort()\n\n for filename in _sort_files(filenames):\n path = os.path.normpath(os.path.join(relative_dir, filename))\n # Skip any excluded files\n if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):\n continue\n files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))\n\n return Files(files)\n\n\ndef _sort_files(filenames):\n \"\"\" Always sort `index` as first filename in list. \"\"\"\n\n def compare(x, y):\n if x == y:\n return 0\n if os.path.splitext(y)[0] == 'index':\n return 1\n if os.path.splitext(x)[0] == 'index' or x < y:\n return -1\n return 1\n\n return sorted(filenames, key=cmp_to_key(compare))\n\n\ndef _filter_paths(basename, path, is_dir, exclude):\n \"\"\" .gitignore style file filtering. \"\"\"\n for item in exclude:\n # Items ending in '/' apply only to directories.\n if item.endswith('/') and not is_dir:\n continue\n # Items starting with '/' apply to the whole path.\n # In any other cases just the basename is used.\n match = path if item.startswith('/') else basename\n if fnmatch.fnmatch(match, item.strip('/')):\n return True\n return False\n", "path": "mkdocs/structure/files.py"}]} | 3,842 | 302 |
gh_patches_debug_18920 | rasdani/github-patches | git_diff | google__turbinia-1098 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set a default file size limit for PlasoTask hashers
Currently, all PlasoTask instances will attempt to hash files of any size, potentially very large ones. This could lead to unusually long processing times.
This is a small part of a larger effort to try to optimize how Turbinia configures Plaso tasks to better utilize inherent parallel processing capabilities.
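
For illustration only, such a cap could be one extra entry in the task's dynamic configuration that gets forwarded to log2timeline; the sketch below assumes Plaso's `--hasher_file_size_limit` option (value in bytes) and uses 1 GiB purely as a placeholder default:

```python
# Sketch only: a default cap on the size of files the hashers will process.
DEFAULT_HASHER_FILE_SIZE_LIMIT = str(1024 ** 3)  # 1 GiB, expressed in bytes

task_config = {
    'hashers': 'all',
    'hasher_file_size_limit': DEFAULT_HASHER_FILE_SIZE_LIMIT,
}

cmd = [
    'log2timeline.py',
    '--hashers', task_config['hashers'],
    '--hasher_file_size_limit', task_config['hasher_file_size_limit'],
]
```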
</issue>
<code>
[start of turbinia/workers/plaso.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import logging
21
22 from turbinia import config
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.evidence import PlasoFile
25 from turbinia.workers import TurbiniaTask
26 from turbinia.lib import file_helpers
27
28
29 class PlasoTask(TurbiniaTask):
30 """Task to run Plaso (log2timeline)."""
31
32 # Plaso requires the Disk to be attached, but doesn't require it be mounted.
33 REQUIRED_STATES = [
34 state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED
35 ]
36
37 TASK_CONFIG = {
38 # 'none' as indicated in the options for status_view within
39 # the Plaso documentation
40 'status_view': 'none',
41 'hashers': 'all',
42 'partitions': 'all',
43 'vss_stores': 'none',
44 # artifact_filters and file_filter are mutually exclusive
45 # parameters and Plaso will error out if both parameters are used.
46 'artifact_filters': None,
47 'file_filter': None,
48 'custom_artifact_definitions': None,
49 'parsers': None,
50 'yara_rules': None
51 }
52
53 def build_plaso_command(self, base_command, conf):
54 """Builds a typical plaso command, contains logic specific to log2timeline.
55
56 Args:
57 base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)
58 conf (dict): Dynamic config containing the parameters for the command.
59
60 Returns:
61 String for valid Log2timeline command.
62 """
63 self.result.log(
64 'Generating Plaso command line from arguments: {0!s}'.format(conf),
65 level=logging.DEBUG)
66 cmd = [base_command]
67 for k, v in conf.items():
68 cli_args = [
69 'status_view', 'hashers', 'partitions', 'vss_stores',
70 'custom_artifact_definitions', 'parsers', 'artifact_filters',
71 'file_filter', 'yara_rules'
72 ]
73 if (k not in cli_args or not v):
74 continue
75 prepend = '-'
76 if len(k) > 1:
77 prepend = '--'
78 if k == 'file_filter':
79 file_path = file_helpers.write_list_to_temp_file(
80 v, preferred_dir=self.tmp_dir)
81 cmd.extend(['-f', file_path])
82 elif k == 'yara_rules':
83 file_path = file_helpers.write_str_to_temp_file(
84 v, preferred_dir=self.tmp_dir)
85 cmd.extend(['--yara_rules', file_path])
86 elif isinstance(v, list):
87 cmd.extend([prepend + k, ','.join(v)])
88 elif isinstance(v, bool):
89 cmd.append(prepend + k)
90 elif isinstance(v, str):
91 cmd.extend([prepend + k, v])
92 return cmd
93
94 def run(self, evidence, result):
95 """Task that process data with Plaso.
96
97 Args:
98 evidence (Evidence object): The evidence we will process.
99 result (TurbiniaTaskResult): The object to place task results into.
100
101 Returns:
102 TurbiniaTaskResult object.
103 """
104
105 config.LoadConfig()
106
107 # Write plaso file into tmp_dir because sqlite has issues with some shared
108 # filesystems (e.g NFS).
109 plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))
110 plaso_evidence = PlasoFile(source_path=plaso_file)
111 plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))
112
113 cmd = self.build_plaso_command('log2timeline.py', self.task_config)
114
115 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
116 cmd.append('-d')
117
118 if evidence.credentials:
119 for credential_type, credential_data in evidence.credentials:
120 cmd.extend([
121 '--credential', '{0:s}:{1:s}'.format(
122 credential_type, credential_data)
123 ])
124
125 cmd.extend(['--temporary_directory', self.tmp_dir])
126 cmd.extend(['--logfile', plaso_log])
127 cmd.extend(['--unattended'])
128 cmd.extend(['--storage_file', plaso_file])
129 cmd.extend([evidence.local_path])
130
131 result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
132 self.execute(
133 cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],
134 close=True)
135
136 return result
137
[end of turbinia/workers/plaso.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py
--- a/turbinia/workers/plaso.py
+++ b/turbinia/workers/plaso.py
@@ -39,6 +39,7 @@
# the Plaso documentation
'status_view': 'none',
'hashers': 'all',
+ 'hasher_file_size_limit': '1073741824',
'partitions': 'all',
'vss_stores': 'none',
# artifact_filters and file_filter are mutually exclusive
@@ -66,9 +67,9 @@
cmd = [base_command]
for k, v in conf.items():
cli_args = [
- 'status_view', 'hashers', 'partitions', 'vss_stores',
- 'custom_artifact_definitions', 'parsers', 'artifact_filters',
- 'file_filter', 'yara_rules'
+ 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',
+ 'vss_stores', 'custom_artifact_definitions', 'parsers',
+ 'artifact_filters', 'file_filter', 'yara_rules'
]
if (k not in cli_args or not v):
continue
| {"golden_diff": "diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py\n--- a/turbinia/workers/plaso.py\n+++ b/turbinia/workers/plaso.py\n@@ -39,6 +39,7 @@\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n+ 'hasher_file_size_limit': '1073741824',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n@@ -66,9 +67,9 @@\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n- 'status_view', 'hashers', 'partitions', 'vss_stores',\n- 'custom_artifact_definitions', 'parsers', 'artifact_filters',\n- 'file_filter', 'yara_rules'\n+ 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',\n+ 'vss_stores', 'custom_artifact_definitions', 'parsers',\n+ 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n", "issue": "Set a default file size limit for PlasoTask hashers\nCurrently, all PlasoTask instances will attempt to hash files of any size, potentially very large ones .This could lead to unusually long processing times.\r\n\r\nThis is a small part of a larger effort to try to optimize how Turbinia configures Plaso tasks to better utilize inherent parallel processing capabilities.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [\n state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED\n ]\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n 'artifact_filters': None,\n 'file_filter': None,\n 'custom_artifact_definitions': None,\n 'parsers': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. 
log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'partitions', 'vss_stores',\n 'custom_artifact_definitions', 'parsers', 'artifact_filters',\n 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend(['--storage_file', plaso_file])\n cmd.extend([evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}]} | 2,030 | 282 |
gh_patches_debug_27728 | rasdani/github-patches | git_diff | getmoto__moto-2398 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mock_s3 gives "ConnectionError: Connection refused..." error when Key used in put_object call contains leading forward slash
This is a very specific situation; however, the behavior of moto in this situation doesn't match what happens when you use the actual boto3 connecting to AWS.
The situation arises when the Key of an S3 object that you are putting into an S3 bucket has a leading forward slash ("/").
So if your Key is "/abc" and you try putting an object into an S3 bucket with that key as follows:
```
def do_something():
my_bucket = boto3.resource('s3', region_name='us-west-2').Bucket('my_bucket')
my_bucket.put_object(Key='/abc', Body='ABCD')
```
If you run this snippet of code (along with required imports and so on), then an object with the key "/abc" will be placed in the bucket with the content specified.
When you view this on the AWS Console, it will seem a bit strange because at the "root" level of the S3 bucket you'll find a "folder" with no name. Upon clicking this unnamed folder, you'll see an object named "abc" with the content specified. So this is the expected behavior when boto3 is used without moto.
When moto is used in a similar situation, let's say as follows:
```
@mock_s3
def do_something_moto():
    conn = boto3.resource('s3', region_name='us-west-2')
    bucket = conn.create_bucket(Bucket='my_bucket')
    bucket.put_object(Key='/abc', Body='ABCD')
```
If you run this code, you will get an error with a traceback ending with:
`ConnectionError: Connection refused: PUT https://foobar.s3.us-west-2.amazonaws.com//abc`
If you remove the leading forward slash from the Key in the put_object call:
`bucket.put_object(Key='abc', Body='ABCD')`
Then this code will run without an error. As pointed out above, when using boto3 without moto, a Key with a leading forward slash is a valid key and should not cause an error. This is really an edge case, but I think moto should be made to handle this situation correctly.
As for versions of boto3 and moto being used:
boto3: 1.7.19
moto: 1.3.3
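
Putting the pieces together, the expected behaviour can be captured in one self-contained snippet; this is a sketch assuming moto's `mock_s3` decorator and the boto3 resource API, not an excerpt from either project:

```python
import boto3
from moto import mock_s3


@mock_s3
def test_put_object_with_leading_slash_key():
    # A key with a leading "/" is valid in real S3, so moto should accept it too.
    conn = boto3.resource('s3', region_name='us-west-2')
    bucket = conn.create_bucket(
        Bucket='my_bucket',
        CreateBucketConfiguration={'LocationConstraint': 'us-west-2'})
    bucket.put_object(Key='/abc', Body='ABCD')

    body = conn.Object('my_bucket', '/abc').get()['Body'].read().decode('utf-8')
    assert body == 'ABCD'
```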
</issue>
<code>
[start of moto/server.py]
1 from __future__ import unicode_literals
2
3 import argparse
4 import json
5 import re
6 import sys
7 from threading import Lock
8
9 import six
10 from flask import Flask
11 from flask.testing import FlaskClient
12
13 from six.moves.urllib.parse import urlencode
14 from werkzeug.routing import BaseConverter
15 from werkzeug.serving import run_simple
16
17 from moto.backends import BACKENDS
18 from moto.core.utils import convert_flask_to_httpretty_response
19
20
21 HTTP_METHODS = ["GET", "POST", "PUT", "DELETE", "HEAD", "PATCH"]
22
23
24 DEFAULT_SERVICE_REGION = ('s3', 'us-east-1')
25
26 # Map of unsigned calls to service-region as per AWS API docs
27 # https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis
28 UNSIGNED_REQUESTS = {
29 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),
30 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),
31 }
32
33
34 class DomainDispatcherApplication(object):
35 """
36 Dispatch requests to different applications based on the "Host:" header
37 value. We'll match the host header value with the url_bases of each backend.
38 """
39
40 def __init__(self, create_app, service=None):
41 self.create_app = create_app
42 self.lock = Lock()
43 self.app_instances = {}
44 self.service = service
45
46 def get_backend_for_host(self, host):
47 if host == 'moto_api':
48 return host
49
50 if self.service:
51 return self.service
52
53 if host in BACKENDS:
54 return host
55
56 for backend_name, backend in BACKENDS.items():
57 for url_base in list(backend.values())[0].url_bases:
58 if re.match(url_base, 'http://%s' % host):
59 return backend_name
60
61 def infer_service_region_host(self, environ):
62 auth = environ.get('HTTP_AUTHORIZATION')
63 if auth:
64 # Signed request
65 # Parse auth header to find service assuming a SigV4 request
66 # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
67 # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']
68 try:
69 credential_scope = auth.split(",")[0].split()[1]
70 _, _, region, service, _ = credential_scope.split("/")
71 except ValueError:
72 # Signature format does not match, this is exceptional and we can't
73 # infer a service-region. A reduced set of services still use
74 # the deprecated SigV2, ergo prefer S3 as most likely default.
75 # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html
76 service, region = DEFAULT_SERVICE_REGION
77 else:
78 # Unsigned request
79 target = environ.get('HTTP_X_AMZ_TARGET')
80 if target:
81 service, _ = target.split('.', 1)
82 service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)
83 else:
84 # S3 is the last resort when the target is also unknown
85 service, region = DEFAULT_SERVICE_REGION
86
87 if service == 'dynamodb':
88 if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):
89 host = 'dynamodbstreams'
90 else:
91 dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split("_")[1].split(".")[0]
92 # If Newer API version, use dynamodb2
93 if dynamo_api_version > "20111205":
94 host = "dynamodb2"
95 else:
96 host = "{service}.{region}.amazonaws.com".format(
97 service=service, region=region)
98
99 return host
100
101 def get_application(self, environ):
102 path_info = environ.get('PATH_INFO', '')
103
104 # The URL path might contain non-ASCII text, for instance unicode S3 bucket names
105 if six.PY2 and isinstance(path_info, str):
106 path_info = six.u(path_info)
107 if six.PY3 and isinstance(path_info, six.binary_type):
108 path_info = path_info.decode('utf-8')
109
110 if path_info.startswith("/moto-api") or path_info == "/favicon.ico":
111 host = "moto_api"
112 elif path_info.startswith("/latest/meta-data/"):
113 host = "instance_metadata"
114 else:
115 host = environ['HTTP_HOST'].split(':')[0]
116
117 with self.lock:
118 backend = self.get_backend_for_host(host)
119 if not backend:
120 # No regular backend found; try parsing other headers
121 host = self.infer_service_region_host(environ)
122 backend = self.get_backend_for_host(host)
123
124 app = self.app_instances.get(backend, None)
125 if app is None:
126 app = self.create_app(backend)
127 self.app_instances[backend] = app
128 return app
129
130 def __call__(self, environ, start_response):
131 backend_app = self.get_application(environ)
132 return backend_app(environ, start_response)
133
134
135 class RegexConverter(BaseConverter):
136 # http://werkzeug.pocoo.org/docs/routing/#custom-converters
137
138 def __init__(self, url_map, *items):
139 super(RegexConverter, self).__init__(url_map)
140 self.regex = items[0]
141
142
143 class AWSTestHelper(FlaskClient):
144
145 def action_data(self, action_name, **kwargs):
146 """
147 Method calls resource with action_name and returns data of response.
148 """
149 opts = {"Action": action_name}
150 opts.update(kwargs)
151 res = self.get("/?{0}".format(urlencode(opts)),
152 headers={"Host": "{0}.us-east-1.amazonaws.com".format(self.application.service)})
153 return res.data.decode("utf-8")
154
155 def action_json(self, action_name, **kwargs):
156 """
157 Method calls resource with action_name and returns object obtained via
158 deserialization of output.
159 """
160 return json.loads(self.action_data(action_name, **kwargs))
161
162
163 def create_backend_app(service):
164 from werkzeug.routing import Map
165
166 # Create the backend_app
167 backend_app = Flask(__name__)
168 backend_app.debug = True
169 backend_app.service = service
170
171 # Reset view functions to reset the app
172 backend_app.view_functions = {}
173 backend_app.url_map = Map()
174 backend_app.url_map.converters['regex'] = RegexConverter
175 backend = list(BACKENDS[service].values())[0]
176 for url_path, handler in backend.flask_paths.items():
177 if handler.__name__ == 'dispatch':
178 endpoint = '{0}.dispatch'.format(handler.__self__.__name__)
179 else:
180 endpoint = None
181
182 original_endpoint = endpoint
183 index = 2
184 while endpoint in backend_app.view_functions:
185 # HACK: Sometimes we map the same view to multiple url_paths. Flask
186             # requires us to have different names.
187 endpoint = original_endpoint + str(index)
188 index += 1
189
190 backend_app.add_url_rule(
191 url_path,
192 endpoint=endpoint,
193 methods=HTTP_METHODS,
194 view_func=convert_flask_to_httpretty_response(handler),
195 strict_slashes=False,
196 )
197
198 backend_app.test_client_class = AWSTestHelper
199 return backend_app
200
201
202 def main(argv=sys.argv[1:]):
203 parser = argparse.ArgumentParser()
204
205 # Keep this for backwards compat
206 parser.add_argument(
207 "service",
208 type=str,
209 nargs='?', # http://stackoverflow.com/a/4480202/731592
210 default=None)
211 parser.add_argument(
212 '-H', '--host', type=str,
213 help='Which host to bind',
214 default='127.0.0.1')
215 parser.add_argument(
216 '-p', '--port', type=int,
217 help='Port number to use for connection',
218 default=5000)
219 parser.add_argument(
220 '-r', '--reload',
221 action='store_true',
222 help='Reload server on a file change',
223 default=False
224 )
225 parser.add_argument(
226 '-s', '--ssl',
227 action='store_true',
228 help='Enable SSL encrypted connection with auto-generated certificate (use https://... URL)',
229 default=False
230 )
231 parser.add_argument(
232 '-c', '--ssl-cert', type=str,
233 help='Path to SSL certificate',
234 default=None)
235 parser.add_argument(
236 '-k', '--ssl-key', type=str,
237 help='Path to SSL private key',
238 default=None)
239
240 args = parser.parse_args(argv)
241
242 # Wrap the main application
243 main_app = DomainDispatcherApplication(
244 create_backend_app, service=args.service)
245 main_app.debug = True
246
247 ssl_context = None
248 if args.ssl_key and args.ssl_cert:
249 ssl_context = (args.ssl_cert, args.ssl_key)
250 elif args.ssl:
251 ssl_context = 'adhoc'
252
253 run_simple(args.host, args.port, main_app,
254 threaded=True, use_reloader=args.reload,
255 ssl_context=ssl_context)
256
257
258 if __name__ == '__main__':
259 main()
260
[end of moto/server.py]
[start of moto/s3/urls.py]
1 from __future__ import unicode_literals
2
3 from .responses import S3ResponseInstance
4
5 url_bases = [
6 "https?://s3(.*).amazonaws.com",
7 r"https?://(?P<bucket_name>[a-zA-Z0-9\-_.]*)\.?s3(.*).amazonaws.com"
8 ]
9
10 url_paths = {
11 # subdomain bucket
12 '{0}/$': S3ResponseInstance.bucket_response,
13
14 # subdomain key of path-based bucket
15 '{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,
16 # path-based bucket + key
17 '{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,
18 }
19
[end of moto/s3/urls.py]
</code>
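For orientation before the patch: the failure comes down to how the path-based regexes in `moto/s3/urls.py` above treat an S3 key that begins with a forward slash. A standalone sketch of that behaviour (not repository code; the empty `{0}` prefix and the hard-coded `//abc` path are simplifications for illustration):

```python
# Standalone sketch: why a key with a leading slash falls through the
# path-based patterns defined in moto/s3/urls.py above.
import re

key_response = r"{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)"
ambiguous_response = r"{0}/(?P<key_or_bucket_name>[^/]+)/?$"

path = "//abc"  # what PUT Key='/abc' produces for a virtual-hosted bucket

for name, pattern in [("key_response", key_response),
                      ("ambiguous_response", ambiguous_response)]:
    # '{0}' is normally filled with a url base; an empty prefix is enough here.
    print(name, bool(re.match(pattern.format(""), path)))
# Both print False: '[^/]+' cannot match the empty segment between the slashes.
```

Neither pattern matches the double-slash path, which is why a dedicated `'{0}//(?P<key_name>.*)$'` route is the natural fix.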
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/moto/s3/urls.py b/moto/s3/urls.py
--- a/moto/s3/urls.py
+++ b/moto/s3/urls.py
@@ -15,4 +15,6 @@
'{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,
# path-based bucket + key
'{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,
+ # subdomain bucket + key with empty first part of path
+ '{0}//(?P<key_name>.*)$': S3ResponseInstance.key_response,
}
diff --git a/moto/server.py b/moto/server.py
--- a/moto/server.py
+++ b/moto/server.py
@@ -174,10 +174,11 @@
backend_app.url_map.converters['regex'] = RegexConverter
backend = list(BACKENDS[service].values())[0]
for url_path, handler in backend.flask_paths.items():
+ view_func = convert_flask_to_httpretty_response(handler)
if handler.__name__ == 'dispatch':
endpoint = '{0}.dispatch'.format(handler.__self__.__name__)
else:
- endpoint = None
+ endpoint = view_func.__name__
original_endpoint = endpoint
index = 2
@@ -191,7 +192,7 @@
url_path,
endpoint=endpoint,
methods=HTTP_METHODS,
- view_func=convert_flask_to_httpretty_response(handler),
+ view_func=view_func,
strict_slashes=False,
)
| {"golden_diff": "diff --git a/moto/s3/urls.py b/moto/s3/urls.py\n--- a/moto/s3/urls.py\n+++ b/moto/s3/urls.py\n@@ -15,4 +15,6 @@\n '{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,\n # path-based bucket + key\n '{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,\n+ # subdomain bucket + key with empty first part of path\n+ '{0}//(?P<key_name>.*)$': S3ResponseInstance.key_response,\n }\ndiff --git a/moto/server.py b/moto/server.py\n--- a/moto/server.py\n+++ b/moto/server.py\n@@ -174,10 +174,11 @@\n backend_app.url_map.converters['regex'] = RegexConverter\n backend = list(BACKENDS[service].values())[0]\n for url_path, handler in backend.flask_paths.items():\n+ view_func = convert_flask_to_httpretty_response(handler)\n if handler.__name__ == 'dispatch':\n endpoint = '{0}.dispatch'.format(handler.__self__.__name__)\n else:\n- endpoint = None\n+ endpoint = view_func.__name__\n \n original_endpoint = endpoint\n index = 2\n@@ -191,7 +192,7 @@\n url_path,\n endpoint=endpoint,\n methods=HTTP_METHODS,\n- view_func=convert_flask_to_httpretty_response(handler),\n+ view_func=view_func,\n strict_slashes=False,\n )\n", "issue": "mock_s3 gives \"ConnectionError: Connection refused...\" error when Key used in put_object call contains leading forward slash\nThis is very specific situation, however the behavior of moto in this situation doesn't match what happens when you use the actual boto3 connecting to AWS.\r\n\r\nThe situation is if you have a Key for a S3 object that you are putting into a S3 bucket that has a leading forward slash (\"/\").\r\n\r\nSo if your Key is \"/abc\" and you try putting an object into a S3 bucket with the key as follows:\r\n\r\n```\r\ndef do_something():\r\n my_bucket = boto3.resource('s3', region_name='us-west-2').Bucket('my_bucket')\r\n my_bucket.put_object(Key='/abc', Body='ABCD')\r\n```\r\n\r\nIf you run this snippet of code (along with required imports and so on), then an object with the key \"/abc\" will be placed in the bucket with the content specified.\r\n\r\nWhen you view this on the AWS Console, it will seem a bit strange because at the \"root\" level of the S3 bucket, you'll find a \"folder\" with no name. Upon clicking this no-named folder, you'll see an object named \"abc\" with the content specified. So this is the expected behavior when boto3 is being used without moto.\r\n\r\nWhen moto is used in a similar situation, lets say as follows:\r\n\r\n```\r\n@moto_s3\r\ndef do_something_moto()\r\n conn = botot3.resource('s3', region_name='us-west-2')\r\n conn.create_bucket(Bucket='my_bucket')\r\n bucket.put_object(Key='/abc', Body='ABCD')\r\n```\r\n\r\nIf you run this code, you will get an error with a traceback ending with:\r\n\r\n`ConnectionError: Connection refused: PUT https://foobar.s3.us-west-2.amazonaws.com//abc`\r\n\r\nIf you remove the leading forward slash from the Key in the put_object call:\r\n\r\n`bucket.put_object(Key='abc', Body='ABCD')`\r\n\r\nThen this code will run without an error. As pointed out above when using the boto3 without moto, a Key with a leading forward slash is a valid key and should not cause an error. 
This is really an edge case, but I think moto should be made to handle this situation correctly.\r\n\r\nAs for versions of boto3 and moto being used:\r\nboto3: 1.7.19\r\nmoto: 1.3.3\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport json\nimport re\nimport sys\nfrom threading import Lock\n\nimport six\nfrom flask import Flask\nfrom flask.testing import FlaskClient\n\nfrom six.moves.urllib.parse import urlencode\nfrom werkzeug.routing import BaseConverter\nfrom werkzeug.serving import run_simple\n\nfrom moto.backends import BACKENDS\nfrom moto.core.utils import convert_flask_to_httpretty_response\n\n\nHTTP_METHODS = [\"GET\", \"POST\", \"PUT\", \"DELETE\", \"HEAD\", \"PATCH\"]\n\n\nDEFAULT_SERVICE_REGION = ('s3', 'us-east-1')\n\n# Map of unsigned calls to service-region as per AWS API docs\n# https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis\nUNSIGNED_REQUESTS = {\n 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),\n 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),\n}\n\n\nclass DomainDispatcherApplication(object):\n \"\"\"\n Dispatch requests to different applications based on the \"Host:\" header\n value. We'll match the host header value with the url_bases of each backend.\n \"\"\"\n\n def __init__(self, create_app, service=None):\n self.create_app = create_app\n self.lock = Lock()\n self.app_instances = {}\n self.service = service\n\n def get_backend_for_host(self, host):\n if host == 'moto_api':\n return host\n\n if self.service:\n return self.service\n\n if host in BACKENDS:\n return host\n\n for backend_name, backend in BACKENDS.items():\n for url_base in list(backend.values())[0].url_bases:\n if re.match(url_base, 'http://%s' % host):\n return backend_name\n\n def infer_service_region_host(self, environ):\n auth = environ.get('HTTP_AUTHORIZATION')\n if auth:\n # Signed request\n # Parse auth header to find service assuming a SigV4 request\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']\n try:\n credential_scope = auth.split(\",\")[0].split()[1]\n _, _, region, service, _ = credential_scope.split(\"/\")\n except ValueError:\n # Signature format does not match, this is exceptional and we can't\n # infer a service-region. 
A reduced set of services still use\n # the deprecated SigV2, ergo prefer S3 as most likely default.\n # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html\n service, region = DEFAULT_SERVICE_REGION\n else:\n # Unsigned request\n target = environ.get('HTTP_X_AMZ_TARGET')\n if target:\n service, _ = target.split('.', 1)\n service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)\n else:\n # S3 is the last resort when the target is also unknown\n service, region = DEFAULT_SERVICE_REGION\n\n if service == 'dynamodb':\n if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):\n host = 'dynamodbstreams'\n else:\n dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split(\"_\")[1].split(\".\")[0]\n # If Newer API version, use dynamodb2\n if dynamo_api_version > \"20111205\":\n host = \"dynamodb2\"\n else:\n host = \"{service}.{region}.amazonaws.com\".format(\n service=service, region=region)\n\n return host\n\n def get_application(self, environ):\n path_info = environ.get('PATH_INFO', '')\n\n # The URL path might contain non-ASCII text, for instance unicode S3 bucket names\n if six.PY2 and isinstance(path_info, str):\n path_info = six.u(path_info)\n if six.PY3 and isinstance(path_info, six.binary_type):\n path_info = path_info.decode('utf-8')\n\n if path_info.startswith(\"/moto-api\") or path_info == \"/favicon.ico\":\n host = \"moto_api\"\n elif path_info.startswith(\"/latest/meta-data/\"):\n host = \"instance_metadata\"\n else:\n host = environ['HTTP_HOST'].split(':')[0]\n\n with self.lock:\n backend = self.get_backend_for_host(host)\n if not backend:\n # No regular backend found; try parsing other headers\n host = self.infer_service_region_host(environ)\n backend = self.get_backend_for_host(host)\n\n app = self.app_instances.get(backend, None)\n if app is None:\n app = self.create_app(backend)\n self.app_instances[backend] = app\n return app\n\n def __call__(self, environ, start_response):\n backend_app = self.get_application(environ)\n return backend_app(environ, start_response)\n\n\nclass RegexConverter(BaseConverter):\n # http://werkzeug.pocoo.org/docs/routing/#custom-converters\n\n def __init__(self, url_map, *items):\n super(RegexConverter, self).__init__(url_map)\n self.regex = items[0]\n\n\nclass AWSTestHelper(FlaskClient):\n\n def action_data(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns data of response.\n \"\"\"\n opts = {\"Action\": action_name}\n opts.update(kwargs)\n res = self.get(\"/?{0}\".format(urlencode(opts)),\n headers={\"Host\": \"{0}.us-east-1.amazonaws.com\".format(self.application.service)})\n return res.data.decode(\"utf-8\")\n\n def action_json(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns object obtained via\n deserialization of output.\n \"\"\"\n return json.loads(self.action_data(action_name, **kwargs))\n\n\ndef create_backend_app(service):\n from werkzeug.routing import Map\n\n # Create the backend_app\n backend_app = Flask(__name__)\n backend_app.debug = True\n backend_app.service = service\n\n # Reset view functions to reset the app\n backend_app.view_functions = {}\n backend_app.url_map = Map()\n backend_app.url_map.converters['regex'] = RegexConverter\n backend = list(BACKENDS[service].values())[0]\n for url_path, handler in backend.flask_paths.items():\n if handler.__name__ == 'dispatch':\n endpoint = '{0}.dispatch'.format(handler.__self__.__name__)\n else:\n endpoint = None\n\n original_endpoint = endpoint\n index = 2\n while 
endpoint in backend_app.view_functions:\n # HACK: Sometimes we map the same view to multiple url_paths. Flask\n # requries us to have different names.\n endpoint = original_endpoint + str(index)\n index += 1\n\n backend_app.add_url_rule(\n url_path,\n endpoint=endpoint,\n methods=HTTP_METHODS,\n view_func=convert_flask_to_httpretty_response(handler),\n strict_slashes=False,\n )\n\n backend_app.test_client_class = AWSTestHelper\n return backend_app\n\n\ndef main(argv=sys.argv[1:]):\n parser = argparse.ArgumentParser()\n\n # Keep this for backwards compat\n parser.add_argument(\n \"service\",\n type=str,\n nargs='?', # http://stackoverflow.com/a/4480202/731592\n default=None)\n parser.add_argument(\n '-H', '--host', type=str,\n help='Which host to bind',\n default='127.0.0.1')\n parser.add_argument(\n '-p', '--port', type=int,\n help='Port number to use for connection',\n default=5000)\n parser.add_argument(\n '-r', '--reload',\n action='store_true',\n help='Reload server on a file change',\n default=False\n )\n parser.add_argument(\n '-s', '--ssl',\n action='store_true',\n help='Enable SSL encrypted connection with auto-generated certificate (use https://... URL)',\n default=False\n )\n parser.add_argument(\n '-c', '--ssl-cert', type=str,\n help='Path to SSL certificate',\n default=None)\n parser.add_argument(\n '-k', '--ssl-key', type=str,\n help='Path to SSL private key',\n default=None)\n\n args = parser.parse_args(argv)\n\n # Wrap the main application\n main_app = DomainDispatcherApplication(\n create_backend_app, service=args.service)\n main_app.debug = True\n\n ssl_context = None\n if args.ssl_key and args.ssl_cert:\n ssl_context = (args.ssl_cert, args.ssl_key)\n elif args.ssl:\n ssl_context = 'adhoc'\n\n run_simple(args.host, args.port, main_app,\n threaded=True, use_reloader=args.reload,\n ssl_context=ssl_context)\n\n\nif __name__ == '__main__':\n main()\n", "path": "moto/server.py"}, {"content": "from __future__ import unicode_literals\n\nfrom .responses import S3ResponseInstance\n\nurl_bases = [\n \"https?://s3(.*).amazonaws.com\",\n r\"https?://(?P<bucket_name>[a-zA-Z0-9\\-_.]*)\\.?s3(.*).amazonaws.com\"\n]\n\nurl_paths = {\n # subdomain bucket\n '{0}/$': S3ResponseInstance.bucket_response,\n\n # subdomain key of path-based bucket\n '{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,\n # path-based bucket + key\n '{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,\n}\n", "path": "moto/s3/urls.py"}]} | 3,906 | 367 |
gh_patches_debug_4536 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-1873 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Blockinfile module: Documentation mistake
##### Issue Type:
- Documentation Report
##### Plugin Name:
Blockinfile module
##### Ansible Version:
```
$ ansible --version
ansible 2.0.1.0
config file = /data/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
n.a.
##### Environment:
n.a.
##### Summary:
Wrong spacing in official documentation
##### Steps To Reproduce:
```
https://docs.ansible.com/ansible/blockinfile_module.html
In the 'Examples' chapter, see the last example (the one using with_items)
```
##### Expected Results:
The `with_items:` expression should be at the same indentation level as `blockinfile:`.
##### Actual Results:
```
The example does not work because with_items: is not at the same indentation level as blockinfile:
```
</issue>
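Since the report is purely about YAML indentation, a quick standalone way to see the difference it describes (this assumes PyYAML is available; the snippet is illustrative only and not part of the module or of any fix):

```python
# Illustrative only: how the two indentations of with_items parse differently.
import yaml

misindented = """
- name: demo
  blockinfile:
    dest: /etc/hosts
    block: "{{ item.name }} {{ item.ip }}"
    with_items:
      - { name: host1, ip: 10.10.1.10 }
"""

corrected = """
- name: demo
  blockinfile:
    dest: /etc/hosts
    block: "{{ item.name }} {{ item.ip }}"
  with_items:
    - { name: host1, ip: 10.10.1.10 }
"""

# Mis-indented: with_items ends up inside the blockinfile module arguments.
print(sorted(yaml.safe_load(misindented)[0]["blockinfile"]))
# Corrected: with_items is a task-level key next to blockinfile.
print(sorted(yaml.safe_load(corrected)[0]))
```

In the first form `with_items` is swallowed into the module arguments; in the second it sits beside `blockinfile:` as a task-level loop keyword, which is the layout the report asks the documentation to show.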
<code>
[start of files/blockinfile.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2014, 2015 YAEGASHI Takeshi <[email protected]>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 import re
22 import os
23 import tempfile
24
25 DOCUMENTATION = """
26 ---
27 module: blockinfile
28 author:
29 - 'YAEGASHI Takeshi (@yaegashi)'
30 extends_documentation_fragment:
31 - files
32 - validate
33 short_description: Insert/update/remove a text block
34 surrounded by marker lines.
35 version_added: '2.0'
36 description:
37 - This module will insert/update/remove a block of multi-line text
38 surrounded by customizable marker lines.
39 notes:
40 - This module supports check mode.
41 - When using 'with_*' loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.
42 options:
43 dest:
44 aliases: [ name, destfile ]
45 required: true
46 description:
47 - The file to modify.
48 state:
49 required: false
50 choices: [ present, absent ]
51 default: present
52 description:
53 - Whether the block should be there or not.
54 marker:
55 required: false
56 default: '# {mark} ANSIBLE MANAGED BLOCK'
57 description:
58 - The marker line template.
59 "{mark}" will be replaced with "BEGIN" or "END".
60 block:
61 aliases: [ content ]
62 required: false
63 default: ''
64 description:
65 - The text to insert inside the marker lines.
66 If it's missing or an empty string,
67 the block will be removed as if C(state) were specified to C(absent).
68 insertafter:
69 required: false
70 default: EOF
71 description:
72 - If specified, the block will be inserted after the last match of
73 specified regular expression. A special value is available; C(EOF) for
74 inserting the block at the end of the file. If specified regular
75         expression has no matches, C(EOF) will be used instead.
76 choices: [ 'EOF', '*regex*' ]
77 insertbefore:
78 required: false
79 default: None
80 description:
81 - If specified, the block will be inserted before the last match of
82 specified regular expression. A special value is available; C(BOF) for
83 inserting the block at the beginning of the file. If specified regular
84         expression has no matches, the block will be inserted at the end of the
85 file.
86 choices: [ 'BOF', '*regex*' ]
87 create:
88 required: false
89 default: 'no'
90 choices: [ 'yes', 'no' ]
91 description:
92 - Create a new file if it doesn't exist.
93 backup:
94 required: false
95 default: 'no'
96 choices: [ 'yes', 'no' ]
97 description:
98 - Create a backup file including the timestamp information so you can
99 get the original file back if you somehow clobbered it incorrectly.
100 follow:
101 required: false
102 default: "no"
103 choices: [ "yes", "no" ]
104 description:
105 - 'This flag indicates that filesystem links, if they exist, should be followed.'
106 version_added: "2.1"
107 """
108
109 EXAMPLES = r"""
110 - name: insert/update "Match User" configuration block in /etc/ssh/sshd_config
111 blockinfile:
112 dest: /etc/ssh/sshd_config
113 block: |
114 Match User ansible-agent
115 PasswordAuthentication no
116
117 - name: insert/update eth0 configuration stanza in /etc/network/interfaces
118 (it might be better to copy files into /etc/network/interfaces.d/)
119 blockinfile:
120 dest: /etc/network/interfaces
121 block: |
122 iface eth0 inet static
123 address 192.168.0.1
124 netmask 255.255.255.0
125
126 - name: insert/update HTML surrounded by custom markers after <body> line
127 blockinfile:
128 dest: /var/www/html/index.html
129 marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
130 insertafter: "<body>"
131 content: |
132 <h1>Welcome to {{ansible_hostname}}</h1>
133 <p>Last updated on {{ansible_date_time.iso8601}}</p>
134
135 - name: remove HTML as well as surrounding markers
136 blockinfile:
137 dest: /var/www/html/index.html
138 marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
139 content: ""
140
141 - name: insert/update "Match User" configuration block in /etc/ssh/sshd_config
142 blockinfile:
143 dest: /etc/hosts
144 block: |
145 {{item.name}} {{item.ip}}
146 marker: "# {mark} ANSIBLE MANAGED BLOCK {{item.name}}"
147 with_items:
148 - { name: host1, ip: 10.10.1.10 }
149 - { name: host2, ip: 10.10.1.11 }
150 - { name: host3, ip: 10.10.1.12 }
151 """
152
153
154 def write_changes(module, contents, dest):
155
156 tmpfd, tmpfile = tempfile.mkstemp()
157 f = os.fdopen(tmpfd, 'wb')
158 f.write(contents)
159 f.close()
160
161 validate = module.params.get('validate', None)
162 valid = not validate
163 if validate:
164 if "%s" not in validate:
165 module.fail_json(msg="validate must contain %%s: %s" % (validate))
166 (rc, out, err) = module.run_command(validate % tmpfile)
167 valid = rc == 0
168 if rc != 0:
169 module.fail_json(msg='failed to validate: '
170 'rc:%s error:%s' % (rc, err))
171 if valid:
172 module.atomic_move(tmpfile, dest)
173
174
175 def check_file_attrs(module, changed, message):
176
177 file_args = module.load_file_common_arguments(module.params)
178 if module.set_file_attributes_if_different(file_args, False):
179
180 if changed:
181 message += " and "
182 changed = True
183 message += "ownership, perms or SE linux context changed"
184
185 return message, changed
186
187
188 def main():
189 module = AnsibleModule(
190 argument_spec=dict(
191 dest=dict(required=True, aliases=['name', 'destfile']),
192 state=dict(default='present', choices=['absent', 'present']),
193 marker=dict(default='# {mark} ANSIBLE MANAGED BLOCK', type='str'),
194 block=dict(default='', type='str', aliases=['content']),
195 insertafter=dict(default=None),
196 insertbefore=dict(default=None),
197 create=dict(default=False, type='bool'),
198 backup=dict(default=False, type='bool'),
199 validate=dict(default=None, type='str'),
200 ),
201 mutually_exclusive=[['insertbefore', 'insertafter']],
202 add_file_common_args=True,
203 supports_check_mode=True
204 )
205
206 params = module.params
207 dest = os.path.expanduser(params['dest'])
208 if module.boolean(params.get('follow', None)):
209 dest = os.path.realpath(dest)
210
211 if os.path.isdir(dest):
212 module.fail_json(rc=256,
213 msg='Destination %s is a directory !' % dest)
214
215 if not os.path.exists(dest):
216 if not module.boolean(params['create']):
217 module.fail_json(rc=257,
218 msg='Destination %s does not exist !' % dest)
219 original = None
220 lines = []
221 else:
222 f = open(dest, 'rb')
223 original = f.read()
224 f.close()
225 lines = original.splitlines()
226
227 insertbefore = params['insertbefore']
228 insertafter = params['insertafter']
229 block = params['block']
230 marker = params['marker']
231 present = params['state'] == 'present'
232
233 if insertbefore is None and insertafter is None:
234 insertafter = 'EOF'
235
236 if insertafter not in (None, 'EOF'):
237 insertre = re.compile(insertafter)
238 elif insertbefore not in (None, 'BOF'):
239 insertre = re.compile(insertbefore)
240 else:
241 insertre = None
242
243 marker0 = re.sub(r'{mark}', 'BEGIN', marker)
244 marker1 = re.sub(r'{mark}', 'END', marker)
245 if present and block:
246         # Escape sequences like '\n' need to be handled in Ansible 1.x
247 if ANSIBLE_VERSION.startswith('1.'):
248 block = re.sub('', block, '')
249 blocklines = [marker0] + block.splitlines() + [marker1]
250 else:
251 blocklines = []
252
253 n0 = n1 = None
254 for i, line in enumerate(lines):
255 if line.startswith(marker0):
256 n0 = i
257 if line.startswith(marker1):
258 n1 = i
259
260 if None in (n0, n1):
261 n0 = None
262 if insertre is not None:
263 for i, line in enumerate(lines):
264 if insertre.search(line):
265 n0 = i
266 if n0 is None:
267 n0 = len(lines)
268 elif insertafter is not None:
269 n0 += 1
270 elif insertbefore is not None:
271 n0 = 0 # insertbefore=BOF
272 else:
273 n0 = len(lines) # insertafter=EOF
274 elif n0 < n1:
275 lines[n0:n1+1] = []
276 else:
277 lines[n1:n0+1] = []
278 n0 = n1
279
280 lines[n0:n0] = blocklines
281
282 if lines:
283 result = '\n'.join(lines)+'\n'
284 else:
285 result = ''
286 if original == result:
287 msg = ''
288 changed = False
289 elif original is None:
290 msg = 'File created'
291 changed = True
292 elif not blocklines:
293 msg = 'Block removed'
294 changed = True
295 else:
296 msg = 'Block inserted'
297 changed = True
298
299 if changed and not module.check_mode:
300 if module.boolean(params['backup']) and os.path.exists(dest):
301 module.backup_local(dest)
302 write_changes(module, result, dest)
303
304 msg, changed = check_file_attrs(module, changed, msg)
305 module.exit_json(changed=changed, msg=msg)
306
307 # import module snippets
308 from ansible.module_utils.basic import *
309 from ansible.module_utils.splitter import *
310 if __name__ == '__main__':
311 main()
312
[end of files/blockinfile.py]
</code>
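As background (the documentation fix itself is purely cosmetic), the marker handling that `main()` implements can be condensed to a few lines. A simplified standalone sketch, with no `insertafter`/`insertbefore` handling and no file I/O, and with names invented for illustration:

```python
# Condensed illustration of the marker-block idea from main() above.
def upsert_block(lines, block, marker="# {mark} ANSIBLE MANAGED BLOCK"):
    begin = marker.replace("{mark}", "BEGIN")
    end = marker.replace("{mark}", "END")
    new = [begin] + block.splitlines() + [end]
    starts = [i for i, line in enumerate(lines) if line.startswith(begin)]
    ends = [i for i, line in enumerate(lines) if line.startswith(end)]
    if starts and ends:  # update the existing block in place
        return lines[:starts[0]] + new + lines[ends[-1] + 1:]
    return lines + new   # otherwise append at end of file

print(upsert_block(["alpha", "omega"], "line 1\nline 2"))
print(upsert_block(["alpha",
                    "# BEGIN ANSIBLE MANAGED BLOCK", "old",
                    "# END ANSIBLE MANAGED BLOCK", "omega"], "new"))
```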
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/files/blockinfile.py b/files/blockinfile.py
--- a/files/blockinfile.py
+++ b/files/blockinfile.py
@@ -144,10 +144,10 @@
block: |
{{item.name}} {{item.ip}}
marker: "# {mark} ANSIBLE MANAGED BLOCK {{item.name}}"
- with_items:
- - { name: host1, ip: 10.10.1.10 }
- - { name: host2, ip: 10.10.1.11 }
- - { name: host3, ip: 10.10.1.12 }
+ with_items:
+ - { name: host1, ip: 10.10.1.10 }
+ - { name: host2, ip: 10.10.1.11 }
+ - { name: host3, ip: 10.10.1.12 }
"""
| {"golden_diff": "diff --git a/files/blockinfile.py b/files/blockinfile.py\n--- a/files/blockinfile.py\n+++ b/files/blockinfile.py\n@@ -144,10 +144,10 @@\n block: |\n {{item.name}} {{item.ip}}\n marker: \"# {mark} ANSIBLE MANAGED BLOCK {{item.name}}\"\n- with_items:\n- - { name: host1, ip: 10.10.1.10 }\n- - { name: host2, ip: 10.10.1.11 }\n- - { name: host3, ip: 10.10.1.12 }\n+ with_items:\n+ - { name: host1, ip: 10.10.1.10 }\n+ - { name: host2, ip: 10.10.1.11 }\n+ - { name: host3, ip: 10.10.1.12 }\n \"\"\"\n", "issue": "Blockinfile module : Documentation mistake\n##### Issue Type:\n- Documentation Report\n##### Plugin Name:\n\nBlockinfile module\n##### Ansible Version:\n\n```\n$ ansible --version\nansible 2.0.1.0\n config file = /data/ansible/ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### Ansible Configuration:\n\nn.a.\n##### Environment:\n\nn.a\n##### Summary:\n\nWrong spacing in official documentation\n##### Steps To Reproduce:\n\n```\nhttps://docs.ansible.com/ansible/blockinfile_module.html\nChapter Example, last example using < with_items >\n```\n##### Expected Results:\n\nExpression with_items: should be level with blockinfile:\n##### Actual Results:\n\n```\nExample does not work as expression < with_items: > is not level with <blockinfile:> \n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2014, 2015 YAEGASHI Takeshi <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nimport re\nimport os\nimport tempfile\n\nDOCUMENTATION = \"\"\"\n---\nmodule: blockinfile\nauthor:\n - 'YAEGASHI Takeshi (@yaegashi)'\nextends_documentation_fragment:\n - files\n - validate\nshort_description: Insert/update/remove a text block\n surrounded by marker lines.\nversion_added: '2.0'\ndescription:\n - This module will insert/update/remove a block of multi-line text\n surrounded by customizable marker lines.\nnotes:\n - This module supports check mode.\n - When using 'with_*' loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.\noptions:\n dest:\n aliases: [ name, destfile ]\n required: true\n description:\n - The file to modify.\n state:\n required: false\n choices: [ present, absent ]\n default: present\n description:\n - Whether the block should be there or not.\n marker:\n required: false\n default: '# {mark} ANSIBLE MANAGED BLOCK'\n description:\n - The marker line template.\n \"{mark}\" will be replaced with \"BEGIN\" or \"END\".\n block:\n aliases: [ content ]\n required: false\n default: ''\n description:\n - The text to insert inside the marker lines.\n If it's missing or an empty string,\n the block will be removed as if C(state) were specified to C(absent).\n insertafter:\n required: false\n default: EOF\n description:\n - If specified, the block will be inserted after the last match of\n specified regular expression. 
A special value is available; C(EOF) for\n inserting the block at the end of the file. If specified regular\n expresion has no matches, C(EOF) will be used instead.\n choices: [ 'EOF', '*regex*' ]\n insertbefore:\n required: false\n default: None\n description:\n - If specified, the block will be inserted before the last match of\n specified regular expression. A special value is available; C(BOF) for\n inserting the block at the beginning of the file. If specified regular\n expresion has no matches, the block will be inserted at the end of the\n file.\n choices: [ 'BOF', '*regex*' ]\n create:\n required: false\n default: 'no'\n choices: [ 'yes', 'no' ]\n description:\n - Create a new file if it doesn't exist.\n backup:\n required: false\n default: 'no'\n choices: [ 'yes', 'no' ]\n description:\n - Create a backup file including the timestamp information so you can\n get the original file back if you somehow clobbered it incorrectly.\n follow:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n description:\n - 'This flag indicates that filesystem links, if they exist, should be followed.'\n version_added: \"2.1\"\n\"\"\"\n\nEXAMPLES = r\"\"\"\n- name: insert/update \"Match User\" configuation block in /etc/ssh/sshd_config\n blockinfile:\n dest: /etc/ssh/sshd_config\n block: |\n Match User ansible-agent\n PasswordAuthentication no\n\n- name: insert/update eth0 configuration stanza in /etc/network/interfaces\n (it might be better to copy files into /etc/network/interfaces.d/)\n blockinfile:\n dest: /etc/network/interfaces\n block: |\n iface eth0 inet static\n address 192.168.0.1\n netmask 255.255.255.0\n\n- name: insert/update HTML surrounded by custom markers after <body> line\n blockinfile:\n dest: /var/www/html/index.html\n marker: \"<!-- {mark} ANSIBLE MANAGED BLOCK -->\"\n insertafter: \"<body>\"\n content: |\n <h1>Welcome to {{ansible_hostname}}</h1>\n <p>Last updated on {{ansible_date_time.iso8601}}</p>\n\n- name: remove HTML as well as surrounding markers\n blockinfile:\n dest: /var/www/html/index.html\n marker: \"<!-- {mark} ANSIBLE MANAGED BLOCK -->\"\n content: \"\"\n\n- name: insert/update \"Match User\" configuation block in /etc/ssh/sshd_config\n blockinfile:\n dest: /etc/hosts\n block: |\n {{item.name}} {{item.ip}}\n marker: \"# {mark} ANSIBLE MANAGED BLOCK {{item.name}}\"\n with_items:\n - { name: host1, ip: 10.10.1.10 }\n - { name: host2, ip: 10.10.1.11 }\n - { name: host3, ip: 10.10.1.12 }\n\"\"\"\n\n\ndef write_changes(module, contents, dest):\n\n tmpfd, tmpfile = tempfile.mkstemp()\n f = os.fdopen(tmpfd, 'wb')\n f.write(contents)\n f.close()\n\n validate = module.params.get('validate', None)\n valid = not validate\n if validate:\n if \"%s\" not in validate:\n module.fail_json(msg=\"validate must contain %%s: %s\" % (validate))\n (rc, out, err) = module.run_command(validate % tmpfile)\n valid = rc == 0\n if rc != 0:\n module.fail_json(msg='failed to validate: '\n 'rc:%s error:%s' % (rc, err))\n if valid:\n module.atomic_move(tmpfile, dest)\n\n\ndef check_file_attrs(module, changed, message):\n\n file_args = module.load_file_common_arguments(module.params)\n if module.set_file_attributes_if_different(file_args, False):\n\n if changed:\n message += \" and \"\n changed = True\n message += \"ownership, perms or SE linux context changed\"\n\n return message, changed\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n dest=dict(required=True, aliases=['name', 'destfile']),\n state=dict(default='present', choices=['absent', 'present']),\n 
marker=dict(default='# {mark} ANSIBLE MANAGED BLOCK', type='str'),\n block=dict(default='', type='str', aliases=['content']),\n insertafter=dict(default=None),\n insertbefore=dict(default=None),\n create=dict(default=False, type='bool'),\n backup=dict(default=False, type='bool'),\n validate=dict(default=None, type='str'),\n ),\n mutually_exclusive=[['insertbefore', 'insertafter']],\n add_file_common_args=True,\n supports_check_mode=True\n )\n\n params = module.params\n dest = os.path.expanduser(params['dest'])\n if module.boolean(params.get('follow', None)):\n dest = os.path.realpath(dest)\n\n if os.path.isdir(dest):\n module.fail_json(rc=256,\n msg='Destination %s is a directory !' % dest)\n\n if not os.path.exists(dest):\n if not module.boolean(params['create']):\n module.fail_json(rc=257,\n msg='Destination %s does not exist !' % dest)\n original = None\n lines = []\n else:\n f = open(dest, 'rb')\n original = f.read()\n f.close()\n lines = original.splitlines()\n\n insertbefore = params['insertbefore']\n insertafter = params['insertafter']\n block = params['block']\n marker = params['marker']\n present = params['state'] == 'present'\n\n if insertbefore is None and insertafter is None:\n insertafter = 'EOF'\n\n if insertafter not in (None, 'EOF'):\n insertre = re.compile(insertafter)\n elif insertbefore not in (None, 'BOF'):\n insertre = re.compile(insertbefore)\n else:\n insertre = None\n\n marker0 = re.sub(r'{mark}', 'BEGIN', marker)\n marker1 = re.sub(r'{mark}', 'END', marker)\n if present and block:\n # Escape seqeuences like '\\n' need to be handled in Ansible 1.x\n if ANSIBLE_VERSION.startswith('1.'):\n block = re.sub('', block, '')\n blocklines = [marker0] + block.splitlines() + [marker1]\n else:\n blocklines = []\n\n n0 = n1 = None\n for i, line in enumerate(lines):\n if line.startswith(marker0):\n n0 = i\n if line.startswith(marker1):\n n1 = i\n\n if None in (n0, n1):\n n0 = None\n if insertre is not None:\n for i, line in enumerate(lines):\n if insertre.search(line):\n n0 = i\n if n0 is None:\n n0 = len(lines)\n elif insertafter is not None:\n n0 += 1\n elif insertbefore is not None:\n n0 = 0 # insertbefore=BOF\n else:\n n0 = len(lines) # insertafter=EOF\n elif n0 < n1:\n lines[n0:n1+1] = []\n else:\n lines[n1:n0+1] = []\n n0 = n1\n\n lines[n0:n0] = blocklines\n\n if lines:\n result = '\\n'.join(lines)+'\\n'\n else:\n result = ''\n if original == result:\n msg = ''\n changed = False\n elif original is None:\n msg = 'File created'\n changed = True\n elif not blocklines:\n msg = 'Block removed'\n changed = True\n else:\n msg = 'Block inserted'\n changed = True\n\n if changed and not module.check_mode:\n if module.boolean(params['backup']) and os.path.exists(dest):\n module.backup_local(dest)\n write_changes(module, result, dest)\n\n msg, changed = check_file_attrs(module, changed, msg)\n module.exit_json(changed=changed, msg=msg)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.splitter import *\nif __name__ == '__main__':\n main()\n", "path": "files/blockinfile.py"}]} | 3,958 | 223 |
gh_patches_debug_32969 | rasdani/github-patches | git_diff | tough-dev-school__education-backend-20 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Triggers for bundles do not work
https://sentry.io/organizations/fedor-borshev/issues/1403243325/?project=1807512&query=is%3Aunresolved
</issue>
<code>
[start of src/orders/models.py]
1 from typing import Optional
2
3 from django.utils import timezone
4 from django.utils.translation import ugettext_lazy as _
5
6 from app.models import DefaultQuerySet, TimestampedModel, models
7 from orders.signals import order_got_shipped
8
9
10 class ItemField(models.ForeignKey):
11 """This is a simple replacement for the ContentType framework -- fields of this type
12 are fields linked to items
13 """
14 def __init__(self, *args, **kwargs):
15 self._is_item = True
16 super().__init__(*args, **kwargs)
17
18
19 class UnknownItemException(Exception):
20 pass
21
22
23 class OrderQuerySet(DefaultQuerySet):
24 def paid(self, invert=False):
25 return self.filter(paid__isnull=invert)
26
27
28 class Order(TimestampedModel):
29 objects = OrderQuerySet.as_manager() # type: OrderQuerySet
30
31 user = models.ForeignKey('users.User', on_delete=models.PROTECT)
32 price = models.DecimalField(max_digits=9, decimal_places=2)
33
34 paid = models.DateTimeField(
35 _('Date when order got paid'),
36 null=True, blank=True,
37 help_text=_('If set during creation, order automaticaly gets shipped'),
38 )
39 shipped = models.DateTimeField(_('Date when order was shipped'), null=True, blank=True)
40
41 course = ItemField('courses.Course', null=True, blank=True, on_delete=models.PROTECT)
42 record = ItemField('courses.Record', null=True, blank=True, on_delete=models.PROTECT)
43 bundle = ItemField('courses.Bundle', null=True, blank=True, on_delete=models.PROTECT)
44
45 class Meta:
46 ordering = ['-id']
47 verbose_name = _('Order')
48 verbose_name_plural = _('Orders')
49
50 def __str__(self):
51 return f'Order #{self.pk}'
52
53 @property
54 def item(self):
55 """Find the attached item. Simple replacement for ContentType framework
56 """
57 for field in self.__class__._meta.get_fields():
58 if getattr(field, '_is_item', False):
59 if getattr(self, f'{field.name}_id', None) is not None:
60 return getattr(self, field.name)
61
62 @classmethod
63 def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:
64 """
65 Given an item model, returns the ForeignKey to it"""
66 for field in cls._meta.get_fields():
67 if getattr(field, '_is_item', False):
68 if field.related_model == item.__class__:
69 return field.name
70
71 def set_item(self, item):
72 foreign_key = self.__class__.get_item_foreignkey(item)
73 if foreign_key is not None:
74 setattr(self, foreign_key, item)
75 return
76
77 raise UnknownItemException('There is not foreignKey for {}'.format(item.__class__))
78
79 def set_paid(self):
80 is_already_paid = self.paid is not None
81
82 self.paid = timezone.now()
83
84 self.save()
85
86 if not is_already_paid and self.item is not None:
87 self.ship()
88
89 def ship(self):
90 """Ship the order. Better call it asynchronously"""
91 self.item.ship(to=self.user)
92
93 self.shipped = timezone.now()
94
95 self.save()
96
97 order_got_shipped.send(
98 sender=self.__class__,
99 order=self,
100 )
101
[end of src/orders/models.py]
[start of src/courses/models.py]
1 from urllib.parse import urljoin
2
3 from django.conf import settings
4 from django.utils.translation import ugettext_lazy as _
5
6 from app.models import TimestampedModel, models
7 from app.s3 import AppS3
8 from shipping.mixins import Shippable
9
10
11 class Course(Shippable, TimestampedModel):
12 name = models.CharField(max_length=255)
13 name_genitive = models.CharField(_('Genitive name'), max_length=255, help_text='«мастер-класса о TDD». К примеру для записей.')
14 name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='«посещение мастер-класса по TDD» или «Доступ к записи курсов кройки и шитья»')
15 full_name = models.CharField(
16 _('Full name for letters'), max_length=255,
17 help_text='Билет на мастер-класс о TDD или «запись курсов кройки и шитья»',
18 )
19 slug = models.SlugField()
20 clickmeeting_room_url = models.URLField(_('Clickmeeting room URL'), null=True, blank=True, help_text=_('If set, every user who purcashes this course gets invited'))
21 template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))
22
23 class Meta:
24 ordering = ['-id']
25 verbose_name = _('Course')
26 verbose_name_plural = _('Courses')
27
28 def get_absolute_url(self):
29 return urljoin(settings.FRONTEND_URL, '/'.join(['courses', self.slug, '']))
30
31
32 class Record(Shippable, TimestampedModel):
33 course = models.ForeignKey(Course, on_delete=models.CASCADE)
34 name = models.CharField(max_length=255)
35 name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='«Доступ к записи курсов кройки и шитья»')
36 full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='«Запись мастер-класса о TDD»')
37 slug = models.SlugField()
38 full_name = models.CharField(
39 _('Full name for letters'), max_length=255,
40 help_text='«Запись мастер-класса о TDD»',
41 )
42
43 s3_object_id = models.CharField(max_length=512)
44 template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))
45
46 class Meta:
47 ordering = ['-id']
48 verbose_name = _('Record')
49 verbose_name_plural = _('Records')
50
51 @property
52 def name_genitive(self):
53 return self.course.name_genitive
54
55 def get_url(self, expires: int = 3 * 24 * 60 * 60):
56 return AppS3().get_presigned_url(self.s3_object_id, expires=expires)
57
58 def __str__(self):
59 return f'Запись {self.name_genitive}'
60
61 def get_absolute_url(self):
62 return self.course.get_absolute_url()
63
64
65 class Bundle(Shippable, TimestampedModel):
66 records = models.ManyToManyField('courses.Record')
67 courses = models.ManyToManyField('courses.Course')
68
69 name = models.CharField(max_length=255)
70 name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='«Доступ к записи курсов кройки и шитья»')
71 full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='«Запись мастер-класса о TDD»')
72 slug = models.SlugField()
73 full_name = models.CharField(
74 _('Full name for letters'), max_length=255,
75 help_text='«Запись мастер-класса о TDD»',
76 )
77
78 class Meta:
79 ordering = ['-id']
80 verbose_name = _('Bundle')
81 verbose_name_plural = _('Bundles')
82
83 def ship(self, to):
84 for record in self.records.iterator():
85 record.ship(to=to)
86
87 for course in self.courses.iterator():
88 course.ship(to=to)
89
[end of src/courses/models.py]
</code>
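For orientation, the `ItemField` mechanism above is essentially a small dispatch table from item class to foreign-key name. A plain-Python sketch (stand-in classes rather than the Django models; illustrative only):

```python
# Plain-Python sketch of the ItemField lookup idea; these are stand-in
# classes, not the Django models defined above.
class Course: pass
class Record: pass
class Bundle: pass

ITEM_FIELDS = {"course": Course, "record": Record, "bundle": Bundle}

def get_item_foreignkey(item):
    for field_name, model in ITEM_FIELDS.items():
        if isinstance(item, model):
            return field_name
    return None

print(get_item_foreignkey(Bundle()))       # 'bundle'
print(get_item_foreignkey("not an item"))  # None -> set_item raises UnknownItemException
```

The patch below tightens exactly this area: `set_item` clears the other item fields before assigning the new one, and `Bundle` gains its own `get_absolute_url`.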
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/courses/models.py b/src/courses/models.py
--- a/src/courses/models.py
+++ b/src/courses/models.py
@@ -80,6 +80,9 @@
verbose_name = _('Bundle')
verbose_name_plural = _('Bundles')
+ def get_absolute_url(self):
+ return urljoin(settings.FRONTEND_URL, '/'.join(['bundles', self.slug, '']))
+
def ship(self, to):
for record in self.records.iterator():
record.ship(to=to)
diff --git a/src/orders/models.py b/src/orders/models.py
--- a/src/orders/models.py
+++ b/src/orders/models.py
@@ -1,4 +1,4 @@
-from typing import Optional
+from typing import Iterable, Optional
from django.utils import timezone
from django.utils.translation import ugettext_lazy as _
@@ -59,18 +59,28 @@
if getattr(self, f'{field.name}_id', None) is not None:
return getattr(self, field.name)
+ @classmethod
+ def _iterate_items(cls) -> Iterable[models.fields.Field]:
+ for field in cls._meta.get_fields():
+ if getattr(field, '_is_item', False):
+ yield field
+
@classmethod
def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:
"""
Given an item model, returns the ForeignKey to it"""
- for field in cls._meta.get_fields():
- if getattr(field, '_is_item', False):
- if field.related_model == item.__class__:
- return field.name
+ for field in cls._iterate_items():
+ if field.related_model == item.__class__:
+ return field.name
+
+ def reset_items(self):
+ for field in self._iterate_items():
+ setattr(self, field.name, None)
def set_item(self, item):
foreign_key = self.__class__.get_item_foreignkey(item)
if foreign_key is not None:
+ self.reset_items()
setattr(self, foreign_key, item)
return
| {"golden_diff": "diff --git a/src/courses/models.py b/src/courses/models.py\n--- a/src/courses/models.py\n+++ b/src/courses/models.py\n@@ -80,6 +80,9 @@\n verbose_name = _('Bundle')\n verbose_name_plural = _('Bundles')\n \n+ def get_absolute_url(self):\n+ return urljoin(settings.FRONTEND_URL, '/'.join(['bundles', self.slug, '']))\n+\n def ship(self, to):\n for record in self.records.iterator():\n record.ship(to=to)\ndiff --git a/src/orders/models.py b/src/orders/models.py\n--- a/src/orders/models.py\n+++ b/src/orders/models.py\n@@ -1,4 +1,4 @@\n-from typing import Optional\n+from typing import Iterable, Optional\n \n from django.utils import timezone\n from django.utils.translation import ugettext_lazy as _\n@@ -59,18 +59,28 @@\n if getattr(self, f'{field.name}_id', None) is not None:\n return getattr(self, field.name)\n \n+ @classmethod\n+ def _iterate_items(cls) -> Iterable[models.fields.Field]:\n+ for field in cls._meta.get_fields():\n+ if getattr(field, '_is_item', False):\n+ yield field\n+\n @classmethod\n def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:\n \"\"\"\n Given an item model, returns the ForeignKey to it\"\"\"\n- for field in cls._meta.get_fields():\n- if getattr(field, '_is_item', False):\n- if field.related_model == item.__class__:\n- return field.name\n+ for field in cls._iterate_items():\n+ if field.related_model == item.__class__:\n+ return field.name\n+\n+ def reset_items(self):\n+ for field in self._iterate_items():\n+ setattr(self, field.name, None)\n \n def set_item(self, item):\n foreign_key = self.__class__.get_item_foreignkey(item)\n if foreign_key is not None:\n+ self.reset_items()\n setattr(self, foreign_key, item)\n return\n", "issue": "\u041d\u0435 \u0440\u0430\u0431\u043e\u0442\u0430\u044e\u0442 \u0442\u0440\u0438\u0433\u0433\u0435\u0440\u044b \u0434\u043b\u044f \u0431\u0430\u043d\u0434\u043b\u043e\u0432\nhttps://sentry.io/organizations/fedor-borshev/issues/1403243325/?project=1807512&query=is%3Aunresolved\n", "before_files": [{"content": "from typing import Optional\n\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom app.models import DefaultQuerySet, TimestampedModel, models\nfrom orders.signals import order_got_shipped\n\n\nclass ItemField(models.ForeignKey):\n \"\"\"This is a simple replacement for the ContentType framework -- fields of this type\n are fields linked to items\n \"\"\"\n def __init__(self, *args, **kwargs):\n self._is_item = True\n super().__init__(*args, **kwargs)\n\n\nclass UnknownItemException(Exception):\n pass\n\n\nclass OrderQuerySet(DefaultQuerySet):\n def paid(self, invert=False):\n return self.filter(paid__isnull=invert)\n\n\nclass Order(TimestampedModel):\n objects = OrderQuerySet.as_manager() # type: OrderQuerySet\n\n user = models.ForeignKey('users.User', on_delete=models.PROTECT)\n price = models.DecimalField(max_digits=9, decimal_places=2)\n\n paid = models.DateTimeField(\n _('Date when order got paid'),\n null=True, blank=True,\n help_text=_('If set during creation, order automaticaly gets shipped'),\n )\n shipped = models.DateTimeField(_('Date when order was shipped'), null=True, blank=True)\n\n course = ItemField('courses.Course', null=True, blank=True, on_delete=models.PROTECT)\n record = ItemField('courses.Record', null=True, blank=True, on_delete=models.PROTECT)\n bundle = ItemField('courses.Bundle', null=True, blank=True, on_delete=models.PROTECT)\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Order')\n verbose_name_plural = 
_('Orders')\n\n def __str__(self):\n return f'Order #{self.pk}'\n\n @property\n def item(self):\n \"\"\"Find the attached item. Simple replacement for ContentType framework\n \"\"\"\n for field in self.__class__._meta.get_fields():\n if getattr(field, '_is_item', False):\n if getattr(self, f'{field.name}_id', None) is not None:\n return getattr(self, field.name)\n\n @classmethod\n def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:\n \"\"\"\n Given an item model, returns the ForeignKey to it\"\"\"\n for field in cls._meta.get_fields():\n if getattr(field, '_is_item', False):\n if field.related_model == item.__class__:\n return field.name\n\n def set_item(self, item):\n foreign_key = self.__class__.get_item_foreignkey(item)\n if foreign_key is not None:\n setattr(self, foreign_key, item)\n return\n\n raise UnknownItemException('There is not foreignKey for {}'.format(item.__class__))\n\n def set_paid(self):\n is_already_paid = self.paid is not None\n\n self.paid = timezone.now()\n\n self.save()\n\n if not is_already_paid and self.item is not None:\n self.ship()\n\n def ship(self):\n \"\"\"Ship the order. Better call it asynchronously\"\"\"\n self.item.ship(to=self.user)\n\n self.shipped = timezone.now()\n\n self.save()\n\n order_got_shipped.send(\n sender=self.__class__,\n order=self,\n )\n", "path": "src/orders/models.py"}, {"content": "from urllib.parse import urljoin\n\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom app.models import TimestampedModel, models\nfrom app.s3 import AppS3\nfrom shipping.mixins import Shippable\n\n\nclass Course(Shippable, TimestampedModel):\n name = models.CharField(max_length=255)\n name_genitive = models.CharField(_('Genitive name'), max_length=255, help_text='\u00ab\u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb. 
\u041a \u043f\u0440\u0438\u043c\u0435\u0440\u0443 \u0434\u043b\u044f \u0437\u0430\u043f\u0438\u0441\u0435\u0439.')\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u043f\u043e\u0441\u0435\u0449\u0435\u043d\u0438\u0435 \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043f\u043e TDD\u00bb \u0438\u043b\u0438 \u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u0411\u0438\u043b\u0435\u0442 \u043d\u0430 \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441 \u043e TDD \u0438\u043b\u0438 \u00ab\u0437\u0430\u043f\u0438\u0441\u044c \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb',\n )\n slug = models.SlugField()\n clickmeeting_room_url = models.URLField(_('Clickmeeting room URL'), null=True, blank=True, help_text=_('If set, every user who purcashes this course gets invited'))\n template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Course')\n verbose_name_plural = _('Courses')\n\n def get_absolute_url(self):\n return urljoin(settings.FRONTEND_URL, '/'.join(['courses', self.slug, '']))\n\n\nclass Record(Shippable, TimestampedModel):\n course = models.ForeignKey(Course, on_delete=models.CASCADE)\n name = models.CharField(max_length=255)\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb')\n slug = models.SlugField()\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb',\n )\n\n s3_object_id = models.CharField(max_length=512)\n template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Record')\n verbose_name_plural = _('Records')\n\n @property\n def name_genitive(self):\n return self.course.name_genitive\n\n def get_url(self, expires: int = 3 * 24 * 60 * 60):\n return AppS3().get_presigned_url(self.s3_object_id, expires=expires)\n\n def __str__(self):\n return f'\u0417\u0430\u043f\u0438\u0441\u044c {self.name_genitive}'\n\n def get_absolute_url(self):\n return self.course.get_absolute_url()\n\n\nclass Bundle(Shippable, TimestampedModel):\n records = models.ManyToManyField('courses.Record')\n courses = models.ManyToManyField('courses.Course')\n\n name = models.CharField(max_length=255)\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 
\u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb')\n slug = models.SlugField()\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb',\n )\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Bundle')\n verbose_name_plural = _('Bundles')\n\n def ship(self, to):\n for record in self.records.iterator():\n record.ship(to=to)\n\n for course in self.courses.iterator():\n course.ship(to=to)\n", "path": "src/courses/models.py"}]} | 2,605 | 452 |
gh_patches_debug_56924 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-697 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set default renderer to JSONRenderer in DRF backend
For reference: http://www.django-rest-framework.org/api-guide/renderers/#setting-the-renderers
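A minimal sketch of the setting the linked guide describes (generic DRF configuration, not copied from this repository's config):

```python
# Django settings sketch: serve JSON only by default instead of the browsable API
REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
    )
}
```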
</issue>
<code>
[start of settings/common.py]
1 """
2 Django settings for evalai project.
3
4 Generated by 'django-admin startproject' using Django 1.10.2.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.10/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.10/ref/settings/
11 """
12
13 import datetime
14 import os
15 import sys
16
17 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
18 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
19 APPS_DIR = os.path.join(BASE_DIR, 'apps')
20
21 sys.path.append(APPS_DIR)
22
23 # Quick-start development settings - unsuitable for production
24 # See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
25
26 # SECURITY WARNING: keep the secret key used in production secret!
27 SECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')
28
29 # SECURITY WARNING: don't run with debug turned on in production!
30 DEBUG = True
31
32 ALLOWED_HOSTS = []
33
34
35 # Application definition
36
37 DEFAULT_APPS = [
38 'django.contrib.admin',
39 'django.contrib.auth',
40 'django.contrib.contenttypes',
41 'django.contrib.sessions',
42 'django.contrib.messages',
43 'django.contrib.staticfiles',
44 'django.contrib.sites',
45 ]
46
47 OUR_APPS = [
48 'accounts',
49 'analytics',
50 'base',
51 'challenges',
52 'hosts',
53 'jobs',
54 'participants',
55 'submissions',
56 'web',
57 ]
58
59 THIRD_PARTY_APPS = [
60 'allauth',
61 'allauth.account',
62 'corsheaders',
63 'rest_auth',
64 'rest_auth.registration',
65 'rest_framework.authtoken',
66 'rest_framework',
67 'rest_framework_docs',
68 'rest_framework_expiring_authtoken',
69 ]
70
71 INSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS
72
73 MIDDLEWARE = [
74 'corsheaders.middleware.CorsMiddleware',
75 'django.middleware.security.SecurityMiddleware',
76 'django.contrib.sessions.middleware.SessionMiddleware',
77 'django.middleware.common.CommonMiddleware',
78 'django.middleware.csrf.CsrfViewMiddleware',
79 'django.contrib.auth.middleware.AuthenticationMiddleware',
80 'django.contrib.messages.middleware.MessageMiddleware',
81 'django.middleware.clickjacking.XFrameOptionsMiddleware',
82 ]
83
84 ROOT_URLCONF = 'evalai.urls'
85
86
87 TEMPLATES = [
88 {
89 'BACKEND': 'django.template.backends.django.DjangoTemplates',
90 'DIRS': [],
91 'APP_DIRS': True,
92 'OPTIONS': {
93 'context_processors': [
94 'django.template.context_processors.debug',
95 'django.template.context_processors.request',
96 'django.contrib.auth.context_processors.auth',
97 'django.contrib.messages.context_processors.messages',
98 ],
99 },
100 },
101 ]
102
103 WSGI_APPLICATION = 'evalai.wsgi.application'
104
105
106 # Password validation
107 # https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
108
109 AUTH_PASSWORD_VALIDATORS = [
110 {
111 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa
112 },
113 {
114 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa
115 },
116 {
117 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa
118 },
119 {
120 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa
121 },
122 ]
123
124
125 # Internationalization
126 # https://docs.djangoproject.com/en/1.10/topics/i18n/
127
128 LANGUAGE_CODE = 'en-us'
129
130 TIME_ZONE = 'UTC'
131
132 USE_I18N = True
133
134 USE_L10N = True
135
136 USE_TZ = True
137
138 # Static files (CSS, JavaScript, Images)
139 # https://docs.djangoproject.com/en/1.10/howto/static-files/
140
141 STATIC_URL = '/static/'
142
143 MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
144 MEDIA_URL = "/media/"
145
146 SITE_ID = 1
147
148 REST_FRAMEWORK = {
149 'DEFAULT_PAGINATION_CLASS': (
150 'rest_framework.pagination.LimitOffsetPagination'),
151 'PAGE_SIZE': 10,
152 'DEFAULT_PERMISSION_CLASSES': [
153 'rest_framework.permissions.IsAuthenticatedOrReadOnly'
154 ],
155 'DEFAULT_AUTHENTICATION_CLASSES': [
156 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',
157 ],
158 'TEST_REQUEST_DEFAULT_FORMAT': 'json',
159 'DEFAULT_THROTTLE_CLASSES': (
160 'rest_framework.throttling.AnonRateThrottle',
161 'rest_framework.throttling.UserRateThrottle'
162 ),
163 'DEFAULT_THROTTLE_RATES': {
164 'anon': '100/minute',
165 'user': '100/minute'
166 }
167 }
168
169 # ALLAUTH SETTINGS
170 ACCOUNT_EMAIL_REQUIRED = True
171 OLD_PASSWORD_FIELD_ENABLED = True
172
173 AUTHENTICATION_BACKENDS = (
174 # Needed to login by username in Django admin, regardless of `allauth`
175 'django.contrib.auth.backends.ModelBackend',
176 # `allauth` specific authentication methods, such as login by e-mail
177 'allauth.account.auth_backends.AuthenticationBackend',
178 )
179
180 # CORS Settings
181 CORS_ORIGIN_ALLOW_ALL = True
182
183 # REST Framework Expiring Tokens Configuration
184 EXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)
185
186 # Logging
187 LOGGING = {
188 'version': 1,
189 'disable_existing_loggers': False,
190 'root': {
191 'level': 'INFO',
192 'handlers': ['console'],
193 },
194 'filters': {
195 'require_debug_false': {
196 '()': 'django.utils.log.RequireDebugFalse',
197 },
198 'require_debug_true': {
199 '()': 'django.utils.log.RequireDebugTrue',
200 }
201 },
202 'formatters': {
203 'simple': {
204 'format': '[%(asctime)s] %(levelname)s %(message)s',
205 'datefmt': '%Y-%m-%d %H:%M:%S'
206 },
207 'verbose': {
208 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',
209 'datefmt': '%Y-%m-%d %H:%M:%S'
210 }
211 },
212 'handlers': {
213 'console': {
214 'level': 'INFO',
215 'filters': ['require_debug_true'],
216 'class': 'logging.StreamHandler',
217 'formatter': 'simple'
218 },
219 'logfile': {
220 'level': 'DEBUG',
221 'class': 'logging.handlers.RotatingFileHandler',
222 'filename': "/tmp/logfile",
223 'maxBytes': 50000,
224 'backupCount': 10,
225 'formatter': 'verbose'
226 },
227 'mail_admins': {
228 'level': 'ERROR',
229 'class': 'django.utils.log.AdminEmailHandler',
230 'filters': ['require_debug_false'],
231 }
232 },
233 'loggers': {
234 'django': {
235 'handlers': ['console'],
236 'propagate': True,
237 },
238 'django.request': {
239 'handlers': ['mail_admins'],
240 'level': 'ERROR',
241 'propagate': False,
242 },
243 'django.security': {
244 'handlers': ['mail_admins'],
245 'level': 'ERROR',
246 'propagate': False,
247 },
248 'django.db.backends': {
249 'handlers': ['mail_admins'],
250 'level': 'ERROR',
251 'propagate': False,
252 }
253 }
254 }
255
256 CACHES = {
257 'default': {
258 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
259 }
260 }
261
[end of settings/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/settings/common.py b/settings/common.py
--- a/settings/common.py
+++ b/settings/common.py
@@ -163,7 +163,10 @@
'DEFAULT_THROTTLE_RATES': {
'anon': '100/minute',
'user': '100/minute'
- }
+ },
+ 'DEFAULT_RENDERER_CLASSES': (
+ 'rest_framework.renderers.JSONRenderer',
+ )
}
# ALLAUTH SETTINGS
| {"golden_diff": "diff --git a/settings/common.py b/settings/common.py\n--- a/settings/common.py\n+++ b/settings/common.py\n@@ -163,7 +163,10 @@\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '100/minute',\n 'user': '100/minute'\n- }\n+ },\n+ 'DEFAULT_RENDERER_CLASSES': (\n+ 'rest_framework.renderers.JSONRenderer',\n+ )\n }\n \n # ALLAUTH SETTINGS\n", "issue": "Set default renderer to JSONRenderer in DRF backend\nFor reference: http://www.django-rest-framework.org/api-guide/renderers/#setting-the-renderers\n", "before_files": [{"content": "\"\"\"\nDjango settings for evalai project.\n\nGenerated by 'django-admin startproject' using Django 1.10.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport datetime\nimport os\nimport sys\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nAPPS_DIR = os.path.join(BASE_DIR, 'apps')\n\nsys.path.append(APPS_DIR)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nDEFAULT_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.sites',\n]\n\nOUR_APPS = [\n 'accounts',\n 'analytics',\n 'base',\n 'challenges',\n 'hosts',\n 'jobs',\n 'participants',\n 'submissions',\n 'web',\n]\n\nTHIRD_PARTY_APPS = [\n 'allauth',\n 'allauth.account',\n 'corsheaders',\n 'rest_auth',\n 'rest_auth.registration',\n 'rest_framework.authtoken',\n 'rest_framework',\n 'rest_framework_docs',\n 'rest_framework_expiring_authtoken',\n]\n\nINSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS\n\nMIDDLEWARE = [\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'evalai.urls'\n\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'evalai.wsgi.application'\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa\n },\n {\n 'NAME': 
'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\n\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nMEDIA_URL = \"/media/\"\n\nSITE_ID = 1\n\nREST_FRAMEWORK = {\n 'DEFAULT_PAGINATION_CLASS': (\n 'rest_framework.pagination.LimitOffsetPagination'),\n 'PAGE_SIZE': 10,\n 'DEFAULT_PERMISSION_CLASSES': [\n 'rest_framework.permissions.IsAuthenticatedOrReadOnly'\n ],\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',\n ],\n 'TEST_REQUEST_DEFAULT_FORMAT': 'json',\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '100/minute',\n 'user': '100/minute'\n }\n}\n\n# ALLAUTH SETTINGS\nACCOUNT_EMAIL_REQUIRED = True\nOLD_PASSWORD_FIELD_ENABLED = True\n\nAUTHENTICATION_BACKENDS = (\n # Needed to login by username in Django admin, regardless of `allauth`\n 'django.contrib.auth.backends.ModelBackend',\n # `allauth` specific authentication methods, such as login by e-mail\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# REST Framework Expiring Tokens Configuration\nEXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)\n\n# Logging\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'root': {\n 'level': 'INFO',\n 'handlers': ['console'],\n },\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n }\n },\n 'formatters': {\n 'simple': {\n 'format': '[%(asctime)s] %(levelname)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n },\n 'verbose': {\n 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n }\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'filters': ['require_debug_true'],\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple'\n },\n 'logfile': {\n 'level': 'DEBUG',\n 'class': 'logging.handlers.RotatingFileHandler',\n 'filename': \"/tmp/logfile\",\n 'maxBytes': 50000,\n 'backupCount': 10,\n 'formatter': 'verbose'\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'class': 'django.utils.log.AdminEmailHandler',\n 'filters': ['require_debug_false'],\n }\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'propagate': True,\n },\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.security': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.db.backends': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n }\n }\n}\n\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n }\n}\n", "path": "settings/common.py"}]} | 2,860 | 106 |
gh_patches_debug_2514 | rasdani/github-patches | git_diff | liberapay__liberapay.com-173 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Changing organization type doesn't work
In the identity tab, when I change the organization type from Business to Organization, my changes are not saved.
</issue>
<code>
[start of liberapay/security/authentication.py]
1 """Defines website authentication helpers.
2 """
3 import binascii
4
5 from six.moves.urllib.parse import urlencode
6
7 from aspen import Response
8
9 from liberapay.constants import SESSION, SESSION_TIMEOUT
10 from liberapay.exceptions import AuthRequired
11 from liberapay.models.participant import Participant
12
13
14 class _ANON(object):
15 ANON = True
16 is_admin = False
17 id = None
18 __bool__ = __nonzero__ = lambda *a: False
19 get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)
20 __repr__ = lambda self: '<ANON>'
21
22
23 ANON = _ANON()
24
25
26 def _get_body(request):
27 try:
28 body = request.body
29 except Response:
30 return
31 if not isinstance(body, dict):
32 return
33 return body
34
35
36 def sign_in_with_form_data(body, state):
37 p = None
38 _, website = state['_'], state['website']
39
40 if body.get('log-in.id'):
41 id = body.pop('log-in.id')
42 k = 'email' if '@' in id else 'username'
43 p = Participant.authenticate(
44 k, 'password',
45 id, body.pop('log-in.password')
46 )
47 if p and p.status == 'closed':
48 p.update_status('active')
49
50 elif body.get('sign-in.username'):
51 if body.pop('sign-in.terms') != 'agree':
52 raise Response(400, 'you have to agree to the terms')
53 kind = body.pop('sign-in.kind')
54 if kind not in ('individual', 'organization'):
55 raise Response(400, 'bad kind')
56 with website.db.get_cursor() as c:
57 p = Participant.make_active(
58 body.pop('sign-in.username'), kind, body.pop('sign-in.password'),
59 cursor=c
60 )
61 p.add_email(body.pop('sign-in.email'), cursor=c)
62 p.authenticated = True
63
64 elif body.get('email-login.email'):
65 email = body.pop('email-login.email')
66 p = Participant._from_thing('email', email)
67 if p:
68 p.start_session()
69 qs = {'log-in.id': p.id, 'log-in.token': p.session_token}
70 p.send_email(
71 'password_reset',
72 email=email,
73 link=p.url('settings/', qs),
74 link_validity=SESSION_TIMEOUT,
75 )
76 state['email-login.sent-to'] = email
77 else:
78 state['sign-in.error'] = _(
79 "We didn't find any account whose primary email address is {0}.",
80 email
81 )
82 p = None
83
84 return p
85
86
87 def start_user_as_anon():
88 """Make sure we always have a user object, regardless of exceptions during authentication.
89 """
90 return {'user': ANON}
91
92
93 def authenticate_user_if_possible(request, state, user, _):
94 """This signs the user in.
95 """
96 if request.line.uri.startswith('/assets/'):
97 return
98
99 # HTTP auth
100 if 'Authorization' in request.headers:
101 header = request.headers['authorization']
102 if not header.startswith('Basic '):
103 raise Response(401, 'Unsupported authentication method')
104 try:
105 creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)
106 except binascii.Error:
107 raise Response(400, 'Malformed "Authorization" header')
108 participant = Participant.authenticate('id', 'password', *creds)
109 if not participant:
110 raise Response(401)
111 return {'user': participant}
112
113 # Cookie and form auth
114 # We want to try cookie auth first, but we want form auth to supersede it
115 p = None
116 response = state.setdefault('response', Response())
117 if SESSION in request.headers.cookie:
118 creds = request.headers.cookie[SESSION].value.split(':', 1)
119 p = Participant.authenticate('id', 'session', *creds)
120 if p:
121 state['user'] = p
122 session_p, p = p, None
123 session_suffix = ''
124 redirect_url = request.line.uri
125 if request.method == 'POST':
126 body = _get_body(request)
127 if body:
128 p = sign_in_with_form_data(body, state)
129 carry_on = body.pop('email-login.carry-on', None)
130 if not p and carry_on:
131 p_email = session_p and (
132 session_p.email or session_p.get_emails()[0].address
133 )
134 if p_email != carry_on:
135 state['email-login.carry-on'] = carry_on
136 raise AuthRequired
137 elif request.method == 'GET' and request.qs.get('log-in.id'):
138 id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')
139 p = Participant.authenticate('id', 'session', id, token)
140 if not p and (not session_p or session_p.id != id):
141 raise Response(400, _("This login link is expired or invalid."))
142 else:
143 qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''
144 redirect_url = request.path.raw + qs
145 session_p = p
146 session_suffix = '.em'
147 if p:
148 if session_p:
149 session_p.sign_out(response.headers.cookie)
150 p.sign_in(response.headers.cookie, session_suffix)
151 state['user'] = p
152 if request.body.pop('form.repost', None) != 'true':
153 response.redirect(redirect_url)
154
155
156 def add_auth_to_response(response, request=None, user=ANON):
157 if request is None:
158 return # early parsing must've failed
159 if request.line.uri.startswith('/assets/'):
160 return # assets never get auth headers
161
162 if SESSION in request.headers.cookie:
163 if not user.ANON:
164 user.keep_signed_in(response.headers.cookie)
165
[end of liberapay/security/authentication.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py
--- a/liberapay/security/authentication.py
+++ b/liberapay/security/authentication.py
@@ -44,6 +44,8 @@
k, 'password',
id, body.pop('log-in.password')
)
+ if not p:
+ state['sign-in.error'] = _("Bad username or password.")
if p and p.status == 'closed':
p.update_status('active')
| {"golden_diff": "diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py\n--- a/liberapay/security/authentication.py\n+++ b/liberapay/security/authentication.py\n@@ -44,6 +44,8 @@\n k, 'password',\n id, body.pop('log-in.password')\n )\n+ if not p:\n+ state['sign-in.error'] = _(\"Bad username or password.\")\n if p and p.status == 'closed':\n p.update_status('active')\n", "issue": "Changing organization type doesn't work\nIn identity tab, when I change the organization type to set Organization instead of Business, my changes are not saved. \n\n", "before_files": [{"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\n\nfrom six.moves.urllib.parse import urlencode\n\nfrom aspen import Response\n\nfrom liberapay.constants import SESSION, SESSION_TIMEOUT\nfrom liberapay.exceptions import AuthRequired\nfrom liberapay.models.participant import Participant\n\n\nclass _ANON(object):\n ANON = True\n is_admin = False\n id = None\n __bool__ = __nonzero__ = lambda *a: False\n get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)\n __repr__ = lambda self: '<ANON>'\n\n\nANON = _ANON()\n\n\ndef _get_body(request):\n try:\n body = request.body\n except Response:\n return\n if not isinstance(body, dict):\n return\n return body\n\n\ndef sign_in_with_form_data(body, state):\n p = None\n _, website = state['_'], state['website']\n\n if body.get('log-in.id'):\n id = body.pop('log-in.id')\n k = 'email' if '@' in id else 'username'\n p = Participant.authenticate(\n k, 'password',\n id, body.pop('log-in.password')\n )\n if p and p.status == 'closed':\n p.update_status('active')\n\n elif body.get('sign-in.username'):\n if body.pop('sign-in.terms') != 'agree':\n raise Response(400, 'you have to agree to the terms')\n kind = body.pop('sign-in.kind')\n if kind not in ('individual', 'organization'):\n raise Response(400, 'bad kind')\n with website.db.get_cursor() as c:\n p = Participant.make_active(\n body.pop('sign-in.username'), kind, body.pop('sign-in.password'),\n cursor=c\n )\n p.add_email(body.pop('sign-in.email'), cursor=c)\n p.authenticated = True\n\n elif body.get('email-login.email'):\n email = body.pop('email-login.email')\n p = Participant._from_thing('email', email)\n if p:\n p.start_session()\n qs = {'log-in.id': p.id, 'log-in.token': p.session_token}\n p.send_email(\n 'password_reset',\n email=email,\n link=p.url('settings/', qs),\n link_validity=SESSION_TIMEOUT,\n )\n state['email-login.sent-to'] = email\n else:\n state['sign-in.error'] = _(\n \"We didn't find any account whose primary email address is {0}.\",\n email\n )\n p = None\n\n return p\n\n\ndef start_user_as_anon():\n \"\"\"Make sure we always have a user object, regardless of exceptions during authentication.\n \"\"\"\n return {'user': ANON}\n\n\ndef authenticate_user_if_possible(request, state, user, _):\n \"\"\"This signs the user in.\n \"\"\"\n if request.line.uri.startswith('/assets/'):\n return\n\n # HTTP auth\n if 'Authorization' in request.headers:\n header = request.headers['authorization']\n if not header.startswith('Basic '):\n raise Response(401, 'Unsupported authentication method')\n try:\n creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n participant = Participant.authenticate('id', 'password', *creds)\n if not participant:\n raise Response(401)\n return {'user': participant}\n\n # Cookie and form auth\n # We want to try cookie auth first, but we 
want form auth to supersede it\n p = None\n response = state.setdefault('response', Response())\n if SESSION in request.headers.cookie:\n creds = request.headers.cookie[SESSION].value.split(':', 1)\n p = Participant.authenticate('id', 'session', *creds)\n if p:\n state['user'] = p\n session_p, p = p, None\n session_suffix = ''\n redirect_url = request.line.uri\n if request.method == 'POST':\n body = _get_body(request)\n if body:\n p = sign_in_with_form_data(body, state)\n carry_on = body.pop('email-login.carry-on', None)\n if not p and carry_on:\n p_email = session_p and (\n session_p.email or session_p.get_emails()[0].address\n )\n if p_email != carry_on:\n state['email-login.carry-on'] = carry_on\n raise AuthRequired\n elif request.method == 'GET' and request.qs.get('log-in.id'):\n id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')\n p = Participant.authenticate('id', 'session', id, token)\n if not p and (not session_p or session_p.id != id):\n raise Response(400, _(\"This login link is expired or invalid.\"))\n else:\n qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''\n redirect_url = request.path.raw + qs\n session_p = p\n session_suffix = '.em'\n if p:\n if session_p:\n session_p.sign_out(response.headers.cookie)\n p.sign_in(response.headers.cookie, session_suffix)\n state['user'] = p\n if request.body.pop('form.repost', None) != 'true':\n response.redirect(redirect_url)\n\n\ndef add_auth_to_response(response, request=None, user=ANON):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers\n\n if SESSION in request.headers.cookie:\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "liberapay/security/authentication.py"}]} | 2,220 | 108 |
gh_patches_debug_28945 | rasdani/github-patches | git_diff | huggingface__transformers-13400 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Zero-shot classification pipeline truncation support
Transformers 4.10.0 brought [a change](https://github.com/huggingface/transformers/pull/13299/files#diff-c5af53af9b08fb383b49d7a07c1a56c890198b5cd48adc97aeef753fe2e7d60dR91) that modified the default truncation strategy to TruncationStrategy.DO_NOT_TRUNCATE for the ZeroShotClassificationPipeline.
That uncovered an issue in that the [ZeroShotClassificationPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_classification.py#L217 ) doesn't appear to pass kwargs to the parent's call method. So even when calling the pipeline with truncation=True, it doesn't allow for truncation.
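A minimal sketch of a call that runs into this (the checkpoint and labels are only illustrative):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
long_text = "a very long document " * 2000  # far beyond the model's maximum sequence length
# truncation=True is accepted here, but the pipeline's __call__ does not forward it,
# so the encoded input is not truncated before reaching the model
classifier(long_text, candidate_labels=["politics", "sports"], truncation=True)
```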
Thank you for the assistance in advance, appreciate all the work you guys do.
</issue>
<code>
[start of src/transformers/pipelines/zero_shot_classification.py]
1 from typing import List, Union
2
3 import numpy as np
4
5 from ..file_utils import add_end_docstrings, is_torch_available
6 from ..tokenization_utils import TruncationStrategy
7 from ..utils import logging
8 from .base import PIPELINE_INIT_ARGS, ArgumentHandler, Pipeline
9
10
11 if is_torch_available():
12 import torch
13
14 logger = logging.get_logger(__name__)
15
16
17 class ZeroShotClassificationArgumentHandler(ArgumentHandler):
18 """
19 Handles arguments for zero-shot for text classification by turning each possible label into an NLI
20 premise/hypothesis pair.
21 """
22
23 def _parse_labels(self, labels):
24 if isinstance(labels, str):
25 labels = [label.strip() for label in labels.split(",")]
26 return labels
27
28 def __call__(self, sequences, labels, hypothesis_template):
29 if len(labels) == 0 or len(sequences) == 0:
30 raise ValueError("You must include at least one label and at least one sequence.")
31 if hypothesis_template.format(labels[0]) == hypothesis_template:
32 raise ValueError(
33 (
34 'The provided hypothesis_template "{}" was not able to be formatted with the target labels. '
35 "Make sure the passed template includes formatting syntax such as {{}} where the label should go."
36 ).format(hypothesis_template)
37 )
38
39 if isinstance(sequences, str):
40 sequences = [sequences]
41 labels = self._parse_labels(labels)
42
43 sequence_pairs = []
44 for sequence in sequences:
45 sequence_pairs.extend([[sequence, hypothesis_template.format(label)] for label in labels])
46
47 return sequence_pairs
48
49
50 @add_end_docstrings(PIPELINE_INIT_ARGS)
51 class ZeroShotClassificationPipeline(Pipeline):
52 """
53 NLI-based zero-shot classification pipeline using a :obj:`ModelForSequenceClassification` trained on NLI (natural
54 language inference) tasks.
55
56 Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
57 pair and passed to the pretrained model. Then, the logit for `entailment` is taken as the logit for the candidate
58 label being valid. Any NLI model can be used, but the id of the `entailment` label must be included in the model
59 config's :attr:`~transformers.PretrainedConfig.label2id`.
60
61 This NLI pipeline can currently be loaded from :func:`~transformers.pipeline` using the following task identifier:
62 :obj:`"zero-shot-classification"`.
63
64 The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list
65 of available models on `huggingface.co/models <https://huggingface.co/models?search=nli>`__.
66 """
67
68 def __init__(self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs):
69 super().__init__(*args, **kwargs)
70 self._args_parser = args_parser
71 if self.entailment_id == -1:
72 logger.warning(
73 "Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to "
74 "-1. Define a descriptive label2id mapping in the model config to ensure correct outputs."
75 )
76
77 @property
78 def entailment_id(self):
79 for label, ind in self.model.config.label2id.items():
80 if label.lower().startswith("entail"):
81 return ind
82 return -1
83
84 def _parse_and_tokenize(
85 self,
86 sequences,
87 candidate_labels,
88 hypothesis_template,
89 padding=True,
90 add_special_tokens=True,
91 truncation=TruncationStrategy.DO_NOT_TRUNCATE,
92 **kwargs
93 ):
94 """
95 Parse arguments and tokenize only_first so that hypothesis (label) is not truncated
96 """
97 sequence_pairs = self._args_parser(sequences, candidate_labels, hypothesis_template)
98 return_tensors = self.framework
99 if getattr(self.tokenizer, "pad_token", None) is None:
100 # XXX some tokenizers do not have a padding token, we use simple lists
101 # and no padding then
102 logger.warning("The tokenizer {self.tokenizer} does not have a pad token, we're not running it as a batch")
103 padding = False
104 inputs = []
105 for sequence_pair in sequence_pairs:
106 model_input = self.tokenizer(
107 text=sequence_pair[0],
108 text_pair=sequence_pair[1],
109 add_special_tokens=add_special_tokens,
110 return_tensors=return_tensors,
111 padding=padding,
112 truncation=truncation,
113 )
114 inputs.append(model_input)
115 else:
116 inputs = self.tokenizer(
117 sequence_pairs,
118 add_special_tokens=add_special_tokens,
119 return_tensors=return_tensors,
120 padding=padding,
121 truncation=truncation,
122 )
123
124 return inputs
125
126 def _forward(self, inputs, return_tensors=False):
127 """
128 Internal framework specific forward dispatching
129
130 Args:
131 inputs: dict holding all the keyword arguments for required by the model forward method.
132 return_tensors: Whether to return native framework (pt/tf) tensors rather than numpy array
133
134 Returns:
135 Numpy array
136 """
137 # Encode for forward
138 with self.device_placement():
139 if self.framework == "tf":
140 if isinstance(inputs, list):
141 predictions = []
142 for input_ in inputs:
143 prediction = self.model(input_.data, training=False)[0]
144 predictions.append(prediction)
145 else:
146 predictions = self.model(inputs.data, training=False)[0]
147 else:
148 with torch.no_grad():
149 if isinstance(inputs, list):
150 predictions = []
151 for input_ in inputs:
152 model_input = self.ensure_tensor_on_device(**input_)
153 prediction = self.model(**model_input)[0].cpu()
154 predictions.append(prediction)
155
156 else:
157 inputs = self.ensure_tensor_on_device(**inputs)
158 predictions = self.model(**inputs)[0].cpu()
159
160 if return_tensors:
161 return predictions
162 else:
163 if isinstance(predictions, list):
164 predictions = np.array([p.numpy() for p in predictions])
165 else:
166 predictions = predictions.numpy()
167 return predictions
168
169 def __call__(
170 self,
171 sequences: Union[str, List[str]],
172 candidate_labels,
173 hypothesis_template="This example is {}.",
174 multi_label=False,
175 **kwargs,
176 ):
177 """
178 Classify the sequence(s) given as inputs. See the :obj:`~transformers.ZeroShotClassificationPipeline`
179 documentation for more information.
180
181 Args:
182 sequences (:obj:`str` or :obj:`List[str]`):
183 The sequence(s) to classify, will be truncated if the model input is too large.
184 candidate_labels (:obj:`str` or :obj:`List[str]`):
185 The set of possible class labels to classify each sequence into. Can be a single label, a string of
186 comma-separated labels, or a list of labels.
187 hypothesis_template (:obj:`str`, `optional`, defaults to :obj:`"This example is {}."`):
188 The template used to turn each label into an NLI-style hypothesis. This template must include a {} or
189 similar syntax for the candidate label to be inserted into the template. For example, the default
190 template is :obj:`"This example is {}."` With the candidate label :obj:`"sports"`, this would be fed
191 into the model like :obj:`"<cls> sequence to classify <sep> This example is sports . <sep>"`. The
192 default template works well in many cases, but it may be worthwhile to experiment with different
193 templates depending on the task setting.
194 multi_label (:obj:`bool`, `optional`, defaults to :obj:`False`):
195 Whether or not multiple candidate labels can be true. If :obj:`False`, the scores are normalized such
196 that the sum of the label likelihoods for each sequence is 1. If :obj:`True`, the labels are considered
197 independent and probabilities are normalized for each candidate by doing a softmax of the entailment
198 score vs. the contradiction score.
199
200 Return:
201 A :obj:`dict` or a list of :obj:`dict`: Each result comes as a dictionary with the following keys:
202
203 - **sequence** (:obj:`str`) -- The sequence for which this is the output.
204 - **labels** (:obj:`List[str]`) -- The labels sorted by order of likelihood.
205 - **scores** (:obj:`List[float]`) -- The probabilities for each of the labels.
206 """
207 if "multi_class" in kwargs and kwargs["multi_class"] is not None:
208 multi_label = kwargs.pop("multi_class")
209 logger.warning(
210 "The `multi_class` argument has been deprecated and renamed to `multi_label`. "
211 "`multi_class` will be removed in a future version of Transformers."
212 )
213
214 if sequences and isinstance(sequences, str):
215 sequences = [sequences]
216
217 outputs = super().__call__(sequences, candidate_labels, hypothesis_template)
218 if isinstance(outputs, list):
219 # XXX: Some tokenizers cannot handle batching because they don't
220 # have pad_token, so outputs will be a list, however, because outputs
221 # is only n logits and sequence_length is not present anymore, we
222 # can recreate a tensor out of outputs.
223 outputs = np.array(outputs)
224 num_sequences = len(sequences)
225 candidate_labels = self._args_parser._parse_labels(candidate_labels)
226 reshaped_outputs = outputs.reshape((num_sequences, len(candidate_labels), -1))
227
228 if len(candidate_labels) == 1:
229 multi_label = True
230
231 if not multi_label:
232 # softmax the "entailment" logits over all candidate labels
233 entail_logits = reshaped_outputs[..., self.entailment_id]
234 scores = np.exp(entail_logits) / np.exp(entail_logits).sum(-1, keepdims=True)
235 else:
236 # softmax over the entailment vs. contradiction dim for each label independently
237 entailment_id = self.entailment_id
238 contradiction_id = -1 if entailment_id == 0 else 0
239 entail_contr_logits = reshaped_outputs[..., [contradiction_id, entailment_id]]
240 scores = np.exp(entail_contr_logits) / np.exp(entail_contr_logits).sum(-1, keepdims=True)
241 scores = scores[..., 1]
242
243 result = []
244 for iseq in range(num_sequences):
245 top_inds = list(reversed(scores[iseq].argsort()))
246 result.append(
247 {
248 "sequence": sequences if isinstance(sequences, str) else sequences[iseq],
249 "labels": [candidate_labels[i] for i in top_inds],
250 "scores": scores[iseq][top_inds].tolist(),
251 }
252 )
253
254 if len(result) == 1:
255 return result[0]
256 return result
257
[end of src/transformers/pipelines/zero_shot_classification.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/transformers/pipelines/zero_shot_classification.py b/src/transformers/pipelines/zero_shot_classification.py
--- a/src/transformers/pipelines/zero_shot_classification.py
+++ b/src/transformers/pipelines/zero_shot_classification.py
@@ -88,7 +88,7 @@
hypothesis_template,
padding=True,
add_special_tokens=True,
- truncation=TruncationStrategy.DO_NOT_TRUNCATE,
+ truncation=TruncationStrategy.ONLY_FIRST,
**kwargs
):
"""
@@ -113,13 +113,31 @@
)
inputs.append(model_input)
else:
- inputs = self.tokenizer(
- sequence_pairs,
- add_special_tokens=add_special_tokens,
- return_tensors=return_tensors,
- padding=padding,
- truncation=truncation,
- )
+ try:
+ inputs = self.tokenizer(
+ sequence_pairs,
+ add_special_tokens=add_special_tokens,
+ return_tensors=return_tensors,
+ padding=padding,
+ truncation=truncation,
+ )
+ except Exception as e:
+ if "too short" in str(e):
+ # tokenizers might yell that we want to truncate
+ # to a value that is not even reached by the input.
+ # In that case we don't want to truncate.
+ # It seems there's not a really better way to catch that
+ # exception.
+
+ inputs = self.tokenizer(
+ sequence_pairs,
+ add_special_tokens=add_special_tokens,
+ return_tensors=return_tensors,
+ padding=padding,
+ truncation=TruncationStrategy.DO_NOT_TRUNCATE,
+ )
+ else:
+ raise e
return inputs
| {"golden_diff": "diff --git a/src/transformers/pipelines/zero_shot_classification.py b/src/transformers/pipelines/zero_shot_classification.py\n--- a/src/transformers/pipelines/zero_shot_classification.py\n+++ b/src/transformers/pipelines/zero_shot_classification.py\n@@ -88,7 +88,7 @@\n hypothesis_template,\n padding=True,\n add_special_tokens=True,\n- truncation=TruncationStrategy.DO_NOT_TRUNCATE,\n+ truncation=TruncationStrategy.ONLY_FIRST,\n **kwargs\n ):\n \"\"\"\n@@ -113,13 +113,31 @@\n )\n inputs.append(model_input)\n else:\n- inputs = self.tokenizer(\n- sequence_pairs,\n- add_special_tokens=add_special_tokens,\n- return_tensors=return_tensors,\n- padding=padding,\n- truncation=truncation,\n- )\n+ try:\n+ inputs = self.tokenizer(\n+ sequence_pairs,\n+ add_special_tokens=add_special_tokens,\n+ return_tensors=return_tensors,\n+ padding=padding,\n+ truncation=truncation,\n+ )\n+ except Exception as e:\n+ if \"too short\" in str(e):\n+ # tokenizers might yell that we want to truncate\n+ # to a value that is not even reached by the input.\n+ # In that case we don't want to truncate.\n+ # It seems there's not a really better way to catch that\n+ # exception.\n+\n+ inputs = self.tokenizer(\n+ sequence_pairs,\n+ add_special_tokens=add_special_tokens,\n+ return_tensors=return_tensors,\n+ padding=padding,\n+ truncation=TruncationStrategy.DO_NOT_TRUNCATE,\n+ )\n+ else:\n+ raise e\n \n return inputs\n", "issue": "Zero-shot classification pipeline truncation support\nTransformers 4.10.0 brought [a change](https://github.com/huggingface/transformers/pull/13299/files#diff-c5af53af9b08fb383b49d7a07c1a56c890198b5cd48adc97aeef753fe2e7d60dR91) that modified the default truncation strategy to TruncationStrategy.DO_NOT_TRUNCATE for the ZeroShotClassificationPipeline.\r\n\r\nThat uncovered an issue in that the [ZeroShotClassificationPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_classification.py#L217 ) doesn't appear to pass kwargs to the parent's call method. So even when calling the pipeline with truncation=True, it doesn't allow for truncation.\r\n\r\nThank you for the assistance in advance, appreciate all the work you guys do. \n", "before_files": [{"content": "from typing import List, Union\n\nimport numpy as np\n\nfrom ..file_utils import add_end_docstrings, is_torch_available\nfrom ..tokenization_utils import TruncationStrategy\nfrom ..utils import logging\nfrom .base import PIPELINE_INIT_ARGS, ArgumentHandler, Pipeline\n\n\nif is_torch_available():\n import torch\n\nlogger = logging.get_logger(__name__)\n\n\nclass ZeroShotClassificationArgumentHandler(ArgumentHandler):\n \"\"\"\n Handles arguments for zero-shot for text classification by turning each possible label into an NLI\n premise/hypothesis pair.\n \"\"\"\n\n def _parse_labels(self, labels):\n if isinstance(labels, str):\n labels = [label.strip() for label in labels.split(\",\")]\n return labels\n\n def __call__(self, sequences, labels, hypothesis_template):\n if len(labels) == 0 or len(sequences) == 0:\n raise ValueError(\"You must include at least one label and at least one sequence.\")\n if hypothesis_template.format(labels[0]) == hypothesis_template:\n raise ValueError(\n (\n 'The provided hypothesis_template \"{}\" was not able to be formatted with the target labels. 
'\n \"Make sure the passed template includes formatting syntax such as {{}} where the label should go.\"\n ).format(hypothesis_template)\n )\n\n if isinstance(sequences, str):\n sequences = [sequences]\n labels = self._parse_labels(labels)\n\n sequence_pairs = []\n for sequence in sequences:\n sequence_pairs.extend([[sequence, hypothesis_template.format(label)] for label in labels])\n\n return sequence_pairs\n\n\n@add_end_docstrings(PIPELINE_INIT_ARGS)\nclass ZeroShotClassificationPipeline(Pipeline):\n \"\"\"\n NLI-based zero-shot classification pipeline using a :obj:`ModelForSequenceClassification` trained on NLI (natural\n language inference) tasks.\n\n Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis\n pair and passed to the pretrained model. Then, the logit for `entailment` is taken as the logit for the candidate\n label being valid. Any NLI model can be used, but the id of the `entailment` label must be included in the model\n config's :attr:`~transformers.PretrainedConfig.label2id`.\n\n This NLI pipeline can currently be loaded from :func:`~transformers.pipeline` using the following task identifier:\n :obj:`\"zero-shot-classification\"`.\n\n The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list\n of available models on `huggingface.co/models <https://huggingface.co/models?search=nli>`__.\n \"\"\"\n\n def __init__(self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._args_parser = args_parser\n if self.entailment_id == -1:\n logger.warning(\n \"Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to \"\n \"-1. Define a descriptive label2id mapping in the model config to ensure correct outputs.\"\n )\n\n @property\n def entailment_id(self):\n for label, ind in self.model.config.label2id.items():\n if label.lower().startswith(\"entail\"):\n return ind\n return -1\n\n def _parse_and_tokenize(\n self,\n sequences,\n candidate_labels,\n hypothesis_template,\n padding=True,\n add_special_tokens=True,\n truncation=TruncationStrategy.DO_NOT_TRUNCATE,\n **kwargs\n ):\n \"\"\"\n Parse arguments and tokenize only_first so that hypothesis (label) is not truncated\n \"\"\"\n sequence_pairs = self._args_parser(sequences, candidate_labels, hypothesis_template)\n return_tensors = self.framework\n if getattr(self.tokenizer, \"pad_token\", None) is None:\n # XXX some tokenizers do not have a padding token, we use simple lists\n # and no padding then\n logger.warning(\"The tokenizer {self.tokenizer} does not have a pad token, we're not running it as a batch\")\n padding = False\n inputs = []\n for sequence_pair in sequence_pairs:\n model_input = self.tokenizer(\n text=sequence_pair[0],\n text_pair=sequence_pair[1],\n add_special_tokens=add_special_tokens,\n return_tensors=return_tensors,\n padding=padding,\n truncation=truncation,\n )\n inputs.append(model_input)\n else:\n inputs = self.tokenizer(\n sequence_pairs,\n add_special_tokens=add_special_tokens,\n return_tensors=return_tensors,\n padding=padding,\n truncation=truncation,\n )\n\n return inputs\n\n def _forward(self, inputs, return_tensors=False):\n \"\"\"\n Internal framework specific forward dispatching\n\n Args:\n inputs: dict holding all the keyword arguments for required by the model forward method.\n return_tensors: Whether to return native framework (pt/tf) tensors rather than numpy array\n\n Returns:\n 
Numpy array\n \"\"\"\n # Encode for forward\n with self.device_placement():\n if self.framework == \"tf\":\n if isinstance(inputs, list):\n predictions = []\n for input_ in inputs:\n prediction = self.model(input_.data, training=False)[0]\n predictions.append(prediction)\n else:\n predictions = self.model(inputs.data, training=False)[0]\n else:\n with torch.no_grad():\n if isinstance(inputs, list):\n predictions = []\n for input_ in inputs:\n model_input = self.ensure_tensor_on_device(**input_)\n prediction = self.model(**model_input)[0].cpu()\n predictions.append(prediction)\n\n else:\n inputs = self.ensure_tensor_on_device(**inputs)\n predictions = self.model(**inputs)[0].cpu()\n\n if return_tensors:\n return predictions\n else:\n if isinstance(predictions, list):\n predictions = np.array([p.numpy() for p in predictions])\n else:\n predictions = predictions.numpy()\n return predictions\n\n def __call__(\n self,\n sequences: Union[str, List[str]],\n candidate_labels,\n hypothesis_template=\"This example is {}.\",\n multi_label=False,\n **kwargs,\n ):\n \"\"\"\n Classify the sequence(s) given as inputs. See the :obj:`~transformers.ZeroShotClassificationPipeline`\n documentation for more information.\n\n Args:\n sequences (:obj:`str` or :obj:`List[str]`):\n The sequence(s) to classify, will be truncated if the model input is too large.\n candidate_labels (:obj:`str` or :obj:`List[str]`):\n The set of possible class labels to classify each sequence into. Can be a single label, a string of\n comma-separated labels, or a list of labels.\n hypothesis_template (:obj:`str`, `optional`, defaults to :obj:`\"This example is {}.\"`):\n The template used to turn each label into an NLI-style hypothesis. This template must include a {} or\n similar syntax for the candidate label to be inserted into the template. For example, the default\n template is :obj:`\"This example is {}.\"` With the candidate label :obj:`\"sports\"`, this would be fed\n into the model like :obj:`\"<cls> sequence to classify <sep> This example is sports . <sep>\"`. The\n default template works well in many cases, but it may be worthwhile to experiment with different\n templates depending on the task setting.\n multi_label (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether or not multiple candidate labels can be true. If :obj:`False`, the scores are normalized such\n that the sum of the label likelihoods for each sequence is 1. If :obj:`True`, the labels are considered\n independent and probabilities are normalized for each candidate by doing a softmax of the entailment\n score vs. the contradiction score.\n\n Return:\n A :obj:`dict` or a list of :obj:`dict`: Each result comes as a dictionary with the following keys:\n\n - **sequence** (:obj:`str`) -- The sequence for which this is the output.\n - **labels** (:obj:`List[str]`) -- The labels sorted by order of likelihood.\n - **scores** (:obj:`List[float]`) -- The probabilities for each of the labels.\n \"\"\"\n if \"multi_class\" in kwargs and kwargs[\"multi_class\"] is not None:\n multi_label = kwargs.pop(\"multi_class\")\n logger.warning(\n \"The `multi_class` argument has been deprecated and renamed to `multi_label`. 
\"\n \"`multi_class` will be removed in a future version of Transformers.\"\n )\n\n if sequences and isinstance(sequences, str):\n sequences = [sequences]\n\n outputs = super().__call__(sequences, candidate_labels, hypothesis_template)\n if isinstance(outputs, list):\n # XXX: Some tokenizers cannot handle batching because they don't\n # have pad_token, so outputs will be a list, however, because outputs\n # is only n logits and sequence_length is not present anymore, we\n # can recreate a tensor out of outputs.\n outputs = np.array(outputs)\n num_sequences = len(sequences)\n candidate_labels = self._args_parser._parse_labels(candidate_labels)\n reshaped_outputs = outputs.reshape((num_sequences, len(candidate_labels), -1))\n\n if len(candidate_labels) == 1:\n multi_label = True\n\n if not multi_label:\n # softmax the \"entailment\" logits over all candidate labels\n entail_logits = reshaped_outputs[..., self.entailment_id]\n scores = np.exp(entail_logits) / np.exp(entail_logits).sum(-1, keepdims=True)\n else:\n # softmax over the entailment vs. contradiction dim for each label independently\n entailment_id = self.entailment_id\n contradiction_id = -1 if entailment_id == 0 else 0\n entail_contr_logits = reshaped_outputs[..., [contradiction_id, entailment_id]]\n scores = np.exp(entail_contr_logits) / np.exp(entail_contr_logits).sum(-1, keepdims=True)\n scores = scores[..., 1]\n\n result = []\n for iseq in range(num_sequences):\n top_inds = list(reversed(scores[iseq].argsort()))\n result.append(\n {\n \"sequence\": sequences if isinstance(sequences, str) else sequences[iseq],\n \"labels\": [candidate_labels[i] for i in top_inds],\n \"scores\": scores[iseq][top_inds].tolist(),\n }\n )\n\n if len(result) == 1:\n return result[0]\n return result\n", "path": "src/transformers/pipelines/zero_shot_classification.py"}]} | 3,730 | 391 |
gh_patches_debug_38885 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4471 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
profile_line gives wrong output for a binary mask
## Description
As of now (skimage 0.12.3), profile_line sometimes returns incorrect output for binary masks (np.array(dtype=bool)). See below for an example. If the image is cast to a uint8 first, it returns the expected result.
A possible fix might even be simply to document the allowed dtypes in the docstring.
## Way to reproduce
``` python
from numpy import meshgrid, pi, cos, sin, all, uint8
from skimage.measure import profile_line
shape = (200, 200)
center_x, center_y = (140, 150)
radius = 20
x, y = meshgrid(range(shape[1]), range(shape[0]))
mask = (y - center_y)**2 + (x - center_x)**2 < radius**2
src = (center_y, center_x)
phi = 4*pi/9.
dy = 31 * cos(phi)
dx = 31 * sin(phi)
dst = (center_y + dy, center_x + dx)
profile_uint8 = profile_line(mask.astype(uint8), src, dst)
print(profile_uint8)
assert all(profile_uint8[:radius] == 1)
profile_bool = profile_line(mask, src, dst)
print(profile_bool)
assert all(profile_bool[:radius] == 1) # Fails
assert all(profile_bool == profile_uint8) # Fails
```
The output is (there is a zero in the middle of the profile_line output):
```
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. (ZERO HERE?!) 1. 1.
1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-24-1a94903a5586> in <module>()
19 profile_bool = profile_line(mask, src, dst)
20 print profile_bool
---> 21 assert all(profile_bool[:radius] == 1) # Fails
22
23 assert all(profile_bool == profile_uint8) # Fails
AssertionError:
```
</issue>
<code>
[start of skimage/measure/profile.py]
1 import numpy as np
2 from scipy import ndimage as ndi
3
4
5 def profile_line(image, src, dst, linewidth=1,
6 order=1, mode='constant', cval=0.0,
7 *, reduce_func=np.mean):
8 """Return the intensity profile of an image measured along a scan line.
9
10 Parameters
11 ----------
12 image : numeric array, shape (M, N[, C])
13 The image, either grayscale (2D array) or multichannel
14 (3D array, where the final axis contains the channel
15 information).
16 src : 2-tuple of numeric scalar (float or int)
17 The start point of the scan line.
18 dst : 2-tuple of numeric scalar (float or int)
19 The end point of the scan line. The destination point is *included*
20 in the profile, in contrast to standard numpy indexing.
21 linewidth : int, optional
22 Width of the scan, perpendicular to the line
23 order : int in {0, 1, 2, 3, 4, 5}, optional
24 The order of the spline interpolation to compute image values at
25 non-integer coordinates. 0 means nearest-neighbor interpolation.
26 mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional
27 How to compute any values falling outside of the image.
28 cval : float, optional
29 If `mode` is 'constant', what constant value to use outside the image.
30 reduce_func : callable, optional
31 Function used to calculate the aggregation of pixel values
32 perpendicular to the profile_line direction when `linewidth` > 1.
33 If set to None the unreduced array will be returned.
34
35 Returns
36 -------
37 return_value : array
38 The intensity profile along the scan line. The length of the profile
39 is the ceil of the computed length of the scan line.
40
41 Examples
42 --------
43 >>> x = np.array([[1, 1, 1, 2, 2, 2]])
44 >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
45 >>> img
46 array([[0, 0, 0, 0, 0, 0],
47 [1, 1, 1, 2, 2, 2],
48 [1, 1, 1, 2, 2, 2],
49 [1, 1, 1, 2, 2, 2],
50 [0, 0, 0, 0, 0, 0]])
51 >>> profile_line(img, (2, 1), (2, 4))
52 array([1., 1., 2., 2.])
53 >>> profile_line(img, (1, 0), (1, 6), cval=4)
54 array([1., 1., 1., 2., 2., 2., 4.])
55
56 The destination point is included in the profile, in contrast to
57 standard numpy indexing.
58 For example:
59
60 >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds
61 array([1., 1., 1., 2., 2., 2., 0.])
62 >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row
63 array([1., 1., 1., 2., 2., 2.])
64
65 For different reduce_func inputs:
66
67 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)
68 array([0.66666667, 0.66666667, 0.66666667, 1.33333333])
69 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)
70 array([1, 1, 1, 2])
71 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)
72 array([2, 2, 2, 4])
73
74 The unreduced array will be returned when `reduce_func` is None or when
75 `reduce_func` acts on each pixel value individually.
76
77 >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,
78 ... reduce_func=None)
79 array([[1, 1, 2],
80 [1, 1, 2],
81 [1, 1, 2],
82 [0, 0, 0]])
83 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)
84 array([[1. , 1. , 0. ],
85 [1. , 1. , 0. ],
86 [1. , 1. , 0. ],
87 [1.41421356, 1.41421356, 0. ]])
88 """
89 perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)
90 if image.ndim == 3:
91 pixels = [ndi.map_coordinates(image[..., i], perp_lines,
92 order=order, mode=mode, cval=cval)
93 for i in range(image.shape[2])]
94 pixels = np.transpose(np.asarray(pixels), (1, 2, 0))
95 else:
96 pixels = ndi.map_coordinates(image, perp_lines,
97 order=order, mode=mode, cval=cval)
98 # The outputted array with reduce_func=None gives an array where the
99 # row values (axis=1) are flipped. Here, we make this consistent.
100 pixels = np.flip(pixels, axis=1)
101
102 if reduce_func is None:
103 intensities = pixels
104 else:
105 try:
106 intensities = reduce_func(pixels, axis=1)
107 except TypeError: # function doesn't allow axis kwarg
108 intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)
109
110 return intensities
111
112
113 def _line_profile_coordinates(src, dst, linewidth=1):
114 """Return the coordinates of the profile of an image along a scan line.
115
116 Parameters
117 ----------
118 src : 2-tuple of numeric scalar (float or int)
119 The start point of the scan line.
120 dst : 2-tuple of numeric scalar (float or int)
121 The end point of the scan line.
122 linewidth : int, optional
123 Width of the scan, perpendicular to the line
124
125 Returns
126 -------
127 coords : array, shape (2, N, C), float
128 The coordinates of the profile along the scan line. The length of the
129 profile is the ceil of the computed length of the scan line.
130
131 Notes
132 -----
133 This is a utility method meant to be used internally by skimage functions.
134 The destination point is included in the profile, in contrast to
135 standard numpy indexing.
136 """
137 src_row, src_col = src = np.asarray(src, dtype=float)
138 dst_row, dst_col = dst = np.asarray(dst, dtype=float)
139 d_row, d_col = dst - src
140 theta = np.arctan2(d_row, d_col)
141
142 length = int(np.ceil(np.hypot(d_row, d_col) + 1))
143 # we add one above because we include the last point in the profile
144 # (in contrast to standard numpy indexing)
145 line_col = np.linspace(src_col, dst_col, length)
146 line_row = np.linspace(src_row, dst_row, length)
147
148 # we subtract 1 from linewidth to change from pixel-counting
149 # (make this line 3 pixels wide) to point distances (the
150 # distance between pixel centers)
151 col_width = (linewidth - 1) * np.sin(-theta) / 2
152 row_width = (linewidth - 1) * np.cos(theta) / 2
153 perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,
154 linewidth) for row_i in line_row])
155 perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,
156 linewidth) for col_i in line_col])
157 return np.array([perp_rows, perp_cols])
158
[end of skimage/measure/profile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/measure/profile.py b/skimage/measure/profile.py
--- a/skimage/measure/profile.py
+++ b/skimage/measure/profile.py
@@ -1,9 +1,10 @@
+from warnings import warn
import numpy as np
from scipy import ndimage as ndi
def profile_line(image, src, dst, linewidth=1,
- order=1, mode='constant', cval=0.0,
+ order=None, mode='constant', cval=0.0,
*, reduce_func=np.mean):
"""Return the intensity profile of an image measured along a scan line.
@@ -21,8 +22,9 @@
linewidth : int, optional
Width of the scan, perpendicular to the line
order : int in {0, 1, 2, 3, 4, 5}, optional
- The order of the spline interpolation to compute image values at
- non-integer coordinates. 0 means nearest-neighbor interpolation.
+ The order of the spline interpolation, default is 0 if
+ image.dtype is bool and 1 otherwise. The order has to be in
+ the range 0-5. See `skimage.transform.warp` for detail.
mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional
How to compute any values falling outside of the image.
cval : float, optional
@@ -86,14 +88,26 @@
[1. , 1. , 0. ],
[1.41421356, 1.41421356, 0. ]])
"""
+ if order is None:
+ order = 0 if image.dtype == bool else 1
+
+ if image.dtype == bool and order != 0:
+ warn("Input image dtype is bool. Interpolation is not defined "
+ "with bool data type. Please set order to 0 or explicitely "
+ "cast input image to another data type. Starting from version "
+ "0.19 a ValueError will be raised instead of this warning.",
+ FutureWarning, stacklevel=2)
+
perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)
if image.ndim == 3:
pixels = [ndi.map_coordinates(image[..., i], perp_lines,
- order=order, mode=mode, cval=cval)
- for i in range(image.shape[2])]
+ prefilter=order > 1,
+ order=order, mode=mode,
+ cval=cval) for i in
+ range(image.shape[2])]
pixels = np.transpose(np.asarray(pixels), (1, 2, 0))
else:
- pixels = ndi.map_coordinates(image, perp_lines,
+ pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,
order=order, mode=mode, cval=cval)
# The outputted array with reduce_func=None gives an array where the
# row values (axis=1) are flipped. Here, we make this consistent.
| {"golden_diff": "diff --git a/skimage/measure/profile.py b/skimage/measure/profile.py\n--- a/skimage/measure/profile.py\n+++ b/skimage/measure/profile.py\n@@ -1,9 +1,10 @@\n+from warnings import warn\n import numpy as np\n from scipy import ndimage as ndi\n \n \n def profile_line(image, src, dst, linewidth=1,\n- order=1, mode='constant', cval=0.0,\n+ order=None, mode='constant', cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n \n@@ -21,8 +22,9 @@\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n- The order of the spline interpolation to compute image values at\n- non-integer coordinates. 0 means nearest-neighbor interpolation.\n+ The order of the spline interpolation, default is 0 if\n+ image.dtype is bool and 1 otherwise. The order has to be in\n+ the range 0-5. See `skimage.transform.warp` for detail.\n mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional\n How to compute any values falling outside of the image.\n cval : float, optional\n@@ -86,14 +88,26 @@\n [1. , 1. , 0. ],\n [1.41421356, 1.41421356, 0. ]])\n \"\"\"\n+ if order is None:\n+ order = 0 if image.dtype == bool else 1\n+\n+ if image.dtype == bool and order != 0:\n+ warn(\"Input image dtype is bool. Interpolation is not defined \"\n+ \"with bool data type. Please set order to 0 or explicitely \"\n+ \"cast input image to another data type. Starting from version \"\n+ \"0.19 a ValueError will be raised instead of this warning.\",\n+ FutureWarning, stacklevel=2)\n+\n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n pixels = [ndi.map_coordinates(image[..., i], perp_lines,\n- order=order, mode=mode, cval=cval)\n- for i in range(image.shape[2])]\n+ prefilter=order > 1,\n+ order=order, mode=mode,\n+ cval=cval) for i in\n+ range(image.shape[2])]\n pixels = np.transpose(np.asarray(pixels), (1, 2, 0))\n else:\n- pixels = ndi.map_coordinates(image, perp_lines,\n+ pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,\n order=order, mode=mode, cval=cval)\n # The outputted array with reduce_func=None gives an array where the\n # row values (axis=1) are flipped. Here, we make this consistent.\n", "issue": "profile_line gives wrong output for a binary mask\n## Description\n\nAs of now (skimage 0.12.3), profile_line sometimes returns wrong output for binary masks (np.array(dtype=bool)). See below for an example. If the image is cast to an uint8 first, it returns the expected result.\n\nA possible patch might be even just specifying the allowed dtypes in the docstring.\n## Way to reproduce\n\n``` python\nfrom numpy import meshgrid, pi, cos, sin, all, uint8\nfrom skimage.measure import profile_line\n\nshape = (200, 200)\ncenter_x, center_y = (140, 150)\nradius = 20\nx, y = meshgrid(range(shape[1]), range(shape[0]))\nmask = (y - center_y)**2 + (x - center_x)**2 < radius**2\nsrc = (center_y, center_x)\nphi = 4*pi/9.\ndy = 31 * cos(phi)\ndx = 31 * sin(phi)\ndst = (center_y + dy, center_x + dx)\n\nprofile_uint8 = profile_line(mask.astype(uint8), src, dst)\nprint profile_uint8\nassert all(profile_uint8[:radius] == 1)\n\nprofile_bool = profile_line(mask, src, dst)\nprint profile_bool\nassert all(profile_bool[:radius] == 1) # Fails\n\nassert all(profile_bool == profile_uint8) # Fails\n```\n\nThe output is (there is a zero in the middle of the profile_line output):\n\n```\n[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n 1. 1. 
1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. (ZERO HERE?!) 1. 1.\n 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n\n---------------------------------------------------------------------------\nAssertionError Traceback (most recent call last)\n<ipython-input-24-1a94903a5586> in <module>()\n 19 profile_bool = profile_line(mask, src, dst)\n 20 print profile_bool\n---> 21 assert all(profile_bool[:radius] == 1) # Fails\n 22 \n 23 assert all(profile_bool == profile_uint8) # Fails\n\nAssertionError: \n```\n\n", "before_files": [{"content": "import numpy as np\nfrom scipy import ndimage as ndi\n\n\ndef profile_line(image, src, dst, linewidth=1,\n order=1, mode='constant', cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n\n Parameters\n ----------\n image : numeric array, shape (M, N[, C])\n The image, either grayscale (2D array) or multichannel\n (3D array, where the final axis contains the channel\n information).\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line. The destination point is *included*\n in the profile, in contrast to standard numpy indexing.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n The order of the spline interpolation to compute image values at\n non-integer coordinates. 0 means nearest-neighbor interpolation.\n mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional\n How to compute any values falling outside of the image.\n cval : float, optional\n If `mode` is 'constant', what constant value to use outside the image.\n reduce_func : callable, optional\n Function used to calculate the aggregation of pixel values\n perpendicular to the profile_line direction when `linewidth` > 1.\n If set to None the unreduced array will be returned.\n\n Returns\n -------\n return_value : array\n The intensity profile along the scan line. The length of the profile\n is the ceil of the computed length of the scan line.\n\n Examples\n --------\n >>> x = np.array([[1, 1, 1, 2, 2, 2]])\n >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])\n >>> img\n array([[0, 0, 0, 0, 0, 0],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [0, 0, 0, 0, 0, 0]])\n >>> profile_line(img, (2, 1), (2, 4))\n array([1., 1., 2., 2.])\n >>> profile_line(img, (1, 0), (1, 6), cval=4)\n array([1., 1., 1., 2., 2., 2., 4.])\n\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n For example:\n\n >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds\n array([1., 1., 1., 2., 2., 2., 0.])\n >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row\n array([1., 1., 1., 2., 2., 2.])\n\n For different reduce_func inputs:\n\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)\n array([0.66666667, 0.66666667, 0.66666667, 1.33333333])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)\n array([1, 1, 1, 2])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)\n array([2, 2, 2, 4])\n\n The unreduced array will be returned when `reduce_func` is None or when\n `reduce_func` acts on each pixel value individually.\n\n >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,\n ... 
reduce_func=None)\n array([[1, 1, 2],\n [1, 1, 2],\n [1, 1, 2],\n [0, 0, 0]])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)\n array([[1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1.41421356, 1.41421356, 0. ]])\n \"\"\"\n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n pixels = [ndi.map_coordinates(image[..., i], perp_lines,\n order=order, mode=mode, cval=cval)\n for i in range(image.shape[2])]\n pixels = np.transpose(np.asarray(pixels), (1, 2, 0))\n else:\n pixels = ndi.map_coordinates(image, perp_lines,\n order=order, mode=mode, cval=cval)\n # The outputted array with reduce_func=None gives an array where the\n # row values (axis=1) are flipped. Here, we make this consistent.\n pixels = np.flip(pixels, axis=1)\n\n if reduce_func is None:\n intensities = pixels\n else:\n try:\n intensities = reduce_func(pixels, axis=1)\n except TypeError: # function doesn't allow axis kwarg\n intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)\n\n return intensities\n\n\ndef _line_profile_coordinates(src, dst, linewidth=1):\n \"\"\"Return the coordinates of the profile of an image along a scan line.\n\n Parameters\n ----------\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n\n Returns\n -------\n coords : array, shape (2, N, C), float\n The coordinates of the profile along the scan line. The length of the\n profile is the ceil of the computed length of the scan line.\n\n Notes\n -----\n This is a utility method meant to be used internally by skimage functions.\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n \"\"\"\n src_row, src_col = src = np.asarray(src, dtype=float)\n dst_row, dst_col = dst = np.asarray(dst, dtype=float)\n d_row, d_col = dst - src\n theta = np.arctan2(d_row, d_col)\n\n length = int(np.ceil(np.hypot(d_row, d_col) + 1))\n # we add one above because we include the last point in the profile\n # (in contrast to standard numpy indexing)\n line_col = np.linspace(src_col, dst_col, length)\n line_row = np.linspace(src_row, dst_row, length)\n\n # we subtract 1 from linewidth to change from pixel-counting\n # (make this line 3 pixels wide) to point distances (the\n # distance between pixel centers)\n col_width = (linewidth - 1) * np.sin(-theta) / 2\n row_width = (linewidth - 1) * np.cos(theta) / 2\n perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,\n linewidth) for row_i in line_row])\n perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,\n linewidth) for col_i in line_col])\n return np.array([perp_rows, perp_cols])\n", "path": "skimage/measure/profile.py"}]} | 3,472 | 707 |
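
The patch above makes `profile_line` default to `order=0` for boolean images and skip spline prefiltering for low orders. The underlying behaviour is that `scipy.ndimage.map_coordinates` returns an array of the input's dtype by default, so on a bool mask an interpolated value that lands marginally below 1.0 can be truncated to False, which is consistent with the stray 0 in the reported profile. A minimal standalone sketch of that behaviour (plain SciPy, not scikit-image; the sample coordinates are illustrative):

```python
import numpy as np
from scipy import ndimage as ndi

# Boolean disk mask, built the same way as in the bug report.
yy, xx = np.mgrid[0:200, 0:200]
mask = (yy - 150) ** 2 + (xx - 140) ** 2 < 20 ** 2

# Sample a short segment that stays well inside the disk.
rows = np.linspace(150.0, 160.0, 11)
cols = np.linspace(140.0, 145.0, 11)
coords = np.vstack([rows, cols])

# With order=1 the output keeps the bool dtype, so interpolated values that
# land slightly below 1.0 can truncate to False and punch holes in the profile.
linear = ndi.map_coordinates(mask, coords, order=1)

# order=0 is a plain nearest-neighbour lookup and stays exact on bool input,
# which is what the patch selects by default for boolean images.
nearest = ndi.map_coordinates(mask, coords, order=0)

print(linear, nearest, sep="\n")
```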
gh_patches_debug_49043 | rasdani/github-patches | git_diff | arviz-devs__arviz-2032 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plot_dot
**Describe the bug**
plot_dot's figure size doesn't behave the way I expect: when I set a `figsize` that is triple a previous one, the resulting plot is not triple the size. There are also some minor bugs where the dots seem to overlap.
**To Reproduce**
```
samples = stats.beta(2,2).rvs(100)
width = 10
fig, ax = plt.subplots(figsize=(width, 10))
az.plot_dot(samples, ax=ax)
ax.set_title(f"Width: {width}")
ax.set_xlim(0,1)
```
Then try this, and note that the figure is not three times the width:
```
width = 30
fig, ax = plt.subplots(figsize=(width, 10))
az.plot_dot(samples, ax=ax)
ax.set_title(f"Width: {width}")
ax.set_xlim(0,1)
```


**Expected behavior**
Figsize from `plt.subplots` is respected
**Additional context**
Arviz '0.12.0'
</issue>
<code>
[start of arviz/plots/backends/matplotlib/dotplot.py]
1 """Matplotlib dotplot."""
2 import math
3 import warnings
4 import numpy as np
5 import matplotlib.pyplot as plt
6 from matplotlib import _pylab_helpers
7
8 from ...plot_utils import _scale_fig_size
9 from . import backend_kwarg_defaults, create_axes_grid, backend_show
10 from ...plot_utils import plot_point_interval
11 from ...dotplot import wilkinson_algorithm, layout_stacks
12
13
14 def plot_dot(
15 values,
16 binwidth,
17 dotsize,
18 stackratio,
19 hdi_prob,
20 quartiles,
21 rotated,
22 dotcolor,
23 intervalcolor,
24 markersize,
25 markercolor,
26 marker,
27 figsize,
28 linewidth,
29 point_estimate,
30 nquantiles,
31 point_interval,
32 ax,
33 show,
34 backend_kwargs,
35 plot_kwargs,
36 ):
37 """Matplotlib dotplot."""
38 if backend_kwargs is None:
39 backend_kwargs = {}
40
41 backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}
42
43 backend_kwargs.setdefault("figsize", figsize)
44 backend_kwargs["squeeze"] = True
45
46 (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)
47
48 if plot_kwargs is None:
49 plot_kwargs = {}
50 plot_kwargs.setdefault("color", dotcolor)
51
52 if linewidth is None:
53 linewidth = auto_linewidth
54
55 if markersize is None:
56 markersize = auto_markersize
57
58 if ax is None:
59 fig_manager = _pylab_helpers.Gcf.get_active()
60 if fig_manager is not None:
61 ax = fig_manager.canvas.figure.gca()
62 else:
63 _, ax = create_axes_grid(
64 1,
65 backend_kwargs=backend_kwargs,
66 )
67
68 if point_interval:
69 ax = plot_point_interval(
70 ax,
71 values,
72 point_estimate,
73 hdi_prob,
74 quartiles,
75 linewidth,
76 markersize,
77 markercolor,
78 marker,
79 rotated,
80 intervalcolor,
81 "matplotlib",
82 )
83
84 if nquantiles > values.shape[0]:
85 warnings.warn(
86 "nquantiles must be less than or equal to the number of data points", UserWarning
87 )
88 nquantiles = values.shape[0]
89 else:
90 qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)
91 values = np.quantile(values, qlist)
92
93 if binwidth is None:
94 binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))
95
96 ## Wilkinson's Algorithm
97 stack_locs, stack_count = wilkinson_algorithm(values, binwidth)
98 x, y = layout_stacks(stack_locs, stack_count, binwidth, stackratio, rotated)
99
100 for (x_i, y_i) in zip(x, y):
101 dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)
102 ax.add_patch(dot)
103
104 if rotated:
105 ax.tick_params(bottom=False, labelbottom=False)
106 else:
107 ax.tick_params(left=False, labelleft=False)
108
109 ax.set_aspect("equal", adjustable="box")
110 ax.autoscale()
111
112 if backend_show(show):
113 plt.show()
114
115 return ax
116
[end of arviz/plots/backends/matplotlib/dotplot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/arviz/plots/backends/matplotlib/dotplot.py b/arviz/plots/backends/matplotlib/dotplot.py
--- a/arviz/plots/backends/matplotlib/dotplot.py
+++ b/arviz/plots/backends/matplotlib/dotplot.py
@@ -106,7 +106,7 @@
else:
ax.tick_params(left=False, labelleft=False)
- ax.set_aspect("equal", adjustable="box")
+ ax.set_aspect("equal", adjustable="datalim")
ax.autoscale()
if backend_show(show):
| {"golden_diff": "diff --git a/arviz/plots/backends/matplotlib/dotplot.py b/arviz/plots/backends/matplotlib/dotplot.py\n--- a/arviz/plots/backends/matplotlib/dotplot.py\n+++ b/arviz/plots/backends/matplotlib/dotplot.py\n@@ -106,7 +106,7 @@\n else:\n ax.tick_params(left=False, labelleft=False)\n \n- ax.set_aspect(\"equal\", adjustable=\"box\")\n+ ax.set_aspect(\"equal\", adjustable=\"datalim\")\n ax.autoscale()\n \n if backend_show(show):\n", "issue": "plot_dot \n**Describe the bug**\r\nplotdot fig size doesn't behave the way I expect, in that when I set `figsize` in an axes that triple a previous one its not triple the size. There also are some minor bugs where the dots seem to be overlapping some\r\n\r\n**To Reproduce**\r\n```\r\nsamples = stats.beta(2,2).rvs(100)\r\n\r\nwidth = 10\r\nfig, ax = plt.subplots(figsize=(width, 10))\r\naz.plot_dot(samples, ax=ax)\r\nax.set_title(f\"Width: {width}\")\r\nax.set_xlim(0,1)\r\n```\r\n\r\nThen try this, but see that figure is not three times the width\r\n```\r\nwidth = 30\r\nfig, ax = plt.subplots(figsize=(width, 10))\r\naz.plot_dot(samples, ax=ax)\r\nax.set_title(f\"Width: {width}\")\r\nax.set_xlim(0,1)\r\n```\r\n\r\n\r\n\r\n\r\n\r\n**Expected behavior**\r\nFigsize from `plt.subplots` is respected\r\n\r\n**Additional context**\r\nArviz '0.12.0'\r\n\n", "before_files": [{"content": "\"\"\"Matplotlib dotplot.\"\"\"\nimport math\nimport warnings\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import _pylab_helpers\n\nfrom ...plot_utils import _scale_fig_size\nfrom . import backend_kwarg_defaults, create_axes_grid, backend_show\nfrom ...plot_utils import plot_point_interval\nfrom ...dotplot import wilkinson_algorithm, layout_stacks\n\n\ndef plot_dot(\n values,\n binwidth,\n dotsize,\n stackratio,\n hdi_prob,\n quartiles,\n rotated,\n dotcolor,\n intervalcolor,\n markersize,\n markercolor,\n marker,\n figsize,\n linewidth,\n point_estimate,\n nquantiles,\n point_interval,\n ax,\n show,\n backend_kwargs,\n plot_kwargs,\n):\n \"\"\"Matplotlib dotplot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}\n\n backend_kwargs.setdefault(\"figsize\", figsize)\n backend_kwargs[\"squeeze\"] = True\n\n (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)\n\n if plot_kwargs is None:\n plot_kwargs = {}\n plot_kwargs.setdefault(\"color\", dotcolor)\n\n if linewidth is None:\n linewidth = auto_linewidth\n\n if markersize is None:\n markersize = auto_markersize\n\n if ax is None:\n fig_manager = _pylab_helpers.Gcf.get_active()\n if fig_manager is not None:\n ax = fig_manager.canvas.figure.gca()\n else:\n _, ax = create_axes_grid(\n 1,\n backend_kwargs=backend_kwargs,\n )\n\n if point_interval:\n ax = plot_point_interval(\n ax,\n values,\n point_estimate,\n hdi_prob,\n quartiles,\n linewidth,\n markersize,\n markercolor,\n marker,\n rotated,\n intervalcolor,\n \"matplotlib\",\n )\n\n if nquantiles > values.shape[0]:\n warnings.warn(\n \"nquantiles must be less than or equal to the number of data points\", UserWarning\n )\n nquantiles = values.shape[0]\n else:\n qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)\n values = np.quantile(values, qlist)\n\n if binwidth is None:\n binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))\n\n ## Wilkinson's Algorithm\n stack_locs, stack_count = wilkinson_algorithm(values, binwidth)\n x, y = layout_stacks(stack_locs, stack_count, binwidth, 
stackratio, rotated)\n\n for (x_i, y_i) in zip(x, y):\n dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)\n ax.add_patch(dot)\n\n if rotated:\n ax.tick_params(bottom=False, labelbottom=False)\n else:\n ax.tick_params(left=False, labelleft=False)\n\n ax.set_aspect(\"equal\", adjustable=\"box\")\n ax.autoscale()\n\n if backend_show(show):\n plt.show()\n\n return ax\n", "path": "arviz/plots/backends/matplotlib/dotplot.py"}]} | 1,847 | 127 |
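
The one-line fix above swaps `adjustable="box"` for `adjustable="datalim"` in the Matplotlib backend. With `"box"`, Matplotlib enforces the equal aspect by resizing the axes box while keeping the data limits, so a wider `figsize` only produces more empty margin; with `"datalim"` it keeps the box and rescales the data limits instead, so the requested figure width is actually used. A standalone sketch of the difference (plain Matplotlib, not ArviZ; the scatter data is arbitrary):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = rng.uniform(0, 0.2, 100)

for width, adjustable in [(10, "box"), (30, "box"), (30, "datalim")]:
    fig, ax = plt.subplots(figsize=(width, 10))
    ax.scatter(x, y)
    # "box" shrinks the axes box to honour the 1:1 aspect, so the extra
    # figure width becomes empty margin; "datalim" keeps the box and
    # rescales the data limits, so the wider figure is actually used.
    ax.set_aspect("equal", adjustable=adjustable)
    ax.set_title(f"width={width}, adjustable={adjustable}")
plt.show()
```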
gh_patches_debug_9828 | rasdani/github-patches | git_diff | secdev__scapy-3473 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
L2TP post_build is broken
### Brief description
l2tp.py's post_build is supposed to update the length. However, it only does this if the current length is None, and the length field is initialized to 0, not None, resulting in the length never being updated.
### Scapy version
2.4.5
### Python version
3.8
### Operating system
Ubuntu 20.04
### Additional environment information
_No response_
### How to reproduce
print( (L2TP(header=['control', 'length'], version=2) / 'blahblah').build() )
### Actual result
b'\xc0\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00blahblah'
### Expected result
b'\xc0\x02\x00\x14\x00\x00\x00\x00\x00\x00\x00\x00blahblah'
### Related resources
_No response_
</issue>
<code>
[start of scapy/layers/l2tp.py]
1 # This file is part of Scapy
2 # See http://www.secdev.org/projects/scapy for more information
3 # Copyright (C) Philippe Biondi <[email protected]>
4 # This program is published under a GPLv2 license
5
6 """
7 L2TP (Layer 2 Tunneling Protocol) for VPNs.
8
9 [RFC 2661]
10 """
11
12 import struct
13
14 from scapy.packet import Packet, bind_layers, bind_bottom_up
15 from scapy.fields import BitEnumField, ConditionalField, FlagsField, \
16 PadField, ShortField
17 from scapy.layers.inet import UDP
18 from scapy.layers.ppp import PPP
19
20
21 class L2TP(Packet):
22 name = "L2TP"
23 fields_desc = [
24 FlagsField("hdr", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501
25 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501
26 BitEnumField("version", 2, 4, {2: 'L2TPv2'}),
27
28 ConditionalField(ShortField("len", 0),
29 lambda pkt: pkt.hdr & 'control+length'),
30 ShortField("tunnel_id", 0),
31 ShortField("session_id", 0),
32 ConditionalField(ShortField("ns", 0),
33 lambda pkt: pkt.hdr & 'sequence+control'),
34 ConditionalField(ShortField("nr", 0),
35 lambda pkt: pkt.hdr & 'sequence+control'),
36 ConditionalField(
37 PadField(ShortField("offset", 0), 4, b"\x00"),
38 lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'
39 )
40 ]
41
42 def post_build(self, pkt, pay):
43 if self.len is None and self.hdr & 'control+length':
44 tmp_len = len(pkt) + len(pay)
45 pkt = pkt[:2] + struct.pack("!H", tmp_len) + pkt[4:]
46 return pkt + pay
47
48
49 bind_bottom_up(UDP, L2TP, dport=1701)
50 bind_bottom_up(UDP, L2TP, sport=1701)
51 bind_layers(UDP, L2TP, dport=1701, sport=1701)
52 bind_layers(L2TP, PPP,)
53
[end of scapy/layers/l2tp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scapy/layers/l2tp.py b/scapy/layers/l2tp.py
--- a/scapy/layers/l2tp.py
+++ b/scapy/layers/l2tp.py
@@ -25,7 +25,7 @@
'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501
BitEnumField("version", 2, 4, {2: 'L2TPv2'}),
- ConditionalField(ShortField("len", 0),
+ ConditionalField(ShortField("len", None),
lambda pkt: pkt.hdr & 'control+length'),
ShortField("tunnel_id", 0),
ShortField("session_id", 0),
| {"golden_diff": "diff --git a/scapy/layers/l2tp.py b/scapy/layers/l2tp.py\n--- a/scapy/layers/l2tp.py\n+++ b/scapy/layers/l2tp.py\n@@ -25,7 +25,7 @@\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n \n- ConditionalField(ShortField(\"len\", 0),\n+ ConditionalField(ShortField(\"len\", None),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n", "issue": "L2TP post_build is broken\n### Brief description\n\nl2tp.py post_build is supposed to update the length. However, it only does this if current length is None, and the length field is initialized to 0, not None, resulting in the length never being updated. \n\n### Scapy version\n\n2.4.5\n\n### Python version\n\n3.8\n\n### Operating system\n\nUbuntu 20.04\n\n### Additional environment information\n\n_No response_\n\n### How to reproduce\n\nprint( (L2TP(header=['control', 'length'], version=2) / 'blahblah').build() )\r\n\n\n### Actual result\n\nb'\\xc0\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00blahblah'\n\n### Expected result\n\nb'\\xc0\\x02\\x00\\x14\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00blahblah'\n\n### Related resources\n\n_No response_\n", "before_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nL2TP (Layer 2 Tunneling Protocol) for VPNs.\n\n[RFC 2661]\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet, bind_layers, bind_bottom_up\nfrom scapy.fields import BitEnumField, ConditionalField, FlagsField, \\\n PadField, ShortField\nfrom scapy.layers.inet import UDP\nfrom scapy.layers.ppp import PPP\n\n\nclass L2TP(Packet):\n name = \"L2TP\"\n fields_desc = [\n FlagsField(\"hdr\", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n\n ConditionalField(ShortField(\"len\", 0),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n ConditionalField(ShortField(\"ns\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(ShortField(\"nr\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(\n PadField(ShortField(\"offset\", 0), 4, b\"\\x00\"),\n lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'\n )\n ]\n\n def post_build(self, pkt, pay):\n if self.len is None and self.hdr & 'control+length':\n tmp_len = len(pkt) + len(pay)\n pkt = pkt[:2] + struct.pack(\"!H\", tmp_len) + pkt[4:]\n return pkt + pay\n\n\nbind_bottom_up(UDP, L2TP, dport=1701)\nbind_bottom_up(UDP, L2TP, sport=1701)\nbind_layers(UDP, L2TP, dport=1701, sport=1701)\nbind_layers(L2TP, PPP,)\n", "path": "scapy/layers/l2tp.py"}]} | 1,409 | 173 |
gh_patches_debug_18028 | rasdani/github-patches | git_diff | Mailu__Mailu-1316 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rainloop Webmail - Authentication fails if you have a special character in your password
In the admin interface, you can define a new password and include a special character like `è`.
It works fine in the admin interface, but it doesn't work at all with the Rainloop webmail. If you try to log in, you get a message indicating that authentication failed; see the screenshot in French:

</issue>
<code>
[start of core/admin/mailu/internal/nginx.py]
1 from mailu import models
2 from flask import current_app as app
3
4 import re
5 import urllib
6 import ipaddress
7 import socket
8 import tenacity
9
10
11 SUPPORTED_AUTH_METHODS = ["none", "plain"]
12
13
14 STATUSES = {
15 "authentication": ("Authentication credentials invalid", {
16 "imap": "AUTHENTICATIONFAILED",
17 "smtp": "535 5.7.8",
18 "pop3": "-ERR Authentication failed"
19 }),
20 }
21
22
23 def handle_authentication(headers):
24 """ Handle an HTTP nginx authentication request
25 See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol
26 """
27 method = headers["Auth-Method"]
28 protocol = headers["Auth-Protocol"]
29 # Incoming mail, no authentication
30 if method == "none" and protocol == "smtp":
31 server, port = get_server(headers["Auth-Protocol"], False)
32 return {
33 "Auth-Status": "OK",
34 "Auth-Server": server,
35 "Auth-Port": port
36 }
37 # Authenticated user
38 elif method == "plain":
39 server, port = get_server(headers["Auth-Protocol"], True)
40 user_email = urllib.parse.unquote(headers["Auth-User"])
41 password = urllib.parse.unquote(headers["Auth-Pass"])
42 ip = urllib.parse.unquote(headers["Client-Ip"])
43 user = models.User.query.get(user_email)
44 status = False
45 if user:
46 for token in user.tokens:
47 if (token.check_password(password) and
48 (not token.ip or token.ip == ip)):
49 status = True
50 if user.check_password(password):
51 status = True
52 if status:
53 if protocol == "imap" and not user.enable_imap:
54 status = False
55 elif protocol == "pop3" and not user.enable_pop:
56 status = False
57 if status and user.enabled:
58 return {
59 "Auth-Status": "OK",
60 "Auth-Server": server,
61 "Auth-Port": port
62 }
63 else:
64 status, code = get_status(protocol, "authentication")
65 return {
66 "Auth-Status": status,
67 "Auth-Error-Code": code,
68 "Auth-Wait": 0
69 }
70 # Unexpected
71 return {}
72
73
74 def get_status(protocol, status):
75 """ Return the proper error code depending on the protocol
76 """
77 status, codes = STATUSES[status]
78 return status, codes[protocol]
79
80 def extract_host_port(host_and_port, default_port):
81 host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()
82 return host, int(port) if port else default_port
83
84 def get_server(protocol, authenticated=False):
85 if protocol == "imap":
86 hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)
87 elif protocol == "pop3":
88 hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)
89 elif protocol == "smtp":
90 if authenticated:
91 hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)
92 else:
93 hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)
94 try:
95 # test if hostname is already resolved to an ip adddress
96 ipaddress.ip_address(hostname)
97 except:
98 # hostname is not an ip address - so we need to resolve it
99 hostname = resolve_hostname(hostname)
100 return hostname, port
101
102 @tenacity.retry(stop=tenacity.stop_after_attempt(100),
103 wait=tenacity.wait_random(min=2, max=5))
104 def resolve_hostname(hostname):
105 """ This function uses system DNS to resolve a hostname.
106 It is capable of retrying in case the host is not immediately available
107 """
108 return socket.gethostbyname(hostname)
109
[end of core/admin/mailu/internal/nginx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py
--- a/core/admin/mailu/internal/nginx.py
+++ b/core/admin/mailu/internal/nginx.py
@@ -37,8 +37,14 @@
# Authenticated user
elif method == "plain":
server, port = get_server(headers["Auth-Protocol"], True)
- user_email = urllib.parse.unquote(headers["Auth-User"])
- password = urllib.parse.unquote(headers["Auth-Pass"])
+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should
+ # be ASCII and are generally considered ISO8859-1. However when passing
+ # the password, nginx does not transcode the input UTF string, thus
+ # we need to manually decode.
+ raw_user_email = urllib.parse.unquote(headers["Auth-User"])
+ user_email = raw_user_email.encode("iso8859-1").decode("utf8")
+ raw_password = urllib.parse.unquote(headers["Auth-Pass"])
+ password = raw_password.encode("iso8859-1").decode("utf8")
ip = urllib.parse.unquote(headers["Client-Ip"])
user = models.User.query.get(user_email)
status = False
| {"golden_diff": "diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py\n--- a/core/admin/mailu/internal/nginx.py\n+++ b/core/admin/mailu/internal/nginx.py\n@@ -37,8 +37,14 @@\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n- user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n- password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n+ # be ASCII and are generally considered ISO8859-1. However when passing\n+ # the password, nginx does not transcode the input UTF string, thus\n+ # we need to manually decode.\n+ raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n+ user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n+ raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n+ password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n", "issue": "Rainloop Webmail - Authentication fails if you have a special character in your password\nIn the admin interface, you can define a new password and you can put a special character like `\u00e8`.\r\n\r\nIt works fine with admin interface but it doesn't work at all with the Rainloop webmail. If you try to log in, you will have a message to indicate that the authentication fails, see screenshoot in french:\r\n\r\n\r\n\n", "before_files": [{"content": "from mailu import models\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport tenacity\n\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n}\n\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(headers[\"Auth-Protocol\"], False)\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n if user:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n status = True\n if user.check_password(password):\n status = True\n if status:\n if protocol == \"imap\" and not user.enable_imap:\n status = False\n elif protocol == \"pop3\" and not user.enable_pop:\n status = False\n if status and user.enabled:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = 
STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}]} | 1,759 | 291 |
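
The patch above re-decodes the `Auth-User` and `Auth-Pass` headers because, as its own comment notes (RFC 2616 / PEP 3333), the WSGI layer hands header values to Flask as ISO-8859-1 strings while nginx forwards the password as raw UTF-8 bytes, so a non-ASCII character arrives as mojibake until it is re-encoded and decoded. A short standalone sketch of that round-trip (not the Mailu code path; the password is just an example):

```python
# What nginx puts in the Auth-Pass header: the raw UTF-8 bytes.
wire = "myp\u00e8re".encode("utf8")            # password "mypère"

# What Flask sees after WSGI decodes the header as ISO-8859-1: mojibake.
as_seen = wire.decode("iso8859-1")             # 'mypÃ¨re'

# The fix: undo the ISO-8859-1 decoding and re-interpret the bytes as UTF-8.
password = as_seen.encode("iso8859-1").decode("utf8")
assert password == "mypère"
```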
gh_patches_debug_15801 | rasdani/github-patches | git_diff | pyca__cryptography-1430 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OpenSSL's HMAC Context isn't marked as implementing MACContext
It ought to be.
</issue>
<code>
[start of cryptography/hazmat/backends/commoncrypto/hmac.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography import utils
17 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
18 from cryptography.hazmat.primitives import interfaces
19
20
21 @utils.register_interface(interfaces.HashContext)
22 class _HMACContext(object):
23 def __init__(self, backend, key, algorithm, ctx=None):
24 self.algorithm = algorithm
25 self._backend = backend
26 if ctx is None:
27 ctx = self._backend._ffi.new("CCHmacContext *")
28 try:
29 alg = self._backend._supported_hmac_algorithms[algorithm.name]
30 except KeyError:
31 raise UnsupportedAlgorithm(
32 "{0} is not a supported HMAC hash on this backend.".format(
33 algorithm.name),
34 _Reasons.UNSUPPORTED_HASH
35 )
36
37 self._backend._lib.CCHmacInit(ctx, alg, key, len(key))
38
39 self._ctx = ctx
40 self._key = key
41
42 def copy(self):
43 copied_ctx = self._backend._ffi.new("CCHmacContext *")
44 # CommonCrypto has no APIs for copying HMACs, so we have to copy the
45 # underlying struct.
46 copied_ctx[0] = self._ctx[0]
47 return _HMACContext(
48 self._backend, self._key, self.algorithm, ctx=copied_ctx
49 )
50
51 def update(self, data):
52 self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))
53
54 def finalize(self):
55 buf = self._backend._ffi.new("unsigned char[]",
56 self.algorithm.digest_size)
57 self._backend._lib.CCHmacFinal(self._ctx, buf)
58 return self._backend._ffi.buffer(buf)[:]
59
[end of cryptography/hazmat/backends/commoncrypto/hmac.py]
[start of cryptography/hazmat/backends/openssl/hmac.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16
17 from cryptography import utils
18 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
19 from cryptography.hazmat.primitives import interfaces
20
21
22 @utils.register_interface(interfaces.HashContext)
23 class _HMACContext(object):
24 def __init__(self, backend, key, algorithm, ctx=None):
25 self.algorithm = algorithm
26 self._backend = backend
27
28 if ctx is None:
29 ctx = self._backend._ffi.new("HMAC_CTX *")
30 self._backend._lib.HMAC_CTX_init(ctx)
31 ctx = self._backend._ffi.gc(
32 ctx, self._backend._lib.HMAC_CTX_cleanup
33 )
34 evp_md = self._backend._lib.EVP_get_digestbyname(
35 algorithm.name.encode('ascii'))
36 if evp_md == self._backend._ffi.NULL:
37 raise UnsupportedAlgorithm(
38 "{0} is not a supported hash on this backend.".format(
39 algorithm.name),
40 _Reasons.UNSUPPORTED_HASH
41 )
42 res = self._backend._lib.Cryptography_HMAC_Init_ex(
43 ctx, key, len(key), evp_md, self._backend._ffi.NULL
44 )
45 assert res != 0
46
47 self._ctx = ctx
48 self._key = key
49
50 def copy(self):
51 copied_ctx = self._backend._ffi.new("HMAC_CTX *")
52 self._backend._lib.HMAC_CTX_init(copied_ctx)
53 copied_ctx = self._backend._ffi.gc(
54 copied_ctx, self._backend._lib.HMAC_CTX_cleanup
55 )
56 res = self._backend._lib.Cryptography_HMAC_CTX_copy(
57 copied_ctx, self._ctx
58 )
59 assert res != 0
60 return _HMACContext(
61 self._backend, self._key, self.algorithm, ctx=copied_ctx
62 )
63
64 def update(self, data):
65 res = self._backend._lib.Cryptography_HMAC_Update(
66 self._ctx, data, len(data)
67 )
68 assert res != 0
69
70 def finalize(self):
71 buf = self._backend._ffi.new("unsigned char[]",
72 self._backend._lib.EVP_MAX_MD_SIZE)
73 outlen = self._backend._ffi.new("unsigned int *")
74 res = self._backend._lib.Cryptography_HMAC_Final(
75 self._ctx, buf, outlen
76 )
77 assert res != 0
78 assert outlen[0] == self.algorithm.digest_size
79 self._backend._lib.HMAC_CTX_cleanup(self._ctx)
80 return self._backend._ffi.buffer(buf)[:outlen[0]]
81
[end of cryptography/hazmat/backends/openssl/hmac.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cryptography/hazmat/backends/commoncrypto/hmac.py b/cryptography/hazmat/backends/commoncrypto/hmac.py
--- a/cryptography/hazmat/backends/commoncrypto/hmac.py
+++ b/cryptography/hazmat/backends/commoncrypto/hmac.py
@@ -18,6 +18,7 @@
from cryptography.hazmat.primitives import interfaces
[email protected]_interface(interfaces.MACContext)
@utils.register_interface(interfaces.HashContext)
class _HMACContext(object):
def __init__(self, backend, key, algorithm, ctx=None):
diff --git a/cryptography/hazmat/backends/openssl/hmac.py b/cryptography/hazmat/backends/openssl/hmac.py
--- a/cryptography/hazmat/backends/openssl/hmac.py
+++ b/cryptography/hazmat/backends/openssl/hmac.py
@@ -19,6 +19,7 @@
from cryptography.hazmat.primitives import interfaces
[email protected]_interface(interfaces.MACContext)
@utils.register_interface(interfaces.HashContext)
class _HMACContext(object):
def __init__(self, backend, key, algorithm, ctx=None):
| {"golden_diff": "diff --git a/cryptography/hazmat/backends/commoncrypto/hmac.py b/cryptography/hazmat/backends/commoncrypto/hmac.py\n--- a/cryptography/hazmat/backends/commoncrypto/hmac.py\n+++ b/cryptography/hazmat/backends/commoncrypto/hmac.py\n@@ -18,6 +18,7 @@\n from cryptography.hazmat.primitives import interfaces\n \n \[email protected]_interface(interfaces.MACContext)\n @utils.register_interface(interfaces.HashContext)\n class _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\ndiff --git a/cryptography/hazmat/backends/openssl/hmac.py b/cryptography/hazmat/backends/openssl/hmac.py\n--- a/cryptography/hazmat/backends/openssl/hmac.py\n+++ b/cryptography/hazmat/backends/openssl/hmac.py\n@@ -19,6 +19,7 @@\n from cryptography.hazmat.primitives import interfaces\n \n \[email protected]_interface(interfaces.MACContext)\n @utils.register_interface(interfaces.HashContext)\n class _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n", "issue": "OpenSSL's HMAC Context isn't marked as implementing MACContext\nIt ought to be.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n if ctx is None:\n ctx = self._backend._ffi.new(\"CCHmacContext *\")\n try:\n alg = self._backend._supported_hmac_algorithms[algorithm.name]\n except KeyError:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported HMAC hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n\n self._backend._lib.CCHmacInit(ctx, alg, key, len(key))\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"CCHmacContext *\")\n # CommonCrypto has no APIs for copying HMACs, so we have to copy the\n # underlying struct.\n copied_ctx[0] = self._ctx[0]\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self.algorithm.digest_size)\n self._backend._lib.CCHmacFinal(self._ctx, buf)\n return self._backend._ffi.buffer(buf)[:]\n", "path": "cryptography/hazmat/backends/commoncrypto/hmac.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT 
WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n\n if ctx is None:\n ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(ctx)\n ctx = self._backend._ffi.gc(\n ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n evp_md = self._backend._lib.EVP_get_digestbyname(\n algorithm.name.encode('ascii'))\n if evp_md == self._backend._ffi.NULL:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n res = self._backend._lib.Cryptography_HMAC_Init_ex(\n ctx, key, len(key), evp_md, self._backend._ffi.NULL\n )\n assert res != 0\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(copied_ctx)\n copied_ctx = self._backend._ffi.gc(\n copied_ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n res = self._backend._lib.Cryptography_HMAC_CTX_copy(\n copied_ctx, self._ctx\n )\n assert res != 0\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n res = self._backend._lib.Cryptography_HMAC_Update(\n self._ctx, data, len(data)\n )\n assert res != 0\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self._backend._lib.EVP_MAX_MD_SIZE)\n outlen = self._backend._ffi.new(\"unsigned int *\")\n res = self._backend._lib.Cryptography_HMAC_Final(\n self._ctx, buf, outlen\n )\n assert res != 0\n assert outlen[0] == self.algorithm.digest_size\n self._backend._lib.HMAC_CTX_cleanup(self._ctx)\n return self._backend._ffi.buffer(buf)[:outlen[0]]\n", "path": "cryptography/hazmat/backends/openssl/hmac.py"}]} | 2,028 | 252 |
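
The change above registers the two backend `_HMACContext` classes against `interfaces.MACContext`, so `isinstance` checks against that interface start passing for backend-created HMAC contexts. A rough sketch of the kind of check this enables — assuming the 0.x-era API where `interfaces.MACContext` and `default_backend()` exist, and reaching the backend context through the high-level `HMAC` object's private `_ctx` attribute, which is an implementation detail:

```python
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, hmac, interfaces

h = hmac.HMAC(b"secret-key", hashes.SHA256(), backend=default_backend())
# Before the patch this assertion fails for the OpenSSL/CommonCrypto backend
# contexts; after it, the backend context advertises MACContext.
assert isinstance(h._ctx, interfaces.MACContext)
```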