problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64, 271-2.05k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
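Each row below pairs a GitHub issue with the buggy source files, a reference patch (`golden_diff`), and a `verification_info` blob bundling the issue, `before_files`, and `after_files`. The snippet below is a minimal, illustrative sketch of loading and inspecting one row: the Hub dataset ID and split name are assumptions inferred from the `source` column above, and the decoding step assumes `verification_info` may be stored as a JSON-encoded string, so adjust both to match wherever the data actually lives.

```python
# Minimal sketch: load the dataset and inspect one debugging task.
# NOTE: the dataset ID ("rasdani/github-patches") and split ("train") are
# assumptions inferred from the `source` column; change them if the data is
# hosted elsewhere or under a different configuration.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens"], row["num_tokens_diff"])
print(row["prompt"][:300])       # issue text + buggy files + diff-format instructions
print(row["golden_diff"][:300])  # reference patch in `git diff` format

# verification_info bundles the issue, before_files and after_files; decode it
# if it is stored as a JSON string rather than a nested structure.
verification = row["verification_info"]
if isinstance(verification, str):
    verification = json.loads(verification)
print(sorted(verification.keys()))
```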
gh_patches_debug_9180 | rasdani/github-patches | git_diff | streamlit__streamlit-1722 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Top Half of Seaborn Chart Title Gets Cut Off
# Summary
When adding a title to a seaborn or perhaps MATPLOTLIB chart (have not tested with MATPLOTLIB yet) chart, the top half of the title is cut off or not displayed
```
# Steps to reproduce
import matplotlib.pyplot as plt
import seaborn as sns
import streamlit as st
mpg = sns.load_dataset("mpg")
option = st.sidebar.multiselect('Choose country of origin:', mpg.origin.unique()
sns.relplot(x="horsepower", y="mpg", hue="origin", size="weight", sizes=(40, 400), alpha=0.5, palette="muted", height=6, data=mpg.query("origin == @option"))
plt.title('MPG vs Weight by Country of Origin')
st.pyplot()
```
## Expected behavior:
seaborn chart title to be fully visible
## Actual behavior:
Top half of seaborn chart title is cut off
## Is this a regression?
That is, did this use to work the way you expected in the past? First time using streamlit, so not sure if worked in the past
yes? maybe?
# Debug info
- Streamlit version: 0.47.4
- Python version: 3.7
- Using Conda
- OS version: Windows 10
- Browser version: Chrome version 77
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/elements/pyplot.py`
Content:
```
1 # Copyright 2018-2020 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Streamlit support for Matplotlib PyPlot charts."""
16
17 import io
18
19 try:
20 import matplotlib # noqa: F401
21 import matplotlib.pyplot as plt
22
23 plt.ioff()
24 except ImportError:
25 raise ImportError("pyplot() command requires matplotlib")
26
27 import streamlit.elements.image_proto as image_proto
28
29 from streamlit.logger import get_logger
30
31 LOGGER = get_logger(__name__)
32
33
34 def marshall(coordinates, new_element_proto, fig=None, clear_figure=True, **kwargs):
35 """Construct a matplotlib.pyplot figure.
36
37 See DeltaGenerator.vega_lite_chart for docs.
38 """
39 # You can call .savefig() on a Figure object or directly on the pyplot
40 # module, in which case you're doing it to the latest Figure.
41 if not fig:
42 if clear_figure is None:
43 clear_figure = True
44 fig = plt
45
46 # Normally, dpi is set to 'figure', and the figure's dpi is set to 100.
47 # So here we pick double of that to make things look good in a high
48 # DPI display.
49 options = {"dpi": 200, "format": "png"}
50
51 # If some of the options are passed in from kwargs then replace
52 # the values in options with the ones from kwargs
53 options = {a: kwargs.get(a, b) for a, b in options.items()}
54 # Merge options back into kwargs.
55 kwargs.update(options)
56
57 image = io.BytesIO()
58 fig.savefig(image, **kwargs)
59 image_proto.marshall_images(
60 coordinates,
61 image,
62 None,
63 -2,
64 new_element_proto.imgs,
65 False,
66 channels="RGB",
67 format="PNG",
68 )
69
70 # Clear the figure after rendering it. This means that subsequent
71 # plt calls will be starting fresh.
72 if clear_figure:
73 fig.clf()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/streamlit/elements/pyplot.py b/lib/streamlit/elements/pyplot.py
--- a/lib/streamlit/elements/pyplot.py
+++ b/lib/streamlit/elements/pyplot.py
@@ -46,7 +46,7 @@
# Normally, dpi is set to 'figure', and the figure's dpi is set to 100.
# So here we pick double of that to make things look good in a high
# DPI display.
- options = {"dpi": 200, "format": "png"}
+ options = {"bbox_inches": "tight", "dpi": 200, "format": "png"}
# If some of the options are passed in from kwargs then replace
# the values in options with the ones from kwargs
| {"golden_diff": "diff --git a/lib/streamlit/elements/pyplot.py b/lib/streamlit/elements/pyplot.py\n--- a/lib/streamlit/elements/pyplot.py\n+++ b/lib/streamlit/elements/pyplot.py\n@@ -46,7 +46,7 @@\n # Normally, dpi is set to 'figure', and the figure's dpi is set to 100.\n # So here we pick double of that to make things look good in a high\n # DPI display.\n- options = {\"dpi\": 200, \"format\": \"png\"}\n+ options = {\"bbox_inches\": \"tight\", \"dpi\": 200, \"format\": \"png\"}\n \n # If some of the options are passed in from kwargs then replace\n # the values in options with the ones from kwargs\n", "issue": "Top Half of Seaborn Chart Title Gets Cut Off\n# Summary\r\nWhen adding a title to a seaborn or perhaps MATPLOTLIB chart (have not tested with MATPLOTLIB yet) chart, the top half of the title is cut off or not displayed\r\n\r\n```\r\n# Steps to reproduce\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\nimport streamlit as st\r\n\r\nmpg = sns.load_dataset(\"mpg\")\r\noption = st.sidebar.multiselect('Choose country of origin:', mpg.origin.unique()\r\nsns.relplot(x=\"horsepower\", y=\"mpg\", hue=\"origin\", size=\"weight\", sizes=(40, 400), alpha=0.5, palette=\"muted\", height=6, data=mpg.query(\"origin == @option\"))\r\nplt.title('MPG vs Weight by Country of Origin')\r\nst.pyplot()\r\n```\r\n\r\n## Expected behavior:\r\nseaborn chart title to be fully visible\r\n\r\n## Actual behavior:\r\nTop half of seaborn chart title is cut off\r\n\r\n## Is this a regression?\r\nThat is, did this use to work the way you expected in the past? First time using streamlit, so not sure if worked in the past\r\nyes? maybe?\r\n\r\n# Debug info\r\n- Streamlit version: 0.47.4\r\n- Python version: 3.7\r\n- Using Conda\r\n- OS version: Windows 10\r\n- Browser version: Chrome version 77\r\n\n", "before_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Streamlit support for Matplotlib PyPlot charts.\"\"\"\n\nimport io\n\ntry:\n import matplotlib # noqa: F401\n import matplotlib.pyplot as plt\n\n plt.ioff()\nexcept ImportError:\n raise ImportError(\"pyplot() command requires matplotlib\")\n\nimport streamlit.elements.image_proto as image_proto\n\nfrom streamlit.logger import get_logger\n\nLOGGER = get_logger(__name__)\n\n\ndef marshall(coordinates, new_element_proto, fig=None, clear_figure=True, **kwargs):\n \"\"\"Construct a matplotlib.pyplot figure.\n\n See DeltaGenerator.vega_lite_chart for docs.\n \"\"\"\n # You can call .savefig() on a Figure object or directly on the pyplot\n # module, in which case you're doing it to the latest Figure.\n if not fig:\n if clear_figure is None:\n clear_figure = True\n fig = plt\n\n # Normally, dpi is set to 'figure', and the figure's dpi is set to 100.\n # So here we pick double of that to make things look good in a high\n # DPI display.\n options = {\"dpi\": 200, \"format\": \"png\"}\n\n # If some of the options are passed in from kwargs then replace\n # the values in options with the ones 
from kwargs\n options = {a: kwargs.get(a, b) for a, b in options.items()}\n # Merge options back into kwargs.\n kwargs.update(options)\n\n image = io.BytesIO()\n fig.savefig(image, **kwargs)\n image_proto.marshall_images(\n coordinates,\n image,\n None,\n -2,\n new_element_proto.imgs,\n False,\n channels=\"RGB\",\n format=\"PNG\",\n )\n\n # Clear the figure after rendering it. This means that subsequent\n # plt calls will be starting fresh.\n if clear_figure:\n fig.clf()\n", "path": "lib/streamlit/elements/pyplot.py"}], "after_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Streamlit support for Matplotlib PyPlot charts.\"\"\"\n\nimport io\n\ntry:\n import matplotlib # noqa: F401\n import matplotlib.pyplot as plt\n\n plt.ioff()\nexcept ImportError:\n raise ImportError(\"pyplot() command requires matplotlib\")\n\nimport streamlit.elements.image_proto as image_proto\n\nfrom streamlit.logger import get_logger\n\nLOGGER = get_logger(__name__)\n\n\ndef marshall(coordinates, new_element_proto, fig=None, clear_figure=True, **kwargs):\n \"\"\"Construct a matplotlib.pyplot figure.\n\n See DeltaGenerator.vega_lite_chart for docs.\n \"\"\"\n # You can call .savefig() on a Figure object or directly on the pyplot\n # module, in which case you're doing it to the latest Figure.\n if not fig:\n if clear_figure is None:\n clear_figure = True\n fig = plt\n\n # Normally, dpi is set to 'figure', and the figure's dpi is set to 100.\n # So here we pick double of that to make things look good in a high\n # DPI display.\n options = {\"bbox_inches\": \"tight\", \"dpi\": 200, \"format\": \"png\"}\n\n # If some of the options are passed in from kwargs then replace\n # the values in options with the ones from kwargs\n options = {a: kwargs.get(a, b) for a, b in options.items()}\n # Merge options back into kwargs.\n kwargs.update(options)\n\n image = io.BytesIO()\n fig.savefig(image, **kwargs)\n image_proto.marshall_images(\n coordinates,\n image,\n None,\n -2,\n new_element_proto.imgs,\n False,\n channels=\"RGB\",\n format=\"PNG\",\n )\n\n # Clear the figure after rendering it. This means that subsequent\n # plt calls will be starting fresh.\n if clear_figure:\n fig.clf()\n", "path": "lib/streamlit/elements/pyplot.py"}]} | 1,246 | 174 |
gh_patches_debug_7479 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-4389 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NBA URLs FAIL without INDEX.HTML
the NBA extractor does not work if URL does not explicitly end with index.html (which appears to be the default)
URL: http://www.nba.com/video/games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/
C:>youtube-dl -v http://www.nba.com/video/games/hornets/2014/12/05/0021400276-n
yk-cha-play5.nba/
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.nba.com/video/games/hornets/2014/1
2/05/0021400276-nyk-cha-play5.nba/']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2014.12.06.1
[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-40824-
[debug] Proxy map: {}
[NBA] /games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/: Downloading webpa
ge
[debug] Invoking downloader on u'http://ht-mobile.cdn.turner.com/nba/big/games/h
ornets/2014/12/05/0021400276-nyk-cha-play5.nba/_nba_1280x720.mp4'
ERROR: unable to download video data: HTTP Error 404: Not Found
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 1091, in process_info
File "youtube_dl\YoutubeDL.pyo", line 1067, in dl
File "youtube_dl\downloader\common.pyo", line 294, in download
File "youtube_dl\downloader\http.pyo", line 66, in real_download
File "youtube_dl\YoutubeDL.pyo", line 1325, in urlopen
File "urllib2.pyo", line 410, in open
File "urllib2.pyo", line 523, in http_response
File "urllib2.pyo", line 448, in error
File "urllib2.pyo", line 382, in _call_chain
File "urllib2.pyo", line 531, in http_error_default
HTTPError: HTTP Error 404: Not Found
(same vid but with index.html)
URL: http://www.nba.com/video/games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/index.html
C:>youtube-dl -v http://www.nba.com/video/games/hornets/2014/12/05/0021400276-n
yk-cha-play5.nba/index.html
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.nba.com/video/games/hornets/2014/1
2/05/0021400276-nyk-cha-play5.nba/index.html']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2014.12.06.1
[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-40824-
[debug] Proxy map: {}
[NBA] /games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba: Downloading webpag
e
[debug] Invoking downloader on u'http://ht-mobile.cdn.turner.com/nba/big/games/h
ornets/2014/12/05/0021400276-nyk-cha-play5.nba_nba_1280x720.mp4'
[download] Destination: Walker From Behind-0021400276-nyk-cha-play5.nba.mp4
[download] 100% of 5.76MiB in 00:04
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/nba.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import re
4
5 from .common import InfoExtractor
6 from ..utils import (
7 remove_end,
8 parse_duration,
9 )
10
11
12 class NBAIE(InfoExtractor):
13 _VALID_URL = r'https?://(?:watch\.|www\.)?nba\.com/(?:nba/)?video(?P<id>/[^?]*?)(?:/index\.html)?(?:\?.*)?$'
14 _TEST = {
15 'url': 'http://www.nba.com/video/games/nets/2012/12/04/0021200253-okc-bkn-recap.nba/index.html',
16 'md5': 'c0edcfc37607344e2ff8f13c378c88a4',
17 'info_dict': {
18 'id': '0021200253-okc-bkn-recap.nba',
19 'ext': 'mp4',
20 'title': 'Thunder vs. Nets',
21 'description': 'Kevin Durant scores 32 points and dishes out six assists as the Thunder beat the Nets in Brooklyn.',
22 'duration': 181,
23 },
24 }
25
26 def _real_extract(self, url):
27 mobj = re.match(self._VALID_URL, url)
28 video_id = mobj.group('id')
29
30 webpage = self._download_webpage(url, video_id)
31
32 video_url = 'http://ht-mobile.cdn.turner.com/nba/big' + video_id + '_nba_1280x720.mp4'
33
34 shortened_video_id = video_id.rpartition('/')[2]
35 title = remove_end(
36 self._og_search_title(webpage, default=shortened_video_id), ' : NBA.com')
37
38 description = self._og_search_description(webpage)
39 duration = parse_duration(
40 self._html_search_meta('duration', webpage, 'duration', fatal=False))
41
42 return {
43 'id': shortened_video_id,
44 'url': video_url,
45 'title': title,
46 'description': description,
47 'duration': duration,
48 }
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/nba.py b/youtube_dl/extractor/nba.py
--- a/youtube_dl/extractor/nba.py
+++ b/youtube_dl/extractor/nba.py
@@ -10,7 +10,7 @@
class NBAIE(InfoExtractor):
- _VALID_URL = r'https?://(?:watch\.|www\.)?nba\.com/(?:nba/)?video(?P<id>/[^?]*?)(?:/index\.html)?(?:\?.*)?$'
+ _VALID_URL = r'https?://(?:watch\.|www\.)?nba\.com/(?:nba/)?video(?P<id>/[^?]*?)/?(?:/index\.html)?(?:\?.*)?$'
_TEST = {
'url': 'http://www.nba.com/video/games/nets/2012/12/04/0021200253-okc-bkn-recap.nba/index.html',
'md5': 'c0edcfc37607344e2ff8f13c378c88a4',
| {"golden_diff": "diff --git a/youtube_dl/extractor/nba.py b/youtube_dl/extractor/nba.py\n--- a/youtube_dl/extractor/nba.py\n+++ b/youtube_dl/extractor/nba.py\n@@ -10,7 +10,7 @@\n \n \n class NBAIE(InfoExtractor):\n- _VALID_URL = r'https?://(?:watch\\.|www\\.)?nba\\.com/(?:nba/)?video(?P<id>/[^?]*?)(?:/index\\.html)?(?:\\?.*)?$'\n+ _VALID_URL = r'https?://(?:watch\\.|www\\.)?nba\\.com/(?:nba/)?video(?P<id>/[^?]*?)/?(?:/index\\.html)?(?:\\?.*)?$'\n _TEST = {\n 'url': 'http://www.nba.com/video/games/nets/2012/12/04/0021200253-okc-bkn-recap.nba/index.html',\n 'md5': 'c0edcfc37607344e2ff8f13c378c88a4',\n", "issue": "NBA URLs FAIL without INDEX.HTML\nthe NBA extractor does not work if URL does not explicitly end with index.html (which appears to be the default)\n\nURL: http://www.nba.com/video/games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/\n\nC:>youtube-dl -v http://www.nba.com/video/games/hornets/2014/12/05/0021400276-n\nyk-cha-play5.nba/\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: ['-v', 'http://www.nba.com/video/games/hornets/2014/1\n2/05/0021400276-nyk-cha-play5.nba/']\n[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252\n[debug] youtube-dl version 2014.12.06.1\n[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1\n[debug] exe versions: ffmpeg N-40824-\n[debug] Proxy map: {}\n[NBA] /games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/: Downloading webpa\nge\n[debug] Invoking downloader on u'http://ht-mobile.cdn.turner.com/nba/big/games/h\nornets/2014/12/05/0021400276-nyk-cha-play5.nba/_nba_1280x720.mp4'\nERROR: unable to download video data: HTTP Error 404: Not Found\nTraceback (most recent call last):\n File \"youtube_dl\\YoutubeDL.pyo\", line 1091, in process_info\n File \"youtube_dl\\YoutubeDL.pyo\", line 1067, in dl\n File \"youtube_dl\\downloader\\common.pyo\", line 294, in download\n File \"youtube_dl\\downloader\\http.pyo\", line 66, in real_download\n File \"youtube_dl\\YoutubeDL.pyo\", line 1325, in urlopen\n File \"urllib2.pyo\", line 410, in open\n File \"urllib2.pyo\", line 523, in http_response\n File \"urllib2.pyo\", line 448, in error\n File \"urllib2.pyo\", line 382, in _call_chain\n File \"urllib2.pyo\", line 531, in http_error_default\nHTTPError: HTTP Error 404: Not Found\n\n(same vid but with index.html)\nURL: http://www.nba.com/video/games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba/index.html\n\nC:>youtube-dl -v http://www.nba.com/video/games/hornets/2014/12/05/0021400276-n\nyk-cha-play5.nba/index.html\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: ['-v', 'http://www.nba.com/video/games/hornets/2014/1\n2/05/0021400276-nyk-cha-play5.nba/index.html']\n[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252\n[debug] youtube-dl version 2014.12.06.1\n[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1\n[debug] exe versions: ffmpeg N-40824-\n[debug] Proxy map: {}\n[NBA] /games/hornets/2014/12/05/0021400276-nyk-cha-play5.nba: Downloading webpag\ne\n[debug] Invoking downloader on u'http://ht-mobile.cdn.turner.com/nba/big/games/h\nornets/2014/12/05/0021400276-nyk-cha-play5.nba_nba_1280x720.mp4'\n[download] Destination: Walker From Behind-0021400276-nyk-cha-play5.nba.mp4\n[download] 100% of 5.76MiB in 00:04\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n remove_end,\n parse_duration,\n)\n\n\nclass NBAIE(InfoExtractor):\n _VALID_URL = 
r'https?://(?:watch\\.|www\\.)?nba\\.com/(?:nba/)?video(?P<id>/[^?]*?)(?:/index\\.html)?(?:\\?.*)?$'\n _TEST = {\n 'url': 'http://www.nba.com/video/games/nets/2012/12/04/0021200253-okc-bkn-recap.nba/index.html',\n 'md5': 'c0edcfc37607344e2ff8f13c378c88a4',\n 'info_dict': {\n 'id': '0021200253-okc-bkn-recap.nba',\n 'ext': 'mp4',\n 'title': 'Thunder vs. Nets',\n 'description': 'Kevin Durant scores 32 points and dishes out six assists as the Thunder beat the Nets in Brooklyn.',\n 'duration': 181,\n },\n }\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = 'http://ht-mobile.cdn.turner.com/nba/big' + video_id + '_nba_1280x720.mp4'\n\n shortened_video_id = video_id.rpartition('/')[2]\n title = remove_end(\n self._og_search_title(webpage, default=shortened_video_id), ' : NBA.com')\n\n description = self._og_search_description(webpage)\n duration = parse_duration(\n self._html_search_meta('duration', webpage, 'duration', fatal=False))\n\n return {\n 'id': shortened_video_id,\n 'url': video_url,\n 'title': title,\n 'description': description,\n 'duration': duration,\n }\n", "path": "youtube_dl/extractor/nba.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n remove_end,\n parse_duration,\n)\n\n\nclass NBAIE(InfoExtractor):\n _VALID_URL = r'https?://(?:watch\\.|www\\.)?nba\\.com/(?:nba/)?video(?P<id>/[^?]*?)/?(?:/index\\.html)?(?:\\?.*)?$'\n _TEST = {\n 'url': 'http://www.nba.com/video/games/nets/2012/12/04/0021200253-okc-bkn-recap.nba/index.html',\n 'md5': 'c0edcfc37607344e2ff8f13c378c88a4',\n 'info_dict': {\n 'id': '0021200253-okc-bkn-recap.nba',\n 'ext': 'mp4',\n 'title': 'Thunder vs. Nets',\n 'description': 'Kevin Durant scores 32 points and dishes out six assists as the Thunder beat the Nets in Brooklyn.',\n 'duration': 181,\n },\n }\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = 'http://ht-mobile.cdn.turner.com/nba/big' + video_id + '_nba_1280x720.mp4'\n\n shortened_video_id = video_id.rpartition('/')[2]\n title = remove_end(\n self._og_search_title(webpage, default=shortened_video_id), ' : NBA.com')\n\n description = self._og_search_description(webpage)\n duration = parse_duration(\n self._html_search_meta('duration', webpage, 'duration', fatal=False))\n\n return {\n 'id': shortened_video_id,\n 'url': video_url,\n 'title': title,\n 'description': description,\n 'duration': duration,\n }\n", "path": "youtube_dl/extractor/nba.py"}]} | 1,928 | 260 |
gh_patches_debug_8060 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1318 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken links/css in readthedocs 404 page
The [404.html](https://github.com/mkdocs/mkdocs/blob/master/mkdocs/themes/readthedocs/404.html) added in 0.17.0 seems to have broken links and css ([failing CI build](https://travis-ci.org/opensciencegrid/docs/builds/290469999?utm_source=github_status&utm_medium=notification)). The links in the generated `404.html` file all start with a `docs/...` prefix but when I inspect the `site` dir after a `mkdocs build`, there is no `docs` directory.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/commands/serve.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import logging
4 import shutil
5 import tempfile
6
7 from os.path import isfile, join
8 from mkdocs.commands.build import build
9 from mkdocs.config import load_config
10
11 log = logging.getLogger(__name__)
12
13
14 def _get_handler(site_dir, StaticFileHandler):
15
16 from tornado.template import Loader
17
18 class WebHandler(StaticFileHandler):
19
20 def write_error(self, status_code, **kwargs):
21
22 if status_code in (404, 500):
23 error_page = '{}.html'.format(status_code)
24 if isfile(join(site_dir, error_page)):
25 self.write(Loader(site_dir).load(error_page).generate())
26 else:
27 super(WebHandler, self).write_error(status_code, **kwargs)
28
29 return WebHandler
30
31
32 def _livereload(host, port, config, builder, site_dir):
33
34 # We are importing here for anyone that has issues with livereload. Even if
35 # this fails, the --no-livereload alternative should still work.
36 from livereload import Server
37 import livereload.handlers
38
39 class LiveReloadServer(Server):
40
41 def get_web_handlers(self, script):
42 handlers = super(LiveReloadServer, self).get_web_handlers(script)
43 # replace livereload handler
44 return [(handlers[0][0], _get_handler(site_dir, livereload.handlers.StaticFileHandler), handlers[0][2],)]
45
46 server = LiveReloadServer()
47
48 # Watch the documentation files, the config file and the theme files.
49 server.watch(config['docs_dir'], builder)
50 server.watch(config['config_file_path'], builder)
51
52 for d in config['theme'].dirs:
53 server.watch(d, builder)
54
55 # Run `serve` plugin events.
56 server = config['plugins'].run_event('serve', server, config=config)
57
58 server.serve(root=site_dir, host=host, port=port, restart_delay=0)
59
60
61 def _static_server(host, port, site_dir):
62
63 # Importing here to seperate the code paths from the --livereload
64 # alternative.
65 from tornado import ioloop
66 from tornado import web
67
68 application = web.Application([
69 (r"/(.*)", _get_handler(site_dir, web.StaticFileHandler), {
70 "path": site_dir,
71 "default_filename": "index.html"
72 }),
73 ])
74 application.listen(port=port, address=host)
75
76 log.info('Running at: http://%s:%s/', host, port)
77 log.info('Hold ctrl+c to quit.')
78 try:
79 ioloop.IOLoop.instance().start()
80 except KeyboardInterrupt:
81 log.info('Stopping server...')
82
83
84 def serve(config_file=None, dev_addr=None, strict=None, theme=None,
85 theme_dir=None, livereload='livereload'):
86 """
87 Start the MkDocs development server
88
89 By default it will serve the documentation on http://localhost:8000/ and
90 it will rebuild the documentation and refresh the page automatically
91 whenever a file is edited.
92 """
93
94 # Create a temporary build directory, and set some options to serve it
95 tempdir = tempfile.mkdtemp()
96
97 def builder():
98 log.info("Building documentation...")
99 config = load_config(
100 config_file=config_file,
101 dev_addr=dev_addr,
102 strict=strict,
103 theme=theme,
104 theme_dir=theme_dir
105 )
106 config['site_dir'] = tempdir
107 live_server = livereload in ['dirty', 'livereload']
108 dirty = livereload == 'dirty'
109 build(config, live_server=live_server, dirty=dirty)
110 return config
111
112 try:
113 # Perform the initial build
114 config = builder()
115
116 host, port = config['dev_addr']
117
118 if livereload in ['livereload', 'dirty']:
119 _livereload(host, port, config, builder, tempdir)
120 else:
121 _static_server(host, port, tempdir)
122 finally:
123 shutil.rmtree(tempdir)
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/commands/serve.py b/mkdocs/commands/serve.py
--- a/mkdocs/commands/serve.py
+++ b/mkdocs/commands/serve.py
@@ -103,7 +103,10 @@
theme=theme,
theme_dir=theme_dir
)
+ # Override a few config settings after validation
config['site_dir'] = tempdir
+ config['site_url'] = 'http://{0}/'.format(config['dev_addr'])
+
live_server = livereload in ['dirty', 'livereload']
dirty = livereload == 'dirty'
build(config, live_server=live_server, dirty=dirty)
| {"golden_diff": "diff --git a/mkdocs/commands/serve.py b/mkdocs/commands/serve.py\n--- a/mkdocs/commands/serve.py\n+++ b/mkdocs/commands/serve.py\n@@ -103,7 +103,10 @@\n theme=theme,\n theme_dir=theme_dir\n )\n+ # Override a few config settings after validation\n config['site_dir'] = tempdir\n+ config['site_url'] = 'http://{0}/'.format(config['dev_addr'])\n+\n live_server = livereload in ['dirty', 'livereload']\n dirty = livereload == 'dirty'\n build(config, live_server=live_server, dirty=dirty)\n", "issue": "Broken links/css in readthedocs 404 page \nThe [404.html](https://github.com/mkdocs/mkdocs/blob/master/mkdocs/themes/readthedocs/404.html) added in 0.17.0 seems to have broken links and css ([failing CI build](https://travis-ci.org/opensciencegrid/docs/builds/290469999?utm_source=github_status&utm_medium=notification)). The links in the generated `404.html` file all start with a `docs/...` prefix but when I inspect the `site` dir after a `mkdocs build`, there is no `docs` directory.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nimport shutil\nimport tempfile\n\nfrom os.path import isfile, join\nfrom mkdocs.commands.build import build\nfrom mkdocs.config import load_config\n\nlog = logging.getLogger(__name__)\n\n\ndef _get_handler(site_dir, StaticFileHandler):\n\n from tornado.template import Loader\n\n class WebHandler(StaticFileHandler):\n\n def write_error(self, status_code, **kwargs):\n\n if status_code in (404, 500):\n error_page = '{}.html'.format(status_code)\n if isfile(join(site_dir, error_page)):\n self.write(Loader(site_dir).load(error_page).generate())\n else:\n super(WebHandler, self).write_error(status_code, **kwargs)\n\n return WebHandler\n\n\ndef _livereload(host, port, config, builder, site_dir):\n\n # We are importing here for anyone that has issues with livereload. 
Even if\n # this fails, the --no-livereload alternative should still work.\n from livereload import Server\n import livereload.handlers\n\n class LiveReloadServer(Server):\n\n def get_web_handlers(self, script):\n handlers = super(LiveReloadServer, self).get_web_handlers(script)\n # replace livereload handler\n return [(handlers[0][0], _get_handler(site_dir, livereload.handlers.StaticFileHandler), handlers[0][2],)]\n\n server = LiveReloadServer()\n\n # Watch the documentation files, the config file and the theme files.\n server.watch(config['docs_dir'], builder)\n server.watch(config['config_file_path'], builder)\n\n for d in config['theme'].dirs:\n server.watch(d, builder)\n\n # Run `serve` plugin events.\n server = config['plugins'].run_event('serve', server, config=config)\n\n server.serve(root=site_dir, host=host, port=port, restart_delay=0)\n\n\ndef _static_server(host, port, site_dir):\n\n # Importing here to seperate the code paths from the --livereload\n # alternative.\n from tornado import ioloop\n from tornado import web\n\n application = web.Application([\n (r\"/(.*)\", _get_handler(site_dir, web.StaticFileHandler), {\n \"path\": site_dir,\n \"default_filename\": \"index.html\"\n }),\n ])\n application.listen(port=port, address=host)\n\n log.info('Running at: http://%s:%s/', host, port)\n log.info('Hold ctrl+c to quit.')\n try:\n ioloop.IOLoop.instance().start()\n except KeyboardInterrupt:\n log.info('Stopping server...')\n\n\ndef serve(config_file=None, dev_addr=None, strict=None, theme=None,\n theme_dir=None, livereload='livereload'):\n \"\"\"\n Start the MkDocs development server\n\n By default it will serve the documentation on http://localhost:8000/ and\n it will rebuild the documentation and refresh the page automatically\n whenever a file is edited.\n \"\"\"\n\n # Create a temporary build directory, and set some options to serve it\n tempdir = tempfile.mkdtemp()\n\n def builder():\n log.info(\"Building documentation...\")\n config = load_config(\n config_file=config_file,\n dev_addr=dev_addr,\n strict=strict,\n theme=theme,\n theme_dir=theme_dir\n )\n config['site_dir'] = tempdir\n live_server = livereload in ['dirty', 'livereload']\n dirty = livereload == 'dirty'\n build(config, live_server=live_server, dirty=dirty)\n return config\n\n try:\n # Perform the initial build\n config = builder()\n\n host, port = config['dev_addr']\n\n if livereload in ['livereload', 'dirty']:\n _livereload(host, port, config, builder, tempdir)\n else:\n _static_server(host, port, tempdir)\n finally:\n shutil.rmtree(tempdir)\n", "path": "mkdocs/commands/serve.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nimport shutil\nimport tempfile\n\nfrom os.path import isfile, join\nfrom mkdocs.commands.build import build\nfrom mkdocs.config import load_config\n\nlog = logging.getLogger(__name__)\n\n\ndef _get_handler(site_dir, StaticFileHandler):\n\n from tornado.template import Loader\n\n class WebHandler(StaticFileHandler):\n\n def write_error(self, status_code, **kwargs):\n\n if status_code in (404, 500):\n error_page = '{}.html'.format(status_code)\n if isfile(join(site_dir, error_page)):\n self.write(Loader(site_dir).load(error_page).generate())\n else:\n super(WebHandler, self).write_error(status_code, **kwargs)\n\n return WebHandler\n\n\ndef _livereload(host, port, config, builder, site_dir):\n\n # We are importing here for anyone that has issues with livereload. 
Even if\n # this fails, the --no-livereload alternative should still work.\n from livereload import Server\n import livereload.handlers\n\n class LiveReloadServer(Server):\n\n def get_web_handlers(self, script):\n handlers = super(LiveReloadServer, self).get_web_handlers(script)\n # replace livereload handler\n return [(handlers[0][0], _get_handler(site_dir, livereload.handlers.StaticFileHandler), handlers[0][2],)]\n\n server = LiveReloadServer()\n\n # Watch the documentation files, the config file and the theme files.\n server.watch(config['docs_dir'], builder)\n server.watch(config['config_file_path'], builder)\n\n for d in config['theme'].dirs:\n server.watch(d, builder)\n\n # Run `serve` plugin events.\n server = config['plugins'].run_event('serve', server, config=config)\n\n server.serve(root=site_dir, host=host, port=port, restart_delay=0)\n\n\ndef _static_server(host, port, site_dir):\n\n # Importing here to seperate the code paths from the --livereload\n # alternative.\n from tornado import ioloop\n from tornado import web\n\n application = web.Application([\n (r\"/(.*)\", _get_handler(site_dir, web.StaticFileHandler), {\n \"path\": site_dir,\n \"default_filename\": \"index.html\"\n }),\n ])\n application.listen(port=port, address=host)\n\n log.info('Running at: http://%s:%s/', host, port)\n log.info('Hold ctrl+c to quit.')\n try:\n ioloop.IOLoop.instance().start()\n except KeyboardInterrupt:\n log.info('Stopping server...')\n\n\ndef serve(config_file=None, dev_addr=None, strict=None, theme=None,\n theme_dir=None, livereload='livereload'):\n \"\"\"\n Start the MkDocs development server\n\n By default it will serve the documentation on http://localhost:8000/ and\n it will rebuild the documentation and refresh the page automatically\n whenever a file is edited.\n \"\"\"\n\n # Create a temporary build directory, and set some options to serve it\n tempdir = tempfile.mkdtemp()\n\n def builder():\n log.info(\"Building documentation...\")\n config = load_config(\n config_file=config_file,\n dev_addr=dev_addr,\n strict=strict,\n theme=theme,\n theme_dir=theme_dir\n )\n # Override a few config settings after validation\n config['site_dir'] = tempdir\n config['site_url'] = 'http://{0}/'.format(config['dev_addr'])\n\n live_server = livereload in ['dirty', 'livereload']\n dirty = livereload == 'dirty'\n build(config, live_server=live_server, dirty=dirty)\n return config\n\n try:\n # Perform the initial build\n config = builder()\n\n host, port = config['dev_addr']\n\n if livereload in ['livereload', 'dirty']:\n _livereload(host, port, config, builder, tempdir)\n else:\n _static_server(host, port, tempdir)\n finally:\n shutil.rmtree(tempdir)\n", "path": "mkdocs/commands/serve.py"}]} | 1,544 | 153 |
gh_patches_debug_1527 | rasdani/github-patches | git_diff | hydroshare__hydroshare-2401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Discover page: search box does NOT obey SOLR syntax
The helpful text that suggests that SOLR syntax works in the search box has been wrong for over a year. It now tokenizes terms and is not compatible with SOLR syntax.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_core/discovery_form.py`
Content:
```
1 from haystack.forms import FacetedSearchForm
2 from haystack.query import SQ, SearchQuerySet
3 from crispy_forms.layout import *
4 from crispy_forms.bootstrap import *
5 from django import forms
6
7 class DiscoveryForm(FacetedSearchForm):
8 NElat = forms.CharField(widget = forms.HiddenInput(), required=False)
9 NElng = forms.CharField(widget = forms.HiddenInput(), required=False)
10 SWlat = forms.CharField(widget = forms.HiddenInput(), required=False)
11 SWlng = forms.CharField(widget = forms.HiddenInput(), required=False)
12 start_date = forms.DateField(label='From Date', required=False)
13 end_date = forms.DateField(label='To Date', required=False)
14
15 def search(self):
16 if not self.cleaned_data.get('q'):
17 sqs = self.searchqueryset.filter(discoverable=True).filter(is_replaced_by=False)
18 else:
19 # This corrects for an failed match of complete words, as documented in issue #2308.
20 # The text__startswith=cdata matches stemmed words in documents with an unstemmed cdata.
21 # The text=cdata matches stemmed words after stemming cdata as well.
22 # The stem of "Industrial", according to the aggressive default stemmer, is "industri".
23 # Thus "Industrial" does not match "Industrial" in the document according to
24 # startswith, but does match according to text=cdata.
25 cdata = self.cleaned_data.get('q')
26 sqs = self.searchqueryset.filter(SQ(text__startswith=cdata)|SQ(text=cdata))\
27 .filter(discoverable=True)\
28 .filter(is_replaced_by=False)
29
30 geo_sq = SQ()
31 if self.cleaned_data['NElng'] and self.cleaned_data['SWlng']:
32 if float(self.cleaned_data['NElng']) > float(self.cleaned_data['SWlng']):
33 geo_sq.add(SQ(coverage_east__lte=float(self.cleaned_data['NElng'])), SQ.AND)
34 geo_sq.add(SQ(coverage_east__gte=float(self.cleaned_data['SWlng'])), SQ.AND)
35 else:
36 geo_sq.add(SQ(coverage_east__gte=float(self.cleaned_data['SWlng'])), SQ.AND)
37 geo_sq.add(SQ(coverage_east__lte=float(180)), SQ.OR)
38 geo_sq.add(SQ(coverage_east__lte=float(self.cleaned_data['NElng'])), SQ.AND)
39 geo_sq.add(SQ(coverage_east__gte=float(-180)), SQ.AND)
40
41 if self.cleaned_data['NElat'] and self.cleaned_data['SWlat']:
42 geo_sq.add(SQ(coverage_north__lte=float(self.cleaned_data['NElat'])), SQ.AND)
43 geo_sq.add(SQ(coverage_north__gte=float(self.cleaned_data['SWlat'])), SQ.AND)
44
45 if geo_sq:
46 sqs = sqs.filter(geo_sq)
47
48
49 # Check to see if a start_date was chosen.
50 if self.cleaned_data['start_date']:
51 sqs = sqs.filter(coverage_start_date__gte=self.cleaned_data['start_date'])
52
53 # Check to see if an end_date was chosen.
54 if self.cleaned_data['end_date']:
55 sqs = sqs.filter(coverage_end_date__lte=self.cleaned_data['end_date'])
56
57 author_sq = SQ()
58 subjects_sq = SQ()
59 resource_sq = SQ()
60 public_sq = SQ()
61 owner_sq = SQ()
62 discoverable_sq = SQ()
63 published_sq = SQ()
64 variable_sq = SQ()
65 sample_medium_sq = SQ()
66 units_name_sq = SQ()
67 # We need to process each facet to ensure that the field name and the
68 # value are quoted correctly and separately:
69
70 for facet in self.selected_facets:
71 if ":" not in facet:
72 continue
73
74 field, value = facet.split(":", 1)
75
76 if value:
77 if "creators" in field:
78 author_sq.add(SQ(creators=sqs.query.clean(value)), SQ.OR)
79
80 elif "subjects" in field:
81 subjects_sq.add(SQ(subjects=sqs.query.clean(value)), SQ.OR)
82
83 elif "resource_type" in field:
84 resource_sq.add(SQ(resource_type=sqs.query.clean(value)), SQ.OR)
85
86 elif "public" in field:
87 public_sq.add(SQ(public=sqs.query.clean(value)), SQ.OR)
88
89 elif "owners_names" in field:
90 owner_sq.add(SQ(owners_names=sqs.query.clean(value)), SQ.OR)
91
92 elif "discoverable" in field:
93 discoverable_sq.add(SQ(discoverable=sqs.query.clean(value)), SQ.OR)
94
95 elif "published" in field:
96 published_sq.add(SQ(published=sqs.query.clean(value)), SQ.OR)
97
98 elif 'variable_names' in field:
99 variable_sq.add(SQ(variable_names=sqs.query.clean(value)), SQ.OR)
100
101 elif 'sample_mediums' in field:
102 sample_medium_sq.add(SQ(sample_mediums=sqs.query.clean(value)), SQ.OR)
103
104 elif 'units_names' in field:
105 units_name_sq.add(SQ(units_names=sqs.query.clean(value)), SQ.OR)
106
107 else:
108 continue
109
110 if author_sq:
111 sqs = sqs.filter(author_sq)
112 if subjects_sq:
113 sqs = sqs.filter(subjects_sq)
114 if resource_sq:
115 sqs = sqs.filter(resource_sq)
116 if public_sq:
117 sqs = sqs.filter(public_sq)
118 if owner_sq:
119 sqs = sqs.filter(owner_sq)
120 if discoverable_sq:
121 sqs = sqs.filter(discoverable_sq)
122 if published_sq:
123 sqs = sqs.filter(published_sq)
124 if variable_sq:
125 sqs = sqs.filter(variable_sq)
126 if sample_medium_sq:
127 sqs = sqs.filter(sample_medium_sq)
128 if units_name_sq:
129 sqs = sqs.filter(units_name_sq)
130
131 return sqs
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_core/discovery_form.py b/hs_core/discovery_form.py
--- a/hs_core/discovery_form.py
+++ b/hs_core/discovery_form.py
@@ -1,7 +1,5 @@
from haystack.forms import FacetedSearchForm
-from haystack.query import SQ, SearchQuerySet
-from crispy_forms.layout import *
-from crispy_forms.bootstrap import *
+from haystack.query import SQ
from django import forms
class DiscoveryForm(FacetedSearchForm):
| {"golden_diff": "diff --git a/hs_core/discovery_form.py b/hs_core/discovery_form.py\n--- a/hs_core/discovery_form.py\n+++ b/hs_core/discovery_form.py\n@@ -1,7 +1,5 @@\n from haystack.forms import FacetedSearchForm\n-from haystack.query import SQ, SearchQuerySet\n-from crispy_forms.layout import *\n-from crispy_forms.bootstrap import *\n+from haystack.query import SQ\n from django import forms\n \n class DiscoveryForm(FacetedSearchForm):\n", "issue": "Discover page: search box does NOT obey SOLR syntax\nThe helpful text that suggests that SOLR syntax works in the search box has been wrong for over a year. It now tokenizes terms and is not compatible with SOLR syntax. \n", "before_files": [{"content": "from haystack.forms import FacetedSearchForm\nfrom haystack.query import SQ, SearchQuerySet\nfrom crispy_forms.layout import *\nfrom crispy_forms.bootstrap import *\nfrom django import forms\n\nclass DiscoveryForm(FacetedSearchForm):\n NElat = forms.CharField(widget = forms.HiddenInput(), required=False)\n NElng = forms.CharField(widget = forms.HiddenInput(), required=False)\n SWlat = forms.CharField(widget = forms.HiddenInput(), required=False)\n SWlng = forms.CharField(widget = forms.HiddenInput(), required=False)\n start_date = forms.DateField(label='From Date', required=False)\n end_date = forms.DateField(label='To Date', required=False)\n\n def search(self):\n if not self.cleaned_data.get('q'):\n sqs = self.searchqueryset.filter(discoverable=True).filter(is_replaced_by=False)\n else:\n # This corrects for an failed match of complete words, as documented in issue #2308.\n # The text__startswith=cdata matches stemmed words in documents with an unstemmed cdata.\n # The text=cdata matches stemmed words after stemming cdata as well.\n # The stem of \"Industrial\", according to the aggressive default stemmer, is \"industri\".\n # Thus \"Industrial\" does not match \"Industrial\" in the document according to\n # startswith, but does match according to text=cdata.\n cdata = self.cleaned_data.get('q')\n sqs = self.searchqueryset.filter(SQ(text__startswith=cdata)|SQ(text=cdata))\\\n .filter(discoverable=True)\\\n .filter(is_replaced_by=False)\n\n geo_sq = SQ()\n if self.cleaned_data['NElng'] and self.cleaned_data['SWlng']:\n if float(self.cleaned_data['NElng']) > float(self.cleaned_data['SWlng']):\n geo_sq.add(SQ(coverage_east__lte=float(self.cleaned_data['NElng'])), SQ.AND)\n geo_sq.add(SQ(coverage_east__gte=float(self.cleaned_data['SWlng'])), SQ.AND)\n else:\n geo_sq.add(SQ(coverage_east__gte=float(self.cleaned_data['SWlng'])), SQ.AND)\n geo_sq.add(SQ(coverage_east__lte=float(180)), SQ.OR)\n geo_sq.add(SQ(coverage_east__lte=float(self.cleaned_data['NElng'])), SQ.AND)\n geo_sq.add(SQ(coverage_east__gte=float(-180)), SQ.AND)\n\n if self.cleaned_data['NElat'] and self.cleaned_data['SWlat']:\n geo_sq.add(SQ(coverage_north__lte=float(self.cleaned_data['NElat'])), SQ.AND)\n geo_sq.add(SQ(coverage_north__gte=float(self.cleaned_data['SWlat'])), SQ.AND)\n\n if geo_sq:\n sqs = sqs.filter(geo_sq)\n\n\n # Check to see if a start_date was chosen.\n if self.cleaned_data['start_date']:\n sqs = sqs.filter(coverage_start_date__gte=self.cleaned_data['start_date'])\n\n # Check to see if an end_date was chosen.\n if self.cleaned_data['end_date']:\n sqs = sqs.filter(coverage_end_date__lte=self.cleaned_data['end_date'])\n\n author_sq = SQ()\n subjects_sq = SQ()\n resource_sq = SQ()\n public_sq = SQ()\n owner_sq = SQ()\n discoverable_sq = SQ()\n published_sq = SQ()\n variable_sq = SQ()\n sample_medium_sq = SQ()\n 
units_name_sq = SQ()\n # We need to process each facet to ensure that the field name and the\n # value are quoted correctly and separately:\n\n for facet in self.selected_facets:\n if \":\" not in facet:\n continue\n\n field, value = facet.split(\":\", 1)\n\n if value:\n if \"creators\" in field:\n author_sq.add(SQ(creators=sqs.query.clean(value)), SQ.OR)\n\n elif \"subjects\" in field:\n subjects_sq.add(SQ(subjects=sqs.query.clean(value)), SQ.OR)\n\n elif \"resource_type\" in field:\n resource_sq.add(SQ(resource_type=sqs.query.clean(value)), SQ.OR)\n\n elif \"public\" in field:\n public_sq.add(SQ(public=sqs.query.clean(value)), SQ.OR)\n\n elif \"owners_names\" in field:\n owner_sq.add(SQ(owners_names=sqs.query.clean(value)), SQ.OR)\n\n elif \"discoverable\" in field:\n discoverable_sq.add(SQ(discoverable=sqs.query.clean(value)), SQ.OR)\n\n elif \"published\" in field:\n published_sq.add(SQ(published=sqs.query.clean(value)), SQ.OR)\n\n elif 'variable_names' in field:\n variable_sq.add(SQ(variable_names=sqs.query.clean(value)), SQ.OR)\n\n elif 'sample_mediums' in field:\n sample_medium_sq.add(SQ(sample_mediums=sqs.query.clean(value)), SQ.OR)\n\n elif 'units_names' in field:\n units_name_sq.add(SQ(units_names=sqs.query.clean(value)), SQ.OR)\n\n else:\n continue\n\n if author_sq:\n sqs = sqs.filter(author_sq)\n if subjects_sq:\n sqs = sqs.filter(subjects_sq)\n if resource_sq:\n sqs = sqs.filter(resource_sq)\n if public_sq:\n sqs = sqs.filter(public_sq)\n if owner_sq:\n sqs = sqs.filter(owner_sq)\n if discoverable_sq:\n sqs = sqs.filter(discoverable_sq)\n if published_sq:\n sqs = sqs.filter(published_sq)\n if variable_sq:\n sqs = sqs.filter(variable_sq)\n if sample_medium_sq:\n sqs = sqs.filter(sample_medium_sq)\n if units_name_sq:\n sqs = sqs.filter(units_name_sq)\n\n return sqs\n", "path": "hs_core/discovery_form.py"}], "after_files": [{"content": "from haystack.forms import FacetedSearchForm\nfrom haystack.query import SQ\nfrom django import forms\n\nclass DiscoveryForm(FacetedSearchForm):\n NElat = forms.CharField(widget = forms.HiddenInput(), required=False)\n NElng = forms.CharField(widget = forms.HiddenInput(), required=False)\n SWlat = forms.CharField(widget = forms.HiddenInput(), required=False)\n SWlng = forms.CharField(widget = forms.HiddenInput(), required=False)\n start_date = forms.DateField(label='From Date', required=False)\n end_date = forms.DateField(label='To Date', required=False)\n\n def search(self):\n if not self.cleaned_data.get('q'):\n sqs = self.searchqueryset.filter(discoverable=True).filter(is_replaced_by=False)\n else:\n # This corrects for an failed match of complete words, as documented in issue #2308.\n # The text__startswith=cdata matches stemmed words in documents with an unstemmed cdata.\n # The text=cdata matches stemmed words after stemming cdata as well.\n # The stem of \"Industrial\", according to the aggressive default stemmer, is \"industri\".\n # Thus \"Industrial\" does not match \"Industrial\" in the document according to\n # startswith, but does match according to text=cdata.\n cdata = self.cleaned_data.get('q')\n sqs = self.searchqueryset.filter(SQ(text__startswith=cdata)|SQ(text=cdata))\\\n .filter(discoverable=True)\\\n .filter(is_replaced_by=False)\n\n geo_sq = SQ()\n if self.cleaned_data['NElng'] and self.cleaned_data['SWlng']:\n if float(self.cleaned_data['NElng']) > float(self.cleaned_data['SWlng']):\n geo_sq.add(SQ(coverage_east__lte=float(self.cleaned_data['NElng'])), SQ.AND)\n 
geo_sq.add(SQ(coverage_east__gte=float(self.cleaned_data['SWlng'])), SQ.AND)\n else:\n geo_sq.add(SQ(coverage_east__gte=float(self.cleaned_data['SWlng'])), SQ.AND)\n geo_sq.add(SQ(coverage_east__lte=float(180)), SQ.OR)\n geo_sq.add(SQ(coverage_east__lte=float(self.cleaned_data['NElng'])), SQ.AND)\n geo_sq.add(SQ(coverage_east__gte=float(-180)), SQ.AND)\n\n if self.cleaned_data['NElat'] and self.cleaned_data['SWlat']:\n geo_sq.add(SQ(coverage_north__lte=float(self.cleaned_data['NElat'])), SQ.AND)\n geo_sq.add(SQ(coverage_north__gte=float(self.cleaned_data['SWlat'])), SQ.AND)\n\n if geo_sq:\n sqs = sqs.filter(geo_sq)\n\n\n # Check to see if a start_date was chosen.\n if self.cleaned_data['start_date']:\n sqs = sqs.filter(coverage_start_date__gte=self.cleaned_data['start_date'])\n\n # Check to see if an end_date was chosen.\n if self.cleaned_data['end_date']:\n sqs = sqs.filter(coverage_end_date__lte=self.cleaned_data['end_date'])\n\n author_sq = SQ()\n subjects_sq = SQ()\n resource_sq = SQ()\n public_sq = SQ()\n owner_sq = SQ()\n discoverable_sq = SQ()\n published_sq = SQ()\n variable_sq = SQ()\n sample_medium_sq = SQ()\n units_name_sq = SQ()\n # We need to process each facet to ensure that the field name and the\n # value are quoted correctly and separately:\n\n for facet in self.selected_facets:\n if \":\" not in facet:\n continue\n\n field, value = facet.split(\":\", 1)\n\n if value:\n if \"creators\" in field:\n author_sq.add(SQ(creators=sqs.query.clean(value)), SQ.OR)\n\n elif \"subjects\" in field:\n subjects_sq.add(SQ(subjects=sqs.query.clean(value)), SQ.OR)\n\n elif \"resource_type\" in field:\n resource_sq.add(SQ(resource_type=sqs.query.clean(value)), SQ.OR)\n\n elif \"public\" in field:\n public_sq.add(SQ(public=sqs.query.clean(value)), SQ.OR)\n\n elif \"owners_names\" in field:\n owner_sq.add(SQ(owners_names=sqs.query.clean(value)), SQ.OR)\n\n elif \"discoverable\" in field:\n discoverable_sq.add(SQ(discoverable=sqs.query.clean(value)), SQ.OR)\n\n elif \"published\" in field:\n published_sq.add(SQ(published=sqs.query.clean(value)), SQ.OR)\n\n elif 'variable_names' in field:\n variable_sq.add(SQ(variable_names=sqs.query.clean(value)), SQ.OR)\n\n elif 'sample_mediums' in field:\n sample_medium_sq.add(SQ(sample_mediums=sqs.query.clean(value)), SQ.OR)\n\n elif 'units_names' in field:\n units_name_sq.add(SQ(units_names=sqs.query.clean(value)), SQ.OR)\n\n else:\n continue\n\n if author_sq:\n sqs = sqs.filter(author_sq)\n if subjects_sq:\n sqs = sqs.filter(subjects_sq)\n if resource_sq:\n sqs = sqs.filter(resource_sq)\n if public_sq:\n sqs = sqs.filter(public_sq)\n if owner_sq:\n sqs = sqs.filter(owner_sq)\n if discoverable_sq:\n sqs = sqs.filter(discoverable_sq)\n if published_sq:\n sqs = sqs.filter(published_sq)\n if variable_sq:\n sqs = sqs.filter(variable_sq)\n if sample_medium_sq:\n sqs = sqs.filter(sample_medium_sq)\n if units_name_sq:\n sqs = sqs.filter(units_name_sq)\n\n return sqs\n", "path": "hs_core/discovery_form.py"}]} | 1,870 | 102 |
gh_patches_debug_19832 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-2461 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix the migration to 2.44.0
- [x] translation fields _xx created with NOT NULL and therefore impossible to create
- [x] site geometries to be converted to geometrycollection
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/outdoor/migrations/0003_auto_20201214_1408.py`
Content:
```
1 # Generated by Django 3.1.4 on 2020-12-14 14:08
2
3 from django.conf import settings
4 import django.contrib.gis.db.models.fields
5 from django.db import migrations, models
6 import django.db.models.deletion
7
8
9 class Migration(migrations.Migration):
10
11 dependencies = [
12 ('outdoor', '0002_practice_sitepractice'),
13 ]
14
15 operations = [
16 migrations.AlterModelOptions(
17 name='site',
18 options={'ordering': ('name',), 'verbose_name': 'Outdoor site', 'verbose_name_plural': 'Outdoor sites'},
19 ),
20 migrations.AlterField(
21 model_name='site',
22 name='geom',
23 field=django.contrib.gis.db.models.fields.GeometryCollectionField(srid=settings.SRID, verbose_name='Location'),
24 ),
25 migrations.AlterField(
26 model_name='sitepractice',
27 name='site',
28 field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='site_practices', to='outdoor.site', verbose_name='Outdoor site'),
29 ),
30 ]
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geotrek/outdoor/migrations/0003_auto_20201214_1408.py b/geotrek/outdoor/migrations/0003_auto_20201214_1408.py
--- a/geotrek/outdoor/migrations/0003_auto_20201214_1408.py
+++ b/geotrek/outdoor/migrations/0003_auto_20201214_1408.py
@@ -17,10 +17,17 @@
name='site',
options={'ordering': ('name',), 'verbose_name': 'Outdoor site', 'verbose_name_plural': 'Outdoor sites'},
),
- migrations.AlterField(
- model_name='site',
- name='geom',
- field=django.contrib.gis.db.models.fields.GeometryCollectionField(srid=settings.SRID, verbose_name='Location'),
+ migrations.SeparateDatabaseAndState(
+ database_operations=[
+ migrations.RunSQL('ALTER TABLE "outdoor_site" ALTER COLUMN "geom" TYPE geometry(GeometryCollection,2154) USING ST_ForceCollection(geom);')
+ ],
+ state_operations=[
+ migrations.AlterField(
+ model_name='site',
+ name='geom',
+ field=django.contrib.gis.db.models.fields.GeometryCollectionField(srid=settings.SRID, verbose_name='Location'),
+ ),
+ ]
),
migrations.AlterField(
model_name='sitepractice',
| {"golden_diff": "diff --git a/geotrek/outdoor/migrations/0003_auto_20201214_1408.py b/geotrek/outdoor/migrations/0003_auto_20201214_1408.py\n--- a/geotrek/outdoor/migrations/0003_auto_20201214_1408.py\n+++ b/geotrek/outdoor/migrations/0003_auto_20201214_1408.py\n@@ -17,10 +17,17 @@\n name='site',\n options={'ordering': ('name',), 'verbose_name': 'Outdoor site', 'verbose_name_plural': 'Outdoor sites'},\n ),\n- migrations.AlterField(\n- model_name='site',\n- name='geom',\n- field=django.contrib.gis.db.models.fields.GeometryCollectionField(srid=settings.SRID, verbose_name='Location'),\n+ migrations.SeparateDatabaseAndState(\n+ database_operations=[\n+ migrations.RunSQL('ALTER TABLE \"outdoor_site\" ALTER COLUMN \"geom\" TYPE geometry(GeometryCollection,2154) USING ST_ForceCollection(geom);')\n+ ],\n+ state_operations=[\n+ migrations.AlterField(\n+ model_name='site',\n+ name='geom',\n+ field=django.contrib.gis.db.models.fields.GeometryCollectionField(srid=settings.SRID, verbose_name='Location'),\n+ ),\n+ ]\n ),\n migrations.AlterField(\n model_name='sitepractice',\n", "issue": "Fixer la migration vers la 2.44.0\n- [x] champs de traductions _xx cr\u00e9\u00e9s avec NOT NULL et donc impossible \u00e0 cr\u00e9er\r\n- [x] g\u00e9om\u00e9tries des sites \u00e0 transformer en geometrycollection\n", "before_files": [{"content": "# Generated by Django 3.1.4 on 2020-12-14 14:08\n\nfrom django.conf import settings\nimport django.contrib.gis.db.models.fields\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('outdoor', '0002_practice_sitepractice'),\n ]\n\n operations = [\n migrations.AlterModelOptions(\n name='site',\n options={'ordering': ('name',), 'verbose_name': 'Outdoor site', 'verbose_name_plural': 'Outdoor sites'},\n ),\n migrations.AlterField(\n model_name='site',\n name='geom',\n field=django.contrib.gis.db.models.fields.GeometryCollectionField(srid=settings.SRID, verbose_name='Location'),\n ),\n migrations.AlterField(\n model_name='sitepractice',\n name='site',\n field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='site_practices', to='outdoor.site', verbose_name='Outdoor site'),\n ),\n ]\n", "path": "geotrek/outdoor/migrations/0003_auto_20201214_1408.py"}], "after_files": [{"content": "# Generated by Django 3.1.4 on 2020-12-14 14:08\n\nfrom django.conf import settings\nimport django.contrib.gis.db.models.fields\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('outdoor', '0002_practice_sitepractice'),\n ]\n\n operations = [\n migrations.AlterModelOptions(\n name='site',\n options={'ordering': ('name',), 'verbose_name': 'Outdoor site', 'verbose_name_plural': 'Outdoor sites'},\n ),\n migrations.SeparateDatabaseAndState(\n database_operations=[\n migrations.RunSQL('ALTER TABLE \"outdoor_site\" ALTER COLUMN \"geom\" TYPE geometry(GeometryCollection,2154) USING ST_ForceCollection(geom);')\n ],\n state_operations=[\n migrations.AlterField(\n model_name='site',\n name='geom',\n field=django.contrib.gis.db.models.fields.GeometryCollectionField(srid=settings.SRID, verbose_name='Location'),\n ),\n ]\n ),\n migrations.AlterField(\n model_name='sitepractice',\n name='site',\n field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='site_practices', to='outdoor.site', verbose_name='Outdoor site'),\n ),\n ]\n", "path": 
"geotrek/outdoor/migrations/0003_auto_20201214_1408.py"}]} | 617 | 341 |
gh_patches_debug_15166 | rasdani/github-patches | git_diff | feast-dev__feast-4025 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate PostgreSQLRegistryStore
Right now we have two ways to use Postgres as a registry backend. The first is the scalable `SqlRegistry`, which uses `sqlalchemy`; the other is an older option, `PostgreSQLRegistryStore`, which keeps the whole proto in a single table. Since we are [recommending](https://docs.feast.dev/tutorials/using-scalable-registry) the scalable registry anyway, we should deprecate `PostgreSQLRegistryStore` and remove it soon after. Or maybe remove it directly? It's under contribs as of now.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py`
Content:
```
1 from typing import Optional
2
3 import psycopg2
4 from psycopg2 import sql
5
6 from feast.infra.registry.registry_store import RegistryStore
7 from feast.infra.utils.postgres.connection_utils import _get_conn
8 from feast.infra.utils.postgres.postgres_config import PostgreSQLConfig
9 from feast.protos.feast.core.Registry_pb2 import Registry as RegistryProto
10 from feast.repo_config import RegistryConfig
11
12
13 class PostgresRegistryConfig(RegistryConfig):
14 host: str
15 port: int
16 database: str
17 db_schema: str
18 user: str
19 password: str
20 sslmode: Optional[str]
21 sslkey_path: Optional[str]
22 sslcert_path: Optional[str]
23 sslrootcert_path: Optional[str]
24
25
26 class PostgreSQLRegistryStore(RegistryStore):
27 def __init__(self, config: PostgresRegistryConfig, registry_path: str):
28 self.db_config = PostgreSQLConfig(
29 host=config.host,
30 port=config.port,
31 database=config.database,
32 db_schema=config.db_schema,
33 user=config.user,
34 password=config.password,
35 sslmode=getattr(config, "sslmode", None),
36 sslkey_path=getattr(config, "sslkey_path", None),
37 sslcert_path=getattr(config, "sslcert_path", None),
38 sslrootcert_path=getattr(config, "sslrootcert_path", None),
39 )
40 self.table_name = config.path
41 self.cache_ttl_seconds = config.cache_ttl_seconds
42
43 def get_registry_proto(self) -> RegistryProto:
44 registry_proto = RegistryProto()
45 try:
46 with _get_conn(self.db_config) as conn, conn.cursor() as cur:
47 cur.execute(
48 sql.SQL(
49 """
50 SELECT registry
51 FROM {}
52 WHERE version = (SELECT max(version) FROM {})
53 """
54 ).format(
55 sql.Identifier(self.table_name),
56 sql.Identifier(self.table_name),
57 )
58 )
59 row = cur.fetchone()
60 if row:
61 registry_proto = registry_proto.FromString(row[0])
62 except psycopg2.errors.UndefinedTable:
63 pass
64 return registry_proto
65
66 def update_registry_proto(self, registry_proto: RegistryProto):
67 """
68 Overwrites the current registry proto with the proto passed in. This method
69 writes to the registry path.
70
71 Args:
72 registry_proto: the new RegistryProto
73 """
74 schema_name = self.db_config.db_schema or self.db_config.user
75 with _get_conn(self.db_config) as conn, conn.cursor() as cur:
76 cur.execute(
77 """
78 SELECT schema_name
79 FROM information_schema.schemata
80 WHERE schema_name = %s
81 """,
82 (schema_name,),
83 )
84 schema_exists = cur.fetchone()
85 if not schema_exists:
86 cur.execute(
87 sql.SQL("CREATE SCHEMA IF NOT EXISTS {} AUTHORIZATION {}").format(
88 sql.Identifier(schema_name),
89 sql.Identifier(self.db_config.user),
90 ),
91 )
92
93 cur.execute(
94 sql.SQL(
95 """
96 CREATE TABLE IF NOT EXISTS {} (
97 version BIGSERIAL PRIMARY KEY,
98 registry BYTEA NOT NULL
99 );
100 """
101 ).format(sql.Identifier(self.table_name)),
102 )
103 # Do we want to keep track of the history or just keep the latest?
104 cur.execute(
105 sql.SQL(
106 """
107 INSERT INTO {} (registry)
108 VALUES (%s);
109 """
110 ).format(sql.Identifier(self.table_name)),
111 [registry_proto.SerializeToString()],
112 )
113
114 def teardown(self):
115 with _get_conn(self.db_config) as conn, conn.cursor() as cur:
116 cur.execute(
117 sql.SQL(
118 """
119 DROP TABLE IF EXISTS {};
120 """
121 ).format(sql.Identifier(self.table_name))
122 )
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py b/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py
--- a/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py
+++ b/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py
@@ -1,3 +1,4 @@
+import warnings
from typing import Optional
import psycopg2
@@ -37,6 +38,11 @@
sslcert_path=getattr(config, "sslcert_path", None),
sslrootcert_path=getattr(config, "sslrootcert_path", None),
)
+ warnings.warn(
+ "PostgreSQLRegistryStore is deprecated and will be removed in the future releases. Please use SqlRegistry instead.",
+ DeprecationWarning,
+ )
+
self.table_name = config.path
self.cache_ttl_seconds = config.cache_ttl_seconds
| {"golden_diff": "diff --git a/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py b/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py\n--- a/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py\n+++ b/sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py\n@@ -1,3 +1,4 @@\n+import warnings\n from typing import Optional\n \n import psycopg2\n@@ -37,6 +38,11 @@\n sslcert_path=getattr(config, \"sslcert_path\", None),\n sslrootcert_path=getattr(config, \"sslrootcert_path\", None),\n )\n+ warnings.warn(\n+ \"PostgreSQLRegistryStore is deprecated and will be removed in the future releases. Please use SqlRegistry instead.\",\n+ DeprecationWarning,\n+ )\n+\n self.table_name = config.path\n self.cache_ttl_seconds = config.cache_ttl_seconds\n", "issue": "Deprecate PostgreSQLRegistryStore\nRight now we have 2 ways to use postgres as a registry backend. The first is with scalable `SqlRegistry` that uses `sqlalchemy`, another is an older option of using `PostgreSQLRegistryStore` which keeps the whole proto in a single table. Since we are [recommending](https://docs.feast.dev/tutorials/using-scalable-registry) the scalable registry anyway, we should deprecate `PostgreSQLRegistryStore` and remove it soon after. Or maybe remove it directly? It's under contribs as of now.\n", "before_files": [{"content": "from typing import Optional\n\nimport psycopg2\nfrom psycopg2 import sql\n\nfrom feast.infra.registry.registry_store import RegistryStore\nfrom feast.infra.utils.postgres.connection_utils import _get_conn\nfrom feast.infra.utils.postgres.postgres_config import PostgreSQLConfig\nfrom feast.protos.feast.core.Registry_pb2 import Registry as RegistryProto\nfrom feast.repo_config import RegistryConfig\n\n\nclass PostgresRegistryConfig(RegistryConfig):\n host: str\n port: int\n database: str\n db_schema: str\n user: str\n password: str\n sslmode: Optional[str]\n sslkey_path: Optional[str]\n sslcert_path: Optional[str]\n sslrootcert_path: Optional[str]\n\n\nclass PostgreSQLRegistryStore(RegistryStore):\n def __init__(self, config: PostgresRegistryConfig, registry_path: str):\n self.db_config = PostgreSQLConfig(\n host=config.host,\n port=config.port,\n database=config.database,\n db_schema=config.db_schema,\n user=config.user,\n password=config.password,\n sslmode=getattr(config, \"sslmode\", None),\n sslkey_path=getattr(config, \"sslkey_path\", None),\n sslcert_path=getattr(config, \"sslcert_path\", None),\n sslrootcert_path=getattr(config, \"sslrootcert_path\", None),\n )\n self.table_name = config.path\n self.cache_ttl_seconds = config.cache_ttl_seconds\n\n def get_registry_proto(self) -> RegistryProto:\n registry_proto = RegistryProto()\n try:\n with _get_conn(self.db_config) as conn, conn.cursor() as cur:\n cur.execute(\n sql.SQL(\n \"\"\"\n SELECT registry\n FROM {}\n WHERE version = (SELECT max(version) FROM {})\n \"\"\"\n ).format(\n sql.Identifier(self.table_name),\n sql.Identifier(self.table_name),\n )\n )\n row = cur.fetchone()\n if row:\n registry_proto = registry_proto.FromString(row[0])\n except psycopg2.errors.UndefinedTable:\n pass\n return registry_proto\n\n def update_registry_proto(self, registry_proto: RegistryProto):\n \"\"\"\n Overwrites the current registry proto with the proto passed in. 
This method\n writes to the registry path.\n\n Args:\n registry_proto: the new RegistryProto\n \"\"\"\n schema_name = self.db_config.db_schema or self.db_config.user\n with _get_conn(self.db_config) as conn, conn.cursor() as cur:\n cur.execute(\n \"\"\"\n SELECT schema_name\n FROM information_schema.schemata\n WHERE schema_name = %s\n \"\"\",\n (schema_name,),\n )\n schema_exists = cur.fetchone()\n if not schema_exists:\n cur.execute(\n sql.SQL(\"CREATE SCHEMA IF NOT EXISTS {} AUTHORIZATION {}\").format(\n sql.Identifier(schema_name),\n sql.Identifier(self.db_config.user),\n ),\n )\n\n cur.execute(\n sql.SQL(\n \"\"\"\n CREATE TABLE IF NOT EXISTS {} (\n version BIGSERIAL PRIMARY KEY,\n registry BYTEA NOT NULL\n );\n \"\"\"\n ).format(sql.Identifier(self.table_name)),\n )\n # Do we want to keep track of the history or just keep the latest?\n cur.execute(\n sql.SQL(\n \"\"\"\n INSERT INTO {} (registry)\n VALUES (%s);\n \"\"\"\n ).format(sql.Identifier(self.table_name)),\n [registry_proto.SerializeToString()],\n )\n\n def teardown(self):\n with _get_conn(self.db_config) as conn, conn.cursor() as cur:\n cur.execute(\n sql.SQL(\n \"\"\"\n DROP TABLE IF EXISTS {};\n \"\"\"\n ).format(sql.Identifier(self.table_name))\n )\n", "path": "sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py"}], "after_files": [{"content": "import warnings\nfrom typing import Optional\n\nimport psycopg2\nfrom psycopg2 import sql\n\nfrom feast.infra.registry.registry_store import RegistryStore\nfrom feast.infra.utils.postgres.connection_utils import _get_conn\nfrom feast.infra.utils.postgres.postgres_config import PostgreSQLConfig\nfrom feast.protos.feast.core.Registry_pb2 import Registry as RegistryProto\nfrom feast.repo_config import RegistryConfig\n\n\nclass PostgresRegistryConfig(RegistryConfig):\n host: str\n port: int\n database: str\n db_schema: str\n user: str\n password: str\n sslmode: Optional[str]\n sslkey_path: Optional[str]\n sslcert_path: Optional[str]\n sslrootcert_path: Optional[str]\n\n\nclass PostgreSQLRegistryStore(RegistryStore):\n def __init__(self, config: PostgresRegistryConfig, registry_path: str):\n self.db_config = PostgreSQLConfig(\n host=config.host,\n port=config.port,\n database=config.database,\n db_schema=config.db_schema,\n user=config.user,\n password=config.password,\n sslmode=getattr(config, \"sslmode\", None),\n sslkey_path=getattr(config, \"sslkey_path\", None),\n sslcert_path=getattr(config, \"sslcert_path\", None),\n sslrootcert_path=getattr(config, \"sslrootcert_path\", None),\n )\n warnings.warn(\n \"PostgreSQLRegistryStore is deprecated and will be removed in the future releases. Please use SqlRegistry instead.\",\n DeprecationWarning,\n )\n\n self.table_name = config.path\n self.cache_ttl_seconds = config.cache_ttl_seconds\n\n def get_registry_proto(self) -> RegistryProto:\n registry_proto = RegistryProto()\n try:\n with _get_conn(self.db_config) as conn, conn.cursor() as cur:\n cur.execute(\n sql.SQL(\n \"\"\"\n SELECT registry\n FROM {}\n WHERE version = (SELECT max(version) FROM {})\n \"\"\"\n ).format(\n sql.Identifier(self.table_name),\n sql.Identifier(self.table_name),\n )\n )\n row = cur.fetchone()\n if row:\n registry_proto = registry_proto.FromString(row[0])\n except psycopg2.errors.UndefinedTable:\n pass\n return registry_proto\n\n def update_registry_proto(self, registry_proto: RegistryProto):\n \"\"\"\n Overwrites the current registry proto with the proto passed in. 
This method\n writes to the registry path.\n\n Args:\n registry_proto: the new RegistryProto\n \"\"\"\n schema_name = self.db_config.db_schema or self.db_config.user\n with _get_conn(self.db_config) as conn, conn.cursor() as cur:\n cur.execute(\n \"\"\"\n SELECT schema_name\n FROM information_schema.schemata\n WHERE schema_name = %s\n \"\"\",\n (schema_name,),\n )\n schema_exists = cur.fetchone()\n if not schema_exists:\n cur.execute(\n sql.SQL(\"CREATE SCHEMA IF NOT EXISTS {} AUTHORIZATION {}\").format(\n sql.Identifier(schema_name),\n sql.Identifier(self.db_config.user),\n ),\n )\n\n cur.execute(\n sql.SQL(\n \"\"\"\n CREATE TABLE IF NOT EXISTS {} (\n version BIGSERIAL PRIMARY KEY,\n registry BYTEA NOT NULL\n );\n \"\"\"\n ).format(sql.Identifier(self.table_name)),\n )\n # Do we want to keep track of the history or just keep the latest?\n cur.execute(\n sql.SQL(\n \"\"\"\n INSERT INTO {} (registry)\n VALUES (%s);\n \"\"\"\n ).format(sql.Identifier(self.table_name)),\n [registry_proto.SerializeToString()],\n )\n\n def teardown(self):\n with _get_conn(self.db_config) as conn, conn.cursor() as cur:\n cur.execute(\n sql.SQL(\n \"\"\"\n DROP TABLE IF EXISTS {};\n \"\"\"\n ).format(sql.Identifier(self.table_name))\n )\n", "path": "sdk/python/feast/infra/registry/contrib/postgres/postgres_registry_store.py"}]} | 1,425 | 213 |
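The fix above is simply a `warnings.warn(..., DeprecationWarning)` call in the constructor. A minimal, standalone sketch of that pattern and of how a caller opts in to actually seeing the warning; class name and message here are illustrative, not Feast code:

```python
import warnings


class LegacyStore:
    def __init__(self) -> None:
        # Emit the deprecation notice as soon as the legacy class is instantiated.
        warnings.warn(
            "LegacyStore is deprecated and will be removed in a future release; "
            "use the SQL-backed registry instead.",
            DeprecationWarning,
        )


if __name__ == "__main__":
    # DeprecationWarning can be filtered out by default, so enable it explicitly
    # when you want it to show up on the console.
    warnings.simplefilter("always", DeprecationWarning)
    LegacyStore()
```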
gh_patches_debug_1584 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-361 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Training classification accuracy >1
**Describe the bug**
Hi, I am training a classification model on MRI brain scans using vgg16 in gandlf. We expect the accuracy to be in the range 0-1, but the training classification accuracy is displayed as >1. Validation accuracy seems correct. Attaching the screenshot below. Can you please have a look?
**Screenshots**

**GaNDLF Version**
0.0.14
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `GANDLF/metrics/regression.py`
Content:
```
1 """
2 All the metrics are to be called from here
3 """
4 import torch
5 from sklearn.metrics import balanced_accuracy_score
6 import numpy as np
7
8
9 def classification_accuracy(output, label, params):
10 if params["problem_type"] == "classification":
11 predicted_classes = torch.argmax(output, 1)
12 else:
13 predicted_classes = output
14 acc = torch.sum(predicted_classes == label) / len(label)
15 return acc
16
17
18 def balanced_acc_score(output, label, params):
19 if params["problem_type"] == "classification":
20 predicted_classes = torch.argmax(output, 1)
21 else:
22 predicted_classes = output
23
24 return torch.from_numpy(
25 np.array(balanced_accuracy_score(predicted_classes.cpu(), label.cpu()))
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/GANDLF/metrics/regression.py b/GANDLF/metrics/regression.py
--- a/GANDLF/metrics/regression.py
+++ b/GANDLF/metrics/regression.py
@@ -11,7 +11,8 @@
predicted_classes = torch.argmax(output, 1)
else:
predicted_classes = output
- acc = torch.sum(predicted_classes == label) / len(label)
+
+ acc = torch.sum(predicted_classes == label.squeeze()) / len(label)
return acc
| {"golden_diff": "diff --git a/GANDLF/metrics/regression.py b/GANDLF/metrics/regression.py\n--- a/GANDLF/metrics/regression.py\n+++ b/GANDLF/metrics/regression.py\n@@ -11,7 +11,8 @@\n predicted_classes = torch.argmax(output, 1)\n else:\n predicted_classes = output\n- acc = torch.sum(predicted_classes == label) / len(label)\n+\n+ acc = torch.sum(predicted_classes == label.squeeze()) / len(label)\n return acc\n", "issue": "Training classification accuracy >1\n**Describe the bug**\r\nHi, I am training a classification model on MRI brain scans using vgg16 in gandlf. We expect to have accuracy in the range of 0-1. But Training classification accuracy is displayed >1. Validation accuracy seems correct. Attaching the screenshot below. Can you please have a look?\r\n\r\n**Screenshots**\r\n\r\n\r\n**GaNDLF Version**\r\n0.0.14\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nAll the metrics are to be called from here\n\"\"\"\nimport torch\nfrom sklearn.metrics import balanced_accuracy_score\nimport numpy as np\n\n\ndef classification_accuracy(output, label, params):\n if params[\"problem_type\"] == \"classification\":\n predicted_classes = torch.argmax(output, 1)\n else:\n predicted_classes = output\n acc = torch.sum(predicted_classes == label) / len(label)\n return acc\n\n\ndef balanced_acc_score(output, label, params):\n if params[\"problem_type\"] == \"classification\":\n predicted_classes = torch.argmax(output, 1)\n else:\n predicted_classes = output\n\n return torch.from_numpy(\n np.array(balanced_accuracy_score(predicted_classes.cpu(), label.cpu()))\n )\n", "path": "GANDLF/metrics/regression.py"}], "after_files": [{"content": "\"\"\"\nAll the metrics are to be called from here\n\"\"\"\nimport torch\nfrom sklearn.metrics import balanced_accuracy_score\nimport numpy as np\n\n\ndef classification_accuracy(output, label, params):\n if params[\"problem_type\"] == \"classification\":\n predicted_classes = torch.argmax(output, 1)\n else:\n predicted_classes = output\n\n acc = torch.sum(predicted_classes == label.squeeze()) / len(label)\n return acc\n\n\ndef balanced_acc_score(output, label, params):\n if params[\"problem_type\"] == \"classification\":\n predicted_classes = torch.argmax(output, 1)\n else:\n predicted_classes = output\n\n return torch.from_numpy(\n np.array(balanced_accuracy_score(predicted_classes.cpu(), label.cpu()))\n )\n", "path": "GANDLF/metrics/regression.py"}]} | 643 | 114 |
gh_patches_debug_3916 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-379 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Execfile does not exist in py3k
<!--
Thanks for reporting issues of python-telegram-bot!
To make it easier for us to help you please enter detailed information below.
Please note, we only support the latest version of python-telegram-bot and
master branch. Please make sure to upgrade & recreate the issue on the latest
version prior to opening an issue.
-->
### Steps to reproduce
1. Use python 3
2. Try to install from git:
`$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram`
### Expected behaviour
The library should be installed.
### Actual behaviour
NameError due to `execfile` not being a thing in python 3.
See here for alternatives: https://stackoverflow.com/a/437857
I would fix it myself, but I am unable to actually find the execfile call anywhere .-.
### Configuration
**Operating System:**
Windows 10 Education
**Version of Python, python-telegram-bot & dependencies:**
Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]
### Logs
``````
$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram
Obtaining telegram from git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram
Skipping because already up-to-date.
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Development\telegram\VocaBot2\src\telegram\setup.py", line 20, in <module>
execfile(os.path.join('telegram', 'version.py'))
NameError: name 'execfile' is not defined
Command "python setup.py egg_info" failed with error code 1```
``````
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """The setup and build script for the python-telegram-bot library."""
3
4 import codecs
5 import os
6 from setuptools import setup, find_packages
7
8
9 def requirements():
10 """Build the requirements list for this project"""
11 requirements_list = []
12
13 with open('requirements.txt') as requirements:
14 for install in requirements:
15 requirements_list.append(install.strip())
16
17 return requirements_list
18
19
20 def execfile(fn):
21 with open(fn) as f:
22 code = compile(f.read(), fn, 'exec')
23 exec(code)
24
25
26 with codecs.open('README.rst', 'r', 'utf-8') as fd:
27 execfile(os.path.join('telegram', 'version.py'))
28
29 setup(name='python-telegram-bot',
30 version=__version__,
31 author='Leandro Toledo',
32 author_email='[email protected]',
33 license='LGPLv3',
34 url='https://github.com/python-telegram-bot/python-telegram-bot',
35 keywords='python telegram bot api wrapper',
36 description='Not just a Python wrapper around the Telegram Bot API',
37 long_description=fd.read(),
38 packages=find_packages(exclude=['tests*']),
39 install_requires=requirements(),
40 include_package_data=True,
41 classifiers=[
42 'Development Status :: 5 - Production/Stable',
43 'Intended Audience :: Developers',
44 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',
45 'Operating System :: OS Independent',
46 'Topic :: Software Development :: Libraries :: Python Modules',
47 'Topic :: Communications :: Chat',
48 'Topic :: Internet',
49 'Programming Language :: Python',
50 'Programming Language :: Python :: 2',
51 'Programming Language :: Python :: 2.6',
52 'Programming Language :: Python :: 2.7',
53 'Programming Language :: Python :: 3',
54 'Programming Language :: Python :: 3.3',
55 'Programming Language :: Python :: 3.4',
56 'Programming Language :: Python :: 3.5',
57 ],)
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,8 +16,12 @@
return requirements_list
+
with codecs.open('README.rst', 'r', 'utf-8') as fd:
- execfile(os.path.join('telegram', 'version.py'))
+ fn = os.path.join('telegram', 'version.py')
+ with open(fn) as fh:
+ code = compile(fh.read(), fn, 'exec')
+ exec(code)
setup(name='python-telegram-bot',
version=__version__,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,8 +16,12 @@\n \n return requirements_list\n \n+\n with codecs.open('README.rst', 'r', 'utf-8') as fd:\n- execfile(os.path.join('telegram', 'version.py'))\n+ fn = os.path.join('telegram', 'version.py')\n+ with open(fn) as fh:\n+ code = compile(fh.read(), fn, 'exec')\n+ exec(code)\n \n setup(name='python-telegram-bot',\n version=__version__,\n", "issue": "Execfile does not exist in py3k\n<!--\nThanks for reporting issues of python-telegram-bot!\nTo make it easier for us to help you please enter detailed information below.\n\nPlease note, we only support the latest version of python-telegram-bot and\nmaster branch. Please make sure to upgrade & recreate the issue on the latest\nversion prior to opening an issue.\n-->\n### Steps to reproduce\n1. Use python 3\n2. Try to install from git:\n `$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram`\n### Expected behaviour\n\nThe library should be installed.\n### Actual behaviour\n\nNameError due to `execfile` not being a thing in python 3.\nSee here for alternatives: https://stackoverflow.com/a/437857\nI would fix it myself, but I am unable to actually find the execfile call anywhere .-.\n### Configuration\n\n**Operating System:**\nWindows 10 Education\n\n**Version of Python, python-telegram-bot & dependencies:**\nPython 3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]\n### Logs\n\n``````\n$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram\nObtaining telegram from git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram\n Skipping because already up-to-date.\n Complete output from command python setup.py egg_info:\n Traceback (most recent call last):\n File \"<string>\", line 1, in <module>\n File \"C:\\Development\\telegram\\VocaBot2\\src\\telegram\\setup.py\", line 20, in <module>\n execfile(os.path.join('telegram', 'version.py'))\n NameError: name 'execfile' is not defined\nCommand \"python setup.py egg_info\" failed with error code 1```\n``````\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"The setup and build script for the python-telegram-bot library.\"\"\"\n\nimport codecs\nimport os\nfrom setuptools import setup, find_packages\n\n\ndef requirements():\n \"\"\"Build the requirements list for this project\"\"\"\n requirements_list = []\n\n with open('requirements.txt') as requirements:\n for install in requirements:\n requirements_list.append(install.strip())\n\n return requirements_list\n\n\ndef execfile(fn):\n with open(fn) as f:\n code = compile(f.read(), fn, 'exec')\n exec(code)\n\n\nwith codecs.open('README.rst', 'r', 'utf-8') as fd:\n execfile(os.path.join('telegram', 'version.py'))\n\n setup(name='python-telegram-bot',\n version=__version__,\n author='Leandro Toledo',\n author_email='[email protected]',\n license='LGPLv3',\n url='https://github.com/python-telegram-bot/python-telegram-bot',\n keywords='python telegram bot api wrapper',\n description='Not just a Python wrapper around the Telegram Bot API',\n long_description=fd.read(),\n packages=find_packages(exclude=['tests*']),\n install_requires=requirements(),\n include_package_data=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License 
:: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Communications :: Chat',\n 'Topic :: Internet',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"The setup and build script for the python-telegram-bot library.\"\"\"\n\nimport codecs\nimport os\nfrom setuptools import setup, find_packages\n\n\ndef requirements():\n \"\"\"Build the requirements list for this project\"\"\"\n requirements_list = []\n\n with open('requirements.txt') as requirements:\n for install in requirements:\n requirements_list.append(install.strip())\n\n return requirements_list\n\n\nwith codecs.open('README.rst', 'r', 'utf-8') as fd:\n fn = os.path.join('telegram', 'version.py')\n with open(fn) as fh:\n code = compile(fh.read(), fn, 'exec')\n exec(code)\n\n setup(name='python-telegram-bot',\n version=__version__,\n author='Leandro Toledo',\n author_email='[email protected]',\n license='LGPLv3',\n url='https://github.com/python-telegram-bot/python-telegram-bot',\n keywords='python telegram bot api wrapper',\n description='Not just a Python wrapper around the Telegram Bot API',\n long_description=fd.read(),\n packages=find_packages(exclude=['tests*']),\n install_requires=requirements(),\n include_package_data=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Communications :: Chat',\n 'Topic :: Internet',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],)\n", "path": "setup.py"}]} | 1,351 | 129 |
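The patch above inlines the Python 3 replacement for `execfile`: read the file, `compile` it, then `exec` the code object so the names it defines (here `__version__`) land in the current namespace. A runnable standalone sketch, where the file content is a stand-in rather than the real `telegram/version.py`:

```python
from pathlib import Path

fn = "version.py"
Path(fn).write_text('__version__ = "1.2.3"\n')  # stand-in for the real version file

# Python 3 equivalent of the removed execfile(fn):
with open(fn) as fh:
    code = compile(fh.read(), fn, "exec")
    exec(code)

print(__version__)  # -> 1.2.3, the name defined by the executed file
```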
gh_patches_debug_25537 | rasdani/github-patches | git_diff | Qiskit__qiskit-2947 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
assemble.py _expand_parameters(circuits, run_config) apparently broken
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master
- **Python version**: 3.6.8
- **Operating system**: Linux
### What is the current behavior?
`Parameter` binding does not succeed as reported by user `@Adrian Auer` in Qiskit Slack.
### Steps to reproduce the problem
```
from qiskit import Aer, QuantumCircuit, QuantumRegister, execute
from qiskit.circuit import Parameter
# create m = 2 circuits
qr = QuantumRegister(1)
quantum_circuit_1 = QuantumCircuit(qr)
quantum_circuit_2 = QuantumCircuit(qr)
theta = Parameter('theta')
# add parametrized gates
quantum_circuit_1.u3(theta, 0, 0, qr[0])
quantum_circuit_2.u3(theta, 3.14, 0, qr[0])
circuits = [quantum_circuit_1, quantum_circuit_2]
# inspect parameters property
for circuit in circuits:
print(circuit.parameters)
# bind parameter to n = 1 values
job = execute(circuits,
Aer.get_backend('qasm_simulator'),
shots=512,
parameter_binds=[{theta: 1}])
```
Result is error:
```
Traceback (most recent call last):
File "adrian_auer_example.py", line 25, in <module>
parameter_binds=[{theta: 1}])
File "/home/jax/work/QISKit/DEV/qiskit-terra/qiskit/execute.py", line 218, in execute
run_config=run_config
File "/home/jax/work/QISKit/DEV/qiskit-terra/qiskit/compiler/assemble.py", line 149, in assemble
run_config=run_config)
File "/home/jax/work/QISKit/DEV/qiskit-terra/qiskit/compiler/assemble.py", line 298, in _expand_parameters
'Circuit parameters: {}').format(all_bind_parameters, all_circuit_parameters))
qiskit.exceptions.QiskitError: 'Mismatch between run_config.parameter_binds and all circuit parameters. Parameter binds: [dict_keys([Parameter(theta)])] Circuit parameters: [{Parameter(theta)}, {Parameter(theta)}]'
```
### What is the expected behavior?
Parameter would bind and circuits would execute.
### Suggested solutions
In `qiskit/compiler/assemble.py:_expand_parameters`, lines 293-294, both of the following tests are failing:
```
or any(unique_parameters != bind_params for bind_params in all_bind_parameters) \
or any(unique_parameters != parameters for parameters in all_circuit_parameters):
```
It appears to be because `unique_parameters` is a `list` of `Parameter` each of which is being compared to the elements of a list of dictionaries.
The comparison should be re-examined so that types match up.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/circuit/parameter.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14 """
15 Parameter Class for variable parameters.
16 """
17
18 import sympy
19
20 from .parameterexpression import ParameterExpression
21
22
23 class Parameter(ParameterExpression):
24 """Parameter Class for variable parameters"""
25 def __init__(self, name):
26 self._name = name
27
28 symbol = sympy.Symbol(name)
29 super().__init__(symbol_map={self: symbol}, expr=symbol)
30
31 def subs(self, parameter_map):
32 """Substitute self with the corresponding parameter in parameter_map."""
33 return parameter_map[self]
34
35 @property
36 def name(self):
37 """Returns the name of the Parameter."""
38 return self._name
39
40 def __str__(self):
41 return self.name
42
43 def __copy__(self):
44 return self
45
46 def __deepcopy__(self, memo=None):
47 return self
48
49 def __repr__(self):
50 return '{}({})'.format(self.__class__.__name__, self.name)
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/circuit/parameter.py b/qiskit/circuit/parameter.py
--- a/qiskit/circuit/parameter.py
+++ b/qiskit/circuit/parameter.py
@@ -15,6 +15,8 @@
Parameter Class for variable parameters.
"""
+from uuid import uuid4
+
import sympy
from .parameterexpression import ParameterExpression
@@ -22,6 +24,27 @@
class Parameter(ParameterExpression):
"""Parameter Class for variable parameters"""
+
+ def __new__(cls, _, uuid=None):
+ # Parameter relies on self._uuid being set prior to other attributes
+ # (e.g. symbol_map) which may depend on self._uuid for Parameter's hash
+ # or __eq__ functions.
+
+ obj = object.__new__(cls)
+
+ if uuid is None:
+ obj._uuid = uuid4()
+ else:
+ obj._uuid = uuid
+
+ return obj
+
+ def __getnewargs__(self):
+ # Unpickling won't in general call __init__ but will always call
+ # __new__. Specify arguments to be passed to __new__ when unpickling.
+
+ return (self.name, self._uuid)
+
def __init__(self, name):
self._name = name
@@ -48,3 +71,9 @@
def __repr__(self):
return '{}({})'.format(self.__class__.__name__, self.name)
+
+ def __eq__(self, other):
+ return isinstance(other, Parameter) and self._uuid == other._uuid
+
+ def __hash__(self):
+ return hash(self._uuid)
| {"golden_diff": "diff --git a/qiskit/circuit/parameter.py b/qiskit/circuit/parameter.py\n--- a/qiskit/circuit/parameter.py\n+++ b/qiskit/circuit/parameter.py\n@@ -15,6 +15,8 @@\n Parameter Class for variable parameters.\n \"\"\"\n \n+from uuid import uuid4\n+\n import sympy\n \n from .parameterexpression import ParameterExpression\n@@ -22,6 +24,27 @@\n \n class Parameter(ParameterExpression):\n \"\"\"Parameter Class for variable parameters\"\"\"\n+\n+ def __new__(cls, _, uuid=None):\n+ # Parameter relies on self._uuid being set prior to other attributes\n+ # (e.g. symbol_map) which may depend on self._uuid for Parameter's hash\n+ # or __eq__ functions.\n+\n+ obj = object.__new__(cls)\n+\n+ if uuid is None:\n+ obj._uuid = uuid4()\n+ else:\n+ obj._uuid = uuid\n+\n+ return obj\n+\n+ def __getnewargs__(self):\n+ # Unpickling won't in general call __init__ but will always call\n+ # __new__. Specify arguments to be passed to __new__ when unpickling.\n+\n+ return (self.name, self._uuid)\n+\n def __init__(self, name):\n self._name = name\n \n@@ -48,3 +71,9 @@\n \n def __repr__(self):\n return '{}({})'.format(self.__class__.__name__, self.name)\n+\n+ def __eq__(self, other):\n+ return isinstance(other, Parameter) and self._uuid == other._uuid\n+\n+ def __hash__(self):\n+ return hash(self._uuid)\n", "issue": "assemble.py _expand_parameters(circuits, run_config) apparently broken\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: master\r\n- **Python version**: 3.6.8\r\n- **Operating system**: Linux\r\n\r\n### What is the current behavior?\r\n`Parameter` binding does not succeed as reported by user `@Adrian Auer` in Qiskit Slack.\r\n\r\n### Steps to reproduce the problem\r\n\r\n```\r\nfrom qiskit import Aer, QuantumCircuit, QuantumRegister, execute\r\nfrom qiskit.circuit import Parameter\r\n\r\n# create m = 2 circuits\r\nqr = QuantumRegister(1)\r\nquantum_circuit_1 = QuantumCircuit(qr)\r\nquantum_circuit_2 = QuantumCircuit(qr)\r\n\r\ntheta = Parameter('theta')\r\n\r\n# add parametrized gates\r\nquantum_circuit_1.u3(theta, 0, 0, qr[0])\r\nquantum_circuit_2.u3(theta, 3.14, 0, qr[0])\r\n\r\ncircuits = [quantum_circuit_1, quantum_circuit_2]\r\n\r\n# inspect parameters property\r\nfor circuit in circuits:\r\n print(circuit.parameters)\r\n\r\n# bind parameter to n = 1 values\r\njob = execute(circuits,\r\n Aer.get_backend('qasm_simulator'),\r\n shots=512,\r\n parameter_binds=[{theta: 1}])\r\n```\r\nResult is error:\r\n```\r\nTraceback (most recent call last):\r\n File \"adrian_auer_example.py\", line 25, in <module>\r\n parameter_binds=[{theta: 1}])\r\n File \"/home/jax/work/QISKit/DEV/qiskit-terra/qiskit/execute.py\", line 218, in execute\r\n run_config=run_config\r\n File \"/home/jax/work/QISKit/DEV/qiskit-terra/qiskit/compiler/assemble.py\", line 149, in assemble\r\n run_config=run_config)\r\n File \"/home/jax/work/QISKit/DEV/qiskit-terra/qiskit/compiler/assemble.py\", line 298, in _expand_parameters\r\n 'Circuit parameters: {}').format(all_bind_parameters, all_circuit_parameters))\r\nqiskit.exceptions.QiskitError: 'Mismatch between run_config.parameter_binds and all circuit parameters. 
Parameter binds: [dict_keys([Parameter(theta)])] Circuit parameters: [{Parameter(theta)}, {Parameter(theta)}]'\r\n```\r\n### What is the expected behavior?\r\nParameter would bind and circuits would execute.\r\n\r\n\r\n### Suggested solutions\r\nIn `qiskit/compiler/assembly.py:_expand_parameters` lines 293-294 both of the following tests are failing:\r\n```\r\nor any(unique_parameters != bind_params for bind_params in all_bind_parameters) \\\r\n or any(unique_parameters != parameters for parameters in all_circuit_parameters):\r\n```\r\nIt appears to be because `unique_parameters` is a `list` of `Parameter` each of which is being compared to the elements of a list of dictionaries.\r\nThe comparison should be re-examined so that types match up.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\"\"\"\nParameter Class for variable parameters.\n\"\"\"\n\nimport sympy\n\nfrom .parameterexpression import ParameterExpression\n\n\nclass Parameter(ParameterExpression):\n \"\"\"Parameter Class for variable parameters\"\"\"\n def __init__(self, name):\n self._name = name\n\n symbol = sympy.Symbol(name)\n super().__init__(symbol_map={self: symbol}, expr=symbol)\n\n def subs(self, parameter_map):\n \"\"\"Substitute self with the corresponding parameter in parameter_map.\"\"\"\n return parameter_map[self]\n\n @property\n def name(self):\n \"\"\"Returns the name of the Parameter.\"\"\"\n return self._name\n\n def __str__(self):\n return self.name\n\n def __copy__(self):\n return self\n\n def __deepcopy__(self, memo=None):\n return self\n\n def __repr__(self):\n return '{}({})'.format(self.__class__.__name__, self.name)\n", "path": "qiskit/circuit/parameter.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\"\"\"\nParameter Class for variable parameters.\n\"\"\"\n\nfrom uuid import uuid4\n\nimport sympy\n\nfrom .parameterexpression import ParameterExpression\n\n\nclass Parameter(ParameterExpression):\n \"\"\"Parameter Class for variable parameters\"\"\"\n\n def __new__(cls, _, uuid=None):\n # Parameter relies on self._uuid being set prior to other attributes\n # (e.g. symbol_map) which may depend on self._uuid for Parameter's hash\n # or __eq__ functions.\n\n obj = object.__new__(cls)\n\n if uuid is None:\n obj._uuid = uuid4()\n else:\n obj._uuid = uuid\n\n return obj\n\n def __getnewargs__(self):\n # Unpickling won't in general call __init__ but will always call\n # __new__. 
Specify arguments to be passed to __new__ when unpickling.\n\n return (self.name, self._uuid)\n\n def __init__(self, name):\n self._name = name\n\n symbol = sympy.Symbol(name)\n super().__init__(symbol_map={self: symbol}, expr=symbol)\n\n def subs(self, parameter_map):\n \"\"\"Substitute self with the corresponding parameter in parameter_map.\"\"\"\n return parameter_map[self]\n\n @property\n def name(self):\n \"\"\"Returns the name of the Parameter.\"\"\"\n return self._name\n\n def __str__(self):\n return self.name\n\n def __copy__(self):\n return self\n\n def __deepcopy__(self, memo=None):\n return self\n\n def __repr__(self):\n return '{}({})'.format(self.__class__.__name__, self.name)\n\n def __eq__(self, other):\n return isinstance(other, Parameter) and self._uuid == other._uuid\n\n def __hash__(self):\n return hash(self._uuid)\n", "path": "qiskit/circuit/parameter.py"}]} | 1,370 | 377 |
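The patch above ties `Parameter` identity to a UUID assigned in `__new__`, so equality and hashing survive pickling (which calls `__new__` but not `__init__`), while two separately constructed parameters stay distinct even if they share a name. A minimal stand-in class showing the same behavior; this is an illustration of the pattern, not the Qiskit implementation:

```python
import pickle
from uuid import uuid4


class Param:
    """Minimal stand-in for the patched Parameter: identity is a UUID set in __new__."""

    def __new__(cls, _name, uuid=None):
        obj = object.__new__(cls)
        obj._uuid = uuid4() if uuid is None else uuid
        return obj

    def __getnewargs__(self):
        # Unpickling calls __new__ (not __init__), so hand it the stored uuid.
        return (self.name, self._uuid)

    def __init__(self, name, uuid=None):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Param) and self._uuid == other._uuid

    def __hash__(self):
        return hash(self._uuid)


theta_a, theta_b = Param("theta"), Param("theta")
print(theta_a == theta_b)                               # False: same name, separately created
print(theta_a == pickle.loads(pickle.dumps(theta_a)))   # True: uuid survives the round trip
```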
gh_patches_debug_874 | rasdani/github-patches | git_diff | bridgecrewio__checkov-1497 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
checkov fails with junit-xml==1.8
**Describe the bug**
checkov fails with junit-xml==1.8
**To Reproduce**
Steps to reproduce the behavior:
1. pip3 install junit-xml==1.8
2. checkov -d .
3. See error:
```
Traceback (most recent call last):
File "/usr/local/bin/checkov", line 2, in <module>
from checkov.main import run
File "/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/main.py", line 12, in <module>
from checkov.arm.runner import Runner as arm_runner
File "/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/arm/runner.py", line 7, in <module>
from checkov.common.output.report import Report
File "/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/common/output/report.py", line 5, in <module>
from junit_xml import TestCase, TestSuite, to_xml_report_string
ImportError: cannot import name 'to_xml_report_string' from 'junit_xml' (/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/junit_xml/__init__.py)
```
**Expected behavior**
checkov runs fine with junit-xml==1.9 so a reasonable fix would be to pin that version (or greater) in setup.py install_requires.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: RHEL 7
- Checkov Version [e.g. 22]: 2.0.350
**Additional context**
Add any other context about the problem here (e.g. code snippets).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import logging
3 import os
4 from importlib import util
5 from os import path
6
7 import setuptools
8 from setuptools import setup
9
10 # read the contents of your README file
11 this_directory = path.abspath(path.dirname(__file__))
12 with open(path.join(this_directory, "README.md"), encoding="utf-8") as f:
13 long_description = f.read()
14
15 logger = logging.getLogger(__name__)
16 spec = util.spec_from_file_location(
17 "checkov.version", os.path.join("checkov", "version.py")
18 )
19 # noinspection PyUnresolvedReferences
20 mod = util.module_from_spec(spec)
21 spec.loader.exec_module(mod) # type: ignore
22 version = mod.version # type: ignore
23
24 setup(
25 extras_require={
26 "dev": [
27 "pytest==5.3.1",
28 "coverage",
29 "coverage-badge",
30 "GitPython==3.1.7",
31 "bandit",
32 "jsonschema",
33 ]
34 },
35 install_requires=[
36 "bc-python-hcl2>=0.3.18",
37 "cloudsplaining>=0.4.1",
38 "deep_merge",
39 "tabulate",
40 "colorama",
41 "termcolor",
42 "junit-xml",
43 "dpath>=1.5.0,<2",
44 "pyyaml>=5.4.1",
45 "boto3==1.17.*",
46 "GitPython",
47 "six==1.15.0",
48 "jmespath",
49 "tqdm",
50 "update_checker",
51 "semantic_version",
52 "packaging",
53 "networkx",
54 "dockerfile-parse",
55 "docker",
56 "configargparse",
57 "detect-secrets",
58 "policyuniverse",
59 "typing-extensions",
60 ],
61 license="Apache License 2.0",
62 name="checkov",
63 version=version,
64 python_requires=">=3.7",
65 description="Infrastructure as code static analysis",
66 author="bridgecrew",
67 author_email="[email protected]",
68 url="https://github.com/bridgecrewio/checkov",
69 packages=setuptools.find_packages(exclude=["tests*", "integration_tests*"]),
70 include_package_data=True,
71 package_dir={
72 "checkov.terraform.checks.graph_checks": "checkov/terraform/checks/graph_checks"
73 },
74 package_data={
75 "checkov.terraform.checks.graph_checks": [
76 "aws/*.yaml",
77 "gcp/*.yaml",
78 "azure/*.yaml",
79 ]
80 },
81 scripts=["bin/checkov", "bin/checkov.cmd"],
82 long_description=long_description,
83 long_description_content_type="text/markdown",
84 classifiers=[
85 "Environment :: Console",
86 "Intended Audience :: Developers",
87 "Intended Audience :: System Administrators",
88 "Programming Language :: Python :: 3.7",
89 "Programming Language :: Python :: 3.8",
90 "Programming Language :: Python :: 3.9",
91 "Topic :: Security",
92 "Topic :: Software Development :: Build Tools",
93 ],
94 )
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -39,7 +39,7 @@
"tabulate",
"colorama",
"termcolor",
- "junit-xml",
+ "junit-xml>=1.9",
"dpath>=1.5.0,<2",
"pyyaml>=5.4.1",
"boto3==1.17.*",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -39,7 +39,7 @@\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n- \"junit-xml\",\n+ \"junit-xml>=1.9\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.*\",\n", "issue": "checkov fails with junit-xml==1.8\n**Describe the bug**\r\ncheckov fails with junit-xml==1.8\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. pip3 install junit-xml==1.8\r\n2. checkov -d .\r\n3. See error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/checkov\", line 2, in <module>\r\n from checkov.main import run\r\n File \"/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/main.py\", line 12, in <module>\r\n from checkov.arm.runner import Runner as arm_runner\r\n File \"/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/arm/runner.py\", line 7, in <module>\r\n from checkov.common.output.report import Report\r\n File \"/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/common/output/report.py\", line 5, in <module>\r\n from junit_xml import TestCase, TestSuite, to_xml_report_string\r\nImportError: cannot import name 'to_xml_report_string' from 'junit_xml' (/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/junit_xml/__init__.py)\r\n```\r\n\r\n**Expected behavior**\r\ncheckov runs fine with junit-xml==1.9 so a reasonable fix would be to pin that version (or greater) in setup.py install_requires.\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: RHEL 7\r\n - Checkov Version [e.g. 22]: 2.0.350\r\n\r\n**Additional context**\r\nAdd any other context about the problem here (e.g. 
code snippets).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.*\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"detect-secrets\",\n \"policyuniverse\",\n \"typing-extensions\",\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\", \"integration_tests*\"]),\n include_package_data=True,\n package_dir={\n \"checkov.terraform.checks.graph_checks\": \"checkov/terraform/checks/graph_checks\"\n },\n package_data={\n \"checkov.terraform.checks.graph_checks\": [\n \"aws/*.yaml\",\n \"gcp/*.yaml\",\n \"azure/*.yaml\",\n ]\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml>=1.9\",\n 
\"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.*\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"detect-secrets\",\n \"policyuniverse\",\n \"typing-extensions\",\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\", \"integration_tests*\"]),\n include_package_data=True,\n package_dir={\n \"checkov.terraform.checks.graph_checks\": \"checkov/terraform/checks/graph_checks\"\n },\n package_data={\n \"checkov.terraform.checks.graph_checks\": [\n \"aws/*.yaml\",\n \"gcp/*.yaml\",\n \"azure/*.yaml\",\n ]\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n ],\n)\n", "path": "setup.py"}]} | 1,495 | 103 |
gh_patches_debug_27849 | rasdani/github-patches | git_diff | DataDog__dd-agent-1401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[marathon] 404 when the configured URL has a trailing slash
The web server powering Marathon apparently doesn't like double slashes, so dd-agent is getting a 404 back when trying to hit the /v2/apps endpoint.
e.g., with this configuration:
```
instances:
- url: "http://10.0.10.1:8080/"
```
The agent reports this error:
```
instance #0 [ERROR]: Exception('Got 404 when hitting http://10.0.10.1:8080/',)
```
This works fine if you leave off the trailing slash.
--- END ISSUE ---
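Editorial note (not part of the original prompt): the failure above comes down to naive string concatenation of the configured base URL and the endpoint path, and the accepted patch for this record switches to `urljoin`. A minimal sketch of the difference, shown with Python 3's `urllib.parse.urljoin` for convenience (the patched check itself imports `urljoin` from the Python 2 `urlparse` module, which behaves the same way for this case):

```python
from urllib.parse import urljoin

base = "http://10.0.10.1:8080/"   # configured URL with a trailing slash

# naive concatenation keeps both slashes, and Marathon answers 404
print(base + "/v2/apps")          # -> http://10.0.10.1:8080//v2/apps

# urljoin resolves the absolute path against the base instead
print(urljoin(base, "/v2/apps"))  # -> http://10.0.10.1:8080/v2/apps
```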
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checks.d/marathon.py`
Content:
```
1 # project
2 from checks import AgentCheck
3
4 # 3rd party
5 import requests
6
7 class Marathon(AgentCheck):
8
9 DEFAULT_TIMEOUT = 5
10 SERVICE_CHECK_NAME = 'marathon.can_connect'
11
12 APP_METRICS = [
13 'backoffFactor',
14 'backoffSeconds',
15 'cpus',
16 'dist',
17 'instances',
18 'mem',
19 'taskRateLimit',
20 'tasksRunning',
21 'tasksStaged'
22 ]
23
24 def check(self, instance):
25 if 'url' not in instance:
26 raise Exception('Marathon instance missing "url" value.')
27
28 # Load values from the instance config
29 url = instance['url']
30 instance_tags = instance.get('tags', [])
31 default_timeout = self.init_config.get('default_timeout', self.DEFAULT_TIMEOUT)
32 timeout = float(instance.get('timeout', default_timeout))
33
34 response = self.get_json(url + "/v2/apps", timeout)
35 if response is not None:
36 self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)
37 for app in response['apps']:
38 tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags
39 for attr in self.APP_METRICS:
40 if attr in app:
41 self.gauge('marathon.' + attr, app[attr], tags=tags)
42 versions_reply = self.get_json(url + "/v2/apps/" + app['id'] + "/versions", timeout)
43 if versions_reply is not None:
44 self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)
45
46 def get_json(self, url, timeout):
47 try:
48 r = requests.get(url, timeout=timeout)
49 r.raise_for_status()
50 except requests.exceptions.Timeout:
51 # If there's a timeout
52 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,
53 message='%s timed out after %s seconds.' % (url, timeout),
54 tags = ["url:{}".format(url)])
55 raise Exception("Timeout when hitting %s" % url)
56
57 except requests.exceptions.HTTPError:
58 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,
59 message='%s returned a status of %s' % (url, r.status_code),
60 tags = ["url:{}".format(url)])
61 raise Exception("Got %s when hitting %s" % (r.status_code, url))
62
63 else:
64 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,
65 tags = ["url:{}".format(url)]
66 )
67
68 return r.json()
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checks.d/marathon.py b/checks.d/marathon.py
--- a/checks.d/marathon.py
+++ b/checks.d/marathon.py
@@ -1,3 +1,6 @@
+# stdlib
+from urlparse import urljoin
+
# project
from checks import AgentCheck
@@ -31,7 +34,7 @@
default_timeout = self.init_config.get('default_timeout', self.DEFAULT_TIMEOUT)
timeout = float(instance.get('timeout', default_timeout))
- response = self.get_json(url + "/v2/apps", timeout)
+ response = self.get_json(urljoin(url, "/v2/apps"), timeout)
if response is not None:
self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)
for app in response['apps']:
@@ -39,7 +42,10 @@
for attr in self.APP_METRICS:
if attr in app:
self.gauge('marathon.' + attr, app[attr], tags=tags)
- versions_reply = self.get_json(url + "/v2/apps/" + app['id'] + "/versions", timeout)
+
+ query_url = urljoin(url, "/v2/apps/{0}/versions".format(app['id']))
+ versions_reply = self.get_json(query_url, timeout)
+
if versions_reply is not None:
self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)
| {"golden_diff": "diff --git a/checks.d/marathon.py b/checks.d/marathon.py\n--- a/checks.d/marathon.py\n+++ b/checks.d/marathon.py\n@@ -1,3 +1,6 @@\n+# stdlib\n+from urlparse import urljoin\n+\n # project\n from checks import AgentCheck\n \n@@ -31,7 +34,7 @@\n default_timeout = self.init_config.get('default_timeout', self.DEFAULT_TIMEOUT)\n timeout = float(instance.get('timeout', default_timeout))\n \n- response = self.get_json(url + \"/v2/apps\", timeout)\n+ response = self.get_json(urljoin(url, \"/v2/apps\"), timeout)\n if response is not None:\n self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)\n for app in response['apps']:\n@@ -39,7 +42,10 @@\n for attr in self.APP_METRICS:\n if attr in app:\n self.gauge('marathon.' + attr, app[attr], tags=tags)\n- versions_reply = self.get_json(url + \"/v2/apps/\" + app['id'] + \"/versions\", timeout)\n+\n+ query_url = urljoin(url, \"/v2/apps/{0}/versions\".format(app['id']))\n+ versions_reply = self.get_json(query_url, timeout)\n+\n if versions_reply is not None:\n self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n", "issue": "[marathon] 404 when the configured URL has a trailing slash\nThe web server powering Marathon apparently doesn't like double slashes, so dd-agent is getting a 404 back when trying to hit the /v2/apps endpoint.\n\ne.g., with this configuration:\n\n```\ninstances:\n- url: \"http://10.0.10.1:8080/\"\n```\n\nThe agent reports this error:\n\n```\ninstance #0 [ERROR]: Exception('Got 404 when hitting http://10.0.10.1:8080/',)\n```\n\nThis works fine if you leave off the trailing slash. \n\n", "before_files": [{"content": "# project\nfrom checks import AgentCheck\n\n# 3rd party\nimport requests\n\nclass Marathon(AgentCheck):\n\n DEFAULT_TIMEOUT = 5\n SERVICE_CHECK_NAME = 'marathon.can_connect'\n\n APP_METRICS = [\n 'backoffFactor',\n 'backoffSeconds',\n 'cpus',\n 'dist',\n 'instances',\n 'mem',\n 'taskRateLimit',\n 'tasksRunning',\n 'tasksStaged'\n ]\n\n def check(self, instance):\n if 'url' not in instance:\n raise Exception('Marathon instance missing \"url\" value.')\n\n # Load values from the instance config\n url = instance['url']\n instance_tags = instance.get('tags', [])\n default_timeout = self.init_config.get('default_timeout', self.DEFAULT_TIMEOUT)\n timeout = float(instance.get('timeout', default_timeout))\n\n response = self.get_json(url + \"/v2/apps\", timeout)\n if response is not None:\n self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)\n for app in response['apps']:\n tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags\n for attr in self.APP_METRICS:\n if attr in app:\n self.gauge('marathon.' + attr, app[attr], tags=tags)\n versions_reply = self.get_json(url + \"/v2/apps/\" + app['id'] + \"/versions\", timeout)\n if versions_reply is not None:\n self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n\n def get_json(self, url, timeout):\n try:\n r = requests.get(url, timeout=timeout)\n r.raise_for_status()\n except requests.exceptions.Timeout:\n # If there's a timeout\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n message='%s timed out after %s seconds.' 
% (url, timeout),\n tags = [\"url:{}\".format(url)])\n raise Exception(\"Timeout when hitting %s\" % url)\n\n except requests.exceptions.HTTPError:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n message='%s returned a status of %s' % (url, r.status_code),\n tags = [\"url:{}\".format(url)])\n raise Exception(\"Got %s when hitting %s\" % (r.status_code, url))\n\n else:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,\n tags = [\"url:{}\".format(url)]\n )\n\n return r.json()\n", "path": "checks.d/marathon.py"}], "after_files": [{"content": "# stdlib\nfrom urlparse import urljoin\n\n# project\nfrom checks import AgentCheck\n\n# 3rd party\nimport requests\n\nclass Marathon(AgentCheck):\n\n DEFAULT_TIMEOUT = 5\n SERVICE_CHECK_NAME = 'marathon.can_connect'\n\n APP_METRICS = [\n 'backoffFactor',\n 'backoffSeconds',\n 'cpus',\n 'dist',\n 'instances',\n 'mem',\n 'taskRateLimit',\n 'tasksRunning',\n 'tasksStaged'\n ]\n\n def check(self, instance):\n if 'url' not in instance:\n raise Exception('Marathon instance missing \"url\" value.')\n\n # Load values from the instance config\n url = instance['url']\n instance_tags = instance.get('tags', [])\n default_timeout = self.init_config.get('default_timeout', self.DEFAULT_TIMEOUT)\n timeout = float(instance.get('timeout', default_timeout))\n\n response = self.get_json(urljoin(url, \"/v2/apps\"), timeout)\n if response is not None:\n self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)\n for app in response['apps']:\n tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags\n for attr in self.APP_METRICS:\n if attr in app:\n self.gauge('marathon.' + attr, app[attr], tags=tags)\n\n query_url = urljoin(url, \"/v2/apps/{0}/versions\".format(app['id']))\n versions_reply = self.get_json(query_url, timeout)\n\n if versions_reply is not None:\n self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n\n def get_json(self, url, timeout):\n try:\n r = requests.get(url, timeout=timeout)\n r.raise_for_status()\n except requests.exceptions.Timeout:\n # If there's a timeout\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n message='%s timed out after %s seconds.' % (url, timeout),\n tags = [\"url:{}\".format(url)])\n raise Exception(\"Timeout when hitting %s\" % url)\n\n except requests.exceptions.HTTPError:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n message='%s returned a status of %s' % (url, r.status_code),\n tags = [\"url:{}\".format(url)])\n raise Exception(\"Got %s when hitting %s\" % (r.status_code, url))\n\n else:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,\n tags = [\"url:{}\".format(url)]\n )\n\n return r.json()\n", "path": "checks.d/marathon.py"}]} | 1,083 | 319 |
gh_patches_debug_14725 | rasdani/github-patches | git_diff | wagtail__wagtail-1218 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
wagtailfrontendcache throws an error when a root page is created without a site
On a new install of wagtail w/ wagtailfrontendcache enabled, I go through the following steps:
1. Go to the admin
2. Delete the default "welcome" page from the database
3. Create a new root page
After I create the new root page, I get the following error:
```
[17/Apr/2015 20:02:28] ERROR [django.request:231] Internal Server Error: /admin/pages/new/pages/genericpage/1/
Traceback (most recent call last):
File "/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/wagtailadmin/views/pages.py", line 211, in create
revision.publish()
File "/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/wagtailcore/models.py", line 1141, in publish
page_published.send(sender=page.specific_class, instance=page.specific)
File "/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 198, in send
response = receiver(signal=self, sender=sender, **named)
File "/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/contrib/wagtailfrontendcache/signal_handlers.py", line 9, in page_published_signal_handler
purge_page_from_cache(instance)
File "/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/contrib/wagtailfrontendcache/utils.py", line 100, in purge_page_from_cache
logger.info("[%s] Purging URL: %s", backend_name, page.full_url + path[1:])
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
Digging into the code, this error is the result of the following line:
```
logger.info("[%s] Purging URL: %s", backend_name, page.full_url + path[1:])
```
This line fails because `page.full_url` is set to `None` when it executes, which results in the line throwing an exception. The new page model is still saved, so I just need to refresh the admin in my browser to get back into a good working state.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/contrib/wagtailfrontendcache/utils.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.utils.module_loading import import_string
5 from django.core.exceptions import ImproperlyConfigured
6
7
8 logger = logging.getLogger('wagtail.frontendcache')
9
10
11 class InvalidFrontendCacheBackendError(ImproperlyConfigured):
12 pass
13
14
15 def get_backends(backend_settings=None, backends=None):
16 # Get backend settings from WAGTAILFRONTENDCACHE setting
17 if backend_settings is None:
18 backend_settings = getattr(settings, 'WAGTAILFRONTENDCACHE', None)
19
20 # Fallback to using WAGTAILFRONTENDCACHE_LOCATION setting (backwards compatibility)
21 if backend_settings is None:
22 cache_location = getattr(settings, 'WAGTAILFRONTENDCACHE_LOCATION', None)
23
24 if cache_location is not None:
25 backend_settings = {
26 'default': {
27 'BACKEND': 'wagtail.contrib.wagtailfrontendcache.backends.HTTPBackend',
28 'LOCATION': cache_location,
29 },
30 }
31
32 # No settings found, return empty list
33 if backend_settings is None:
34 return {}
35
36 backend_objects = {}
37
38 for backend_name, _backend_config in backend_settings.items():
39 if backends is not None and backend_name not in backends:
40 continue
41
42 backend_config = _backend_config.copy()
43 backend = backend_config.pop('BACKEND')
44
45 # Try to import the backend
46 try:
47 backend_cls = import_string(backend)
48 except ImportError as e:
49 raise InvalidFrontendCacheBackendError("Could not find backend '%s': %s" % (
50 backend, e))
51
52 backend_objects[backend_name] = backend_cls(backend_config)
53
54 return backend_objects
55
56
57 def purge_url_from_cache(url, backend_settings=None, backends=None):
58 for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():
59 logger.info("[%s] Purging URL: %s", backend_name, url)
60 backend.purge(url)
61
62
63 def purge_page_from_cache(page, backend_settings=None, backends=None):
64 for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():
65 # Purge cached paths from cache
66 for path in page.specific.get_cached_paths():
67 logger.info("[%s] Purging URL: %s", backend_name, page.full_url + path[1:])
68 backend.purge(page.full_url + path[1:])
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/contrib/wagtailfrontendcache/utils.py b/wagtail/contrib/wagtailfrontendcache/utils.py
--- a/wagtail/contrib/wagtailfrontendcache/utils.py
+++ b/wagtail/contrib/wagtailfrontendcache/utils.py
@@ -61,8 +61,12 @@
def purge_page_from_cache(page, backend_settings=None, backends=None):
+ page_url = page.full_url
+ if page_url is None: # nothing to be done if the page has no routable URL
+ return
+
for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():
# Purge cached paths from cache
for path in page.specific.get_cached_paths():
- logger.info("[%s] Purging URL: %s", backend_name, page.full_url + path[1:])
- backend.purge(page.full_url + path[1:])
+ logger.info("[%s] Purging URL: %s", backend_name, page_url + path[1:])
+ backend.purge(page_url + path[1:])
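Editorial note (not part of the original record): the traceback in the issue and the effect of the early return added above can be reproduced in isolation. `Page.full_url` evaluates to `None` when the page has no routable URL (for example, when no Site points at it yet), so the old concatenation raised `TypeError`; the patched function now simply skips the purge in that case. A minimal standalone sketch:

```python
page_full_url = None   # what Page.full_url returns for a page with no routable URL
path = "/"             # a typical entry from get_cached_paths()

# old behaviour: the concatenation blows up exactly as in the reported traceback
try:
    page_full_url + path[1:]
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for +: 'NoneType' and 'str'

# patched behaviour: bail out early, there is nothing to purge
if page_full_url is None:
    print("page has no routable URL; skipping cache purge")
```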
| {"golden_diff": "diff --git a/wagtail/contrib/wagtailfrontendcache/utils.py b/wagtail/contrib/wagtailfrontendcache/utils.py\n--- a/wagtail/contrib/wagtailfrontendcache/utils.py\n+++ b/wagtail/contrib/wagtailfrontendcache/utils.py\n@@ -61,8 +61,12 @@\n \n \n def purge_page_from_cache(page, backend_settings=None, backends=None):\n+ page_url = page.full_url\n+ if page_url is None: # nothing to be done if the page has no routable URL\n+ return\n+\n for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():\n # Purge cached paths from cache\n for path in page.specific.get_cached_paths():\n- logger.info(\"[%s] Purging URL: %s\", backend_name, page.full_url + path[1:])\n- backend.purge(page.full_url + path[1:])\n+ logger.info(\"[%s] Purging URL: %s\", backend_name, page_url + path[1:])\n+ backend.purge(page_url + path[1:])\n", "issue": "wagtailfrontendcache throws an error when a root page is created without a site\nOn a new install of wagtail w/ wagtailfrontendcache enabled, I go through the following steps:\n1. Go to the admin\n2. Delete the default \"welcome\" page from the database\n3. Create a new root page\n\nAfter I create the new root page, I get the following error:\n\n```\n[17/Apr/2015 20:02:28] ERROR [django.request:231] Internal Server Error: /admin/pages/new/pages/genericpage/1/\nTraceback (most recent call last):\n File \"/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/django/core/handlers/base.py\", line 111, in get_response\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\n File \"/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/django/contrib/auth/decorators.py\", line 21, in _wrapped_view\n return view_func(request, *args, **kwargs)\n File \"/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/wagtailadmin/views/pages.py\", line 211, in create\n revision.publish()\n File \"/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/wagtailcore/models.py\", line 1141, in publish\n page_published.send(sender=page.specific_class, instance=page.specific)\n File \"/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/django/dispatch/dispatcher.py\", line 198, in send\n response = receiver(signal=self, sender=sender, **named)\n File \"/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/contrib/wagtailfrontendcache/signal_handlers.py\", line 9, in page_published_signal_handler\n purge_page_from_cache(instance)\n File \"/Users/jryding/.virtualenvs/cms/lib/python2.7/site-packages/wagtail/contrib/wagtailfrontendcache/utils.py\", line 100, in purge_page_from_cache\n logger.info(\"[%s] Purging URL: %s\", backend_name, page.full_url + path[1:])\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'str'\n```\n\nDigging into the code, this error is the request of the following line:\n\n```\nlogger.info(\"[%s] Purging URL: %s\", backend_name, page.full_url + path[1:])\n```\n\nThis line failes because `page.full_url` is set to `None` when it executes, which results in the line throwing an exception. 
The new page model is still saved, so I just need to refresh the admin in my browser to get back into a good working state.\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.utils.module_loading import import_string\nfrom django.core.exceptions import ImproperlyConfigured\n\n\nlogger = logging.getLogger('wagtail.frontendcache')\n\n\nclass InvalidFrontendCacheBackendError(ImproperlyConfigured):\n pass\n\n\ndef get_backends(backend_settings=None, backends=None):\n # Get backend settings from WAGTAILFRONTENDCACHE setting\n if backend_settings is None:\n backend_settings = getattr(settings, 'WAGTAILFRONTENDCACHE', None)\n\n # Fallback to using WAGTAILFRONTENDCACHE_LOCATION setting (backwards compatibility)\n if backend_settings is None:\n cache_location = getattr(settings, 'WAGTAILFRONTENDCACHE_LOCATION', None)\n\n if cache_location is not None:\n backend_settings = {\n 'default': {\n 'BACKEND': 'wagtail.contrib.wagtailfrontendcache.backends.HTTPBackend',\n 'LOCATION': cache_location,\n },\n }\n\n # No settings found, return empty list\n if backend_settings is None:\n return {}\n\n backend_objects = {}\n\n for backend_name, _backend_config in backend_settings.items():\n if backends is not None and backend_name not in backends:\n continue\n\n backend_config = _backend_config.copy()\n backend = backend_config.pop('BACKEND')\n\n # Try to import the backend\n try:\n backend_cls = import_string(backend)\n except ImportError as e:\n raise InvalidFrontendCacheBackendError(\"Could not find backend '%s': %s\" % (\n backend, e))\n\n backend_objects[backend_name] = backend_cls(backend_config)\n\n return backend_objects\n\n\ndef purge_url_from_cache(url, backend_settings=None, backends=None):\n for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():\n logger.info(\"[%s] Purging URL: %s\", backend_name, url)\n backend.purge(url)\n\n\ndef purge_page_from_cache(page, backend_settings=None, backends=None):\n for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():\n # Purge cached paths from cache\n for path in page.specific.get_cached_paths():\n logger.info(\"[%s] Purging URL: %s\", backend_name, page.full_url + path[1:])\n backend.purge(page.full_url + path[1:])\n", "path": "wagtail/contrib/wagtailfrontendcache/utils.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.utils.module_loading import import_string\nfrom django.core.exceptions import ImproperlyConfigured\n\n\nlogger = logging.getLogger('wagtail.frontendcache')\n\n\nclass InvalidFrontendCacheBackendError(ImproperlyConfigured):\n pass\n\n\ndef get_backends(backend_settings=None, backends=None):\n # Get backend settings from WAGTAILFRONTENDCACHE setting\n if backend_settings is None:\n backend_settings = getattr(settings, 'WAGTAILFRONTENDCACHE', None)\n\n # Fallback to using WAGTAILFRONTENDCACHE_LOCATION setting (backwards compatibility)\n if backend_settings is None:\n cache_location = getattr(settings, 'WAGTAILFRONTENDCACHE_LOCATION', None)\n\n if cache_location is not None:\n backend_settings = {\n 'default': {\n 'BACKEND': 'wagtail.contrib.wagtailfrontendcache.backends.HTTPBackend',\n 'LOCATION': cache_location,\n },\n }\n\n # No settings found, return empty list\n if backend_settings is None:\n return {}\n\n backend_objects = {}\n\n for backend_name, _backend_config in backend_settings.items():\n if backends is not None and backend_name not in 
backends:\n continue\n\n backend_config = _backend_config.copy()\n backend = backend_config.pop('BACKEND')\n\n # Try to import the backend\n try:\n backend_cls = import_string(backend)\n except ImportError as e:\n raise InvalidFrontendCacheBackendError(\"Could not find backend '%s': %s\" % (\n backend, e))\n\n backend_objects[backend_name] = backend_cls(backend_config)\n\n return backend_objects\n\n\ndef purge_url_from_cache(url, backend_settings=None, backends=None):\n for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():\n logger.info(\"[%s] Purging URL: %s\", backend_name, url)\n backend.purge(url)\n\n\ndef purge_page_from_cache(page, backend_settings=None, backends=None):\n page_url = page.full_url\n if page_url is None: # nothing to be done if the page has no routable URL\n return\n\n for backend_name, backend in get_backends(backend_settings=backend_settings, backends=backends).items():\n # Purge cached paths from cache\n for path in page.specific.get_cached_paths():\n logger.info(\"[%s] Purging URL: %s\", backend_name, page_url + path[1:])\n backend.purge(page_url + path[1:])\n", "path": "wagtail/contrib/wagtailfrontendcache/utils.py"}]} | 1,553 | 244 |
gh_patches_debug_61672 | rasdani/github-patches | git_diff | joke2k__faker-640 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MacOS 10.13 OSError: [Errno 24] Too many open files
```shell
return [i for i in os.listdir(path) if os.path.isdir(os.path.join(path, i)) and not i.startswith('_')]
OSError: [Errno 24] Too many open files: '/Users/abcdefg/.pyenv/versions/3.6.3/envs/weixin3/lib/python3.6/site-packages/faker/providers/address'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/utils/loading.py`
Content:
```
1 import os
2 from importlib import import_module
3 import pkgutil
4
5
6 def list_module(module):
7 path = os.path.dirname(module.__file__)
8 modules = [name for finder, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
9 if len(modules) > 0:
10 return modules
11 return [i for i in os.listdir(path) if os.path.isdir(os.path.join(path, i)) and not i.startswith('_')]
12
13
14 def find_available_locales(providers):
15 available_locales = set()
16
17 for provider_path in providers:
18
19 provider_module = import_module(provider_path)
20 if getattr(provider_module, 'localized', False):
21 langs = list_module(provider_module)
22 available_locales.update(langs)
23 return available_locales
24
25
26 def find_available_providers(modules):
27 available_providers = set()
28 for providers_mod in modules:
29 providers = ['.'.join([providers_mod.__package__, mod]) for mod in list_module(providers_mod)]
30 available_providers.update(providers)
31 return sorted(available_providers)
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/utils/loading.py b/faker/utils/loading.py
--- a/faker/utils/loading.py
+++ b/faker/utils/loading.py
@@ -6,9 +6,7 @@
def list_module(module):
path = os.path.dirname(module.__file__)
modules = [name for finder, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
- if len(modules) > 0:
- return modules
- return [i for i in os.listdir(path) if os.path.isdir(os.path.join(path, i)) and not i.startswith('_')]
+ return modules
def find_available_locales(providers):
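Editorial note (not part of the original record): the accepted patch drops the `os.listdir` fallback (the call shown raising `OSError: [Errno 24]` in the issue) and returns whatever `pkgutil.iter_modules` reports. A small sketch of the retained code path, assuming Faker is installed; `faker.providers.address` is only used here as an example of a localized provider package:

```python
import os
import pkgutil

from faker.providers import address  # example localized provider package

path = os.path.dirname(address.__file__)
# iter_modules yields (finder, name, is_pkg) for each entry in the package directory
locales = [name for _finder, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
print(sorted(locales)[:5])  # a few locale sub-packages, e.g. ['cs_CZ', 'de_DE', ...]
```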
| {"golden_diff": "diff --git a/faker/utils/loading.py b/faker/utils/loading.py\n--- a/faker/utils/loading.py\n+++ b/faker/utils/loading.py\n@@ -6,9 +6,7 @@\n def list_module(module):\n path = os.path.dirname(module.__file__)\n modules = [name for finder, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n- if len(modules) > 0:\n- return modules\n- return [i for i in os.listdir(path) if os.path.isdir(os.path.join(path, i)) and not i.startswith('_')]\n+ return modules\n \n \n def find_available_locales(providers):\n", "issue": "MacOS 10.13 OSError: [Errno 24] Too many open files\n```shell\r\n return [i for i in os.listdir(path) if os.path.isdir(os.path.join(path, i)) and not i.startswith('_')]\r\nOSError: [Errno 24] Too many open files: '/Users/abcdefg/.pyenv/versions/3.6.3/envs/weixin3/lib/python3.6/site-packages/faker/providers/address'\r\n```\r\n\r\n\n", "before_files": [{"content": "import os\nfrom importlib import import_module\nimport pkgutil\n\n\ndef list_module(module):\n path = os.path.dirname(module.__file__)\n modules = [name for finder, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n if len(modules) > 0:\n return modules\n return [i for i in os.listdir(path) if os.path.isdir(os.path.join(path, i)) and not i.startswith('_')]\n\n\ndef find_available_locales(providers):\n available_locales = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n return available_locales\n\n\ndef find_available_providers(modules):\n available_providers = set()\n for providers_mod in modules:\n providers = ['.'.join([providers_mod.__package__, mod]) for mod in list_module(providers_mod)]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}], "after_files": [{"content": "import os\nfrom importlib import import_module\nimport pkgutil\n\n\ndef list_module(module):\n path = os.path.dirname(module.__file__)\n modules = [name for finder, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n return modules\n\n\ndef find_available_locales(providers):\n available_locales = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n return available_locales\n\n\ndef find_available_providers(modules):\n available_providers = set()\n for providers_mod in modules:\n providers = ['.'.join([providers_mod.__package__, mod]) for mod in list_module(providers_mod)]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}]} | 647 | 140 |
gh_patches_debug_7500 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-138 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve lint messages on whitespace
The message about the spacing for selectors doesn't make it clear what is wrong with the number of spaces:
`Selectors are suggested to take a " # []" form.`
It would be clearer to have a message like: `<two spaces> # <one space> [selector]`
--- END ISSUE ---
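Editorial note (not part of the original prompt): the spacing rule the lint message is trying to describe is encoded by `good_selectors_pat` in the file below, i.e. at least two spaces before the `#` and a single space after it. A quick sketch of what passes and what gets flagged:

```python
import re

# the same pattern used by lintify() in conda_smithy/lint_recipe.py
good_selectors_pat = re.compile(r'(.+?)\s{2,}#\s\[(.+)\](?(2).*)$')

print(bool(good_selectors_pat.match("    - python  # [py3k]")))  # True: two spaces before '#'
print(bool(good_selectors_pat.match("    - python # [py3k]")))   # False: one space, so it is linted
```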
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_smithy/lint_recipe.py`
Content:
```
1 import os
2 import re
3
4 import jinja2
5 import ruamel.yaml
6
7
8 EXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',
9 'test', 'app', 'about', 'extra']
10
11 REQUIREMENTS_ORDER = ['build', 'run']
12
13
14 class NullUndefined(jinja2.Undefined):
15 def __unicode__(self):
16 return unicode(self._undefined_name)
17
18
19 def get_section(parent, name, lints):
20 section = parent.get(name, {})
21 if not isinstance(section, dict):
22 lints.append('The "{}" section was expected to be a dictionary, but '
23 'got a {}.'.format(name, type(section).__name__))
24 section = {}
25 return section
26
27
28 def lintify(meta, recipe_dir=None):
29 lints = []
30 major_sections = list(meta.keys())
31
32 # If the recipe_dir exists (no guarantee within this function) , we can
33 # find the meta.yaml within it.
34 meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')
35
36 source_section = get_section(meta, 'source', lints)
37 build_section = get_section(meta, 'build', lints)
38 requirements_section = get_section(meta, 'requirements', lints)
39 about_section = get_section(meta, 'about', lints)
40 extra_section = get_section(meta, 'extra', lints)
41
42 # 1: Top level meta.yaml keys should have a specific order.
43 section_order_sorted = sorted(major_sections,
44 key=EXPECTED_SECTION_ORDER.index)
45 if major_sections != section_order_sorted:
46 lints.append('The top level meta keys are in an unexpected order. '
47 'Expecting {}.'.format(section_order_sorted))
48
49 # 2: The about section should have a home, license and summary.
50 for about_item in ['home', 'license', 'summary']:
51 # if the section doesn't exist, or is just empty, lint it.
52 if not about_section.get(about_item, ''):
53 lints.append('The {} item is expected in the about section.'
54 ''.format(about_item))
55
56 # 3: The recipe should have some maintainers.
57 if not extra_section.get('recipe-maintainers', []):
58 lints.append('The recipe could do with some maintainers listed in '
59 'the "extra/recipe-maintainers" section.')
60
61 # 4: The recipe should have some tests.
62 if 'test' not in major_sections:
63 test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',
64 'run_test.pl']
65 a_test_file_exists = (recipe_dir is not None and
66 any(os.path.exists(os.path.join(recipe_dir,
67 test_file))
68 for test_file in test_files))
69 if not a_test_file_exists:
70 lints.append('The recipe must have some tests.')
71
72 # 5: License cannot be 'unknown.'
73 license = about_section.get('license', '').lower()
74 if 'unknown' == license.strip():
75 lints.append('The recipe license cannot be unknown.')
76
77 # 6: Selectors should be in a tidy form.
78 if recipe_dir is not None and os.path.exists(meta_fname):
79 bad_selectors = []
80 # Good selectors look like ".*\s\s#\s[...]"
81 good_selectors_pat = re.compile(r'(.+?)\s{2,}#\s\[(.+)\](?(2).*)$')
82 with open(meta_fname, 'r') as fh:
83 for selector_line in selector_lines(fh):
84 if not good_selectors_pat.match(selector_line):
85 bad_selectors.append(selector_line)
86 if bad_selectors:
87 lints.append('Selectors are suggested to take a '
88 '" # [<selector>]" form.')
89
90 # 7: The build section should have a build number.
91 if build_section.get('number', None) is None:
92 lints.append('The recipe must have a `build/number` section.')
93
94 # 8: The build section should be before the run section in requirements.
95 requirements_order_sorted = sorted(requirements_section,
96 key=REQUIREMENTS_ORDER.index)
97 if requirements_section.keys() != requirements_order_sorted:
98 lints.append('The `requirements/build` section should be defined '
99 'before the `requirements/run` section.')
100
101 # 9: Files downloaded should have a hash.
102 if ('url' in source_section and
103 not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):
104 lints.append('When defining a source/url please add a sha256, sha1 '
105 'or md5 checksum (sha256 preferably).')
106
107 return lints
108
109
110 def selector_lines(lines):
111 # Using the same pattern defined in conda-build (metadata.py),
112 # we identify selectors.
113 sel_pat = re.compile(r'(.+?)\s*(#.*)?\[(.+)\](?(2).*)$')
114
115 for line in lines:
116 line = line.rstrip()
117 if line.lstrip().startswith('#'):
118 # Don't bother with comment only lines
119 continue
120 m = sel_pat.match(line)
121 if m:
122 m.group(3)
123 yield line
124
125
126 def main(recipe_dir):
127 recipe_dir = os.path.abspath(recipe_dir)
128 recipe_meta = os.path.join(recipe_dir, 'meta.yaml')
129 if not os.path.exists(recipe_dir):
130 raise IOError('Feedstock has no recipe/meta.yaml.')
131
132 env = jinja2.Environment(undefined=NullUndefined)
133
134 with open(recipe_meta, 'r') as fh:
135 content = env.from_string(''.join(fh)).render()
136 meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)
137 results = lintify(meta, recipe_dir)
138 return results
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py
--- a/conda_smithy/lint_recipe.py
+++ b/conda_smithy/lint_recipe.py
@@ -85,7 +85,7 @@
bad_selectors.append(selector_line)
if bad_selectors:
lints.append('Selectors are suggested to take a '
- '" # [<selector>]" form.')
+ '"<two spaces>#<one space>[<selector>]" form.')
# 7: The build section should have a build number.
if build_section.get('number', None) is None:
| {"golden_diff": "diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py\n--- a/conda_smithy/lint_recipe.py\n+++ b/conda_smithy/lint_recipe.py\n@@ -85,7 +85,7 @@\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n- '\" # [<selector>]\" form.')\n+ '\"<two spaces>#<one space>[<selector>]\" form.')\n \n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n", "issue": "Improve lint messages on whitespace\nThe message about the spacing for selectors isn't clear to see the problem in the number of spaces:\n\n`Selectors are suggested to take a \" # []\" form.`\n\nIt would be clearer to have a message like: `<two spaces> # <one space> [selector]`\n\n", "before_files": [{"content": "import os\nimport re\n\nimport jinja2\nimport ruamel.yaml\n\n\nEXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',\n 'test', 'app', 'about', 'extra']\n\nREQUIREMENTS_ORDER = ['build', 'run']\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return unicode(self._undefined_name)\n\n\ndef get_section(parent, name, lints):\n section = parent.get(name, {})\n if not isinstance(section, dict):\n lints.append('The \"{}\" section was expected to be a dictionary, but '\n 'got a {}.'.format(name, type(section).__name__))\n section = {}\n return section\n\n\ndef lintify(meta, recipe_dir=None):\n lints = []\n major_sections = list(meta.keys())\n\n # If the recipe_dir exists (no guarantee within this function) , we can\n # find the meta.yaml within it.\n meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')\n\n source_section = get_section(meta, 'source', lints)\n build_section = get_section(meta, 'build', lints)\n requirements_section = get_section(meta, 'requirements', lints)\n about_section = get_section(meta, 'about', lints)\n extra_section = get_section(meta, 'extra', lints)\n\n # 1: Top level meta.yaml keys should have a specific order.\n section_order_sorted = sorted(major_sections,\n key=EXPECTED_SECTION_ORDER.index)\n if major_sections != section_order_sorted:\n lints.append('The top level meta keys are in an unexpected order. 
'\n 'Expecting {}.'.format(section_order_sorted))\n\n # 2: The about section should have a home, license and summary.\n for about_item in ['home', 'license', 'summary']:\n # if the section doesn't exist, or is just empty, lint it.\n if not about_section.get(about_item, ''):\n lints.append('The {} item is expected in the about section.'\n ''.format(about_item))\n\n # 3: The recipe should have some maintainers.\n if not extra_section.get('recipe-maintainers', []):\n lints.append('The recipe could do with some maintainers listed in '\n 'the \"extra/recipe-maintainers\" section.')\n\n # 4: The recipe should have some tests.\n if 'test' not in major_sections:\n test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',\n 'run_test.pl']\n a_test_file_exists = (recipe_dir is not None and\n any(os.path.exists(os.path.join(recipe_dir,\n test_file))\n for test_file in test_files))\n if not a_test_file_exists:\n lints.append('The recipe must have some tests.')\n\n # 5: License cannot be 'unknown.'\n license = about_section.get('license', '').lower()\n if 'unknown' == license.strip():\n lints.append('The recipe license cannot be unknown.')\n\n # 6: Selectors should be in a tidy form.\n if recipe_dir is not None and os.path.exists(meta_fname):\n bad_selectors = []\n # Good selectors look like \".*\\s\\s#\\s[...]\"\n good_selectors_pat = re.compile(r'(.+?)\\s{2,}#\\s\\[(.+)\\](?(2).*)$')\n with open(meta_fname, 'r') as fh:\n for selector_line in selector_lines(fh):\n if not good_selectors_pat.match(selector_line):\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n '\" # [<selector>]\" form.')\n\n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n lints.append('The recipe must have a `build/number` section.')\n\n # 8: The build section should be before the run section in requirements.\n requirements_order_sorted = sorted(requirements_section,\n key=REQUIREMENTS_ORDER.index)\n if requirements_section.keys() != requirements_order_sorted:\n lints.append('The `requirements/build` section should be defined '\n 'before the `requirements/run` section.')\n\n # 9: Files downloaded should have a hash.\n if ('url' in source_section and\n not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):\n lints.append('When defining a source/url please add a sha256, sha1 '\n 'or md5 checksum (sha256 preferably).')\n\n return lints\n\n\ndef selector_lines(lines):\n # Using the same pattern defined in conda-build (metadata.py),\n # we identify selectors.\n sel_pat = re.compile(r'(.+?)\\s*(#.*)?\\[(.+)\\](?(2).*)$')\n\n for line in lines:\n line = line.rstrip()\n if line.lstrip().startswith('#'):\n # Don't bother with comment only lines\n continue\n m = sel_pat.match(line)\n if m:\n m.group(3)\n yield line\n\n\ndef main(recipe_dir):\n recipe_dir = os.path.abspath(recipe_dir)\n recipe_meta = os.path.join(recipe_dir, 'meta.yaml')\n if not os.path.exists(recipe_dir):\n raise IOError('Feedstock has no recipe/meta.yaml.')\n\n env = jinja2.Environment(undefined=NullUndefined)\n\n with open(recipe_meta, 'r') as fh:\n content = env.from_string(''.join(fh)).render()\n meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)\n results = lintify(meta, recipe_dir)\n return results\n", "path": "conda_smithy/lint_recipe.py"}], "after_files": [{"content": "import os\nimport re\n\nimport jinja2\nimport ruamel.yaml\n\n\nEXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',\n 'test', 'app', 'about', 
'extra']\n\nREQUIREMENTS_ORDER = ['build', 'run']\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return unicode(self._undefined_name)\n\n\ndef get_section(parent, name, lints):\n section = parent.get(name, {})\n if not isinstance(section, dict):\n lints.append('The \"{}\" section was expected to be a dictionary, but '\n 'got a {}.'.format(name, type(section).__name__))\n section = {}\n return section\n\n\ndef lintify(meta, recipe_dir=None):\n lints = []\n major_sections = list(meta.keys())\n\n # If the recipe_dir exists (no guarantee within this function) , we can\n # find the meta.yaml within it.\n meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')\n\n source_section = get_section(meta, 'source', lints)\n build_section = get_section(meta, 'build', lints)\n requirements_section = get_section(meta, 'requirements', lints)\n about_section = get_section(meta, 'about', lints)\n extra_section = get_section(meta, 'extra', lints)\n\n # 1: Top level meta.yaml keys should have a specific order.\n section_order_sorted = sorted(major_sections,\n key=EXPECTED_SECTION_ORDER.index)\n if major_sections != section_order_sorted:\n lints.append('The top level meta keys are in an unexpected order. '\n 'Expecting {}.'.format(section_order_sorted))\n\n # 2: The about section should have a home, license and summary.\n for about_item in ['home', 'license', 'summary']:\n # if the section doesn't exist, or is just empty, lint it.\n if not about_section.get(about_item, ''):\n lints.append('The {} item is expected in the about section.'\n ''.format(about_item))\n\n # 3: The recipe should have some maintainers.\n if not extra_section.get('recipe-maintainers', []):\n lints.append('The recipe could do with some maintainers listed in '\n 'the \"extra/recipe-maintainers\" section.')\n\n # 4: The recipe should have some tests.\n if 'test' not in major_sections:\n test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',\n 'run_test.pl']\n a_test_file_exists = (recipe_dir is not None and\n any(os.path.exists(os.path.join(recipe_dir,\n test_file))\n for test_file in test_files))\n if not a_test_file_exists:\n lints.append('The recipe must have some tests.')\n\n # 5: License cannot be 'unknown.'\n license = about_section.get('license', '').lower()\n if 'unknown' == license.strip():\n lints.append('The recipe license cannot be unknown.')\n\n # 6: Selectors should be in a tidy form.\n if recipe_dir is not None and os.path.exists(meta_fname):\n bad_selectors = []\n # Good selectors look like \".*\\s\\s#\\s[...]\"\n good_selectors_pat = re.compile(r'(.+?)\\s{2,}#\\s\\[(.+)\\](?(2).*)$')\n with open(meta_fname, 'r') as fh:\n for selector_line in selector_lines(fh):\n if not good_selectors_pat.match(selector_line):\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n '\"<two spaces>#<one space>[<selector>]\" form.')\n\n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n lints.append('The recipe must have a `build/number` section.')\n\n # 8: The build section should be before the run section in requirements.\n requirements_order_sorted = sorted(requirements_section,\n key=REQUIREMENTS_ORDER.index)\n if requirements_section.keys() != requirements_order_sorted:\n lints.append('The `requirements/build` section should be defined '\n 'before the `requirements/run` section.')\n\n # 9: Files downloaded should have a hash.\n if ('url' in source_section and\n not ({'sha1', 'sha256', 'md5'} & 
set(source_section.keys()))):\n lints.append('When defining a source/url please add a sha256, sha1 '\n 'or md5 checksum (sha256 preferably).')\n\n return lints\n\n\ndef selector_lines(lines):\n # Using the same pattern defined in conda-build (metadata.py),\n # we identify selectors.\n sel_pat = re.compile(r'(.+?)\\s*(#.*)?\\[(.+)\\](?(2).*)$')\n\n for line in lines:\n line = line.rstrip()\n if line.lstrip().startswith('#'):\n # Don't bother with comment only lines\n continue\n m = sel_pat.match(line)\n if m:\n m.group(3)\n yield line\n\n\ndef main(recipe_dir):\n recipe_dir = os.path.abspath(recipe_dir)\n recipe_meta = os.path.join(recipe_dir, 'meta.yaml')\n if not os.path.exists(recipe_dir):\n raise IOError('Feedstock has no recipe/meta.yaml.')\n\n env = jinja2.Environment(undefined=NullUndefined)\n\n with open(recipe_meta, 'r') as fh:\n content = env.from_string(''.join(fh)).render()\n meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)\n results = lintify(meta, recipe_dir)\n return results\n", "path": "conda_smithy/lint_recipe.py"}]} | 1,881 | 142 |
gh_patches_debug_50581 | rasdani/github-patches | git_diff | streamlit__streamlit-6828 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Right-side label of `st.slider` and `st.select_slider` overflows when inside `st.expander`
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Right-side label of `st.slider` and `st.select_slider` overflows when inside `st.expander`.
In the past I submitted a similar issue for the left-side label (see https://github.com/streamlit/streamlit/issues/5898); now it is the right-side label that is misbehaving.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-6297)
```Python
import streamlit as st
st.title("Right-side label of slider and select_slider overflows when inside expander")
with st.expander('Example st.expander'):
single_value = st.slider(
label='Example st.slider',
min_value=9_500_000,
max_value=10_000_000,
value=10_000_000
)
first_value,last_value = st.slider(
label='Example st.slider (range mode)',
min_value=9_500_000,
max_value=10_000_000,
value=(9_500_000,10_000_000)
)
single_value = st.select_slider(
label='Example st.select_slider',
options=['Maradona','Ronaldo','Pele','This is a very, very long label'],
value='This is a very, very long label'
)
first_value,last_value = st.select_slider(
label='Example st.select_slider (range mode)',
options=['Maradona','Ronaldo','Pele','This is a very, very long label'],
value=['Maradona','This is a very, very long label']
)
```
### Steps To Reproduce
1. Run the reproducible code example
2. Open the expander and see that all the right side labels are overflowing
### Expected Behavior
Labels should not overflow beyond the widgets width.
### Current Behavior
_No response_
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.20.0
- Python version: 3.11
- Operating System: macOS
- Browser: Brave
- Virtual environment: conda
### Additional Information
_No response_
### Are you willing to submit a PR?
- [ ] Yes, I am willing to submit a PR!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_select_slider.py`
Content:
```
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numpy as np
16 import pandas as pd
17
18 import streamlit as st
19 from streamlit import runtime
20
21 w1 = st.select_slider(
22 "Label 1",
23 value=("orange", "blue"),
24 options=["red", "orange", "yellow", "green", "blue", "indigo", "violet"],
25 )
26 st.write("Value 1:", w1)
27
28 w2 = st.select_slider(
29 "Label 2",
30 options=np.array([1, 2, 3, 4, 5]),
31 )
32 st.write("Value 2:", w2)
33
34 w3 = st.select_slider(
35 "Label 3",
36 value=[2, 5],
37 options=pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9]),
38 )
39 st.write("Value 3:", w3)
40
41 w4 = st.select_slider(
42 "Label 4",
43 value=5,
44 options=pd.DataFrame(
45 {
46 "first column": [1, 2, 3, 4, 5],
47 "second column": [10, 20, 30, 40, 50],
48 }
49 ),
50 )
51 st.write("Value 4:", w4)
52
53 w5 = st.select_slider(
54 "Label 5",
55 value=("orange", "blue"),
56 options=["red", "orange", "yellow", "green", "blue", "indigo", "violet"],
57 disabled=True,
58 )
59 st.write("Value 5:", w5)
60
61 w6 = st.select_slider(
62 "Label 6",
63 options=["red", "orange", "yellow", "green", "blue", "indigo", "violet"],
64 label_visibility="hidden",
65 )
66
67 st.write("Value 6:", w6)
68
69
70 w7 = st.select_slider(
71 "Label 7",
72 options=["red", "orange", "yellow", "green", "blue", "indigo", "violet"],
73 label_visibility="collapsed",
74 )
75
76 st.write("Value 7:", w7)
77
78 if runtime.exists():
79
80 def on_change():
81 st.session_state.select_slider_changed = True
82
83 st.select_slider(
84 "Label 8",
85 options=np.array([1, 2, 3, 4, 5]),
86 key="select_slider8",
87 on_change=on_change,
88 )
89 st.write("Value 8:", st.session_state.select_slider8)
90 st.write("Select slider changed:", "select_slider_changed" in st.session_state)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_select_slider.py b/e2e/scripts/st_select_slider.py
--- a/e2e/scripts/st_select_slider.py
+++ b/e2e/scripts/st_select_slider.py
@@ -88,3 +88,12 @@
)
st.write("Value 8:", st.session_state.select_slider8)
st.write("Select slider changed:", "select_slider_changed" in st.session_state)
+
+with st.expander("Expander", expanded=True):
+ w9 = st.select_slider(
+ label="Label 9",
+ options=["foo", "bar", "baz", "This is a very, very long option"],
+ value="This is a very, very long option",
+ )
+
+ st.write("Value 9:", w9)
| {"golden_diff": "diff --git a/e2e/scripts/st_select_slider.py b/e2e/scripts/st_select_slider.py\n--- a/e2e/scripts/st_select_slider.py\n+++ b/e2e/scripts/st_select_slider.py\n@@ -88,3 +88,12 @@\n )\n st.write(\"Value 8:\", st.session_state.select_slider8)\n st.write(\"Select slider changed:\", \"select_slider_changed\" in st.session_state)\n+\n+with st.expander(\"Expander\", expanded=True):\n+ w9 = st.select_slider(\n+ label=\"Label 9\",\n+ options=[\"foo\", \"bar\", \"baz\", \"This is a very, very long option\"],\n+ value=\"This is a very, very long option\",\n+ )\n+\n+ st.write(\"Value 9:\", w9)\n", "issue": "Right-side label of `st.slider` and `st.select_slider` overflows when inside `st.expander`\n### Checklist\r\n\r\n- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I have provided sufficient information below to help reproduce this issue.\r\n\r\n### Summary\r\n\r\nRight-side label of `st.slider` and `st.select_slider` overflows when inside `st.expander`.\r\n\r\nIn the past I submitted a similar issue for the left-side label (see https://github.com/streamlit/streamlit/issues/5898); now it is the right-side label that is misbehaving.\r\n\r\n### Reproducible Code Example\r\n\r\n[](https://issues.streamlitapp.com/?issue=gh-6297)\r\n\r\n```Python\r\nimport streamlit as st\r\n\r\nst.title(\"Right-side label of slider and select_slider overflows when inside expander\")\r\n\r\nwith st.expander('Example st.expander'):\r\n\r\n single_value = st.slider(\r\n label='Example st.slider',\r\n min_value=9_500_000,\r\n max_value=10_000_000,\r\n value=10_000_000\r\n )\r\n\r\n first_value,last_value = st.slider(\r\n label='Example st.slider (range mode)',\r\n min_value=9_500_000,\r\n max_value=10_000_000,\r\n value=(9_500_000,10_000_000)\r\n )\r\n\r\n single_value = st.select_slider(\r\n label='Example st.select_slider',\r\n options=['Maradona','Ronaldo','Pele','This is a very, very long label'],\r\n value='This is a very, very long label'\r\n )\r\n\r\n first_value,last_value = st.select_slider(\r\n label='Example st.select_slider (range mode)',\r\n options=['Maradona','Ronaldo','Pele','This is a very, very long label'],\r\n value=['Maradona','This is a very, very long label']\r\n )\r\n```\r\n\r\n\r\n### Steps To Reproduce\r\n\r\n1. Run the reproducible code example\r\n2. Open the expander and see that all the right side labels are overflowing\r\n\r\n### Expected Behavior\r\n\r\nLabels should not overflow beyond the widgets width.\r\n\r\n### Current Behavior\r\n\r\n_No response_\r\n\r\n### Is this a regression?\r\n\r\n- [X] Yes, this used to work in a previous version.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 1.20.0\r\n- Python version: 3.11\r\n- Operating System: macOS\r\n- Browser: Brave\r\n- Virtual environment: conda\r\n\r\n### Additional Information\r\n\r\n_No response_\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [ ] Yes, I am willing to submit a PR!\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport pandas as pd\n\nimport streamlit as st\nfrom streamlit import runtime\n\nw1 = st.select_slider(\n \"Label 1\",\n value=(\"orange\", \"blue\"),\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n)\nst.write(\"Value 1:\", w1)\n\nw2 = st.select_slider(\n \"Label 2\",\n options=np.array([1, 2, 3, 4, 5]),\n)\nst.write(\"Value 2:\", w2)\n\nw3 = st.select_slider(\n \"Label 3\",\n value=[2, 5],\n options=pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9]),\n)\nst.write(\"Value 3:\", w3)\n\nw4 = st.select_slider(\n \"Label 4\",\n value=5,\n options=pd.DataFrame(\n {\n \"first column\": [1, 2, 3, 4, 5],\n \"second column\": [10, 20, 30, 40, 50],\n }\n ),\n)\nst.write(\"Value 4:\", w4)\n\nw5 = st.select_slider(\n \"Label 5\",\n value=(\"orange\", \"blue\"),\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n disabled=True,\n)\nst.write(\"Value 5:\", w5)\n\nw6 = st.select_slider(\n \"Label 6\",\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n label_visibility=\"hidden\",\n)\n\nst.write(\"Value 6:\", w6)\n\n\nw7 = st.select_slider(\n \"Label 7\",\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n label_visibility=\"collapsed\",\n)\n\nst.write(\"Value 7:\", w7)\n\nif runtime.exists():\n\n def on_change():\n st.session_state.select_slider_changed = True\n\n st.select_slider(\n \"Label 8\",\n options=np.array([1, 2, 3, 4, 5]),\n key=\"select_slider8\",\n on_change=on_change,\n )\n st.write(\"Value 8:\", st.session_state.select_slider8)\n st.write(\"Select slider changed:\", \"select_slider_changed\" in st.session_state)\n", "path": "e2e/scripts/st_select_slider.py"}], "after_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport pandas as pd\n\nimport streamlit as st\nfrom streamlit import runtime\n\nw1 = st.select_slider(\n \"Label 1\",\n value=(\"orange\", \"blue\"),\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n)\nst.write(\"Value 1:\", w1)\n\nw2 = st.select_slider(\n \"Label 2\",\n options=np.array([1, 2, 3, 4, 5]),\n)\nst.write(\"Value 2:\", w2)\n\nw3 = st.select_slider(\n \"Label 3\",\n value=[2, 5],\n options=pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9]),\n)\nst.write(\"Value 3:\", w3)\n\nw4 = st.select_slider(\n \"Label 4\",\n value=5,\n options=pd.DataFrame(\n {\n \"first column\": [1, 2, 3, 4, 5],\n \"second column\": [10, 20, 30, 40, 50],\n }\n ),\n)\nst.write(\"Value 4:\", w4)\n\nw5 = st.select_slider(\n \"Label 5\",\n value=(\"orange\", \"blue\"),\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n disabled=True,\n)\nst.write(\"Value 5:\", w5)\n\nw6 = st.select_slider(\n \"Label 6\",\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n label_visibility=\"hidden\",\n)\n\nst.write(\"Value 6:\", w6)\n\n\nw7 = st.select_slider(\n \"Label 7\",\n options=[\"red\", \"orange\", \"yellow\", \"green\", \"blue\", \"indigo\", \"violet\"],\n label_visibility=\"collapsed\",\n)\n\nst.write(\"Value 7:\", w7)\n\nif runtime.exists():\n\n def on_change():\n st.session_state.select_slider_changed = True\n\n st.select_slider(\n \"Label 8\",\n options=np.array([1, 2, 3, 4, 5]),\n key=\"select_slider8\",\n on_change=on_change,\n )\n st.write(\"Value 8:\", st.session_state.select_slider8)\n st.write(\"Select slider changed:\", \"select_slider_changed\" in st.session_state)\n\nwith st.expander(\"Expander\", expanded=True):\n w9 = st.select_slider(\n label=\"Label 9\",\n options=[\"foo\", \"bar\", \"baz\", \"This is a very, very long option\"],\n value=\"This is a very, very long option\",\n )\n\n st.write(\"Value 9:\", w9)\n", "path": "e2e/scripts/st_select_slider.py"}]} | 1,781 | 173 |
gh_patches_debug_25216 | rasdani/github-patches | git_diff | TileDB-Inc__TileDB-Py-167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tiledb.open doesn't accept an optional ctx
It accepts a config, which is used to create a ctx internally; would it make sense to have both?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tiledb/highlevel.py`
Content:
```
1 import tiledb
2 from tiledb.libtiledb import *
3
4 import numpy as np
5
6 def open(uri, key=None, attr=None, mode='r', config=None):
7 """
8 Open a TileDB array at the given URI
9
10 :param uri: any TileDB supported URI
11 :param key: encryption key, str or None
12 :param str mode: (default 'r') Open the array object in read 'r' or write 'w' mode
13 :param attr: attribute name to select from a multi-attribute array, str or None
14 :param config: TileDB config dictionary, dict or None
15 :return:
16 """
17 if config:
18 cfg = tiledb.Config(config)
19 ctx = tiledb.Ctx(cfg)
20 else:
21 ctx = default_ctx()
22
23 schema = ArraySchema.load(uri, ctx=ctx)
24 if not schema:
25 raise Exception("Unable to load tiledb ArraySchema from URI: '{}'".format(uri))
26
27 if schema.sparse:
28 return tiledb.SparseArray(uri, mode=mode, key=key, attr=attr, ctx=ctx)
29 elif not schema.sparse:
30 return tiledb.DenseArray(uri, mode=mode, key=key, attr=attr, ctx=ctx)
31 else:
32 raise Exception("Unknown TileDB array type")
33
34
35 def save(uri, array, config=None, **kw):
36 """
37 Save array-like object at the given URI.
38
39 :param uri: str or None
40 :param array: array-like object convertible to NumPy
41 :param config: TileDB config dictionary, dict or None
42 :param kw: optional keyword args will be forwarded to tiledb.Array constructor
43 :return:
44 """
45 if not isinstance(array, np.ndarray):
46 raise ValueError("expected NumPy ndarray, not '{}'".format(type(array)))
47 if config:
48 cfg = Config(config)
49 ctx = tiledb.Ctx(cfg)
50 else:
51 ctx = default_ctx()
52
53 return tiledb.from_numpy(uri, array, ctx=ctx)
54
55
56 def empty_like(uri, arr, config=None, key=None, tile=None):
57 """
58 Create and return an empty, writeable DenseArray with schema based on
59 a NumPy-array like object.
60
61 :param uri:
62 :param arr: NumPy ndarray, or shape tuple
63 :param ctx:
64 :param kw:
65 :return:
66 """
67 if config:
68 cfg = tiledb.Config(config)
69 ctx = tiledb.Ctx(cfg)
70 else:
71 ctx = default_ctx()
72
73 if arr is ArraySchema:
74 schema = arr
75 else:
76 schema = schema_like(arr, tile=tile, ctx=ctx)
77
78 tiledb.DenseArray.create(uri, key=key, schema=schema)
79 return tiledb.DenseArray(uri, mode='w', key=key, ctx=ctx)
80
81
82 def from_numpy(uri, array, ctx=default_ctx(), **kw):
83 """
84 Convenience method, see `tiledb.DenseArray.from_numpy`
85 """
86 if not isinstance(array, np.ndarray):
87 raise Exception("from_numpy is only currently supported for numpy.ndarray")
88
89 return DenseArray.from_numpy(uri, array, ctx=ctx, **kw)
90
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tiledb/highlevel.py b/tiledb/highlevel.py
--- a/tiledb/highlevel.py
+++ b/tiledb/highlevel.py
@@ -3,7 +3,7 @@
import numpy as np
-def open(uri, key=None, attr=None, mode='r', config=None):
+def open(uri, key=None, attr=None, mode='r', config=None, ctx=None):
"""
Open a TileDB array at the given URI
@@ -14,10 +14,14 @@
:param config: TileDB config dictionary, dict or None
:return:
"""
+ if ctx and config:
+ raise ValueError("Received extra Ctx or Config argument: either one may be provided, but not both")
+
if config:
cfg = tiledb.Config(config)
ctx = tiledb.Ctx(cfg)
- else:
+
+ if ctx is None:
ctx = default_ctx()
schema = ArraySchema.load(uri, ctx=ctx)
@@ -88,3 +92,23 @@
return DenseArray.from_numpy(uri, array, ctx=ctx, **kw)
+def array_exists(uri, isdense=False, issparse=False):
+ """
+ Check if arrays exists and is open-able at the given URI
+
+ Optionally restrict to `isdense` or `issparse` array types.
+ """
+ try:
+ a = tiledb.open(uri)
+ except TileDBError as exc:
+ return False
+
+ if isdense:
+ rval = not a.schema.sparse
+ elif issparse:
+ rval = a.schema.sparse
+ else:
+ rval = True
+
+ a.close()
+ return rval
| {"golden_diff": "diff --git a/tiledb/highlevel.py b/tiledb/highlevel.py\n--- a/tiledb/highlevel.py\n+++ b/tiledb/highlevel.py\n@@ -3,7 +3,7 @@\n \n import numpy as np\n \n-def open(uri, key=None, attr=None, mode='r', config=None):\n+def open(uri, key=None, attr=None, mode='r', config=None, ctx=None):\n \"\"\"\n Open a TileDB array at the given URI\n \n@@ -14,10 +14,14 @@\n :param config: TileDB config dictionary, dict or None\n :return:\n \"\"\"\n+ if ctx and config:\n+ raise ValueError(\"Received extra Ctx or Config argument: either one may be provided, but not both\")\n+\n if config:\n cfg = tiledb.Config(config)\n ctx = tiledb.Ctx(cfg)\n- else:\n+\n+ if ctx is None:\n ctx = default_ctx()\n \n schema = ArraySchema.load(uri, ctx=ctx)\n@@ -88,3 +92,23 @@\n \n return DenseArray.from_numpy(uri, array, ctx=ctx, **kw)\n \n+def array_exists(uri, isdense=False, issparse=False):\n+ \"\"\"\n+ Check if arrays exists and is open-able at the given URI\n+\n+ Optionally restrict to `isdense` or `issparse` array types.\n+ \"\"\"\n+ try:\n+ a = tiledb.open(uri)\n+ except TileDBError as exc:\n+ return False\n+\n+ if isdense:\n+ rval = not a.schema.sparse\n+ elif issparse:\n+ rval = a.schema.sparse\n+ else:\n+ rval = True\n+\n+ a.close()\n+ return rval\n", "issue": "tiledb.open doesn't accept an optional ctx\nit accepts a config which creates a ctx, would it make sense to have both?\n", "before_files": [{"content": "import tiledb\nfrom tiledb.libtiledb import *\n\nimport numpy as np\n\ndef open(uri, key=None, attr=None, mode='r', config=None):\n \"\"\"\n Open a TileDB array at the given URI\n\n :param uri: any TileDB supported URI\n :param key: encryption key, str or None\n :param str mode: (default 'r') Open the array object in read 'r' or write 'w' mode\n :param attr: attribute name to select from a multi-attribute array, str or None\n :param config: TileDB config dictionary, dict or None\n :return:\n \"\"\"\n if config:\n cfg = tiledb.Config(config)\n ctx = tiledb.Ctx(cfg)\n else:\n ctx = default_ctx()\n\n schema = ArraySchema.load(uri, ctx=ctx)\n if not schema:\n raise Exception(\"Unable to load tiledb ArraySchema from URI: '{}'\".format(uri))\n\n if schema.sparse:\n return tiledb.SparseArray(uri, mode=mode, key=key, attr=attr, ctx=ctx)\n elif not schema.sparse:\n return tiledb.DenseArray(uri, mode=mode, key=key, attr=attr, ctx=ctx)\n else:\n raise Exception(\"Unknown TileDB array type\")\n\n\ndef save(uri, array, config=None, **kw):\n \"\"\"\n Save array-like object at the given URI.\n\n :param uri: str or None\n :param array: array-like object convertible to NumPy\n :param config: TileDB config dictionary, dict or None\n :param kw: optional keyword args will be forwarded to tiledb.Array constructor\n :return:\n \"\"\"\n if not isinstance(array, np.ndarray):\n raise ValueError(\"expected NumPy ndarray, not '{}'\".format(type(array)))\n if config:\n cfg = Config(config)\n ctx = tiledb.Ctx(cfg)\n else:\n ctx = default_ctx()\n\n return tiledb.from_numpy(uri, array, ctx=ctx)\n\n\ndef empty_like(uri, arr, config=None, key=None, tile=None):\n \"\"\"\n Create and return an empty, writeable DenseArray with schema based on\n a NumPy-array like object.\n\n :param uri:\n :param arr: NumPy ndarray, or shape tuple\n :param ctx:\n :param kw:\n :return:\n \"\"\"\n if config:\n cfg = tiledb.Config(config)\n ctx = tiledb.Ctx(cfg)\n else:\n ctx = default_ctx()\n\n if arr is ArraySchema:\n schema = arr\n else:\n schema = schema_like(arr, tile=tile, ctx=ctx)\n\n tiledb.DenseArray.create(uri, key=key, schema=schema)\n 
return tiledb.DenseArray(uri, mode='w', key=key, ctx=ctx)\n\n\ndef from_numpy(uri, array, ctx=default_ctx(), **kw):\n \"\"\"\n Convenience method, see `tiledb.DenseArray.from_numpy`\n \"\"\"\n if not isinstance(array, np.ndarray):\n raise Exception(\"from_numpy is only currently supported for numpy.ndarray\")\n\n return DenseArray.from_numpy(uri, array, ctx=ctx, **kw)\n\n", "path": "tiledb/highlevel.py"}], "after_files": [{"content": "import tiledb\nfrom tiledb.libtiledb import *\n\nimport numpy as np\n\ndef open(uri, key=None, attr=None, mode='r', config=None, ctx=None):\n \"\"\"\n Open a TileDB array at the given URI\n\n :param uri: any TileDB supported URI\n :param key: encryption key, str or None\n :param str mode: (default 'r') Open the array object in read 'r' or write 'w' mode\n :param attr: attribute name to select from a multi-attribute array, str or None\n :param config: TileDB config dictionary, dict or None\n :return:\n \"\"\"\n if ctx and config:\n raise ValueError(\"Received extra Ctx or Config argument: either one may be provided, but not both\")\n\n if config:\n cfg = tiledb.Config(config)\n ctx = tiledb.Ctx(cfg)\n\n if ctx is None:\n ctx = default_ctx()\n\n schema = ArraySchema.load(uri, ctx=ctx)\n if not schema:\n raise Exception(\"Unable to load tiledb ArraySchema from URI: '{}'\".format(uri))\n\n if schema.sparse:\n return tiledb.SparseArray(uri, mode=mode, key=key, attr=attr, ctx=ctx)\n elif not schema.sparse:\n return tiledb.DenseArray(uri, mode=mode, key=key, attr=attr, ctx=ctx)\n else:\n raise Exception(\"Unknown TileDB array type\")\n\n\ndef save(uri, array, config=None, **kw):\n \"\"\"\n Save array-like object at the given URI.\n\n :param uri: str or None\n :param array: array-like object convertible to NumPy\n :param config: TileDB config dictionary, dict or None\n :param kw: optional keyword args will be forwarded to tiledb.Array constructor\n :return:\n \"\"\"\n if not isinstance(array, np.ndarray):\n raise ValueError(\"expected NumPy ndarray, not '{}'\".format(type(array)))\n if config:\n cfg = Config(config)\n ctx = tiledb.Ctx(cfg)\n else:\n ctx = default_ctx()\n\n return tiledb.from_numpy(uri, array, ctx=ctx)\n\n\ndef empty_like(uri, arr, config=None, key=None, tile=None):\n \"\"\"\n Create and return an empty, writeable DenseArray with schema based on\n a NumPy-array like object.\n\n :param uri:\n :param arr: NumPy ndarray, or shape tuple\n :param ctx:\n :param kw:\n :return:\n \"\"\"\n if config:\n cfg = tiledb.Config(config)\n ctx = tiledb.Ctx(cfg)\n else:\n ctx = default_ctx()\n\n if arr is ArraySchema:\n schema = arr\n else:\n schema = schema_like(arr, tile=tile, ctx=ctx)\n\n tiledb.DenseArray.create(uri, key=key, schema=schema)\n return tiledb.DenseArray(uri, mode='w', key=key, ctx=ctx)\n\n\ndef from_numpy(uri, array, ctx=default_ctx(), **kw):\n \"\"\"\n Convenience method, see `tiledb.DenseArray.from_numpy`\n \"\"\"\n if not isinstance(array, np.ndarray):\n raise Exception(\"from_numpy is only currently supported for numpy.ndarray\")\n\n return DenseArray.from_numpy(uri, array, ctx=ctx, **kw)\n\ndef array_exists(uri, isdense=False, issparse=False):\n \"\"\"\n Check if arrays exists and is open-able at the given URI\n\n Optionally restrict to `isdense` or `issparse` array types.\n \"\"\"\n try:\n a = tiledb.open(uri)\n except TileDBError as exc:\n return False\n\n if isdense:\n rval = not a.schema.sparse\n elif issparse:\n rval = a.schema.sparse\n else:\n rval = True\n\n a.close()\n return rval\n", "path": "tiledb/highlevel.py"}]} | 1,142 | 384 |
gh_patches_debug_13516 | rasdani/github-patches | git_diff | ivy-llc__ivy-19895 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dropout2d
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/mindspore/ops/function/nn_func.py`
Content:
```
1 """Includes Mindspore Frontend functions listed in the TODO list
2 https://github.com/unifyai/ivy/issues/14951."""
3
4 # local
5 import ivy
6 from ivy.func_wrapper import with_supported_dtypes
7 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
8
9
10 @with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
11 @to_ivy_arrays_and_back
12 def selu(input_x):
13 return ivy.selu(input_x)
14
15
16 @with_supported_dtypes({"2.0 and below": ("float16", "float32")}, "mindspore")
17 @to_ivy_arrays_and_back
18 def softsign(x):
19 return ivy.divide(x, ivy.add(1, ivy.abs(x)))
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py
+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
@@ -6,6 +6,23 @@
from ivy.func_wrapper import with_supported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
+@with_supported_dtypes(
+ {
+ "2.0.0 and below": (
+ "int8",
+ "int16",
+ "int32",
+ "int64",
+ "float16",
+ "float32",
+ "float64",
+ )
+ },
+ "mindspore",
+)
+@to_ivy_arrays_and_back
+def dropout2d(input, p=0.5, training=True):
+ return ivy.dropout2d(input, p, training=training, data_format="NCHW")
@with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
@to_ivy_arrays_and_back
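A small usage sketch of the frontend function added above (values are made up, and it assumes an Ivy backend such as NumPy is installed and selected):

```python
import ivy
from ivy.functional.frontends.mindspore.ops.function import nn_func

ivy.set_backend("numpy")

# NCHW input: batch of 2, 3 channels, 4x4 feature maps.
x = ivy.ones((2, 3, 4, 4))

# Mirrors mindspore.ops.dropout2d: whole channels are zeroed with probability p.
y = nn_func.dropout2d(x, p=0.5, training=True)
print(y.shape)  # same shape as the input
```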
| {"golden_diff": "diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n@@ -6,6 +6,23 @@\n from ivy.func_wrapper import with_supported_dtypes\n from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n \n+@with_supported_dtypes(\n+ {\n+ \"2.0.0 and below\": (\n+ \"int8\",\n+ \"int16\",\n+ \"int32\",\n+ \"int64\",\n+ \"float16\",\n+ \"float32\",\n+ \"float64\",\n+ )\n+ },\n+ \"mindspore\",\n+)\n+@to_ivy_arrays_and_back\n+def dropout2d(input, p=0.5, training=True):\n+ return ivy.dropout2d(input, p, training=training, data_format=\"NCHW\")\n \n @with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n @to_ivy_arrays_and_back\n", "issue": "dropout2d\n\n", "before_files": [{"content": "\"\"\"Includes Mindspore Frontend functions listed in the TODO list\nhttps://github.com/unifyai/ivy/issues/14951.\"\"\"\n\n# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef selu(input_x):\n return ivy.selu(input_x)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef softsign(x):\n return ivy.divide(x, ivy.add(1, ivy.abs(x)))\n", "path": "ivy/functional/frontends/mindspore/ops/function/nn_func.py"}], "after_files": [{"content": "\"\"\"Includes Mindspore Frontend functions listed in the TODO list\nhttps://github.com/unifyai/ivy/issues/14951.\"\"\"\n\n# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef dropout2d(input, p=0.5, training=True):\n return ivy.dropout2d(input, p, training=training, data_format=\"NCHW\")\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef selu(input_x):\n return ivy.selu(input_x)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef softsign(x):\n return ivy.divide(x, ivy.add(1, ivy.abs(x)))\n", "path": "ivy/functional/frontends/mindspore/ops/function/nn_func.py"}]} | 489 | 292 |
gh_patches_debug_39053 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2444 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create a Deployment type 1 specific script.
Create a script to download docker-compose from the Mathesar repo and help the user set up Mathesar.
- [ ] The script should be self-documenting, i.e., the prompts during interactive portions should help the user understand the implications of what they're doing, and any error states should give enough info for the user to find solutions in the troubleshooting documentation on docs.mathesar.org
- [ ] The script should also add help text to the installation CLI listing the database privileges needed for Mathesar to function effectively.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install.py`
Content:
```
1 """
2 This script installs functions and types for Mathesar onto the configured DB.
3 """
4 import getopt
5 import sys
6
7 import getpass
8
9 import django
10 from decouple import UndefinedValueError, config as decouple_config
11 from django.contrib.auth import get_user_model
12 from django.core import management
13
14 from django.conf import settings
15 from db import install
16
17
18 def main():
19 skip_confirm = False
20 (opts, _) = getopt.getopt(sys.argv[1:], ":s", ["skip-confirm"])
21 for (opt, value) in opts:
22 if (opt == "-s") or (opt == "--skip-confirm"):
23 skip_confirm = True
24 check_missing_dj_config()
25 django.setup()
26 management.call_command('migrate')
27 debug_mode = decouple_config('DEBUG', default=False, cast=bool)
28 #
29 if not debug_mode:
30 management.call_command('collectstatic', no_input='y')
31 if not superuser_exists():
32 print("------------Setting up Admin user------------")
33 print("Admin user does not exists. We need at least one admin")
34 create_superuser(skip_confirm)
35
36 print("------------Setting up User Databases------------")
37 user_databases = [key for key in settings.DATABASES if key != "default"]
38 for database_key in user_databases:
39 install_on_db_with_key(database_key, skip_confirm)
40
41
42 def superuser_exists():
43 return get_user_model().objects.filter(is_superuser=True).exists()
44
45
46 def create_superuser(skip_confirm):
47 # TODO Replace argument name used for default admin user creation.
48 if not skip_confirm:
49 print("Please enter the details to create a new admin user ")
50 username = input("Username: ")
51 email = input("Email: ")
52 password = getpass.getpass('Password: ')
53 else:
54 username = "admin"
55 email = "[email protected]"
56 password = "password"
57 get_user_model().objects.create_superuser(username, email, password)
58 print(f"Admin user with username {username} was created successfully")
59
60
61 def check_missing_dj_config():
62 # TODO Add documentation link
63 documentation_link = ""
64 try:
65 decouple_config('ALLOWED_HOSTS')
66 decouple_config('SECRET_KEY')
67 decouple_config('DJANGO_DATABASE_KEY')
68 decouple_config('DJANGO_SETTINGS_MODULE')
69 decouple_config('DJANGO_DATABASE_URL')
70 decouple_config('MATHESAR_DATABASES')
71 except UndefinedValueError as e:
72 missing_config_key = e.args[0]
73 raise Exception(f"{missing_config_key} environment variable is missing."
74 f" Please follow the documentation {documentation_link} to add the missing environment variable.")
75
76
77 def install_on_db_with_key(database_key, skip_confirm):
78 install.install_mathesar(
79 database_name=settings.DATABASES[database_key]["NAME"],
80 username=settings.DATABASES[database_key]["USER"],
81 password=settings.DATABASES[database_key]["PASSWORD"],
82 hostname=settings.DATABASES[database_key]["HOST"],
83 port=settings.DATABASES[database_key]["PORT"],
84 skip_confirm=skip_confirm
85 )
86
87
88 if __name__ == "__main__":
89 main()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/install.py b/install.py
--- a/install.py
+++ b/install.py
@@ -4,11 +4,7 @@
import getopt
import sys
-import getpass
-
import django
-from decouple import UndefinedValueError, config as decouple_config
-from django.contrib.auth import get_user_model
from django.core import management
from django.conf import settings
@@ -16,64 +12,20 @@
def main():
- skip_confirm = False
+ # skip_confirm is temporarily enabled by default as we don't have any use for interactive prompts with docker only deployments
+ skip_confirm = True
(opts, _) = getopt.getopt(sys.argv[1:], ":s", ["skip-confirm"])
for (opt, value) in opts:
if (opt == "-s") or (opt == "--skip-confirm"):
skip_confirm = True
- check_missing_dj_config()
django.setup()
management.call_command('migrate')
- debug_mode = decouple_config('DEBUG', default=False, cast=bool)
- #
- if not debug_mode:
- management.call_command('collectstatic', no_input='y')
- if not superuser_exists():
- print("------------Setting up Admin user------------")
- print("Admin user does not exists. We need at least one admin")
- create_superuser(skip_confirm)
-
print("------------Setting up User Databases------------")
user_databases = [key for key in settings.DATABASES if key != "default"]
for database_key in user_databases:
install_on_db_with_key(database_key, skip_confirm)
-def superuser_exists():
- return get_user_model().objects.filter(is_superuser=True).exists()
-
-
-def create_superuser(skip_confirm):
- # TODO Replace argument name used for default admin user creation.
- if not skip_confirm:
- print("Please enter the details to create a new admin user ")
- username = input("Username: ")
- email = input("Email: ")
- password = getpass.getpass('Password: ')
- else:
- username = "admin"
- email = "[email protected]"
- password = "password"
- get_user_model().objects.create_superuser(username, email, password)
- print(f"Admin user with username {username} was created successfully")
-
-
-def check_missing_dj_config():
- # TODO Add documentation link
- documentation_link = ""
- try:
- decouple_config('ALLOWED_HOSTS')
- decouple_config('SECRET_KEY')
- decouple_config('DJANGO_DATABASE_KEY')
- decouple_config('DJANGO_SETTINGS_MODULE')
- decouple_config('DJANGO_DATABASE_URL')
- decouple_config('MATHESAR_DATABASES')
- except UndefinedValueError as e:
- missing_config_key = e.args[0]
- raise Exception(f"{missing_config_key} environment variable is missing."
- f" Please follow the documentation {documentation_link} to add the missing environment variable.")
-
-
def install_on_db_with_key(database_key, skip_confirm):
install.install_mathesar(
database_name=settings.DATABASES[database_key]["NAME"],
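The diff above strips the interactive superuser prompts out of `install.py`, leaving the guided, self-documenting prompts the issue asks for to a separate installation script. One possible shape for such a prompt, purely as a sketch (the wording, helper name, and docs URL default are assumptions, not taken from the Mathesar repository):

```python
def confirm_step(question, implication, docs_url="https://docs.mathesar.org/"):
    """Explain a step before asking for confirmation; point at the docs on refusal."""
    print(implication)
    answer = input(f"{question} [y/N] ").strip().lower()
    if answer != "y":
        print(f"Stopping here. See {docs_url} for troubleshooting this step.")
        raise SystemExit(1)


confirm_step(
    "Continue and install Mathesar's types and functions on this database?",
    "The configured database user needs privileges to create schemas, types, "
    "and functions on the target database (the exact requirements belong in the docs).",
)
```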
| {"golden_diff": "diff --git a/install.py b/install.py\n--- a/install.py\n+++ b/install.py\n@@ -4,11 +4,7 @@\n import getopt\n import sys\n \n-import getpass\n-\n import django\n-from decouple import UndefinedValueError, config as decouple_config\n-from django.contrib.auth import get_user_model\n from django.core import management\n \n from django.conf import settings\n@@ -16,64 +12,20 @@\n \n \n def main():\n- skip_confirm = False\n+ # skip_confirm is temporarily enabled by default as we don't have any use for interactive prompts with docker only deployments\n+ skip_confirm = True\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n- check_missing_dj_config()\n django.setup()\n management.call_command('migrate')\n- debug_mode = decouple_config('DEBUG', default=False, cast=bool)\n- #\n- if not debug_mode:\n- management.call_command('collectstatic', no_input='y')\n- if not superuser_exists():\n- print(\"------------Setting up Admin user------------\")\n- print(\"Admin user does not exists. We need at least one admin\")\n- create_superuser(skip_confirm)\n-\n print(\"------------Setting up User Databases------------\")\n user_databases = [key for key in settings.DATABASES if key != \"default\"]\n for database_key in user_databases:\n install_on_db_with_key(database_key, skip_confirm)\n \n \n-def superuser_exists():\n- return get_user_model().objects.filter(is_superuser=True).exists()\n-\n-\n-def create_superuser(skip_confirm):\n- # TODO Replace argument name used for default admin user creation.\n- if not skip_confirm:\n- print(\"Please enter the details to create a new admin user \")\n- username = input(\"Username: \")\n- email = input(\"Email: \")\n- password = getpass.getpass('Password: ')\n- else:\n- username = \"admin\"\n- email = \"[email protected]\"\n- password = \"password\"\n- get_user_model().objects.create_superuser(username, email, password)\n- print(f\"Admin user with username {username} was created successfully\")\n-\n-\n-def check_missing_dj_config():\n- # TODO Add documentation link\n- documentation_link = \"\"\n- try:\n- decouple_config('ALLOWED_HOSTS')\n- decouple_config('SECRET_KEY')\n- decouple_config('DJANGO_DATABASE_KEY')\n- decouple_config('DJANGO_SETTINGS_MODULE')\n- decouple_config('DJANGO_DATABASE_URL')\n- decouple_config('MATHESAR_DATABASES')\n- except UndefinedValueError as e:\n- missing_config_key = e.args[0]\n- raise Exception(f\"{missing_config_key} environment variable is missing.\"\n- f\" Please follow the documentation {documentation_link} to add the missing environment variable.\")\n-\n-\n def install_on_db_with_key(database_key, skip_confirm):\n install.install_mathesar(\n database_name=settings.DATABASES[database_key][\"NAME\"],\n", "issue": "Create a Deployment type 1 specific script.\nCreate a script to download docker-compose from the Mathesar repo and help the user in setting up Mathesar.\r\n\r\n- [ ] The script should be self-documenting, i.e., the prompts during interactive portions should help the user understand the implications of what they're doing, and any error states should give enough info for the user to find solutions in the troubleshooting documentation on docs.mathesar.org\r\n \r\n- [ ] The script should also add help text in the installation CLI that would list the database Privileges needed for Mathesar to function effectively.\n", "before_files": [{"content": "\"\"\"\nThis script installs functions and types for 
Mathesar onto the configured DB.\n\"\"\"\nimport getopt\nimport sys\n\nimport getpass\n\nimport django\nfrom decouple import UndefinedValueError, config as decouple_config\nfrom django.contrib.auth import get_user_model\nfrom django.core import management\n\nfrom django.conf import settings\nfrom db import install\n\n\ndef main():\n skip_confirm = False\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n check_missing_dj_config()\n django.setup()\n management.call_command('migrate')\n debug_mode = decouple_config('DEBUG', default=False, cast=bool)\n #\n if not debug_mode:\n management.call_command('collectstatic', no_input='y')\n if not superuser_exists():\n print(\"------------Setting up Admin user------------\")\n print(\"Admin user does not exists. We need at least one admin\")\n create_superuser(skip_confirm)\n\n print(\"------------Setting up User Databases------------\")\n user_databases = [key for key in settings.DATABASES if key != \"default\"]\n for database_key in user_databases:\n install_on_db_with_key(database_key, skip_confirm)\n\n\ndef superuser_exists():\n return get_user_model().objects.filter(is_superuser=True).exists()\n\n\ndef create_superuser(skip_confirm):\n # TODO Replace argument name used for default admin user creation.\n if not skip_confirm:\n print(\"Please enter the details to create a new admin user \")\n username = input(\"Username: \")\n email = input(\"Email: \")\n password = getpass.getpass('Password: ')\n else:\n username = \"admin\"\n email = \"[email protected]\"\n password = \"password\"\n get_user_model().objects.create_superuser(username, email, password)\n print(f\"Admin user with username {username} was created successfully\")\n\n\ndef check_missing_dj_config():\n # TODO Add documentation link\n documentation_link = \"\"\n try:\n decouple_config('ALLOWED_HOSTS')\n decouple_config('SECRET_KEY')\n decouple_config('DJANGO_DATABASE_KEY')\n decouple_config('DJANGO_SETTINGS_MODULE')\n decouple_config('DJANGO_DATABASE_URL')\n decouple_config('MATHESAR_DATABASES')\n except UndefinedValueError as e:\n missing_config_key = e.args[0]\n raise Exception(f\"{missing_config_key} environment variable is missing.\"\n f\" Please follow the documentation {documentation_link} to add the missing environment variable.\")\n\n\ndef install_on_db_with_key(database_key, skip_confirm):\n install.install_mathesar(\n database_name=settings.DATABASES[database_key][\"NAME\"],\n username=settings.DATABASES[database_key][\"USER\"],\n password=settings.DATABASES[database_key][\"PASSWORD\"],\n hostname=settings.DATABASES[database_key][\"HOST\"],\n port=settings.DATABASES[database_key][\"PORT\"],\n skip_confirm=skip_confirm\n )\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install.py"}], "after_files": [{"content": "\"\"\"\nThis script installs functions and types for Mathesar onto the configured DB.\n\"\"\"\nimport getopt\nimport sys\n\nimport django\nfrom django.core import management\n\nfrom django.conf import settings\nfrom db import install\n\n\ndef main():\n # skip_confirm is temporarily enabled by default as we don't have any use for interactive prompts with docker only deployments\n skip_confirm = True\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n django.setup()\n management.call_command('migrate')\n 
print(\"------------Setting up User Databases------------\")\n user_databases = [key for key in settings.DATABASES if key != \"default\"]\n for database_key in user_databases:\n install_on_db_with_key(database_key, skip_confirm)\n\n\ndef install_on_db_with_key(database_key, skip_confirm):\n install.install_mathesar(\n database_name=settings.DATABASES[database_key][\"NAME\"],\n username=settings.DATABASES[database_key][\"USER\"],\n password=settings.DATABASES[database_key][\"PASSWORD\"],\n hostname=settings.DATABASES[database_key][\"HOST\"],\n port=settings.DATABASES[database_key][\"PORT\"],\n skip_confirm=skip_confirm\n )\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install.py"}]} | 1,208 | 680 |
gh_patches_debug_11189 | rasdani/github-patches | git_diff | nf-core__tools-1520 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
modules install to also print/write `include` statements
### Description of feature
Print `include` statements to the terminal so they can easily be copied into the destination files.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nf_core/modules/install.py`
Content:
```
1 import os
2 import questionary
3 import logging
4
5 import nf_core.utils
6 import nf_core.modules.module_utils
7
8 from .modules_command import ModuleCommand
9 from .module_utils import get_module_git_log, module_exist_in_repo
10
11 log = logging.getLogger(__name__)
12
13
14 class ModuleInstall(ModuleCommand):
15 def __init__(self, pipeline_dir, force=False, prompt=False, sha=None, update_all=False):
16 super().__init__(pipeline_dir)
17 self.force = force
18 self.prompt = prompt
19 self.sha = sha
20 self.update_all = update_all
21
22 def install(self, module):
23 if self.repo_type == "modules":
24 log.error("You cannot install a module in a clone of nf-core/modules")
25 return False
26 # Check whether pipelines is valid
27 if not self.has_valid_directory():
28 return False
29
30 # Verify that 'modules.json' is consistent with the installed modules
31 self.modules_json_up_to_date()
32
33 # Get the available modules
34 try:
35 self.modules_repo.get_modules_file_tree()
36 except LookupError as e:
37 log.error(e)
38 return False
39
40 if self.prompt and self.sha is not None:
41 log.error("Cannot use '--sha' and '--prompt' at the same time!")
42 return False
43
44 # Verify that the provided SHA exists in the repo
45 if self.sha:
46 try:
47 nf_core.modules.module_utils.sha_exists(self.sha, self.modules_repo)
48 except UserWarning:
49 log.error(f"Commit SHA '{self.sha}' doesn't exist in '{self.modules_repo.name}'")
50 return False
51 except LookupError as e:
52 log.error(e)
53 return False
54
55 if module is None:
56 module = questionary.autocomplete(
57 "Tool name:",
58 choices=self.modules_repo.modules_avail_module_names,
59 style=nf_core.utils.nfcore_question_style,
60 ).unsafe_ask()
61
62 # Check that the supplied name is an available module
63 if module and module not in self.modules_repo.modules_avail_module_names:
64 log.error("Module '{}' not found in list of available modules.".format(module))
65 log.info("Use the command 'nf-core modules list' to view available software")
66 return False
67
68 # Load 'modules.json'
69 modules_json = self.load_modules_json()
70 if not modules_json:
71 return False
72
73 if not module_exist_in_repo(module, self.modules_repo):
74 warn_msg = f"Module '{module}' not found in remote '{self.modules_repo.name}' ({self.modules_repo.branch})"
75 log.warning(warn_msg)
76 return False
77
78 if self.modules_repo.name in modules_json["repos"]:
79 current_entry = modules_json["repos"][self.modules_repo.name].get(module)
80 else:
81 current_entry = None
82
83 # Set the install folder based on the repository name
84 install_folder = [self.dir, "modules", self.modules_repo.owner, self.modules_repo.repo]
85
86 # Compute the module directory
87 module_dir = os.path.join(*install_folder, module)
88
89 # Check that the module is not already installed
90 if (current_entry is not None and os.path.exists(module_dir)) and not self.force:
91
92 log.error(f"Module is already installed.")
93 repo_flag = "" if self.modules_repo.name == "nf-core/modules" else f"-g {self.modules_repo.name} "
94 branch_flag = "" if self.modules_repo.branch == "master" else f"-b {self.modules_repo.branch} "
95
96 log.info(
97 f"To update '{module}' run 'nf-core modules {repo_flag}{branch_flag}update {module}'. To force reinstallation use '--force'"
98 )
99 return False
100
101 if self.sha:
102 version = self.sha
103 elif self.prompt:
104 try:
105 version = nf_core.modules.module_utils.prompt_module_version_sha(
106 module,
107 installed_sha=current_entry["git_sha"] if not current_entry is None else None,
108 modules_repo=self.modules_repo,
109 )
110 except SystemError as e:
111 log.error(e)
112 return False
113 else:
114 # Fetch the latest commit for the module
115 try:
116 git_log = get_module_git_log(module, modules_repo=self.modules_repo, per_page=1, page_nbr=1)
117 except UserWarning:
118 log.error(f"Was unable to fetch version of module '{module}'")
119 return False
120 version = git_log[0]["git_sha"]
121
122 if self.force:
123 log.info(f"Removing installed version of '{self.modules_repo.name}/{module}'")
124 self.clear_module_dir(module, module_dir)
125
126 log.info(f"{'Rei' if self.force else 'I'}nstalling '{module}'")
127 log.debug(f"Installing module '{module}' at modules hash {version} from {self.modules_repo.name}")
128
129 # Download module files
130 if not self.download_module_file(module, version, self.modules_repo, install_folder):
131 return False
132
133 # Update module.json with newly installed module
134 self.update_modules_json(modules_json, self.modules_repo.name, module, version)
135 return True
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nf_core/modules/install.py b/nf_core/modules/install.py
--- a/nf_core/modules/install.py
+++ b/nf_core/modules/install.py
@@ -130,6 +130,10 @@
if not self.download_module_file(module, version, self.modules_repo, install_folder):
return False
+ # Print include statement
+ module_name = "_".join(module.upper().split("/"))
+ log.info(f"Include statement: include {{ {module_name} }} from '.{os.path.join(*install_folder, module)}/main’")
+
# Update module.json with newly installed module
self.update_modules_json(modules_json, self.modules_repo.name, module, version)
return True
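For clarity, a short sketch of how the new log line is assembled, using a made-up module name and install folder (the real values come from the pipeline directory and modules repository):

```python
import os

module = "samtools/sort"  # example tool/subtool, not a real installation
install_folder = ["my-pipeline", "modules", "nf-core", "modules"]

# "samtools/sort" -> "SAMTOOLS_SORT", matching the patch above.
module_name = "_".join(module.upper().split("/"))

include_stmt = (
    f"include {{ {module_name} }} from "
    f"'.{os.path.join(*install_folder, module)}/main'"
)
print(include_stmt)
# include { SAMTOOLS_SORT } from '.my-pipeline/modules/nf-core/modules/samtools/sort/main'
```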
| {"golden_diff": "diff --git a/nf_core/modules/install.py b/nf_core/modules/install.py\n--- a/nf_core/modules/install.py\n+++ b/nf_core/modules/install.py\n@@ -130,6 +130,10 @@\n if not self.download_module_file(module, version, self.modules_repo, install_folder):\n return False\n \n+ # Print include statement\n+ module_name = \"_\".join(module.upper().split(\"/\"))\n+ log.info(f\"Include statement: include {{ {module_name} }} from '.{os.path.join(*install_folder, module)}/main\u2019\")\n+\n # Update module.json with newly installed module\n self.update_modules_json(modules_json, self.modules_repo.name, module, version)\n return True\n", "issue": "modules install to also print/write `include` statements\n### Description of feature\n\nprint `include` statements to terminal for easy copy to destination files\n", "before_files": [{"content": "import os\nimport questionary\nimport logging\n\nimport nf_core.utils\nimport nf_core.modules.module_utils\n\nfrom .modules_command import ModuleCommand\nfrom .module_utils import get_module_git_log, module_exist_in_repo\n\nlog = logging.getLogger(__name__)\n\n\nclass ModuleInstall(ModuleCommand):\n def __init__(self, pipeline_dir, force=False, prompt=False, sha=None, update_all=False):\n super().__init__(pipeline_dir)\n self.force = force\n self.prompt = prompt\n self.sha = sha\n self.update_all = update_all\n\n def install(self, module):\n if self.repo_type == \"modules\":\n log.error(\"You cannot install a module in a clone of nf-core/modules\")\n return False\n # Check whether pipelines is valid\n if not self.has_valid_directory():\n return False\n\n # Verify that 'modules.json' is consistent with the installed modules\n self.modules_json_up_to_date()\n\n # Get the available modules\n try:\n self.modules_repo.get_modules_file_tree()\n except LookupError as e:\n log.error(e)\n return False\n\n if self.prompt and self.sha is not None:\n log.error(\"Cannot use '--sha' and '--prompt' at the same time!\")\n return False\n\n # Verify that the provided SHA exists in the repo\n if self.sha:\n try:\n nf_core.modules.module_utils.sha_exists(self.sha, self.modules_repo)\n except UserWarning:\n log.error(f\"Commit SHA '{self.sha}' doesn't exist in '{self.modules_repo.name}'\")\n return False\n except LookupError as e:\n log.error(e)\n return False\n\n if module is None:\n module = questionary.autocomplete(\n \"Tool name:\",\n choices=self.modules_repo.modules_avail_module_names,\n style=nf_core.utils.nfcore_question_style,\n ).unsafe_ask()\n\n # Check that the supplied name is an available module\n if module and module not in self.modules_repo.modules_avail_module_names:\n log.error(\"Module '{}' not found in list of available modules.\".format(module))\n log.info(\"Use the command 'nf-core modules list' to view available software\")\n return False\n\n # Load 'modules.json'\n modules_json = self.load_modules_json()\n if not modules_json:\n return False\n\n if not module_exist_in_repo(module, self.modules_repo):\n warn_msg = f\"Module '{module}' not found in remote '{self.modules_repo.name}' ({self.modules_repo.branch})\"\n log.warning(warn_msg)\n return False\n\n if self.modules_repo.name in modules_json[\"repos\"]:\n current_entry = modules_json[\"repos\"][self.modules_repo.name].get(module)\n else:\n current_entry = None\n\n # Set the install folder based on the repository name\n install_folder = [self.dir, \"modules\", self.modules_repo.owner, self.modules_repo.repo]\n\n # Compute the module directory\n module_dir = os.path.join(*install_folder, module)\n\n 
# Check that the module is not already installed\n if (current_entry is not None and os.path.exists(module_dir)) and not self.force:\n\n log.error(f\"Module is already installed.\")\n repo_flag = \"\" if self.modules_repo.name == \"nf-core/modules\" else f\"-g {self.modules_repo.name} \"\n branch_flag = \"\" if self.modules_repo.branch == \"master\" else f\"-b {self.modules_repo.branch} \"\n\n log.info(\n f\"To update '{module}' run 'nf-core modules {repo_flag}{branch_flag}update {module}'. To force reinstallation use '--force'\"\n )\n return False\n\n if self.sha:\n version = self.sha\n elif self.prompt:\n try:\n version = nf_core.modules.module_utils.prompt_module_version_sha(\n module,\n installed_sha=current_entry[\"git_sha\"] if not current_entry is None else None,\n modules_repo=self.modules_repo,\n )\n except SystemError as e:\n log.error(e)\n return False\n else:\n # Fetch the latest commit for the module\n try:\n git_log = get_module_git_log(module, modules_repo=self.modules_repo, per_page=1, page_nbr=1)\n except UserWarning:\n log.error(f\"Was unable to fetch version of module '{module}'\")\n return False\n version = git_log[0][\"git_sha\"]\n\n if self.force:\n log.info(f\"Removing installed version of '{self.modules_repo.name}/{module}'\")\n self.clear_module_dir(module, module_dir)\n\n log.info(f\"{'Rei' if self.force else 'I'}nstalling '{module}'\")\n log.debug(f\"Installing module '{module}' at modules hash {version} from {self.modules_repo.name}\")\n\n # Download module files\n if not self.download_module_file(module, version, self.modules_repo, install_folder):\n return False\n\n # Update module.json with newly installed module\n self.update_modules_json(modules_json, self.modules_repo.name, module, version)\n return True\n", "path": "nf_core/modules/install.py"}], "after_files": [{"content": "import os\nimport questionary\nimport logging\n\nimport nf_core.utils\nimport nf_core.modules.module_utils\n\nfrom .modules_command import ModuleCommand\nfrom .module_utils import get_module_git_log, module_exist_in_repo\n\nlog = logging.getLogger(__name__)\n\n\nclass ModuleInstall(ModuleCommand):\n def __init__(self, pipeline_dir, force=False, prompt=False, sha=None, update_all=False):\n super().__init__(pipeline_dir)\n self.force = force\n self.prompt = prompt\n self.sha = sha\n self.update_all = update_all\n\n def install(self, module):\n if self.repo_type == \"modules\":\n log.error(\"You cannot install a module in a clone of nf-core/modules\")\n return False\n # Check whether pipelines is valid\n if not self.has_valid_directory():\n return False\n\n # Verify that 'modules.json' is consistent with the installed modules\n self.modules_json_up_to_date()\n\n # Get the available modules\n try:\n self.modules_repo.get_modules_file_tree()\n except LookupError as e:\n log.error(e)\n return False\n\n if self.prompt and self.sha is not None:\n log.error(\"Cannot use '--sha' and '--prompt' at the same time!\")\n return False\n\n # Verify that the provided SHA exists in the repo\n if self.sha:\n try:\n nf_core.modules.module_utils.sha_exists(self.sha, self.modules_repo)\n except UserWarning:\n log.error(f\"Commit SHA '{self.sha}' doesn't exist in '{self.modules_repo.name}'\")\n return False\n except LookupError as e:\n log.error(e)\n return False\n\n if module is None:\n module = questionary.autocomplete(\n \"Tool name:\",\n choices=self.modules_repo.modules_avail_module_names,\n style=nf_core.utils.nfcore_question_style,\n ).unsafe_ask()\n\n # Check that the supplied name is an available 
module\n if module and module not in self.modules_repo.modules_avail_module_names:\n log.error(\"Module '{}' not found in list of available modules.\".format(module))\n log.info(\"Use the command 'nf-core modules list' to view available software\")\n return False\n\n # Load 'modules.json'\n modules_json = self.load_modules_json()\n if not modules_json:\n return False\n\n if not module_exist_in_repo(module, self.modules_repo):\n warn_msg = f\"Module '{module}' not found in remote '{self.modules_repo.name}' ({self.modules_repo.branch})\"\n log.warning(warn_msg)\n return False\n\n if self.modules_repo.name in modules_json[\"repos\"]:\n current_entry = modules_json[\"repos\"][self.modules_repo.name].get(module)\n else:\n current_entry = None\n\n # Set the install folder based on the repository name\n install_folder = [self.dir, \"modules\", self.modules_repo.owner, self.modules_repo.repo]\n\n # Compute the module directory\n module_dir = os.path.join(*install_folder, module)\n\n # Check that the module is not already installed\n if (current_entry is not None and os.path.exists(module_dir)) and not self.force:\n\n log.error(f\"Module is already installed.\")\n repo_flag = \"\" if self.modules_repo.name == \"nf-core/modules\" else f\"-g {self.modules_repo.name} \"\n branch_flag = \"\" if self.modules_repo.branch == \"master\" else f\"-b {self.modules_repo.branch} \"\n\n log.info(\n f\"To update '{module}' run 'nf-core modules {repo_flag}{branch_flag}update {module}'. To force reinstallation use '--force'\"\n )\n return False\n\n if self.sha:\n version = self.sha\n elif self.prompt:\n try:\n version = nf_core.modules.module_utils.prompt_module_version_sha(\n module,\n installed_sha=current_entry[\"git_sha\"] if not current_entry is None else None,\n modules_repo=self.modules_repo,\n )\n except SystemError as e:\n log.error(e)\n return False\n else:\n # Fetch the latest commit for the module\n try:\n git_log = get_module_git_log(module, modules_repo=self.modules_repo, per_page=1, page_nbr=1)\n except UserWarning:\n log.error(f\"Was unable to fetch version of module '{module}'\")\n return False\n version = git_log[0][\"git_sha\"]\n\n if self.force:\n log.info(f\"Removing installed version of '{self.modules_repo.name}/{module}'\")\n self.clear_module_dir(module, module_dir)\n\n log.info(f\"{'Rei' if self.force else 'I'}nstalling '{module}'\")\n log.debug(f\"Installing module '{module}' at modules hash {version} from {self.modules_repo.name}\")\n\n # Download module files\n if not self.download_module_file(module, version, self.modules_repo, install_folder):\n return False\n\n # Print include statement\n module_name = \"_\".join(module.upper().split(\"/\"))\n log.info(f\"Include statement: include {{ {module_name} }} from '.{os.path.join(*install_folder, module)}/main\u2019\")\n\n # Update module.json with newly installed module\n self.update_modules_json(modules_json, self.modules_repo.name, module, version)\n return True\n", "path": "nf_core/modules/install.py"}]} | 1,666 | 159 |
gh_patches_debug_64335 | rasdani/github-patches | git_diff | sopel-irc__sopel-1442 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[wiktionary] Relax case sensitivity a bit?
Looking up anything other than the exact capitalization in Wiktionary's database appears to fail. It might be worthwhile, for user experience, to always try the all-lowercase version of a query if no results are returned for it as entered.
That said, the MediaWiki API would allow querying automatically for several variants (using `action=query&titles=variant1|Variant2|etc.`) and then fetching the first one that exists for display, if making multiple requests is OK in the plugin.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/wiktionary.py`
Content:
```
1 # coding=utf-8
2 """
3 wiktionary.py - Sopel Wiktionary Module
4 Copyright 2009, Sean B. Palmer, inamidst.com
5 Licensed under the Eiffel Forum License 2.
6
7 https://sopel.chat
8 """
9 from __future__ import unicode_literals, absolute_import, print_function, division
10
11 import re
12 import requests
13 from sopel import web
14 from sopel.module import commands, example
15
16 uri = 'https://en.wiktionary.org/w/index.php?title=%s&printable=yes'
17 r_sup = re.compile(r'<sup[^>]+>.+</sup>') # Superscripts that are references only, not ordinal indicators, etc...
18 r_tag = re.compile(r'<[^>]+>')
19 r_ul = re.compile(r'(?ims)<ul>.*?</ul>')
20
21
22 def text(html):
23 text = r_sup.sub('', html) # Remove superscripts that are references from definition
24 text = r_tag.sub('', text).strip()
25 text = text.replace('\n', ' ')
26 text = text.replace('\r', '')
27 text = text.replace('(intransitive', '(intr.')
28 text = text.replace('(transitive', '(trans.')
29 text = web.decode(text)
30 return text
31
32
33 def wikt(word):
34 bytes = requests.get(uri % web.quote(word)).text
35 bytes = r_ul.sub('', bytes)
36
37 mode = None
38 etymology = None
39 definitions = {}
40 for line in bytes.splitlines():
41 if 'id="Etymology"' in line:
42 mode = 'etymology'
43 elif 'id="Noun"' in line:
44 mode = 'noun'
45 elif 'id="Verb"' in line:
46 mode = 'verb'
47 elif 'id="Adjective"' in line:
48 mode = 'adjective'
49 elif 'id="Adverb"' in line:
50 mode = 'adverb'
51 elif 'id="Interjection"' in line:
52 mode = 'interjection'
53 elif 'id="Particle"' in line:
54 mode = 'particle'
55 elif 'id="Preposition"' in line:
56 mode = 'preposition'
57 elif 'id="Prefix"' in line:
58 mode = 'prefix'
59 elif 'id="Suffix"' in line:
60 mode = 'suffix'
61 # 'id="' can occur in definition lines <li> when <sup> tag is used for references;
62 # make sure those are not excluded (see e.g., abecedarian).
63 elif ('id="' in line) and ('<li>' not in line):
64 mode = None
65
66 elif (mode == 'etmyology') and ('<p>' in line):
67 etymology = text(line)
68 elif (mode is not None) and ('<li>' in line):
69 definitions.setdefault(mode, []).append(text(line))
70
71 if '<hr' in line:
72 break
73 return etymology, definitions
74
75
76 parts = ('preposition', 'particle', 'noun', 'verb',
77 'adjective', 'adverb', 'interjection',
78 'prefix', 'suffix')
79
80
81 def format(result, definitions, number=2):
82 for part in parts:
83 if part in definitions:
84 defs = definitions[part][:number]
85 result += u' — {}: '.format(part)
86 n = ['%s. %s' % (i + 1, e.strip(' .')) for i, e in enumerate(defs)]
87 result += ', '.join(n)
88 return result.strip(' .,')
89
90
91 @commands('wt', 'define', 'dict')
92 @example('.wt bailiwick')
93 def wiktionary(bot, trigger):
94 """Look up a word on Wiktionary."""
95 word = trigger.group(2)
96 if word is None:
97 bot.reply('You must tell me what to look up!')
98 return
99
100 _etymology, definitions = wikt(word)
101 if not definitions:
102 bot.say("Couldn't get any definitions for %s." % word)
103 return
104
105 result = format(word, definitions)
106 if len(result) < 150:
107 result = format(word, definitions, 3)
108 if len(result) < 150:
109 result = format(word, definitions, 5)
110
111 if len(result) > 300:
112 result = result[:295] + '[...]'
113 bot.say(result)
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/wiktionary.py b/sopel/modules/wiktionary.py
--- a/sopel/modules/wiktionary.py
+++ b/sopel/modules/wiktionary.py
@@ -99,8 +99,11 @@
_etymology, definitions = wikt(word)
if not definitions:
- bot.say("Couldn't get any definitions for %s." % word)
- return
+ # Cast word to lower to check in case of mismatched user input
+ _etymology, definitions = wikt(word.lower())
+ if not definitions:
+ bot.say("Couldn't get any definitions for %s." % word)
+ return
result = format(word, definitions)
if len(result) < 150:
| {"golden_diff": "diff --git a/sopel/modules/wiktionary.py b/sopel/modules/wiktionary.py\n--- a/sopel/modules/wiktionary.py\n+++ b/sopel/modules/wiktionary.py\n@@ -99,8 +99,11 @@\n \n _etymology, definitions = wikt(word)\n if not definitions:\n- bot.say(\"Couldn't get any definitions for %s.\" % word)\n- return\n+ # Cast word to lower to check in case of mismatched user input\n+ _etymology, definitions = wikt(word.lower())\n+ if not definitions:\n+ bot.say(\"Couldn't get any definitions for %s.\" % word)\n+ return\n \n result = format(word, definitions)\n if len(result) < 150:\n", "issue": "[wiktionary] Relax case sensitivity a bit?\nLooking up anything other than the exact capitalization in Wiktionary's database appears to fail. It might be worthwhile, for user experience, to always try the all-lowercase version of a query if no results are returned for it as entered.\r\n\r\nThat said, the MediaWiki API would allow querying automatically for several variants (using `action=query&titles=variant1|Variant2|etc.`) and then fetching the first one that exists for display, if making multiple requests is OK in the plugin.\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nwiktionary.py - Sopel Wiktionary Module\nCopyright 2009, Sean B. Palmer, inamidst.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\nimport requests\nfrom sopel import web\nfrom sopel.module import commands, example\n\nuri = 'https://en.wiktionary.org/w/index.php?title=%s&printable=yes'\nr_sup = re.compile(r'<sup[^>]+>.+</sup>') # Superscripts that are references only, not ordinal indicators, etc...\nr_tag = re.compile(r'<[^>]+>')\nr_ul = re.compile(r'(?ims)<ul>.*?</ul>')\n\n\ndef text(html):\n text = r_sup.sub('', html) # Remove superscripts that are references from definition\n text = r_tag.sub('', text).strip()\n text = text.replace('\\n', ' ')\n text = text.replace('\\r', '')\n text = text.replace('(intransitive', '(intr.')\n text = text.replace('(transitive', '(trans.')\n text = web.decode(text)\n return text\n\n\ndef wikt(word):\n bytes = requests.get(uri % web.quote(word)).text\n bytes = r_ul.sub('', bytes)\n\n mode = None\n etymology = None\n definitions = {}\n for line in bytes.splitlines():\n if 'id=\"Etymology\"' in line:\n mode = 'etymology'\n elif 'id=\"Noun\"' in line:\n mode = 'noun'\n elif 'id=\"Verb\"' in line:\n mode = 'verb'\n elif 'id=\"Adjective\"' in line:\n mode = 'adjective'\n elif 'id=\"Adverb\"' in line:\n mode = 'adverb'\n elif 'id=\"Interjection\"' in line:\n mode = 'interjection'\n elif 'id=\"Particle\"' in line:\n mode = 'particle'\n elif 'id=\"Preposition\"' in line:\n mode = 'preposition'\n elif 'id=\"Prefix\"' in line:\n mode = 'prefix'\n elif 'id=\"Suffix\"' in line:\n mode = 'suffix'\n # 'id=\"' can occur in definition lines <li> when <sup> tag is used for references;\n # make sure those are not excluded (see e.g., abecedarian).\n elif ('id=\"' in line) and ('<li>' not in line):\n mode = None\n\n elif (mode == 'etmyology') and ('<p>' in line):\n etymology = text(line)\n elif (mode is not None) and ('<li>' in line):\n definitions.setdefault(mode, []).append(text(line))\n\n if '<hr' in line:\n break\n return etymology, definitions\n\n\nparts = ('preposition', 'particle', 'noun', 'verb',\n 'adjective', 'adverb', 'interjection',\n 'prefix', 'suffix')\n\n\ndef format(result, definitions, number=2):\n for part in parts:\n if part in definitions:\n defs = 
definitions[part][:number]\n result += u' \u2014 {}: '.format(part)\n n = ['%s. %s' % (i + 1, e.strip(' .')) for i, e in enumerate(defs)]\n result += ', '.join(n)\n return result.strip(' .,')\n\n\n@commands('wt', 'define', 'dict')\n@example('.wt bailiwick')\ndef wiktionary(bot, trigger):\n \"\"\"Look up a word on Wiktionary.\"\"\"\n word = trigger.group(2)\n if word is None:\n bot.reply('You must tell me what to look up!')\n return\n\n _etymology, definitions = wikt(word)\n if not definitions:\n bot.say(\"Couldn't get any definitions for %s.\" % word)\n return\n\n result = format(word, definitions)\n if len(result) < 150:\n result = format(word, definitions, 3)\n if len(result) < 150:\n result = format(word, definitions, 5)\n\n if len(result) > 300:\n result = result[:295] + '[...]'\n bot.say(result)\n", "path": "sopel/modules/wiktionary.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nwiktionary.py - Sopel Wiktionary Module\nCopyright 2009, Sean B. Palmer, inamidst.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\nimport requests\nfrom sopel import web\nfrom sopel.module import commands, example\n\nuri = 'https://en.wiktionary.org/w/index.php?title=%s&printable=yes'\nr_sup = re.compile(r'<sup[^>]+>.+</sup>') # Superscripts that are references only, not ordinal indicators, etc...\nr_tag = re.compile(r'<[^>]+>')\nr_ul = re.compile(r'(?ims)<ul>.*?</ul>')\n\n\ndef text(html):\n text = r_sup.sub('', html) # Remove superscripts that are references from definition\n text = r_tag.sub('', text).strip()\n text = text.replace('\\n', ' ')\n text = text.replace('\\r', '')\n text = text.replace('(intransitive', '(intr.')\n text = text.replace('(transitive', '(trans.')\n text = web.decode(text)\n return text\n\n\ndef wikt(word):\n bytes = requests.get(uri % web.quote(word)).text\n bytes = r_ul.sub('', bytes)\n\n mode = None\n etymology = None\n definitions = {}\n for line in bytes.splitlines():\n if 'id=\"Etymology\"' in line:\n mode = 'etymology'\n elif 'id=\"Noun\"' in line:\n mode = 'noun'\n elif 'id=\"Verb\"' in line:\n mode = 'verb'\n elif 'id=\"Adjective\"' in line:\n mode = 'adjective'\n elif 'id=\"Adverb\"' in line:\n mode = 'adverb'\n elif 'id=\"Interjection\"' in line:\n mode = 'interjection'\n elif 'id=\"Particle\"' in line:\n mode = 'particle'\n elif 'id=\"Preposition\"' in line:\n mode = 'preposition'\n elif 'id=\"Prefix\"' in line:\n mode = 'prefix'\n elif 'id=\"Suffix\"' in line:\n mode = 'suffix'\n # 'id=\"' can occur in definition lines <li> when <sup> tag is used for references;\n # make sure those are not excluded (see e.g., abecedarian).\n elif ('id=\"' in line) and ('<li>' not in line):\n mode = None\n\n elif (mode == 'etmyology') and ('<p>' in line):\n etymology = text(line)\n elif (mode is not None) and ('<li>' in line):\n definitions.setdefault(mode, []).append(text(line))\n\n if '<hr' in line:\n break\n return etymology, definitions\n\n\nparts = ('preposition', 'particle', 'noun', 'verb',\n 'adjective', 'adverb', 'interjection',\n 'prefix', 'suffix')\n\n\ndef format(result, definitions, number=2):\n for part in parts:\n if part in definitions:\n defs = definitions[part][:number]\n result += u' \u2014 {}: '.format(part)\n n = ['%s. 
%s' % (i + 1, e.strip(' .')) for i, e in enumerate(defs)]\n result += ', '.join(n)\n return result.strip(' .,')\n\n\n@commands('wt', 'define', 'dict')\n@example('.wt bailiwick')\ndef wiktionary(bot, trigger):\n \"\"\"Look up a word on Wiktionary.\"\"\"\n word = trigger.group(2)\n if word is None:\n bot.reply('You must tell me what to look up!')\n return\n\n _etymology, definitions = wikt(word)\n if not definitions:\n # Cast word to lower to check in case of mismatched user input\n _etymology, definitions = wikt(word.lower())\n if not definitions:\n bot.say(\"Couldn't get any definitions for %s.\" % word)\n return\n\n result = format(word, definitions)\n if len(result) < 150:\n result = format(word, definitions, 3)\n if len(result) < 150:\n result = format(word, definitions, 5)\n\n if len(result) > 300:\n result = result[:295] + '[...]'\n bot.say(result)\n", "path": "sopel/modules/wiktionary.py"}]} | 1,553 | 175 |
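The golden diff above retries the lookup with a lowercased query before giving up. Below is a minimal, self-contained sketch of that fallback pattern; `lookup` and `FAKE_ENTRIES` are invented stand-ins for illustration, not Sopel's real `wikt()` helper or data.

```python
# Toy illustration of the lowercase-fallback pattern from the wiktionary fix above.
# `lookup` and FAKE_ENTRIES are invented stand-ins, not Sopel's real wikt() API.

FAKE_ENTRIES = {"bailiwick": ["one's sphere of operations or area of interest"]}


def lookup(word):
    """Return a list of definitions, or an empty list when the entry is missing."""
    return FAKE_ENTRIES.get(word, [])


def define(word):
    definitions = lookup(word)
    if not definitions:
        # Cast word to lower case to tolerate mismatched user input, as in the patch.
        definitions = lookup(word.lower())
    if not definitions:
        return "Couldn't get any definitions for %s." % word
    return "%s - %s" % (word, "; ".join(definitions))


print(define("Bailiwick"))       # falls back to "bailiwick" and succeeds
print(define("missingword"))     # still reports that nothing was found
```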
gh_patches_debug_41108 | rasdani/github-patches | git_diff | beeware__toga-1070 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ios cannot add widgets to main_window.content
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Try to run this app in ios:
```
import toga
class MyApp(toga.App):
def startup(self):
"""
Construct and show the Toga application.
Usually, you would add your application to a main content box.
We then create a main window (with a name matching the app), and
show the main window.
"""
self.state = GameState()
self.main_window = toga.MainWindow(title=self.formal_name)
box = toga.Box()
label = toga.Label("Test1")
box.add(label)
self.main_window.content = box
label2 = toga.Label("Test2")
box.add(label2)
self.main_window.show()
MyApp().main_loop()
```
**Expected behavior**
The application should open and show the labels "Test1" and "Test2". Instead, you get a black screen. In the Xcode debugger there is a log message on the "box.add(label2)" line about the label (test2) not having the attribute "viewport".
If you comment out box.add(label2), you correctly get a window that has "Test1".
On targets macos and android, the above test will correctly show "Test1 Test2"
**Environment:**
- Operating System: macos
- Python version: 3.8
- Software versions:
- Briefcase: 0.3.3
- Toga: 0.3.0.dev23
- IOS: iphone 11
This may be related to #225, as you will get a similar error if you try to rebind main_window.content to a different widget.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/iOS/toga_iOS/widgets/base.py`
Content:
```
1 from toga_iOS.constraints import Constraints
2
3
4 class Widget:
5 def __init__(self, interface):
6 self.interface = interface
7 self.interface._impl = self
8 self._container = None
9 self.constraints = None
10 self.native = None
11 self.create()
12 self.interface.style.reapply()
13
14 def set_app(self, app):
15 pass
16
17 def set_window(self, window):
18 pass
19
20 @property
21 def container(self):
22 return self._container
23
24 @container.setter
25 def container(self, container):
26 self._container = container
27 if self.constraints:
28 self._container.native.addSubview(self.native)
29 self.constraints.container = container
30
31 for child in self.interface.children:
32 child._impl.container = container
33 self.rehint()
34
35 def set_enabled(self, value):
36 self.native.enabled = self.interface.enabled
37
38 def focus(self):
39 self.interface.factory.not_implemented("Widget.focus()")
40
41 # APPLICATOR
42
43 def set_bounds(self, x, y, width, height):
44 if self.container:
45 viewport = self.container.viewport
46 else:
47 viewport = self.viewport
48
49 self.constraints.update(
50 x, y + viewport.statusbar_height,
51 width, height
52 )
53
54 def set_alignment(self, alignment):
55 pass
56
57 def set_hidden(self, hidden):
58 if self._container:
59 for view in self._container._impl.subviews:
60 if view._impl:
61 view.setHidden(hidden)
62
63 def set_font(self, font):
64 # By default, font can't be changed
65 pass
66
67 def set_color(self, color):
68 # By default, color can't be changed
69 pass
70
71 def set_background_color(self, color):
72 # By default, background color can't be changed
73 pass
74
75 # INTERFACE
76
77 def add_child(self, child):
78 if self.container:
79 child.viewport = self.root.viewport
80 child.container = self.container
81
82 def add_constraints(self):
83 self.native.translatesAutoresizingMaskIntoConstraints = False
84 self.constraints = Constraints(self)
85
86 def rehint(self):
87 pass
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/iOS/toga_iOS/widgets/base.py b/src/iOS/toga_iOS/widgets/base.py
--- a/src/iOS/toga_iOS/widgets/base.py
+++ b/src/iOS/toga_iOS/widgets/base.py
@@ -6,10 +6,12 @@
self.interface = interface
self.interface._impl = self
self._container = None
+ self._viewport = None
self.constraints = None
self.native = None
self.create()
self.interface.style.reapply()
+ self.set_enabled(self.interface.enabled)
def set_app(self, app):
pass
@@ -23,15 +25,35 @@
@container.setter
def container(self, container):
- self._container = container
- if self.constraints:
+ if self.container:
+ if container:
+ raise RuntimeError('Already have a container')
+ else:
+ # existing container should be removed
+ self.constraints = None
+ self._container = None
+ self.native.removeFromSuperview()
+ elif container:
+ # setting container
+ self._container = container
self._container.native.addSubview(self.native)
+ if not self.constraints:
+ self.add_constraints()
self.constraints.container = container
for child in self.interface.children:
child._impl.container = container
+
self.rehint()
+ @property
+ def viewport(self):
+ return self._viewport
+
+ @viewport.setter
+ def viewport(self, viewport):
+ self._viewport = viewport
+
def set_enabled(self, value):
self.native.enabled = self.interface.enabled
@@ -41,13 +63,13 @@
# APPLICATOR
def set_bounds(self, x, y, width, height):
+ offset_y = 0
if self.container:
- viewport = self.container.viewport
- else:
- viewport = self.viewport
-
+ offset_y = self.container.viewport.statusbar_height
+ elif self.viewport:
+ offset_y = self.viewport.statusbar_height
self.constraints.update(
- x, y + viewport.statusbar_height,
+ x, y + offset_y,
width, height
)
@@ -55,8 +77,8 @@
pass
def set_hidden(self, hidden):
- if self._container:
- for view in self._container._impl.subviews:
+ if self.container:
+ for view in self.container._impl.subviews:
if view._impl:
view.setHidden(hidden)
@@ -75,10 +97,16 @@
# INTERFACE
def add_child(self, child):
- if self.container:
- child.viewport = self.root.viewport
+
+ if self.viewport:
+ # we are the the top level UIView
+ child.container = self
+ else:
child.container = self.container
+ def remove_child(self, child):
+ child.container = None
+
def add_constraints(self):
self.native.translatesAutoresizingMaskIntoConstraints = False
self.constraints = Constraints(self)
| {"golden_diff": "diff --git a/src/iOS/toga_iOS/widgets/base.py b/src/iOS/toga_iOS/widgets/base.py\n--- a/src/iOS/toga_iOS/widgets/base.py\n+++ b/src/iOS/toga_iOS/widgets/base.py\n@@ -6,10 +6,12 @@\n self.interface = interface\n self.interface._impl = self\n self._container = None\n+ self._viewport = None\n self.constraints = None\n self.native = None\n self.create()\n self.interface.style.reapply()\n+ self.set_enabled(self.interface.enabled)\n \n def set_app(self, app):\n pass\n@@ -23,15 +25,35 @@\n \n @container.setter\n def container(self, container):\n- self._container = container\n- if self.constraints:\n+ if self.container:\n+ if container:\n+ raise RuntimeError('Already have a container')\n+ else:\n+ # existing container should be removed\n+ self.constraints = None\n+ self._container = None\n+ self.native.removeFromSuperview()\n+ elif container:\n+ # setting container\n+ self._container = container\n self._container.native.addSubview(self.native)\n+ if not self.constraints:\n+ self.add_constraints()\n self.constraints.container = container\n \n for child in self.interface.children:\n child._impl.container = container\n+\n self.rehint()\n \n+ @property\n+ def viewport(self):\n+ return self._viewport\n+\n+ @viewport.setter\n+ def viewport(self, viewport):\n+ self._viewport = viewport\n+\n def set_enabled(self, value):\n self.native.enabled = self.interface.enabled\n \n@@ -41,13 +63,13 @@\n # APPLICATOR\n \n def set_bounds(self, x, y, width, height):\n+ offset_y = 0\n if self.container:\n- viewport = self.container.viewport\n- else:\n- viewport = self.viewport\n-\n+ offset_y = self.container.viewport.statusbar_height\n+ elif self.viewport:\n+ offset_y = self.viewport.statusbar_height\n self.constraints.update(\n- x, y + viewport.statusbar_height,\n+ x, y + offset_y,\n width, height\n )\n \n@@ -55,8 +77,8 @@\n pass\n \n def set_hidden(self, hidden):\n- if self._container:\n- for view in self._container._impl.subviews:\n+ if self.container:\n+ for view in self.container._impl.subviews:\n if view._impl:\n view.setHidden(hidden)\n \n@@ -75,10 +97,16 @@\n # INTERFACE\n \n def add_child(self, child):\n- if self.container:\n- child.viewport = self.root.viewport\n+\n+ if self.viewport:\n+ # we are the the top level UIView\n+ child.container = self\n+ else:\n child.container = self.container\n \n+ def remove_child(self, child):\n+ child.container = None\n+\n def add_constraints(self):\n self.native.translatesAutoresizingMaskIntoConstraints = False\n self.constraints = Constraints(self)\n", "issue": "ios cannot add widgets to main_window.content \n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**To Reproduce**\r\nTry to run this app in ios:\r\n```\r\nimport toga\r\nclass MyApp(toga.App):\r\n\r\n def startup(self):\r\n \"\"\"\r\n Construct and show the Toga application.\r\n\r\n Usually, you would add your application to a main content box.\r\n We then create a main window (with a name matching the app), and\r\n show the main window.\r\n \"\"\"\r\n self.state = GameState()\r\n self.main_window = toga.MainWindow(title=self.formal_name)\r\n box = toga.Box()\r\n label = toga.Label(\"Test1\")\r\n box.add(label)\r\n self.main_window.content = box\r\n label2 = toga.Label(\"Test2\")\r\n box.add(label2)\r\n self.main_window.show()\r\nMyApp().main_loop()\r\n```\r\n\r\n**Expected behavior**\r\nThe application should open and shows the labels \"Test1\" and \"Test2\". Instead, you get a black screen. 
In the xcode debugger there is a log message on the \"box.add(label2)\" line about the label (test2) not having the attribute \"viewport\".\r\n\r\nIf you comment out box.add(label2), you correctly get a window that has \"Test1\".\r\n\r\nOn targets macos and android, the above test will correctly show \"Test1 Test2\"\r\n\r\n**Environment:**\r\n - Operating System: macos\r\n - Python version: 3.8\r\n - Software versions:\r\n - Briefcase: 0.3.3\r\n - Toga: 0.3.0.dev23\r\n - IOS: iphone 11\r\n\r\nThis may be related to #225, as you will get a similar error if you try to rebind main_window.content to a different widget.\nios cannot add widgets to main_window.content \n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**To Reproduce**\r\nTry to run this app in ios:\r\n```\r\nimport toga\r\nclass MyApp(toga.App):\r\n\r\n def startup(self):\r\n \"\"\"\r\n Construct and show the Toga application.\r\n\r\n Usually, you would add your application to a main content box.\r\n We then create a main window (with a name matching the app), and\r\n show the main window.\r\n \"\"\"\r\n self.state = GameState()\r\n self.main_window = toga.MainWindow(title=self.formal_name)\r\n box = toga.Box()\r\n label = toga.Label(\"Test1\")\r\n box.add(label)\r\n self.main_window.content = box\r\n label2 = toga.Label(\"Test2\")\r\n box.add(label2)\r\n self.main_window.show()\r\nMyApp().main_loop()\r\n```\r\n\r\n**Expected behavior**\r\nThe application should open and shows the labels \"Test1\" and \"Test2\". Instead, you get a black screen. In the xcode debugger there is a log message on the \"box.add(label2)\" line about the label (test2) not having the attribute \"viewport\".\r\n\r\nIf you comment out box.add(label2), you correctly get a window that has \"Test1\".\r\n\r\nOn targets macos and android, the above test will correctly show \"Test1 Test2\"\r\n\r\n**Environment:**\r\n - Operating System: macos\r\n - Python version: 3.8\r\n - Software versions:\r\n - Briefcase: 0.3.3\r\n - Toga: 0.3.0.dev23\r\n - IOS: iphone 11\r\n\r\nThis may be related to #225, as you will get a similar error if you try to rebind main_window.content to a different widget.\n", "before_files": [{"content": "from toga_iOS.constraints import Constraints\n\n\nclass Widget:\n def __init__(self, interface):\n self.interface = interface\n self.interface._impl = self\n self._container = None\n self.constraints = None\n self.native = None\n self.create()\n self.interface.style.reapply()\n\n def set_app(self, app):\n pass\n\n def set_window(self, window):\n pass\n\n @property\n def container(self):\n return self._container\n\n @container.setter\n def container(self, container):\n self._container = container\n if self.constraints:\n self._container.native.addSubview(self.native)\n self.constraints.container = container\n\n for child in self.interface.children:\n child._impl.container = container\n self.rehint()\n\n def set_enabled(self, value):\n self.native.enabled = self.interface.enabled\n\n def focus(self):\n self.interface.factory.not_implemented(\"Widget.focus()\")\n\n # APPLICATOR\n\n def set_bounds(self, x, y, width, height):\n if self.container:\n viewport = self.container.viewport\n else:\n viewport = self.viewport\n\n self.constraints.update(\n x, y + viewport.statusbar_height,\n width, height\n )\n\n def set_alignment(self, alignment):\n pass\n\n def set_hidden(self, hidden):\n if self._container:\n for view in self._container._impl.subviews:\n if view._impl:\n view.setHidden(hidden)\n\n def set_font(self, 
font):\n # By default, font can't be changed\n pass\n\n def set_color(self, color):\n # By default, color can't be changed\n pass\n\n def set_background_color(self, color):\n # By default, background color can't be changed\n pass\n\n # INTERFACE\n\n def add_child(self, child):\n if self.container:\n child.viewport = self.root.viewport\n child.container = self.container\n\n def add_constraints(self):\n self.native.translatesAutoresizingMaskIntoConstraints = False\n self.constraints = Constraints(self)\n\n def rehint(self):\n pass\n", "path": "src/iOS/toga_iOS/widgets/base.py"}], "after_files": [{"content": "from toga_iOS.constraints import Constraints\n\n\nclass Widget:\n def __init__(self, interface):\n self.interface = interface\n self.interface._impl = self\n self._container = None\n self._viewport = None\n self.constraints = None\n self.native = None\n self.create()\n self.interface.style.reapply()\n self.set_enabled(self.interface.enabled)\n\n def set_app(self, app):\n pass\n\n def set_window(self, window):\n pass\n\n @property\n def container(self):\n return self._container\n\n @container.setter\n def container(self, container):\n if self.container:\n if container:\n raise RuntimeError('Already have a container')\n else:\n # existing container should be removed\n self.constraints = None\n self._container = None\n self.native.removeFromSuperview()\n elif container:\n # setting container\n self._container = container\n self._container.native.addSubview(self.native)\n if not self.constraints:\n self.add_constraints()\n self.constraints.container = container\n\n for child in self.interface.children:\n child._impl.container = container\n\n self.rehint()\n\n @property\n def viewport(self):\n return self._viewport\n\n @viewport.setter\n def viewport(self, viewport):\n self._viewport = viewport\n\n def set_enabled(self, value):\n self.native.enabled = self.interface.enabled\n\n def focus(self):\n self.interface.factory.not_implemented(\"Widget.focus()\")\n\n # APPLICATOR\n\n def set_bounds(self, x, y, width, height):\n offset_y = 0\n if self.container:\n offset_y = self.container.viewport.statusbar_height\n elif self.viewport:\n offset_y = self.viewport.statusbar_height\n self.constraints.update(\n x, y + offset_y,\n width, height\n )\n\n def set_alignment(self, alignment):\n pass\n\n def set_hidden(self, hidden):\n if self.container:\n for view in self.container._impl.subviews:\n if view._impl:\n view.setHidden(hidden)\n\n def set_font(self, font):\n # By default, font can't be changed\n pass\n\n def set_color(self, color):\n # By default, color can't be changed\n pass\n\n def set_background_color(self, color):\n # By default, background color can't be changed\n pass\n\n # INTERFACE\n\n def add_child(self, child):\n\n if self.viewport:\n # we are the the top level UIView\n child.container = self\n else:\n child.container = self.container\n\n def remove_child(self, child):\n child.container = None\n\n def add_constraints(self):\n self.native.translatesAutoresizingMaskIntoConstraints = False\n self.constraints = Constraints(self)\n\n def rehint(self):\n pass\n", "path": "src/iOS/toga_iOS/widgets/base.py"}]} | 1,651 | 681 |
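The patch above reworks how an iOS widget attaches to and detaches from its container, so that children added after `main_window.content` is set still receive a container. The sketch below mimics that hand-off with plain Python objects; `MockNative` and `MockWidget` are invented stand-ins, not the real Toga or UIKit classes, and the logic is simplified relative to the actual patch.

```python
# Plain-Python mock of the container hand-off behaviour introduced by the patch above.
# MockNative and MockWidget are invented for illustration; they are not Toga's real classes.

class MockNative:
    """Stands in for a UIView: tracks only its superview."""
    def __init__(self):
        self.superview = None

    def addSubview(self, other):
        other.superview = self

    def removeFromSuperview(self):
        self.superview = None


class MockWidget:
    def __init__(self):
        self.native = MockNative()
        self._container = None
        self.children = []

    @property
    def container(self):
        return self._container

    @container.setter
    def container(self, container):
        if self._container is not None and container is not None:
            raise RuntimeError("Already have a container")
        if container is None:
            # Detach branch, mirroring removeFromSuperview() in the patch.
            self._container = None
            self.native.removeFromSuperview()
        else:
            self._container = container
            container.native.addSubview(self.native)
        # Propagate to children so late additions are picked up too.
        for child in self.children:
            child.container = container

    def add_child(self, child):
        self.children.append(child)
        if self.container is not None:
            child.container = self.container


root = MockWidget()
box = MockWidget()
box.container = root            # box becomes the window content
label2 = MockWidget()
box.add_child(label2)           # adding a child after the content is set now works
print(label2.container is root) # True
```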
gh_patches_debug_3372 | rasdani/github-patches | git_diff | pytorch__vision-6638 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SBU download is failing
This has been happening for roughly a month (first detection in #6274), but was masked by #6268.
http://www.cs.virginia.edu/~vicente/sbucaptions/
The server is still up, but the path is no longer valid. I'll reach out to the author to see if this can be fixed. In the meantime we should probably disable the test.
cc @pmeier @YosuaMichael
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/datasets/sbu.py`
Content:
```
1 import os
2 from typing import Any, Callable, Optional, Tuple
3
4 from PIL import Image
5
6 from .utils import check_integrity, download_url
7 from .vision import VisionDataset
8
9
10 class SBU(VisionDataset):
11 """`SBU Captioned Photo <http://www.cs.virginia.edu/~vicente/sbucaptions/>`_ Dataset.
12
13 Args:
14 root (string): Root directory of dataset where tarball
15 ``SBUCaptionedPhotoDataset.tar.gz`` exists.
16 transform (callable, optional): A function/transform that takes in a PIL image
17 and returns a transformed version. E.g, ``transforms.RandomCrop``
18 target_transform (callable, optional): A function/transform that takes in the
19 target and transforms it.
20 download (bool, optional): If True, downloads the dataset from the internet and
21 puts it in root directory. If dataset is already downloaded, it is not
22 downloaded again.
23 """
24
25 url = "http://www.cs.virginia.edu/~vicente/sbucaptions/SBUCaptionedPhotoDataset.tar.gz"
26 filename = "SBUCaptionedPhotoDataset.tar.gz"
27 md5_checksum = "9aec147b3488753cf758b4d493422285"
28
29 def __init__(
30 self,
31 root: str,
32 transform: Optional[Callable] = None,
33 target_transform: Optional[Callable] = None,
34 download: bool = True,
35 ) -> None:
36 super().__init__(root, transform=transform, target_transform=target_transform)
37
38 if download:
39 self.download()
40
41 if not self._check_integrity():
42 raise RuntimeError("Dataset not found or corrupted. You can use download=True to download it")
43
44 # Read the caption for each photo
45 self.photos = []
46 self.captions = []
47
48 file1 = os.path.join(self.root, "dataset", "SBU_captioned_photo_dataset_urls.txt")
49 file2 = os.path.join(self.root, "dataset", "SBU_captioned_photo_dataset_captions.txt")
50
51 for line1, line2 in zip(open(file1), open(file2)):
52 url = line1.rstrip()
53 photo = os.path.basename(url)
54 filename = os.path.join(self.root, "dataset", photo)
55 if os.path.exists(filename):
56 caption = line2.rstrip()
57 self.photos.append(photo)
58 self.captions.append(caption)
59
60 def __getitem__(self, index: int) -> Tuple[Any, Any]:
61 """
62 Args:
63 index (int): Index
64
65 Returns:
66 tuple: (image, target) where target is a caption for the photo.
67 """
68 filename = os.path.join(self.root, "dataset", self.photos[index])
69 img = Image.open(filename).convert("RGB")
70 if self.transform is not None:
71 img = self.transform(img)
72
73 target = self.captions[index]
74 if self.target_transform is not None:
75 target = self.target_transform(target)
76
77 return img, target
78
79 def __len__(self) -> int:
80 """The number of photos in the dataset."""
81 return len(self.photos)
82
83 def _check_integrity(self) -> bool:
84 """Check the md5 checksum of the downloaded tarball."""
85 root = self.root
86 fpath = os.path.join(root, self.filename)
87 if not check_integrity(fpath, self.md5_checksum):
88 return False
89 return True
90
91 def download(self) -> None:
92 """Download and extract the tarball, and download each individual photo."""
93 import tarfile
94
95 if self._check_integrity():
96 print("Files already downloaded and verified")
97 return
98
99 download_url(self.url, self.root, self.filename, self.md5_checksum)
100
101 # Extract file
102 with tarfile.open(os.path.join(self.root, self.filename), "r:gz") as tar:
103 tar.extractall(path=self.root)
104
105 # Download individual photos
106 with open(os.path.join(self.root, "dataset", "SBU_captioned_photo_dataset_urls.txt")) as fh:
107 for line in fh:
108 url = line.rstrip()
109 try:
110 download_url(url, os.path.join(self.root, "dataset"))
111 except OSError:
112 # The images point to public images on Flickr.
113 # Note: Images might be removed by users at anytime.
114 pass
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchvision/datasets/sbu.py b/torchvision/datasets/sbu.py
--- a/torchvision/datasets/sbu.py
+++ b/torchvision/datasets/sbu.py
@@ -22,7 +22,7 @@
downloaded again.
"""
- url = "http://www.cs.virginia.edu/~vicente/sbucaptions/SBUCaptionedPhotoDataset.tar.gz"
+ url = "https://www.cs.rice.edu/~vo9/sbucaptions/SBUCaptionedPhotoDataset.tar.gz"
filename = "SBUCaptionedPhotoDataset.tar.gz"
md5_checksum = "9aec147b3488753cf758b4d493422285"
| {"golden_diff": "diff --git a/torchvision/datasets/sbu.py b/torchvision/datasets/sbu.py\n--- a/torchvision/datasets/sbu.py\n+++ b/torchvision/datasets/sbu.py\n@@ -22,7 +22,7 @@\n downloaded again.\n \"\"\"\n \n- url = \"http://www.cs.virginia.edu/~vicente/sbucaptions/SBUCaptionedPhotoDataset.tar.gz\"\n+ url = \"https://www.cs.rice.edu/~vo9/sbucaptions/SBUCaptionedPhotoDataset.tar.gz\"\n filename = \"SBUCaptionedPhotoDataset.tar.gz\"\n md5_checksum = \"9aec147b3488753cf758b4d493422285\"\n", "issue": "SBU download is failing\nThis is happening since roughly a month (first detection in #6274), but was masked by #6268. \r\n\r\nhttp://www.cs.virginia.edu/~vicente/sbucaptions/\r\n\r\nThe server is still up, but the path is no longer valid. I'll reach out to the author to see if this can be fixed. In the meantime we should probably disable the test.\r\n\r\ncc @pmeier @YosuaMichael\n", "before_files": [{"content": "import os\nfrom typing import Any, Callable, Optional, Tuple\n\nfrom PIL import Image\n\nfrom .utils import check_integrity, download_url\nfrom .vision import VisionDataset\n\n\nclass SBU(VisionDataset):\n \"\"\"`SBU Captioned Photo <http://www.cs.virginia.edu/~vicente/sbucaptions/>`_ Dataset.\n\n Args:\n root (string): Root directory of dataset where tarball\n ``SBUCaptionedPhotoDataset.tar.gz`` exists.\n transform (callable, optional): A function/transform that takes in a PIL image\n and returns a transformed version. E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n download (bool, optional): If True, downloads the dataset from the internet and\n puts it in root directory. If dataset is already downloaded, it is not\n downloaded again.\n \"\"\"\n\n url = \"http://www.cs.virginia.edu/~vicente/sbucaptions/SBUCaptionedPhotoDataset.tar.gz\"\n filename = \"SBUCaptionedPhotoDataset.tar.gz\"\n md5_checksum = \"9aec147b3488753cf758b4d493422285\"\n\n def __init__(\n self,\n root: str,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n download: bool = True,\n ) -> None:\n super().__init__(root, transform=transform, target_transform=target_transform)\n\n if download:\n self.download()\n\n if not self._check_integrity():\n raise RuntimeError(\"Dataset not found or corrupted. 
You can use download=True to download it\")\n\n # Read the caption for each photo\n self.photos = []\n self.captions = []\n\n file1 = os.path.join(self.root, \"dataset\", \"SBU_captioned_photo_dataset_urls.txt\")\n file2 = os.path.join(self.root, \"dataset\", \"SBU_captioned_photo_dataset_captions.txt\")\n\n for line1, line2 in zip(open(file1), open(file2)):\n url = line1.rstrip()\n photo = os.path.basename(url)\n filename = os.path.join(self.root, \"dataset\", photo)\n if os.path.exists(filename):\n caption = line2.rstrip()\n self.photos.append(photo)\n self.captions.append(caption)\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (image, target) where target is a caption for the photo.\n \"\"\"\n filename = os.path.join(self.root, \"dataset\", self.photos[index])\n img = Image.open(filename).convert(\"RGB\")\n if self.transform is not None:\n img = self.transform(img)\n\n target = self.captions[index]\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return img, target\n\n def __len__(self) -> int:\n \"\"\"The number of photos in the dataset.\"\"\"\n return len(self.photos)\n\n def _check_integrity(self) -> bool:\n \"\"\"Check the md5 checksum of the downloaded tarball.\"\"\"\n root = self.root\n fpath = os.path.join(root, self.filename)\n if not check_integrity(fpath, self.md5_checksum):\n return False\n return True\n\n def download(self) -> None:\n \"\"\"Download and extract the tarball, and download each individual photo.\"\"\"\n import tarfile\n\n if self._check_integrity():\n print(\"Files already downloaded and verified\")\n return\n\n download_url(self.url, self.root, self.filename, self.md5_checksum)\n\n # Extract file\n with tarfile.open(os.path.join(self.root, self.filename), \"r:gz\") as tar:\n tar.extractall(path=self.root)\n\n # Download individual photos\n with open(os.path.join(self.root, \"dataset\", \"SBU_captioned_photo_dataset_urls.txt\")) as fh:\n for line in fh:\n url = line.rstrip()\n try:\n download_url(url, os.path.join(self.root, \"dataset\"))\n except OSError:\n # The images point to public images on Flickr.\n # Note: Images might be removed by users at anytime.\n pass\n", "path": "torchvision/datasets/sbu.py"}], "after_files": [{"content": "import os\nfrom typing import Any, Callable, Optional, Tuple\n\nfrom PIL import Image\n\nfrom .utils import check_integrity, download_url\nfrom .vision import VisionDataset\n\n\nclass SBU(VisionDataset):\n \"\"\"`SBU Captioned Photo <http://www.cs.virginia.edu/~vicente/sbucaptions/>`_ Dataset.\n\n Args:\n root (string): Root directory of dataset where tarball\n ``SBUCaptionedPhotoDataset.tar.gz`` exists.\n transform (callable, optional): A function/transform that takes in a PIL image\n and returns a transformed version. E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n download (bool, optional): If True, downloads the dataset from the internet and\n puts it in root directory. 
If dataset is already downloaded, it is not\n downloaded again.\n \"\"\"\n\n url = \"https://www.cs.rice.edu/~vo9/sbucaptions/SBUCaptionedPhotoDataset.tar.gz\"\n filename = \"SBUCaptionedPhotoDataset.tar.gz\"\n md5_checksum = \"9aec147b3488753cf758b4d493422285\"\n\n def __init__(\n self,\n root: str,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n download: bool = True,\n ) -> None:\n super().__init__(root, transform=transform, target_transform=target_transform)\n\n if download:\n self.download()\n\n if not self._check_integrity():\n raise RuntimeError(\"Dataset not found or corrupted. You can use download=True to download it\")\n\n # Read the caption for each photo\n self.photos = []\n self.captions = []\n\n file1 = os.path.join(self.root, \"dataset\", \"SBU_captioned_photo_dataset_urls.txt\")\n file2 = os.path.join(self.root, \"dataset\", \"SBU_captioned_photo_dataset_captions.txt\")\n\n for line1, line2 in zip(open(file1), open(file2)):\n url = line1.rstrip()\n photo = os.path.basename(url)\n filename = os.path.join(self.root, \"dataset\", photo)\n if os.path.exists(filename):\n caption = line2.rstrip()\n self.photos.append(photo)\n self.captions.append(caption)\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (image, target) where target is a caption for the photo.\n \"\"\"\n filename = os.path.join(self.root, \"dataset\", self.photos[index])\n img = Image.open(filename).convert(\"RGB\")\n if self.transform is not None:\n img = self.transform(img)\n\n target = self.captions[index]\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return img, target\n\n def __len__(self) -> int:\n \"\"\"The number of photos in the dataset.\"\"\"\n return len(self.photos)\n\n def _check_integrity(self) -> bool:\n \"\"\"Check the md5 checksum of the downloaded tarball.\"\"\"\n root = self.root\n fpath = os.path.join(root, self.filename)\n if not check_integrity(fpath, self.md5_checksum):\n return False\n return True\n\n def download(self) -> None:\n \"\"\"Download and extract the tarball, and download each individual photo.\"\"\"\n import tarfile\n\n if self._check_integrity():\n print(\"Files already downloaded and verified\")\n return\n\n download_url(self.url, self.root, self.filename, self.md5_checksum)\n\n # Extract file\n with tarfile.open(os.path.join(self.root, self.filename), \"r:gz\") as tar:\n tar.extractall(path=self.root)\n\n # Download individual photos\n with open(os.path.join(self.root, \"dataset\", \"SBU_captioned_photo_dataset_urls.txt\")) as fh:\n for line in fh:\n url = line.rstrip()\n try:\n download_url(url, os.path.join(self.root, \"dataset\"))\n except OSError:\n # The images point to public images on Flickr.\n # Note: Images might be removed by users at anytime.\n pass\n", "path": "torchvision/datasets/sbu.py"}]} | 1,552 | 174 |
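The fix above only swaps the dead virginia.edu path for the rice.edu mirror. One quick way to sanity-check such a change is to probe the new URL with a lightweight HEAD request before triggering the multi-gigabyte download; the snippet below is an illustrative standalone check and not part of torchvision (`url_is_reachable` is an invented helper).

```python
# Standalone reachability probe for the updated SBU tarball URL from the diff above.
# This is an illustrative check only; torchvision itself does not ship this helper.

import urllib.request

SBU_URL = "https://www.cs.rice.edu/~vo9/sbucaptions/SBUCaptionedPhotoDataset.tar.gz"


def url_is_reachable(url, timeout=10):
    """Issue a HEAD request and report whether the server answers with a non-error status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except OSError:
        # Covers URLError, HTTPError and socket timeouts.
        return False


if __name__ == "__main__":
    print(SBU_URL, "reachable:", url_is_reachable(SBU_URL))
```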
gh_patches_debug_30593 | rasdani/github-patches | git_diff | Qiskit__qiskit-2733 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow for np.integer as pulse duration
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected enhancement?
`duration` of pulse command should be python integer, but this limitation might degrade usability. It is better to allow for `np.integers` - e.g. when we prepare series of pulse commands with many durations with `np.arange`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/pulse/commands/pulse_decorators.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 # pylint: disable=missing-return-doc, missing-return-type-doc
16
17 """
18 Pulse decorators.
19 """
20
21 import functools
22 from typing import Callable
23
24 import numpy as np
25
26 from qiskit.pulse.exceptions import PulseError
27
28 from .sample_pulse import SamplePulse
29
30
31 def functional_pulse(func: Callable):
32 """A decorator for generating SamplePulse from python callable.
33 Args:
34 func: A function describing pulse envelope.
35 Raises:
36 PulseError: when invalid function is specified.
37 """
38 @functools.wraps(func)
39 def to_pulse(duration, *args, name=None, **kwargs):
40 """Return SamplePulse."""
41 if isinstance(duration, int) and duration > 0:
42 samples = func(duration, *args, **kwargs)
43 samples = np.asarray(samples, dtype=np.complex128)
44 return SamplePulse(samples=samples, name=name)
45 raise PulseError('The first argument must be an integer value representing duration.')
46
47 return to_pulse
48
```
Path: `qiskit/pulse/commands/command.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Base command.
17 """
18 import re
19
20 from abc import ABCMeta, abstractmethod
21 from typing import List, Optional
22
23 from qiskit.pulse.exceptions import PulseError
24 from qiskit.pulse.channels import Channel
25 from qiskit.pulse.timeslots import TimeslotCollection
26
27 from .instruction import Instruction
28
29
30 class MetaCount(ABCMeta):
31 """Meta class to count class instances."""
32 def __new__(mcs, name, bases, namespace):
33 new_cls = super(MetaCount, mcs).__new__(mcs, name, bases, namespace)
34 new_cls.instances_counter = 0
35 return new_cls
36
37
38 class Command(metaclass=MetaCount):
39 """Abstract command class."""
40
41 # Counter for the number of instances in this class
42 prefix = 'c'
43
44 @abstractmethod
45 def __init__(self, duration: int = None):
46 """Create a new command.
47
48 Args:
49 duration: Duration of this command.
50 Raises:
51 PulseError: when duration is not number of points
52 """
53 if isinstance(duration, int):
54 self._duration = duration
55 else:
56 raise PulseError('Pulse duration should be integer.')
57
58 self._name = Command.create_name()
59
60 @classmethod
61 def create_name(cls, name: str = None) -> str:
62 """Autogenerate names for pulse commands."""
63 if name is None:
64 try:
65 name = '%s%i' % (cls.prefix, cls.instances_counter) # pylint: disable=E1101
66 except TypeError:
67 raise PulseError("prefix and counter must be non-None when name is None.")
68 else:
69 try:
70 name = str(name)
71 except Exception:
72 raise PulseError("The pulse command name should be castable to a string "
73 "(or None for autogenerate a name).")
74 name_format = re.compile('[a-zA-Z][a-zA-Z0-9_]*')
75 if name_format.match(name) is None:
76 raise PulseError("%s is an invalid OpenPulse command name." % name)
77
78 cls.instances_counter += 1 # pylint: disable=E1101
79
80 return name
81
82 @property
83 def duration(self) -> int:
84 """Duration of this command."""
85 return self._duration
86
87 @property
88 def name(self) -> str:
89 """Name of this command."""
90 return self._name
91
92 @abstractmethod
93 def to_instruction(self, command, *channels: List[Channel],
94 timeslots: Optional[TimeslotCollection] = None,
95 name: Optional[str] = None) -> Instruction:
96 """Create an instruction from command."""
97 pass
98
99 def __call__(self, *args, **kwargs):
100 """Creates an Instruction obtained from call to `to_instruction` wrapped in a Schedule."""
101 return self.to_instruction(*args, **kwargs)
102
103 def __eq__(self, other: 'Command'):
104 """Two Commands are the same if they are of the same type
105 and have the same duration and name.
106
107 Args:
108 other: other Command
109
110 Returns:
111 bool: are self and other equal
112 """
113 if type(self) is type(other) and \
114 self._duration == other._duration and \
115 self._name == other._name:
116 return True
117 return False
118
119 def __hash__(self):
120 return hash((type(self), self._duration, self._name))
121
122 def __repr__(self):
123 return '%s(name=%s, duration=%d)' % (self.__class__.__name__,
124 self._name, self._duration)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/pulse/commands/command.py b/qiskit/pulse/commands/command.py
--- a/qiskit/pulse/commands/command.py
+++ b/qiskit/pulse/commands/command.py
@@ -18,7 +18,8 @@
import re
from abc import ABCMeta, abstractmethod
-from typing import List, Optional
+from typing import List, Optional, Union
+import numpy as np
from qiskit.pulse.exceptions import PulseError
from qiskit.pulse.channels import Channel
@@ -42,7 +43,7 @@
prefix = 'c'
@abstractmethod
- def __init__(self, duration: int = None):
+ def __init__(self, duration: Union[int, np.integer] = None):
"""Create a new command.
Args:
@@ -50,8 +51,8 @@
Raises:
PulseError: when duration is not number of points
"""
- if isinstance(duration, int):
- self._duration = duration
+ if isinstance(duration, (int, np.integer)):
+ self._duration = int(duration)
else:
raise PulseError('Pulse duration should be integer.')
diff --git a/qiskit/pulse/commands/pulse_decorators.py b/qiskit/pulse/commands/pulse_decorators.py
--- a/qiskit/pulse/commands/pulse_decorators.py
+++ b/qiskit/pulse/commands/pulse_decorators.py
@@ -38,7 +38,7 @@
@functools.wraps(func)
def to_pulse(duration, *args, name=None, **kwargs):
"""Return SamplePulse."""
- if isinstance(duration, int) and duration > 0:
+ if isinstance(duration, (int, np.integer)) and duration > 0:
samples = func(duration, *args, **kwargs)
samples = np.asarray(samples, dtype=np.complex128)
return SamplePulse(samples=samples, name=name)
| {"golden_diff": "diff --git a/qiskit/pulse/commands/command.py b/qiskit/pulse/commands/command.py\n--- a/qiskit/pulse/commands/command.py\n+++ b/qiskit/pulse/commands/command.py\n@@ -18,7 +18,8 @@\n import re\n \n from abc import ABCMeta, abstractmethod\n-from typing import List, Optional\n+from typing import List, Optional, Union\n+import numpy as np\n \n from qiskit.pulse.exceptions import PulseError\n from qiskit.pulse.channels import Channel\n@@ -42,7 +43,7 @@\n prefix = 'c'\n \n @abstractmethod\n- def __init__(self, duration: int = None):\n+ def __init__(self, duration: Union[int, np.integer] = None):\n \"\"\"Create a new command.\n \n Args:\n@@ -50,8 +51,8 @@\n Raises:\n PulseError: when duration is not number of points\n \"\"\"\n- if isinstance(duration, int):\n- self._duration = duration\n+ if isinstance(duration, (int, np.integer)):\n+ self._duration = int(duration)\n else:\n raise PulseError('Pulse duration should be integer.')\n \ndiff --git a/qiskit/pulse/commands/pulse_decorators.py b/qiskit/pulse/commands/pulse_decorators.py\n--- a/qiskit/pulse/commands/pulse_decorators.py\n+++ b/qiskit/pulse/commands/pulse_decorators.py\n@@ -38,7 +38,7 @@\n @functools.wraps(func)\n def to_pulse(duration, *args, name=None, **kwargs):\n \"\"\"Return SamplePulse.\"\"\"\n- if isinstance(duration, int) and duration > 0:\n+ if isinstance(duration, (int, np.integer)) and duration > 0:\n samples = func(duration, *args, **kwargs)\n samples = np.asarray(samples, dtype=np.complex128)\n return SamplePulse(samples=samples, name=name)\n", "issue": "Allow for np.integer as pulse duration\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues to confirm this idea does not exist. -->\r\n\r\n### What is the expected enhancement?\r\n`duration` of pulse command should be python integer, but this limitation might degrade usability. It is better to allow for `np.integers` - e.g. when we prepare series of pulse commands with many durations with `np.arange`.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n# pylint: disable=missing-return-doc, missing-return-type-doc\n\n\"\"\"\nPulse decorators.\n\"\"\"\n\nimport functools\nfrom typing import Callable\n\nimport numpy as np\n\nfrom qiskit.pulse.exceptions import PulseError\n\nfrom .sample_pulse import SamplePulse\n\n\ndef functional_pulse(func: Callable):\n \"\"\"A decorator for generating SamplePulse from python callable.\n Args:\n func: A function describing pulse envelope.\n Raises:\n PulseError: when invalid function is specified.\n \"\"\"\n @functools.wraps(func)\n def to_pulse(duration, *args, name=None, **kwargs):\n \"\"\"Return SamplePulse.\"\"\"\n if isinstance(duration, int) and duration > 0:\n samples = func(duration, *args, **kwargs)\n samples = np.asarray(samples, dtype=np.complex128)\n return SamplePulse(samples=samples, name=name)\n raise PulseError('The first argument must be an integer value representing duration.')\n\n return to_pulse\n", "path": "qiskit/pulse/commands/pulse_decorators.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nBase command.\n\"\"\"\nimport re\n\nfrom abc import ABCMeta, abstractmethod\nfrom typing import List, Optional\n\nfrom qiskit.pulse.exceptions import PulseError\nfrom qiskit.pulse.channels import Channel\nfrom qiskit.pulse.timeslots import TimeslotCollection\n\nfrom .instruction import Instruction\n\n\nclass MetaCount(ABCMeta):\n \"\"\"Meta class to count class instances.\"\"\"\n def __new__(mcs, name, bases, namespace):\n new_cls = super(MetaCount, mcs).__new__(mcs, name, bases, namespace)\n new_cls.instances_counter = 0\n return new_cls\n\n\nclass Command(metaclass=MetaCount):\n \"\"\"Abstract command class.\"\"\"\n\n # Counter for the number of instances in this class\n prefix = 'c'\n\n @abstractmethod\n def __init__(self, duration: int = None):\n \"\"\"Create a new command.\n\n Args:\n duration: Duration of this command.\n Raises:\n PulseError: when duration is not number of points\n \"\"\"\n if isinstance(duration, int):\n self._duration = duration\n else:\n raise PulseError('Pulse duration should be integer.')\n\n self._name = Command.create_name()\n\n @classmethod\n def create_name(cls, name: str = None) -> str:\n \"\"\"Autogenerate names for pulse commands.\"\"\"\n if name is None:\n try:\n name = '%s%i' % (cls.prefix, cls.instances_counter) # pylint: disable=E1101\n except TypeError:\n raise PulseError(\"prefix and counter must be non-None when name is None.\")\n else:\n try:\n name = str(name)\n except Exception:\n raise PulseError(\"The pulse command name should be castable to a string \"\n \"(or None for autogenerate a name).\")\n name_format = re.compile('[a-zA-Z][a-zA-Z0-9_]*')\n if name_format.match(name) is None:\n raise 
PulseError(\"%s is an invalid OpenPulse command name.\" % name)\n\n cls.instances_counter += 1 # pylint: disable=E1101\n\n return name\n\n @property\n def duration(self) -> int:\n \"\"\"Duration of this command.\"\"\"\n return self._duration\n\n @property\n def name(self) -> str:\n \"\"\"Name of this command.\"\"\"\n return self._name\n\n @abstractmethod\n def to_instruction(self, command, *channels: List[Channel],\n timeslots: Optional[TimeslotCollection] = None,\n name: Optional[str] = None) -> Instruction:\n \"\"\"Create an instruction from command.\"\"\"\n pass\n\n def __call__(self, *args, **kwargs):\n \"\"\"Creates an Instruction obtained from call to `to_instruction` wrapped in a Schedule.\"\"\"\n return self.to_instruction(*args, **kwargs)\n\n def __eq__(self, other: 'Command'):\n \"\"\"Two Commands are the same if they are of the same type\n and have the same duration and name.\n\n Args:\n other: other Command\n\n Returns:\n bool: are self and other equal\n \"\"\"\n if type(self) is type(other) and \\\n self._duration == other._duration and \\\n self._name == other._name:\n return True\n return False\n\n def __hash__(self):\n return hash((type(self), self._duration, self._name))\n\n def __repr__(self):\n return '%s(name=%s, duration=%d)' % (self.__class__.__name__,\n self._name, self._duration)\n", "path": "qiskit/pulse/commands/command.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n# pylint: disable=missing-return-doc, missing-return-type-doc\n\n\"\"\"\nPulse decorators.\n\"\"\"\n\nimport functools\nfrom typing import Callable\n\nimport numpy as np\n\nfrom qiskit.pulse.exceptions import PulseError\n\nfrom .sample_pulse import SamplePulse\n\n\ndef functional_pulse(func: Callable):\n \"\"\"A decorator for generating SamplePulse from python callable.\n Args:\n func: A function describing pulse envelope.\n Raises:\n PulseError: when invalid function is specified.\n \"\"\"\n @functools.wraps(func)\n def to_pulse(duration, *args, name=None, **kwargs):\n \"\"\"Return SamplePulse.\"\"\"\n if isinstance(duration, (int, np.integer)) and duration > 0:\n samples = func(duration, *args, **kwargs)\n samples = np.asarray(samples, dtype=np.complex128)\n return SamplePulse(samples=samples, name=name)\n raise PulseError('The first argument must be an integer value representing duration.')\n\n return to_pulse\n", "path": "qiskit/pulse/commands/pulse_decorators.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nBase command.\n\"\"\"\nimport re\n\nfrom abc import ABCMeta, abstractmethod\nfrom typing import List, Optional, Union\nimport numpy as np\n\nfrom qiskit.pulse.exceptions import PulseError\nfrom qiskit.pulse.channels import Channel\nfrom qiskit.pulse.timeslots import TimeslotCollection\n\nfrom .instruction import Instruction\n\n\nclass MetaCount(ABCMeta):\n \"\"\"Meta class to count class instances.\"\"\"\n def __new__(mcs, name, bases, namespace):\n new_cls = super(MetaCount, mcs).__new__(mcs, name, bases, namespace)\n new_cls.instances_counter = 0\n return new_cls\n\n\nclass Command(metaclass=MetaCount):\n \"\"\"Abstract command class.\"\"\"\n\n # Counter for the number of instances in this class\n prefix = 'c'\n\n @abstractmethod\n def __init__(self, duration: Union[int, np.integer] = None):\n \"\"\"Create a new command.\n\n Args:\n duration: Duration of this command.\n Raises:\n PulseError: when duration is not number of points\n \"\"\"\n if isinstance(duration, (int, np.integer)):\n self._duration = int(duration)\n else:\n raise PulseError('Pulse duration should be integer.')\n\n self._name = Command.create_name()\n\n @classmethod\n def create_name(cls, name: str = None) -> str:\n \"\"\"Autogenerate names for pulse commands.\"\"\"\n if name is None:\n try:\n name = '%s%i' % (cls.prefix, cls.instances_counter) # pylint: disable=E1101\n except TypeError:\n raise PulseError(\"prefix and counter must be non-None when name is None.\")\n else:\n try:\n name = str(name)\n except Exception:\n raise PulseError(\"The pulse command name should be castable to a string \"\n \"(or None for autogenerate a name).\")\n name_format = re.compile('[a-zA-Z][a-zA-Z0-9_]*')\n if name_format.match(name) is None:\n raise PulseError(\"%s is an invalid OpenPulse command name.\" % name)\n\n cls.instances_counter += 1 # pylint: disable=E1101\n\n return name\n\n @property\n def duration(self) -> int:\n \"\"\"Duration of this command.\"\"\"\n return self._duration\n\n @property\n def name(self) -> str:\n \"\"\"Name of this command.\"\"\"\n return self._name\n\n @abstractmethod\n def to_instruction(self, command, *channels: List[Channel],\n timeslots: Optional[TimeslotCollection] = None,\n name: Optional[str] = None) -> Instruction:\n \"\"\"Create an instruction from command.\"\"\"\n pass\n\n def __call__(self, *args, **kwargs):\n \"\"\"Creates an Instruction obtained from call to `to_instruction` wrapped in a Schedule.\"\"\"\n return self.to_instruction(*args, **kwargs)\n\n def __eq__(self, other: 'Command'):\n \"\"\"Two Commands are the same if they are of the same type\n and have the same duration and name.\n\n Args:\n other: other Command\n\n Returns:\n bool: are self and other equal\n \"\"\"\n if type(self) is type(other) and \\\n self._duration == other._duration and \\\n self._name == other._name:\n return True\n return False\n\n def __hash__(self):\n return hash((type(self), self._duration, self._name))\n\n def __repr__(self):\n return '%s(name=%s, duration=%d)' % (self.__class__.__name__,\n self._name, self._duration)\n", "path": "qiskit/pulse/commands/command.py"}]} | 2,004 | 439 |
gh_patches_debug_25013 | rasdani/github-patches | git_diff | pandas-dev__pandas-24034 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add columns-parameter like in feather.read_dataframe
Wes suggested raising this feature request here again, after I wondered why `pandas.read_feather` lacks a `columns` parameter like the one the [`feather-format`](https://github.com/wesm/feather) package provides.
You can read in only a subset of columns from a feather-file with e.g.:
`df = feather.read_dataframe('df_test.feather', columns='b')`
It would be really nice if `pandas.read_feather` also had this built in, so you don't have to install and import the feather package just for that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandas/io/feather_format.py`
Content:
```
1 """ feather-format compat """
2
3 from distutils.version import LooseVersion
4
5 from pandas.compat import range
6 from pandas.util._decorators import deprecate_kwarg
7
8 from pandas import DataFrame, Int64Index, RangeIndex
9
10 from pandas.io.common import _stringify_path
11
12
13 def _try_import():
14 # since pandas is a dependency of pyarrow
15 # we need to import on first use
16 try:
17 import pyarrow
18 from pyarrow import feather
19 except ImportError:
20 # give a nice error message
21 raise ImportError("pyarrow is not installed\n\n"
22 "you can install via conda\n"
23 "conda install pyarrow -c conda-forge\n"
24 "or via pip\n"
25 "pip install -U pyarrow\n")
26
27 if LooseVersion(pyarrow.__version__) < LooseVersion('0.4.1'):
28 raise ImportError("pyarrow >= 0.4.1 required for feather support\n\n"
29 "you can install via conda\n"
30 "conda install pyarrow -c conda-forge"
31 "or via pip\n"
32 "pip install -U pyarrow\n")
33
34 return feather, pyarrow
35
36
37 def to_feather(df, path):
38 """
39 Write a DataFrame to the feather-format
40
41 Parameters
42 ----------
43 df : DataFrame
44 path : string file path, or file-like object
45
46 """
47 path = _stringify_path(path)
48 if not isinstance(df, DataFrame):
49 raise ValueError("feather only support IO with DataFrames")
50
51 feather = _try_import()[0]
52 valid_types = {'string', 'unicode'}
53
54 # validate index
55 # --------------
56
57 # validate that we have only a default index
58 # raise on anything else as we don't serialize the index
59
60 if not isinstance(df.index, Int64Index):
61 raise ValueError("feather does not support serializing {} "
62 "for the index; you can .reset_index()"
63 "to make the index into column(s)".format(
64 type(df.index)))
65
66 if not df.index.equals(RangeIndex.from_range(range(len(df)))):
67 raise ValueError("feather does not support serializing a "
68 "non-default index for the index; you "
69 "can .reset_index() to make the index "
70 "into column(s)")
71
72 if df.index.name is not None:
73 raise ValueError("feather does not serialize index meta-data on a "
74 "default index")
75
76 # validate columns
77 # ----------------
78
79 # must have value column names (strings only)
80 if df.columns.inferred_type not in valid_types:
81 raise ValueError("feather must have string column names")
82
83 feather.write_feather(df, path)
84
85
86 @deprecate_kwarg(old_arg_name='nthreads', new_arg_name='use_threads')
87 def read_feather(path, use_threads=True):
88 """
89 Load a feather-format object from the file path
90
91 .. versionadded 0.20.0
92
93 Parameters
94 ----------
95 path : string file path, or file-like object
96 nthreads : int, default 1
97 Number of CPU threads to use when reading to pandas.DataFrame
98
99 .. versionadded 0.21.0
100 .. deprecated 0.24.0
101 use_threads : bool, default True
102 Whether to parallelize reading using multiple threads
103
104 .. versionadded 0.24.0
105
106 Returns
107 -------
108 type of object stored in file
109
110 """
111
112 feather, pyarrow = _try_import()
113 path = _stringify_path(path)
114
115 if LooseVersion(pyarrow.__version__) < LooseVersion('0.11.0'):
116 int_use_threads = int(use_threads)
117 if int_use_threads < 1:
118 int_use_threads = 1
119 return feather.read_feather(path, nthreads=int_use_threads)
120
121 return feather.read_feather(path, use_threads=bool(use_threads))
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -84,7 +84,7 @@
@deprecate_kwarg(old_arg_name='nthreads', new_arg_name='use_threads')
-def read_feather(path, use_threads=True):
+def read_feather(path, columns=None, use_threads=True):
"""
Load a feather-format object from the file path
@@ -93,6 +93,10 @@
Parameters
----------
path : string file path, or file-like object
+ columns : sequence, default None
+ If not provided, all columns are read
+
+ .. versionadded 0.24.0
nthreads : int, default 1
Number of CPU threads to use when reading to pandas.DataFrame
@@ -116,6 +120,8 @@
int_use_threads = int(use_threads)
if int_use_threads < 1:
int_use_threads = 1
- return feather.read_feather(path, nthreads=int_use_threads)
+ return feather.read_feather(path, columns=columns,
+ nthreads=int_use_threads)
- return feather.read_feather(path, use_threads=bool(use_threads))
+ return feather.read_feather(path, columns=columns,
+ use_threads=bool(use_threads))
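
For context, here is a minimal usage sketch of the patched API. The file name and column list are illustrative, and this assumes pyarrow is installed (it is the backend `pandas.read_feather` delegates to):

```python
import pandas as pd

# Hypothetical file from the issue example; with the patch above the column
# filter is forwarded to pyarrow's feather reader, so only the requested
# columns are loaded instead of the whole table.
df = pd.read_feather("df_test.feather", columns=["b"])
```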
| {"golden_diff": "diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py\n--- a/pandas/io/feather_format.py\n+++ b/pandas/io/feather_format.py\n@@ -84,7 +84,7 @@\n \n \n @deprecate_kwarg(old_arg_name='nthreads', new_arg_name='use_threads')\n-def read_feather(path, use_threads=True):\n+def read_feather(path, columns=None, use_threads=True):\n \"\"\"\n Load a feather-format object from the file path\n \n@@ -93,6 +93,10 @@\n Parameters\n ----------\n path : string file path, or file-like object\n+ columns : sequence, default None\n+ If not provided, all columns are read\n+\n+ .. versionadded 0.24.0\n nthreads : int, default 1\n Number of CPU threads to use when reading to pandas.DataFrame\n \n@@ -116,6 +120,8 @@\n int_use_threads = int(use_threads)\n if int_use_threads < 1:\n int_use_threads = 1\n- return feather.read_feather(path, nthreads=int_use_threads)\n+ return feather.read_feather(path, columns=columns,\n+ nthreads=int_use_threads)\n \n- return feather.read_feather(path, use_threads=bool(use_threads))\n+ return feather.read_feather(path, columns=columns,\n+ use_threads=bool(use_threads))\n", "issue": "Add columns-parameter like in feather.read_dataframe\nWes suggested to raise this feature request here again, after I wondered why `pandas.read_feather` is lacking a `columns`-parameter like the [`feather-format`](https://github.com/wesm/feather) package has it.\r\n\r\nYou can read in only a subset of columns from a feather-file with e.g.: \r\n`df = feather.read_dataframe('df_test.feather', columns='b')`\r\n\r\nIt would be really nice if `pandas.read_feather` also had this built in, so you don't have to install and import the feather-package just for that.\n", "before_files": [{"content": "\"\"\" feather-format compat \"\"\"\n\nfrom distutils.version import LooseVersion\n\nfrom pandas.compat import range\nfrom pandas.util._decorators import deprecate_kwarg\n\nfrom pandas import DataFrame, Int64Index, RangeIndex\n\nfrom pandas.io.common import _stringify_path\n\n\ndef _try_import():\n # since pandas is a dependency of pyarrow\n # we need to import on first use\n try:\n import pyarrow\n from pyarrow import feather\n except ImportError:\n # give a nice error message\n raise ImportError(\"pyarrow is not installed\\n\\n\"\n \"you can install via conda\\n\"\n \"conda install pyarrow -c conda-forge\\n\"\n \"or via pip\\n\"\n \"pip install -U pyarrow\\n\")\n\n if LooseVersion(pyarrow.__version__) < LooseVersion('0.4.1'):\n raise ImportError(\"pyarrow >= 0.4.1 required for feather support\\n\\n\"\n \"you can install via conda\\n\"\n \"conda install pyarrow -c conda-forge\"\n \"or via pip\\n\"\n \"pip install -U pyarrow\\n\")\n\n return feather, pyarrow\n\n\ndef to_feather(df, path):\n \"\"\"\n Write a DataFrame to the feather-format\n\n Parameters\n ----------\n df : DataFrame\n path : string file path, or file-like object\n\n \"\"\"\n path = _stringify_path(path)\n if not isinstance(df, DataFrame):\n raise ValueError(\"feather only support IO with DataFrames\")\n\n feather = _try_import()[0]\n valid_types = {'string', 'unicode'}\n\n # validate index\n # --------------\n\n # validate that we have only a default index\n # raise on anything else as we don't serialize the index\n\n if not isinstance(df.index, Int64Index):\n raise ValueError(\"feather does not support serializing {} \"\n \"for the index; you can .reset_index()\"\n \"to make the index into column(s)\".format(\n type(df.index)))\n\n if not df.index.equals(RangeIndex.from_range(range(len(df)))):\n raise 
ValueError(\"feather does not support serializing a \"\n \"non-default index for the index; you \"\n \"can .reset_index() to make the index \"\n \"into column(s)\")\n\n if df.index.name is not None:\n raise ValueError(\"feather does not serialize index meta-data on a \"\n \"default index\")\n\n # validate columns\n # ----------------\n\n # must have value column names (strings only)\n if df.columns.inferred_type not in valid_types:\n raise ValueError(\"feather must have string column names\")\n\n feather.write_feather(df, path)\n\n\n@deprecate_kwarg(old_arg_name='nthreads', new_arg_name='use_threads')\ndef read_feather(path, use_threads=True):\n \"\"\"\n Load a feather-format object from the file path\n\n .. versionadded 0.20.0\n\n Parameters\n ----------\n path : string file path, or file-like object\n nthreads : int, default 1\n Number of CPU threads to use when reading to pandas.DataFrame\n\n .. versionadded 0.21.0\n .. deprecated 0.24.0\n use_threads : bool, default True\n Whether to parallelize reading using multiple threads\n\n .. versionadded 0.24.0\n\n Returns\n -------\n type of object stored in file\n\n \"\"\"\n\n feather, pyarrow = _try_import()\n path = _stringify_path(path)\n\n if LooseVersion(pyarrow.__version__) < LooseVersion('0.11.0'):\n int_use_threads = int(use_threads)\n if int_use_threads < 1:\n int_use_threads = 1\n return feather.read_feather(path, nthreads=int_use_threads)\n\n return feather.read_feather(path, use_threads=bool(use_threads))\n", "path": "pandas/io/feather_format.py"}], "after_files": [{"content": "\"\"\" feather-format compat \"\"\"\n\nfrom distutils.version import LooseVersion\n\nfrom pandas.compat import range\nfrom pandas.util._decorators import deprecate_kwarg\n\nfrom pandas import DataFrame, Int64Index, RangeIndex\n\nfrom pandas.io.common import _stringify_path\n\n\ndef _try_import():\n # since pandas is a dependency of pyarrow\n # we need to import on first use\n try:\n import pyarrow\n from pyarrow import feather\n except ImportError:\n # give a nice error message\n raise ImportError(\"pyarrow is not installed\\n\\n\"\n \"you can install via conda\\n\"\n \"conda install pyarrow -c conda-forge\\n\"\n \"or via pip\\n\"\n \"pip install -U pyarrow\\n\")\n\n if LooseVersion(pyarrow.__version__) < LooseVersion('0.4.1'):\n raise ImportError(\"pyarrow >= 0.4.1 required for feather support\\n\\n\"\n \"you can install via conda\\n\"\n \"conda install pyarrow -c conda-forge\"\n \"or via pip\\n\"\n \"pip install -U pyarrow\\n\")\n\n return feather, pyarrow\n\n\ndef to_feather(df, path):\n \"\"\"\n Write a DataFrame to the feather-format\n\n Parameters\n ----------\n df : DataFrame\n path : string file path, or file-like object\n\n \"\"\"\n path = _stringify_path(path)\n if not isinstance(df, DataFrame):\n raise ValueError(\"feather only support IO with DataFrames\")\n\n feather = _try_import()[0]\n valid_types = {'string', 'unicode'}\n\n # validate index\n # --------------\n\n # validate that we have only a default index\n # raise on anything else as we don't serialize the index\n\n if not isinstance(df.index, Int64Index):\n raise ValueError(\"feather does not support serializing {} \"\n \"for the index; you can .reset_index()\"\n \"to make the index into column(s)\".format(\n type(df.index)))\n\n if not df.index.equals(RangeIndex.from_range(range(len(df)))):\n raise ValueError(\"feather does not support serializing a \"\n \"non-default index for the index; you \"\n \"can .reset_index() to make the index \"\n \"into column(s)\")\n\n if df.index.name is 
not None:\n raise ValueError(\"feather does not serialize index meta-data on a \"\n \"default index\")\n\n # validate columns\n # ----------------\n\n # must have value column names (strings only)\n if df.columns.inferred_type not in valid_types:\n raise ValueError(\"feather must have string column names\")\n\n feather.write_feather(df, path)\n\n\n@deprecate_kwarg(old_arg_name='nthreads', new_arg_name='use_threads')\ndef read_feather(path, columns=None, use_threads=True):\n \"\"\"\n Load a feather-format object from the file path\n\n .. versionadded 0.20.0\n\n Parameters\n ----------\n path : string file path, or file-like object\n columns : sequence, default None\n If not provided, all columns are read\n\n .. versionadded 0.24.0\n nthreads : int, default 1\n Number of CPU threads to use when reading to pandas.DataFrame\n\n .. versionadded 0.21.0\n .. deprecated 0.24.0\n use_threads : bool, default True\n Whether to parallelize reading using multiple threads\n\n .. versionadded 0.24.0\n\n Returns\n -------\n type of object stored in file\n\n \"\"\"\n\n feather, pyarrow = _try_import()\n path = _stringify_path(path)\n\n if LooseVersion(pyarrow.__version__) < LooseVersion('0.11.0'):\n int_use_threads = int(use_threads)\n if int_use_threads < 1:\n int_use_threads = 1\n return feather.read_feather(path, columns=columns,\n nthreads=int_use_threads)\n\n return feather.read_feather(path, columns=columns,\n use_threads=bool(use_threads))\n", "path": "pandas/io/feather_format.py"}]} | 1,507 | 314 |
gh_patches_debug_2620 | rasdani/github-patches | git_diff | Textualize__textual-4189 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SyntaxWarning for loading indicator widget
I receive this warning after upgrading to `0.52.0`:
```
/Users/cthompson/Library/Caches/pypoetry/virtualenvs/dolphie-z84eXs3q-py3.11/lib/python3.11/site-packages/textual/widgets/_loading_indicator.py:57: SyntaxWarning: "is" with a literal. Did you mean "=="?
if self.app.animation_level is "none":
```
https://github.com/Textualize/textual/blob/main/src/textual/widgets/_loading_indicator.py#L57
Seems we just need to change `is "none"` to `== "none"`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/_loading_indicator.py`
Content:
```
1 from __future__ import annotations
2
3 from time import time
4
5 from rich.console import RenderableType
6 from rich.style import Style
7 from rich.text import Text
8
9 from ..color import Gradient
10 from ..events import Mount
11 from ..widget import Widget
12
13
14 class LoadingIndicator(Widget):
15 """Display an animated loading indicator."""
16
17 DEFAULT_CSS = """
18 LoadingIndicator {
19 width: 100%;
20 height: 100%;
21 min-height: 1;
22 content-align: center middle;
23 color: $accent;
24 }
25 LoadingIndicator.-textual-loading-indicator {
26 layer: _loading;
27 background: $boost;
28 dock: top;
29 }
30 """
31
32 def __init__(
33 self,
34 name: str | None = None,
35 id: str | None = None,
36 classes: str | None = None,
37 disabled: bool = False,
38 ):
39 """Initialize a loading indicator.
40
41 Args:
42 name: The name of the widget.
43 id: The ID of the widget in the DOM.
44 classes: The CSS classes for the widget.
45 disabled: Whether the widget is disabled or not.
46 """
47 super().__init__(name=name, id=id, classes=classes, disabled=disabled)
48
49 self._start_time: float = 0.0
50 """The time the loading indicator was mounted (a Unix timestamp)."""
51
52 def _on_mount(self, _: Mount) -> None:
53 self._start_time = time()
54 self.auto_refresh = 1 / 16
55
56 def render(self) -> RenderableType:
57 if self.app.animation_level is "none":
58 return Text("Loading...")
59
60 elapsed = time() - self._start_time
61 speed = 0.8
62 dot = "\u25cf"
63 _, _, background, color = self.colors
64
65 gradient = Gradient(
66 (0.0, background.blend(color, 0.1)),
67 (0.7, color),
68 (1.0, color.lighten(0.1)),
69 )
70
71 blends = [(elapsed * speed - dot_number / 8) % 1 for dot_number in range(5)]
72
73 dots = [
74 (
75 f"{dot} ",
76 Style.from_color(gradient.get_color((1 - blend) ** 2).rich_color),
77 )
78 for blend in blends
79 ]
80 indicator = Text.assemble(*dots)
81 indicator.rstrip()
82 return indicator
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/textual/widgets/_loading_indicator.py b/src/textual/widgets/_loading_indicator.py
--- a/src/textual/widgets/_loading_indicator.py
+++ b/src/textual/widgets/_loading_indicator.py
@@ -54,7 +54,7 @@
self.auto_refresh = 1 / 16
def render(self) -> RenderableType:
- if self.app.animation_level is "none":
+ if self.app.animation_level == "none":
return Text("Loading...")
elapsed = time() - self._start_time
| {"golden_diff": "diff --git a/src/textual/widgets/_loading_indicator.py b/src/textual/widgets/_loading_indicator.py\n--- a/src/textual/widgets/_loading_indicator.py\n+++ b/src/textual/widgets/_loading_indicator.py\n@@ -54,7 +54,7 @@\n self.auto_refresh = 1 / 16\n \n def render(self) -> RenderableType:\n- if self.app.animation_level is \"none\":\n+ if self.app.animation_level == \"none\":\n return Text(\"Loading...\")\n \n elapsed = time() - self._start_time\n", "issue": "SyntaxWarning for loading indicator widget\nI receive this warning after upgrading to `0.52.0`:\r\n\r\n```\r\n/Users/cthompson/Library/Caches/pypoetry/virtualenvs/dolphie-z84eXs3q-py3.11/lib/python3.11/site-packages/textual/widgets/_loading_indicator.py:57: SyntaxWarning: \"is\" with a literal. Did you mean \"==\"?\r\n if self.app.animation_level is \"none\":\r\n```\r\n\r\nhttps://github.com/Textualize/textual/blob/main/src/textual/widgets/_loading_indicator.py#L57\r\n\r\nSeems we just need to change `is \"none\"` to `== \"none\"`\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom time import time\n\nfrom rich.console import RenderableType\nfrom rich.style import Style\nfrom rich.text import Text\n\nfrom ..color import Gradient\nfrom ..events import Mount\nfrom ..widget import Widget\n\n\nclass LoadingIndicator(Widget):\n \"\"\"Display an animated loading indicator.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n LoadingIndicator {\n width: 100%;\n height: 100%;\n min-height: 1;\n content-align: center middle;\n color: $accent;\n }\n LoadingIndicator.-textual-loading-indicator {\n layer: _loading;\n background: $boost;\n dock: top;\n }\n \"\"\"\n\n def __init__(\n self,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ):\n \"\"\"Initialize a loading indicator.\n\n Args:\n name: The name of the widget.\n id: The ID of the widget in the DOM.\n classes: The CSS classes for the widget.\n disabled: Whether the widget is disabled or not.\n \"\"\"\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n\n self._start_time: float = 0.0\n \"\"\"The time the loading indicator was mounted (a Unix timestamp).\"\"\"\n\n def _on_mount(self, _: Mount) -> None:\n self._start_time = time()\n self.auto_refresh = 1 / 16\n\n def render(self) -> RenderableType:\n if self.app.animation_level is \"none\":\n return Text(\"Loading...\")\n\n elapsed = time() - self._start_time\n speed = 0.8\n dot = \"\\u25cf\"\n _, _, background, color = self.colors\n\n gradient = Gradient(\n (0.0, background.blend(color, 0.1)),\n (0.7, color),\n (1.0, color.lighten(0.1)),\n )\n\n blends = [(elapsed * speed - dot_number / 8) % 1 for dot_number in range(5)]\n\n dots = [\n (\n f\"{dot} \",\n Style.from_color(gradient.get_color((1 - blend) ** 2).rich_color),\n )\n for blend in blends\n ]\n indicator = Text.assemble(*dots)\n indicator.rstrip()\n return indicator\n", "path": "src/textual/widgets/_loading_indicator.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom time import time\n\nfrom rich.console import RenderableType\nfrom rich.style import Style\nfrom rich.text import Text\n\nfrom ..color import Gradient\nfrom ..events import Mount\nfrom ..widget import Widget\n\n\nclass LoadingIndicator(Widget):\n \"\"\"Display an animated loading indicator.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n LoadingIndicator {\n width: 100%;\n height: 100%;\n min-height: 1;\n content-align: center middle;\n color: $accent;\n }\n LoadingIndicator.-textual-loading-indicator {\n 
layer: _loading;\n background: $boost;\n dock: top;\n }\n \"\"\"\n\n def __init__(\n self,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ):\n \"\"\"Initialize a loading indicator.\n\n Args:\n name: The name of the widget.\n id: The ID of the widget in the DOM.\n classes: The CSS classes for the widget.\n disabled: Whether the widget is disabled or not.\n \"\"\"\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n\n self._start_time: float = 0.0\n \"\"\"The time the loading indicator was mounted (a Unix timestamp).\"\"\"\n\n def _on_mount(self, _: Mount) -> None:\n self._start_time = time()\n self.auto_refresh = 1 / 16\n\n def render(self) -> RenderableType:\n if self.app.animation_level == \"none\":\n return Text(\"Loading...\")\n\n elapsed = time() - self._start_time\n speed = 0.8\n dot = \"\\u25cf\"\n _, _, background, color = self.colors\n\n gradient = Gradient(\n (0.0, background.blend(color, 0.1)),\n (0.7, color),\n (1.0, color.lighten(0.1)),\n )\n\n blends = [(elapsed * speed - dot_number / 8) % 1 for dot_number in range(5)]\n\n dots = [\n (\n f\"{dot} \",\n Style.from_color(gradient.get_color((1 - blend) ** 2).rich_color),\n )\n for blend in blends\n ]\n indicator = Text.assemble(*dots)\n indicator.rstrip()\n return indicator\n", "path": "src/textual/widgets/_loading_indicator.py"}]} | 1,093 | 118 |
gh_patches_debug_17695 | rasdani/github-patches | git_diff | geopandas__geopandas-1359 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DOC: update read_postgis docstring for CRS changes
The read_postgis docstring still tells users to pass a dict:
https://github.com/geopandas/geopandas/blob/078062d303e401aaa5e37e04c0e7c3ce188920fe/geopandas/io/sql.py#L33-L36
We should update this to follow how CRS is explained in other places, now that we use pyproj.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/io/sql.py`
Content:
```
1 import sys
2
3 import pandas as pd
4
5 import shapely.wkb
6
7 from geopandas import GeoDataFrame
8
9
10 def read_postgis(
11 sql,
12 con,
13 geom_col="geom",
14 crs=None,
15 index_col=None,
16 coerce_float=True,
17 parse_dates=None,
18 params=None,
19 ):
20 """
21 Returns a GeoDataFrame corresponding to the result of the query
22 string, which must contain a geometry column in WKB representation.
23
24 Parameters
25 ----------
26 sql : string
27 SQL query to execute in selecting entries from database, or name
28 of the table to read from the database.
29 con : DB connection object or SQLAlchemy engine
30 Active connection to the database to query.
31 geom_col : string, default 'geom'
32 column name to convert to shapely geometries
33 crs : dict or str, optional
34 CRS to use for the returned GeoDataFrame; if not set, tries to
35 determine CRS from the SRID associated with the first geometry in
36 the database, and assigns that to all geometries.
37
38 See the documentation for pandas.read_sql for further explanation
39 of the following parameters:
40 index_col, coerce_float, parse_dates, params
41
42 Returns
43 -------
44 GeoDataFrame
45
46 Example
47 -------
48 PostGIS
49 >>> sql = "SELECT geom, kind FROM polygons"
50 SpatiaLite
51 >>> sql = "SELECT ST_AsBinary(geom) AS geom, kind FROM polygons"
52 >>> df = geopandas.read_postgis(sql, con)
53 """
54
55 df = pd.read_sql(
56 sql,
57 con,
58 index_col=index_col,
59 coerce_float=coerce_float,
60 parse_dates=parse_dates,
61 params=params,
62 )
63
64 if geom_col not in df:
65 raise ValueError("Query missing geometry column '{}'".format(geom_col))
66
67 geoms = df[geom_col].dropna()
68
69 if not geoms.empty:
70 load_geom_bytes = shapely.wkb.loads
71 """Load from Python 3 binary."""
72
73 def load_geom_buffer(x):
74 """Load from Python 2 binary."""
75 return shapely.wkb.loads(str(x))
76
77 def load_geom_text(x):
78 """Load from binary encoded as text."""
79 return shapely.wkb.loads(str(x), hex=True)
80
81 if sys.version_info.major < 3:
82 if isinstance(geoms.iat[0], buffer):
83 load_geom = load_geom_buffer
84 else:
85 load_geom = load_geom_text
86 elif isinstance(geoms.iat[0], bytes):
87 load_geom = load_geom_bytes
88 else:
89 load_geom = load_geom_text
90
91 df[geom_col] = geoms = geoms.apply(load_geom)
92 if crs is None:
93 srid = shapely.geos.lgeos.GEOSGetSRID(geoms.iat[0]._geom)
94 # if no defined SRID in geodatabase, returns SRID of 0
95 if srid != 0:
96 crs = "epsg:{}".format(srid)
97
98 return GeoDataFrame(df, crs=crs, geometry=geom_col)
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geopandas/io/sql.py b/geopandas/io/sql.py
--- a/geopandas/io/sql.py
+++ b/geopandas/io/sql.py
@@ -30,10 +30,12 @@
Active connection to the database to query.
geom_col : string, default 'geom'
column name to convert to shapely geometries
- crs : dict or str, optional
- CRS to use for the returned GeoDataFrame; if not set, tries to
- determine CRS from the SRID associated with the first geometry in
- the database, and assigns that to all geometries.
+ crs : pyproj.CRS, optional
+ CRS to use for the returned GeoDataFrame. The value can be anything accepted
+ by :meth:`pyproj.CRS.from_user_input() <pyproj.crs.CRS.from_user_input>`,
+ such as an authority string (eg "EPSG:4326") or a WKT string.
+ If not set, tries to determine CRS from the SRID associated with the
+ first geometry in the database, and assigns that to all geometries.
See the documentation for pandas.read_sql for further explanation
of the following parameters:
| {"golden_diff": "diff --git a/geopandas/io/sql.py b/geopandas/io/sql.py\n--- a/geopandas/io/sql.py\n+++ b/geopandas/io/sql.py\n@@ -30,10 +30,12 @@\n Active connection to the database to query.\n geom_col : string, default 'geom'\n column name to convert to shapely geometries\n- crs : dict or str, optional\n- CRS to use for the returned GeoDataFrame; if not set, tries to\n- determine CRS from the SRID associated with the first geometry in\n- the database, and assigns that to all geometries.\n+ crs : pyproj.CRS, optional\n+ CRS to use for the returned GeoDataFrame. The value can be anything accepted\n+ by :meth:`pyproj.CRS.from_user_input() <pyproj.crs.CRS.from_user_input>`,\n+ such as an authority string (eg \"EPSG:4326\") or a WKT string.\n+ If not set, tries to determine CRS from the SRID associated with the\n+ first geometry in the database, and assigns that to all geometries.\n \n See the documentation for pandas.read_sql for further explanation\n of the following parameters:\n", "issue": "DOC: update read_postgis docstring for CRS changes\nThe read_postgis docstring still mentions to use a dict:\r\n\r\nhttps://github.com/geopandas/geopandas/blob/078062d303e401aaa5e37e04c0e7c3ce188920fe/geopandas/io/sql.py#L33-L36\r\n\r\nwe should update this to follow how CRS is explained in other places now we use pyproj.\n", "before_files": [{"content": "import sys\n\nimport pandas as pd\n\nimport shapely.wkb\n\nfrom geopandas import GeoDataFrame\n\n\ndef read_postgis(\n sql,\n con,\n geom_col=\"geom\",\n crs=None,\n index_col=None,\n coerce_float=True,\n parse_dates=None,\n params=None,\n):\n \"\"\"\n Returns a GeoDataFrame corresponding to the result of the query\n string, which must contain a geometry column in WKB representation.\n\n Parameters\n ----------\n sql : string\n SQL query to execute in selecting entries from database, or name\n of the table to read from the database.\n con : DB connection object or SQLAlchemy engine\n Active connection to the database to query.\n geom_col : string, default 'geom'\n column name to convert to shapely geometries\n crs : dict or str, optional\n CRS to use for the returned GeoDataFrame; if not set, tries to\n determine CRS from the SRID associated with the first geometry in\n the database, and assigns that to all geometries.\n\n See the documentation for pandas.read_sql for further explanation\n of the following parameters:\n index_col, coerce_float, parse_dates, params\n\n Returns\n -------\n GeoDataFrame\n\n Example\n -------\n PostGIS\n >>> sql = \"SELECT geom, kind FROM polygons\"\n SpatiaLite\n >>> sql = \"SELECT ST_AsBinary(geom) AS geom, kind FROM polygons\"\n >>> df = geopandas.read_postgis(sql, con)\n \"\"\"\n\n df = pd.read_sql(\n sql,\n con,\n index_col=index_col,\n coerce_float=coerce_float,\n parse_dates=parse_dates,\n params=params,\n )\n\n if geom_col not in df:\n raise ValueError(\"Query missing geometry column '{}'\".format(geom_col))\n\n geoms = df[geom_col].dropna()\n\n if not geoms.empty:\n load_geom_bytes = shapely.wkb.loads\n \"\"\"Load from Python 3 binary.\"\"\"\n\n def load_geom_buffer(x):\n \"\"\"Load from Python 2 binary.\"\"\"\n return shapely.wkb.loads(str(x))\n\n def load_geom_text(x):\n \"\"\"Load from binary encoded as text.\"\"\"\n return shapely.wkb.loads(str(x), hex=True)\n\n if sys.version_info.major < 3:\n if isinstance(geoms.iat[0], buffer):\n load_geom = load_geom_buffer\n else:\n load_geom = load_geom_text\n elif isinstance(geoms.iat[0], bytes):\n load_geom = load_geom_bytes\n else:\n load_geom = load_geom_text\n\n 
df[geom_col] = geoms = geoms.apply(load_geom)\n if crs is None:\n srid = shapely.geos.lgeos.GEOSGetSRID(geoms.iat[0]._geom)\n # if no defined SRID in geodatabase, returns SRID of 0\n if srid != 0:\n crs = \"epsg:{}\".format(srid)\n\n return GeoDataFrame(df, crs=crs, geometry=geom_col)\n", "path": "geopandas/io/sql.py"}], "after_files": [{"content": "import sys\n\nimport pandas as pd\n\nimport shapely.wkb\n\nfrom geopandas import GeoDataFrame\n\n\ndef read_postgis(\n sql,\n con,\n geom_col=\"geom\",\n crs=None,\n index_col=None,\n coerce_float=True,\n parse_dates=None,\n params=None,\n):\n \"\"\"\n Returns a GeoDataFrame corresponding to the result of the query\n string, which must contain a geometry column in WKB representation.\n\n Parameters\n ----------\n sql : string\n SQL query to execute in selecting entries from database, or name\n of the table to read from the database.\n con : DB connection object or SQLAlchemy engine\n Active connection to the database to query.\n geom_col : string, default 'geom'\n column name to convert to shapely geometries\n crs : pyproj.CRS, optional\n CRS to use for the returned GeoDataFrame. The value can be anything accepted\n by :meth:`pyproj.CRS.from_user_input() <pyproj.crs.CRS.from_user_input>`,\n such as an authority string (eg \"EPSG:4326\") or a WKT string.\n If not set, tries to determine CRS from the SRID associated with the\n first geometry in the database, and assigns that to all geometries.\n\n See the documentation for pandas.read_sql for further explanation\n of the following parameters:\n index_col, coerce_float, parse_dates, params\n\n Returns\n -------\n GeoDataFrame\n\n Example\n -------\n PostGIS\n >>> sql = \"SELECT geom, kind FROM polygons\"\n SpatiaLite\n >>> sql = \"SELECT ST_AsBinary(geom) AS geom, kind FROM polygons\"\n >>> df = geopandas.read_postgis(sql, con)\n \"\"\"\n\n df = pd.read_sql(\n sql,\n con,\n index_col=index_col,\n coerce_float=coerce_float,\n parse_dates=parse_dates,\n params=params,\n )\n\n if geom_col not in df:\n raise ValueError(\"Query missing geometry column '{}'\".format(geom_col))\n\n geoms = df[geom_col].dropna()\n\n if not geoms.empty:\n load_geom_bytes = shapely.wkb.loads\n \"\"\"Load from Python 3 binary.\"\"\"\n\n def load_geom_buffer(x):\n \"\"\"Load from Python 2 binary.\"\"\"\n return shapely.wkb.loads(str(x))\n\n def load_geom_text(x):\n \"\"\"Load from binary encoded as text.\"\"\"\n return shapely.wkb.loads(str(x), hex=True)\n\n if sys.version_info.major < 3:\n if isinstance(geoms.iat[0], buffer):\n load_geom = load_geom_buffer\n else:\n load_geom = load_geom_text\n elif isinstance(geoms.iat[0], bytes):\n load_geom = load_geom_bytes\n else:\n load_geom = load_geom_text\n\n df[geom_col] = geoms = geoms.apply(load_geom)\n if crs is None:\n srid = shapely.geos.lgeos.GEOSGetSRID(geoms.iat[0]._geom)\n # if no defined SRID in geodatabase, returns SRID of 0\n if srid != 0:\n crs = \"epsg:{}\".format(srid)\n\n return GeoDataFrame(df, crs=crs, geometry=geom_col)\n", "path": "geopandas/io/sql.py"}]} | 1,237 | 271 |
gh_patches_debug_787 | rasdani/github-patches | git_diff | translate__pootle-5820 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Adding a new languages forces you to specify special characters
If you add a new language, you are blocked by the UI insisting that you add special characters.

The part in red is Afrikaans and says "This field is required".
Not all languages require special characters.
Also the model allows `blank=True`
So somewhere we're blocking.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_app/forms.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import re
10 import urlparse
11 from collections import OrderedDict
12
13 from django import forms
14 from django.contrib.auth import get_user_model
15
16 from pootle.i18n.gettext import ugettext_lazy as _
17 from pootle_language.models import Language
18 from pootle_project.models import Project
19 from pootle_store.models import Store
20
21
22 LANGCODE_RE = re.compile("^[a-z]{2,}([_-]([a-z]{2,}|[0-9]{3}))*(@[a-z0-9]+)?$",
23 re.IGNORECASE)
24
25
26 class LanguageForm(forms.ModelForm):
27
28 specialchars = forms.CharField(strip=False)
29
30 class Meta(object):
31 model = Language
32 fields = ('id', 'code', 'fullname', 'specialchars', 'nplurals',
33 'pluralequation',)
34
35 def clean_code(self):
36 if (not self.cleaned_data['code'] == 'templates' and
37 not LANGCODE_RE.match(self.cleaned_data['code'])):
38 raise forms.ValidationError(
39 _('Language code does not follow the ISO convention')
40 )
41
42 return self.cleaned_data["code"]
43
44 def clean_specialchars(self):
45 """Ensures inputted characters are unique."""
46 chars = self.cleaned_data['specialchars']
47 return u''.join(
48 OrderedDict((char, None) for char in list(chars)).keys()
49 )
50
51
52 class ProjectForm(forms.ModelForm):
53
54 source_language = forms.ModelChoiceField(label=_('Source Language'),
55 queryset=Language.objects.none())
56
57 class Meta(object):
58 model = Project
59 fields = ('id', 'code', 'fullname', 'checkstyle',
60 'filetypes', 'treestyle', 'source_language', 'ignoredfiles',
61 'report_email', 'screenshot_search_prefix', 'disabled',)
62
63 def __init__(self, *args, **kwargs):
64 super(ProjectForm, self).__init__(*args, **kwargs)
65
66 queryset = Language.objects.exclude(code='templates')
67 self.fields['source_language'].queryset = queryset
68
69 self.fields["filetypes"].initial = [
70 self.fields["filetypes"].queryset.get(name="po")]
71
72 if self.instance.id:
73 if (self.instance.treestyle != 'auto' and
74 self.instance.translationproject_set.count() and
75 self.instance.treestyle == self.instance._detect_treestyle()):
76 self.fields['treestyle'].required = False
77
78 def clean_filetypes(self):
79 value = self.cleaned_data.get('filetypes', [])
80 if not self.instance.pk:
81 return value
82 for filetype in self.instance.filetypes.all():
83 if filetype not in value:
84 has_stores = Store.objects.filter(
85 translation_project__project=self.instance, filetype=filetype)
86 if has_stores.exists():
87 raise forms.ValidationError(
88 _("You cannot remove a file type from a Project, "
89 "if there are Stores of that file type ('%s')"
90 % filetype))
91 return value
92
93 def clean_fullname(self):
94 return self.cleaned_data['fullname'].strip()
95
96 def clean_treestyle(self):
97 value = self.cleaned_data.get('treestyle', None)
98 if not value:
99 value = self.instance.treestyle
100 return value
101
102 def clean_code(self):
103 return self.cleaned_data['code'].strip()
104
105
106 class UserForm(forms.ModelForm):
107
108 password = forms.CharField(label=_('Password'), required=False,
109 widget=forms.PasswordInput)
110
111 class Meta(object):
112 model = get_user_model()
113 fields = ('id', 'username', 'is_active', 'full_name', 'email',
114 'is_superuser', 'twitter', 'linkedin', 'website', 'bio')
115
116 def __init__(self, *args, **kwargs):
117 super(UserForm, self).__init__(*args, **kwargs)
118
119 # Require setting the password for new users
120 if self.instance.pk is None:
121 self.fields['password'].required = True
122
123 def save(self, commit=True):
124 password = self.cleaned_data['password']
125
126 if password != '':
127 user = super(UserForm, self).save(commit=False)
128 user.set_password(password)
129
130 if commit:
131 user.save()
132 else:
133 user = super(UserForm, self).save(commit=commit)
134
135 return user
136
137 def clean_linkedin(self):
138 url = self.cleaned_data['linkedin']
139 if url != '':
140 parsed = urlparse.urlparse(url)
141 if 'linkedin.com' not in parsed.netloc or parsed.path == '/':
142 raise forms.ValidationError(
143 _('Please enter a valid LinkedIn user profile URL.')
144 )
145
146 return url
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_app/forms.py b/pootle/apps/pootle_app/forms.py
--- a/pootle/apps/pootle_app/forms.py
+++ b/pootle/apps/pootle_app/forms.py
@@ -25,7 +25,7 @@
class LanguageForm(forms.ModelForm):
- specialchars = forms.CharField(strip=False)
+ specialchars = forms.CharField(strip=False, required=False)
class Meta(object):
model = Language
| {"golden_diff": "diff --git a/pootle/apps/pootle_app/forms.py b/pootle/apps/pootle_app/forms.py\n--- a/pootle/apps/pootle_app/forms.py\n+++ b/pootle/apps/pootle_app/forms.py\n@@ -25,7 +25,7 @@\n \n class LanguageForm(forms.ModelForm):\n \n- specialchars = forms.CharField(strip=False)\n+ specialchars = forms.CharField(strip=False, required=False)\n \n class Meta(object):\n model = Language\n", "issue": "Adding a new languages forces you to specify special characters\nIf you add a new language you are blocked with the UI insisting that you add special characters.\r\n\r\n\r\n\r\nThe part in red is Afrikaans and say \"This field is required\"\r\n\r\nNot all languages require special characters.\r\n\r\nAlso the model allows `blank=True`\r\n\r\nSo somewhere we're blocking.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport re\nimport urlparse\nfrom collections import OrderedDict\n\nfrom django import forms\nfrom django.contrib.auth import get_user_model\n\nfrom pootle.i18n.gettext import ugettext_lazy as _\nfrom pootle_language.models import Language\nfrom pootle_project.models import Project\nfrom pootle_store.models import Store\n\n\nLANGCODE_RE = re.compile(\"^[a-z]{2,}([_-]([a-z]{2,}|[0-9]{3}))*(@[a-z0-9]+)?$\",\n re.IGNORECASE)\n\n\nclass LanguageForm(forms.ModelForm):\n\n specialchars = forms.CharField(strip=False)\n\n class Meta(object):\n model = Language\n fields = ('id', 'code', 'fullname', 'specialchars', 'nplurals',\n 'pluralequation',)\n\n def clean_code(self):\n if (not self.cleaned_data['code'] == 'templates' and\n not LANGCODE_RE.match(self.cleaned_data['code'])):\n raise forms.ValidationError(\n _('Language code does not follow the ISO convention')\n )\n\n return self.cleaned_data[\"code\"]\n\n def clean_specialchars(self):\n \"\"\"Ensures inputted characters are unique.\"\"\"\n chars = self.cleaned_data['specialchars']\n return u''.join(\n OrderedDict((char, None) for char in list(chars)).keys()\n )\n\n\nclass ProjectForm(forms.ModelForm):\n\n source_language = forms.ModelChoiceField(label=_('Source Language'),\n queryset=Language.objects.none())\n\n class Meta(object):\n model = Project\n fields = ('id', 'code', 'fullname', 'checkstyle',\n 'filetypes', 'treestyle', 'source_language', 'ignoredfiles',\n 'report_email', 'screenshot_search_prefix', 'disabled',)\n\n def __init__(self, *args, **kwargs):\n super(ProjectForm, self).__init__(*args, **kwargs)\n\n queryset = Language.objects.exclude(code='templates')\n self.fields['source_language'].queryset = queryset\n\n self.fields[\"filetypes\"].initial = [\n self.fields[\"filetypes\"].queryset.get(name=\"po\")]\n\n if self.instance.id:\n if (self.instance.treestyle != 'auto' and\n self.instance.translationproject_set.count() and\n self.instance.treestyle == self.instance._detect_treestyle()):\n self.fields['treestyle'].required = False\n\n def clean_filetypes(self):\n value = self.cleaned_data.get('filetypes', [])\n if not self.instance.pk:\n return value\n for filetype in self.instance.filetypes.all():\n if filetype not in value:\n has_stores = Store.objects.filter(\n translation_project__project=self.instance, filetype=filetype)\n if has_stores.exists():\n raise forms.ValidationError(\n _(\"You cannot remove a file type from a Project, \"\n \"if there are 
Stores of that file type ('%s')\"\n % filetype))\n return value\n\n def clean_fullname(self):\n return self.cleaned_data['fullname'].strip()\n\n def clean_treestyle(self):\n value = self.cleaned_data.get('treestyle', None)\n if not value:\n value = self.instance.treestyle\n return value\n\n def clean_code(self):\n return self.cleaned_data['code'].strip()\n\n\nclass UserForm(forms.ModelForm):\n\n password = forms.CharField(label=_('Password'), required=False,\n widget=forms.PasswordInput)\n\n class Meta(object):\n model = get_user_model()\n fields = ('id', 'username', 'is_active', 'full_name', 'email',\n 'is_superuser', 'twitter', 'linkedin', 'website', 'bio')\n\n def __init__(self, *args, **kwargs):\n super(UserForm, self).__init__(*args, **kwargs)\n\n # Require setting the password for new users\n if self.instance.pk is None:\n self.fields['password'].required = True\n\n def save(self, commit=True):\n password = self.cleaned_data['password']\n\n if password != '':\n user = super(UserForm, self).save(commit=False)\n user.set_password(password)\n\n if commit:\n user.save()\n else:\n user = super(UserForm, self).save(commit=commit)\n\n return user\n\n def clean_linkedin(self):\n url = self.cleaned_data['linkedin']\n if url != '':\n parsed = urlparse.urlparse(url)\n if 'linkedin.com' not in parsed.netloc or parsed.path == '/':\n raise forms.ValidationError(\n _('Please enter a valid LinkedIn user profile URL.')\n )\n\n return url\n", "path": "pootle/apps/pootle_app/forms.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport re\nimport urlparse\nfrom collections import OrderedDict\n\nfrom django import forms\nfrom django.contrib.auth import get_user_model\n\nfrom pootle.i18n.gettext import ugettext_lazy as _\nfrom pootle_language.models import Language\nfrom pootle_project.models import Project\nfrom pootle_store.models import Store\n\n\nLANGCODE_RE = re.compile(\"^[a-z]{2,}([_-]([a-z]{2,}|[0-9]{3}))*(@[a-z0-9]+)?$\",\n re.IGNORECASE)\n\n\nclass LanguageForm(forms.ModelForm):\n\n specialchars = forms.CharField(strip=False, required=False)\n\n class Meta(object):\n model = Language\n fields = ('id', 'code', 'fullname', 'specialchars', 'nplurals',\n 'pluralequation',)\n\n def clean_code(self):\n if (not self.cleaned_data['code'] == 'templates' and\n not LANGCODE_RE.match(self.cleaned_data['code'])):\n raise forms.ValidationError(\n _('Language code does not follow the ISO convention')\n )\n\n return self.cleaned_data[\"code\"]\n\n def clean_specialchars(self):\n \"\"\"Ensures inputted characters are unique.\"\"\"\n chars = self.cleaned_data['specialchars']\n return u''.join(\n OrderedDict((char, None) for char in list(chars)).keys()\n )\n\n\nclass ProjectForm(forms.ModelForm):\n\n source_language = forms.ModelChoiceField(label=_('Source Language'),\n queryset=Language.objects.none())\n\n class Meta(object):\n model = Project\n fields = ('id', 'code', 'fullname', 'checkstyle',\n 'filetypes', 'treestyle', 'source_language', 'ignoredfiles',\n 'report_email', 'screenshot_search_prefix', 'disabled',)\n\n def __init__(self, *args, **kwargs):\n super(ProjectForm, self).__init__(*args, **kwargs)\n\n queryset = Language.objects.exclude(code='templates')\n self.fields['source_language'].queryset = queryset\n\n 
self.fields[\"filetypes\"].initial = [\n self.fields[\"filetypes\"].queryset.get(name=\"po\")]\n\n if self.instance.id:\n if (self.instance.treestyle != 'auto' and\n self.instance.translationproject_set.count() and\n self.instance.treestyle == self.instance._detect_treestyle()):\n self.fields['treestyle'].required = False\n\n def clean_filetypes(self):\n value = self.cleaned_data.get('filetypes', [])\n if not self.instance.pk:\n return value\n for filetype in self.instance.filetypes.all():\n if filetype not in value:\n has_stores = Store.objects.filter(\n translation_project__project=self.instance, filetype=filetype)\n if has_stores.exists():\n raise forms.ValidationError(\n _(\"You cannot remove a file type from a Project, \"\n \"if there are Stores of that file type ('%s')\"\n % filetype))\n return value\n\n def clean_fullname(self):\n return self.cleaned_data['fullname'].strip()\n\n def clean_treestyle(self):\n value = self.cleaned_data.get('treestyle', None)\n if not value:\n value = self.instance.treestyle\n return value\n\n def clean_code(self):\n return self.cleaned_data['code'].strip()\n\n\nclass UserForm(forms.ModelForm):\n\n password = forms.CharField(label=_('Password'), required=False,\n widget=forms.PasswordInput)\n\n class Meta(object):\n model = get_user_model()\n fields = ('id', 'username', 'is_active', 'full_name', 'email',\n 'is_superuser', 'twitter', 'linkedin', 'website', 'bio')\n\n def __init__(self, *args, **kwargs):\n super(UserForm, self).__init__(*args, **kwargs)\n\n # Require setting the password for new users\n if self.instance.pk is None:\n self.fields['password'].required = True\n\n def save(self, commit=True):\n password = self.cleaned_data['password']\n\n if password != '':\n user = super(UserForm, self).save(commit=False)\n user.set_password(password)\n\n if commit:\n user.save()\n else:\n user = super(UserForm, self).save(commit=commit)\n\n return user\n\n def clean_linkedin(self):\n url = self.cleaned_data['linkedin']\n if url != '':\n parsed = urlparse.urlparse(url)\n if 'linkedin.com' not in parsed.netloc or parsed.path == '/':\n raise forms.ValidationError(\n _('Please enter a valid LinkedIn user profile URL.')\n )\n\n return url\n", "path": "pootle/apps/pootle_app/forms.py"}]} | 1,797 | 106 |
gh_patches_debug_1334 | rasdani/github-patches | git_diff | zulip__zulip-11317 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve formatting for "arguments" sections with long examples.
The line-wrapping for this endpoint's API documentation looks really ugly:

We should either remove the maximum width on "description", or figure out a way to use more than one line for a given endpoint that doesn't look so bad (e.g. having the example be on the next line after the rest of the endpoint description).
--- END ISSUE ---
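
For illustration only: the row template lives in `zerver/lib/bugdown/api_arguments_table_generator.py` (shown below), and one way to realise the second suggestion is to fold the example into the description cell instead of giving it its own narrow column. This is a hedged sketch of the idea, not necessarily the fix the project adopted:

```python
# Sketch: drop the separate "Example" column and render the example on its own
# line under the description, so long JSON examples can wrap naturally.
# (The <thead> in `beginning` would need the matching column removed.)
tr = """
<tr>
    <td><code>{argument}</code></td>
    <td>{required}</td>
    <td>
        {description}
        <p>Example: <code>{example}</code></p>
    </td>
</tr>
"""
```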
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/lib/bugdown/api_arguments_table_generator.py`
Content:
```
1 import re
2 import os
3 import ujson
4
5 from django.utils.html import escape as escape_html
6 from markdown.extensions import Extension
7 from markdown.preprocessors import Preprocessor
8 from zerver.lib.openapi import get_openapi_parameters
9 from typing import Any, Dict, Optional, List
10 import markdown
11
12 REGEXP = re.compile(r'\{generate_api_arguments_table\|\s*(.+?)\s*\|\s*(.+)\s*\}')
13
14
15 class MarkdownArgumentsTableGenerator(Extension):
16 def __init__(self, configs: Optional[Dict[str, Any]]=None) -> None:
17 if configs is None:
18 configs = {}
19 self.config = {
20 'base_path': ['.', 'Default location from which to evaluate relative paths for the JSON files.'],
21 }
22 for key, value in configs.items():
23 self.setConfig(key, value)
24
25 def extendMarkdown(self, md: markdown.Markdown, md_globals: Dict[str, Any]) -> None:
26 md.preprocessors.add(
27 'generate_api_arguments', APIArgumentsTablePreprocessor(md, self.getConfigs()), '_begin'
28 )
29
30
31 class APIArgumentsTablePreprocessor(Preprocessor):
32 def __init__(self, md: markdown.Markdown, config: Dict[str, Any]) -> None:
33 super(APIArgumentsTablePreprocessor, self).__init__(md)
34 self.base_path = config['base_path']
35
36 def run(self, lines: List[str]) -> List[str]:
37 done = False
38 while not done:
39 for line in lines:
40 loc = lines.index(line)
41 match = REGEXP.search(line)
42
43 if not match:
44 continue
45
46 filename = match.group(1)
47 doc_name = match.group(2)
48 filename = os.path.expanduser(filename)
49
50 is_openapi_format = filename.endswith('.yaml')
51
52 if not os.path.isabs(filename):
53 parent_dir = self.base_path
54 filename = os.path.normpath(os.path.join(parent_dir, filename))
55
56 if is_openapi_format:
57 endpoint, method = doc_name.rsplit(':', 1)
58 arguments = [] # type: List[Dict[str, Any]]
59
60 try:
61 arguments = get_openapi_parameters(endpoint, method)
62 except KeyError as e:
63 # Don't raise an exception if the "parameters"
64 # field is missing; we assume that's because the
65 # endpoint doesn't accept any parameters
66 if e.args != ('parameters',):
67 raise e
68 else:
69 with open(filename, 'r') as fp:
70 json_obj = ujson.load(fp)
71 arguments = json_obj[doc_name]
72
73 if arguments:
74 text = self.render_table(arguments)
75 else:
76 text = ['This endpoint does not consume any arguments.']
77 # The line that contains the directive to include the macro
78 # may be preceded or followed by text or tags, in that case
79 # we need to make sure that any preceding or following text
80 # stays the same.
81 line_split = REGEXP.split(line, maxsplit=0)
82 preceding = line_split[0]
83 following = line_split[-1]
84 text = [preceding] + text + [following]
85 lines = lines[:loc] + text + lines[loc+1:]
86 break
87 else:
88 done = True
89 return lines
90
91 def render_table(self, arguments: List[Dict[str, Any]]) -> List[str]:
92 table = []
93 beginning = """
94 <table class="table">
95 <thead>
96 <tr>
97 <th>Argument</th>
98 <th>Example</th>
99 <th>Required</th>
100 <th>Description</th>
101 </tr>
102 </thead>
103 <tbody>
104 """
105 tr = """
106 <tr>
107 <td><code>{argument}</code></td>
108 <td><code>{example}</code></td>
109 <td>{required}</td>
110 <td>{description}</td>
111 </tr>
112 """
113
114 table.append(beginning)
115
116 md_engine = markdown.Markdown(extensions=[])
117
118 for argument in arguments:
119 description = argument['description']
120
121 oneof = ['`' + item + '`'
122 for item in argument.get('schema', {}).get('enum', [])]
123 if oneof:
124 description += '\nMust be one of: {}.'.format(', '.join(oneof))
125
126 default = argument.get('schema', {}).get('default')
127 if default is not None:
128 description += '\nDefaults to `{}`.'.format(ujson.dumps(default))
129
130 # TODO: Swagger allows indicating where the argument goes
131 # (path, querystring, form data...). A column in the table should
132 # be added for this.
133 table.append(tr.format(
134 argument=argument.get('argument') or argument.get('name'),
135 # Show this as JSON to avoid changing the quoting style, which
136 # may cause problems with JSON encoding.
137 example=escape_html(ujson.dumps(argument['example'])),
138 required='Yes' if argument.get('required') else 'No',
139 description=md_engine.convert(description),
140 ))
141
142 table.append("</tbody>")
143 table.append("</table>")
144
145 return table
146
147 def makeExtension(*args: Any, **kwargs: str) -> MarkdownArgumentsTableGenerator:
148 return MarkdownArgumentsTableGenerator(kwargs)
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zerver/lib/bugdown/api_arguments_table_generator.py b/zerver/lib/bugdown/api_arguments_table_generator.py
--- a/zerver/lib/bugdown/api_arguments_table_generator.py
+++ b/zerver/lib/bugdown/api_arguments_table_generator.py
@@ -105,7 +105,7 @@
tr = """
<tr>
<td><code>{argument}</code></td>
- <td><code>{example}</code></td>
+ <td class="json-api-example"><code>{example}</code></td>
<td>{required}</td>
<td>{description}</td>
</tr>
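As an aside, the effect of this one-line change can be previewed without running Zulip: substituting sample values into the patched `tr` template shows where the new `json-api-example` hook class ends up. The argument values below are made up purely for illustration.

```python
# Preview of the patched <tr> template with sample values (illustrative only).
NEW_TR = """
<tr>
    <td><code>{argument}</code></td>
    <td class="json-api-example"><code>{example}</code></td>
    <td>{required}</td>
    <td>{description}</td>
</tr>
"""

row = {
    "argument": "queue_id",
    "example": '"1375801870:2942"',
    "required": "Yes",
    "description": "The ID of an event queue.",
}

print(NEW_TR.format(**row))  # the example cell now carries the json-api-example class
```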
| {"golden_diff": "diff --git a/zerver/lib/bugdown/api_arguments_table_generator.py b/zerver/lib/bugdown/api_arguments_table_generator.py\n--- a/zerver/lib/bugdown/api_arguments_table_generator.py\n+++ b/zerver/lib/bugdown/api_arguments_table_generator.py\n@@ -105,7 +105,7 @@\n tr = \"\"\"\n <tr>\n <td><code>{argument}</code></td>\n- <td><code>{example}</code></td>\n+ <td class=\"json-api-example\"><code>{example}</code></td>\n <td>{required}</td>\n <td>{description}</td>\n </tr>\n", "issue": "Improve formatting for \"arguments\" sections with long examples.\nThe line-wrapping for this endpoint's API documentation looks really ugly:\r\n\r\n\r\n\r\nWe should either remove the maximum width on \"description\", or figure out a way to use more than one line for a given endpoint that doesn't look so bad (e.g. having the example be on the next line after the rest of the endpoint description).\n", "before_files": [{"content": "import re\nimport os\nimport ujson\n\nfrom django.utils.html import escape as escape_html\nfrom markdown.extensions import Extension\nfrom markdown.preprocessors import Preprocessor\nfrom zerver.lib.openapi import get_openapi_parameters\nfrom typing import Any, Dict, Optional, List\nimport markdown\n\nREGEXP = re.compile(r'\\{generate_api_arguments_table\\|\\s*(.+?)\\s*\\|\\s*(.+)\\s*\\}')\n\n\nclass MarkdownArgumentsTableGenerator(Extension):\n def __init__(self, configs: Optional[Dict[str, Any]]=None) -> None:\n if configs is None:\n configs = {}\n self.config = {\n 'base_path': ['.', 'Default location from which to evaluate relative paths for the JSON files.'],\n }\n for key, value in configs.items():\n self.setConfig(key, value)\n\n def extendMarkdown(self, md: markdown.Markdown, md_globals: Dict[str, Any]) -> None:\n md.preprocessors.add(\n 'generate_api_arguments', APIArgumentsTablePreprocessor(md, self.getConfigs()), '_begin'\n )\n\n\nclass APIArgumentsTablePreprocessor(Preprocessor):\n def __init__(self, md: markdown.Markdown, config: Dict[str, Any]) -> None:\n super(APIArgumentsTablePreprocessor, self).__init__(md)\n self.base_path = config['base_path']\n\n def run(self, lines: List[str]) -> List[str]:\n done = False\n while not done:\n for line in lines:\n loc = lines.index(line)\n match = REGEXP.search(line)\n\n if not match:\n continue\n\n filename = match.group(1)\n doc_name = match.group(2)\n filename = os.path.expanduser(filename)\n\n is_openapi_format = filename.endswith('.yaml')\n\n if not os.path.isabs(filename):\n parent_dir = self.base_path\n filename = os.path.normpath(os.path.join(parent_dir, filename))\n\n if is_openapi_format:\n endpoint, method = doc_name.rsplit(':', 1)\n arguments = [] # type: List[Dict[str, Any]]\n\n try:\n arguments = get_openapi_parameters(endpoint, method)\n except KeyError as e:\n # Don't raise an exception if the \"parameters\"\n # field is missing; we assume that's because the\n # endpoint doesn't accept any parameters\n if e.args != ('parameters',):\n raise e\n else:\n with open(filename, 'r') as fp:\n json_obj = ujson.load(fp)\n arguments = json_obj[doc_name]\n\n if arguments:\n text = self.render_table(arguments)\n else:\n text = ['This endpoint does not consume any arguments.']\n # The line that contains the directive to include the macro\n # may be preceded or followed by text or tags, in that case\n # we need to make sure that any preceding or following text\n # stays the same.\n line_split = REGEXP.split(line, maxsplit=0)\n preceding = line_split[0]\n following = line_split[-1]\n text = [preceding] + text + 
[following]\n lines = lines[:loc] + text + lines[loc+1:]\n break\n else:\n done = True\n return lines\n\n def render_table(self, arguments: List[Dict[str, Any]]) -> List[str]:\n table = []\n beginning = \"\"\"\n<table class=\"table\">\n <thead>\n <tr>\n <th>Argument</th>\n <th>Example</th>\n <th>Required</th>\n <th>Description</th>\n </tr>\n </thead>\n<tbody>\n\"\"\"\n tr = \"\"\"\n<tr>\n <td><code>{argument}</code></td>\n <td><code>{example}</code></td>\n <td>{required}</td>\n <td>{description}</td>\n</tr>\n\"\"\"\n\n table.append(beginning)\n\n md_engine = markdown.Markdown(extensions=[])\n\n for argument in arguments:\n description = argument['description']\n\n oneof = ['`' + item + '`'\n for item in argument.get('schema', {}).get('enum', [])]\n if oneof:\n description += '\\nMust be one of: {}.'.format(', '.join(oneof))\n\n default = argument.get('schema', {}).get('default')\n if default is not None:\n description += '\\nDefaults to `{}`.'.format(ujson.dumps(default))\n\n # TODO: Swagger allows indicating where the argument goes\n # (path, querystring, form data...). A column in the table should\n # be added for this.\n table.append(tr.format(\n argument=argument.get('argument') or argument.get('name'),\n # Show this as JSON to avoid changing the quoting style, which\n # may cause problems with JSON encoding.\n example=escape_html(ujson.dumps(argument['example'])),\n required='Yes' if argument.get('required') else 'No',\n description=md_engine.convert(description),\n ))\n\n table.append(\"</tbody>\")\n table.append(\"</table>\")\n\n return table\n\ndef makeExtension(*args: Any, **kwargs: str) -> MarkdownArgumentsTableGenerator:\n return MarkdownArgumentsTableGenerator(kwargs)\n", "path": "zerver/lib/bugdown/api_arguments_table_generator.py"}], "after_files": [{"content": "import re\nimport os\nimport ujson\n\nfrom django.utils.html import escape as escape_html\nfrom markdown.extensions import Extension\nfrom markdown.preprocessors import Preprocessor\nfrom zerver.lib.openapi import get_openapi_parameters\nfrom typing import Any, Dict, Optional, List\nimport markdown\n\nREGEXP = re.compile(r'\\{generate_api_arguments_table\\|\\s*(.+?)\\s*\\|\\s*(.+)\\s*\\}')\n\n\nclass MarkdownArgumentsTableGenerator(Extension):\n def __init__(self, configs: Optional[Dict[str, Any]]=None) -> None:\n if configs is None:\n configs = {}\n self.config = {\n 'base_path': ['.', 'Default location from which to evaluate relative paths for the JSON files.'],\n }\n for key, value in configs.items():\n self.setConfig(key, value)\n\n def extendMarkdown(self, md: markdown.Markdown, md_globals: Dict[str, Any]) -> None:\n md.preprocessors.add(\n 'generate_api_arguments', APIArgumentsTablePreprocessor(md, self.getConfigs()), '_begin'\n )\n\n\nclass APIArgumentsTablePreprocessor(Preprocessor):\n def __init__(self, md: markdown.Markdown, config: Dict[str, Any]) -> None:\n super(APIArgumentsTablePreprocessor, self).__init__(md)\n self.base_path = config['base_path']\n\n def run(self, lines: List[str]) -> List[str]:\n done = False\n while not done:\n for line in lines:\n loc = lines.index(line)\n match = REGEXP.search(line)\n\n if not match:\n continue\n\n filename = match.group(1)\n doc_name = match.group(2)\n filename = os.path.expanduser(filename)\n\n is_openapi_format = filename.endswith('.yaml')\n\n if not os.path.isabs(filename):\n parent_dir = self.base_path\n filename = os.path.normpath(os.path.join(parent_dir, filename))\n\n if is_openapi_format:\n endpoint, method = doc_name.rsplit(':', 1)\n arguments = [] # 
type: List[Dict[str, Any]]\n\n try:\n arguments = get_openapi_parameters(endpoint, method)\n except KeyError as e:\n # Don't raise an exception if the \"parameters\"\n # field is missing; we assume that's because the\n # endpoint doesn't accept any parameters\n if e.args != ('parameters',):\n raise e\n else:\n with open(filename, 'r') as fp:\n json_obj = ujson.load(fp)\n arguments = json_obj[doc_name]\n\n if arguments:\n text = self.render_table(arguments)\n else:\n text = ['This endpoint does not consume any arguments.']\n # The line that contains the directive to include the macro\n # may be preceded or followed by text or tags, in that case\n # we need to make sure that any preceding or following text\n # stays the same.\n line_split = REGEXP.split(line, maxsplit=0)\n preceding = line_split[0]\n following = line_split[-1]\n text = [preceding] + text + [following]\n lines = lines[:loc] + text + lines[loc+1:]\n break\n else:\n done = True\n return lines\n\n def render_table(self, arguments: List[Dict[str, Any]]) -> List[str]:\n table = []\n beginning = \"\"\"\n<table class=\"table\">\n <thead>\n <tr>\n <th>Argument</th>\n <th>Example</th>\n <th>Required</th>\n <th>Description</th>\n </tr>\n </thead>\n<tbody>\n\"\"\"\n tr = \"\"\"\n<tr>\n <td><code>{argument}</code></td>\n <td class=\"json-api-example\"><code>{example}</code></td>\n <td>{required}</td>\n <td>{description}</td>\n</tr>\n\"\"\"\n\n table.append(beginning)\n\n md_engine = markdown.Markdown(extensions=[])\n\n for argument in arguments:\n description = argument['description']\n\n oneof = ['`' + item + '`'\n for item in argument.get('schema', {}).get('enum', [])]\n if oneof:\n description += '\\nMust be one of: {}.'.format(', '.join(oneof))\n\n default = argument.get('schema', {}).get('default')\n if default is not None:\n description += '\\nDefaults to `{}`.'.format(ujson.dumps(default))\n\n # TODO: Swagger allows indicating where the argument goes\n # (path, querystring, form data...). A column in the table should\n # be added for this.\n table.append(tr.format(\n argument=argument.get('argument') or argument.get('name'),\n # Show this as JSON to avoid changing the quoting style, which\n # may cause problems with JSON encoding.\n example=escape_html(ujson.dumps(argument['example'])),\n required='Yes' if argument.get('required') else 'No',\n description=md_engine.convert(description),\n ))\n\n table.append(\"</tbody>\")\n table.append(\"</table>\")\n\n return table\n\ndef makeExtension(*args: Any, **kwargs: str) -> MarkdownArgumentsTableGenerator:\n return MarkdownArgumentsTableGenerator(kwargs)\n", "path": "zerver/lib/bugdown/api_arguments_table_generator.py"}]} | 1,871 | 139 |
gh_patches_debug_5532 | rasdani/github-patches | git_diff | pyca__cryptography-3216 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HKDF key-length inconsistency
For too small key sizes, `HKDF.derive()` outputs an empty array instead of a small key:
Program:
```python
#!/usr/bin/env python3.5
import cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.backends import default_backend
print("cryptography.io:{}".format(cryptography.__version__))
hkdf = HKDF(algorithm=hashes.SHA256(), length=4, salt=b"salt",
info=b"some-test", backend=default_backend())
key = hkdf.derive(b"my secret passphrase")
print("Derived key: {}".format(key))
```
Output:
```
cryptography.io:1.5.2
Derived key: b''
```
Suggested fix:
I am not quite sure why the division by 8 in the snippet below was added. The cumulative size of the output array is always `self._algorithm.digest_size * len(output)` and thus we can stop after `self._algorithm.digest_size * len(output) >= self._length`. At first I thought this might be a clever trick taken from the paper, but I didn't find it there. I guess there was a mixup between bits and bytes at some point.
```python
# class HKDFExpand
def _expand(self, key_material):
output = [b""]
counter = 1
while (self._algorithm.digest_size // 8) * len(output) < self._length:
h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)
h.update(output[-1])
h.update(self._info)
h.update(six.int2byte(counter))
output.append(h.finalize())
counter += 1
return b"".join(output)[:self._length]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/primitives/kdf/hkdf.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import six
8
9 from cryptography import utils
10 from cryptography.exceptions import (
11 AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons
12 )
13 from cryptography.hazmat.backends.interfaces import HMACBackend
14 from cryptography.hazmat.primitives import constant_time, hmac
15 from cryptography.hazmat.primitives.kdf import KeyDerivationFunction
16
17
18 @utils.register_interface(KeyDerivationFunction)
19 class HKDF(object):
20 def __init__(self, algorithm, length, salt, info, backend):
21 if not isinstance(backend, HMACBackend):
22 raise UnsupportedAlgorithm(
23 "Backend object does not implement HMACBackend.",
24 _Reasons.BACKEND_MISSING_INTERFACE
25 )
26
27 self._algorithm = algorithm
28
29 if not (salt is None or isinstance(salt, bytes)):
30 raise TypeError("salt must be bytes.")
31
32 if salt is None:
33 salt = b"\x00" * (self._algorithm.digest_size // 8)
34
35 self._salt = salt
36
37 self._backend = backend
38
39 self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)
40
41 def _extract(self, key_material):
42 h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)
43 h.update(key_material)
44 return h.finalize()
45
46 def derive(self, key_material):
47 if not isinstance(key_material, bytes):
48 raise TypeError("key_material must be bytes.")
49
50 return self._hkdf_expand.derive(self._extract(key_material))
51
52 def verify(self, key_material, expected_key):
53 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
54 raise InvalidKey
55
56
57 @utils.register_interface(KeyDerivationFunction)
58 class HKDFExpand(object):
59 def __init__(self, algorithm, length, info, backend):
60 if not isinstance(backend, HMACBackend):
61 raise UnsupportedAlgorithm(
62 "Backend object does not implement HMACBackend.",
63 _Reasons.BACKEND_MISSING_INTERFACE
64 )
65
66 self._algorithm = algorithm
67
68 self._backend = backend
69
70 max_length = 255 * (algorithm.digest_size // 8)
71
72 if length > max_length:
73 raise ValueError(
74 "Can not derive keys larger than {0} octets.".format(
75 max_length
76 ))
77
78 self._length = length
79
80 if not (info is None or isinstance(info, bytes)):
81 raise TypeError("info must be bytes.")
82
83 if info is None:
84 info = b""
85
86 self._info = info
87
88 self._used = False
89
90 def _expand(self, key_material):
91 output = [b""]
92 counter = 1
93
94 while (self._algorithm.digest_size // 8) * len(output) < self._length:
95 h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)
96 h.update(output[-1])
97 h.update(self._info)
98 h.update(six.int2byte(counter))
99 output.append(h.finalize())
100 counter += 1
101
102 return b"".join(output)[:self._length]
103
104 def derive(self, key_material):
105 if not isinstance(key_material, bytes):
106 raise TypeError("key_material must be bytes.")
107
108 if self._used:
109 raise AlreadyFinalized
110
111 self._used = True
112 return self._expand(key_material)
113
114 def verify(self, key_material, expected_key):
115 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
116 raise InvalidKey
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py
--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py
+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py
@@ -91,7 +91,7 @@
output = [b""]
counter = 1
- while (self._algorithm.digest_size // 8) * len(output) < self._length:
+ while self._algorithm.digest_size * (len(output) - 1) < self._length:
h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)
h.update(output[-1])
h.update(self._info)
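As an aside, the corrected loop bound can be sanity-checked outside the library using only the standard `hmac` and `hashlib` modules. The helper below is a minimal sketch that mirrors the patched condition `digest_size * (len(output) - 1) < length`; it is not part of the cryptography package.

```python
import hashlib
import hmac

def hkdf_expand_sketch(prk: bytes, info: bytes, length: int, hash_name: str = "sha256") -> bytes:
    """Standalone HKDF-Expand using the patched stop condition."""
    digest_size = hashlib.new(hash_name).digest_size
    output = [b""]
    counter = 1
    # len(output) - 1 counts the HMAC blocks generated so far, in bytes.
    while digest_size * (len(output) - 1) < length:
        output.append(hmac.new(prk, output[-1] + info + bytes([counter]), hash_name).digest())
        counter += 1
    return b"".join(output)[:length]

prk = hmac.new(b"salt", b"my secret passphrase", "sha256").digest()  # HKDF-Extract step
print(hkdf_expand_sketch(prk, b"some-test", 4))  # 4 bytes, no longer b''
```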
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py\n+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n@@ -91,7 +91,7 @@\n output = [b\"\"]\n counter = 1\n \n- while (self._algorithm.digest_size // 8) * len(output) < self._length:\n+ while self._algorithm.digest_size * (len(output) - 1) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n", "issue": "HKDF key-length inconsistency\nFor too small key sizes, `HKDF.derive()` outputs an empty array instead of a small key:\r\n\r\nProgram:\r\n```python\r\n#!/usr/bin/env python3.5\r\nimport cryptography\r\nfrom cryptography.hazmat.primitives import hashes\r\nfrom cryptography.hazmat.primitives.kdf.hkdf import HKDF\r\nfrom cryptography.hazmat.backends import default_backend\r\n\r\nprint(\"cryptography.io:{}\".format(cryptography.__version__))\r\n\r\nhkdf = HKDF(algorithm=hashes.SHA256(), length=4, salt=b\"salt\",\r\n info=b\"some-test\", backend=default_backend())\r\n\r\nkey = hkdf.derive(b\"my secret passphrase\")\r\nprint(\"Derived key: {}\".format(key))\r\n```\r\n\r\nOutput:\r\n```\r\ncryptography.io:1.5.2\r\nDerived key: b''\r\n```\r\n\r\nSuggested fix:\r\n\r\nI am not quite sure why the division by 8 in the snippet below was added. The cumulative size of the output array is always `self._algorithm.digest_size * len(output)` and thus we can stop after `self._algorithm.digest_size * len(output) >= self._length`. At first I thought this might be a clever trick taken from the paper, but I didn't find it there. I guess there was a mixup between bits and bytes at some point.\r\n\r\n```python\r\n# class HKDFExpand\r\ndef _expand(self, key_material):\r\n output = [b\"\"]\r\n counter = 1\r\n\r\n while (self._algorithm.digest_size // 8) * len(output) < self._length:\r\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\r\n h.update(output[-1])\r\n h.update(self._info)\r\n h.update(six.int2byte(counter))\r\n output.append(h.finalize())\r\n counter += 1\r\n\r\n return b\"\".join(output)[:self._length]\r\n```\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.backends.interfaces import HMACBackend\nfrom cryptography.hazmat.primitives import constant_time, hmac\nfrom cryptography.hazmat.primitives.kdf import KeyDerivationFunction\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDF(object):\n def __init__(self, algorithm, length, salt, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n if not (salt is None or isinstance(salt, bytes)):\n raise TypeError(\"salt must be bytes.\")\n\n if salt is None:\n salt = b\"\\x00\" * (self._algorithm.digest_size // 8)\n\n self._salt = salt\n\n self._backend = backend\n\n self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)\n\n def _extract(self, key_material):\n h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)\n h.update(key_material)\n return h.finalize()\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n return self._hkdf_expand.derive(self._extract(key_material))\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDFExpand(object):\n def __init__(self, algorithm, length, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n self._backend = backend\n\n max_length = 255 * (algorithm.digest_size // 8)\n\n if length > max_length:\n raise ValueError(\n \"Can not derive keys larger than {0} octets.\".format(\n max_length\n ))\n\n self._length = length\n\n if not (info is None or isinstance(info, bytes)):\n raise TypeError(\"info must be bytes.\")\n\n if info is None:\n info = b\"\"\n\n self._info = info\n\n self._used = False\n\n def _expand(self, key_material):\n output = [b\"\"]\n counter = 1\n\n while (self._algorithm.digest_size // 8) * len(output) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n h.update(six.int2byte(counter))\n output.append(h.finalize())\n counter += 1\n\n return b\"\".join(output)[:self._length]\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n if self._used:\n raise AlreadyFinalized\n\n self._used = True\n return self._expand(key_material)\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n", "path": "src/cryptography/hazmat/primitives/kdf/hkdf.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.backends.interfaces import HMACBackend\nfrom cryptography.hazmat.primitives import constant_time, hmac\nfrom cryptography.hazmat.primitives.kdf import KeyDerivationFunction\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDF(object):\n def __init__(self, algorithm, length, salt, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n if not (salt is None or isinstance(salt, bytes)):\n raise TypeError(\"salt must be bytes.\")\n\n if salt is None:\n salt = b\"\\x00\" * (self._algorithm.digest_size // 8)\n\n self._salt = salt\n\n self._backend = backend\n\n self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)\n\n def _extract(self, key_material):\n h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)\n h.update(key_material)\n return h.finalize()\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n return self._hkdf_expand.derive(self._extract(key_material))\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDFExpand(object):\n def __init__(self, algorithm, length, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n self._backend = backend\n\n max_length = 255 * (algorithm.digest_size // 8)\n\n if length > max_length:\n raise ValueError(\n \"Can not derive keys larger than {0} octets.\".format(\n max_length\n ))\n\n self._length = length\n\n if not (info is None or isinstance(info, bytes)):\n raise TypeError(\"info must be bytes.\")\n\n if info is None:\n info = b\"\"\n\n self._info = info\n\n self._used = False\n\n def _expand(self, key_material):\n output = [b\"\"]\n counter = 1\n\n while self._algorithm.digest_size * (len(output) - 1) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n h.update(six.int2byte(counter))\n output.append(h.finalize())\n counter += 1\n\n return b\"\".join(output)[:self._length]\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n if self._used:\n raise AlreadyFinalized\n\n self._used = True\n return self._expand(key_material)\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n", "path": "src/cryptography/hazmat/primitives/kdf/hkdf.py"}]} | 1,713 | 169 |
gh_patches_debug_20557 | rasdani/github-patches | git_diff | iterative__dvc-5408 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dvc exp gc: can't run experiments after garbage collection
# Bug Report
## Description
Experiments fail after garbage collection via `dvc exp gc`.
### Reproduce
1. dvc init
2. dvc stage add -n foo -M foo.yaml 'echo "score: 1" > foo.yaml'
3. git add .
4. git commit -m "add stage"
5. dvc exp run
6. dvc exp gc -fT
7. dvc exp run
```console
Stage 'foo' didn't change, skipping
ERROR: Failed to set 'refs/exps/d1/5cee9fd7a4fc20c5d7167671df4ed78b3cfb7a/exp-985c5'
ERROR: Failed to set 'refs/exps/d1/5cee9fd7a4fc20c5d7167671df4ed78b3cfb7a/exp-985c5'
```
### Expected
`dvc exp run` to successfully run experiment.
### Environment information
**Output of `dvc version`:**
```console
$ dvc version
DVC version: 2.0.0a0+adedd1
---------------------------------
Platform: Python 3.9.1 on Linux-5.8.0-38-generic-x86_64-with-glibc2.31
Supports: http, https
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: None
Workspace directory: ext4 on /dev/mapper/vgubuntu-root
Repo: dvc, git
```
**Additional Information (if any):**
<!--
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
-->
--- END ISSUE ---
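As an aside, the reproduction steps above can be replayed in a throwaway directory with a small driver. This sketch simply wraps the reporter's commands (plus a `git init` and a commit identity, which the steps assume) and requires git and dvc on PATH; it is only an illustrative harness.

```python
import os
import subprocess
import tempfile

CMDS = [
    "git init -q",
    "git config user.email [email protected] && git config user.name dev",
    "dvc init -q",
    "dvc stage add -n foo -M foo.yaml 'echo \"score: 1\" > foo.yaml'",
    "git add .",
    "git commit -qm 'add stage'",
    "dvc exp run",
    "dvc exp gc -fT",
    "dvc exp run",  # fails with "Failed to set 'refs/exps/...'" before the fix
]

with tempfile.TemporaryDirectory() as tmp:
    os.chdir(tmp)
    for cmd in CMDS:
        print(f"$ {cmd}")
        subprocess.run(cmd, shell=True, check=False)
```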
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/experiments/gc.py`
Content:
```
1 import logging
2 from typing import Optional
3
4 from dvc.repo import locked
5
6 from .utils import exp_refs
7
8 logger = logging.getLogger(__name__)
9
10
11 @locked
12 def gc(
13 repo,
14 all_branches: Optional[bool] = False,
15 all_tags: Optional[bool] = False,
16 all_commits: Optional[bool] = False,
17 workspace: Optional[bool] = False,
18 queued: Optional[bool] = False,
19 ):
20 keep_revs = set(
21 repo.brancher(
22 all_branches=all_branches,
23 all_tags=all_tags,
24 all_commits=all_commits,
25 sha_only=True,
26 )
27 )
28 if workspace:
29 keep_revs.add(repo.scm.get_rev())
30
31 if not keep_revs:
32 return 0
33
34 removed = 0
35 for ref_info in exp_refs(repo.scm):
36 if ref_info.baseline_sha not in keep_revs:
37 repo.scm.remove_ref(str(ref_info))
38 removed += 1
39
40 delete_stashes = []
41 for _, entry in repo.experiments.stash_revs.items():
42 if not queued or entry.baseline_rev not in keep_revs:
43 delete_stashes.append(entry.index)
44 for index in sorted(delete_stashes, reverse=True):
45 repo.experiments.stash.drop(index)
46 removed += len(delete_stashes)
47
48 return removed
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/repo/experiments/gc.py b/dvc/repo/experiments/gc.py
--- a/dvc/repo/experiments/gc.py
+++ b/dvc/repo/experiments/gc.py
@@ -3,6 +3,7 @@
from dvc.repo import locked
+from .base import EXEC_APPLY, EXEC_BRANCH, EXEC_CHECKPOINT
from .utils import exp_refs
logger = logging.getLogger(__name__)
@@ -31,9 +32,20 @@
if not keep_revs:
return 0
+ exec_branch = repo.scm.get_ref(EXEC_BRANCH, follow=False)
+ exec_apply = repo.scm.get_ref(EXEC_APPLY)
+ exec_checkpoint = repo.scm.get_ref(EXEC_CHECKPOINT)
+
removed = 0
for ref_info in exp_refs(repo.scm):
if ref_info.baseline_sha not in keep_revs:
+ ref = repo.scm.get_ref(str(ref_info))
+ if exec_branch and str(ref_info):
+ repo.scm.remove_ref(EXEC_BRANCH)
+ if exec_apply and exec_apply == ref:
+ repo.scm.remove_ref(EXEC_APPLY)
+ if exec_checkpoint and exec_checkpoint == ref:
+ repo.scm.remove_ref(EXEC_CHECKPOINT)
repo.scm.remove_ref(str(ref_info))
removed += 1
| {"golden_diff": "diff --git a/dvc/repo/experiments/gc.py b/dvc/repo/experiments/gc.py\n--- a/dvc/repo/experiments/gc.py\n+++ b/dvc/repo/experiments/gc.py\n@@ -3,6 +3,7 @@\n \n from dvc.repo import locked\n \n+from .base import EXEC_APPLY, EXEC_BRANCH, EXEC_CHECKPOINT\n from .utils import exp_refs\n \n logger = logging.getLogger(__name__)\n@@ -31,9 +32,20 @@\n if not keep_revs:\n return 0\n \n+ exec_branch = repo.scm.get_ref(EXEC_BRANCH, follow=False)\n+ exec_apply = repo.scm.get_ref(EXEC_APPLY)\n+ exec_checkpoint = repo.scm.get_ref(EXEC_CHECKPOINT)\n+\n removed = 0\n for ref_info in exp_refs(repo.scm):\n if ref_info.baseline_sha not in keep_revs:\n+ ref = repo.scm.get_ref(str(ref_info))\n+ if exec_branch and str(ref_info):\n+ repo.scm.remove_ref(EXEC_BRANCH)\n+ if exec_apply and exec_apply == ref:\n+ repo.scm.remove_ref(EXEC_APPLY)\n+ if exec_checkpoint and exec_checkpoint == ref:\n+ repo.scm.remove_ref(EXEC_CHECKPOINT)\n repo.scm.remove_ref(str(ref_info))\n removed += 1\n", "issue": "dvc exp gc: can't run experiments after garbage collection\n# Bug Report\r\n\r\n## Description\r\n\r\nExperiments fail after garbage collection via `dvc exp gc`.\r\n\r\n### Reproduce\r\n\r\n1. dvc init\r\n2. dvc stage add -n foo -M foo.yaml 'echo \"score: 1\" > foo.yaml'\r\n3. git add .\r\n4. git commit -m \"add stage\"\r\n5. dvc exp run\r\n6. dvc exp gc -fT\r\n7. dvc exp run\r\n\r\n```console\r\nStage 'foo' didn't change, skipping\r\nERROR: Failed to set 'refs/exps/d1/5cee9fd7a4fc20c5d7167671df4ed78b3cfb7a/exp-985c5'\r\nERROR: Failed to set 'refs/exps/d1/5cee9fd7a4fc20c5d7167671df4ed78b3cfb7a/exp-985c5'\r\n```\r\n\r\n### Expected\r\n\r\n`dvc exp run` to successfully run experiment.\r\n\r\n### Environment information\r\n\r\n**Output of `dvc version`:**\r\n\r\n```console\r\n$ dvc version\r\nDVC version: 2.0.0a0+adedd1\r\n---------------------------------\r\nPlatform: Python 3.9.1 on Linux-5.8.0-38-generic-x86_64-with-glibc2.31\r\nSupports: http, https\r\nCache types: <https://error.dvc.org/no-dvc-cache>\r\nCaches: local\r\nRemotes: None\r\nWorkspace directory: ext4 on /dev/mapper/vgubuntu-root\r\nRepo: dvc, git\r\n```\r\n\r\n**Additional Information (if any):**\r\n\r\n<!--\r\nIf applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.\r\n-->\n", "before_files": [{"content": "import logging\nfrom typing import Optional\n\nfrom dvc.repo import locked\n\nfrom .utils import exp_refs\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef gc(\n repo,\n all_branches: Optional[bool] = False,\n all_tags: Optional[bool] = False,\n all_commits: Optional[bool] = False,\n workspace: Optional[bool] = False,\n queued: Optional[bool] = False,\n):\n keep_revs = set(\n repo.brancher(\n all_branches=all_branches,\n all_tags=all_tags,\n all_commits=all_commits,\n sha_only=True,\n )\n )\n if workspace:\n keep_revs.add(repo.scm.get_rev())\n\n if not keep_revs:\n return 0\n\n removed = 0\n for ref_info in exp_refs(repo.scm):\n if ref_info.baseline_sha not in keep_revs:\n repo.scm.remove_ref(str(ref_info))\n removed += 1\n\n delete_stashes = []\n for _, entry in repo.experiments.stash_revs.items():\n if not queued or entry.baseline_rev not in keep_revs:\n delete_stashes.append(entry.index)\n for index in sorted(delete_stashes, reverse=True):\n repo.experiments.stash.drop(index)\n removed += len(delete_stashes)\n\n return removed\n", "path": "dvc/repo/experiments/gc.py"}], "after_files": [{"content": "import logging\nfrom typing import Optional\n\nfrom dvc.repo import locked\n\nfrom .base 
import EXEC_APPLY, EXEC_BRANCH, EXEC_CHECKPOINT\nfrom .utils import exp_refs\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef gc(\n repo,\n all_branches: Optional[bool] = False,\n all_tags: Optional[bool] = False,\n all_commits: Optional[bool] = False,\n workspace: Optional[bool] = False,\n queued: Optional[bool] = False,\n):\n keep_revs = set(\n repo.brancher(\n all_branches=all_branches,\n all_tags=all_tags,\n all_commits=all_commits,\n sha_only=True,\n )\n )\n if workspace:\n keep_revs.add(repo.scm.get_rev())\n\n if not keep_revs:\n return 0\n\n exec_branch = repo.scm.get_ref(EXEC_BRANCH, follow=False)\n exec_apply = repo.scm.get_ref(EXEC_APPLY)\n exec_checkpoint = repo.scm.get_ref(EXEC_CHECKPOINT)\n\n removed = 0\n for ref_info in exp_refs(repo.scm):\n if ref_info.baseline_sha not in keep_revs:\n ref = repo.scm.get_ref(str(ref_info))\n if exec_branch and str(ref_info):\n repo.scm.remove_ref(EXEC_BRANCH)\n if exec_apply and exec_apply == ref:\n repo.scm.remove_ref(EXEC_APPLY)\n if exec_checkpoint and exec_checkpoint == ref:\n repo.scm.remove_ref(EXEC_CHECKPOINT)\n repo.scm.remove_ref(str(ref_info))\n removed += 1\n\n delete_stashes = []\n for _, entry in repo.experiments.stash_revs.items():\n if not queued or entry.baseline_rev not in keep_revs:\n delete_stashes.append(entry.index)\n for index in sorted(delete_stashes, reverse=True):\n repo.experiments.stash.drop(index)\n removed += len(delete_stashes)\n\n return removed\n", "path": "dvc/repo/experiments/gc.py"}]} | 1,056 | 295 |
gh_patches_debug_23973 | rasdani/github-patches | git_diff | cal-itp__benefits-579 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto select Transit Agency when only 1 Active
Simplify the flow for a user, and avoid having to deal with the design of a single button on the `core:index` page.
When there is only 1 active transit agency, and user lands on `core:index`, forward to the active agency's `core:agency_index` page.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/core/views.py`
Content:
```
1 """
2 The core application: view definition for the root of the webapp.
3 """
4 from django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError
5 from django.template import loader
6 from django.template.response import TemplateResponse
7 from django.urls import reverse
8 from django.utils.translation import gettext as _
9
10 from . import models, session, viewmodels
11 from .middleware import pageview_decorator
12
13
14 def PageTemplateResponse(request, page_vm):
15 """Helper returns a TemplateResponse using the common page template."""
16 return TemplateResponse(request, "core/page.html", page_vm.context_dict())
17
18
19 def _index_content_title():
20 """Helper returns the content title for the common index page."""
21 return _("core.pages.index.content_title")
22
23
24 def _index_url():
25 """Helper computes the index url path."""
26 return reverse("core:index")
27
28
29 @pageview_decorator
30 def index(request):
31 """View handler for the main entry page."""
32 session.reset(request)
33
34 # generate a button to the landing page for each active agency
35 agencies = models.TransitAgency.all_active()
36 buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]
37 buttons[0].classes.append("mt-3")
38 buttons[0].label = _("core.pages.index.chooseprovider")
39
40 page = viewmodels.Page(
41 title=_("core.pages.index.title"),
42 content_title=_index_content_title(),
43 buttons=buttons,
44 classes="home",
45 )
46
47 return PageTemplateResponse(request, page)
48
49
50 @pageview_decorator
51 def agency_index(request, agency):
52 """View handler for an agency entry page."""
53 session.reset(request)
54 session.update(request, agency=agency, origin=agency.index_url)
55
56 button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index"))
57 button.label = _("core.pages.agency_index.button.label")
58
59 page = viewmodels.Page(
60 title=_("core.pages.agency_index.title"),
61 content_title=_("core.pages.agency_index.content_title"),
62 button=button,
63 classes="home",
64 )
65
66 help_page = reverse("core:help")
67 context_dict = {**page.context_dict(), **{"info_link": f"{help_page}#about"}}
68
69 return TemplateResponse(request, "core/agency_index.html", context_dict)
70
71
72 @pageview_decorator
73 def help(request):
74 """View handler for the help page."""
75 if session.active_agency(request):
76 agency = session.agency(request)
77 buttons = viewmodels.Button.agency_contact_links(agency)
78 else:
79 buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]
80
81 buttons.append(viewmodels.Button.home(request, _("core.buttons.back")))
82
83 page = viewmodels.Page(
84 title=_("core.buttons.help"),
85 content_title=_("core.buttons.help"),
86 buttons=buttons,
87 noimage=True,
88 )
89
90 return TemplateResponse(request, "core/help.html", page.context_dict())
91
92
93 @pageview_decorator
94 def bad_request(request, exception, template_name="400.html"):
95 """View handler for HTTP 400 Bad Request responses."""
96 if session.active_agency(request):
97 session.update(request, origin=session.agency(request).index_url)
98 else:
99 session.update(request, origin=_index_url())
100
101 home = viewmodels.Button.home(request)
102 page = viewmodels.ErrorPage.error(button=home)
103 t = loader.get_template(template_name)
104
105 return HttpResponseBadRequest(t.render(page.context_dict()))
106
107
108 @pageview_decorator
109 def csrf_failure(request, reason):
110 """
111 View handler for CSRF_FAILURE_VIEW with custom data.
112 """
113 if session.active_agency(request):
114 session.update(request, origin=session.agency(request).index_url)
115 else:
116 session.update(request, origin=_index_url())
117
118 home = viewmodels.Button.home(request)
119 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
120 t = loader.get_template("400.html")
121
122 return HttpResponseNotFound(t.render(page.context_dict()))
123
124
125 @pageview_decorator
126 def page_not_found(request, exception, template_name="404.html"):
127 """View handler for HTTP 404 Not Found responses."""
128 if session.active_agency(request):
129 session.update(request, origin=session.agency(request).index_url)
130 else:
131 session.update(request, origin=_index_url())
132
133 home = viewmodels.Button.home(request)
134 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
135 t = loader.get_template(template_name)
136
137 return HttpResponseNotFound(t.render(page.context_dict()))
138
139
140 @pageview_decorator
141 def server_error(request, template_name="500.html"):
142 """View handler for HTTP 500 Server Error responses."""
143 if session.active_agency(request):
144 session.update(request, origin=session.agency(request).index_url)
145 else:
146 session.update(request, origin=_index_url())
147
148 home = viewmodels.Button.home(request)
149 page = viewmodels.ErrorPage.error(button=home)
150 t = loader.get_template(template_name)
151
152 return HttpResponseServerError(t.render(page.context_dict()))
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benefits/core/views.py b/benefits/core/views.py
--- a/benefits/core/views.py
+++ b/benefits/core/views.py
@@ -2,6 +2,7 @@
The core application: view definition for the root of the webapp.
"""
from django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError
+from django.shortcuts import redirect
from django.template import loader
from django.template.response import TemplateResponse
from django.urls import reverse
@@ -31,8 +32,13 @@
"""View handler for the main entry page."""
session.reset(request)
- # generate a button to the landing page for each active agency
agencies = models.TransitAgency.all_active()
+
+ if len(agencies) == 1:
+ agency = agencies[0]
+ return redirect(agency.index_url)
+
+ # generate a button to the landing page for each active agency
buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]
buttons[0].classes.append("mt-3")
buttons[0].label = _("core.pages.index.chooseprovider")
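As an aside, stripped of Django specifics, the rule this patch introduces is simply "skip the chooser when there is exactly one choice". The framework-free sketch below illustrates that decision; the names and URL shapes are illustrative, and the real view uses Django's `redirect`, as the diff shows.

```python
# Illustration of the selection rule added by the patch (names are hypothetical).
def landing_target(active_agencies: list) -> str:
    if len(active_agencies) == 1:
        return f"/{active_agencies[0]}"   # go straight to the lone agency's index page
    return "/"                            # otherwise render the chooser with one button per agency

assert landing_target(["mst"]) == "/mst"
assert landing_target(["mst", "sacrt"]) == "/"
```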
| {"golden_diff": "diff --git a/benefits/core/views.py b/benefits/core/views.py\n--- a/benefits/core/views.py\n+++ b/benefits/core/views.py\n@@ -2,6 +2,7 @@\n The core application: view definition for the root of the webapp.\n \"\"\"\n from django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\n+from django.shortcuts import redirect\n from django.template import loader\n from django.template.response import TemplateResponse\n from django.urls import reverse\n@@ -31,8 +32,13 @@\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n \n- # generate a button to the landing page for each active agency\n agencies = models.TransitAgency.all_active()\n+\n+ if len(agencies) == 1:\n+ agency = agencies[0]\n+ return redirect(agency.index_url)\n+\n+ # generate a button to the landing page for each active agency\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n", "issue": "Auto select Transit Agency when only 1 Active\nSimplify the flow for a user, and avoid having to deal with the design of a single button on the `core:index` page.\r\n\r\nWhen there is only 1 active transit agency, and user lands on `core:index`, forward to the active agency's `core:agency_index` page.\n", "before_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . 
import models, session, viewmodels\nfrom .middleware import pageview_decorator\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\n@pageview_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n # generate a button to the landing page for each active agency\n agencies = models.TransitAgency.all_active()\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.index.title\"),\n content_title=_index_content_title(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\n@pageview_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n button.label = _(\"core.pages.agency_index.button.label\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.agency_index.title\"),\n content_title=_(\"core.pages.agency_index.content_title\"),\n button=button,\n classes=\"home\",\n )\n\n help_page = reverse(\"core:help\")\n context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n\n return TemplateResponse(request, \"core/agency_index.html\", context_dict)\n\n\n@pageview_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\n@pageview_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return 
HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}], "after_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.shortcuts import redirect\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . import models, session, viewmodels\nfrom .middleware import pageview_decorator\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\n@pageview_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n agencies = models.TransitAgency.all_active()\n\n if len(agencies) == 1:\n agency = agencies[0]\n return redirect(agency.index_url)\n\n # generate a button to the landing page for each active agency\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.index.title\"),\n content_title=_index_content_title(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\n@pageview_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n button.label = _(\"core.pages.agency_index.button.label\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.agency_index.title\"),\n content_title=_(\"core.pages.agency_index.content_title\"),\n button=button,\n classes=\"home\",\n )\n\n help_page = reverse(\"core:help\")\n context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n\n return TemplateResponse(request, 
\"core/agency_index.html\", context_dict)\n\n\n@pageview_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\n@pageview_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}]} | 1,773 | 251 |
gh_patches_debug_18481 | rasdani/github-patches | git_diff | ietf-tools__datatracker-4966 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing chair email aliases for research area groups
### Describe the issue
There are two Research Area Groups in the datatracker, anrw and irtfopen. These groups have a chair listed, but the email expansions at https://datatracker.ietf.org/group/irtfopen/email/ appear to be missing the chairs alias ([email protected]) and email sent to that address bounces. Similarly, presumably, for anrw.
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
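Before looking at the repository files below, here is a minimal, self-contained sketch (not the project's actual patch) of how group-type lists in an alias generator can decide which groups get `-chairs` aliases and which mail domains they cover; the `rag` entry and the `domains_for` helper are illustrative assumptions, not the datatracker's API:
```python
# Illustrative sketch only: group types that get aliases generated, with
# research area groups ('rag') treated like research groups ('rg').
GROUP_TYPES = ['wg', 'rg', 'rag', 'dir', 'team', 'review', 'program', 'rfcedtyp']
NO_AD_GROUP_TYPES = ['rg', 'rag', 'team', 'program', 'rfcedtyp']

IETF_DOMAIN = ['ietf.org']
IRTF_DOMAIN = ['irtf.org']
IAB_DOMAIN = ['iab.org']

def domains_for(group_type):
    """Return the mail domains an alias for this group type should cover."""
    domains = list(IETF_DOMAIN)
    if group_type in ('rg', 'rag'):
        domains += IRTF_DOMAIN
    if group_type == 'program':
        domains += IAB_DOMAIN
    return domains

print(domains_for('rag'))  # ['ietf.org', 'irtf.org']
```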
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/group/management/commands/generate_group_aliases.py`
Content:
```
1 # Copyright The IETF Trust 2012-2021, All Rights Reserved
2 # -*- coding: utf-8 -*-
3
4 # This was written as a script by Markus Stenberg <[email protected]>.
5 # It was turned into a management command by Russ Housley <[email protected]>.
6
7 import datetime
8 import io
9 import os
10 import shutil
11 import stat
12 import time
13
14 from tempfile import mkstemp
15
16 from django.conf import settings
17 from django.core.management.base import BaseCommand
18 from django.utils import timezone
19
20 import debug # pyflakes:ignore
21
22 from ietf.group.models import Group
23 from ietf.group.utils import get_group_ad_emails, get_group_role_emails, get_child_group_role_emails
24 from ietf.name.models import GroupTypeName
25 from ietf.utils.aliases import dump_sublist
26
27 DEFAULT_YEARS = 5
28 ACTIVE_STATES=['active', 'bof', 'proposed']
29 GROUP_TYPES=['wg', 'rg', 'dir', 'team', 'review', 'program', 'rfcedtyp']
30 NO_AD_GROUP_TYPES=['rg', 'team', 'program', 'rfcedtyp']
31 IETF_DOMAIN=['ietf.org', ]
32 IRTF_DOMAIN=['irtf.org', ]
33 IAB_DOMAIN=['iab.org', ]
34
35 class Command(BaseCommand):
36 help = ('Generate the group-aliases and group-virtual files for Internet-Draft '
37 'mail aliases, placing them in the file configured in '
38 'settings.GROUP_ALIASES_PATH and settings.GROUP_VIRTUAL_PATH, '
39 'respectively. The generation includes aliases for groups that '
40 'have seen activity in the last %s years.' % (DEFAULT_YEARS))
41
42 def handle(self, *args, **options):
43 show_since = timezone.now() - datetime.timedelta(DEFAULT_YEARS*365)
44
45 date = time.strftime("%Y-%m-%d_%H:%M:%S")
46 signature = '# Generated by %s at %s\n' % (os.path.abspath(__file__), date)
47
48 ahandle, aname = mkstemp()
49 os.close(ahandle)
50 afile = io.open(aname,"w")
51
52 vhandle, vname = mkstemp()
53 os.close(vhandle)
54 vfile = io.open(vname,"w")
55
56 afile.write(signature)
57 vfile.write(signature)
58 vfile.write("%s anything\n" % settings.GROUP_VIRTUAL_DOMAIN)
59
60 # Loop through each group type and build -ads and -chairs entries
61 for g in GROUP_TYPES:
62 domains = []
63 domains += IETF_DOMAIN
64 if g == 'rg':
65 domains += IRTF_DOMAIN
66 if g == 'program':
67 domains += IAB_DOMAIN
68
69 entries = Group.objects.filter(type=g).all()
70 active_entries = entries.filter(state__in=ACTIVE_STATES)
71 inactive_recent_entries = entries.exclude(state__in=ACTIVE_STATES).filter(time__gte=show_since)
72 interesting_entries = active_entries | inactive_recent_entries
73
74 for e in interesting_entries.distinct().iterator():
75 name = e.acronym
76
77 # Research groups, teams, and programs do not have -ads lists
78 if not g in NO_AD_GROUP_TYPES:
79 dump_sublist(afile, vfile, name+'-ads', domains, settings.GROUP_VIRTUAL_DOMAIN, get_group_ad_emails(e))
80 # All group types have -chairs lists
81 dump_sublist(afile, vfile, name+'-chairs', domains, settings.GROUP_VIRTUAL_DOMAIN, get_group_role_emails(e, ['chair', 'secr']))
82
83 # The area lists include every chair in active working groups in the area
84 areas = Group.objects.filter(type='area').all()
85 active_areas = areas.filter(state__in=ACTIVE_STATES)
86 for area in active_areas:
87 name = area.acronym
88 area_ad_emails = get_group_role_emails(area, ['pre-ad', 'ad', 'chair'])
89 dump_sublist(afile, vfile, name+'-ads', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, area_ad_emails)
90 dump_sublist(afile, vfile, name+'-chairs', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, (get_child_group_role_emails(area, ['chair', 'secr']) | area_ad_emails))
91
92 # Other groups with chairs that require Internet-Draft submission approval
93 gtypes = GroupTypeName.objects.values_list('slug', flat=True)
94 special_groups = Group.objects.filter(type__features__req_subm_approval=True, acronym__in=gtypes, state='active')
95 for group in special_groups:
96 dump_sublist(afile, vfile, group.acronym+'-chairs', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, get_group_role_emails(group, ['chair', 'delegate']))
97
98 afile.close()
99 vfile.close()
100
101 os.chmod(aname, stat.S_IWUSR|stat.S_IRUSR|stat.S_IRGRP|stat.S_IROTH)
102 os.chmod(vname, stat.S_IWUSR|stat.S_IRUSR|stat.S_IRGRP|stat.S_IROTH)
103
104 shutil.move(aname, settings.GROUP_ALIASES_PATH)
105 shutil.move(vname, settings.GROUP_VIRTUAL_PATH)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ietf/group/management/commands/generate_group_aliases.py b/ietf/group/management/commands/generate_group_aliases.py
--- a/ietf/group/management/commands/generate_group_aliases.py
+++ b/ietf/group/management/commands/generate_group_aliases.py
@@ -26,8 +26,8 @@
DEFAULT_YEARS = 5
ACTIVE_STATES=['active', 'bof', 'proposed']
-GROUP_TYPES=['wg', 'rg', 'dir', 'team', 'review', 'program', 'rfcedtyp']
-NO_AD_GROUP_TYPES=['rg', 'team', 'program', 'rfcedtyp']
+GROUP_TYPES=['wg', 'rg', 'rag', 'dir', 'team', 'review', 'program', 'rfcedtyp']
+NO_AD_GROUP_TYPES=['rg', 'rag', 'team', 'program', 'rfcedtyp']
IETF_DOMAIN=['ietf.org', ]
IRTF_DOMAIN=['irtf.org', ]
IAB_DOMAIN=['iab.org', ]
@@ -61,7 +61,7 @@
for g in GROUP_TYPES:
domains = []
domains += IETF_DOMAIN
- if g == 'rg':
+ if g in ('rg', 'rag'):
domains += IRTF_DOMAIN
if g == 'program':
domains += IAB_DOMAIN
| {"golden_diff": "diff --git a/ietf/group/management/commands/generate_group_aliases.py b/ietf/group/management/commands/generate_group_aliases.py\n--- a/ietf/group/management/commands/generate_group_aliases.py\n+++ b/ietf/group/management/commands/generate_group_aliases.py\n@@ -26,8 +26,8 @@\n \n DEFAULT_YEARS = 5\n ACTIVE_STATES=['active', 'bof', 'proposed']\n-GROUP_TYPES=['wg', 'rg', 'dir', 'team', 'review', 'program', 'rfcedtyp']\n-NO_AD_GROUP_TYPES=['rg', 'team', 'program', 'rfcedtyp']\n+GROUP_TYPES=['wg', 'rg', 'rag', 'dir', 'team', 'review', 'program', 'rfcedtyp']\n+NO_AD_GROUP_TYPES=['rg', 'rag', 'team', 'program', 'rfcedtyp']\n IETF_DOMAIN=['ietf.org', ]\n IRTF_DOMAIN=['irtf.org', ]\n IAB_DOMAIN=['iab.org', ]\n@@ -61,7 +61,7 @@\n for g in GROUP_TYPES:\n domains = []\n domains += IETF_DOMAIN\n- if g == 'rg':\n+ if g in ('rg', 'rag'):\n domains += IRTF_DOMAIN\n if g == 'program':\n domains += IAB_DOMAIN\n", "issue": "Missing chair email aliases for research area groups\n### Describe the issue\n\nThere are two Research Area Groups in the datatracker, anrw and irtfopen. These groups have a chair listed, but the email expansions at https://datatracker.ietf.org/group/irtfopen/email/ appear to be missing the chairs alias ([email protected]) and email sent to that address bounces. Similarly, presumably, for anrw.\r\n\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2012-2021, All Rights Reserved\n# -*- coding: utf-8 -*-\n\n# This was written as a script by Markus Stenberg <[email protected]>.\n# It was turned into a management command by Russ Housley <[email protected]>.\n\nimport datetime\nimport io\nimport os\nimport shutil\nimport stat\nimport time\n\nfrom tempfile import mkstemp\n \nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.utils import timezone\n\nimport debug # pyflakes:ignore\n\nfrom ietf.group.models import Group\nfrom ietf.group.utils import get_group_ad_emails, get_group_role_emails, get_child_group_role_emails\nfrom ietf.name.models import GroupTypeName\nfrom ietf.utils.aliases import dump_sublist\n\nDEFAULT_YEARS = 5\nACTIVE_STATES=['active', 'bof', 'proposed']\nGROUP_TYPES=['wg', 'rg', 'dir', 'team', 'review', 'program', 'rfcedtyp']\nNO_AD_GROUP_TYPES=['rg', 'team', 'program', 'rfcedtyp']\nIETF_DOMAIN=['ietf.org', ]\nIRTF_DOMAIN=['irtf.org', ]\nIAB_DOMAIN=['iab.org', ]\n\nclass Command(BaseCommand):\n help = ('Generate the group-aliases and group-virtual files for Internet-Draft '\n 'mail aliases, placing them in the file configured in '\n 'settings.GROUP_ALIASES_PATH and settings.GROUP_VIRTUAL_PATH, '\n 'respectively. The generation includes aliases for groups that '\n 'have seen activity in the last %s years.' 
% (DEFAULT_YEARS))\n\n def handle(self, *args, **options):\n show_since = timezone.now() - datetime.timedelta(DEFAULT_YEARS*365)\n\n date = time.strftime(\"%Y-%m-%d_%H:%M:%S\")\n signature = '# Generated by %s at %s\\n' % (os.path.abspath(__file__), date)\n\n ahandle, aname = mkstemp()\n os.close(ahandle)\n afile = io.open(aname,\"w\")\n\n vhandle, vname = mkstemp()\n os.close(vhandle)\n vfile = io.open(vname,\"w\")\n\n afile.write(signature)\n vfile.write(signature)\n vfile.write(\"%s anything\\n\" % settings.GROUP_VIRTUAL_DOMAIN)\n\n # Loop through each group type and build -ads and -chairs entries\n for g in GROUP_TYPES:\n domains = []\n domains += IETF_DOMAIN\n if g == 'rg':\n domains += IRTF_DOMAIN\n if g == 'program':\n domains += IAB_DOMAIN\n\n entries = Group.objects.filter(type=g).all()\n active_entries = entries.filter(state__in=ACTIVE_STATES)\n inactive_recent_entries = entries.exclude(state__in=ACTIVE_STATES).filter(time__gte=show_since)\n interesting_entries = active_entries | inactive_recent_entries\n\n for e in interesting_entries.distinct().iterator():\n name = e.acronym\n \n # Research groups, teams, and programs do not have -ads lists\n if not g in NO_AD_GROUP_TYPES:\n dump_sublist(afile, vfile, name+'-ads', domains, settings.GROUP_VIRTUAL_DOMAIN, get_group_ad_emails(e))\n # All group types have -chairs lists\n dump_sublist(afile, vfile, name+'-chairs', domains, settings.GROUP_VIRTUAL_DOMAIN, get_group_role_emails(e, ['chair', 'secr']))\n\n # The area lists include every chair in active working groups in the area\n areas = Group.objects.filter(type='area').all()\n active_areas = areas.filter(state__in=ACTIVE_STATES)\n for area in active_areas:\n name = area.acronym\n area_ad_emails = get_group_role_emails(area, ['pre-ad', 'ad', 'chair'])\n dump_sublist(afile, vfile, name+'-ads', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, area_ad_emails)\n dump_sublist(afile, vfile, name+'-chairs', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, (get_child_group_role_emails(area, ['chair', 'secr']) | area_ad_emails))\n\n # Other groups with chairs that require Internet-Draft submission approval\n gtypes = GroupTypeName.objects.values_list('slug', flat=True)\n special_groups = Group.objects.filter(type__features__req_subm_approval=True, acronym__in=gtypes, state='active')\n for group in special_groups:\n dump_sublist(afile, vfile, group.acronym+'-chairs', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, get_group_role_emails(group, ['chair', 'delegate']))\n\n afile.close()\n vfile.close()\n\n os.chmod(aname, stat.S_IWUSR|stat.S_IRUSR|stat.S_IRGRP|stat.S_IROTH) \n os.chmod(vname, stat.S_IWUSR|stat.S_IRUSR|stat.S_IRGRP|stat.S_IROTH) \n\n shutil.move(aname, settings.GROUP_ALIASES_PATH)\n shutil.move(vname, settings.GROUP_VIRTUAL_PATH)\n", "path": "ietf/group/management/commands/generate_group_aliases.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2012-2021, All Rights Reserved\n# -*- coding: utf-8 -*-\n\n# This was written as a script by Markus Stenberg <[email protected]>.\n# It was turned into a management command by Russ Housley <[email protected]>.\n\nimport datetime\nimport io\nimport os\nimport shutil\nimport stat\nimport time\n\nfrom tempfile import mkstemp\n \nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.utils import timezone\n\nimport debug # pyflakes:ignore\n\nfrom ietf.group.models import Group\nfrom ietf.group.utils import get_group_ad_emails, get_group_role_emails, get_child_group_role_emails\nfrom ietf.name.models 
import GroupTypeName\nfrom ietf.utils.aliases import dump_sublist\n\nDEFAULT_YEARS = 5\nACTIVE_STATES=['active', 'bof', 'proposed']\nGROUP_TYPES=['wg', 'rg', 'rag', 'dir', 'team', 'review', 'program', 'rfcedtyp']\nNO_AD_GROUP_TYPES=['rg', 'rag', 'team', 'program', 'rfcedtyp']\nIETF_DOMAIN=['ietf.org', ]\nIRTF_DOMAIN=['irtf.org', ]\nIAB_DOMAIN=['iab.org', ]\n\nclass Command(BaseCommand):\n help = ('Generate the group-aliases and group-virtual files for Internet-Draft '\n 'mail aliases, placing them in the file configured in '\n 'settings.GROUP_ALIASES_PATH and settings.GROUP_VIRTUAL_PATH, '\n 'respectively. The generation includes aliases for groups that '\n 'have seen activity in the last %s years.' % (DEFAULT_YEARS))\n\n def handle(self, *args, **options):\n show_since = timezone.now() - datetime.timedelta(DEFAULT_YEARS*365)\n\n date = time.strftime(\"%Y-%m-%d_%H:%M:%S\")\n signature = '# Generated by %s at %s\\n' % (os.path.abspath(__file__), date)\n\n ahandle, aname = mkstemp()\n os.close(ahandle)\n afile = io.open(aname,\"w\")\n\n vhandle, vname = mkstemp()\n os.close(vhandle)\n vfile = io.open(vname,\"w\")\n\n afile.write(signature)\n vfile.write(signature)\n vfile.write(\"%s anything\\n\" % settings.GROUP_VIRTUAL_DOMAIN)\n\n # Loop through each group type and build -ads and -chairs entries\n for g in GROUP_TYPES:\n domains = []\n domains += IETF_DOMAIN\n if g in ('rg', 'rag'):\n domains += IRTF_DOMAIN\n if g == 'program':\n domains += IAB_DOMAIN\n\n entries = Group.objects.filter(type=g).all()\n active_entries = entries.filter(state__in=ACTIVE_STATES)\n inactive_recent_entries = entries.exclude(state__in=ACTIVE_STATES).filter(time__gte=show_since)\n interesting_entries = active_entries | inactive_recent_entries\n\n for e in interesting_entries.distinct().iterator():\n name = e.acronym\n \n # Research groups, teams, and programs do not have -ads lists\n if not g in NO_AD_GROUP_TYPES:\n dump_sublist(afile, vfile, name+'-ads', domains, settings.GROUP_VIRTUAL_DOMAIN, get_group_ad_emails(e))\n # All group types have -chairs lists\n dump_sublist(afile, vfile, name+'-chairs', domains, settings.GROUP_VIRTUAL_DOMAIN, get_group_role_emails(e, ['chair', 'secr']))\n\n # The area lists include every chair in active working groups in the area\n areas = Group.objects.filter(type='area').all()\n active_areas = areas.filter(state__in=ACTIVE_STATES)\n for area in active_areas:\n name = area.acronym\n area_ad_emails = get_group_role_emails(area, ['pre-ad', 'ad', 'chair'])\n dump_sublist(afile, vfile, name+'-ads', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, area_ad_emails)\n dump_sublist(afile, vfile, name+'-chairs', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, (get_child_group_role_emails(area, ['chair', 'secr']) | area_ad_emails))\n\n # Other groups with chairs that require Internet-Draft submission approval\n gtypes = GroupTypeName.objects.values_list('slug', flat=True)\n special_groups = Group.objects.filter(type__features__req_subm_approval=True, acronym__in=gtypes, state='active')\n for group in special_groups:\n dump_sublist(afile, vfile, group.acronym+'-chairs', IETF_DOMAIN, settings.GROUP_VIRTUAL_DOMAIN, get_group_role_emails(group, ['chair', 'delegate']))\n\n afile.close()\n vfile.close()\n\n os.chmod(aname, stat.S_IWUSR|stat.S_IRUSR|stat.S_IRGRP|stat.S_IROTH) \n os.chmod(vname, stat.S_IWUSR|stat.S_IRUSR|stat.S_IRGRP|stat.S_IROTH) \n\n shutil.move(aname, settings.GROUP_ALIASES_PATH)\n shutil.move(vname, settings.GROUP_VIRTUAL_PATH)\n", "path": 
"ietf/group/management/commands/generate_group_aliases.py"}]} | 1,742 | 288 |
gh_patches_debug_9214 | rasdani/github-patches | git_diff | Frojd__Wagtail-Pipit-35 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use Werkzeug
Add django-extensions and Werkzeug to local requirements, and use runserver_plus in docker-entrypoint.sh.
It is a lot faster than regular runserver in Docker.
--- END ISSUE ---
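A minimal sketch of what the requested change could look like; the exact app list and entrypoint command are assumptions, since the issue only names the pieces involved (django-extensions, Werkzeug, runserver_plus):
```python
# Stand-in for the project's local settings: extend the base INSTALLED_APPS
# with django-extensions so the runserver_plus management command is available.
INSTALLED_APPS = ["debug_toolbar"]        # placeholder for the base settings list
INSTALLED_APPS += ["django_extensions"]

# docker-entrypoint.sh (shell, shown here as a comment) would then start the
# Werkzeug-backed dev server, e.g.:
#   python manage.py runserver_plus 0.0.0.0:8000
```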
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Example-Project/src/core/settings/local.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 Write local settings here, or override base settings
6 """
7 from __future__ import absolute_import, unicode_literals
8
9 from core.settings.base import * # NOQA
10
11
12 DEBUG = True
13 TEMPLATES[0]['OPTIONS']['debug'] = DEBUG
14
15 DEBUG_TOOLBAR_PATCH_SETTINGS = False
16
17 # Add django debug toolbar when using local version
18 INSTALLED_APPS += [
19 'debug_toolbar',
20 ]
21
22 MIDDLEWARE_CLASSES += [
23 'debug_toolbar.middleware.DebugToolbarMiddleware',
24 ]
25
26 # Allow weak local passwords
27 AUTH_PASSWORD_VALIDATORS = []
28
29 INTERNAL_IPS = get_env('INTERNAL_IPS', default="").split(',')
30
31
32 # Allow django-debug-bar under docker
33 def show_toolbar(request):
34 # https://gist.github.com/douglasmiranda/9de51aaba14543851ca3
35 return not request.is_ajax()
36
37 DEBUG_TOOLBAR_CONFIG = {
38 'SHOW_TOOLBAR_CALLBACK': 'core.settings.local.show_toolbar',
39 }
40
```
Path: `{{cookiecutter.project_name}}/src/core/settings/local.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 Write local settings here, or override base settings
6 """
7 from __future__ import absolute_import, unicode_literals
8
9 from core.settings.base import * # NOQA
10
11
12 DEBUG = True
13 TEMPLATES[0]['OPTIONS']['debug'] = DEBUG
14
15 DEBUG_TOOLBAR_PATCH_SETTINGS = False
16
17 # Add django debug toolbar when using local version
18 INSTALLED_APPS += [
19 'debug_toolbar',
20 ]
21
22 MIDDLEWARE_CLASSES += [
23 'debug_toolbar.middleware.DebugToolbarMiddleware',
24 ]
25
26 # Allow weak local passwords
27 AUTH_PASSWORD_VALIDATORS = []
28
29 INTERNAL_IPS = get_env('INTERNAL_IPS', default="").split(',')
30
31
32 # Allow django-debug-bar under docker
33 def show_toolbar(request):
34 # https://gist.github.com/douglasmiranda/9de51aaba14543851ca3
35 return not request.is_ajax()
36
37 DEBUG_TOOLBAR_CONFIG = {
38 'SHOW_TOOLBAR_CALLBACK': 'core.settings.local.show_toolbar',
39 }
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Example-Project/src/core/settings/local.py b/Example-Project/src/core/settings/local.py
--- a/Example-Project/src/core/settings/local.py
+++ b/Example-Project/src/core/settings/local.py
@@ -17,6 +17,7 @@
# Add django debug toolbar when using local version
INSTALLED_APPS += [
'debug_toolbar',
+ 'django_extensions',
]
MIDDLEWARE_CLASSES += [
diff --git a/{{cookiecutter.project_name}}/src/core/settings/local.py b/{{cookiecutter.project_name}}/src/core/settings/local.py
--- a/{{cookiecutter.project_name}}/src/core/settings/local.py
+++ b/{{cookiecutter.project_name}}/src/core/settings/local.py
@@ -17,6 +17,7 @@
# Add django debug toolbar when using local version
INSTALLED_APPS += [
'debug_toolbar',
+ 'django_extensions',
]
MIDDLEWARE_CLASSES += [
| {"golden_diff": "diff --git a/Example-Project/src/core/settings/local.py b/Example-Project/src/core/settings/local.py\n--- a/Example-Project/src/core/settings/local.py\n+++ b/Example-Project/src/core/settings/local.py\n@@ -17,6 +17,7 @@\n # Add django debug toolbar when using local version\n INSTALLED_APPS += [\n 'debug_toolbar',\n+ 'django_extensions',\n ]\n \n MIDDLEWARE_CLASSES += [\ndiff --git a/{{cookiecutter.project_name}}/src/core/settings/local.py b/{{cookiecutter.project_name}}/src/core/settings/local.py\n--- a/{{cookiecutter.project_name}}/src/core/settings/local.py\n+++ b/{{cookiecutter.project_name}}/src/core/settings/local.py\n@@ -17,6 +17,7 @@\n # Add django debug toolbar when using local version\n INSTALLED_APPS += [\n 'debug_toolbar',\n+ 'django_extensions',\n ]\n \n MIDDLEWARE_CLASSES += [\n", "issue": "Use Werkzeug\nAdd django-extensions and Werkzeug to local requirements, and use runserver_plus in docker-entrypoint.sh.\r\nIt is a lot faster than regular runserver in Docker.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nWrite local settings here, or override base settings\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nfrom core.settings.base import * # NOQA\n\n\nDEBUG = True\nTEMPLATES[0]['OPTIONS']['debug'] = DEBUG\n\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\n\n# Add django debug toolbar when using local version\nINSTALLED_APPS += [\n 'debug_toolbar',\n]\n\nMIDDLEWARE_CLASSES += [\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n]\n\n# Allow weak local passwords\nAUTH_PASSWORD_VALIDATORS = []\n\nINTERNAL_IPS = get_env('INTERNAL_IPS', default=\"\").split(',')\n\n\n# Allow django-debug-bar under docker\ndef show_toolbar(request):\n # https://gist.github.com/douglasmiranda/9de51aaba14543851ca3\n return not request.is_ajax()\n\nDEBUG_TOOLBAR_CONFIG = {\n 'SHOW_TOOLBAR_CALLBACK': 'core.settings.local.show_toolbar',\n}\n", "path": "Example-Project/src/core/settings/local.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nWrite local settings here, or override base settings\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nfrom core.settings.base import * # NOQA\n\n\nDEBUG = True\nTEMPLATES[0]['OPTIONS']['debug'] = DEBUG\n\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\n\n# Add django debug toolbar when using local version\nINSTALLED_APPS += [\n 'debug_toolbar',\n]\n\nMIDDLEWARE_CLASSES += [\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n]\n\n# Allow weak local passwords\nAUTH_PASSWORD_VALIDATORS = []\n\nINTERNAL_IPS = get_env('INTERNAL_IPS', default=\"\").split(',')\n\n\n# Allow django-debug-bar under docker\ndef show_toolbar(request):\n # https://gist.github.com/douglasmiranda/9de51aaba14543851ca3\n return not request.is_ajax()\n\nDEBUG_TOOLBAR_CONFIG = {\n 'SHOW_TOOLBAR_CALLBACK': 'core.settings.local.show_toolbar',\n}\n", "path": "{{cookiecutter.project_name}}/src/core/settings/local.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nWrite local settings here, or override base settings\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nfrom core.settings.base import * # NOQA\n\n\nDEBUG = True\nTEMPLATES[0]['OPTIONS']['debug'] = DEBUG\n\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\n\n# Add django debug toolbar when using local version\nINSTALLED_APPS += [\n 'debug_toolbar',\n 'django_extensions',\n]\n\nMIDDLEWARE_CLASSES += [\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n]\n\n# Allow weak local 
passwords\nAUTH_PASSWORD_VALIDATORS = []\n\nINTERNAL_IPS = get_env('INTERNAL_IPS', default=\"\").split(',')\n\n\n# Allow django-debug-bar under docker\ndef show_toolbar(request):\n # https://gist.github.com/douglasmiranda/9de51aaba14543851ca3\n return not request.is_ajax()\n\nDEBUG_TOOLBAR_CONFIG = {\n 'SHOW_TOOLBAR_CALLBACK': 'core.settings.local.show_toolbar',\n}\n", "path": "Example-Project/src/core/settings/local.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nWrite local settings here, or override base settings\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nfrom core.settings.base import * # NOQA\n\n\nDEBUG = True\nTEMPLATES[0]['OPTIONS']['debug'] = DEBUG\n\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\n\n# Add django debug toolbar when using local version\nINSTALLED_APPS += [\n 'debug_toolbar',\n 'django_extensions',\n]\n\nMIDDLEWARE_CLASSES += [\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n]\n\n# Allow weak local passwords\nAUTH_PASSWORD_VALIDATORS = []\n\nINTERNAL_IPS = get_env('INTERNAL_IPS', default=\"\").split(',')\n\n\n# Allow django-debug-bar under docker\ndef show_toolbar(request):\n # https://gist.github.com/douglasmiranda/9de51aaba14543851ca3\n return not request.is_ajax()\n\nDEBUG_TOOLBAR_CONFIG = {\n 'SHOW_TOOLBAR_CALLBACK': 'core.settings.local.show_toolbar',\n}\n", "path": "{{cookiecutter.project_name}}/src/core/settings/local.py"}]} | 912 | 204 |
gh_patches_debug_40022 | rasdani/github-patches | git_diff | scikit-image__scikit-image-5062 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ensure_spacing function may be slow
## Description
The `skimage._shared.coord.ensure_spacing` function introduced in #4760 may be slow when the number of input points is large.
## Way to reproduce
```python
In [1]: import numpy as np
In [2]: from skimage._shared.coord import ensure_spacing
In [3]: for n in range(500, 10001, 500):
...: print(f"n = {n}")
...: x = np.random.rand(n, 2)
...: %timeit ensure_spacing(x, 0.2)
...:
n = 500
27.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
n = 1000
64.7 ms ± 1.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
n = 1500
105 ms ± 2.27 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
n = 2000
158 ms ± 1.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
n = 2500
213 ms ± 7.32 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 3000
299 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 3500
378 ms ± 3.05 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 4000
428 ms ± 11.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 4500
554 ms ± 14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 5000
647 ms ± 9.42 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 5500
777 ms ± 7.05 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 6000
870 ms ± 13.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 6500
1.03 s ± 9.56 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 7000
1.17 s ± 12.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 7500
1.33 s ± 28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 8000
1.45 s ± 18.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 8500
1.63 s ± 36.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 9000
1.87 s ± 50.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 9500
2.07 s ± 23.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n = 10000
2.29 s ± 51.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
--- END ISSUE ---
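The timings above grow superlinearly because every call builds one KD-tree over all n points and then post-filters every neighbour list. A rough, hypothetical batching sketch (not scikit-image's actual fix, and with slightly simplified spacing semantics) illustrates the kind of restructuring that can help:
```python
import numpy as np
from scipy.spatial import cKDTree

def ensure_spacing_batched(coords, spacing=0.2, batch=50):
    """Greedy spacing filter applied batch by batch (illustrative only)."""
    kept = np.empty((0, coords.shape[1]))
    for start in range(0, len(coords), batch):
        # Previously kept points come first, so they win ties against new ones.
        pts = np.vstack([kept, coords[start:start + batch]])
        tree = cKDTree(pts)
        rejected = set()
        for i, neighbours in enumerate(tree.query_ball_point(pts, r=spacing)):
            if i in rejected:
                continue
            rejected.update(j for j in neighbours if j > i)
        kept = np.delete(pts, sorted(rejected), axis=0)
    return kept

print(ensure_spacing_batched(np.random.rand(2000, 2)).shape)
```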
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/_shared/coord.py`
Content:
```
1 import numpy as np
2 from scipy.spatial import cKDTree, distance
3
4
5 def ensure_spacing(coord, spacing=1, p_norm=np.inf):
6 """Returns a subset of coord where a minimum spacing is guaranteed.
7
8 Parameters
9 ----------
10 coord : ndarray
11 The coordinates of the considered points.
12 spacing : float
13 the maximum allowed spacing between the points.
14 p_norm : float
15 Which Minkowski p-norm to use. Should be in the range [1, inf].
16 A finite large p may cause a ValueError if overflow can occur.
17 ``inf`` corresponds to the Chebyshev distance and 2 to the
18 Euclidean distance.
19
20 Returns
21 -------
22 output : ndarray
23 A subset of coord where a minimum spacing is guaranteed.
24
25 """
26
27 output = coord
28 if len(coord):
29 # Use KDtree to find the peaks that are too close to each other
30 tree = cKDTree(coord)
31
32 indices = tree.query_ball_point(coord, r=spacing, p=p_norm)
33 rejected_peaks_indices = set()
34 for idx, candidates in enumerate(indices):
35 if idx not in rejected_peaks_indices:
36 # keep current point and the points at exactly spacing from it
37 candidates.remove(idx)
38 dist = distance.cdist([coord[idx]],
39 coord[candidates],
40 distance.minkowski,
41 p=p_norm).reshape(-1)
42 candidates = [c for c, d in zip(candidates, dist)
43 if d < spacing]
44
45 # candidates.remove(keep)
46 rejected_peaks_indices.update(candidates)
47
48 # Remove the peaks that are too close to each other
49 output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)
50
51 return output
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/_shared/coord.py b/skimage/_shared/coord.py
--- a/skimage/_shared/coord.py
+++ b/skimage/_shared/coord.py
@@ -2,7 +2,7 @@
from scipy.spatial import cKDTree, distance
-def ensure_spacing(coord, spacing=1, p_norm=np.inf):
+def _ensure_spacing(coord, spacing, p_norm):
"""Returns a subset of coord where a minimum spacing is guaranteed.
Parameters
@@ -24,28 +24,72 @@
"""
- output = coord
- if len(coord):
- # Use KDtree to find the peaks that are too close to each other
- tree = cKDTree(coord)
-
- indices = tree.query_ball_point(coord, r=spacing, p=p_norm)
- rejected_peaks_indices = set()
- for idx, candidates in enumerate(indices):
- if idx not in rejected_peaks_indices:
- # keep current point and the points at exactly spacing from it
- candidates.remove(idx)
- dist = distance.cdist([coord[idx]],
- coord[candidates],
- distance.minkowski,
- p=p_norm).reshape(-1)
- candidates = [c for c, d in zip(candidates, dist)
- if d < spacing]
-
- # candidates.remove(keep)
- rejected_peaks_indices.update(candidates)
-
- # Remove the peaks that are too close to each other
- output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)
+ # Use KDtree to find the peaks that are too close to each other
+ tree = cKDTree(coord)
+
+ indices = tree.query_ball_point(coord, r=spacing, p=p_norm)
+ rejected_peaks_indices = set()
+ for idx, candidates in enumerate(indices):
+ if idx not in rejected_peaks_indices:
+ # keep current point and the points at exactly spacing from it
+ candidates.remove(idx)
+ dist = distance.cdist([coord[idx]],
+ coord[candidates],
+ distance.minkowski,
+ p=p_norm).reshape(-1)
+ candidates = [c for c, d in zip(candidates, dist)
+ if d < spacing]
+
+ # candidates.remove(keep)
+ rejected_peaks_indices.update(candidates)
+
+ # Remove the peaks that are too close to each other
+ output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)
+
+ return output
+
+
+def ensure_spacing(coords, spacing=1, p_norm=np.inf, min_split_size=50):
+ """Returns a subset of coord where a minimum spacing is guaranteed.
+
+ Parameters
+ ----------
+ coord : array_like
+ The coordinates of the considered points.
+ spacing : float
+ the maximum allowed spacing between the points.
+ p_norm : float
+ Which Minkowski p-norm to use. Should be in the range [1, inf].
+ A finite large p may cause a ValueError if overflow can occur.
+ ``inf`` corresponds to the Chebyshev distance and 2 to the
+ Euclidean distance.
+ min_split_size : int
+ Minimum split size used to process ``coord`` by batch to save
+ memory. If None, the memory saving strategy is not applied.
+
+ Returns
+ -------
+ output : array_like
+ A subset of coord where a minimum spacing is guaranteed.
+
+ """
+
+ output = coords
+ if len(coords):
+
+ coords = np.atleast_2d(coords)
+ if min_split_size is None:
+ batch_list = [coords]
+ else:
+ coord_count = len(coords)
+ split_count = int(np.log2(coord_count / min_split_size)) + 1
+ split_idx = np.cumsum(
+ [coord_count // (2 ** i) for i in range(1, split_count)])
+ batch_list = np.array_split(coords, split_idx)
+
+ output = np.zeros((0, coords.shape[1]), dtype=coords.dtype)
+ for batch in batch_list:
+ output = _ensure_spacing(np.vstack([output, batch]),
+ spacing, p_norm)
return output
| {"golden_diff": "diff --git a/skimage/_shared/coord.py b/skimage/_shared/coord.py\n--- a/skimage/_shared/coord.py\n+++ b/skimage/_shared/coord.py\n@@ -2,7 +2,7 @@\n from scipy.spatial import cKDTree, distance\n \n \n-def ensure_spacing(coord, spacing=1, p_norm=np.inf):\n+def _ensure_spacing(coord, spacing, p_norm):\n \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n \n Parameters\n@@ -24,28 +24,72 @@\n \n \"\"\"\n \n- output = coord\n- if len(coord):\n- # Use KDtree to find the peaks that are too close to each other\n- tree = cKDTree(coord)\n-\n- indices = tree.query_ball_point(coord, r=spacing, p=p_norm)\n- rejected_peaks_indices = set()\n- for idx, candidates in enumerate(indices):\n- if idx not in rejected_peaks_indices:\n- # keep current point and the points at exactly spacing from it\n- candidates.remove(idx)\n- dist = distance.cdist([coord[idx]],\n- coord[candidates],\n- distance.minkowski,\n- p=p_norm).reshape(-1)\n- candidates = [c for c, d in zip(candidates, dist)\n- if d < spacing]\n-\n- # candidates.remove(keep)\n- rejected_peaks_indices.update(candidates)\n-\n- # Remove the peaks that are too close to each other\n- output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)\n+ # Use KDtree to find the peaks that are too close to each other\n+ tree = cKDTree(coord)\n+\n+ indices = tree.query_ball_point(coord, r=spacing, p=p_norm)\n+ rejected_peaks_indices = set()\n+ for idx, candidates in enumerate(indices):\n+ if idx not in rejected_peaks_indices:\n+ # keep current point and the points at exactly spacing from it\n+ candidates.remove(idx)\n+ dist = distance.cdist([coord[idx]],\n+ coord[candidates],\n+ distance.minkowski,\n+ p=p_norm).reshape(-1)\n+ candidates = [c for c, d in zip(candidates, dist)\n+ if d < spacing]\n+\n+ # candidates.remove(keep)\n+ rejected_peaks_indices.update(candidates)\n+\n+ # Remove the peaks that are too close to each other\n+ output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)\n+\n+ return output\n+\n+\n+def ensure_spacing(coords, spacing=1, p_norm=np.inf, min_split_size=50):\n+ \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n+\n+ Parameters\n+ ----------\n+ coord : array_like\n+ The coordinates of the considered points.\n+ spacing : float\n+ the maximum allowed spacing between the points.\n+ p_norm : float\n+ Which Minkowski p-norm to use. Should be in the range [1, inf].\n+ A finite large p may cause a ValueError if overflow can occur.\n+ ``inf`` corresponds to the Chebyshev distance and 2 to the\n+ Euclidean distance.\n+ min_split_size : int\n+ Minimum split size used to process ``coord`` by batch to save\n+ memory. 
If None, the memory saving strategy is not applied.\n+\n+ Returns\n+ -------\n+ output : array_like\n+ A subset of coord where a minimum spacing is guaranteed.\n+\n+ \"\"\"\n+\n+ output = coords\n+ if len(coords):\n+\n+ coords = np.atleast_2d(coords)\n+ if min_split_size is None:\n+ batch_list = [coords]\n+ else:\n+ coord_count = len(coords)\n+ split_count = int(np.log2(coord_count / min_split_size)) + 1\n+ split_idx = np.cumsum(\n+ [coord_count // (2 ** i) for i in range(1, split_count)])\n+ batch_list = np.array_split(coords, split_idx)\n+\n+ output = np.zeros((0, coords.shape[1]), dtype=coords.dtype)\n+ for batch in batch_list:\n+ output = _ensure_spacing(np.vstack([output, batch]),\n+ spacing, p_norm)\n \n return output\n", "issue": "ensure_spacing function may be slow\n## Description\r\n\r\nThe `skimage._shared.coord.ensure_spacing` function introduced in #4760 may be slow when the number of input points is large.\r\n\r\n## Way to reproduce\r\n```python\r\nIn [1]: import numpy as np\r\n\r\nIn [2]: from skimage._shared.coord import ensure_spacing\r\n\r\nIn [3]: for n in range(500, 10001, 500):\r\n ...: print(f\"n = {n}\")\r\n ...: x = np.random.rand(n, 2)\r\n ...: %timeit ensure_spacing(x, 0.2)\r\n ...: \r\nn = 500\r\n27.6 ms \u00b1 857 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\r\nn = 1000\r\n64.7 ms \u00b1 1.93 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\r\nn = 1500\r\n105 ms \u00b1 2.27 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\r\nn = 2000\r\n158 ms \u00b1 1.18 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\r\nn = 2500\r\n213 ms \u00b1 7.32 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 3000\r\n299 ms \u00b1 11.9 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 3500\r\n378 ms \u00b1 3.05 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 4000\r\n428 ms \u00b1 11.7 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 4500\r\n554 ms \u00b1 14 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 5000\r\n647 ms \u00b1 9.42 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 5500\r\n777 ms \u00b1 7.05 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 6000\r\n870 ms \u00b1 13.2 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 6500\r\n1.03 s \u00b1 9.56 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 7000\r\n1.17 s \u00b1 12.8 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 7500\r\n1.33 s \u00b1 28 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 8000\r\n1.45 s \u00b1 18.5 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 8500\r\n1.63 s \u00b1 36.6 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 9000\r\n1.87 s \u00b1 50.4 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 9500\r\n2.07 s \u00b1 23.3 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\nn = 10000\r\n2.29 s \u00b1 51.3 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\n```\r\n\n", "before_files": [{"content": "import numpy as np\nfrom scipy.spatial import cKDTree, distance\n\n\ndef ensure_spacing(coord, spacing=1, p_norm=np.inf):\n \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n\n Parameters\n ----------\n coord : ndarray\n The coordinates of the considered points.\n spacing : float\n the maximum allowed spacing between the points.\n p_norm : float\n Which Minkowski p-norm to use. 
Should be in the range [1, inf].\n A finite large p may cause a ValueError if overflow can occur.\n ``inf`` corresponds to the Chebyshev distance and 2 to the\n Euclidean distance.\n\n Returns\n -------\n output : ndarray\n A subset of coord where a minimum spacing is guaranteed.\n\n \"\"\"\n\n output = coord\n if len(coord):\n # Use KDtree to find the peaks that are too close to each other\n tree = cKDTree(coord)\n\n indices = tree.query_ball_point(coord, r=spacing, p=p_norm)\n rejected_peaks_indices = set()\n for idx, candidates in enumerate(indices):\n if idx not in rejected_peaks_indices:\n # keep current point and the points at exactly spacing from it\n candidates.remove(idx)\n dist = distance.cdist([coord[idx]],\n coord[candidates],\n distance.minkowski,\n p=p_norm).reshape(-1)\n candidates = [c for c, d in zip(candidates, dist)\n if d < spacing]\n\n # candidates.remove(keep)\n rejected_peaks_indices.update(candidates)\n\n # Remove the peaks that are too close to each other\n output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)\n\n return output\n", "path": "skimage/_shared/coord.py"}], "after_files": [{"content": "import numpy as np\nfrom scipy.spatial import cKDTree, distance\n\n\ndef _ensure_spacing(coord, spacing, p_norm):\n \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n\n Parameters\n ----------\n coord : ndarray\n The coordinates of the considered points.\n spacing : float\n the maximum allowed spacing between the points.\n p_norm : float\n Which Minkowski p-norm to use. Should be in the range [1, inf].\n A finite large p may cause a ValueError if overflow can occur.\n ``inf`` corresponds to the Chebyshev distance and 2 to the\n Euclidean distance.\n\n Returns\n -------\n output : ndarray\n A subset of coord where a minimum spacing is guaranteed.\n\n \"\"\"\n\n # Use KDtree to find the peaks that are too close to each other\n tree = cKDTree(coord)\n\n indices = tree.query_ball_point(coord, r=spacing, p=p_norm)\n rejected_peaks_indices = set()\n for idx, candidates in enumerate(indices):\n if idx not in rejected_peaks_indices:\n # keep current point and the points at exactly spacing from it\n candidates.remove(idx)\n dist = distance.cdist([coord[idx]],\n coord[candidates],\n distance.minkowski,\n p=p_norm).reshape(-1)\n candidates = [c for c, d in zip(candidates, dist)\n if d < spacing]\n\n # candidates.remove(keep)\n rejected_peaks_indices.update(candidates)\n\n # Remove the peaks that are too close to each other\n output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)\n\n return output\n\n\ndef ensure_spacing(coords, spacing=1, p_norm=np.inf, min_split_size=50):\n \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n\n Parameters\n ----------\n coord : array_like\n The coordinates of the considered points.\n spacing : float\n the maximum allowed spacing between the points.\n p_norm : float\n Which Minkowski p-norm to use. Should be in the range [1, inf].\n A finite large p may cause a ValueError if overflow can occur.\n ``inf`` corresponds to the Chebyshev distance and 2 to the\n Euclidean distance.\n min_split_size : int\n Minimum split size used to process ``coord`` by batch to save\n memory. 
If None, the memory saving strategy is not applied.\n\n Returns\n -------\n output : array_like\n A subset of coord where a minimum spacing is guaranteed.\n\n \"\"\"\n\n output = coords\n if len(coords):\n\n coords = np.atleast_2d(coords)\n if min_split_size is None:\n batch_list = [coords]\n else:\n coord_count = len(coords)\n split_count = int(np.log2(coord_count / min_split_size)) + 1\n split_idx = np.cumsum(\n [coord_count // (2 ** i) for i in range(1, split_count)])\n batch_list = np.array_split(coords, split_idx)\n\n output = np.zeros((0, coords.shape[1]), dtype=coords.dtype)\n for batch in batch_list:\n output = _ensure_spacing(np.vstack([output, batch]),\n spacing, p_norm)\n\n return output\n", "path": "skimage/_shared/coord.py"}]} | 1,644 | 952 |
gh_patches_debug_12945 | rasdani/github-patches | git_diff | netbox-community__netbox-333 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Device Name Filter to Secrets API
Filter secrets by the devices they're associated with.
--- END ISSUE ---
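A hedged sketch of the kind of filter the issue asks for, mirroring the `name=`/`to_field_name=` conventions already used in the file below; the `dcim.models.Device` import and field names are assumptions about the app layout, and the class would live in the app's filters module rather than run standalone:
```python
import django_filters

from dcim.models import Device          # assumed location of the Device model
from .models import Secret


class SecretFilter(django_filters.FilterSet):
    device = django_filters.ModelMultipleChoiceFilter(
        name='device',                  # filter secrets by their parent device
        queryset=Device.objects.all(),
        to_field_name='name',           # match on the device's name
        label='Device (name)',
    )

    class Meta:
        model = Secret
        fields = ['name', 'device']
```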
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/secrets/filters.py`
Content:
```
1 import django_filters
2
3 from .models import Secret, SecretRole
4
5
6 class SecretFilter(django_filters.FilterSet):
7 role_id = django_filters.ModelMultipleChoiceFilter(
8 name='role',
9 queryset=SecretRole.objects.all(),
10 label='Role (ID)',
11 )
12 role = django_filters.ModelMultipleChoiceFilter(
13 name='role',
14 queryset=SecretRole.objects.all(),
15 to_field_name='slug',
16 label='Role (slug)',
17 )
18
19 class Meta:
20 model = Secret
21 fields = ['name', 'role_id', 'role']
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/secrets/filters.py b/netbox/secrets/filters.py
--- a/netbox/secrets/filters.py
+++ b/netbox/secrets/filters.py
@@ -1,6 +1,7 @@
import django_filters
from .models import Secret, SecretRole
+from dcim.models import Device
class SecretFilter(django_filters.FilterSet):
@@ -15,7 +16,13 @@
to_field_name='slug',
label='Role (slug)',
)
+ device = django_filters.ModelMultipleChoiceFilter(
+ name='device',
+ queryset=Device.objects.all(),
+ to_field_name='name',
+ label='Device (Name)',
+ )
class Meta:
model = Secret
- fields = ['name', 'role_id', 'role']
+ fields = ['name', 'role_id', 'role', 'device']
| {"golden_diff": "diff --git a/netbox/secrets/filters.py b/netbox/secrets/filters.py\n--- a/netbox/secrets/filters.py\n+++ b/netbox/secrets/filters.py\n@@ -1,6 +1,7 @@\n import django_filters\n \n from .models import Secret, SecretRole\n+from dcim.models import Device\n \n \n class SecretFilter(django_filters.FilterSet):\n@@ -15,7 +16,13 @@\n to_field_name='slug',\n label='Role (slug)',\n )\n+ device = django_filters.ModelMultipleChoiceFilter(\n+ name='device',\n+ queryset=Device.objects.all(),\n+ to_field_name='name',\n+ label='Device (Name)',\n+ )\n \n class Meta:\n model = Secret\n- fields = ['name', 'role_id', 'role']\n+ fields = ['name', 'role_id', 'role', 'device']\n", "issue": "Add Device Name Filter to Secrets API\nFilter secrets by the devices they're associated to.\n\n", "before_files": [{"content": "import django_filters\n\nfrom .models import Secret, SecretRole\n\n\nclass SecretFilter(django_filters.FilterSet):\n role_id = django_filters.ModelMultipleChoiceFilter(\n name='role',\n queryset=SecretRole.objects.all(),\n label='Role (ID)',\n )\n role = django_filters.ModelMultipleChoiceFilter(\n name='role',\n queryset=SecretRole.objects.all(),\n to_field_name='slug',\n label='Role (slug)',\n )\n\n class Meta:\n model = Secret\n fields = ['name', 'role_id', 'role']\n", "path": "netbox/secrets/filters.py"}], "after_files": [{"content": "import django_filters\n\nfrom .models import Secret, SecretRole\nfrom dcim.models import Device\n\n\nclass SecretFilter(django_filters.FilterSet):\n role_id = django_filters.ModelMultipleChoiceFilter(\n name='role',\n queryset=SecretRole.objects.all(),\n label='Role (ID)',\n )\n role = django_filters.ModelMultipleChoiceFilter(\n name='role',\n queryset=SecretRole.objects.all(),\n to_field_name='slug',\n label='Role (slug)',\n )\n device = django_filters.ModelMultipleChoiceFilter(\n name='device',\n queryset=Device.objects.all(),\n to_field_name='name',\n label='Device (Name)',\n )\n\n class Meta:\n model = Secret\n fields = ['name', 'role_id', 'role', 'device']\n", "path": "netbox/secrets/filters.py"}]} | 434 | 195 |
gh_patches_debug_12737 | rasdani/github-patches | git_diff | encode__uvicorn-495 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failing `socket_info.getpeername()` with uvloop against a UDS.
When running against a unix domain socket, we seem to occasionally get an OSError when looking up the remote address. This appears to only occur with `uvloop`.
```
ERROR: Exception in callback HttpToolsProtocol.connection_made(<_SelectorSoc...e, bufsize=0>>)
handle: <Handle HttpToolsProtocol.connection_made(<_SelectorSoc...e, bufsize=0>>)>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/events.py", line 127, in _run
self._callback(*self._args)
File "/Users/tomchristie/GitHub/encode/uvicorn/uvicorn/protocols/http/httptools_impl.py", line 129, in connection_made
self.client = get_remote_addr(transport)
File "/Users/tomchristie/GitHub/encode/uvicorn/uvicorn/protocols/utils.py", line 8, in get_remote_addr
info = socket_info.getpeername()
OSError: [Errno 22] Invalid argument
```
--- END ISSUE ---
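A small, self-contained sketch of the defensive pattern the traceback suggests (wrap the `getpeername()` call and fall back when the socket has no usable peer address); the function and variable names here are illustrative, not uvicorn's API:
```python
import socket


def get_remote_addr_safely(transport):
    """Best-effort peer address lookup that tolerates UDS/uvloop quirks."""
    sock = transport.get_extra_info("socket")
    if sock is None:
        return None
    try:
        info = sock.getpeername()
    except OSError:
        # Seen with uvloop bound to a unix domain socket: treat as "no peer".
        return None
    if sock.family in (socket.AF_INET, socket.AF_INET6):
        return (str(info[0]), int(info[1]))
    return None
```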
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/protocols/utils.py`
Content:
```
1 import socket
2
3
4 def get_remote_addr(transport):
5 socket_info = transport.get_extra_info("socket")
6 if socket_info is not None:
7 info = socket_info.getpeername()
8 family = socket_info.family
9 if family in (socket.AF_INET, socket.AF_INET6):
10 return (str(info[0]), int(info[1]))
11 return None
12 info = transport.get_extra_info("peername")
13 if info is not None and isinstance(info, (list, tuple)) and len(info) == 2:
14 return (str(info[0]), int(info[1]))
15 return None
16
17
18 def get_local_addr(transport):
19 socket_info = transport.get_extra_info("socket")
20 if socket_info is not None:
21 info = socket_info.getsockname()
22 family = socket_info.family
23 if family in (socket.AF_INET, socket.AF_INET6):
24 return (str(info[0]), int(info[1]))
25 return None
26 info = transport.get_extra_info("sockname")
27 if info is not None and isinstance(info, (list, tuple)) and len(info) == 2:
28 return (str(info[0]), int(info[1]))
29 return None
30
31
32 def is_ssl(transport):
33 return bool(transport.get_extra_info("sslcontext"))
34
35
36 def get_client_addr(scope):
37 client = scope.get("client")
38 if not client:
39 return ""
40 return "%s:%d" % client
41
42
43 def get_path_with_query_string(scope):
44 path_with_query_string = scope.get("root_path", "") + scope["path"]
45 if scope["query_string"]:
46 path_with_query_string = "{}?{}".format(
47 path_with_query_string, scope["query_string"].decode("ascii")
48 )
49 return path_with_query_string
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/uvicorn/protocols/utils.py b/uvicorn/protocols/utils.py
--- a/uvicorn/protocols/utils.py
+++ b/uvicorn/protocols/utils.py
@@ -4,8 +4,16 @@
def get_remote_addr(transport):
socket_info = transport.get_extra_info("socket")
if socket_info is not None:
- info = socket_info.getpeername()
- family = socket_info.family
+ try:
+ info = socket_info.getpeername()
+ except OSError:
+ # This case appears to inconsistently occur with uvloop
+ # bound to a unix domain socket.
+ family = None
+ info = None
+ else:
+ family = socket_info.family
+
if family in (socket.AF_INET, socket.AF_INET6):
return (str(info[0]), int(info[1]))
return None
| {"golden_diff": "diff --git a/uvicorn/protocols/utils.py b/uvicorn/protocols/utils.py\n--- a/uvicorn/protocols/utils.py\n+++ b/uvicorn/protocols/utils.py\n@@ -4,8 +4,16 @@\n def get_remote_addr(transport):\n socket_info = transport.get_extra_info(\"socket\")\n if socket_info is not None:\n- info = socket_info.getpeername()\n- family = socket_info.family\n+ try:\n+ info = socket_info.getpeername()\n+ except OSError:\n+ # This case appears to inconsistently occur with uvloop\n+ # bound to a unix domain socket.\n+ family = None\n+ info = None\n+ else:\n+ family = socket_info.family\n+\n if family in (socket.AF_INET, socket.AF_INET6):\n return (str(info[0]), int(info[1]))\n return None\n", "issue": "Failing `socket_info.getpeername()` with uvloop against a UDS.\nWhen running against a unix domain socket, we seem to occassionaly get an OSError when looking up the remote address. This appears to only occur with `uvloop`.\r\n\r\n```\r\nERROR: Exception in callback HttpToolsProtocol.connection_made(<_SelectorSoc...e, bufsize=0>>)\r\nhandle: <Handle HttpToolsProtocol.connection_made(<_SelectorSoc...e, bufsize=0>>)>\r\nTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/events.py\", line 127, in _run\r\n self._callback(*self._args)\r\n File \"/Users/tomchristie/GitHub/encode/uvicorn/uvicorn/protocols/http/httptools_impl.py\", line 129, in connection_made\r\n self.client = get_remote_addr(transport)\r\n File \"/Users/tomchristie/GitHub/encode/uvicorn/uvicorn/protocols/utils.py\", line 8, in get_remote_addr\r\n info = socket_info.getpeername()\r\nOSError: [Errno 22] Invalid argument\r\n```\n", "before_files": [{"content": "import socket\n\n\ndef get_remote_addr(transport):\n socket_info = transport.get_extra_info(\"socket\")\n if socket_info is not None:\n info = socket_info.getpeername()\n family = socket_info.family\n if family in (socket.AF_INET, socket.AF_INET6):\n return (str(info[0]), int(info[1]))\n return None\n info = transport.get_extra_info(\"peername\")\n if info is not None and isinstance(info, (list, tuple)) and len(info) == 2:\n return (str(info[0]), int(info[1]))\n return None\n\n\ndef get_local_addr(transport):\n socket_info = transport.get_extra_info(\"socket\")\n if socket_info is not None:\n info = socket_info.getsockname()\n family = socket_info.family\n if family in (socket.AF_INET, socket.AF_INET6):\n return (str(info[0]), int(info[1]))\n return None\n info = transport.get_extra_info(\"sockname\")\n if info is not None and isinstance(info, (list, tuple)) and len(info) == 2:\n return (str(info[0]), int(info[1]))\n return None\n\n\ndef is_ssl(transport):\n return bool(transport.get_extra_info(\"sslcontext\"))\n\n\ndef get_client_addr(scope):\n client = scope.get(\"client\")\n if not client:\n return \"\"\n return \"%s:%d\" % client\n\n\ndef get_path_with_query_string(scope):\n path_with_query_string = scope.get(\"root_path\", \"\") + scope[\"path\"]\n if scope[\"query_string\"]:\n path_with_query_string = \"{}?{}\".format(\n path_with_query_string, scope[\"query_string\"].decode(\"ascii\")\n )\n return path_with_query_string\n", "path": "uvicorn/protocols/utils.py"}], "after_files": [{"content": "import socket\n\n\ndef get_remote_addr(transport):\n socket_info = transport.get_extra_info(\"socket\")\n if socket_info is not None:\n try:\n info = socket_info.getpeername()\n except OSError:\n # This case appears to inconsistently occur with uvloop\n # bound to a unix domain socket.\n family = None\n info = None\n 
else:\n family = socket_info.family\n\n if family in (socket.AF_INET, socket.AF_INET6):\n return (str(info[0]), int(info[1]))\n return None\n info = transport.get_extra_info(\"peername\")\n if info is not None and isinstance(info, (list, tuple)) and len(info) == 2:\n return (str(info[0]), int(info[1]))\n return None\n\n\ndef get_local_addr(transport):\n socket_info = transport.get_extra_info(\"socket\")\n if socket_info is not None:\n info = socket_info.getsockname()\n family = socket_info.family\n if family in (socket.AF_INET, socket.AF_INET6):\n return (str(info[0]), int(info[1]))\n return None\n info = transport.get_extra_info(\"sockname\")\n if info is not None and isinstance(info, (list, tuple)) and len(info) == 2:\n return (str(info[0]), int(info[1]))\n return None\n\n\ndef is_ssl(transport):\n return bool(transport.get_extra_info(\"sslcontext\"))\n\n\ndef get_client_addr(scope):\n client = scope.get(\"client\")\n if not client:\n return \"\"\n return \"%s:%d\" % client\n\n\ndef get_path_with_query_string(scope):\n path_with_query_string = scope.get(\"root_path\", \"\") + scope[\"path\"]\n if scope[\"query_string\"]:\n path_with_query_string = \"{}?{}\".format(\n path_with_query_string, scope[\"query_string\"].decode(\"ascii\")\n )\n return path_with_query_string\n", "path": "uvicorn/protocols/utils.py"}]} | 993 | 194 |
gh_patches_debug_40654 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1186 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change availability endpoint to not require a user to be logged in
### Issue description and context
Currently, api/v1/available requires a user to be logged in, even though the login-required tag was removed already. Troubleshoot this and ensure that a non-logged-in user can access the availability endpoint (api/v1/available)
### Acceptance criteria
- [ ] a user who is not logged in can access api/v1/available/<domain> and will see a response indicating whether the domain string passed is in fact available.
### Links to other issues
Blocks: #476
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/api/views.py`
Content:
```
1 """Internal API views"""
2 from django.apps import apps
3 from django.views.decorators.http import require_http_methods
4 from django.http import JsonResponse
5
6 import requests
7
8 from cachetools.func import ttl_cache
9
10
11 DOMAIN_FILE_URL = (
12 "https://raw.githubusercontent.com/cisagov/dotgov-data/main/current-full.csv"
13 )
14
15
16 DOMAIN_API_MESSAGES = {
17 "required": "Enter the .gov domain you want. Don’t include “www” or “.gov.”"
18 " For example, if you want www.city.gov, you would enter “city”"
19 " (without the quotes).",
20 "extra_dots": "Enter the .gov domain you want without any periods.",
21 "unavailable": "That domain isn’t available. Try entering another one."
22 " Contact us if you need help coming up with a domain.",
23 "invalid": "Enter a domain using only letters,"
24 " numbers, or hyphens (though we don't recommend using hyphens).",
25 "success": "That domain is available!",
26 }
27
28
29 # this file doesn't change that often, nor is it that big, so cache the result
30 # in memory for ten minutes
31 @ttl_cache(ttl=600)
32 def _domains():
33 """Return a list of the current .gov domains.
34
35 Fetch a file from DOMAIN_FILE_URL, parse the CSV for the domain,
36 lowercase everything and return the list.
37 """
38 DraftDomain = apps.get_model("registrar.DraftDomain")
39 # 5 second timeout
40 file_contents = requests.get(DOMAIN_FILE_URL, timeout=5).text
41 domains = set()
42 # skip the first line
43 for line in file_contents.splitlines()[1:]:
44 # get the domain before the first comma
45 domain = line.split(",", 1)[0]
46 # sanity-check the string we got from the file here
47 if DraftDomain.string_could_be_domain(domain):
48 # lowercase everything when we put it in domains
49 domains.add(domain.lower())
50 return domains
51
52
53 def in_domains(domain):
54 """Return true if the given domain is in the domains list.
55
56 The given domain is lowercased to match against the domains list. If the
57 given domain doesn't end with .gov, ".gov" is added when looking for
58 a match.
59 """
60 Domain = apps.get_model("registrar.Domain")
61 if domain.endswith(".gov"):
62 return Domain.available(domain)
63 else:
64 # domain search string doesn't end with .gov, add it on here
65 return Domain.available(domain + ".gov")
66
67
68 @require_http_methods(["GET"])
69 def available(request, domain=""):
70 """Is a given domain available or not.
71
72 Response is a JSON dictionary with the key "available" and value true or
73 false.
74 """
75 DraftDomain = apps.get_model("registrar.DraftDomain")
76 # validate that the given domain could be a domain name and fail early if
77 # not.
78 if not (
79 DraftDomain.string_could_be_domain(domain)
80 or DraftDomain.string_could_be_domain(domain + ".gov")
81 ):
82 return JsonResponse(
83 {"available": False, "message": DOMAIN_API_MESSAGES["invalid"]}
84 )
85 # a domain is available if it is NOT in the list of current domains
86 if in_domains(domain):
87 return JsonResponse(
88 {"available": False, "message": DOMAIN_API_MESSAGES["unavailable"]}
89 )
90 else:
91 return JsonResponse(
92 {"available": True, "message": DOMAIN_API_MESSAGES["success"]}
93 )
94
```
Path: `src/registrar/models/utility/domain_helper.py`
Content:
```
1 import re
2
3 from api.views import in_domains
4 from registrar.utility import errors
5
6
7 class DomainHelper:
8 """Utility functions and constants for domain names."""
9
10 # a domain name is alphanumeric or hyphen, up to 63 characters, doesn't
11 # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters
12 DOMAIN_REGEX = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)\.[A-Za-z]{2,6}$")
13
14 # a domain name is alphanumeric or hyphen, has at least 2 dots, doesn't
15 # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters
16 HOST_REGEX = re.compile(r"^((?!-)[A-Za-z0-9-]{1,63}(?<!-)\.){2,}([A-Za-z]){2,6}$")
17
18 # a domain can be no longer than 253 characters in total
19 MAX_LENGTH = 253
20
21 @classmethod
22 def string_could_be_domain(cls, domain: str | None) -> bool:
23 """Return True if the string could be a domain name, otherwise False."""
24 if not isinstance(domain, str):
25 return False
26 return bool(cls.DOMAIN_REGEX.match(domain))
27
28 @classmethod
29 def validate(cls, domain: str | None, blank_ok=False) -> str:
30 """Attempt to determine if a domain name could be requested."""
31 if domain is None:
32 raise errors.BlankValueError()
33 if not isinstance(domain, str):
34 raise ValueError("Domain name must be a string")
35 domain = domain.lower().strip()
36 if domain == "":
37 if blank_ok:
38 return domain
39 else:
40 raise errors.BlankValueError()
41 if domain.endswith(".gov"):
42 domain = domain[:-4]
43 if "." in domain:
44 raise errors.ExtraDotsError()
45 if not DomainHelper.string_could_be_domain(domain + ".gov"):
46 raise ValueError()
47 if in_domains(domain):
48 raise errors.DomainUnavailableError()
49 return domain
50
51 @classmethod
52 def sld(cls, domain: str):
53 """
54 Get the second level domain. Example: `gsa.gov` -> `gsa`.
55
56 If no TLD is present, returns the original string.
57 """
58 return domain.split(".")[0]
59
60 @classmethod
61 def tld(cls, domain: str):
62 """Get the top level domain. Example: `gsa.gov` -> `gov`."""
63 parts = domain.rsplit(".")
64 return parts[-1] if len(parts) > 1 else ""
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/api/views.py b/src/api/views.py
--- a/src/api/views.py
+++ b/src/api/views.py
@@ -5,6 +5,8 @@
import requests
+from login_required import login_not_required
+
from cachetools.func import ttl_cache
@@ -23,6 +25,7 @@
"invalid": "Enter a domain using only letters,"
" numbers, or hyphens (though we don't recommend using hyphens).",
"success": "That domain is available!",
+ "error": "Error finding domain availability.",
}
@@ -50,22 +53,26 @@
return domains
-def in_domains(domain):
- """Return true if the given domain is in the domains list.
+def check_domain_available(domain):
+ """Return true if the given domain is available.
The given domain is lowercased to match against the domains list. If the
given domain doesn't end with .gov, ".gov" is added when looking for
a match.
"""
Domain = apps.get_model("registrar.Domain")
- if domain.endswith(".gov"):
- return Domain.available(domain)
- else:
- # domain search string doesn't end with .gov, add it on here
- return Domain.available(domain + ".gov")
+ try:
+ if domain.endswith(".gov"):
+ return Domain.available(domain)
+ else:
+ # domain search string doesn't end with .gov, add it on here
+ return Domain.available(domain + ".gov")
+ except Exception:
+ return False
@require_http_methods(["GET"])
+@login_not_required
def available(request, domain=""):
"""Is a given domain available or not.
@@ -83,11 +90,16 @@
{"available": False, "message": DOMAIN_API_MESSAGES["invalid"]}
)
# a domain is available if it is NOT in the list of current domains
- if in_domains(domain):
- return JsonResponse(
- {"available": False, "message": DOMAIN_API_MESSAGES["unavailable"]}
- )
- else:
+ try:
+ if check_domain_available(domain):
+ return JsonResponse(
+ {"available": True, "message": DOMAIN_API_MESSAGES["success"]}
+ )
+ else:
+ return JsonResponse(
+ {"available": False, "message": DOMAIN_API_MESSAGES["unavailable"]}
+ )
+ except Exception:
return JsonResponse(
- {"available": True, "message": DOMAIN_API_MESSAGES["success"]}
+ {"available": False, "message": DOMAIN_API_MESSAGES["error"]}
)
diff --git a/src/registrar/models/utility/domain_helper.py b/src/registrar/models/utility/domain_helper.py
--- a/src/registrar/models/utility/domain_helper.py
+++ b/src/registrar/models/utility/domain_helper.py
@@ -1,6 +1,6 @@
import re
-from api.views import in_domains
+from api.views import check_domain_available
from registrar.utility import errors
@@ -44,7 +44,7 @@
raise errors.ExtraDotsError()
if not DomainHelper.string_could_be_domain(domain + ".gov"):
raise ValueError()
- if in_domains(domain):
+ if not check_domain_available(domain):
raise errors.DomainUnavailableError()
return domain
| {"golden_diff": "diff --git a/src/api/views.py b/src/api/views.py\n--- a/src/api/views.py\n+++ b/src/api/views.py\n@@ -5,6 +5,8 @@\n \n import requests\n \n+from login_required import login_not_required\n+\n from cachetools.func import ttl_cache\n \n \n@@ -23,6 +25,7 @@\n \"invalid\": \"Enter a domain using only letters,\"\n \" numbers, or hyphens (though we don't recommend using hyphens).\",\n \"success\": \"That domain is available!\",\n+ \"error\": \"Error finding domain availability.\",\n }\n \n \n@@ -50,22 +53,26 @@\n return domains\n \n \n-def in_domains(domain):\n- \"\"\"Return true if the given domain is in the domains list.\n+def check_domain_available(domain):\n+ \"\"\"Return true if the given domain is available.\n \n The given domain is lowercased to match against the domains list. If the\n given domain doesn't end with .gov, \".gov\" is added when looking for\n a match.\n \"\"\"\n Domain = apps.get_model(\"registrar.Domain\")\n- if domain.endswith(\".gov\"):\n- return Domain.available(domain)\n- else:\n- # domain search string doesn't end with .gov, add it on here\n- return Domain.available(domain + \".gov\")\n+ try:\n+ if domain.endswith(\".gov\"):\n+ return Domain.available(domain)\n+ else:\n+ # domain search string doesn't end with .gov, add it on here\n+ return Domain.available(domain + \".gov\")\n+ except Exception:\n+ return False\n \n \n @require_http_methods([\"GET\"])\n+@login_not_required\n def available(request, domain=\"\"):\n \"\"\"Is a given domain available or not.\n \n@@ -83,11 +90,16 @@\n {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"invalid\"]}\n )\n # a domain is available if it is NOT in the list of current domains\n- if in_domains(domain):\n- return JsonResponse(\n- {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"unavailable\"]}\n- )\n- else:\n+ try:\n+ if check_domain_available(domain):\n+ return JsonResponse(\n+ {\"available\": True, \"message\": DOMAIN_API_MESSAGES[\"success\"]}\n+ )\n+ else:\n+ return JsonResponse(\n+ {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"unavailable\"]}\n+ )\n+ except Exception:\n return JsonResponse(\n- {\"available\": True, \"message\": DOMAIN_API_MESSAGES[\"success\"]}\n+ {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"error\"]}\n )\ndiff --git a/src/registrar/models/utility/domain_helper.py b/src/registrar/models/utility/domain_helper.py\n--- a/src/registrar/models/utility/domain_helper.py\n+++ b/src/registrar/models/utility/domain_helper.py\n@@ -1,6 +1,6 @@\n import re\n \n-from api.views import in_domains\n+from api.views import check_domain_available\n from registrar.utility import errors\n \n \n@@ -44,7 +44,7 @@\n raise errors.ExtraDotsError()\n if not DomainHelper.string_could_be_domain(domain + \".gov\"):\n raise ValueError()\n- if in_domains(domain):\n+ if not check_domain_available(domain):\n raise errors.DomainUnavailableError()\n return domain\n", "issue": "Change availability endpoint to not require a user to be logged in\n### Issue description and context\n\nCurrent api/v1/available requires a user to be logged in, even though the login required tag was removed already. 
Troubleshoot this and ensure that a non-logged in user can access the availability endpoint (api/v1/available)\n\n### Acceptance criteria\n\n- [ ] a user not logged in can access api/v1/available/<domain> and will see the response to whether the domain string passed is in fact available.\n\n### Links to other issues\n\nBlocks: #476\n", "before_files": [{"content": "\"\"\"Internal API views\"\"\"\nfrom django.apps import apps\nfrom django.views.decorators.http import require_http_methods\nfrom django.http import JsonResponse\n\nimport requests\n\nfrom cachetools.func import ttl_cache\n\n\nDOMAIN_FILE_URL = (\n \"https://raw.githubusercontent.com/cisagov/dotgov-data/main/current-full.csv\"\n)\n\n\nDOMAIN_API_MESSAGES = {\n \"required\": \"Enter the .gov domain you want. Don\u2019t include \u201cwww\u201d or \u201c.gov.\u201d\"\n \" For example, if you want www.city.gov, you would enter \u201ccity\u201d\"\n \" (without the quotes).\",\n \"extra_dots\": \"Enter the .gov domain you want without any periods.\",\n \"unavailable\": \"That domain isn\u2019t available. Try entering another one.\"\n \" Contact us if you need help coming up with a domain.\",\n \"invalid\": \"Enter a domain using only letters,\"\n \" numbers, or hyphens (though we don't recommend using hyphens).\",\n \"success\": \"That domain is available!\",\n}\n\n\n# this file doesn't change that often, nor is it that big, so cache the result\n# in memory for ten minutes\n@ttl_cache(ttl=600)\ndef _domains():\n \"\"\"Return a list of the current .gov domains.\n\n Fetch a file from DOMAIN_FILE_URL, parse the CSV for the domain,\n lowercase everything and return the list.\n \"\"\"\n DraftDomain = apps.get_model(\"registrar.DraftDomain\")\n # 5 second timeout\n file_contents = requests.get(DOMAIN_FILE_URL, timeout=5).text\n domains = set()\n # skip the first line\n for line in file_contents.splitlines()[1:]:\n # get the domain before the first comma\n domain = line.split(\",\", 1)[0]\n # sanity-check the string we got from the file here\n if DraftDomain.string_could_be_domain(domain):\n # lowercase everything when we put it in domains\n domains.add(domain.lower())\n return domains\n\n\ndef in_domains(domain):\n \"\"\"Return true if the given domain is in the domains list.\n\n The given domain is lowercased to match against the domains list. 
If the\n given domain doesn't end with .gov, \".gov\" is added when looking for\n a match.\n \"\"\"\n Domain = apps.get_model(\"registrar.Domain\")\n if domain.endswith(\".gov\"):\n return Domain.available(domain)\n else:\n # domain search string doesn't end with .gov, add it on here\n return Domain.available(domain + \".gov\")\n\n\n@require_http_methods([\"GET\"])\ndef available(request, domain=\"\"):\n \"\"\"Is a given domain available or not.\n\n Response is a JSON dictionary with the key \"available\" and value true or\n false.\n \"\"\"\n DraftDomain = apps.get_model(\"registrar.DraftDomain\")\n # validate that the given domain could be a domain name and fail early if\n # not.\n if not (\n DraftDomain.string_could_be_domain(domain)\n or DraftDomain.string_could_be_domain(domain + \".gov\")\n ):\n return JsonResponse(\n {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"invalid\"]}\n )\n # a domain is available if it is NOT in the list of current domains\n if in_domains(domain):\n return JsonResponse(\n {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"unavailable\"]}\n )\n else:\n return JsonResponse(\n {\"available\": True, \"message\": DOMAIN_API_MESSAGES[\"success\"]}\n )\n", "path": "src/api/views.py"}, {"content": "import re\n\nfrom api.views import in_domains\nfrom registrar.utility import errors\n\n\nclass DomainHelper:\n \"\"\"Utility functions and constants for domain names.\"\"\"\n\n # a domain name is alphanumeric or hyphen, up to 63 characters, doesn't\n # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters\n DOMAIN_REGEX = re.compile(r\"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)\\.[A-Za-z]{2,6}$\")\n\n # a domain name is alphanumeric or hyphen, has at least 2 dots, doesn't\n # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters\n HOST_REGEX = re.compile(r\"^((?!-)[A-Za-z0-9-]{1,63}(?<!-)\\.){2,}([A-Za-z]){2,6}$\")\n\n # a domain can be no longer than 253 characters in total\n MAX_LENGTH = 253\n\n @classmethod\n def string_could_be_domain(cls, domain: str | None) -> bool:\n \"\"\"Return True if the string could be a domain name, otherwise False.\"\"\"\n if not isinstance(domain, str):\n return False\n return bool(cls.DOMAIN_REGEX.match(domain))\n\n @classmethod\n def validate(cls, domain: str | None, blank_ok=False) -> str:\n \"\"\"Attempt to determine if a domain name could be requested.\"\"\"\n if domain is None:\n raise errors.BlankValueError()\n if not isinstance(domain, str):\n raise ValueError(\"Domain name must be a string\")\n domain = domain.lower().strip()\n if domain == \"\":\n if blank_ok:\n return domain\n else:\n raise errors.BlankValueError()\n if domain.endswith(\".gov\"):\n domain = domain[:-4]\n if \".\" in domain:\n raise errors.ExtraDotsError()\n if not DomainHelper.string_could_be_domain(domain + \".gov\"):\n raise ValueError()\n if in_domains(domain):\n raise errors.DomainUnavailableError()\n return domain\n\n @classmethod\n def sld(cls, domain: str):\n \"\"\"\n Get the second level domain. Example: `gsa.gov` -> `gsa`.\n\n If no TLD is present, returns the original string.\n \"\"\"\n return domain.split(\".\")[0]\n\n @classmethod\n def tld(cls, domain: str):\n \"\"\"Get the top level domain. 
Example: `gsa.gov` -> `gov`.\"\"\"\n parts = domain.rsplit(\".\")\n return parts[-1] if len(parts) > 1 else \"\"\n", "path": "src/registrar/models/utility/domain_helper.py"}], "after_files": [{"content": "\"\"\"Internal API views\"\"\"\nfrom django.apps import apps\nfrom django.views.decorators.http import require_http_methods\nfrom django.http import JsonResponse\n\nimport requests\n\nfrom login_required import login_not_required\n\nfrom cachetools.func import ttl_cache\n\n\nDOMAIN_FILE_URL = (\n \"https://raw.githubusercontent.com/cisagov/dotgov-data/main/current-full.csv\"\n)\n\n\nDOMAIN_API_MESSAGES = {\n \"required\": \"Enter the .gov domain you want. Don\u2019t include \u201cwww\u201d or \u201c.gov.\u201d\"\n \" For example, if you want www.city.gov, you would enter \u201ccity\u201d\"\n \" (without the quotes).\",\n \"extra_dots\": \"Enter the .gov domain you want without any periods.\",\n \"unavailable\": \"That domain isn\u2019t available. Try entering another one.\"\n \" Contact us if you need help coming up with a domain.\",\n \"invalid\": \"Enter a domain using only letters,\"\n \" numbers, or hyphens (though we don't recommend using hyphens).\",\n \"success\": \"That domain is available!\",\n \"error\": \"Error finding domain availability.\",\n}\n\n\n# this file doesn't change that often, nor is it that big, so cache the result\n# in memory for ten minutes\n@ttl_cache(ttl=600)\ndef _domains():\n \"\"\"Return a list of the current .gov domains.\n\n Fetch a file from DOMAIN_FILE_URL, parse the CSV for the domain,\n lowercase everything and return the list.\n \"\"\"\n DraftDomain = apps.get_model(\"registrar.DraftDomain\")\n # 5 second timeout\n file_contents = requests.get(DOMAIN_FILE_URL, timeout=5).text\n domains = set()\n # skip the first line\n for line in file_contents.splitlines()[1:]:\n # get the domain before the first comma\n domain = line.split(\",\", 1)[0]\n # sanity-check the string we got from the file here\n if DraftDomain.string_could_be_domain(domain):\n # lowercase everything when we put it in domains\n domains.add(domain.lower())\n return domains\n\n\ndef check_domain_available(domain):\n \"\"\"Return true if the given domain is available.\n\n The given domain is lowercased to match against the domains list. 
If the\n given domain doesn't end with .gov, \".gov\" is added when looking for\n a match.\n \"\"\"\n Domain = apps.get_model(\"registrar.Domain\")\n try:\n if domain.endswith(\".gov\"):\n return Domain.available(domain)\n else:\n # domain search string doesn't end with .gov, add it on here\n return Domain.available(domain + \".gov\")\n except Exception:\n return False\n\n\n@require_http_methods([\"GET\"])\n@login_not_required\ndef available(request, domain=\"\"):\n \"\"\"Is a given domain available or not.\n\n Response is a JSON dictionary with the key \"available\" and value true or\n false.\n \"\"\"\n DraftDomain = apps.get_model(\"registrar.DraftDomain\")\n # validate that the given domain could be a domain name and fail early if\n # not.\n if not (\n DraftDomain.string_could_be_domain(domain)\n or DraftDomain.string_could_be_domain(domain + \".gov\")\n ):\n return JsonResponse(\n {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"invalid\"]}\n )\n # a domain is available if it is NOT in the list of current domains\n try:\n if check_domain_available(domain):\n return JsonResponse(\n {\"available\": True, \"message\": DOMAIN_API_MESSAGES[\"success\"]}\n )\n else:\n return JsonResponse(\n {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"unavailable\"]}\n )\n except Exception:\n return JsonResponse(\n {\"available\": False, \"message\": DOMAIN_API_MESSAGES[\"error\"]}\n )\n", "path": "src/api/views.py"}, {"content": "import re\n\nfrom api.views import check_domain_available\nfrom registrar.utility import errors\n\n\nclass DomainHelper:\n \"\"\"Utility functions and constants for domain names.\"\"\"\n\n # a domain name is alphanumeric or hyphen, up to 63 characters, doesn't\n # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters\n DOMAIN_REGEX = re.compile(r\"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)\\.[A-Za-z]{2,6}$\")\n\n # a domain name is alphanumeric or hyphen, has at least 2 dots, doesn't\n # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters\n HOST_REGEX = re.compile(r\"^((?!-)[A-Za-z0-9-]{1,63}(?<!-)\\.){2,}([A-Za-z]){2,6}$\")\n\n # a domain can be no longer than 253 characters in total\n MAX_LENGTH = 253\n\n @classmethod\n def string_could_be_domain(cls, domain: str | None) -> bool:\n \"\"\"Return True if the string could be a domain name, otherwise False.\"\"\"\n if not isinstance(domain, str):\n return False\n return bool(cls.DOMAIN_REGEX.match(domain))\n\n @classmethod\n def validate(cls, domain: str | None, blank_ok=False) -> str:\n \"\"\"Attempt to determine if a domain name could be requested.\"\"\"\n if domain is None:\n raise errors.BlankValueError()\n if not isinstance(domain, str):\n raise ValueError(\"Domain name must be a string\")\n domain = domain.lower().strip()\n if domain == \"\":\n if blank_ok:\n return domain\n else:\n raise errors.BlankValueError()\n if domain.endswith(\".gov\"):\n domain = domain[:-4]\n if \".\" in domain:\n raise errors.ExtraDotsError()\n if not DomainHelper.string_could_be_domain(domain + \".gov\"):\n raise ValueError()\n if not check_domain_available(domain):\n raise errors.DomainUnavailableError()\n return domain\n\n @classmethod\n def sld(cls, domain: str):\n \"\"\"\n Get the second level domain. Example: `gsa.gov` -> `gsa`.\n\n If no TLD is present, returns the original string.\n \"\"\"\n return domain.split(\".\")[0]\n\n @classmethod\n def tld(cls, domain: str):\n \"\"\"Get the top level domain. 
Example: `gsa.gov` -> `gov`.\"\"\"\n parts = domain.rsplit(\".\")\n return parts[-1] if len(parts) > 1 else \"\"\n", "path": "src/registrar/models/utility/domain_helper.py"}]} | 2,029 | 720 |
gh_patches_debug_41978 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3239 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
break history page into multiple pages
Each week should get its own page, with paging nav added in the box.
break history page into multiple pages
Each week should get its own page, with paging nav added in the box.
Total Given on history pages for teams includes takes
On the history pages for teams, the "Total Given" amount includes both tips to people **and** takes by members, which is not _correct_.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/utils/history.py`
Content:
```
1 def iter_payday_events(db, participant):
2 """Yields payday events for the given participant.
3 """
4 username = participant.username
5 exchanges = db.all("""
6 SELECT *
7 FROM exchanges
8 WHERE participant=%s
9 """, (username,), back_as=dict)
10 transfers = db.all("""
11 SELECT *
12 FROM transfers
13 WHERE tipper=%(username)s OR tippee=%(username)s
14 """, locals(), back_as=dict)
15
16 if not (exchanges or transfers):
17 return
18
19 if transfers:
20 yield dict(
21 kind='totals',
22 given=sum(t['amount'] for t in transfers if t['tipper'] == username),
23 received=sum(t['amount'] for t in transfers if t['tippee'] == username),
24 )
25
26 payday_dates = db.all("""
27 SELECT ts_start::date
28 FROM paydays
29 ORDER BY ts_start ASC
30 """)
31
32 balance = participant.balance
33 prev_date = None
34 get_timestamp = lambda e: e['timestamp']
35 events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)
36 for event in events:
37
38 event['balance'] = balance
39
40 event_date = event['timestamp'].date()
41 if event_date != prev_date:
42 if prev_date:
43 yield dict(kind='day-close', balance=balance)
44 day_open = dict(kind='day-open', date=event_date, balance=balance)
45 if payday_dates:
46 while payday_dates and payday_dates[-1] > event_date:
47 payday_dates.pop()
48 payday_date = payday_dates[-1] if payday_dates else None
49 if event_date == payday_date:
50 day_open['payday_number'] = len(payday_dates) - 1
51 yield day_open
52 prev_date = event_date
53
54 if 'fee' in event:
55 if event['amount'] > 0:
56 kind = 'charge'
57 if event['status'] in (None, 'succeeded'):
58 balance -= event['amount']
59 else:
60 kind = 'credit'
61 if event['status'] != 'failed':
62 balance -= event['amount'] - event['fee']
63 else:
64 kind = 'transfer'
65 if event['tippee'] == username:
66 balance -= event['amount']
67 else:
68 balance += event['amount']
69 event['kind'] = kind
70
71 yield event
72
73 yield dict(kind='day-close', balance='0.00')
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py
--- a/gratipay/utils/history.py
+++ b/gratipay/utils/history.py
@@ -1,16 +1,83 @@
-def iter_payday_events(db, participant):
+from datetime import datetime
+from decimal import Decimal
+
+from psycopg2 import IntegrityError
+
+
+def get_end_of_year_balance(db, participant, year, current_year):
+ if year == current_year:
+ return participant.balance
+ if year < participant.claimed_time.year:
+ return Decimal('0.00')
+
+ balance = db.one("""
+ SELECT balance
+ FROM balances_at
+ WHERE participant = %s
+ AND "at" = %s
+ """, (participant.id, datetime(year+1, 1, 1)))
+ if balance is not None:
+ return balance
+
+ username = participant.username
+ start_balance = get_end_of_year_balance(db, participant, year-1, current_year)
+ delta = db.one("""
+ SELECT (
+ SELECT COALESCE(sum(amount), 0) AS a
+ FROM exchanges
+ WHERE participant = %(username)s
+ AND extract(year from timestamp) = %(year)s
+ AND amount > 0
+ AND (status is null OR status = 'succeeded')
+ ) + (
+ SELECT COALESCE(sum(amount-fee), 0) AS a
+ FROM exchanges
+ WHERE participant = %(username)s
+ AND extract(year from timestamp) = %(year)s
+ AND amount < 0
+ AND (status is null OR status <> 'failed')
+ ) + (
+ SELECT COALESCE(sum(-amount), 0) AS a
+ FROM transfers
+ WHERE tipper = %(username)s
+ AND extract(year from timestamp) = %(year)s
+ ) + (
+ SELECT COALESCE(sum(amount), 0) AS a
+ FROM transfers
+ WHERE tippee = %(username)s
+ AND extract(year from timestamp) = %(year)s
+ ) AS delta
+ """, locals())
+ balance = start_balance + delta
+ try:
+ db.run("""
+ INSERT INTO balances_at
+ (participant, at, balance)
+ VALUES (%s, %s, %s)
+ """, (participant.id, datetime(year+1, 1, 1), balance))
+ except IntegrityError:
+ pass
+ return balance
+
+
+def iter_payday_events(db, participant, year=None):
"""Yields payday events for the given participant.
"""
+ current_year = datetime.utcnow().year
+ year = year or current_year
+
username = participant.username
exchanges = db.all("""
SELECT *
FROM exchanges
- WHERE participant=%s
- """, (username,), back_as=dict)
+ WHERE participant=%(username)s
+ AND extract(year from timestamp) = %(year)s
+ """, locals(), back_as=dict)
transfers = db.all("""
SELECT *
FROM transfers
- WHERE tipper=%(username)s OR tippee=%(username)s
+ WHERE (tipper=%(username)s OR tippee=%(username)s)
+ AND extract(year from timestamp) = %(year)s
""", locals(), back_as=dict)
if not (exchanges or transfers):
@@ -19,7 +86,7 @@
if transfers:
yield dict(
kind='totals',
- given=sum(t['amount'] for t in transfers if t['tipper'] == username),
+ given=sum(t['amount'] for t in transfers if t['tipper'] == username and t['context'] != 'take'),
received=sum(t['amount'] for t in transfers if t['tippee'] == username),
)
@@ -29,7 +96,7 @@
ORDER BY ts_start ASC
""")
- balance = participant.balance
+ balance = get_end_of_year_balance(db, participant, year, current_year)
prev_date = None
get_timestamp = lambda e: e['timestamp']
events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)
@@ -70,4 +137,4 @@
yield event
- yield dict(kind='day-close', balance='0.00')
+ yield dict(kind='day-close', balance=balance)
| {"golden_diff": "diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py\n--- a/gratipay/utils/history.py\n+++ b/gratipay/utils/history.py\n@@ -1,16 +1,83 @@\n-def iter_payday_events(db, participant):\n+from datetime import datetime\n+from decimal import Decimal\n+\n+from psycopg2 import IntegrityError\n+\n+\n+def get_end_of_year_balance(db, participant, year, current_year):\n+ if year == current_year:\n+ return participant.balance\n+ if year < participant.claimed_time.year:\n+ return Decimal('0.00')\n+\n+ balance = db.one(\"\"\"\n+ SELECT balance\n+ FROM balances_at\n+ WHERE participant = %s\n+ AND \"at\" = %s\n+ \"\"\", (participant.id, datetime(year+1, 1, 1)))\n+ if balance is not None:\n+ return balance\n+\n+ username = participant.username\n+ start_balance = get_end_of_year_balance(db, participant, year-1, current_year)\n+ delta = db.one(\"\"\"\n+ SELECT (\n+ SELECT COALESCE(sum(amount), 0) AS a\n+ FROM exchanges\n+ WHERE participant = %(username)s\n+ AND extract(year from timestamp) = %(year)s\n+ AND amount > 0\n+ AND (status is null OR status = 'succeeded')\n+ ) + (\n+ SELECT COALESCE(sum(amount-fee), 0) AS a\n+ FROM exchanges\n+ WHERE participant = %(username)s\n+ AND extract(year from timestamp) = %(year)s\n+ AND amount < 0\n+ AND (status is null OR status <> 'failed')\n+ ) + (\n+ SELECT COALESCE(sum(-amount), 0) AS a\n+ FROM transfers\n+ WHERE tipper = %(username)s\n+ AND extract(year from timestamp) = %(year)s\n+ ) + (\n+ SELECT COALESCE(sum(amount), 0) AS a\n+ FROM transfers\n+ WHERE tippee = %(username)s\n+ AND extract(year from timestamp) = %(year)s\n+ ) AS delta\n+ \"\"\", locals())\n+ balance = start_balance + delta\n+ try:\n+ db.run(\"\"\"\n+ INSERT INTO balances_at\n+ (participant, at, balance)\n+ VALUES (%s, %s, %s)\n+ \"\"\", (participant.id, datetime(year+1, 1, 1), balance))\n+ except IntegrityError:\n+ pass\n+ return balance\n+\n+\n+def iter_payday_events(db, participant, year=None):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n+ current_year = datetime.utcnow().year\n+ year = year or current_year\n+\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n- WHERE participant=%s\n- \"\"\", (username,), back_as=dict)\n+ WHERE participant=%(username)s\n+ AND extract(year from timestamp) = %(year)s\n+ \"\"\", locals(), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n- WHERE tipper=%(username)s OR tippee=%(username)s\n+ WHERE (tipper=%(username)s OR tippee=%(username)s)\n+ AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n \n if not (exchanges or transfers):\n@@ -19,7 +86,7 @@\n if transfers:\n yield dict(\n kind='totals',\n- given=sum(t['amount'] for t in transfers if t['tipper'] == username),\n+ given=sum(t['amount'] for t in transfers if t['tipper'] == username and t['context'] != 'take'),\n received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n )\n \n@@ -29,7 +96,7 @@\n ORDER BY ts_start ASC\n \"\"\")\n \n- balance = participant.balance\n+ balance = get_end_of_year_balance(db, participant, year, current_year)\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n@@ -70,4 +137,4 @@\n \n yield event\n \n- yield dict(kind='day-close', balance='0.00')\n+ yield dict(kind='day-close', balance=balance)\n", "issue": "break history page into multiple pages\nEach week should get its own page, with paging nav added in the box.\n\nbreak history page into multiple 
pages\nEach week should get its own page, with paging nav added in the box.\n\nTotal Given on history pages for teams includes takes\nOn the history pages for teams, the \"Total Given\" amount includes both tips to people **and** takes by members, which is not _correct_. \n\n", "before_files": [{"content": "def iter_payday_events(db, participant):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n WHERE participant=%s\n \"\"\", (username,), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n WHERE tipper=%(username)s OR tippee=%(username)s\n \"\"\", locals(), back_as=dict)\n\n if not (exchanges or transfers):\n return\n\n if transfers:\n yield dict(\n kind='totals',\n given=sum(t['amount'] for t in transfers if t['tipper'] == username),\n received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n )\n\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n ORDER BY ts_start ASC\n \"\"\")\n\n balance = participant.balance\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n for event in events:\n\n event['balance'] = balance\n\n event_date = event['timestamp'].date()\n if event_date != prev_date:\n if prev_date:\n yield dict(kind='day-close', balance=balance)\n day_open = dict(kind='day-open', date=event_date, balance=balance)\n if payday_dates:\n while payday_dates and payday_dates[-1] > event_date:\n payday_dates.pop()\n payday_date = payday_dates[-1] if payday_dates else None\n if event_date == payday_date:\n day_open['payday_number'] = len(payday_dates) - 1\n yield day_open\n prev_date = event_date\n\n if 'fee' in event:\n if event['amount'] > 0:\n kind = 'charge'\n if event['status'] in (None, 'succeeded'):\n balance -= event['amount']\n else:\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n balance -= event['amount']\n else:\n balance += event['amount']\n event['kind'] = kind\n\n yield event\n\n yield dict(kind='day-close', balance='0.00')\n", "path": "gratipay/utils/history.py"}], "after_files": [{"content": "from datetime import datetime\nfrom decimal import Decimal\n\nfrom psycopg2 import IntegrityError\n\n\ndef get_end_of_year_balance(db, participant, year, current_year):\n if year == current_year:\n return participant.balance\n if year < participant.claimed_time.year:\n return Decimal('0.00')\n\n balance = db.one(\"\"\"\n SELECT balance\n FROM balances_at\n WHERE participant = %s\n AND \"at\" = %s\n \"\"\", (participant.id, datetime(year+1, 1, 1)))\n if balance is not None:\n return balance\n\n username = participant.username\n start_balance = get_end_of_year_balance(db, participant, year-1, current_year)\n delta = db.one(\"\"\"\n SELECT (\n SELECT COALESCE(sum(amount), 0) AS a\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n AND amount > 0\n AND (status is null OR status = 'succeeded')\n ) + (\n SELECT COALESCE(sum(amount-fee), 0) AS a\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n AND amount < 0\n AND (status is null OR status <> 'failed')\n ) + (\n SELECT COALESCE(sum(-amount), 0) AS a\n FROM transfers\n WHERE tipper = %(username)s\n AND extract(year from timestamp) = %(year)s\n ) + (\n SELECT COALESCE(sum(amount), 0) AS a\n FROM 
transfers\n WHERE tippee = %(username)s\n AND extract(year from timestamp) = %(year)s\n ) AS delta\n \"\"\", locals())\n balance = start_balance + delta\n try:\n db.run(\"\"\"\n INSERT INTO balances_at\n (participant, at, balance)\n VALUES (%s, %s, %s)\n \"\"\", (participant.id, datetime(year+1, 1, 1), balance))\n except IntegrityError:\n pass\n return balance\n\n\ndef iter_payday_events(db, participant, year=None):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n current_year = datetime.utcnow().year\n year = year or current_year\n\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n WHERE participant=%(username)s\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n WHERE (tipper=%(username)s OR tippee=%(username)s)\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n\n if not (exchanges or transfers):\n return\n\n if transfers:\n yield dict(\n kind='totals',\n given=sum(t['amount'] for t in transfers if t['tipper'] == username and t['context'] != 'take'),\n received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n )\n\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n ORDER BY ts_start ASC\n \"\"\")\n\n balance = get_end_of_year_balance(db, participant, year, current_year)\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n for event in events:\n\n event['balance'] = balance\n\n event_date = event['timestamp'].date()\n if event_date != prev_date:\n if prev_date:\n yield dict(kind='day-close', balance=balance)\n day_open = dict(kind='day-open', date=event_date, balance=balance)\n if payday_dates:\n while payday_dates and payday_dates[-1] > event_date:\n payday_dates.pop()\n payday_date = payday_dates[-1] if payday_dates else None\n if event_date == payday_date:\n day_open['payday_number'] = len(payday_dates) - 1\n yield day_open\n prev_date = event_date\n\n if 'fee' in event:\n if event['amount'] > 0:\n kind = 'charge'\n if event['status'] in (None, 'succeeded'):\n balance -= event['amount']\n else:\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n balance -= event['amount']\n else:\n balance += event['amount']\n event['kind'] = kind\n\n yield event\n\n yield dict(kind='day-close', balance=balance)\n", "path": "gratipay/utils/history.py"}]} | 1,011 | 990 |
gh_patches_debug_13737 | rasdani/github-patches | git_diff | translate__pootle-4191 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A user can't download the terminology files from a Terminology project
I know we switched off download of `pootle-terminology.po` files as they cause a number of problems, mostly because there is no associated file. But this seems to have extended to also include the 'Terminology' project.
I can't recall if at the time we decided it was an OK compromise. But now, with @phlax's work, which means we can have non-existent files, we should likely be able to bring this back. At the very least we shouldn't be disabling download of files within Terminology and should only restrict `pootle-terminology.po`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_translationproject/views.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 from django.conf import settings
11 from django.shortcuts import render
12
13 from import_export.views import handle_upload_form
14 from pootle.core.browser import (get_children, get_table_headings, get_parent,
15 get_vfolders)
16 from pootle.core.decorators import (get_path_obj, get_resource,
17 permission_required)
18 from pootle.core.helpers import (get_export_view_context, get_browser_context,
19 get_sidebar_announcements_context,
20 get_translation_context, SIDEBAR_COOKIE_NAME)
21 from pootle.core.utils.json import jsonify
22 from pootle_app.models.permissions import check_permission
23 from pootle_app.views.admin.permissions import admin_permissions as admin_perms
24
25
26 @get_path_obj
27 @permission_required('administrate')
28 def admin_permissions(request, translation_project):
29 language = translation_project.language
30 project = translation_project.project
31
32 ctx = {
33 'page': 'admin-permissions',
34
35 'translation_project': translation_project,
36 'project': project,
37 'language': language,
38 'directory': translation_project.directory,
39 }
40
41 return admin_perms(request, translation_project.directory,
42 'translation_projects/admin/permissions.html', ctx)
43
44
45 @get_path_obj
46 @permission_required('view')
47 @get_resource
48 def browse(request, translation_project, dir_path, filename=None):
49 project = translation_project.project
50 language = translation_project.language
51
52 directory = request.directory
53 store = request.store
54 is_admin = check_permission('administrate', request)
55
56 ctx, cookie_data = get_sidebar_announcements_context(
57 request,
58 (project, language, translation_project, ),
59 )
60
61 ctx.update(get_browser_context(request))
62
63 # TODO improve plugin logic
64 if "import_export" in settings.INSTALLED_APPS:
65 if not translation_project.is_terminology_project:
66 if request.user.is_authenticated():
67 if check_permission('translate', request):
68 ctx.update(handle_upload_form(request, project))
69 ctx.update({'display_download': True,
70 'has_sidebar': True})
71
72 stats = request.resource_obj.get_stats()
73
74 if store is None:
75 table_fields = ['name', 'progress', 'total', 'need-translation',
76 'suggestions', 'critical', 'last-updated', 'activity']
77 ctx.update({
78 'table': {
79 'id': 'tp',
80 'fields': table_fields,
81 'headings': get_table_headings(table_fields),
82 'items': get_children(directory),
83 }
84 })
85
86 if 'virtualfolder' in settings.INSTALLED_APPS:
87 vfolders = get_vfolders(directory, all_vfolders=is_admin)
88 if len(vfolders) > 0:
89 table_fields = ['name', 'priority', 'progress', 'total',
90 'need-translation', 'suggestions', 'critical',
91 'last-updated', 'activity']
92 ctx.update({
93 'vfolders': {
94 'id': 'vfolders',
95 'fields': table_fields,
96 'headings': get_table_headings(table_fields),
97 'items': vfolders,
98 },
99 })
100
101 # FIXME: set vfolders stats in the resource, don't inject them
102 # here.
103 stats['vfolders'] = {}
104
105 for vfolder_treeitem in directory.vf_treeitems.iterator():
106 if request.user.is_superuser or vfolder_treeitem.is_visible:
107 stats['vfolders'][vfolder_treeitem.code] = \
108 vfolder_treeitem.get_stats(include_children=False)
109
110 ctx.update({
111 'parent': get_parent(directory if store is None else store),
112 'translation_project': translation_project,
113 'project': project,
114 'language': language,
115 'stats': jsonify(stats),
116 'is_admin': is_admin,
117 'is_store': store is not None,
118
119 'browser_extends': 'translation_projects/base.html',
120 })
121
122 response = render(request, 'browser/index.html', ctx)
123
124 if cookie_data:
125 response.set_cookie(SIDEBAR_COOKIE_NAME, cookie_data)
126
127 return response
128
129
130 @get_path_obj
131 @permission_required('view')
132 @get_resource
133 def translate(request, translation_project, dir_path, filename):
134 project = translation_project.project
135
136 ctx = get_translation_context(request)
137
138 ctx.update({
139 'language': translation_project.language,
140 'project': project,
141 'translation_project': translation_project,
142
143 'editor_extends': 'translation_projects/base.html',
144 })
145
146 return render(request, "editor/main.html", ctx)
147
148
149 @get_path_obj
150 @permission_required('view')
151 @get_resource
152 def export_view(request, translation_project, dir_path, filename=None):
153 """Displays a list of units with filters applied."""
154 ctx = get_export_view_context(request)
155 ctx.update({
156 'source_language': translation_project.project.source_language,
157 'language': translation_project.language,
158 'project': translation_project.project,
159 })
160
161 return render(request, 'editor/export_view.html', ctx)
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_translationproject/views.py b/pootle/apps/pootle_translationproject/views.py
--- a/pootle/apps/pootle_translationproject/views.py
+++ b/pootle/apps/pootle_translationproject/views.py
@@ -62,12 +62,11 @@
# TODO improve plugin logic
if "import_export" in settings.INSTALLED_APPS:
- if not translation_project.is_terminology_project:
- if request.user.is_authenticated():
- if check_permission('translate', request):
- ctx.update(handle_upload_form(request, project))
- ctx.update({'display_download': True,
- 'has_sidebar': True})
+ if request.user.is_authenticated():
+ if check_permission('translate', request):
+ ctx.update(handle_upload_form(request, project))
+ ctx.update({'display_download': True,
+ 'has_sidebar': True})
stats = request.resource_obj.get_stats()
| {"golden_diff": "diff --git a/pootle/apps/pootle_translationproject/views.py b/pootle/apps/pootle_translationproject/views.py\n--- a/pootle/apps/pootle_translationproject/views.py\n+++ b/pootle/apps/pootle_translationproject/views.py\n@@ -62,12 +62,11 @@\n \n # TODO improve plugin logic\n if \"import_export\" in settings.INSTALLED_APPS:\n- if not translation_project.is_terminology_project:\n- if request.user.is_authenticated():\n- if check_permission('translate', request):\n- ctx.update(handle_upload_form(request, project))\n- ctx.update({'display_download': True,\n- 'has_sidebar': True})\n+ if request.user.is_authenticated():\n+ if check_permission('translate', request):\n+ ctx.update(handle_upload_form(request, project))\n+ ctx.update({'display_download': True,\n+ 'has_sidebar': True})\n \n stats = request.resource_obj.get_stats()\n", "issue": "A user can't download the terminology files from a Terminology project\nI know we switched off download of `pootle-terminology.po` files as they cause a number of problem mostly because there is no associated file. But this seems to have extended to also include the 'Terminology' project.\n\nI can't recall if at the time we decided it was an OK compromise. But now with @phlax's work which means we can have non-existent files we should likely be able to bring this back. At the very least we shouldn't be disabling download of files within Terminology ad only restrict `pootle-terminology.po`\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.conf import settings\nfrom django.shortcuts import render\n\nfrom import_export.views import handle_upload_form\nfrom pootle.core.browser import (get_children, get_table_headings, get_parent,\n get_vfolders)\nfrom pootle.core.decorators import (get_path_obj, get_resource,\n permission_required)\nfrom pootle.core.helpers import (get_export_view_context, get_browser_context,\n get_sidebar_announcements_context,\n get_translation_context, SIDEBAR_COOKIE_NAME)\nfrom pootle.core.utils.json import jsonify\nfrom pootle_app.models.permissions import check_permission\nfrom pootle_app.views.admin.permissions import admin_permissions as admin_perms\n\n\n@get_path_obj\n@permission_required('administrate')\ndef admin_permissions(request, translation_project):\n language = translation_project.language\n project = translation_project.project\n\n ctx = {\n 'page': 'admin-permissions',\n\n 'translation_project': translation_project,\n 'project': project,\n 'language': language,\n 'directory': translation_project.directory,\n }\n\n return admin_perms(request, translation_project.directory,\n 'translation_projects/admin/permissions.html', ctx)\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef browse(request, translation_project, dir_path, filename=None):\n project = translation_project.project\n language = translation_project.language\n\n directory = request.directory\n store = request.store\n is_admin = check_permission('administrate', request)\n\n ctx, cookie_data = get_sidebar_announcements_context(\n request,\n (project, language, translation_project, ),\n )\n\n ctx.update(get_browser_context(request))\n\n # TODO improve plugin logic\n if \"import_export\" in settings.INSTALLED_APPS:\n if not 
translation_project.is_terminology_project:\n if request.user.is_authenticated():\n if check_permission('translate', request):\n ctx.update(handle_upload_form(request, project))\n ctx.update({'display_download': True,\n 'has_sidebar': True})\n\n stats = request.resource_obj.get_stats()\n\n if store is None:\n table_fields = ['name', 'progress', 'total', 'need-translation',\n 'suggestions', 'critical', 'last-updated', 'activity']\n ctx.update({\n 'table': {\n 'id': 'tp',\n 'fields': table_fields,\n 'headings': get_table_headings(table_fields),\n 'items': get_children(directory),\n }\n })\n\n if 'virtualfolder' in settings.INSTALLED_APPS:\n vfolders = get_vfolders(directory, all_vfolders=is_admin)\n if len(vfolders) > 0:\n table_fields = ['name', 'priority', 'progress', 'total',\n 'need-translation', 'suggestions', 'critical',\n 'last-updated', 'activity']\n ctx.update({\n 'vfolders': {\n 'id': 'vfolders',\n 'fields': table_fields,\n 'headings': get_table_headings(table_fields),\n 'items': vfolders,\n },\n })\n\n # FIXME: set vfolders stats in the resource, don't inject them\n # here.\n stats['vfolders'] = {}\n\n for vfolder_treeitem in directory.vf_treeitems.iterator():\n if request.user.is_superuser or vfolder_treeitem.is_visible:\n stats['vfolders'][vfolder_treeitem.code] = \\\n vfolder_treeitem.get_stats(include_children=False)\n\n ctx.update({\n 'parent': get_parent(directory if store is None else store),\n 'translation_project': translation_project,\n 'project': project,\n 'language': language,\n 'stats': jsonify(stats),\n 'is_admin': is_admin,\n 'is_store': store is not None,\n\n 'browser_extends': 'translation_projects/base.html',\n })\n\n response = render(request, 'browser/index.html', ctx)\n\n if cookie_data:\n response.set_cookie(SIDEBAR_COOKIE_NAME, cookie_data)\n\n return response\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef translate(request, translation_project, dir_path, filename):\n project = translation_project.project\n\n ctx = get_translation_context(request)\n\n ctx.update({\n 'language': translation_project.language,\n 'project': project,\n 'translation_project': translation_project,\n\n 'editor_extends': 'translation_projects/base.html',\n })\n\n return render(request, \"editor/main.html\", ctx)\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef export_view(request, translation_project, dir_path, filename=None):\n \"\"\"Displays a list of units with filters applied.\"\"\"\n ctx = get_export_view_context(request)\n ctx.update({\n 'source_language': translation_project.project.source_language,\n 'language': translation_project.language,\n 'project': translation_project.project,\n })\n\n return render(request, 'editor/export_view.html', ctx)\n", "path": "pootle/apps/pootle_translationproject/views.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.conf import settings\nfrom django.shortcuts import render\n\nfrom import_export.views import handle_upload_form\nfrom pootle.core.browser import (get_children, get_table_headings, get_parent,\n get_vfolders)\nfrom pootle.core.decorators import (get_path_obj, get_resource,\n permission_required)\nfrom pootle.core.helpers import (get_export_view_context, get_browser_context,\n get_sidebar_announcements_context,\n get_translation_context, SIDEBAR_COOKIE_NAME)\nfrom pootle.core.utils.json import jsonify\nfrom pootle_app.models.permissions import check_permission\nfrom pootle_app.views.admin.permissions import admin_permissions as admin_perms\n\n\n@get_path_obj\n@permission_required('administrate')\ndef admin_permissions(request, translation_project):\n language = translation_project.language\n project = translation_project.project\n\n ctx = {\n 'page': 'admin-permissions',\n\n 'translation_project': translation_project,\n 'project': project,\n 'language': language,\n 'directory': translation_project.directory,\n }\n\n return admin_perms(request, translation_project.directory,\n 'translation_projects/admin/permissions.html', ctx)\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef browse(request, translation_project, dir_path, filename=None):\n project = translation_project.project\n language = translation_project.language\n\n directory = request.directory\n store = request.store\n is_admin = check_permission('administrate', request)\n\n ctx, cookie_data = get_sidebar_announcements_context(\n request,\n (project, language, translation_project, ),\n )\n\n ctx.update(get_browser_context(request))\n\n # TODO improve plugin logic\n if \"import_export\" in settings.INSTALLED_APPS:\n if request.user.is_authenticated():\n if check_permission('translate', request):\n ctx.update(handle_upload_form(request, project))\n ctx.update({'display_download': True,\n 'has_sidebar': True})\n\n stats = request.resource_obj.get_stats()\n\n if store is None:\n table_fields = ['name', 'progress', 'total', 'need-translation',\n 'suggestions', 'critical', 'last-updated', 'activity']\n ctx.update({\n 'table': {\n 'id': 'tp',\n 'fields': table_fields,\n 'headings': get_table_headings(table_fields),\n 'items': get_children(directory),\n }\n })\n\n if 'virtualfolder' in settings.INSTALLED_APPS:\n vfolders = get_vfolders(directory, all_vfolders=is_admin)\n if len(vfolders) > 0:\n table_fields = ['name', 'priority', 'progress', 'total',\n 'need-translation', 'suggestions', 'critical',\n 'last-updated', 'activity']\n ctx.update({\n 'vfolders': {\n 'id': 'vfolders',\n 'fields': table_fields,\n 'headings': get_table_headings(table_fields),\n 'items': vfolders,\n },\n })\n\n # FIXME: set vfolders stats in the resource, don't inject them\n # here.\n stats['vfolders'] = {}\n\n for vfolder_treeitem in directory.vf_treeitems.iterator():\n if request.user.is_superuser or vfolder_treeitem.is_visible:\n stats['vfolders'][vfolder_treeitem.code] = \\\n vfolder_treeitem.get_stats(include_children=False)\n\n ctx.update({\n 'parent': get_parent(directory if store is None else store),\n 'translation_project': translation_project,\n 'project': project,\n 'language': language,\n 'stats': jsonify(stats),\n 'is_admin': is_admin,\n 'is_store': store is not None,\n\n 'browser_extends': 'translation_projects/base.html',\n })\n\n response = render(request, 'browser/index.html', ctx)\n\n if cookie_data:\n 
response.set_cookie(SIDEBAR_COOKIE_NAME, cookie_data)\n\n return response\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef translate(request, translation_project, dir_path, filename):\n project = translation_project.project\n\n ctx = get_translation_context(request)\n\n ctx.update({\n 'language': translation_project.language,\n 'project': project,\n 'translation_project': translation_project,\n\n 'editor_extends': 'translation_projects/base.html',\n })\n\n return render(request, \"editor/main.html\", ctx)\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef export_view(request, translation_project, dir_path, filename=None):\n \"\"\"Displays a list of units with filters applied.\"\"\"\n ctx = get_export_view_context(request)\n ctx.update({\n 'source_language': translation_project.project.source_language,\n 'language': translation_project.language,\n 'project': translation_project.project,\n })\n\n return render(request, 'editor/export_view.html', ctx)\n", "path": "pootle/apps/pootle_translationproject/views.py"}]} | 1,898 | 207 |
gh_patches_debug_34415 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-639 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean library/frameworks from backtraces
We should clean all non-user code from backtraces as much as possible. Here's a heroku app running 2.13.0:
<img width="890" alt="Screenshot 2020-04-13 14 39 56" src="https://user-images.githubusercontent.com/102774/79160522-26a36e80-7d97-11ea-8376-f8dfcfd6ece0.png">
--- END ISSUE ---
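As an illustration only (not taken from the repository), a minimal sketch of the filtering idea — assuming Python 3.5+ and that the `purelib`/`platlib` directories reported by `sysconfig` identify installed third-party code — could look like this:
```python
import sysconfig
import traceback

def user_frames(limit=50):
    """Yield stack frames, skipping frames that live in installed libraries."""
    paths = sysconfig.get_paths()
    library_paths = (paths["purelib"], paths["platlib"])
    for frame in reversed(traceback.extract_stack(limit=limit)):
        # FrameSummary.filename is the absolute path of the frame's source file.
        if not frame.filename.startswith(library_paths):
            yield {"file": frame.filename, "line": frame.lineno, "function": frame.name}
```
A real fix would also have to cap the number of returned frames and keep Python 2 support, which this sketch ignores.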
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/core/backtrace.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import sys
5 import traceback
6
7 # Maximum non-Scout frames to target retrieving
8 LIMIT = 50
9 # How many upper frames from inside Scout to ignore
10 IGNORED = 1
11
12
13 if sys.version_info >= (3, 5):
14
15 def capture():
16 return [
17 {"file": frame.filename, "line": frame.lineno, "function": frame.name}
18 for frame in reversed(
19 traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]
20 )
21 ]
22
23
24 else:
25
26 def capture():
27 return [
28 {"file": frame[0], "line": frame[1], "function": frame[3]}
29 for frame in reversed(
30 traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]
31 )
32 ]
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/core/backtrace.py b/src/scout_apm/core/backtrace.py
--- a/src/scout_apm/core/backtrace.py
+++ b/src/scout_apm/core/backtrace.py
@@ -1,7 +1,9 @@
# coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
+import itertools
import sys
+import sysconfig
import traceback
# Maximum non-Scout frames to target retrieving
@@ -10,23 +12,55 @@
IGNORED = 1
+def filter_frames(frames):
+ """Filter the stack trace frames down to non-library code."""
+ paths = sysconfig.get_paths()
+ library_paths = {paths["purelib"], paths["platlib"]}
+ for frame in frames:
+ if not any(frame["file"].startswith(exclusion) for exclusion in library_paths):
+ yield frame
+
+
if sys.version_info >= (3, 5):
+ def frame_walker():
+ """Iterate over each frame of the stack.
+
+ Taken from python3/traceback.ExtractSummary.extract to support
+ iterating over the entire stack, but without creating a large
+ data structure.
+ """
+ for frame, lineno in traceback.walk_stack(sys._getframe().f_back):
+ co = frame.f_code
+ filename = co.co_filename
+ name = co.co_name
+ yield {"file": filename, "line": lineno, "function": name}
+
def capture():
- return [
- {"file": frame.filename, "line": frame.lineno, "function": frame.name}
- for frame in reversed(
- traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]
- )
- ]
+ return list(itertools.islice(filter_frames(frame_walker()), LIMIT))
else:
+ def frame_walker():
+ """Iterate over each frame of the stack.
+
+ Taken from python2.7/traceback.extract_stack to support iterating
+ over the entire stack, but without creating a large data structure.
+ """
+ try:
+ raise ZeroDivisionError
+ except ZeroDivisionError:
+ # Get the current frame
+ f = sys.exc_info()[2].tb_frame.f_back
+
+ while f is not None:
+ lineno = f.f_lineno
+ co = f.f_code
+ filename = co.co_filename
+ name = co.co_name
+ yield {"file": filename, "line": lineno, "function": name}
+ f = f.f_back
+
def capture():
- return [
- {"file": frame[0], "line": frame[1], "function": frame[3]}
- for frame in reversed(
- traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]
- )
- ]
+ return list(itertools.islice(filter_frames(frame_walker()), LIMIT))
| {"golden_diff": "diff --git a/src/scout_apm/core/backtrace.py b/src/scout_apm/core/backtrace.py\n--- a/src/scout_apm/core/backtrace.py\n+++ b/src/scout_apm/core/backtrace.py\n@@ -1,7 +1,9 @@\n # coding=utf-8\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n+import itertools\n import sys\n+import sysconfig\n import traceback\n \n # Maximum non-Scout frames to target retrieving\n@@ -10,23 +12,55 @@\n IGNORED = 1\n \n \n+def filter_frames(frames):\n+ \"\"\"Filter the stack trace frames down to non-library code.\"\"\"\n+ paths = sysconfig.get_paths()\n+ library_paths = {paths[\"purelib\"], paths[\"platlib\"]}\n+ for frame in frames:\n+ if not any(frame[\"file\"].startswith(exclusion) for exclusion in library_paths):\n+ yield frame\n+\n+\n if sys.version_info >= (3, 5):\n \n+ def frame_walker():\n+ \"\"\"Iterate over each frame of the stack.\n+\n+ Taken from python3/traceback.ExtractSummary.extract to support\n+ iterating over the entire stack, but without creating a large\n+ data structure.\n+ \"\"\"\n+ for frame, lineno in traceback.walk_stack(sys._getframe().f_back):\n+ co = frame.f_code\n+ filename = co.co_filename\n+ name = co.co_name\n+ yield {\"file\": filename, \"line\": lineno, \"function\": name}\n+\n def capture():\n- return [\n- {\"file\": frame.filename, \"line\": frame.lineno, \"function\": frame.name}\n- for frame in reversed(\n- traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]\n- )\n- ]\n+ return list(itertools.islice(filter_frames(frame_walker()), LIMIT))\n \n \n else:\n \n+ def frame_walker():\n+ \"\"\"Iterate over each frame of the stack.\n+\n+ Taken from python2.7/traceback.extract_stack to support iterating\n+ over the entire stack, but without creating a large data structure.\n+ \"\"\"\n+ try:\n+ raise ZeroDivisionError\n+ except ZeroDivisionError:\n+ # Get the current frame\n+ f = sys.exc_info()[2].tb_frame.f_back\n+\n+ while f is not None:\n+ lineno = f.f_lineno\n+ co = f.f_code\n+ filename = co.co_filename\n+ name = co.co_name\n+ yield {\"file\": filename, \"line\": lineno, \"function\": name}\n+ f = f.f_back\n+\n def capture():\n- return [\n- {\"file\": frame[0], \"line\": frame[1], \"function\": frame[3]}\n- for frame in reversed(\n- traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]\n- )\n- ]\n+ return list(itertools.islice(filter_frames(frame_walker()), LIMIT))\n", "issue": "Clean library/frameworks from backtraces\nWe should clean all non-user code from backtraces as much as possible. 
Here's a heroku app running 2.13.0:\r\n\r\n<img width=\"890\" alt=\"Screenshot 2020-04-13 14 39 56\" src=\"https://user-images.githubusercontent.com/102774/79160522-26a36e80-7d97-11ea-8376-f8dfcfd6ece0.png\">\r\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport sys\nimport traceback\n\n# Maximum non-Scout frames to target retrieving\nLIMIT = 50\n# How many upper frames from inside Scout to ignore\nIGNORED = 1\n\n\nif sys.version_info >= (3, 5):\n\n def capture():\n return [\n {\"file\": frame.filename, \"line\": frame.lineno, \"function\": frame.name}\n for frame in reversed(\n traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]\n )\n ]\n\n\nelse:\n\n def capture():\n return [\n {\"file\": frame[0], \"line\": frame[1], \"function\": frame[3]}\n for frame in reversed(\n traceback.extract_stack(limit=LIMIT + IGNORED)[:-IGNORED]\n )\n ]\n", "path": "src/scout_apm/core/backtrace.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport itertools\nimport sys\nimport sysconfig\nimport traceback\n\n# Maximum non-Scout frames to target retrieving\nLIMIT = 50\n# How many upper frames from inside Scout to ignore\nIGNORED = 1\n\n\ndef filter_frames(frames):\n \"\"\"Filter the stack trace frames down to non-library code.\"\"\"\n paths = sysconfig.get_paths()\n library_paths = {paths[\"purelib\"], paths[\"platlib\"]}\n for frame in frames:\n if not any(frame[\"file\"].startswith(exclusion) for exclusion in library_paths):\n yield frame\n\n\nif sys.version_info >= (3, 5):\n\n def frame_walker():\n \"\"\"Iterate over each frame of the stack.\n\n Taken from python3/traceback.ExtractSummary.extract to support\n iterating over the entire stack, but without creating a large\n data structure.\n \"\"\"\n for frame, lineno in traceback.walk_stack(sys._getframe().f_back):\n co = frame.f_code\n filename = co.co_filename\n name = co.co_name\n yield {\"file\": filename, \"line\": lineno, \"function\": name}\n\n def capture():\n return list(itertools.islice(filter_frames(frame_walker()), LIMIT))\n\n\nelse:\n\n def frame_walker():\n \"\"\"Iterate over each frame of the stack.\n\n Taken from python2.7/traceback.extract_stack to support iterating\n over the entire stack, but without creating a large data structure.\n \"\"\"\n try:\n raise ZeroDivisionError\n except ZeroDivisionError:\n # Get the current frame\n f = sys.exc_info()[2].tb_frame.f_back\n\n while f is not None:\n lineno = f.f_lineno\n co = f.f_code\n filename = co.co_filename\n name = co.co_name\n yield {\"file\": filename, \"line\": lineno, \"function\": name}\n f = f.f_back\n\n def capture():\n return list(itertools.islice(filter_frames(frame_walker()), LIMIT))\n", "path": "src/scout_apm/core/backtrace.py"}]} | 640 | 654 |
gh_patches_debug_571 | rasdani/github-patches | git_diff | Uberspace__lab-28 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change project name to lab in config
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Uberspace 7 lab documentation build configuration file, created by
5 # sphinx-quickstart on Tue Feb 13 12:19:29 2018.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 # import os
21 # import sys
22 # sys.path.insert(0, os.path.abspath('.'))
23
24 import sphinx_rtd_theme
25
26 # -- General configuration ------------------------------------------------
27
28 # If your documentation needs a minimal Sphinx version, state it here.
29 #
30 # needs_sphinx = '1.0'
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = []
36
37 # Add any paths that contain templates here, relative to this directory.
38 templates_path = ['_templates']
39
40 # The suffix(es) of source filenames.
41 # You can specify multiple suffix as a list of string:
42 #
43 # source_suffix = ['.rst', '.md']
44 source_suffix = '.rst'
45
46 # The master toctree document.
47 master_doc = 'index'
48
49 # General information about the project.
50 project = 'Uberspace 7 Lab'
51 copyright = '2018, uberspace.de'
52 author = 'uberspace.de'
53
54 # The version info for the project you're documenting, acts as replacement for
55 # |version| and |release|, also used in various other places throughout the
56 # built documents.
57 #
58 # The short X.Y version.
59 release = version = '7'
60
61 # The language for content autogenerated by Sphinx. Refer to documentation
62 # for a list of supported languages.
63 #
64 # This is also used if you do content translation via gettext catalogs.
65 # Usually you set "language" from the command line for these cases.
66 language = None
67
68 # List of patterns, relative to source directory, that match files and
69 # directories to ignore when looking for source files.
70 # This patterns also effect to html_static_path and html_extra_path
71 exclude_patterns = []
72
73 # The name of the Pygments (syntax highlighting) style to use.
74 pygments_style = 'sphinx'
75
76 # If true, `todo` and `todoList` produce output, else they produce nothing.
77 todo_include_todos = False
78
79
80 # -- Options for HTML output ----------------------------------------------
81
82 html_theme = 'sphinx_rtd_theme'
83 html_theme_options = {
84 'display_version': False,
85 'navigation_depth': 2,
86 'collapse_navigation': True
87 }
88 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
89 html_last_updated_fmt = '%b %d, %Y'
90 html_context = {
91 'css_files': ['_static/css/custom.css'],
92 'display_github': True,
93 'github_user': 'Uberspace',
94 'github_repo': 'lab',
95 'github_version': 'master',
96 'conf_py_path': '/source/'
97 }
98 html_show_copyright = False
99 html_favicon = '_static/favicon.ico'
100
101
102 # Theme options are theme-specific and customize the look and feel of a theme
103 # further. For a list of options available for each theme, see the
104 # documentation.
105 #
106 # html_theme_options = {}
107
108 # Add any paths that contain custom static files (such as style sheets) here,
109 # relative to this directory. They are copied after the builtin static files,
110 # so a file named "default.css" will overwrite the builtin "default.css".
111 html_static_path = ['_static']
112
113 # Custom sidebar templates, must be a dictionary that maps document names
114 # to template names.
115 #
116 # This is required for the alabaster theme
117 # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
118 html_sidebars = {
119 '**': [
120 'relations.html', # needs 'show_related': True theme option to display
121 'searchbox.html',
122 ]
123 }
124
125
126 # -- Options for HTMLHelp output ------------------------------------------
127
128 # Output file base name for HTML help builder.
129 htmlhelp_basename = 'Uberspace7labdoc'
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source/conf.py b/source/conf.py
--- a/source/conf.py
+++ b/source/conf.py
@@ -47,7 +47,7 @@
master_doc = 'index'
# General information about the project.
-project = 'Uberspace 7 Lab'
+project = 'UberLab'
copyright = '2018, uberspace.de'
author = 'uberspace.de'
| {"golden_diff": "diff --git a/source/conf.py b/source/conf.py\n--- a/source/conf.py\n+++ b/source/conf.py\n@@ -47,7 +47,7 @@\n master_doc = 'index'\n \n # General information about the project.\n-project = 'Uberspace 7 Lab'\n+project = 'UberLab'\n copyright = '2018, uberspace.de'\n author = 'uberspace.de'\n", "issue": "Change project name to lab in config\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Uberspace 7 lab documentation build configuration file, created by\n# sphinx-quickstart on Tue Feb 13 12:19:29 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport sphinx_rtd_theme\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Uberspace 7 Lab'\ncopyright = '2018, uberspace.de'\nauthor = 'uberspace.de'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nrelease = version = '7'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_options = {\n 'display_version': False,\n 'navigation_depth': 2,\n 'collapse_navigation': True\n}\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_last_updated_fmt = '%b %d, %Y'\nhtml_context = {\n 'css_files': ['_static/css/custom.css'],\n 'display_github': True,\n 'github_user': 'Uberspace', \n 'github_repo': 'lab', \n 'github_version': 'master',\n 'conf_py_path': '/source/'\n}\nhtml_show_copyright = False\nhtml_favicon = '_static/favicon.ico'\n\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n '**': [\n 'relations.html', # needs 'show_related': True theme option to display\n 'searchbox.html',\n ]\n}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Uberspace7labdoc'\n", "path": "source/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Uberspace 7 lab documentation build configuration file, created by\n# sphinx-quickstart on Tue Feb 13 12:19:29 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport sphinx_rtd_theme\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'UberLab'\ncopyright = '2018, uberspace.de'\nauthor = 'uberspace.de'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nrelease = version = '7'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_options = {\n 'display_version': False,\n 'navigation_depth': 2,\n 'collapse_navigation': True\n}\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_last_updated_fmt = '%b %d, %Y'\nhtml_context = {\n 'css_files': ['_static/css/custom.css'],\n 'display_github': True,\n 'github_user': 'Uberspace', \n 'github_repo': 'lab', \n 'github_version': 'master',\n 'conf_py_path': '/source/'\n}\nhtml_show_copyright = False\nhtml_favicon = '_static/favicon.ico'\n\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n '**': [\n 'relations.html', # needs 'show_related': True theme option to display\n 'searchbox.html',\n ]\n}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Uberspace7labdoc'\n", "path": "source/conf.py"}]} | 1,524 | 88 |
gh_patches_debug_5357 | rasdani/github-patches | git_diff | beeware__toga-1011 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not specifying an `on_select` callback for Table leads to a TypeError
**Describe the bug**
If you leave out an `on_select` callable on `Table` and click a row, you get a traceback in the terminal:
```python
Traceback (most recent call last):
File "C:\\Users\\brcan\\Desktop\\snippets\\.venv\\lib\\site-packages\\toga_winforms\\widgets\\table.py", line 74, in _native_item_selection_changed
self.interface.on_select(self.interface, row=self.interface.data[e.ItemIndex])
TypeError: 'NoneType' object is not callable
```
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://toga.readthedocs.io/en/latest/tutorial/tutorial-2.html
2. Run the example
3. Click on any row of the table
4. See error
**Expected behavior**
No error.
**Environment:**
- Operating System: Windows 10
- Python version: CPython 3.8.3
- Software versions:
- Briefcase:
- Toga: 0.3.0dev22
- ...
--- END ISSUE ---
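Illustration only (not from the Toga source): the usual way to avoid this is to treat a missing handler as a no-op, e.g. a hypothetical guard along these lines:
```python
def fire_handler(handler, *args, **kwargs):
    """Call an optional callback, doing nothing when it was never set."""
    if handler is not None:
        return handler(*args, **kwargs)

# e.g. instead of: interface.on_select(interface, row=row)
# use:             fire_handler(interface.on_select, interface, row=row)
```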
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/winforms/toga_winforms/widgets/table.py`
Content:
```
1 from travertino.size import at_least
2
3 from toga_winforms.libs import WinForms
4
5 from .base import Widget
6
7
8 class Table(Widget):
9 def create(self):
10 self._container = self
11 self.native = WinForms.ListView()
12 self.native.View = WinForms.View.Details
13 self._cache = []
14 self._first_item = 0
15
16 dataColumn = []
17 for i, (heading, accessor) in enumerate(zip(
18 self.interface.headings,
19 self.interface._accessors
20 )):
21 dataColumn.append(self._create_column(heading, accessor))
22
23 self.native.FullRowSelect = True
24 self.native.MultiSelect = self.interface.multiple_select
25 self.native.DoubleBuffered = True
26 self.native.VirtualMode = True
27 self.native.Columns.AddRange(dataColumn)
28
29 self.native.ItemSelectionChanged += self._native_item_selection_changed
30 self.native.RetrieveVirtualItem += self._native_retrieve_virtual_item
31 self.native.CacheVirtualItems += self._native_cache_virtual_items
32 self.native.VirtualItemsSelectionRangeChanged += self._native_virtual_item_selection_range_changed
33
34 def _native_virtual_item_selection_range_changed(self, sender, e):
35 # update selection interface property
36 self.interface._selection = self._selected_rows()
37
38 # `Shift` key or Range selection handler
39 if e.IsSelected and self.interface.multiple_select and self.interface.on_select:
40 # call on select with the last row of the multi selection
41 selected = self.interface.data[e.EndIndex]
42 self.interface.on_select(self.interface, row=selected)
43
44 def _native_retrieve_virtual_item(self, sender, e):
45 # Because ListView is in VirtualMode, it's necessary implement
46 # VirtualItemsSelectionRangeChanged event to create ListViewItem when it's needed
47 if self._cache and e.ItemIndex >= self._first_item and \
48 e.ItemIndex < self._first_item + len(self._cache):
49 e.Item = self._cache[e.ItemIndex - self._first_item]
50 else:
51 e.Item = WinForms.ListViewItem(self.row_data(self.interface.data[e.ItemIndex]))
52
53 def _native_cache_virtual_items(self, sender, e):
54 if self._cache and e.StartIndex >= self._first_item and \
55 e.EndIndex <= self._first_item + len(self._cache):
56 # If the newly requested cache is a subset of the old cache,
57 # no need to rebuild everything, so do nothing
58 return
59
60 # Now we need to rebuild the cache.
61 self._first_item = e.StartIndex
62 new_length = e.EndIndex - e.StartIndex + 1
63 self._cache = []
64
65 # Fill the cache with the appropriate ListViewItems.
66 for i in range(new_length):
67 self._cache.append(WinForms.ListViewItem(self.row_data(self.interface.data[i])))
68
69 def _native_item_selection_changed(self, sender, e):
70 # update selection interface property
71 self.interface._selection = self._selected_rows()
72
73 if e.IsSelected:
74 self.interface.on_select(self.interface, row=self.interface.data[e.ItemIndex])
75
76 def _selected_rows(self):
77 if not self.native.SelectedIndices.Count:
78 return None
79
80 if self.interface.multiple_select:
81 selected = [row for i, row in enumerate(self.interface.data) if i in self.native.SelectedIndices]
82 return selected
83 else:
84 return self.interface.data[self.native.SelectedIndices[0]]
85
86 def _create_column(self, heading, accessor):
87 col = WinForms.ColumnHeader()
88 col.Text = heading
89 col.Name = accessor
90 return col
91
92 def change_source(self, source):
93 self.update_data()
94
95 def row_data(self, item):
96 # TODO: Winforms can't support icons in tree cells; so, if the data source
97 # specifies an icon, strip it when converting to row data.
98 def strip_icon(item, attr):
99 val = getattr(item, attr, self.interface.missing_value)
100
101 if isinstance(val, tuple):
102 return str(val[1])
103 return str(val)
104
105 return [
106 strip_icon(item, attr)
107 for attr in self.interface._accessors
108 ]
109
110 def update_data(self):
111 self.native.VirtualListSize = len(self.interface.data)
112 self._cache = []
113
114 def insert(self, index, item):
115 self.update_data()
116
117 def change(self, item):
118 self.interface.factory.not_implemented('Table.change()')
119
120 def remove(self, item):
121 self.update_data()
122
123 def clear(self):
124 self.update_data()
125
126 def set_on_select(self, handler):
127 pass
128
129 def scroll_to_row(self, row):
130 self.native.EnsureVisible(row)
131
132 def rehint(self):
133 self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)
134 self.interface.intrinsic.height = at_least(self.interface.MIN_HEIGHT)
135
136 def remove_column(self, accessor):
137 self.native.Columns.RemoveByKey(accessor)
138
139 def add_column(self, heading, accessor):
140 self.native.Columns.Add(self._create_column(heading, accessor))
141 self.update_data()
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/winforms/toga_winforms/widgets/table.py b/src/winforms/toga_winforms/widgets/table.py
--- a/src/winforms/toga_winforms/widgets/table.py
+++ b/src/winforms/toga_winforms/widgets/table.py
@@ -70,7 +70,7 @@
# update selection interface property
self.interface._selection = self._selected_rows()
- if e.IsSelected:
+ if e.IsSelected and self.interface.on_select:
self.interface.on_select(self.interface, row=self.interface.data[e.ItemIndex])
def _selected_rows(self):
| {"golden_diff": "diff --git a/src/winforms/toga_winforms/widgets/table.py b/src/winforms/toga_winforms/widgets/table.py\n--- a/src/winforms/toga_winforms/widgets/table.py\n+++ b/src/winforms/toga_winforms/widgets/table.py\n@@ -70,7 +70,7 @@\n # update selection interface property\n self.interface._selection = self._selected_rows()\n \n- if e.IsSelected:\n+ if e.IsSelected and self.interface.on_select:\n self.interface.on_select(self.interface, row=self.interface.data[e.ItemIndex])\n \n def _selected_rows(self):\n", "issue": "Not specifying an `on_select` callback for Table leads to a TypeError\n**Describe the bug**\r\nIf you leave out a `on_select` callable on `Table` and click a row, you get a traceback in the terminal:\r\n```python\r\nTraceback (most recent call last):\r\n File \"C:\\\\Users\\\\brcan\\\\Desktop\\\\snippets\\\\.venv\\\\lib\\\\site-packages\\\\toga_winforms\\\\widgets\\\\table.py\", line 74, in _native_item_selection_changed\r\n self.interface.on_select(self.interface, row=self.interface.data[e.ItemIndex])\r\nTypeError : 'NoneType' object is not callable\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to https://toga.readthedocs.io/en/latest/tutorial/tutorial-2.html\r\n2. Run the example\r\n3. Click on any row of the table\r\n4. See error\r\n\r\n**Expected behavior**\r\nNo error.\r\n\r\n**Environment:**\r\n - Operating System: Windows 10\r\n - Python version: CPython 3.8.3\r\n - Software versions:\r\n - Briefcase:\r\n - Toga: 0.3.0dev22\r\n - ...\r\n\n", "before_files": [{"content": "from travertino.size import at_least\n\nfrom toga_winforms.libs import WinForms\n\nfrom .base import Widget\n\n\nclass Table(Widget):\n def create(self):\n self._container = self\n self.native = WinForms.ListView()\n self.native.View = WinForms.View.Details\n self._cache = []\n self._first_item = 0\n\n dataColumn = []\n for i, (heading, accessor) in enumerate(zip(\n self.interface.headings,\n self.interface._accessors\n )):\n dataColumn.append(self._create_column(heading, accessor))\n\n self.native.FullRowSelect = True\n self.native.MultiSelect = self.interface.multiple_select\n self.native.DoubleBuffered = True\n self.native.VirtualMode = True\n self.native.Columns.AddRange(dataColumn)\n\n self.native.ItemSelectionChanged += self._native_item_selection_changed\n self.native.RetrieveVirtualItem += self._native_retrieve_virtual_item\n self.native.CacheVirtualItems += self._native_cache_virtual_items\n self.native.VirtualItemsSelectionRangeChanged += self._native_virtual_item_selection_range_changed\n\n def _native_virtual_item_selection_range_changed(self, sender, e):\n # update selection interface property\n self.interface._selection = self._selected_rows()\n\n # `Shift` key or Range selection handler\n if e.IsSelected and self.interface.multiple_select and self.interface.on_select:\n # call on select with the last row of the multi selection\n selected = self.interface.data[e.EndIndex]\n self.interface.on_select(self.interface, row=selected)\n\n def _native_retrieve_virtual_item(self, sender, e):\n # Because ListView is in VirtualMode, it's necessary implement\n # VirtualItemsSelectionRangeChanged event to create ListViewItem when it's needed\n if self._cache and e.ItemIndex >= self._first_item and \\\n e.ItemIndex < self._first_item + len(self._cache):\n e.Item = self._cache[e.ItemIndex - self._first_item]\n else:\n e.Item = WinForms.ListViewItem(self.row_data(self.interface.data[e.ItemIndex]))\n\n def _native_cache_virtual_items(self, sender, e):\n if 
self._cache and e.StartIndex >= self._first_item and \\\n e.EndIndex <= self._first_item + len(self._cache):\n # If the newly requested cache is a subset of the old cache,\n # no need to rebuild everything, so do nothing\n return\n\n # Now we need to rebuild the cache.\n self._first_item = e.StartIndex\n new_length = e.EndIndex - e.StartIndex + 1\n self._cache = []\n\n # Fill the cache with the appropriate ListViewItems.\n for i in range(new_length):\n self._cache.append(WinForms.ListViewItem(self.row_data(self.interface.data[i])))\n\n def _native_item_selection_changed(self, sender, e):\n # update selection interface property\n self.interface._selection = self._selected_rows()\n\n if e.IsSelected:\n self.interface.on_select(self.interface, row=self.interface.data[e.ItemIndex])\n\n def _selected_rows(self):\n if not self.native.SelectedIndices.Count:\n return None\n\n if self.interface.multiple_select:\n selected = [row for i, row in enumerate(self.interface.data) if i in self.native.SelectedIndices]\n return selected\n else:\n return self.interface.data[self.native.SelectedIndices[0]]\n\n def _create_column(self, heading, accessor):\n col = WinForms.ColumnHeader()\n col.Text = heading\n col.Name = accessor\n return col\n\n def change_source(self, source):\n self.update_data()\n\n def row_data(self, item):\n # TODO: Winforms can't support icons in tree cells; so, if the data source\n # specifies an icon, strip it when converting to row data.\n def strip_icon(item, attr):\n val = getattr(item, attr, self.interface.missing_value)\n\n if isinstance(val, tuple):\n return str(val[1])\n return str(val)\n\n return [\n strip_icon(item, attr)\n for attr in self.interface._accessors\n ]\n\n def update_data(self):\n self.native.VirtualListSize = len(self.interface.data)\n self._cache = []\n\n def insert(self, index, item):\n self.update_data()\n\n def change(self, item):\n self.interface.factory.not_implemented('Table.change()')\n\n def remove(self, item):\n self.update_data()\n\n def clear(self):\n self.update_data()\n\n def set_on_select(self, handler):\n pass\n\n def scroll_to_row(self, row):\n self.native.EnsureVisible(row)\n\n def rehint(self):\n self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)\n self.interface.intrinsic.height = at_least(self.interface.MIN_HEIGHT)\n\n def remove_column(self, accessor):\n self.native.Columns.RemoveByKey(accessor)\n\n def add_column(self, heading, accessor):\n self.native.Columns.Add(self._create_column(heading, accessor))\n self.update_data()\n", "path": "src/winforms/toga_winforms/widgets/table.py"}], "after_files": [{"content": "from travertino.size import at_least\n\nfrom toga_winforms.libs import WinForms\n\nfrom .base import Widget\n\n\nclass Table(Widget):\n def create(self):\n self._container = self\n self.native = WinForms.ListView()\n self.native.View = WinForms.View.Details\n self._cache = []\n self._first_item = 0\n\n dataColumn = []\n for i, (heading, accessor) in enumerate(zip(\n self.interface.headings,\n self.interface._accessors\n )):\n dataColumn.append(self._create_column(heading, accessor))\n\n self.native.FullRowSelect = True\n self.native.MultiSelect = self.interface.multiple_select\n self.native.DoubleBuffered = True\n self.native.VirtualMode = True\n self.native.Columns.AddRange(dataColumn)\n\n self.native.ItemSelectionChanged += self._native_item_selection_changed\n self.native.RetrieveVirtualItem += self._native_retrieve_virtual_item\n self.native.CacheVirtualItems += self._native_cache_virtual_items\n 
self.native.VirtualItemsSelectionRangeChanged += self._native_virtual_item_selection_range_changed\n\n def _native_virtual_item_selection_range_changed(self, sender, e):\n # update selection interface property\n self.interface._selection = self._selected_rows()\n\n # `Shift` key or Range selection handler\n if e.IsSelected and self.interface.multiple_select and self.interface.on_select:\n # call on select with the last row of the multi selection\n selected = self.interface.data[e.EndIndex]\n self.interface.on_select(self.interface, row=selected)\n\n def _native_retrieve_virtual_item(self, sender, e):\n # Because ListView is in VirtualMode, it's necessary implement\n # VirtualItemsSelectionRangeChanged event to create ListViewItem when it's needed\n if self._cache and e.ItemIndex >= self._first_item and \\\n e.ItemIndex < self._first_item + len(self._cache):\n e.Item = self._cache[e.ItemIndex - self._first_item]\n else:\n e.Item = WinForms.ListViewItem(self.row_data(self.interface.data[e.ItemIndex]))\n\n def _native_cache_virtual_items(self, sender, e):\n if self._cache and e.StartIndex >= self._first_item and \\\n e.EndIndex <= self._first_item + len(self._cache):\n # If the newly requested cache is a subset of the old cache,\n # no need to rebuild everything, so do nothing\n return\n\n # Now we need to rebuild the cache.\n self._first_item = e.StartIndex\n new_length = e.EndIndex - e.StartIndex + 1\n self._cache = []\n\n # Fill the cache with the appropriate ListViewItems.\n for i in range(new_length):\n self._cache.append(WinForms.ListViewItem(self.row_data(self.interface.data[i])))\n\n def _native_item_selection_changed(self, sender, e):\n # update selection interface property\n self.interface._selection = self._selected_rows()\n\n if e.IsSelected and self.interface.on_select:\n self.interface.on_select(self.interface, row=self.interface.data[e.ItemIndex])\n\n def _selected_rows(self):\n if not self.native.SelectedIndices.Count:\n return None\n\n if self.interface.multiple_select:\n selected = [row for i, row in enumerate(self.interface.data) if i in self.native.SelectedIndices]\n return selected\n else:\n return self.interface.data[self.native.SelectedIndices[0]]\n\n def _create_column(self, heading, accessor):\n col = WinForms.ColumnHeader()\n col.Text = heading\n col.Name = accessor\n return col\n\n def change_source(self, source):\n self.update_data()\n\n def row_data(self, item):\n # TODO: Winforms can't support icons in tree cells; so, if the data source\n # specifies an icon, strip it when converting to row data.\n def strip_icon(item, attr):\n val = getattr(item, attr, self.interface.missing_value)\n\n if isinstance(val, tuple):\n return str(val[1])\n return str(val)\n\n return [\n strip_icon(item, attr)\n for attr in self.interface._accessors\n ]\n\n def update_data(self):\n self.native.VirtualListSize = len(self.interface.data)\n self._cache = []\n\n def insert(self, index, item):\n self.update_data()\n\n def change(self, item):\n self.interface.factory.not_implemented('Table.change()')\n\n def remove(self, item):\n self.update_data()\n\n def clear(self):\n self.update_data()\n\n def set_on_select(self, handler):\n pass\n\n def scroll_to_row(self, row):\n self.native.EnsureVisible(row)\n\n def rehint(self):\n self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)\n self.interface.intrinsic.height = at_least(self.interface.MIN_HEIGHT)\n\n def remove_column(self, accessor):\n self.native.Columns.RemoveByKey(accessor)\n\n def add_column(self, heading, accessor):\n 
self.native.Columns.Add(self._create_column(heading, accessor))\n self.update_data()\n", "path": "src/winforms/toga_winforms/widgets/table.py"}]} | 1,915 | 126 |
gh_patches_debug_1093 | rasdani/github-patches | git_diff | comic__grand-challenge.org-755 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handle NoneType comparison in _scores_to_ranks
```
TypeError: '<' not supported between instances of 'NoneType' and 'float'
```
--- END ISSUE ---
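For illustration only (hypothetical values, not from the codebase): the error appears as soon as `sorted` has to compare a missing score with a number, so missing scores need to be filtered out (or otherwise handled) before ranking:
```python
scores = {"a": 0.9, "b": None, "c": 0.7}

# sorted(scores.items(), key=lambda t: t[1])
# -> TypeError: '<' not supported between instances of 'NoneType' and 'float'

valid = {pk: s for pk, s in scores.items() if s is not None}
ranked = sorted(valid.items(), key=lambda t: t[1], reverse=True)
print(ranked)  # [('a', 0.9), ('c', 0.7)]
```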
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/evaluation/utils.py`
Content:
```
1 from collections import OrderedDict
2 from typing import Tuple, NamedTuple, List, Callable, Iterable, Dict
3
4 from grandchallenge.evaluation.models import Result
5 from grandchallenge.evaluation.templatetags.evaluation_extras import (
6 get_jsonpath
7 )
8
9
10 class Metric(NamedTuple):
11 path: str
12 reverse: bool
13
14
15 class Positions(NamedTuple):
16 ranks: Dict[str, float]
17 rank_scores: Dict[str, float]
18 rank_per_metric: Dict[str, Dict[str, float]]
19
20
21 def rank_results(
22 *,
23 results: Tuple[Result, ...],
24 metrics: Tuple[Metric, ...],
25 score_method: Callable,
26 ) -> Positions:
27 """
28 Calculates the overall rank for each result, along with the rank_score
29 and the rank per metric.
30 """
31
32 results = _filter_valid_results(results=results, metrics=metrics)
33
34 rank_per_metric = _get_rank_per_metric(results=results, metrics=metrics)
35
36 rank_scores = {
37 pk: score_method([m for m in metrics.values()])
38 for pk, metrics in rank_per_metric.items()
39 }
40
41 return Positions(
42 ranks=_scores_to_ranks(scores=rank_scores, reverse=False),
43 rank_scores=rank_scores,
44 rank_per_metric=rank_per_metric,
45 )
46
47
48 def _filter_valid_results(
49 *, results: Iterable[Result], metrics: Tuple[Metric, ...]
50 ) -> List[Result]:
51 """ Ensure that all of the metrics are in every result """
52 return [
53 res
54 for res in results
55 if all(get_jsonpath(res.metrics, m.path) != "" for m in metrics)
56 ]
57
58
59 def _get_rank_per_metric(
60 *, results: Iterable[Result], metrics: Tuple[Metric, ...]
61 ) -> Dict[str, Dict[str, float]]:
62 """
63 Takes results and calculates the rank for each of the individual metrics
64
65 Returns a dictionary where the key is the pk of the result, and the
66 values is another dictionary where the key is the path of the metric and
67 the value is the rank of this result for this metric
68 """
69 metric_rank = {}
70 for metric in metrics:
71 # Extract the value of the metric for this primary key and sort on the
72 # value of the metric
73 metric_scores = {
74 res.pk: get_jsonpath(res.metrics, metric.path) for res in results
75 }
76 metric_rank[metric.path] = _scores_to_ranks(
77 scores=metric_scores, reverse=metric.reverse
78 )
79
80 return {
81 res.pk: {
82 metric_path: ranks[res.pk]
83 for metric_path, ranks in metric_rank.items()
84 }
85 for res in results
86 }
87
88
89 def _scores_to_ranks(
90 *, scores: Dict, reverse: bool = False
91 ) -> Dict[str, float]:
92 """
93 Go from a score (a scalar) to a rank (integer). If two scalars are the
94 same then they will have the same rank.
95
96 Takes a dictionary where the keys are the pk of the results and the values
97 are the scores.
98
99 Outputs a dictionary where they keys are the pk of the results and the
100 values are the ranks.
101 """
102 scores = OrderedDict(
103 sorted(scores.items(), key=lambda t: t[1], reverse=reverse)
104 )
105
106 ranks = {}
107 current_score = current_rank = None
108
109 for idx, (pk, score) in enumerate(scores.items()):
110 if score != current_score:
111 current_score = score
112 current_rank = idx + 1
113
114 ranks[pk] = current_rank
115
116 return ranks
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/evaluation/utils.py b/app/grandchallenge/evaluation/utils.py
--- a/app/grandchallenge/evaluation/utils.py
+++ b/app/grandchallenge/evaluation/utils.py
@@ -52,7 +52,10 @@
return [
res
for res in results
- if all(get_jsonpath(res.metrics, m.path) != "" for m in metrics)
+ if all(
+ get_jsonpath(res.metrics, m.path) not in ["", None]
+ for m in metrics
+ )
]
| {"golden_diff": "diff --git a/app/grandchallenge/evaluation/utils.py b/app/grandchallenge/evaluation/utils.py\n--- a/app/grandchallenge/evaluation/utils.py\n+++ b/app/grandchallenge/evaluation/utils.py\n@@ -52,7 +52,10 @@\n return [\n res\n for res in results\n- if all(get_jsonpath(res.metrics, m.path) != \"\" for m in metrics)\n+ if all(\n+ get_jsonpath(res.metrics, m.path) not in [\"\", None]\n+ for m in metrics\n+ )\n ]\n", "issue": "Handle NoneType comparison in _scores_to_ranks\n```\r\nTypeError: '<' not supported between instances of 'NoneType' and 'float'\r\n```\n", "before_files": [{"content": "from collections import OrderedDict\nfrom typing import Tuple, NamedTuple, List, Callable, Iterable, Dict\n\nfrom grandchallenge.evaluation.models import Result\nfrom grandchallenge.evaluation.templatetags.evaluation_extras import (\n get_jsonpath\n)\n\n\nclass Metric(NamedTuple):\n path: str\n reverse: bool\n\n\nclass Positions(NamedTuple):\n ranks: Dict[str, float]\n rank_scores: Dict[str, float]\n rank_per_metric: Dict[str, Dict[str, float]]\n\n\ndef rank_results(\n *,\n results: Tuple[Result, ...],\n metrics: Tuple[Metric, ...],\n score_method: Callable,\n) -> Positions:\n \"\"\"\n Calculates the overall rank for each result, along with the rank_score\n and the rank per metric.\n \"\"\"\n\n results = _filter_valid_results(results=results, metrics=metrics)\n\n rank_per_metric = _get_rank_per_metric(results=results, metrics=metrics)\n\n rank_scores = {\n pk: score_method([m for m in metrics.values()])\n for pk, metrics in rank_per_metric.items()\n }\n\n return Positions(\n ranks=_scores_to_ranks(scores=rank_scores, reverse=False),\n rank_scores=rank_scores,\n rank_per_metric=rank_per_metric,\n )\n\n\ndef _filter_valid_results(\n *, results: Iterable[Result], metrics: Tuple[Metric, ...]\n) -> List[Result]:\n \"\"\" Ensure that all of the metrics are in every result \"\"\"\n return [\n res\n for res in results\n if all(get_jsonpath(res.metrics, m.path) != \"\" for m in metrics)\n ]\n\n\ndef _get_rank_per_metric(\n *, results: Iterable[Result], metrics: Tuple[Metric, ...]\n) -> Dict[str, Dict[str, float]]:\n \"\"\"\n Takes results and calculates the rank for each of the individual metrics\n\n Returns a dictionary where the key is the pk of the result, and the\n values is another dictionary where the key is the path of the metric and\n the value is the rank of this result for this metric\n \"\"\"\n metric_rank = {}\n for metric in metrics:\n # Extract the value of the metric for this primary key and sort on the\n # value of the metric\n metric_scores = {\n res.pk: get_jsonpath(res.metrics, metric.path) for res in results\n }\n metric_rank[metric.path] = _scores_to_ranks(\n scores=metric_scores, reverse=metric.reverse\n )\n\n return {\n res.pk: {\n metric_path: ranks[res.pk]\n for metric_path, ranks in metric_rank.items()\n }\n for res in results\n }\n\n\ndef _scores_to_ranks(\n *, scores: Dict, reverse: bool = False\n) -> Dict[str, float]:\n \"\"\"\n Go from a score (a scalar) to a rank (integer). 
If two scalars are the\n same then they will have the same rank.\n\n Takes a dictionary where the keys are the pk of the results and the values\n are the scores.\n\n Outputs a dictionary where they keys are the pk of the results and the\n values are the ranks.\n \"\"\"\n scores = OrderedDict(\n sorted(scores.items(), key=lambda t: t[1], reverse=reverse)\n )\n\n ranks = {}\n current_score = current_rank = None\n\n for idx, (pk, score) in enumerate(scores.items()):\n if score != current_score:\n current_score = score\n current_rank = idx + 1\n\n ranks[pk] = current_rank\n\n return ranks\n", "path": "app/grandchallenge/evaluation/utils.py"}], "after_files": [{"content": "from collections import OrderedDict\nfrom typing import Tuple, NamedTuple, List, Callable, Iterable, Dict\n\nfrom grandchallenge.evaluation.models import Result\nfrom grandchallenge.evaluation.templatetags.evaluation_extras import (\n get_jsonpath\n)\n\n\nclass Metric(NamedTuple):\n path: str\n reverse: bool\n\n\nclass Positions(NamedTuple):\n ranks: Dict[str, float]\n rank_scores: Dict[str, float]\n rank_per_metric: Dict[str, Dict[str, float]]\n\n\ndef rank_results(\n *,\n results: Tuple[Result, ...],\n metrics: Tuple[Metric, ...],\n score_method: Callable,\n) -> Positions:\n \"\"\"\n Calculates the overall rank for each result, along with the rank_score\n and the rank per metric.\n \"\"\"\n\n results = _filter_valid_results(results=results, metrics=metrics)\n\n rank_per_metric = _get_rank_per_metric(results=results, metrics=metrics)\n\n rank_scores = {\n pk: score_method([m for m in metrics.values()])\n for pk, metrics in rank_per_metric.items()\n }\n\n return Positions(\n ranks=_scores_to_ranks(scores=rank_scores, reverse=False),\n rank_scores=rank_scores,\n rank_per_metric=rank_per_metric,\n )\n\n\ndef _filter_valid_results(\n *, results: Iterable[Result], metrics: Tuple[Metric, ...]\n) -> List[Result]:\n \"\"\" Ensure that all of the metrics are in every result \"\"\"\n return [\n res\n for res in results\n if all(\n get_jsonpath(res.metrics, m.path) not in [\"\", None]\n for m in metrics\n )\n ]\n\n\ndef _get_rank_per_metric(\n *, results: Iterable[Result], metrics: Tuple[Metric, ...]\n) -> Dict[str, Dict[str, float]]:\n \"\"\"\n Takes results and calculates the rank for each of the individual metrics\n\n Returns a dictionary where the key is the pk of the result, and the\n values is another dictionary where the key is the path of the metric and\n the value is the rank of this result for this metric\n \"\"\"\n metric_rank = {}\n for metric in metrics:\n # Extract the value of the metric for this primary key and sort on the\n # value of the metric\n metric_scores = {\n res.pk: get_jsonpath(res.metrics, metric.path) for res in results\n }\n metric_rank[metric.path] = _scores_to_ranks(\n scores=metric_scores, reverse=metric.reverse\n )\n\n return {\n res.pk: {\n metric_path: ranks[res.pk]\n for metric_path, ranks in metric_rank.items()\n }\n for res in results\n }\n\n\ndef _scores_to_ranks(\n *, scores: Dict, reverse: bool = False\n) -> Dict[str, float]:\n \"\"\"\n Go from a score (a scalar) to a rank (integer). 
If two scalars are the\n same then they will have the same rank.\n\n Takes a dictionary where the keys are the pk of the results and the values\n are the scores.\n\n Outputs a dictionary where they keys are the pk of the results and the\n values are the ranks.\n \"\"\"\n scores = OrderedDict(\n sorted(scores.items(), key=lambda t: t[1], reverse=reverse)\n )\n\n ranks = {}\n current_score = current_rank = None\n\n for idx, (pk, score) in enumerate(scores.items()):\n if score != current_score:\n current_score = score\n current_rank = idx + 1\n\n ranks[pk] = current_rank\n\n return ranks\n", "path": "app/grandchallenge/evaluation/utils.py"}]} | 1,314 | 123 |
gh_patches_debug_7745 | rasdani/github-patches | git_diff | streamlink__streamlink-1302 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TVCatchup plugin is not working - "This service is ending soon"
### Checklist
- [x] This is a bug report.
- [ ] This is a feature request.
- [x] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
The TVCatchup plugin has not been working for some time. The plugin is able to connect to a stream without any errors, but the stream is different from the one on the TVCatchup website. It looks like streamlink gets a different stream, deliberately prepared by the service provider to send the message: "This service is ending soon. Please download TVCatchup from the app store". Assuming that a real stream is still available on the website and mobile app, is it still possible to open it with streamlink?
Current stream for all of the channels:

Thanks
### Reproduction steps / Explicit stream URLs to test
streamlink http://tvcatchup.com/watch/bbctwo best
### Environment details
Operating system and version: Windows/Linux
Streamlink and Python version: Streamlink 0.8.1
[Love Streamlink? Please consider supporting our collective. Thanks!](https://opencollective.com/streamlink/donate)
--- END ISSUE ---
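As a hypothetical illustration (made-up URLs): when a page embeds more than one `m3u8` URL, an unanchored pattern returns whichever comes first, while anchoring the match to the player's `source` element picks the intended one:
```python
import re

html = '''placeholder = "https://cdn.example/ending_soon.m3u8?clientKey=x"
<video><source src="https://cdn.example/live/bbctwo.m3u8?clientKey=y"></video>'''

loose = re.compile(r'''(?P<q>["'])(?P<u>https?://.*?m3u8\?.*?clientKey=.*?)(?P=q)''')
strict = re.compile(r'''source.*?(?P<q>["'])(?P<u>https?://.*?m3u8\?.*?clientKey=.*?)(?P=q)''')

print(loose.search(html).group("u"))   # the first URL on the page (the placeholder)
print(strict.search(html).group("u"))  # the URL attached to the <source> tag
```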
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/tvcatchup.py`
Content:
```
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import http
5 from streamlink.stream import HLSStream
6
7 USER_AGENT = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
8 _url_re = re.compile(r"http://(?:www\.)?tvcatchup.com/watch/\w+")
9 _stream_re = re.compile(r'''(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''')
10
11
12 class TVCatchup(Plugin):
13 @classmethod
14 def can_handle_url(cls, url):
15 return _url_re.match(url)
16
17 def _get_streams(self):
18 """
19 Finds the streams from tvcatchup.com.
20 """
21 http.headers.update({"User-Agent": USER_AGENT})
22 res = http.get(self.url)
23
24 match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)
25
26 if match:
27 stream_url = match.group("stream_url")
28
29 if stream_url:
30 if "_adp" in stream_url:
31 return HLSStream.parse_variant_playlist(self.session, stream_url)
32 else:
33 return {'576p': HLSStream(self.session, stream_url)}
34
35
36 __plugin__ = TVCatchup
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/tvcatchup.py b/src/streamlink/plugins/tvcatchup.py
--- a/src/streamlink/plugins/tvcatchup.py
+++ b/src/streamlink/plugins/tvcatchup.py
@@ -6,7 +6,7 @@
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
_url_re = re.compile(r"http://(?:www\.)?tvcatchup.com/watch/\w+")
-_stream_re = re.compile(r'''(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''')
+_stream_re = re.compile(r'''source.*?(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''')
class TVCatchup(Plugin):
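The fix prefixes the pattern with `source.*?`, presumably so that only the playlist URL attached to the player's source definition is captured, rather than the first `.m3u8?clientKey=` URL found anywhere in the page. A minimal sketch of the behavioural difference (the page snippet below is a made-up illustration, not real tvcatchup.com markup):

```python
import re

old_re = re.compile(r'''(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''')
new_re = re.compile(r'''source.*?(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''')

# Hypothetical page: a decoy playlist appears before the real player source.
page = '''var decoy = "http://cdn.example/ending-soon.m3u8?clientKey=abc";
player.source = "http://cdn.example/channel.m3u8?clientKey=xyz";'''

print(old_re.search(page).group("stream_url"))  # first URL in the page (the decoy)
print(new_re.search(page).group("stream_url"))  # URL following a "source" definition
```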
| {"golden_diff": "diff --git a/src/streamlink/plugins/tvcatchup.py b/src/streamlink/plugins/tvcatchup.py\n--- a/src/streamlink/plugins/tvcatchup.py\n+++ b/src/streamlink/plugins/tvcatchup.py\n@@ -6,7 +6,7 @@\n \n USER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n _url_re = re.compile(r\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n-_stream_re = re.compile(r'''(?P<q>[\"'])(?P<stream_url>https?://.*m3u8\\?.*clientKey=.*?)(?P=q)''')\n+_stream_re = re.compile(r'''source.*?(?P<q>[\"'])(?P<stream_url>https?://.*m3u8\\?.*clientKey=.*?)(?P=q)''')\n \n \n class TVCatchup(Plugin):\n", "issue": "TVCatchup plugin is not working - \"This service is ending soon\"\n### Checklist\r\n\r\n- [x] This is a bug report.\r\n- [ ] This is a feature request.\r\n- [x] This is a plugin (improvement) request.\r\n- [ ] I have read the contribution guidelines.\r\n\r\n### Description\r\n\r\nTVCatchup plugin is not working for some time. The problem is that plugin is able to connect to a stream without any errors but the stream is different comparing to the TVCatchup website's stream. It looks like streamlink gets a different type of stream deliberately prepared by the service provider to send the message: \"This service is ending soon. Please download TVCatchup from the app store\". Assuming that there is a real stream available on the website and mobile app, is it still possible to open it by streamlink?\r\n\r\nCurrent stream for all of the channels:\r\n\r\n\r\n\r\nThanks\r\n\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\nstreamlink http://tvcatchup.com/watch/bbctwo best\r\n\r\n### Environment details\r\n\r\nOperating system and version: Windows/Linux\r\nStreamlink and Python version: Streamlink 0.8.1\r\n\r\n[Love Streamlink? Please consider supporting our collective. 
Thanks!](https://opencollective.com/streamlink/donate)\r\n\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\nUSER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n_url_re = re.compile(r\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n_stream_re = re.compile(r'''(?P<q>[\"'])(?P<stream_url>https?://.*m3u8\\?.*clientKey=.*?)(?P=q)''')\n\n\nclass TVCatchup(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n \"\"\"\n Finds the streams from tvcatchup.com.\n \"\"\"\n http.headers.update({\"User-Agent\": USER_AGENT})\n res = http.get(self.url)\n\n match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)\n\n if match:\n stream_url = match.group(\"stream_url\")\n\n if stream_url:\n if \"_adp\" in stream_url:\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n else:\n return {'576p': HLSStream(self.session, stream_url)}\n\n\n__plugin__ = TVCatchup\n", "path": "src/streamlink/plugins/tvcatchup.py"}], "after_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\nUSER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n_url_re = re.compile(r\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n_stream_re = re.compile(r'''source.*?(?P<q>[\"'])(?P<stream_url>https?://.*m3u8\\?.*clientKey=.*?)(?P=q)''')\n\n\nclass TVCatchup(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n \"\"\"\n Finds the streams from tvcatchup.com.\n \"\"\"\n http.headers.update({\"User-Agent\": USER_AGENT})\n res = http.get(self.url)\n\n match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)\n\n if match:\n stream_url = match.group(\"stream_url\")\n\n if stream_url:\n if \"_adp\" in stream_url:\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n else:\n return {'576p': HLSStream(self.session, stream_url)}\n\n\n__plugin__ = TVCatchup\n", "path": "src/streamlink/plugins/tvcatchup.py"}]} | 966 | 236 |
gh_patches_debug_2018 | rasdani/github-patches | git_diff | pyca__cryptography-1237 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
0.5 fails to compile on OS X 10.8
Full traceback: http://pastebin.com/raw.php?i=M9N6Fgzi
@reaperhulk has diagnosed it, but this will require a 0.5.2 release to fix for supported platforms.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/hazmat/bindings/commoncrypto/secitem.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 INCLUDES = """
17 #include <Security/SecItem.h>
18 """
19
20 TYPES = """
21 const CFTypeRef kSecAttrKeyType;
22 const CFTypeRef kSecAttrKeySizeInBits;
23 const CFTypeRef kSecAttrIsPermanent;
24 const CFTypeRef kSecAttrKeyTypeRSA;
25 const CFTypeRef kSecAttrKeyTypeDSA;
26 const CFTypeRef kSecAttrKeyTypeEC;
27 const CFTypeRef kSecAttrKeyTypeEC;
28 const CFTypeRef kSecUseKeychain;
29 """
30
31 FUNCTIONS = """
32 """
33
34 MACROS = """
35 """
36
37 CUSTOMIZATIONS = """
38 """
39
40 CONDITIONAL_NAMES = {}
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cryptography/hazmat/bindings/commoncrypto/secitem.py b/cryptography/hazmat/bindings/commoncrypto/secitem.py
--- a/cryptography/hazmat/bindings/commoncrypto/secitem.py
+++ b/cryptography/hazmat/bindings/commoncrypto/secitem.py
@@ -23,8 +23,6 @@
const CFTypeRef kSecAttrIsPermanent;
const CFTypeRef kSecAttrKeyTypeRSA;
const CFTypeRef kSecAttrKeyTypeDSA;
-const CFTypeRef kSecAttrKeyTypeEC;
-const CFTypeRef kSecAttrKeyTypeEC;
const CFTypeRef kSecUseKeychain;
"""
| {"golden_diff": "diff --git a/cryptography/hazmat/bindings/commoncrypto/secitem.py b/cryptography/hazmat/bindings/commoncrypto/secitem.py\n--- a/cryptography/hazmat/bindings/commoncrypto/secitem.py\n+++ b/cryptography/hazmat/bindings/commoncrypto/secitem.py\n@@ -23,8 +23,6 @@\n const CFTypeRef kSecAttrIsPermanent;\n const CFTypeRef kSecAttrKeyTypeRSA;\n const CFTypeRef kSecAttrKeyTypeDSA;\n-const CFTypeRef kSecAttrKeyTypeEC;\n-const CFTypeRef kSecAttrKeyTypeEC;\n const CFTypeRef kSecUseKeychain;\n \"\"\"\n", "issue": "0.5 fails to compile on OS X 10.8\nFull traceback: http://pastebin.com/raw.php?i=M9N6Fgzi\n\n@reaperhulk has diagnosed, but this will require an 0.5.2 release to fix for supported platform.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <Security/SecItem.h>\n\"\"\"\n\nTYPES = \"\"\"\nconst CFTypeRef kSecAttrKeyType;\nconst CFTypeRef kSecAttrKeySizeInBits;\nconst CFTypeRef kSecAttrIsPermanent;\nconst CFTypeRef kSecAttrKeyTypeRSA;\nconst CFTypeRef kSecAttrKeyTypeDSA;\nconst CFTypeRef kSecAttrKeyTypeEC;\nconst CFTypeRef kSecAttrKeyTypeEC;\nconst CFTypeRef kSecUseKeychain;\n\"\"\"\n\nFUNCTIONS = \"\"\"\n\"\"\"\n\nMACROS = \"\"\"\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n\"\"\"\n\nCONDITIONAL_NAMES = {}\n", "path": "cryptography/hazmat/bindings/commoncrypto/secitem.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <Security/SecItem.h>\n\"\"\"\n\nTYPES = \"\"\"\nconst CFTypeRef kSecAttrKeyType;\nconst CFTypeRef kSecAttrKeySizeInBits;\nconst CFTypeRef kSecAttrIsPermanent;\nconst CFTypeRef kSecAttrKeyTypeRSA;\nconst CFTypeRef kSecAttrKeyTypeDSA;\nconst CFTypeRef kSecUseKeychain;\n\"\"\"\n\nFUNCTIONS = \"\"\"\n\"\"\"\n\nMACROS = \"\"\"\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n\"\"\"\n\nCONDITIONAL_NAMES = {}\n", "path": "cryptography/hazmat/bindings/commoncrypto/secitem.py"}]} | 660 | 135 |
gh_patches_debug_3327 | rasdani/github-patches | git_diff | kubeflow__pipelines-5135 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Presubmit failure
If you look into the test it said
```
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/usr/local/lib/python3.6/site-packages/kfp/__init__.py", line 24, in <module>
from ._client import Client
File "/usr/local/lib/python3.6/site-packages/kfp/_client.py", line 31, in <module>
from kfp.compiler import compiler
File "/usr/local/lib/python3.6/site-packages/kfp/compiler/__init__.py", line 17, in <module>
from ..containers._component_builder import build_python_component, build_docker_image, VersionedDependency
File "/usr/local/lib/python3.6/site-packages/kfp/containers/_component_builder.py", line 32, in <module>
from kfp.containers import entrypoint
File "/usr/local/lib/python3.6/site-packages/kfp/containers/entrypoint.py", line 23, in <module>
from kfp.containers import entrypoint_utils
File "/usr/local/lib/python3.6/site-packages/kfp/containers/entrypoint_utils.py", line 23, in <module>
from kfp.pipeline_spec import pipeline_spec_pb2
File "/usr/local/lib/python3.6/site-packages/kfp/pipeline_spec/pipeline_spec_pb2.py", line 23, in <module>
create_key=_descriptor._internal_create_key,
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
```
Looks like the `protobuf` version is not matching in this case. @Bobgy are you aware of this error? Thanks.
_Originally posted by @Tomcli in https://github.com/kubeflow/pipelines/pull/5059#issuecomment-777656530_
/cc @numerology @chensun @Ark-kun
Can you take a look at this issue? I have seen multiple reports, and this error seems to occur consistently.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/v2alpha1/python/setup.py`
Content:
```
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import subprocess
17 import sys
18 import setuptools
19 try:
20 from distutils.spawn import find_executable
21 except ImportError:
22 from shutil import which as find_executable
23
24 NAME = "kfp-pipeline-spec"
25 VERSION = "0.1.5"
26
27 PROTO_DIR = os.path.realpath(
28 os.path.join(os.path.dirname(__file__), os.pardir))
29
30 PKG_DIR = os.path.realpath(
31 os.path.join(os.path.dirname(__file__), "kfp", "pipeline_spec"))
32
33 # Find the Protocol Compiler. (Taken from protobuf/python/setup.py)
34 if "PROTOC" in os.environ and os.path.exists(os.environ["PROTOC"]):
35 PROTOC = os.environ["PROTOC"]
36 else:
37 PROTOC = find_executable("protoc")
38
39
40 def GenerateProto(source):
41 """Generate a _pb2.py from a .proto file.
42
43 Invokes the Protocol Compiler to generate a _pb2.py from the given
44 .proto file. Does nothing if the output already exists and is newer than
45 the input.
46
47 Args:
48 source: The source proto file that needs to be compiled.
49 """
50
51 output = source.replace(".proto", "_pb2.py")
52
53 if not os.path.exists(output) or (
54 os.path.exists(source) and
55 os.path.getmtime(source) > os.path.getmtime(output)):
56 print("Generating %s..." % output)
57
58 if not os.path.exists(source):
59 sys.stderr.write("Can't find required file: %s\n" % source)
60 sys.exit(-1)
61
62 if PROTOC is None:
63 sys.stderr.write("protoc is not found. Please compile it "
64 "or install the binary package.\n")
65 sys.exit(-1)
66
67 protoc_command = [
68 PROTOC, "-I%s" % PROTO_DIR,
69 "--python_out=%s" % PKG_DIR, source
70 ]
71 if subprocess.call(protoc_command) != 0:
72 sys.exit(-1)
73
74
75 # Generate the protobuf files that we depend on.
76 GenerateProto(os.path.join(PROTO_DIR, "pipeline_spec.proto"))
77
78 setuptools.setup(
79 name=NAME,
80 version=VERSION,
81 description="Kubeflow Pipelines pipeline spec",
82 author="google",
83 author_email="[email protected]",
84 url="https://github.com/kubeflow/pipelines",
85 packages=setuptools.find_namespace_packages(include=['kfp.*']),
86 python_requires=">=3.5.3",
87 include_package_data=True,
88 license="Apache 2.0",
89 )
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api/v2alpha1/python/setup.py b/api/v2alpha1/python/setup.py
--- a/api/v2alpha1/python/setup.py
+++ b/api/v2alpha1/python/setup.py
@@ -84,6 +84,7 @@
url="https://github.com/kubeflow/pipelines",
packages=setuptools.find_namespace_packages(include=['kfp.*']),
python_requires=">=3.5.3",
+ install_requires=["protobuf>=3.13.0,<4"],
include_package_data=True,
license="Apache 2.0",
)
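For context, the `AttributeError` in the traceback is the usual symptom of `_pb2.py` modules generated by a newer `protoc` being imported against an older `protobuf` runtime: the generated code passes `create_key=_descriptor._internal_create_key`, which old runtimes do not define. Pinning `protobuf>=3.13.0,<4` keeps the runtime in step with the generated `pipeline_spec_pb2.py`. A quick, illustrative way to check the installed runtime (not part of the repository):

```python
# Illustrative check: is the installed protobuf runtime new enough for
# recently generated _pb2.py files?
import google.protobuf
from google.protobuf import descriptor

print(google.protobuf.__version__)
# False on the old runtimes that raise the AttributeError seen in the issue.
print(hasattr(descriptor, "_internal_create_key"))
```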
| {"golden_diff": "diff --git a/api/v2alpha1/python/setup.py b/api/v2alpha1/python/setup.py\n--- a/api/v2alpha1/python/setup.py\n+++ b/api/v2alpha1/python/setup.py\n@@ -84,6 +84,7 @@\n url=\"https://github.com/kubeflow/pipelines\",\n packages=setuptools.find_namespace_packages(include=['kfp.*']),\n python_requires=\">=3.5.3\",\n+ install_requires=[\"protobuf>=3.13.0,<4\"],\n include_package_data=True,\n license=\"Apache 2.0\",\n )\n", "issue": "Presubmit failure\nIf you look into the test it said\r\n```\r\nTraceback (most recent call last):\r\n File \"<string>\", line 3, in <module>\r\n File \"/usr/local/lib/python3.6/site-packages/kfp/__init__.py\", line 24, in <module>\r\n from ._client import Client\r\n File \"/usr/local/lib/python3.6/site-packages/kfp/_client.py\", line 31, in <module>\r\n from kfp.compiler import compiler\r\n File \"/usr/local/lib/python3.6/site-packages/kfp/compiler/__init__.py\", line 17, in <module>\r\n from ..containers._component_builder import build_python_component, build_docker_image, VersionedDependency\r\n File \"/usr/local/lib/python3.6/site-packages/kfp/containers/_component_builder.py\", line 32, in <module>\r\n from kfp.containers import entrypoint\r\n File \"/usr/local/lib/python3.6/site-packages/kfp/containers/entrypoint.py\", line 23, in <module>\r\n from kfp.containers import entrypoint_utils\r\n File \"/usr/local/lib/python3.6/site-packages/kfp/containers/entrypoint_utils.py\", line 23, in <module>\r\n from kfp.pipeline_spec import pipeline_spec_pb2\r\n File \"/usr/local/lib/python3.6/site-packages/kfp/pipeline_spec/pipeline_spec_pb2.py\", line 23, in <module>\r\n create_key=_descriptor._internal_create_key,\r\nAttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'\r\n```\r\n\r\nLooks like the `protobuf` version is not matching in this case. @Bobgy are you aware of this error? Thanks.\n\n_Originally posted by @Tomcli in https://github.com/kubeflow/pipelines/pull/5059#issuecomment-777656530_\n\n/cc @numerology @chensun @Ark-kun \nCan you take a look at this issue? I have seen multiple reports, this error seems to fail consistently.\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport subprocess\nimport sys\nimport setuptools\ntry:\n from distutils.spawn import find_executable\nexcept ImportError:\n from shutil import which as find_executable\n\nNAME = \"kfp-pipeline-spec\"\nVERSION = \"0.1.5\"\n\nPROTO_DIR = os.path.realpath(\n os.path.join(os.path.dirname(__file__), os.pardir))\n\nPKG_DIR = os.path.realpath(\n os.path.join(os.path.dirname(__file__), \"kfp\", \"pipeline_spec\"))\n\n# Find the Protocol Compiler. 
(Taken from protobuf/python/setup.py)\nif \"PROTOC\" in os.environ and os.path.exists(os.environ[\"PROTOC\"]):\n PROTOC = os.environ[\"PROTOC\"]\nelse:\n PROTOC = find_executable(\"protoc\")\n\n\ndef GenerateProto(source):\n \"\"\"Generate a _pb2.py from a .proto file.\n\n Invokes the Protocol Compiler to generate a _pb2.py from the given\n .proto file. Does nothing if the output already exists and is newer than\n the input.\n\n Args:\n source: The source proto file that needs to be compiled.\n \"\"\"\n\n output = source.replace(\".proto\", \"_pb2.py\")\n\n if not os.path.exists(output) or (\n os.path.exists(source) and\n os.path.getmtime(source) > os.path.getmtime(output)):\n print(\"Generating %s...\" % output)\n\n if not os.path.exists(source):\n sys.stderr.write(\"Can't find required file: %s\\n\" % source)\n sys.exit(-1)\n\n if PROTOC is None:\n sys.stderr.write(\"protoc is not found. Please compile it \"\n \"or install the binary package.\\n\")\n sys.exit(-1)\n\n protoc_command = [\n PROTOC, \"-I%s\" % PROTO_DIR,\n \"--python_out=%s\" % PKG_DIR, source\n ]\n if subprocess.call(protoc_command) != 0:\n sys.exit(-1)\n\n\n# Generate the protobuf files that we depend on.\nGenerateProto(os.path.join(PROTO_DIR, \"pipeline_spec.proto\"))\n\nsetuptools.setup(\n name=NAME,\n version=VERSION,\n description=\"Kubeflow Pipelines pipeline spec\",\n author=\"google\",\n author_email=\"[email protected]\",\n url=\"https://github.com/kubeflow/pipelines\",\n packages=setuptools.find_namespace_packages(include=['kfp.*']),\n python_requires=\">=3.5.3\",\n include_package_data=True,\n license=\"Apache 2.0\",\n)\n", "path": "api/v2alpha1/python/setup.py"}], "after_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport subprocess\nimport sys\nimport setuptools\ntry:\n from distutils.spawn import find_executable\nexcept ImportError:\n from shutil import which as find_executable\n\nNAME = \"kfp-pipeline-spec\"\nVERSION = \"0.1.5\"\n\nPROTO_DIR = os.path.realpath(\n os.path.join(os.path.dirname(__file__), os.pardir))\n\nPKG_DIR = os.path.realpath(\n os.path.join(os.path.dirname(__file__), \"kfp\", \"pipeline_spec\"))\n\n# Find the Protocol Compiler. (Taken from protobuf/python/setup.py)\nif \"PROTOC\" in os.environ and os.path.exists(os.environ[\"PROTOC\"]):\n PROTOC = os.environ[\"PROTOC\"]\nelse:\n PROTOC = find_executable(\"protoc\")\n\n\ndef GenerateProto(source):\n \"\"\"Generate a _pb2.py from a .proto file.\n\n Invokes the Protocol Compiler to generate a _pb2.py from the given\n .proto file. 
Does nothing if the output already exists and is newer than\n the input.\n\n Args:\n source: The source proto file that needs to be compiled.\n \"\"\"\n\n output = source.replace(\".proto\", \"_pb2.py\")\n\n if not os.path.exists(output) or (\n os.path.exists(source) and\n os.path.getmtime(source) > os.path.getmtime(output)):\n print(\"Generating %s...\" % output)\n\n if not os.path.exists(source):\n sys.stderr.write(\"Can't find required file: %s\\n\" % source)\n sys.exit(-1)\n\n if PROTOC is None:\n sys.stderr.write(\"protoc is not found. Please compile it \"\n \"or install the binary package.\\n\")\n sys.exit(-1)\n\n protoc_command = [\n PROTOC, \"-I%s\" % PROTO_DIR,\n \"--python_out=%s\" % PKG_DIR, source\n ]\n if subprocess.call(protoc_command) != 0:\n sys.exit(-1)\n\n\n# Generate the protobuf files that we depend on.\nGenerateProto(os.path.join(PROTO_DIR, \"pipeline_spec.proto\"))\n\nsetuptools.setup(\n name=NAME,\n version=VERSION,\n description=\"Kubeflow Pipelines pipeline spec\",\n author=\"google\",\n author_email=\"[email protected]\",\n url=\"https://github.com/kubeflow/pipelines\",\n packages=setuptools.find_namespace_packages(include=['kfp.*']),\n python_requires=\">=3.5.3\",\n install_requires=[\"protobuf>=3.13.0,<4\"],\n include_package_data=True,\n license=\"Apache 2.0\",\n)\n", "path": "api/v2alpha1/python/setup.py"}]} | 1,565 | 126 |
gh_patches_debug_26964 | rasdani/github-patches | git_diff | pypi__warehouse-4274 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyPI sends emails with unicode in header
**Describe the bug**
<!-- A clear and concise description the bug -->
I created a new account on pypi.org, but I am not receiving any verification link by email - neither after account creation nor after resending the verification link (even though pypi.org claims "Verification email for [email protected] resent").
test.pypi.org works for me, using the same account name and email address. I deleted and recreated the account on both pypi.org and test.pypi.org - same behaviour (well, the initial email sometimes did not arrive on test.pypi.org, but the verification resend always arrived right away).
**Expected behavior**
<!-- A clear and concise description of what you expected to happen -->
I should get an email token after registering, and also when I resend the verification mail from the account page. I know it might take a while, but I have been experiencing this since Friday.
**To Reproduce**
<!-- Steps to reproduce the bug, or a link to PyPI where the bug is visible -->
Register on pypi.org, wait for email. Resend the verification email from the account settings page, wait again.
**My Platform**
<!--
Any details about your specific platform:
* If the problem is in the browser, what browser, version, and OS?
* If the problem is with a command-line tool, what version of that tool?
* If the problem is with connecting to PyPI, include some details about
your network, including SSL/TLS implementation in use, internet service
provider, and if there are any firewalls or proxies in use.
-->
Firefox 52
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/email/services.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from email.headerregistry import Address
14 from email.utils import parseaddr
15
16 from pyramid_mailer import get_mailer
17 from pyramid_mailer.message import Message
18 from zope.interface import implementer
19
20 from warehouse.email.interfaces import IEmailSender
21 from warehouse.email.ses.models import EmailMessage
22
23
24 def _format_sender(sitename, sender):
25 if sender is not None:
26 return str(Address(sitename, addr_spec=sender))
27
28
29 @implementer(IEmailSender)
30 class SMTPEmailSender:
31 def __init__(self, mailer, sender=None):
32 self.mailer = mailer
33 self.sender = sender
34
35 @classmethod
36 def create_service(cls, context, request):
37 sitename = request.registry.settings["site.name"]
38 sender = _format_sender(sitename, request.registry.settings.get("mail.sender"))
39 return cls(get_mailer(request), sender=sender)
40
41 def send(self, subject, body, *, recipient):
42 message = Message(
43 subject=subject, body=body, recipients=[recipient], sender=self.sender
44 )
45 self.mailer.send_immediately(message)
46
47
48 @implementer(IEmailSender)
49 class SESEmailSender:
50 def __init__(self, client, *, sender=None, db):
51 self._client = client
52 self._sender = sender
53 self._db = db
54
55 @classmethod
56 def create_service(cls, context, request):
57 sitename = request.registry.settings["site.name"]
58 sender = _format_sender(sitename, request.registry.settings.get("mail.sender"))
59
60 aws_session = request.find_service(name="aws.session")
61
62 return cls(
63 aws_session.client(
64 "ses", region_name=request.registry.settings.get("mail.region")
65 ),
66 sender=sender,
67 db=request.db,
68 )
69
70 def send(self, subject, body, *, recipient):
71 resp = self._client.send_email(
72 Source=self._sender,
73 Destination={"ToAddresses": [recipient]},
74 Message={
75 "Subject": {"Data": subject, "Charset": "UTF-8"},
76 "Body": {"Text": {"Data": body, "Charset": "UTF-8"}},
77 },
78 )
79
80 self._db.add(
81 EmailMessage(
82 message_id=resp["MessageId"],
83 from_=parseaddr(self._sender)[1],
84 to=parseaddr(recipient)[1],
85 subject=subject,
86 )
87 )
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/email/services.py b/warehouse/email/services.py
--- a/warehouse/email/services.py
+++ b/warehouse/email/services.py
@@ -11,7 +11,9 @@
# limitations under the License.
from email.headerregistry import Address
-from email.utils import parseaddr
+from email.mime.multipart import MIMEMultipart
+from email.mime.text import MIMEText
+from email.utils import parseaddr, formataddr
from pyramid_mailer import get_mailer
from pyramid_mailer.message import Message
@@ -68,13 +70,25 @@
)
def send(self, subject, body, *, recipient):
- resp = self._client.send_email(
+ message = MIMEMultipart("mixed")
+ message["Subject"] = subject
+ message["From"] = self._sender
+
+ # The following is necessary to support friendly names with Unicode characters,
+ # otherwise the entire value will get encoded and will not be accepted by SES:
+ #
+ # >>> parseaddr("Fööbar <[email protected]>")
+ # ('Fööbar', '[email protected]')
+ # >>> formataddr(_)
+ # '=?utf-8?b?RsO2w7ZiYXI=?= <[email protected]>'
+ message["To"] = formataddr(parseaddr(recipient))
+
+ message.attach(MIMEText(body, "plain", "utf-8"))
+
+ resp = self._client.send_raw_email(
Source=self._sender,
- Destination={"ToAddresses": [recipient]},
- Message={
- "Subject": {"Data": subject, "Charset": "UTF-8"},
- "Body": {"Text": {"Data": body, "Charset": "UTF-8"}},
- },
+ Destinations=[recipient],
+ RawMessage={"Data": message.as_string()},
)
self._db.add(
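The core of the fix is switching to a raw MIME message and normalising the recipient with `formataddr(parseaddr(...))`, which RFC 2047-encodes only the non-ASCII display name; the patch's own comment notes that otherwise the entire header value would get encoded and be rejected by SES. A standalone illustration mirroring that comment (the address is made up):

```python
from email.utils import formataddr, parseaddr

recipient = "Fööbar <[email protected]>"

print(parseaddr(recipient))
# ('Fööbar', '[email protected]')
print(formataddr(parseaddr(recipient)))
# '=?utf-8?b?RsO2w7ZiYXI=?= <[email protected]>'
```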
| {"golden_diff": "diff --git a/warehouse/email/services.py b/warehouse/email/services.py\n--- a/warehouse/email/services.py\n+++ b/warehouse/email/services.py\n@@ -11,7 +11,9 @@\n # limitations under the License.\n \n from email.headerregistry import Address\n-from email.utils import parseaddr\n+from email.mime.multipart import MIMEMultipart\n+from email.mime.text import MIMEText\n+from email.utils import parseaddr, formataddr\n \n from pyramid_mailer import get_mailer\n from pyramid_mailer.message import Message\n@@ -68,13 +70,25 @@\n )\n \n def send(self, subject, body, *, recipient):\n- resp = self._client.send_email(\n+ message = MIMEMultipart(\"mixed\")\n+ message[\"Subject\"] = subject\n+ message[\"From\"] = self._sender\n+\n+ # The following is necessary to support friendly names with Unicode characters,\n+ # otherwise the entire value will get encoded and will not be accepted by SES:\n+ #\n+ # >>> parseaddr(\"F\u00f6\u00f6bar <[email protected]>\")\n+ # ('F\u00f6\u00f6bar', '[email protected]')\n+ # >>> formataddr(_)\n+ # '=?utf-8?b?RsO2w7ZiYXI=?= <[email protected]>'\n+ message[\"To\"] = formataddr(parseaddr(recipient))\n+\n+ message.attach(MIMEText(body, \"plain\", \"utf-8\"))\n+\n+ resp = self._client.send_raw_email(\n Source=self._sender,\n- Destination={\"ToAddresses\": [recipient]},\n- Message={\n- \"Subject\": {\"Data\": subject, \"Charset\": \"UTF-8\"},\n- \"Body\": {\"Text\": {\"Data\": body, \"Charset\": \"UTF-8\"}},\n- },\n+ Destinations=[recipient],\n+ RawMessage={\"Data\": message.as_string()},\n )\n \n self._db.add(\n", "issue": "PyPI sends emails with unicode in header\n**Describe the bug**\r\n<!-- A clear and concise description the bug -->\r\nI created a new account on pypi.org, but I am not receiving any email verification link by email - not after accouunt creation and also not after resending the verification link (even if pypi.org claims \"Verification email for [email protected] resent\").\r\n\r\ntest.pypi.org works for me, using the same account name and email address. I deleted and recreated the account on both pypi.org and test.pypi.org - same behaviour (well, the initial email did sometimes not arrive on test.pypi.org, but the the verification resend always arrived right away).\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen -->\r\nI should get an email token after registering, and also when I resend the verification mail from the account page. I know it might take a while, but I am experiencing this since Friday.\r\n\r\n**To Reproduce**\r\n<!-- Steps to reproduce the bug, or a link to PyPI where the bug is visible -->\r\nRegister on pypi.org, wait for email. 
Resend the verification email from the account settings page, wait again.\r\n\r\n**My Platform**\r\n<!--\r\n Any details about your specific platform:\r\n * If the problem is in the browser, what browser, version, and OS?\r\n * If the problem is with a command-line tool, what version of that tool?\r\n * If the problem is with connecting to PyPI, include some details about\r\n your network, including SSL/TLS implementation in use, internet service\r\n provider, and if there are any firewalls or proxies in use.\r\n-->\r\nFirefox 52\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom email.headerregistry import Address\nfrom email.utils import parseaddr\n\nfrom pyramid_mailer import get_mailer\nfrom pyramid_mailer.message import Message\nfrom zope.interface import implementer\n\nfrom warehouse.email.interfaces import IEmailSender\nfrom warehouse.email.ses.models import EmailMessage\n\n\ndef _format_sender(sitename, sender):\n if sender is not None:\n return str(Address(sitename, addr_spec=sender))\n\n\n@implementer(IEmailSender)\nclass SMTPEmailSender:\n def __init__(self, mailer, sender=None):\n self.mailer = mailer\n self.sender = sender\n\n @classmethod\n def create_service(cls, context, request):\n sitename = request.registry.settings[\"site.name\"]\n sender = _format_sender(sitename, request.registry.settings.get(\"mail.sender\"))\n return cls(get_mailer(request), sender=sender)\n\n def send(self, subject, body, *, recipient):\n message = Message(\n subject=subject, body=body, recipients=[recipient], sender=self.sender\n )\n self.mailer.send_immediately(message)\n\n\n@implementer(IEmailSender)\nclass SESEmailSender:\n def __init__(self, client, *, sender=None, db):\n self._client = client\n self._sender = sender\n self._db = db\n\n @classmethod\n def create_service(cls, context, request):\n sitename = request.registry.settings[\"site.name\"]\n sender = _format_sender(sitename, request.registry.settings.get(\"mail.sender\"))\n\n aws_session = request.find_service(name=\"aws.session\")\n\n return cls(\n aws_session.client(\n \"ses\", region_name=request.registry.settings.get(\"mail.region\")\n ),\n sender=sender,\n db=request.db,\n )\n\n def send(self, subject, body, *, recipient):\n resp = self._client.send_email(\n Source=self._sender,\n Destination={\"ToAddresses\": [recipient]},\n Message={\n \"Subject\": {\"Data\": subject, \"Charset\": \"UTF-8\"},\n \"Body\": {\"Text\": {\"Data\": body, \"Charset\": \"UTF-8\"}},\n },\n )\n\n self._db.add(\n EmailMessage(\n message_id=resp[\"MessageId\"],\n from_=parseaddr(self._sender)[1],\n to=parseaddr(recipient)[1],\n subject=subject,\n )\n )\n", "path": "warehouse/email/services.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, 
software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom email.headerregistry import Address\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.text import MIMEText\nfrom email.utils import parseaddr, formataddr\n\nfrom pyramid_mailer import get_mailer\nfrom pyramid_mailer.message import Message\nfrom zope.interface import implementer\n\nfrom warehouse.email.interfaces import IEmailSender\nfrom warehouse.email.ses.models import EmailMessage\n\n\ndef _format_sender(sitename, sender):\n if sender is not None:\n return str(Address(sitename, addr_spec=sender))\n\n\n@implementer(IEmailSender)\nclass SMTPEmailSender:\n def __init__(self, mailer, sender=None):\n self.mailer = mailer\n self.sender = sender\n\n @classmethod\n def create_service(cls, context, request):\n sitename = request.registry.settings[\"site.name\"]\n sender = _format_sender(sitename, request.registry.settings.get(\"mail.sender\"))\n return cls(get_mailer(request), sender=sender)\n\n def send(self, subject, body, *, recipient):\n message = Message(\n subject=subject, body=body, recipients=[recipient], sender=self.sender\n )\n self.mailer.send_immediately(message)\n\n\n@implementer(IEmailSender)\nclass SESEmailSender:\n def __init__(self, client, *, sender=None, db):\n self._client = client\n self._sender = sender\n self._db = db\n\n @classmethod\n def create_service(cls, context, request):\n sitename = request.registry.settings[\"site.name\"]\n sender = _format_sender(sitename, request.registry.settings.get(\"mail.sender\"))\n\n aws_session = request.find_service(name=\"aws.session\")\n\n return cls(\n aws_session.client(\n \"ses\", region_name=request.registry.settings.get(\"mail.region\")\n ),\n sender=sender,\n db=request.db,\n )\n\n def send(self, subject, body, *, recipient):\n message = MIMEMultipart(\"mixed\")\n message[\"Subject\"] = subject\n message[\"From\"] = self._sender\n\n # The following is necessary to support friendly names with Unicode characters,\n # otherwise the entire value will get encoded and will not be accepted by SES:\n #\n # >>> parseaddr(\"F\u00f6\u00f6bar <[email protected]>\")\n # ('F\u00f6\u00f6bar', '[email protected]')\n # >>> formataddr(_)\n # '=?utf-8?b?RsO2w7ZiYXI=?= <[email protected]>'\n message[\"To\"] = formataddr(parseaddr(recipient))\n\n message.attach(MIMEText(body, \"plain\", \"utf-8\"))\n\n resp = self._client.send_raw_email(\n Source=self._sender,\n Destinations=[recipient],\n RawMessage={\"Data\": message.as_string()},\n )\n\n self._db.add(\n EmailMessage(\n message_id=resp[\"MessageId\"],\n from_=parseaddr(self._sender)[1],\n to=parseaddr(recipient)[1],\n subject=subject,\n )\n )\n", "path": "warehouse/email/services.py"}]} | 1,408 | 425 |
gh_patches_debug_4268 | rasdani/github-patches | git_diff | weecology__retriever-685 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove unnecessary warning when using `download`
When running `retriever download`, an unnecessary warning is displayed:
```
ethan@gandalf:~$ retriever download Clark2006
Creating database Clark2006...
Couldn't create database ('NoneType' object has no attribute 'execute').
...
```
The command executes correctly, but this warning isn't really correct (there is no database to create since we're just downloading the raw data files) and could be distracting to users.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `engines/download_only.py`
Content:
```
1 from __future__ import print_function
2 from builtins import object
3 import os
4 import platform
5 import shutil
6 import inspect
7
8 from retriever.lib.engine import filename_from_url
9 from retriever.lib.models import Engine, no_cleanup
10 from retriever import DATA_DIR, HOME_DIR
11
12
13 class DummyConnection(object):
14
15 def cursor(self):
16 pass
17
18 def commit(self):
19 pass
20
21 def rollback(self):
22 pass
23
24 def close(self):
25 pass
26
27
28 class DummyCursor(DummyConnection):
29 pass
30
31
32 class engine(Engine):
33 """Engine instance for writing data to a CSV file."""
34 name = "Download Only"
35 abbreviation = "download"
36 required_opts = [("path",
37 "File path to copy data files",
38 "./"),
39 ("subdir",
40 "Keep the subdirectories for archived files",
41 False)
42 ]
43
44 def table_exists(self, dbname, tablename):
45 """Checks if the file to be downloaded already exists"""
46 try:
47 tablename = self.table_name(name=tablename, dbname=dbname)
48 return os.path.exists(tablename)
49 except:
50 return False
51
52 def get_connection(self):
53 """Gets the db connection."""
54 self.get_input()
55 return DummyConnection()
56
57 def final_cleanup(self):
58 """Copies downloaded files to desired directory
59
60 Copies the downloaded files into the chosen directory unless files with the same
61 name already exist in the directory.
62
63 """
64 if hasattr(self, "all_files"):
65 for file_name in self.all_files:
66 file_path, file_name_nopath = os.path.split(file_name)
67 subdir = os.path.split(file_path)[1] if self.opts['subdir'] else ''
68 dest_path = os.path.join(self.opts['path'], subdir)
69 if os.path.isfile(os.path.join(dest_path, file_name_nopath)):
70 print ("File already exists at specified location")
71 elif os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):
72 print ("%s is already in the working directory" %
73 file_name_nopath)
74 print("Keeping existing copy.")
75 else:
76 print("Copying %s from %s" % (file_name_nopath, file_path))
77 if os.path.isdir(dest_path):
78 try:
79 shutil.copy(file_name, dest_path)
80 except:
81 print("Couldn't copy file to %s" % dest_path)
82 else:
83 try:
84 print("Creating directory %s" % dest_path)
85 os.makedirs(dest_path)
86 shutil.copy(file_name, dest_path)
87 except:
88 print("Couldn't create directory %s" % dest_path)
89 self.all_files = set()
90
91 def auto_create_table(self, table, url=None, filename=None, pk=None):
92 """Download the file if it doesn't exist"""
93 if url and not filename:
94 filename = filename_from_url(url)
95
96 if url and not self.find_file(filename):
97 # If the file doesn't exist, download it
98 self.download_file(url, filename)
99
100 def insert_data_from_url(self, url):
101 """Insert data from a web resource"""
102 filename = filename_from_url(url)
103 find = self.find_file(filename)
104 if not find:
105 self.create_raw_data_dir()
106 self.download_file(url, filename)
107
108 def find_file(self, filename):
109 """Checks for the given file and adds it to the list of all files"""
110 result = Engine.find_file(self, filename)
111 if not hasattr(self, "all_files"):
112 self.all_files = set()
113 if result:
114 self.all_files.add(result)
115 return result
116
117 def register_files(self, filenames):
118 """Identify a list of files to be moved by the download
119
120 When downloading archives with multiple files the engine needs to be
121 informed of all of the file names so that it can move them.
122
123 """
124 full_filenames = {self.find_file(filename) for filename in filenames
125 if self.find_file(filename)}
126 self.all_files = self.all_files.union(full_filenames)
127
128
129 # replace all other methods with a function that does nothing
130 def dummy_method(self, *args, **kwargs):
131 pass
132
133
134 methods = inspect.getmembers(engine, predicate=inspect.ismethod)
135 keep_methods = {'table_exists',
136 'get_connection',
137 'final_cleanup',
138 'auto_create_table',
139 'insert_data_from_url',
140 }
141 remove_methods = ['insert_data_from_file']
142 for name, method in methods:
143 if (name not in keep_methods and
144 'download' not in name and
145 'file' not in name and
146 'dir' not in name):
147 setattr(engine, name, dummy_method)
148 for name in remove_methods:
149 setattr(engine, name, dummy_method)
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/engines/download_only.py b/engines/download_only.py
--- a/engines/download_only.py
+++ b/engines/download_only.py
@@ -138,7 +138,7 @@
'auto_create_table',
'insert_data_from_url',
}
-remove_methods = ['insert_data_from_file']
+remove_methods = ['insert_data_from_file', 'create_db']
for name, method in methods:
if (name not in keep_methods and
'download' not in name and
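Adding `'create_db'` to `remove_methods` means that method is replaced with `dummy_method`, so the download-only engine skips database creation entirely instead of attempting it against its dummy connection and printing the "Couldn't create database" warning. The stubbing pattern in isolation (a simplified sketch, not the real `Engine` class):

```python
class Engine:
    def create_db(self):
        # In the real engine the database-creation path ends up printing the
        # warning, because the download-only engine's cursor is a no-op.
        print("Creating database...")

def dummy_method(self, *args, **kwargs):
    pass

# download_only.py does the equivalent via setattr() in its loop over methods.
setattr(Engine, "create_db", dummy_method)
Engine().create_db()  # now a no-op, so no warning is printed
```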
| {"golden_diff": "diff --git a/engines/download_only.py b/engines/download_only.py\n--- a/engines/download_only.py\n+++ b/engines/download_only.py\n@@ -138,7 +138,7 @@\n 'auto_create_table',\n 'insert_data_from_url',\n }\n-remove_methods = ['insert_data_from_file']\n+remove_methods = ['insert_data_from_file', 'create_db']\n for name, method in methods:\n if (name not in keep_methods and\n 'download' not in name and\n", "issue": "Remove unnecessary warning when using `download`\nWhen running `retriever download` an unnecessary warning displays\n\n```\nethan@gandalf:~$ retriever download Clark2006\nCreating database Clark2006...\nCouldn't create database ('NoneType' object has no attribute 'execute').\n...\n```\n\nThe command executes correctly, but this warning isn't really correct (there is no database to create since we're just downloading the raw data files) and could be distracting to users.\n\n", "before_files": [{"content": "from __future__ import print_function\nfrom builtins import object\nimport os\nimport platform\nimport shutil\nimport inspect\n\nfrom retriever.lib.engine import filename_from_url\nfrom retriever.lib.models import Engine, no_cleanup\nfrom retriever import DATA_DIR, HOME_DIR\n\n\nclass DummyConnection(object):\n\n def cursor(self):\n pass\n\n def commit(self):\n pass\n\n def rollback(self):\n pass\n\n def close(self):\n pass\n\n\nclass DummyCursor(DummyConnection):\n pass\n\n\nclass engine(Engine):\n \"\"\"Engine instance for writing data to a CSV file.\"\"\"\n name = \"Download Only\"\n abbreviation = \"download\"\n required_opts = [(\"path\",\n \"File path to copy data files\",\n \"./\"),\n (\"subdir\",\n \"Keep the subdirectories for archived files\",\n False)\n ]\n\n def table_exists(self, dbname, tablename):\n \"\"\"Checks if the file to be downloaded already exists\"\"\"\n try:\n tablename = self.table_name(name=tablename, dbname=dbname)\n return os.path.exists(tablename)\n except:\n return False\n\n def get_connection(self):\n \"\"\"Gets the db connection.\"\"\"\n self.get_input()\n return DummyConnection()\n\n def final_cleanup(self):\n \"\"\"Copies downloaded files to desired directory\n\n Copies the downloaded files into the chosen directory unless files with the same\n name already exist in the directory.\n\n \"\"\"\n if hasattr(self, \"all_files\"):\n for file_name in self.all_files:\n file_path, file_name_nopath = os.path.split(file_name)\n subdir = os.path.split(file_path)[1] if self.opts['subdir'] else ''\n dest_path = os.path.join(self.opts['path'], subdir)\n if os.path.isfile(os.path.join(dest_path, file_name_nopath)):\n print (\"File already exists at specified location\")\n elif os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):\n print (\"%s is already in the working directory\" %\n file_name_nopath)\n print(\"Keeping existing copy.\")\n else:\n print(\"Copying %s from %s\" % (file_name_nopath, file_path))\n if os.path.isdir(dest_path):\n try:\n shutil.copy(file_name, dest_path)\n except:\n print(\"Couldn't copy file to %s\" % dest_path)\n else:\n try:\n print(\"Creating directory %s\" % dest_path)\n os.makedirs(dest_path)\n shutil.copy(file_name, dest_path)\n except:\n print(\"Couldn't create directory %s\" % dest_path)\n self.all_files = set()\n\n def auto_create_table(self, table, url=None, filename=None, pk=None):\n \"\"\"Download the file if it doesn't exist\"\"\"\n if url and not filename:\n filename = filename_from_url(url)\n\n if url and not self.find_file(filename):\n # If the file doesn't exist, download 
it\n self.download_file(url, filename)\n\n def insert_data_from_url(self, url):\n \"\"\"Insert data from a web resource\"\"\"\n filename = filename_from_url(url)\n find = self.find_file(filename)\n if not find:\n self.create_raw_data_dir()\n self.download_file(url, filename)\n\n def find_file(self, filename):\n \"\"\"Checks for the given file and adds it to the list of all files\"\"\"\n result = Engine.find_file(self, filename)\n if not hasattr(self, \"all_files\"):\n self.all_files = set()\n if result:\n self.all_files.add(result)\n return result\n\n def register_files(self, filenames):\n \"\"\"Identify a list of files to be moved by the download\n\n When downloading archives with multiple files the engine needs to be\n informed of all of the file names so that it can move them.\n\n \"\"\"\n full_filenames = {self.find_file(filename) for filename in filenames\n if self.find_file(filename)}\n self.all_files = self.all_files.union(full_filenames)\n\n\n# replace all other methods with a function that does nothing\ndef dummy_method(self, *args, **kwargs):\n pass\n\n\nmethods = inspect.getmembers(engine, predicate=inspect.ismethod)\nkeep_methods = {'table_exists',\n 'get_connection',\n 'final_cleanup',\n 'auto_create_table',\n 'insert_data_from_url',\n }\nremove_methods = ['insert_data_from_file']\nfor name, method in methods:\n if (name not in keep_methods and\n 'download' not in name and\n 'file' not in name and\n 'dir' not in name):\n setattr(engine, name, dummy_method)\nfor name in remove_methods:\n setattr(engine, name, dummy_method)\n", "path": "engines/download_only.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom builtins import object\nimport os\nimport platform\nimport shutil\nimport inspect\n\nfrom retriever.lib.engine import filename_from_url\nfrom retriever.lib.models import Engine, no_cleanup\nfrom retriever import DATA_DIR, HOME_DIR\n\n\nclass DummyConnection(object):\n\n def cursor(self):\n pass\n\n def commit(self):\n pass\n\n def rollback(self):\n pass\n\n def close(self):\n pass\n\n\nclass DummyCursor(DummyConnection):\n pass\n\n\nclass engine(Engine):\n \"\"\"Engine instance for writing data to a CSV file.\"\"\"\n name = \"Download Only\"\n abbreviation = \"download\"\n required_opts = [(\"path\",\n \"File path to copy data files\",\n \"./\"),\n (\"subdir\",\n \"Keep the subdirectories for archived files\",\n False)\n ]\n\n def table_exists(self, dbname, tablename):\n \"\"\"Checks if the file to be downloaded already exists\"\"\"\n try:\n tablename = self.table_name(name=tablename, dbname=dbname)\n return os.path.exists(tablename)\n except:\n return False\n\n def get_connection(self):\n \"\"\"Gets the db connection.\"\"\"\n self.get_input()\n return DummyConnection()\n\n def final_cleanup(self):\n \"\"\"Copies downloaded files to desired directory\n\n Copies the downloaded files into the chosen directory unless files with the same\n name already exist in the directory.\n\n \"\"\"\n if hasattr(self, \"all_files\"):\n for file_name in self.all_files:\n file_path, file_name_nopath = os.path.split(file_name)\n subdir = os.path.split(file_path)[1] if self.opts['subdir'] else ''\n dest_path = os.path.join(self.opts['path'], subdir)\n if os.path.isfile(os.path.join(dest_path, file_name_nopath)):\n print (\"File already exists at specified location\")\n elif os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):\n print (\"%s is already in the working directory\" %\n file_name_nopath)\n print(\"Keeping existing copy.\")\n 
else:\n print(\"Copying %s from %s\" % (file_name_nopath, file_path))\n if os.path.isdir(dest_path):\n try:\n shutil.copy(file_name, dest_path)\n except:\n print(\"Couldn't copy file to %s\" % dest_path)\n else:\n try:\n print(\"Creating directory %s\" % dest_path)\n os.makedirs(dest_path)\n shutil.copy(file_name, dest_path)\n except:\n print(\"Couldn't create directory %s\" % dest_path)\n self.all_files = set()\n\n def auto_create_table(self, table, url=None, filename=None, pk=None):\n \"\"\"Download the file if it doesn't exist\"\"\"\n if url and not filename:\n filename = filename_from_url(url)\n\n if url and not self.find_file(filename):\n # If the file doesn't exist, download it\n self.download_file(url, filename)\n\n def insert_data_from_url(self, url):\n \"\"\"Insert data from a web resource\"\"\"\n filename = filename_from_url(url)\n find = self.find_file(filename)\n if not find:\n self.create_raw_data_dir()\n self.download_file(url, filename)\n\n def find_file(self, filename):\n \"\"\"Checks for the given file and adds it to the list of all files\"\"\"\n result = Engine.find_file(self, filename)\n if not hasattr(self, \"all_files\"):\n self.all_files = set()\n if result:\n self.all_files.add(result)\n return result\n\n def register_files(self, filenames):\n \"\"\"Identify a list of files to be moved by the download\n\n When downloading archives with multiple files the engine needs to be\n informed of all of the file names so that it can move them.\n\n \"\"\"\n full_filenames = {self.find_file(filename) for filename in filenames\n if self.find_file(filename)}\n self.all_files = self.all_files.union(full_filenames)\n\n\n# replace all other methods with a function that does nothing\ndef dummy_method(self, *args, **kwargs):\n pass\n\n\nmethods = inspect.getmembers(engine, predicate=inspect.ismethod)\nkeep_methods = {'table_exists',\n 'get_connection',\n 'final_cleanup',\n 'auto_create_table',\n 'insert_data_from_url',\n }\nremove_methods = ['insert_data_from_file', 'create_db']\nfor name, method in methods:\n if (name not in keep_methods and\n 'download' not in name and\n 'file' not in name and\n 'dir' not in name):\n setattr(engine, name, dummy_method)\nfor name in remove_methods:\n setattr(engine, name, dummy_method)\n", "path": "engines/download_only.py"}]} | 1,707 | 114 |
gh_patches_debug_938 | rasdani/github-patches | git_diff | scrapy__scrapy-1735 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError in robotstxt middleware
I'm getting these errors in robots.txt middleware:
```
2016-01-27 16:18:21 [scrapy.core.scraper] ERROR: Error downloading <GET http://yellowpages.co.th>
Traceback (most recent call last):
File "/Users/kmike/envs/scraping/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/Users/kmike/svn/scrapy/scrapy/downloadermiddlewares/robotstxt.py", line 65, in robot_parser
if isinstance(self._parsers[netloc], Deferred):
KeyError: 'yellowpages.co.th'
```
It looks like https://github.com/scrapy/scrapy/pull/1473 caused it (I can't reproduce this issue in Scrapy 1.0.4, but it is present in Scrapy master). It happens when a page fails to download and the HTTP cache is enabled. I haven't debugged it further.
--- END ISSUE ---
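To make the failure mode concrete, here is a minimal, self-contained reduction of the parser-cache logic (the names and the placeholder object are illustrative, not the middleware's real classes). Popping the cache entry in the error path is what makes the later lookup blow up when the errback fires immediately, e.g. when the response comes straight from the HTTP cache:

```python
# Hypothetical reduction of the robots.txt parser cache.
parsers = {}

def robots_error(netloc):
    parsers.pop(netloc)  # buggy path: the cache entry vanishes on download failure

def robot_parser(netloc):
    if netloc not in parsers:
        parsers[netloc] = object()   # stands in for the pending Deferred
        robots_error(netloc)         # with the HTTP cache, the errback can fire at once
    return parsers[netloc]           # ...so this lookup raises KeyError

try:
    robot_parser("yellowpages.co.th")
except KeyError as exc:
    print("KeyError:", exc)          # KeyError: 'yellowpages.co.th'
```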
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/downloadermiddlewares/robotstxt.py`
Content:
```
1 """
2 This is a middleware to respect robots.txt policies. To activate it you must
3 enable this middleware and enable the ROBOTSTXT_OBEY setting.
4
5 """
6
7 import logging
8
9 from six.moves.urllib import robotparser
10
11 from twisted.internet.defer import Deferred, maybeDeferred
12 from scrapy.exceptions import NotConfigured, IgnoreRequest
13 from scrapy.http import Request
14 from scrapy.utils.httpobj import urlparse_cached
15 from scrapy.utils.log import failure_to_exc_info
16
17 logger = logging.getLogger(__name__)
18
19
20 class RobotsTxtMiddleware(object):
21 DOWNLOAD_PRIORITY = 1000
22
23 def __init__(self, crawler):
24 if not crawler.settings.getbool('ROBOTSTXT_OBEY'):
25 raise NotConfigured
26
27 self.crawler = crawler
28 self._useragent = crawler.settings.get('USER_AGENT')
29 self._parsers = {}
30
31 @classmethod
32 def from_crawler(cls, crawler):
33 return cls(crawler)
34
35 def process_request(self, request, spider):
36 if request.meta.get('dont_obey_robotstxt'):
37 return
38 d = maybeDeferred(self.robot_parser, request, spider)
39 d.addCallback(self.process_request_2, request, spider)
40 return d
41
42 def process_request_2(self, rp, request, spider):
43 if rp is not None and not rp.can_fetch(self._useragent, request.url):
44 logger.debug("Forbidden by robots.txt: %(request)s",
45 {'request': request}, extra={'spider': spider})
46 raise IgnoreRequest()
47
48 def robot_parser(self, request, spider):
49 url = urlparse_cached(request)
50 netloc = url.netloc
51
52 if netloc not in self._parsers:
53 self._parsers[netloc] = Deferred()
54 robotsurl = "%s://%s/robots.txt" % (url.scheme, url.netloc)
55 robotsreq = Request(
56 robotsurl,
57 priority=self.DOWNLOAD_PRIORITY,
58 meta={'dont_obey_robotstxt': True}
59 )
60 dfd = self.crawler.engine.download(robotsreq, spider)
61 dfd.addCallback(self._parse_robots, netloc)
62 dfd.addErrback(self._logerror, robotsreq, spider)
63 dfd.addErrback(self._robots_error, netloc)
64
65 if isinstance(self._parsers[netloc], Deferred):
66 d = Deferred()
67 def cb(result):
68 d.callback(result)
69 return result
70 self._parsers[netloc].addCallback(cb)
71 return d
72 else:
73 return self._parsers[netloc]
74
75 def _logerror(self, failure, request, spider):
76 if failure.type is not IgnoreRequest:
77 logger.error("Error downloading %(request)s: %(f_exception)s",
78 {'request': request, 'f_exception': failure.value},
79 exc_info=failure_to_exc_info(failure),
80 extra={'spider': spider})
81 return failure
82
83 def _parse_robots(self, response, netloc):
84 rp = robotparser.RobotFileParser(response.url)
85 body = ''
86 if hasattr(response, 'text'):
87 body = response.text
88 else: # last effort try
89 try:
90 body = response.body.decode('utf-8')
91 except UnicodeDecodeError:
92 # If we found garbage, disregard it:,
93 # but keep the lookup cached (in self._parsers)
94 # Running rp.parse() will set rp state from
95 # 'disallow all' to 'allow any'.
96 pass
97 rp.parse(body.splitlines())
98
99 rp_dfd = self._parsers[netloc]
100 self._parsers[netloc] = rp
101 rp_dfd.callback(rp)
102
103 def _robots_error(self, failure, netloc):
104 self._parsers.pop(netloc).callback(None)
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/downloadermiddlewares/robotstxt.py b/scrapy/downloadermiddlewares/robotstxt.py
--- a/scrapy/downloadermiddlewares/robotstxt.py
+++ b/scrapy/downloadermiddlewares/robotstxt.py
@@ -101,4 +101,6 @@
rp_dfd.callback(rp)
def _robots_error(self, failure, netloc):
- self._parsers.pop(netloc).callback(None)
+ rp_dfd = self._parsers[netloc]
+ self._parsers[netloc] = None
+ rp_dfd.callback(None)
| {"golden_diff": "diff --git a/scrapy/downloadermiddlewares/robotstxt.py b/scrapy/downloadermiddlewares/robotstxt.py\n--- a/scrapy/downloadermiddlewares/robotstxt.py\n+++ b/scrapy/downloadermiddlewares/robotstxt.py\n@@ -101,4 +101,6 @@\n rp_dfd.callback(rp)\n \n def _robots_error(self, failure, netloc):\n- self._parsers.pop(netloc).callback(None)\n+ rp_dfd = self._parsers[netloc]\n+ self._parsers[netloc] = None\n+ rp_dfd.callback(None)\n", "issue": "KeyError in robotstxt middleware\nI'm getting these errors in robots.txt middleware:\n\n```\n2016-01-27 16:18:21 [scrapy.core.scraper] ERROR: Error downloading <GET http://yellowpages.co.th>\nTraceback (most recent call last):\n File \"/Users/kmike/envs/scraping/lib/python2.7/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\n result = f(*args, **kw)\n File \"/Users/kmike/svn/scrapy/scrapy/downloadermiddlewares/robotstxt.py\", line 65, in robot_parser\n if isinstance(self._parsers[netloc], Deferred):\nKeyError: 'yellowpages.co.th'\n```\n\nIt looks like https://github.com/scrapy/scrapy/pull/1473 caused it (I can't get this issue in Scrapy 1.0.4, but it present in Scrapy master). It happens when page failed to download and HTTP cache is enabled. I haven't debugged it further.\n\n", "before_files": [{"content": "\"\"\"\nThis is a middleware to respect robots.txt policies. To activate it you must\nenable this middleware and enable the ROBOTSTXT_OBEY setting.\n\n\"\"\"\n\nimport logging\n\nfrom six.moves.urllib import robotparser\n\nfrom twisted.internet.defer import Deferred, maybeDeferred\nfrom scrapy.exceptions import NotConfigured, IgnoreRequest\nfrom scrapy.http import Request\nfrom scrapy.utils.httpobj import urlparse_cached\nfrom scrapy.utils.log import failure_to_exc_info\n\nlogger = logging.getLogger(__name__)\n\n\nclass RobotsTxtMiddleware(object):\n DOWNLOAD_PRIORITY = 1000\n\n def __init__(self, crawler):\n if not crawler.settings.getbool('ROBOTSTXT_OBEY'):\n raise NotConfigured\n\n self.crawler = crawler\n self._useragent = crawler.settings.get('USER_AGENT')\n self._parsers = {}\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler)\n\n def process_request(self, request, spider):\n if request.meta.get('dont_obey_robotstxt'):\n return\n d = maybeDeferred(self.robot_parser, request, spider)\n d.addCallback(self.process_request_2, request, spider)\n return d\n\n def process_request_2(self, rp, request, spider):\n if rp is not None and not rp.can_fetch(self._useragent, request.url):\n logger.debug(\"Forbidden by robots.txt: %(request)s\",\n {'request': request}, extra={'spider': spider})\n raise IgnoreRequest()\n\n def robot_parser(self, request, spider):\n url = urlparse_cached(request)\n netloc = url.netloc\n\n if netloc not in self._parsers:\n self._parsers[netloc] = Deferred()\n robotsurl = \"%s://%s/robots.txt\" % (url.scheme, url.netloc)\n robotsreq = Request(\n robotsurl,\n priority=self.DOWNLOAD_PRIORITY,\n meta={'dont_obey_robotstxt': True}\n )\n dfd = self.crawler.engine.download(robotsreq, spider)\n dfd.addCallback(self._parse_robots, netloc)\n dfd.addErrback(self._logerror, robotsreq, spider)\n dfd.addErrback(self._robots_error, netloc)\n\n if isinstance(self._parsers[netloc], Deferred):\n d = Deferred()\n def cb(result):\n d.callback(result)\n return result\n self._parsers[netloc].addCallback(cb)\n return d\n else:\n return self._parsers[netloc]\n\n def _logerror(self, failure, request, spider):\n if failure.type is not IgnoreRequest:\n logger.error(\"Error downloading %(request)s: 
%(f_exception)s\",\n {'request': request, 'f_exception': failure.value},\n exc_info=failure_to_exc_info(failure),\n extra={'spider': spider})\n return failure\n\n def _parse_robots(self, response, netloc):\n rp = robotparser.RobotFileParser(response.url)\n body = ''\n if hasattr(response, 'text'):\n body = response.text\n else: # last effort try\n try:\n body = response.body.decode('utf-8')\n except UnicodeDecodeError:\n # If we found garbage, disregard it:,\n # but keep the lookup cached (in self._parsers)\n # Running rp.parse() will set rp state from\n # 'disallow all' to 'allow any'.\n pass\n rp.parse(body.splitlines())\n\n rp_dfd = self._parsers[netloc]\n self._parsers[netloc] = rp\n rp_dfd.callback(rp)\n\n def _robots_error(self, failure, netloc):\n self._parsers.pop(netloc).callback(None)\n", "path": "scrapy/downloadermiddlewares/robotstxt.py"}], "after_files": [{"content": "\"\"\"\nThis is a middleware to respect robots.txt policies. To activate it you must\nenable this middleware and enable the ROBOTSTXT_OBEY setting.\n\n\"\"\"\n\nimport logging\n\nfrom six.moves.urllib import robotparser\n\nfrom twisted.internet.defer import Deferred, maybeDeferred\nfrom scrapy.exceptions import NotConfigured, IgnoreRequest\nfrom scrapy.http import Request\nfrom scrapy.utils.httpobj import urlparse_cached\nfrom scrapy.utils.log import failure_to_exc_info\n\nlogger = logging.getLogger(__name__)\n\n\nclass RobotsTxtMiddleware(object):\n DOWNLOAD_PRIORITY = 1000\n\n def __init__(self, crawler):\n if not crawler.settings.getbool('ROBOTSTXT_OBEY'):\n raise NotConfigured\n\n self.crawler = crawler\n self._useragent = crawler.settings.get('USER_AGENT')\n self._parsers = {}\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler)\n\n def process_request(self, request, spider):\n if request.meta.get('dont_obey_robotstxt'):\n return\n d = maybeDeferred(self.robot_parser, request, spider)\n d.addCallback(self.process_request_2, request, spider)\n return d\n\n def process_request_2(self, rp, request, spider):\n if rp is not None and not rp.can_fetch(self._useragent, request.url):\n logger.debug(\"Forbidden by robots.txt: %(request)s\",\n {'request': request}, extra={'spider': spider})\n raise IgnoreRequest()\n\n def robot_parser(self, request, spider):\n url = urlparse_cached(request)\n netloc = url.netloc\n\n if netloc not in self._parsers:\n self._parsers[netloc] = Deferred()\n robotsurl = \"%s://%s/robots.txt\" % (url.scheme, url.netloc)\n robotsreq = Request(\n robotsurl,\n priority=self.DOWNLOAD_PRIORITY,\n meta={'dont_obey_robotstxt': True}\n )\n dfd = self.crawler.engine.download(robotsreq, spider)\n dfd.addCallback(self._parse_robots, netloc)\n dfd.addErrback(self._logerror, robotsreq, spider)\n dfd.addErrback(self._robots_error, netloc)\n\n if isinstance(self._parsers[netloc], Deferred):\n d = Deferred()\n def cb(result):\n d.callback(result)\n return result\n self._parsers[netloc].addCallback(cb)\n return d\n else:\n return self._parsers[netloc]\n\n def _logerror(self, failure, request, spider):\n if failure.type is not IgnoreRequest:\n logger.error(\"Error downloading %(request)s: %(f_exception)s\",\n {'request': request, 'f_exception': failure.value},\n exc_info=failure_to_exc_info(failure),\n extra={'spider': spider})\n return failure\n\n def _parse_robots(self, response, netloc):\n rp = robotparser.RobotFileParser(response.url)\n body = ''\n if hasattr(response, 'body_as_unicode'):\n body = response.body_as_unicode()\n else: # last effort try\n try:\n body = 
response.body.decode('utf-8')\n except UnicodeDecodeError:\n # If we found garbage, disregard it:,\n # but keep the lookup cached (in self._parsers)\n # Running rp.parse() will set rp state from\n # 'disallow all' to 'allow any'.\n pass\n rp.parse(body.splitlines())\n\n rp_dfd = self._parsers[netloc]\n self._parsers[netloc] = rp\n rp_dfd.callback(rp)\n\n def _robots_error(self, failure, netloc):\n rp_dfd = self._parsers[netloc]\n self._parsers[netloc] = None\n rp_dfd.callback(None)\n", "path": "scrapy/downloadermiddlewares/robotstxt.py"}]} | 1,507 | 131 |
gh_patches_debug_22224 | rasdani/github-patches | git_diff | quantopian__zipline-1723 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs build script requires `path.py` without specifying it as a dependency.
We should probably just remove it.
--- END ISSUE ---
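For reference, the only thing `path.py` provides here is the `with path(HERE):` change-directory block, so a standard-library replacement is enough. One option is sketched below (illustrative only; the fix could just as well inline `os.chdir` with `try`/`finally`):

```python
import os
from contextlib import contextmanager

@contextmanager
def chdir(directory):
    """Temporarily change the working directory, then change back."""
    old_dir = os.getcwd()
    os.chdir(directory)
    try:
        yield
    finally:
        os.chdir(old_dir)

# Usage inside main():
#   with chdir(HERE):
#       check_call(['make', 'html'])
```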
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/deploy.py`
Content:
```
1 #!/usr/bin/env python
2 from __future__ import print_function
3 from contextlib import contextmanager
4 from glob import glob
5 from path import path
6 import os
7 from os.path import abspath, basename, dirname, exists, isfile
8 from shutil import move, rmtree
9 from subprocess import check_call
10
11 HERE = dirname(abspath(__file__))
12 ZIPLINE_ROOT = dirname(HERE)
13 TEMP_LOCATION = '/tmp/zipline-doc'
14 TEMP_LOCATION_GLOB = TEMP_LOCATION + '/*'
15
16
17 @contextmanager
18 def removing(path):
19 try:
20 yield
21 finally:
22 rmtree(path)
23
24
25 def ensure_not_exists(path):
26 if not exists(path):
27 return
28 if isfile(path):
29 os.unlink(path)
30 else:
31 rmtree(path)
32
33
34 def main():
35 print("Moving to %s." % HERE)
36 with path(HERE):
37 print("Building docs with 'make html'")
38 check_call(['make', 'html'])
39
40 print("Clearing temp location '%s'" % TEMP_LOCATION)
41 rmtree(TEMP_LOCATION, ignore_errors=True)
42
43 with removing(TEMP_LOCATION):
44 print("Copying built files to temp location.")
45 move('build/html', TEMP_LOCATION)
46
47 print("Moving to '%s'" % ZIPLINE_ROOT)
48 os.chdir(ZIPLINE_ROOT)
49
50 print("Checking out gh-pages branch.")
51 check_call(
52 [
53 'git', 'branch', '-f',
54 '--track', 'gh-pages', 'origin/gh-pages'
55 ]
56 )
57 check_call(['git', 'checkout', 'gh-pages'])
58 check_call(['git', 'reset', '--hard', 'origin/gh-pages'])
59
60 print("Copying built files:")
61 for file_ in glob(TEMP_LOCATION_GLOB):
62 base = basename(file_)
63
64 print("%s -> %s" % (file_, base))
65 ensure_not_exists(base)
66 move(file_, '.')
67
68 print()
69 print("Updated documentation branch in directory %s" % ZIPLINE_ROOT)
70 print("If you are happy with these changes, commit and push to gh-pages.")
71
72 if __name__ == '__main__':
73 main()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/deploy.py b/docs/deploy.py
--- a/docs/deploy.py
+++ b/docs/deploy.py
@@ -2,7 +2,6 @@
from __future__ import print_function
from contextlib import contextmanager
from glob import glob
-from path import path
import os
from os.path import abspath, basename, dirname, exists, isfile
from shutil import move, rmtree
@@ -32,8 +31,11 @@
def main():
+ old_dir = os.getcwd()
print("Moving to %s." % HERE)
- with path(HERE):
+ os.chdir(HERE)
+
+ try:
print("Building docs with 'make html'")
check_call(['make', 'html'])
@@ -64,6 +66,8 @@
print("%s -> %s" % (file_, base))
ensure_not_exists(base)
move(file_, '.')
+ finally:
+ os.chdir(old_dir)
print()
print("Updated documentation branch in directory %s" % ZIPLINE_ROOT)
| {"golden_diff": "diff --git a/docs/deploy.py b/docs/deploy.py\n--- a/docs/deploy.py\n+++ b/docs/deploy.py\n@@ -2,7 +2,6 @@\n from __future__ import print_function\n from contextlib import contextmanager\n from glob import glob\n-from path import path\n import os\n from os.path import abspath, basename, dirname, exists, isfile\n from shutil import move, rmtree\n@@ -32,8 +31,11 @@\n \n \n def main():\n+ old_dir = os.getcwd()\n print(\"Moving to %s.\" % HERE)\n- with path(HERE):\n+ os.chdir(HERE)\n+\n+ try:\n print(\"Building docs with 'make html'\")\n check_call(['make', 'html'])\n \n@@ -64,6 +66,8 @@\n print(\"%s -> %s\" % (file_, base))\n ensure_not_exists(base)\n move(file_, '.')\n+ finally:\n+ os.chdir(old_dir)\n \n print()\n print(\"Updated documentation branch in directory %s\" % ZIPLINE_ROOT)\n", "issue": "Docs build script requires `path.py` without specifying it as a dependency.\nWe should probably just remove it.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\nfrom contextlib import contextmanager\nfrom glob import glob\nfrom path import path\nimport os\nfrom os.path import abspath, basename, dirname, exists, isfile\nfrom shutil import move, rmtree\nfrom subprocess import check_call\n\nHERE = dirname(abspath(__file__))\nZIPLINE_ROOT = dirname(HERE)\nTEMP_LOCATION = '/tmp/zipline-doc'\nTEMP_LOCATION_GLOB = TEMP_LOCATION + '/*'\n\n\n@contextmanager\ndef removing(path):\n try:\n yield\n finally:\n rmtree(path)\n\n\ndef ensure_not_exists(path):\n if not exists(path):\n return\n if isfile(path):\n os.unlink(path)\n else:\n rmtree(path)\n\n\ndef main():\n print(\"Moving to %s.\" % HERE)\n with path(HERE):\n print(\"Building docs with 'make html'\")\n check_call(['make', 'html'])\n\n print(\"Clearing temp location '%s'\" % TEMP_LOCATION)\n rmtree(TEMP_LOCATION, ignore_errors=True)\n\n with removing(TEMP_LOCATION):\n print(\"Copying built files to temp location.\")\n move('build/html', TEMP_LOCATION)\n\n print(\"Moving to '%s'\" % ZIPLINE_ROOT)\n os.chdir(ZIPLINE_ROOT)\n\n print(\"Checking out gh-pages branch.\")\n check_call(\n [\n 'git', 'branch', '-f',\n '--track', 'gh-pages', 'origin/gh-pages'\n ]\n )\n check_call(['git', 'checkout', 'gh-pages'])\n check_call(['git', 'reset', '--hard', 'origin/gh-pages'])\n\n print(\"Copying built files:\")\n for file_ in glob(TEMP_LOCATION_GLOB):\n base = basename(file_)\n\n print(\"%s -> %s\" % (file_, base))\n ensure_not_exists(base)\n move(file_, '.')\n\n print()\n print(\"Updated documentation branch in directory %s\" % ZIPLINE_ROOT)\n print(\"If you are happy with these changes, commit and push to gh-pages.\")\n\nif __name__ == '__main__':\n main()\n", "path": "docs/deploy.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\nfrom contextlib import contextmanager\nfrom glob import glob\nimport os\nfrom os.path import abspath, basename, dirname, exists, isfile\nfrom shutil import move, rmtree\nfrom subprocess import check_call\n\nHERE = dirname(abspath(__file__))\nZIPLINE_ROOT = dirname(HERE)\nTEMP_LOCATION = '/tmp/zipline-doc'\nTEMP_LOCATION_GLOB = TEMP_LOCATION + '/*'\n\n\n@contextmanager\ndef removing(path):\n try:\n yield\n finally:\n rmtree(path)\n\n\ndef ensure_not_exists(path):\n if not exists(path):\n return\n if isfile(path):\n os.unlink(path)\n else:\n rmtree(path)\n\n\ndef main():\n old_dir = os.getcwd()\n print(\"Moving to %s.\" % HERE)\n os.chdir(HERE)\n\n try:\n print(\"Building docs with 'make html'\")\n check_call(['make', 'html'])\n\n 
print(\"Clearing temp location '%s'\" % TEMP_LOCATION)\n rmtree(TEMP_LOCATION, ignore_errors=True)\n\n with removing(TEMP_LOCATION):\n print(\"Copying built files to temp location.\")\n move('build/html', TEMP_LOCATION)\n\n print(\"Moving to '%s'\" % ZIPLINE_ROOT)\n os.chdir(ZIPLINE_ROOT)\n\n print(\"Checking out gh-pages branch.\")\n check_call(\n [\n 'git', 'branch', '-f',\n '--track', 'gh-pages', 'origin/gh-pages'\n ]\n )\n check_call(['git', 'checkout', 'gh-pages'])\n check_call(['git', 'reset', '--hard', 'origin/gh-pages'])\n\n print(\"Copying built files:\")\n for file_ in glob(TEMP_LOCATION_GLOB):\n base = basename(file_)\n\n print(\"%s -> %s\" % (file_, base))\n ensure_not_exists(base)\n move(file_, '.')\n finally:\n os.chdir(old_dir)\n\n print()\n print(\"Updated documentation branch in directory %s\" % ZIPLINE_ROOT)\n print(\"If you are happy with these changes, commit and push to gh-pages.\")\n\nif __name__ == '__main__':\n main()\n", "path": "docs/deploy.py"}]} | 883 | 236 |
gh_patches_debug_12484 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1205 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] Slack adapter updates for dialog interactions
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/3744
Fixes #3733 #3726 #3725 #3724
* Adds missing values to SlackPayload model
* Expose SlackClientWrapper via public property
# Changed projects
* Adapters
--- END ISSUE ---
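As background, Slack interaction payloads (button clicks, dialog submissions) carry extra identifiers a bot needs in order to respond. A rough sketch of the attributes the port adds, assuming the same `kwargs.get` pattern the class already uses (the field list mirrors the linked .NET change, so treat it as indicative rather than exhaustive):

```python
class SlackPayloadExtras:
    """Illustrative subset of the additional interaction fields."""

    def __init__(self, **kwargs):
        self.trigger_id = kwargs.get("trigger_id")      # needed to open dialogs
        self.action_ts = kwargs.get("action_ts")
        self.submission = kwargs.get("submission")      # dialog form values
        self.callback_id = kwargs.get("callback_id")
        self.state = kwargs.get("state")
        self.response_url = kwargs.get("response_url")  # where delayed replies go
```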
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import Optional, List
5 from slack.web.classes.actions import Action
6 from botbuilder.adapters.slack.slack_message import SlackMessage
7
8
9 class SlackPayload:
10 def __init__(self, **kwargs):
11 self.type: [str] = kwargs.get("type")
12 self.token: str = kwargs.get("token")
13 self.channel: str = kwargs.get("channel")
14 self.thread_ts: str = kwargs.get("thread_ts")
15 self.team: str = kwargs.get("team")
16 self.user: str = kwargs.get("user")
17 self.actions: Optional[List[Action]] = None
18
19 if "message" in kwargs:
20 message = kwargs.get("message")
21 self.message = (
22 message
23 if isinstance(message) is SlackMessage
24 else SlackMessage(**message)
25 )
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py b/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py
--- a/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py
+++ b/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py
@@ -15,6 +15,12 @@
self.team: str = kwargs.get("team")
self.user: str = kwargs.get("user")
self.actions: Optional[List[Action]] = None
+ self.trigger_id: str = kwargs.get("trigger_id")
+ self.action_ts: str = kwargs.get("action_ts")
+ self.submission: str = kwargs.get("submission")
+ self.callback_id: str = kwargs.get("callback_id")
+ self.state: str = kwargs.get("state")
+ self.response_url: str = kwargs.get("response_url")
if "message" in kwargs:
message = kwargs.get("message")
| {"golden_diff": "diff --git a/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py b/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py\n--- a/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py\n+++ b/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py\n@@ -15,6 +15,12 @@\n self.team: str = kwargs.get(\"team\")\r\n self.user: str = kwargs.get(\"user\")\r\n self.actions: Optional[List[Action]] = None\r\n+ self.trigger_id: str = kwargs.get(\"trigger_id\")\r\n+ self.action_ts: str = kwargs.get(\"action_ts\")\r\n+ self.submission: str = kwargs.get(\"submission\")\r\n+ self.callback_id: str = kwargs.get(\"callback_id\")\r\n+ self.state: str = kwargs.get(\"state\")\r\n+ self.response_url: str = kwargs.get(\"response_url\")\r\n \r\n if \"message\" in kwargs:\r\n message = kwargs.get(\"message\")\n", "issue": "[PORT] Slack adapter updates for dialog interactions\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/3744\n\nFixes #3733 #3726 #3725 #3724 \r\n\r\n* Adds missing values to SlackPayload model \r\n* Expose SlackClientWrapper via public property\n\n\r\n# Changed projects\r\n* Adapters\r\n\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nfrom typing import Optional, List\r\nfrom slack.web.classes.actions import Action\r\nfrom botbuilder.adapters.slack.slack_message import SlackMessage\r\n\r\n\r\nclass SlackPayload:\r\n def __init__(self, **kwargs):\r\n self.type: [str] = kwargs.get(\"type\")\r\n self.token: str = kwargs.get(\"token\")\r\n self.channel: str = kwargs.get(\"channel\")\r\n self.thread_ts: str = kwargs.get(\"thread_ts\")\r\n self.team: str = kwargs.get(\"team\")\r\n self.user: str = kwargs.get(\"user\")\r\n self.actions: Optional[List[Action]] = None\r\n\r\n if \"message\" in kwargs:\r\n message = kwargs.get(\"message\")\r\n self.message = (\r\n message\r\n if isinstance(message) is SlackMessage\r\n else SlackMessage(**message)\r\n )\r\n", "path": "libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nfrom typing import Optional, List\r\nfrom slack.web.classes.actions import Action\r\nfrom botbuilder.adapters.slack.slack_message import SlackMessage\r\n\r\n\r\nclass SlackPayload:\r\n def __init__(self, **kwargs):\r\n self.type: [str] = kwargs.get(\"type\")\r\n self.token: str = kwargs.get(\"token\")\r\n self.channel: str = kwargs.get(\"channel\")\r\n self.thread_ts: str = kwargs.get(\"thread_ts\")\r\n self.team: str = kwargs.get(\"team\")\r\n self.user: str = kwargs.get(\"user\")\r\n self.actions: Optional[List[Action]] = None\r\n self.trigger_id: str = kwargs.get(\"trigger_id\")\r\n self.action_ts: str = kwargs.get(\"action_ts\")\r\n self.submission: str = kwargs.get(\"submission\")\r\n self.callback_id: str = kwargs.get(\"callback_id\")\r\n self.state: str = kwargs.get(\"state\")\r\n self.response_url: str = kwargs.get(\"response_url\")\r\n\r\n if \"message\" in kwargs:\r\n message = kwargs.get(\"message\")\r\n self.message = (\r\n message\r\n if isinstance(message) is SlackMessage\r\n else SlackMessage(**message)\r\n )\r\n", "path": "libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/slack_payload.py"}]} | 599 | 237 |
gh_patches_debug_23016 | rasdani/github-patches | git_diff | rotki__rotki-7996 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Killing tasks on Startup fails on Windows (PC)
## Problem Definition
After an update Rotki would not restart on its own.
Closing the interface and restarting got no response at all (twice)
Logs showed an attempt to kill the tasks but recorded the failure.
## Logs
> 2024-04-19T13:56:55.600Z: (info): Checking for update
> 2024-04-19T13:56:57.360Z: (info): Update for version 1.32.2 is not available (latest version: 1.32.2, downgrade is disallowed).
> 2024-04-19T13:57:03.347Z: Restarting rotki-core> >
> 2024-04-19T13:57:03.350Z: Starting windows process termination
> 2024-04-19T13:57:07.274Z: Currently running: 246 tasks
> 2024-04-19T13:57:07.280Z: Detected the following running rotki-core processes: 2880, 7092
> 2024-04-19T13:57:07.282Z: Preparing to call "taskill /f /t /PID 2880 /PID 7092" on the rotki-core processes
> 2024-04-19T13:57:07.752Z: Call to taskkill exited
> 2024-04-19T13:57:07.754Z: The 2 processes are still running. Waiting for 2 seconds
> 2024-04-19T15:18:54.047Z:
Investigation and Remediation
Look for any running instances
> C:\Windows\system32>tasklist | find "rotki"
> rotki.exe 16380 Console 1 17,608 K
> rotki.exe 13204 Console 1 10,436 K
> rotki.exe 14164 Console 1 8,596 K
Kill the running instances:
> C:\Windows\system32>taskkill /f /t /PID 16380 /PID 13204 /PID 14164
> SUCCESS: The process with PID 13204 (child process of PID 16380) has been terminated.
> SUCCESS: The process with PID 14164 (child process of PID 16380) has been terminated.
> SUCCESS: The process with PID 16380 (child process of PID 7124) has been terminated.
Could the reason it can't kill the running instances be a typo in the code?
As per the log;
2024-04-19T13:57:07.282Z: Preparing to call "taskill /f /t /PID 2880 /PID 7092" on the rotki-core processes
----------------------------------------------------^^^^------------
The command is taskkill (two k's) NOT taskill (one k).
-->
### System Description
Operating system:
`systeminfo | findstr /C:"OS"`
OS Name: Microsoft Windows 10 Home
OS Version: 10.0.19045 N/A Build 19045
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
BIOS Version: LENOVO DUCN33WW, 2021-08-10
Rotki version: 1.32.2
--- END ISSUE ---
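Beyond the one-k/two-k spelling in the frontend log message, the deeper problem is that a Windows console process does not receive POSIX-style signals, so a `taskkill`-driven shutdown needs a console control handler. A hedged sketch of that wiring (assumes the `pywin32` package is available; not the project's exact code):

```python
import os
import signal

def install_shutdown_handlers(shutdown):
    """Wire `shutdown()` to the platform's termination mechanism."""
    if os.name != 'nt':
        signal.signal(signal.SIGTERM, lambda *_: shutdown())
    else:
        # Windows console apps receive control events instead of SIGTERM,
        # so register a handler that taskkill / console close can reach.
        import win32api  # provided by pywin32
        win32api.SetConsoleCtrlHandler(lambda _event: shutdown(), True)
    signal.signal(signal.SIGINT, lambda *_: shutdown())
```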
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/server.py`
Content:
```
1 import logging
2 import os
3 import signal
4
5 import gevent
6
7 from rotkehlchen.api.server import APIServer, RestAPI
8 from rotkehlchen.args import app_args
9 from rotkehlchen.logging import TRACE, RotkehlchenLogsAdapter, add_logging_level, configure_logging
10 from rotkehlchen.rotkehlchen import Rotkehlchen
11
12 logger = logging.getLogger(__name__)
13 log = RotkehlchenLogsAdapter(logger)
14
15
16 class RotkehlchenServer:
17 def __init__(self) -> None:
18 """Initializes the backend server
19 May raise:
20 - SystemPermissionError due to the given args containing a datadir
21 that does not have the correct permissions
22 """
23 arg_parser = app_args(
24 prog='rotki',
25 description=(
26 'rotki, the portfolio tracker and accounting tool that respects your privacy'
27 ),
28 )
29 self.args = arg_parser.parse_args()
30 add_logging_level('TRACE', TRACE)
31 configure_logging(self.args)
32 self.rotkehlchen = Rotkehlchen(self.args)
33 self.stop_event = gevent.event.Event()
34 if ',' in self.args.api_cors:
35 domain_list = [str(domain) for domain in self.args.api_cors.split(',')]
36 else:
37 domain_list = [str(self.args.api_cors)]
38 self.api_server = APIServer(
39 rest_api=RestAPI(rotkehlchen=self.rotkehlchen),
40 ws_notifier=self.rotkehlchen.rotki_notifier,
41 cors_domain_list=domain_list,
42 )
43
44 def shutdown(self) -> None:
45 log.debug('Shutdown initiated')
46 self.api_server.stop()
47 self.stop_event.set()
48
49 def main(self) -> None:
50 # disable printing hub exceptions in stderr. With using the hub to do various
51 # tasks that should raise exceptions and have them handled outside the hub
52 # printing them in stdout is now too much spam (and would worry users too)
53 hub = gevent.hub.get_hub()
54 hub.exception_stream = None
55 # we don't use threadpool much so go to 2 instead of default 10
56 hub.threadpool_size = 2
57 hub.threadpool.maxsize = 2
58 if os.name != 'nt':
59 gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check
60 gevent.hub.signal(signal.SIGINT, self.shutdown)
61 gevent.hub.signal(signal.SIGTERM, self.shutdown)
62 # The api server's RestAPI starts rotki main loop
63 self.api_server.start(
64 host=self.args.api_host,
65 rest_port=self.args.rest_api_port,
66 )
67 self.stop_event.wait()
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rotkehlchen/server.py b/rotkehlchen/server.py
--- a/rotkehlchen/server.py
+++ b/rotkehlchen/server.py
@@ -56,9 +56,17 @@
hub.threadpool_size = 2
hub.threadpool.maxsize = 2
if os.name != 'nt':
- gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check
+ gevent.hub.signal(signal.SIGQUIT, self.shutdown)
+ gevent.hub.signal(signal.SIGTERM, self.shutdown)
+ else:
+ # Handle the window control signal as stated here https://pyinstaller.org/en/stable/feature-notes.html#signal-handling-in-console-windows-applications-and-onefile-application-cleanup # noqa: E501
+ # This logic handles the signal sent from the bootloader equivalent to sigterm in
+ # addition to the signals sent by windows's taskkill.
+ # Research documented in https://github.com/yabirgb/rotki-python-research
+ import win32api # pylint: disable=import-outside-toplevel # isort:skip
+ win32api.SetConsoleCtrlHandler(self.shutdown, True)
+
gevent.hub.signal(signal.SIGINT, self.shutdown)
- gevent.hub.signal(signal.SIGTERM, self.shutdown)
# The api server's RestAPI starts rotki main loop
self.api_server.start(
host=self.args.api_host,
| {"golden_diff": "diff --git a/rotkehlchen/server.py b/rotkehlchen/server.py\n--- a/rotkehlchen/server.py\n+++ b/rotkehlchen/server.py\n@@ -56,9 +56,17 @@\n hub.threadpool_size = 2\n hub.threadpool.maxsize = 2\n if os.name != 'nt':\n- gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check\n+ gevent.hub.signal(signal.SIGQUIT, self.shutdown)\n+ gevent.hub.signal(signal.SIGTERM, self.shutdown)\n+ else:\n+ # Handle the window control signal as stated here https://pyinstaller.org/en/stable/feature-notes.html#signal-handling-in-console-windows-applications-and-onefile-application-cleanup # noqa: E501\n+ # This logic handles the signal sent from the bootloader equivalent to sigterm in\n+ # addition to the signals sent by windows's taskkill.\n+ # Research documented in https://github.com/yabirgb/rotki-python-research\n+ import win32api # pylint: disable=import-outside-toplevel # isort:skip\n+ win32api.SetConsoleCtrlHandler(self.shutdown, True)\n+\n gevent.hub.signal(signal.SIGINT, self.shutdown)\n- gevent.hub.signal(signal.SIGTERM, self.shutdown)\n # The api server's RestAPI starts rotki main loop\n self.api_server.start(\n host=self.args.api_host,\n", "issue": "Killing tasks on Startup fails on Windows (PC)\n## Problem Definition\r\n\r\nAfter an update Rotki would not restart on its own.\r\n\r\nClosing the interface and restarting got no response at all (twice)\r\n\r\nLogs showed an attempt to kill the tasks but recorded the failure.\r\n\r\n## Logs\r\n\r\n> 2024-04-19T13:56:55.600Z: (info): Checking for update\r\n> 2024-04-19T13:56:57.360Z: (info): Update for version 1.32.2 is not available (latest version: 1.32.2, downgrade is disallowed).\r\n> 2024-04-19T13:57:03.347Z: Restarting rotki-core> > \r\n> 2024-04-19T13:57:03.350Z: Starting windows process termination\r\n> 2024-04-19T13:57:07.274Z: Currently running: 246 tasks\r\n> 2024-04-19T13:57:07.280Z: Detected the following running rotki-core processes: 2880, 7092\r\n> 2024-04-19T13:57:07.282Z: Preparing to call \"taskill /f /t /PID 2880 /PID 7092\" on the rotki-core processes\r\n> 2024-04-19T13:57:07.752Z: Call to taskkill exited\r\n> 2024-04-19T13:57:07.754Z: The 2 processes are still running. Waiting for 2 seconds\r\n> 2024-04-19T15:18:54.047Z: \r\n\r\nInvestigation and Remediation\r\n\r\nLook for any running instances \r\n\r\n> C:\\Windows\\system32>tasklist | find \"rotki\"\r\n> rotki.exe 16380 Console 1 17,608 K\r\n> rotki.exe 13204 Console 1 10,436 K\r\n> rotki.exe 14164 Console 1 8,596 K\r\n\r\nKill the running instances:\r\n\r\n> C:\\Windows\\system32>taskkill /f /t /PID 16380 /PID 13204 /PID 14164\r\n> SUCCESS: The process with PID 13204 (child process of PID 16380) has been terminated.\r\n> SUCCESS: The process with PID 14164 (child process of PID 16380) has been terminated.\r\n> SUCCESS: The process with PID 16380 (child process of PID 7124) has been terminated.\r\n\r\nCould it be the reason It can't kill the running instances is because there is a typo in the code?\r\n\r\nAs per the log;\r\n\r\n2024-04-19T13:57:07.282Z: Preparing to call \"taskill /f /t /PID 2880 /PID 7092\" on the rotki-core processes\r\n----------------------------------------------------^^^^------------\r\n\r\nThe command is taskkill (two k's) NOT taskill (one k). 
\r\n\r\n\r\n-->\r\n\r\n\r\n### System Description\r\n\r\n\r\nOperating system: \r\n\r\n`systeminfo | findstr /C:\"OS\"`\r\n\r\nOS Name: Microsoft Windows 10 Home\r\nOS Version: 10.0.19045 N/A Build 19045\r\nOS Manufacturer: Microsoft Corporation\r\nOS Configuration: Standalone Workstation\r\nOS Build Type: Multiprocessor Free\r\nBIOS Version: LENOVO DUCN33WW, 2021-08-10\r\n\r\n\r\nRotki version: 1.32.2\r\n\n", "before_files": [{"content": "import logging\nimport os\nimport signal\n\nimport gevent\n\nfrom rotkehlchen.api.server import APIServer, RestAPI\nfrom rotkehlchen.args import app_args\nfrom rotkehlchen.logging import TRACE, RotkehlchenLogsAdapter, add_logging_level, configure_logging\nfrom rotkehlchen.rotkehlchen import Rotkehlchen\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass RotkehlchenServer:\n def __init__(self) -> None:\n \"\"\"Initializes the backend server\n May raise:\n - SystemPermissionError due to the given args containing a datadir\n that does not have the correct permissions\n \"\"\"\n arg_parser = app_args(\n prog='rotki',\n description=(\n 'rotki, the portfolio tracker and accounting tool that respects your privacy'\n ),\n )\n self.args = arg_parser.parse_args()\n add_logging_level('TRACE', TRACE)\n configure_logging(self.args)\n self.rotkehlchen = Rotkehlchen(self.args)\n self.stop_event = gevent.event.Event()\n if ',' in self.args.api_cors:\n domain_list = [str(domain) for domain in self.args.api_cors.split(',')]\n else:\n domain_list = [str(self.args.api_cors)]\n self.api_server = APIServer(\n rest_api=RestAPI(rotkehlchen=self.rotkehlchen),\n ws_notifier=self.rotkehlchen.rotki_notifier,\n cors_domain_list=domain_list,\n )\n\n def shutdown(self) -> None:\n log.debug('Shutdown initiated')\n self.api_server.stop()\n self.stop_event.set()\n\n def main(self) -> None:\n # disable printing hub exceptions in stderr. 
With using the hub to do various\n # tasks that should raise exceptions and have them handled outside the hub\n # printing them in stdout is now too much spam (and would worry users too)\n hub = gevent.hub.get_hub()\n hub.exception_stream = None\n # we don't use threadpool much so go to 2 instead of default 10\n hub.threadpool_size = 2\n hub.threadpool.maxsize = 2\n if os.name != 'nt':\n gevent.hub.signal(signal.SIGQUIT, self.shutdown) # type: ignore[attr-defined,unused-ignore] # pylint: disable=no-member # linters don't understand the os.name check\n gevent.hub.signal(signal.SIGINT, self.shutdown)\n gevent.hub.signal(signal.SIGTERM, self.shutdown)\n # The api server's RestAPI starts rotki main loop\n self.api_server.start(\n host=self.args.api_host,\n rest_port=self.args.rest_api_port,\n )\n self.stop_event.wait()\n", "path": "rotkehlchen/server.py"}], "after_files": [{"content": "import logging\nimport os\nimport signal\n\nimport gevent\n\nfrom rotkehlchen.api.server import APIServer, RestAPI\nfrom rotkehlchen.args import app_args\nfrom rotkehlchen.logging import TRACE, RotkehlchenLogsAdapter, add_logging_level, configure_logging\nfrom rotkehlchen.rotkehlchen import Rotkehlchen\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass RotkehlchenServer:\n def __init__(self) -> None:\n \"\"\"Initializes the backend server\n May raise:\n - SystemPermissionError due to the given args containing a datadir\n that does not have the correct permissions\n \"\"\"\n arg_parser = app_args(\n prog='rotki',\n description=(\n 'rotki, the portfolio tracker and accounting tool that respects your privacy'\n ),\n )\n self.args = arg_parser.parse_args()\n add_logging_level('TRACE', TRACE)\n configure_logging(self.args)\n self.rotkehlchen = Rotkehlchen(self.args)\n self.stop_event = gevent.event.Event()\n if ',' in self.args.api_cors:\n domain_list = [str(domain) for domain in self.args.api_cors.split(',')]\n else:\n domain_list = [str(self.args.api_cors)]\n self.api_server = APIServer(\n rest_api=RestAPI(rotkehlchen=self.rotkehlchen),\n ws_notifier=self.rotkehlchen.rotki_notifier,\n cors_domain_list=domain_list,\n )\n\n def shutdown(self) -> None:\n log.debug('Shutdown initiated')\n self.api_server.stop()\n self.stop_event.set()\n\n def main(self) -> None:\n # disable printing hub exceptions in stderr. 
With using the hub to do various\n # tasks that should raise exceptions and have them handled outside the hub\n # printing them in stdout is now too much spam (and would worry users too)\n hub = gevent.hub.get_hub()\n hub.exception_stream = None\n # we don't use threadpool much so go to 2 instead of default 10\n hub.threadpool_size = 2\n hub.threadpool.maxsize = 2\n if os.name != 'nt':\n gevent.hub.signal(signal.SIGQUIT, self.shutdown)\n gevent.hub.signal(signal.SIGTERM, self.shutdown)\n else:\n # Handle the window control signal as stated here https://pyinstaller.org/en/stable/feature-notes.html#signal-handling-in-console-windows-applications-and-onefile-application-cleanup # noqa: E501\n # This logic handles the signal sent from the bootloader equivalent to sigterm in\n # addition to the signals sent by windows's taskkill.\n # Research documented in https://github.com/yabirgb/rotki-python-research\n import win32api # pylint: disable=import-outside-toplevel # isort:skip\n win32api.SetConsoleCtrlHandler(self.shutdown, True)\n\n gevent.hub.signal(signal.SIGINT, self.shutdown)\n # The api server's RestAPI starts rotki main loop\n self.api_server.start(\n host=self.args.api_host,\n rest_port=self.args.rest_api_port,\n )\n self.stop_event.wait()\n", "path": "rotkehlchen/server.py"}]} | 1,910 | 351 |
gh_patches_debug_25644 | rasdani/github-patches | git_diff | getsentry__sentry-3810 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tokens are able to enumerate project names regardless of organization membership.
We respond with 404s when the project name does not exist, but 403s when it does. We should probably just check for organization membership before even fetching the project.
# Original HackerOne Report
Title: Project Enumeration of Other Teams
Types: Information Disclosure
Link: https://hackerone.com/reports/154048
Date: 2016-07-26 05:25:05 -0700
By: vagg-a-bond
Details:
Hello,
It is possible for any valid user of @sentry to enumerate project names in other teams by comparing error responses.
For example, if a user accesses `GET /api/0/projects/sumne/project_sumne_001/`, which does not belong to them, the error will be `403 FORBIDDEN`, while for `GET /api/0/projects/sumne/project_sumne_001_not_there/` the application responds with `404 NOT FOUND`.
So an attacker can enumerate all the projects available in a team by automating the process to request `GET /api/0/projects/TARGET_TEAM_NAME/PROJECT_NAME/`.
**Vulnerable Endpoint:**
> GET /api/0/projects/sumne/project_sumne_001/*
**Recommended Fix:**
Respond with `404 NOT FOUND` for both.
--- END ISSUE ---
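The recommended fix amounts to a single rule: resolve the organization and the caller's membership first, and make every failure along the way produce the same `404`. A small framework-free sketch of that rule (all names are hypothetical stand-ins for the real endpoint plumbing):

```python
class NotFound(Exception):
    """Stand-in for the API's ResourceDoesNotExist (HTTP 404)."""

def resolve_project(find_org, is_member, find_project, org_slug, project_slug):
    org = find_org(org_slug)
    if org is None or not is_member(org):
        raise NotFound()  # never reveal whether the org or project exists
    project = find_project(org, project_slug)
    if project is None:
        raise NotFound()  # same response as the membership failure above
    return project
```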
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/api/bases/project.py`
Content:
```
1 from __future__ import absolute_import
2
3 from sentry.auth import access
4 from sentry.api.base import Endpoint
5 from sentry.api.exceptions import ResourceDoesNotExist
6 from sentry.api.permissions import ScopedPermission
7 from sentry.models import Project, ProjectStatus
8 from sentry.models.apikey import ROOT_KEY
9
10
11 class ProjectPermission(ScopedPermission):
12 scope_map = {
13 'GET': ['project:read', 'project:write', 'project:delete'],
14 'POST': ['project:write', 'project:delete'],
15 'PUT': ['project:write', 'project:delete'],
16 'DELETE': ['project:delete'],
17 }
18
19 def has_object_permission(self, request, view, project):
20 if request.user and request.user.is_authenticated() and request.auth:
21 request.access = access.from_request(
22 request, project.organization, scopes=request.auth.get_scopes(),
23 )
24
25 elif request.auth:
26 if request.auth is ROOT_KEY:
27 return True
28 return request.auth.organization_id == project.organization_id
29
30 else:
31 request.access = access.from_request(request, project.organization)
32
33 allowed_scopes = set(self.scope_map.get(request.method, []))
34 return any(
35 request.access.has_team_scope(project.team, s)
36 for s in allowed_scopes
37 )
38
39
40 class ProjectReleasePermission(ProjectPermission):
41 scope_map = {
42 'GET': ['project:read', 'project:write', 'project:delete', 'project:releases'],
43 'POST': ['project:write', 'project:delete', 'project:releases'],
44 'PUT': ['project:write', 'project:delete', 'project:releases'],
45 'DELETE': ['project:delete', 'project:releases'],
46 }
47
48
49 class ProjectEventPermission(ProjectPermission):
50 scope_map = {
51 'GET': ['event:read', 'event:write', 'event:delete'],
52 'POST': ['event:write', 'event:delete'],
53 'PUT': ['event:write', 'event:delete'],
54 'DELETE': ['event:delete'],
55 }
56
57
58 class ProjectEndpoint(Endpoint):
59 permission_classes = (ProjectPermission,)
60
61 def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):
62 try:
63 project = Project.objects.get_from_cache(
64 organization__slug=organization_slug,
65 slug=project_slug,
66 )
67 except Project.DoesNotExist:
68 raise ResourceDoesNotExist
69
70 if project.status != ProjectStatus.VISIBLE:
71 raise ResourceDoesNotExist
72
73 self.check_object_permissions(request, project)
74
75 kwargs['project'] = project
76 return (args, kwargs)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/api/bases/project.py b/src/sentry/api/bases/project.py
--- a/src/sentry/api/bases/project.py
+++ b/src/sentry/api/bases/project.py
@@ -4,7 +4,7 @@
from sentry.api.base import Endpoint
from sentry.api.exceptions import ResourceDoesNotExist
from sentry.api.permissions import ScopedPermission
-from sentry.models import Project, ProjectStatus
+from sentry.models import Organization, Project, ProjectStatus
from sentry.models.apikey import ROOT_KEY
@@ -59,9 +59,21 @@
permission_classes = (ProjectPermission,)
def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):
+ try:
+ org = Organization.objects.get_from_cache(slug=organization_slug)
+ if request.user:
+ can_access_org = any(access.from_request(request, org).memberships)
+ if request.auth:
+ can_access_org = request.auth.organization_id == org.id
+ if not can_access_org:
+ raise ResourceDoesNotExist
+
+ except Organization.DoesNotExist:
+ raise ResourceDoesNotExist
+
try:
project = Project.objects.get_from_cache(
- organization__slug=organization_slug,
+ organization=org,
slug=project_slug,
)
except Project.DoesNotExist:
| {"golden_diff": "diff --git a/src/sentry/api/bases/project.py b/src/sentry/api/bases/project.py\n--- a/src/sentry/api/bases/project.py\n+++ b/src/sentry/api/bases/project.py\n@@ -4,7 +4,7 @@\n from sentry.api.base import Endpoint\n from sentry.api.exceptions import ResourceDoesNotExist\n from sentry.api.permissions import ScopedPermission\n-from sentry.models import Project, ProjectStatus\n+from sentry.models import Organization, Project, ProjectStatus\n from sentry.models.apikey import ROOT_KEY\n \n \n@@ -59,9 +59,21 @@\n permission_classes = (ProjectPermission,)\n \n def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):\n+ try:\n+ org = Organization.objects.get_from_cache(slug=organization_slug)\n+ if request.user:\n+ can_access_org = any(access.from_request(request, org).memberships)\n+ if request.auth:\n+ can_access_org = request.auth.organization_id == org.id\n+ if not can_access_org:\n+ raise ResourceDoesNotExist\n+\n+ except Organization.DoesNotExist:\n+ raise ResourceDoesNotExist\n+\n try:\n project = Project.objects.get_from_cache(\n- organization__slug=organization_slug,\n+ organization=org,\n slug=project_slug,\n )\n except Project.DoesNotExist:\n", "issue": "Tokens are able to enumerate project names regardless of organization membership.\nWe respond with 404s when there are missing project names, but 403s when there is an existing project. Probably should just check for organization membership before even going into the project fetching.\n# Original HackerOne Report\n\nTitle: Project Enumeration of Other Teams\nTypes: Information Disclosure\nLink: https://hackerone.com/reports/154048\nDate: 2016-07-26 05:25:05 -0700\nBy: vagg-a-bond\n\nDetails:\nHello,\n\nIt is possible for any valid user of @sentry to enumerate project names in other team by comparing errors. \n\nFor example, if user user accesses `GET /api/0/projects/sumne/project_sumne_001/` which is not of his/her, then error will be `403 FORBIDDEN` and for `GET /api/0/projects/sumne/project_sumne_001_not_there/` application responds with `404 NOT FOUND`\n\nSo an attacker can enumerate all the projects available in a team by automating the process to request `GET /api/0/projects/TARGET_TEAM_NAME/PROJECT_NAME/`. \n\n**Vulnerable Endpoint:**\n\n> GET /api/0/projects/sumne/project_sumne_001/*\n\n**Recommended Fix:**\nRespond with `404 NOT FOUND` for both. 
\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom sentry.auth import access\nfrom sentry.api.base import Endpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.api.permissions import ScopedPermission\nfrom sentry.models import Project, ProjectStatus\nfrom sentry.models.apikey import ROOT_KEY\n\n\nclass ProjectPermission(ScopedPermission):\n scope_map = {\n 'GET': ['project:read', 'project:write', 'project:delete'],\n 'POST': ['project:write', 'project:delete'],\n 'PUT': ['project:write', 'project:delete'],\n 'DELETE': ['project:delete'],\n }\n\n def has_object_permission(self, request, view, project):\n if request.user and request.user.is_authenticated() and request.auth:\n request.access = access.from_request(\n request, project.organization, scopes=request.auth.get_scopes(),\n )\n\n elif request.auth:\n if request.auth is ROOT_KEY:\n return True\n return request.auth.organization_id == project.organization_id\n\n else:\n request.access = access.from_request(request, project.organization)\n\n allowed_scopes = set(self.scope_map.get(request.method, []))\n return any(\n request.access.has_team_scope(project.team, s)\n for s in allowed_scopes\n )\n\n\nclass ProjectReleasePermission(ProjectPermission):\n scope_map = {\n 'GET': ['project:read', 'project:write', 'project:delete', 'project:releases'],\n 'POST': ['project:write', 'project:delete', 'project:releases'],\n 'PUT': ['project:write', 'project:delete', 'project:releases'],\n 'DELETE': ['project:delete', 'project:releases'],\n }\n\n\nclass ProjectEventPermission(ProjectPermission):\n scope_map = {\n 'GET': ['event:read', 'event:write', 'event:delete'],\n 'POST': ['event:write', 'event:delete'],\n 'PUT': ['event:write', 'event:delete'],\n 'DELETE': ['event:delete'],\n }\n\n\nclass ProjectEndpoint(Endpoint):\n permission_classes = (ProjectPermission,)\n\n def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):\n try:\n project = Project.objects.get_from_cache(\n organization__slug=organization_slug,\n slug=project_slug,\n )\n except Project.DoesNotExist:\n raise ResourceDoesNotExist\n\n if project.status != ProjectStatus.VISIBLE:\n raise ResourceDoesNotExist\n\n self.check_object_permissions(request, project)\n\n kwargs['project'] = project\n return (args, kwargs)\n", "path": "src/sentry/api/bases/project.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom sentry.auth import access\nfrom sentry.api.base import Endpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.api.permissions import ScopedPermission\nfrom sentry.models import Organization, Project, ProjectStatus\nfrom sentry.models.apikey import ROOT_KEY\n\n\nclass ProjectPermission(ScopedPermission):\n scope_map = {\n 'GET': ['project:read', 'project:write', 'project:delete'],\n 'POST': ['project:write', 'project:delete'],\n 'PUT': ['project:write', 'project:delete'],\n 'DELETE': ['project:delete'],\n }\n\n def has_object_permission(self, request, view, project):\n if request.user and request.user.is_authenticated() and request.auth:\n request.access = access.from_request(\n request, project.organization, scopes=request.auth.get_scopes(),\n )\n\n elif request.auth:\n if request.auth is ROOT_KEY:\n return True\n return request.auth.organization_id == project.organization_id\n\n else:\n request.access = access.from_request(request, project.organization)\n\n allowed_scopes = set(self.scope_map.get(request.method, []))\n return any(\n 
request.access.has_team_scope(project.team, s)\n for s in allowed_scopes\n )\n\n\nclass ProjectReleasePermission(ProjectPermission):\n scope_map = {\n 'GET': ['project:read', 'project:write', 'project:delete', 'project:releases'],\n 'POST': ['project:write', 'project:delete', 'project:releases'],\n 'PUT': ['project:write', 'project:delete', 'project:releases'],\n 'DELETE': ['project:delete', 'project:releases'],\n }\n\n\nclass ProjectEventPermission(ProjectPermission):\n scope_map = {\n 'GET': ['event:read', 'event:write', 'event:delete'],\n 'POST': ['event:write', 'event:delete'],\n 'PUT': ['event:write', 'event:delete'],\n 'DELETE': ['event:delete'],\n }\n\n\nclass ProjectEndpoint(Endpoint):\n permission_classes = (ProjectPermission,)\n\n def convert_args(self, request, organization_slug, project_slug, *args, **kwargs):\n try:\n org = Organization.objects.get_from_cache(slug=organization_slug)\n if request.user:\n can_access_org = any(access.from_request(request, org).memberships)\n if request.auth:\n can_access_org = request.auth.organization_id == org.id\n if not can_access_org:\n raise ResourceDoesNotExist\n\n except Organization.DoesNotExist:\n raise ResourceDoesNotExist\n\n try:\n project = Project.objects.get_from_cache(\n organization=org,\n slug=project_slug,\n )\n except Project.DoesNotExist:\n raise ResourceDoesNotExist\n\n if project.status != ProjectStatus.VISIBLE:\n raise ResourceDoesNotExist\n\n self.check_object_permissions(request, project)\n\n kwargs['project'] = project\n return (args, kwargs)\n", "path": "src/sentry/api/bases/project.py"}]} | 1,283 | 288 |
gh_patches_debug_51900 | rasdani/github-patches | git_diff | geopandas__geopandas-663 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raise error when trying to write empty dataframes
Currently, the behavior when writing an empty dataframe is counter-intuitive (`ValueError: Geometry column cannot contain mutiple geometry types when writing to file.`). Would it be possible to add a check and more helpful exception?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/io/file.py`
Content:
```
1 import os
2
3 import fiona
4 import numpy as np
5 import six
6
7 from geopandas import GeoDataFrame
8
9 # Adapted from pandas.io.common
10 if six.PY3:
11 from urllib.request import urlopen as _urlopen
12 from urllib.parse import urlparse as parse_url
13 from urllib.parse import uses_relative, uses_netloc, uses_params
14 else:
15 from urllib2 import urlopen as _urlopen
16 from urlparse import urlparse as parse_url
17 from urlparse import uses_relative, uses_netloc, uses_params
18
19 _VALID_URLS = set(uses_relative + uses_netloc + uses_params)
20 _VALID_URLS.discard('')
21
22
23 def _is_url(url):
24 """Check to see if *url* has a valid protocol."""
25 try:
26 return parse_url(url).scheme in _VALID_URLS
27 except:
28 return False
29
30
31 def read_file(filename, **kwargs):
32 """
33 Returns a GeoDataFrame from a file or URL.
34
35 Parameters
36 ----------
37 filename: str
38 Either the absolute or relative path to the file or URL to
39 be opened.
40 **kwargs:
41 Keyword args to be passed to the `open` or `BytesCollection` method
42 in the fiona library when opening the file. For more information on
43 possible keywords, type:
44 ``import fiona; help(fiona.open)``
45
46 Examples
47 --------
48 >>> df = geopandas.read_file("nybb.shp")
49
50 Returns
51 -------
52 geodataframe : GeoDataFrame
53 """
54 bbox = kwargs.pop('bbox', None)
55 if _is_url(filename):
56 req = _urlopen(filename)
57 path_or_bytes = req.read()
58 reader = fiona.BytesCollection
59 else:
60 path_or_bytes = filename
61 reader = fiona.open
62
63 with reader(path_or_bytes, **kwargs) as features:
64 crs = features.crs
65 if bbox is not None:
66 assert len(bbox) == 4
67 f_filt = features.filter(bbox=bbox)
68 else:
69 f_filt = features
70
71 columns = list(features.meta["schema"]["properties"]) + ["geometry"]
72 gdf = GeoDataFrame.from_features(f_filt, crs=crs, columns=columns)
73
74 return gdf
75
76
77 def to_file(df, filename, driver="ESRI Shapefile", schema=None,
78 **kwargs):
79 """
80 Write this GeoDataFrame to an OGR data source
81
82 A dictionary of supported OGR providers is available via:
83 >>> import fiona
84 >>> fiona.supported_drivers
85
86 Parameters
87 ----------
88 df : GeoDataFrame to be written
89 filename : string
90 File path or file handle to write to.
91 driver : string, default 'ESRI Shapefile'
92 The OGR format driver used to write the vector file.
93 schema : dict, default None
94 If specified, the schema dictionary is passed to Fiona to
95 better control how the file is written. If None, GeoPandas
96 will determine the schema based on each column's dtype
97
98 The *kwargs* are passed to fiona.open and can be used to write
99 to multi-layer data, store data within archives (zip files), etc.
100 """
101 if schema is None:
102 schema = infer_schema(df)
103 filename = os.path.abspath(os.path.expanduser(filename))
104 with fiona.drivers():
105 with fiona.open(filename, 'w', driver=driver, crs=df.crs,
106 schema=schema, **kwargs) as colxn:
107 colxn.writerecords(df.iterfeatures())
108
109
110 def infer_schema(df):
111 try:
112 from collections import OrderedDict
113 except ImportError:
114 from ordereddict import OrderedDict
115
116 def convert_type(column, in_type):
117 if in_type == object:
118 return 'str'
119 out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
120 if out_type == 'long':
121 out_type = 'int'
122 if out_type == 'bool':
123 raise ValueError('column "{}" is boolean type, '.format(column) +
124 'which is unsupported in file writing. '
125 'Consider casting the column to int type.')
126 return out_type
127
128 properties = OrderedDict([
129 (col, convert_type(col, _type)) for col, _type in
130 zip(df.columns, df.dtypes) if col != df._geometry_column_name
131 ])
132
133 geom_type = _common_geom_type(df)
134 if not geom_type:
135 raise ValueError("Geometry column cannot contain mutiple "
136 "geometry types when writing to file.")
137
138 schema = {'geometry': geom_type, 'properties': properties}
139
140 return schema
141
142
143 def _common_geom_type(df):
144 # Need to check geom_types before we write to file...
145 # Some (most?) providers expect a single geometry type:
146 # Point, LineString, or Polygon
147 geom_types = df.geometry.geom_type.unique()
148
149 from os.path import commonprefix
150 # use reversed geom types and commonprefix to find the common suffix,
151 # then reverse the result to get back to a geom type
152 geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1]
153 if not geom_type:
154 return None
155
156 if df.geometry.has_z.any():
157 geom_type = "3D " + geom_type
158
159 return geom_type
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geopandas/io/file.py b/geopandas/io/file.py
--- a/geopandas/io/file.py
+++ b/geopandas/io/file.py
@@ -130,7 +130,11 @@
zip(df.columns, df.dtypes) if col != df._geometry_column_name
])
+ if df.empty:
+ raise ValueError("Cannot write empty DataFrame to file.")
+
geom_type = _common_geom_type(df)
+
if not geom_type:
raise ValueError("Geometry column cannot contain mutiple "
"geometry types when writing to file.")
| {"golden_diff": "diff --git a/geopandas/io/file.py b/geopandas/io/file.py\n--- a/geopandas/io/file.py\n+++ b/geopandas/io/file.py\n@@ -130,7 +130,11 @@\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n \n+ if df.empty:\n+ raise ValueError(\"Cannot write empty DataFrame to file.\")\n+\n geom_type = _common_geom_type(df)\n+ \n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n", "issue": "Raise error when trying to write empty dataframes\nCurrently, the behavior when writing an empty dataframe is counter-intuitive (`ValueError: Geometry column cannot contain mutiple geometry types when writing to file.`). Would it be possible to add a check and more helpful exception? \n", "before_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nimport six\n\nfrom geopandas import GeoDataFrame\n\n# Adapted from pandas.io.common\nif six.PY3:\n from urllib.request import urlopen as _urlopen\n from urllib.parse import urlparse as parse_url\n from urllib.parse import uses_relative, uses_netloc, uses_params\nelse:\n from urllib2 import urlopen as _urlopen\n from urlparse import urlparse as parse_url\n from urlparse import uses_relative, uses_netloc, uses_params\n\n_VALID_URLS = set(uses_relative + uses_netloc + uses_params)\n_VALID_URLS.discard('')\n\n\ndef _is_url(url):\n \"\"\"Check to see if *url* has a valid protocol.\"\"\"\n try:\n return parse_url(url).scheme in _VALID_URLS\n except:\n return False\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file or URL.\n\n Parameters\n ----------\n filename: str\n Either the absolute or relative path to the file or URL to\n be opened.\n **kwargs:\n Keyword args to be passed to the `open` or `BytesCollection` method\n in the fiona library when opening the file. For more information on\n possible keywords, type:\n ``import fiona; help(fiona.open)``\n\n Examples\n --------\n >>> df = geopandas.read_file(\"nybb.shp\")\n\n Returns\n -------\n geodataframe : GeoDataFrame\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n if _is_url(filename):\n req = _urlopen(filename)\n path_or_bytes = req.read()\n reader = fiona.BytesCollection\n else:\n path_or_bytes = filename\n reader = fiona.open\n\n with reader(path_or_bytes, **kwargs) as features:\n crs = features.crs\n if bbox is not None:\n assert len(bbox) == 4\n f_filt = features.filter(bbox=bbox)\n else:\n f_filt = features\n\n columns = list(features.meta[\"schema\"][\"properties\"]) + [\"geometry\"]\n gdf = GeoDataFrame.from_features(f_filt, crs=crs, columns=columns)\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. 
If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.drivers():\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as colxn:\n colxn.writerecords(df.iterfeatures())\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(column, in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n if out_type == 'bool':\n raise ValueError('column \"{}\" is boolean type, '.format(column) +\n 'which is unsupported in file writing. '\n 'Consider casting the column to int type.')\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(col, _type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix\n # use reversed geom types and commonprefix to find the common suffix,\n # then reverse the result to get back to a geom type\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1]\n if not geom_type:\n return None\n\n if df.geometry.has_z.any():\n geom_type = \"3D \" + geom_type\n\n return geom_type\n", "path": "geopandas/io/file.py"}], "after_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nimport six\n\nfrom geopandas import GeoDataFrame\n\n# Adapted from pandas.io.common\nif six.PY3:\n from urllib.request import urlopen as _urlopen\n from urllib.parse import urlparse as parse_url\n from urllib.parse import uses_relative, uses_netloc, uses_params\nelse:\n from urllib2 import urlopen as _urlopen\n from urlparse import urlparse as parse_url\n from urlparse import uses_relative, uses_netloc, uses_params\n\n_VALID_URLS = set(uses_relative + uses_netloc + uses_params)\n_VALID_URLS.discard('')\n\n\ndef _is_url(url):\n \"\"\"Check to see if *url* has a valid protocol.\"\"\"\n try:\n return parse_url(url).scheme in _VALID_URLS\n except:\n return False\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file or URL.\n\n Parameters\n ----------\n filename: str\n Either the absolute or relative path to the file or URL to\n be opened.\n **kwargs:\n Keyword args to be passed to the `open` or `BytesCollection` method\n in the fiona library when opening the file. 
For more information on\n possible keywords, type:\n ``import fiona; help(fiona.open)``\n\n Examples\n --------\n >>> df = geopandas.read_file(\"nybb.shp\")\n\n Returns\n -------\n geodataframe : GeoDataFrame\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n if _is_url(filename):\n req = _urlopen(filename)\n path_or_bytes = req.read()\n reader = fiona.BytesCollection\n else:\n path_or_bytes = filename\n reader = fiona.open\n\n with reader(path_or_bytes, **kwargs) as features:\n crs = features.crs\n if bbox is not None:\n assert len(bbox) == 4\n f_filt = features.filter(bbox=bbox)\n else:\n f_filt = features\n\n columns = list(features.meta[\"schema\"][\"properties\"]) + [\"geometry\"]\n gdf = GeoDataFrame.from_features(f_filt, crs=crs, columns=columns)\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.drivers():\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as colxn:\n colxn.writerecords(df.iterfeatures())\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(column, in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n if out_type == 'bool':\n raise ValueError('column \"{}\" is boolean type, '.format(column) +\n 'which is unsupported in file writing. '\n 'Consider casting the column to int type.')\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(col, _type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n if df.empty:\n raise ValueError(\"Cannot write empty DataFrame to file.\")\n\n geom_type = _common_geom_type(df)\n \n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix\n # use reversed geom types and commonprefix to find the common suffix,\n # then reverse the result to get back to a geom type\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1]\n if not geom_type:\n return None\n\n if df.geometry.has_z.any():\n geom_type = \"3D \" + geom_type\n\n return geom_type\n", "path": "geopandas/io/file.py"}]} | 1,842 | 129 |
gh_patches_debug_26229 | rasdani/github-patches | git_diff | searx__searx-2799 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Updating languages of qwant is broken
The URL used for retrieving available languages has gone:
```
engines_languages = fetch_supported_languages()
File "/home/runner/work/searx/searx/./searx_extra/update/update_languages.py", line 30, in fetch_supported_languages
engines_languages[engine_name] = engines[engine_name].fetch_supported_languages()
File "/home/runner/work/searx/searx/searx/engines/__init__.py", line 141, in <lambda>
lambda: engine._fetch_supported_languages(get(engine.supported_languages_url, headers=headers)))
File "/home/runner/work/searx/searx/searx/poolrequests.py", line 209, in get
return request('get', url, **kwargs)
File "/home/runner/work/searx/searx/searx/poolrequests.py", line 202, in request
raise_for_httperror(response)
File "/home/runner/work/searx/searx/searx/raise_for_httperror.py", line 66, in raise_for_httperror
resp.raise_for_status()
File "/home/runner/work/searx/searx/local/py3/lib/python3.9/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://www.qwant.com/region
```
Full run: https://github.com/searx/searx/runs/2487497774?check_suite_focus=true
@MarcAbonce If you have time, could you please look into it? Or at least give some pointers on how it can be fixed?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/qwant.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 Qwant (Web, Images, News, Social)
4 """
5
6 from datetime import datetime
7 from json import loads
8 from urllib.parse import urlencode
9 from searx.utils import html_to_text, match_language
10 from searx.exceptions import SearxEngineAPIException, SearxEngineCaptchaException
11 from searx.raise_for_httperror import raise_for_httperror
12
13 # about
14 about = {
15 "website": 'https://www.qwant.com/',
16 "wikidata_id": 'Q14657870',
17 "official_api_documentation": None,
18 "use_official_api": True,
19 "require_api_key": False,
20 "results": 'JSON',
21 }
22
23 # engine dependent config
24 categories = []
25 paging = True
26 supported_languages_url = 'https://qwant.com/region'
27
28 category_to_keyword = {'general': 'web',
29 'images': 'images',
30 'news': 'news'}
31
32 # search-url
33 url = 'https://api.qwant.com/api/search/{keyword}?count=10&offset={offset}&f=&{query}&t={keyword}&uiv=4'
34
35
36 # do search-request
37 def request(query, params):
38 offset = (params['pageno'] - 1) * 10
39
40 if categories[0] and categories[0] in category_to_keyword:
41
42 params['url'] = url.format(keyword=category_to_keyword[categories[0]],
43 query=urlencode({'q': query}),
44 offset=offset)
45 else:
46 params['url'] = url.format(keyword='web',
47 query=urlencode({'q': query}),
48 offset=offset)
49
50 # add language tag
51 if params['language'] != 'all':
52 language = match_language(params['language'], supported_languages, language_aliases)
53 params['url'] += '&locale=' + language.replace('-', '_').lower()
54
55 params['headers']['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64; rv:69.0) Gecko/20100101 Firefox/69.0'
56 params['raise_for_httperror'] = False
57 return params
58
59
60 # get response from search-request
61 def response(resp):
62 results = []
63
64 # According to https://www.qwant.com/js/app.js
65 if resp.status_code == 429:
66 raise SearxEngineCaptchaException()
67
68 # raise for other errors
69 raise_for_httperror(resp)
70
71 # load JSON result
72 search_results = loads(resp.text)
73
74 # check for an API error
75 if search_results.get('status') != 'success':
76 raise SearxEngineAPIException('API error ' + str(search_results.get('error', '')))
77
78 # return empty array if there are no results
79 if 'data' not in search_results:
80 return []
81
82 data = search_results.get('data', {})
83
84 res = data.get('result', {})
85
86 # parse results
87 for result in res.get('items', {}):
88
89 title = html_to_text(result['title'])
90 res_url = result['url']
91 content = html_to_text(result['desc'])
92
93 if category_to_keyword.get(categories[0], '') == 'web':
94 results.append({'title': title,
95 'content': content,
96 'url': res_url})
97
98 elif category_to_keyword.get(categories[0], '') == 'images':
99 thumbnail_src = result['thumbnail']
100 img_src = result['media']
101 results.append({'template': 'images.html',
102 'url': res_url,
103 'title': title,
104 'content': '',
105 'thumbnail_src': thumbnail_src,
106 'img_src': img_src})
107
108 elif category_to_keyword.get(categories[0], '') == 'news':
109 published_date = datetime.fromtimestamp(result['date'], None)
110 media = result.get('media', [])
111 if len(media) > 0:
112 img_src = media[0].get('pict', {}).get('url', None)
113 else:
114 img_src = None
115 results.append({'url': res_url,
116 'title': title,
117 'publishedDate': published_date,
118 'content': content,
119 'img_src': img_src})
120
121 return results
122
123
124 # get supported languages from their site
125 def _fetch_supported_languages(resp):
126 # list of regions is embedded in page as a js object
127 response_text = resp.text
128 response_text = response_text[response_text.find('regionalisation'):]
129 response_text = response_text[response_text.find('{'):response_text.find(');')]
130
131 regions_json = loads(response_text)
132
133 supported_languages = {}
134 for lang in regions_json['languages'].values():
135 for country in lang['countries']:
136 lang_code = "{lang}-{country}".format(lang=lang['code'], country=country)
137 supported_languages[lang_code] = {'name': lang['name']}
138
139 return supported_languages
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/qwant.py b/searx/engines/qwant.py
--- a/searx/engines/qwant.py
+++ b/searx/engines/qwant.py
@@ -23,7 +23,7 @@
# engine dependent config
categories = []
paging = True
-supported_languages_url = 'https://qwant.com/region'
+supported_languages_url = about['website']
category_to_keyword = {'general': 'web',
'images': 'images',
@@ -125,15 +125,15 @@
def _fetch_supported_languages(resp):
# list of regions is embedded in page as a js object
response_text = resp.text
- response_text = response_text[response_text.find('regionalisation'):]
- response_text = response_text[response_text.find('{'):response_text.find(');')]
+ response_text = response_text[response_text.find('INITIAL_PROPS'):]
+ response_text = response_text[response_text.find('{'):response_text.find('</script>')]
regions_json = loads(response_text)
- supported_languages = {}
- for lang in regions_json['languages'].values():
- for country in lang['countries']:
- lang_code = "{lang}-{country}".format(lang=lang['code'], country=country)
- supported_languages[lang_code] = {'name': lang['name']}
+ supported_languages = []
+ for country, langs in regions_json['locales'].items():
+ for lang in langs['langs']:
+ lang_code = "{lang}-{country}".format(lang=lang, country=country)
+ supported_languages.append(lang_code)
return supported_languages
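As a standalone illustration of the new parsing strategy, here is the same slicing and iteration applied to a made-up miniature of the `INITIAL_PROPS` object (the real page embeds a much larger structure, and the country/language values below are invented):

```python
from json import loads

response_text = (
    '<script>INITIAL_PROPS = {"locales": {'
    '"CA": {"langs": ["en", "fr"]}, '
    '"DE": {"langs": ["de"]}}}</script>'
)

# Same two-step slice as the patched _fetch_supported_languages()
response_text = response_text[response_text.find('INITIAL_PROPS'):]
response_text = response_text[response_text.find('{'):response_text.find('</script>')]

regions_json = loads(response_text)

supported_languages = []
for country, langs in regions_json['locales'].items():
    for lang in langs['langs']:
        supported_languages.append("{lang}-{country}".format(lang=lang, country=country))

print(supported_languages)  # ['en-CA', 'fr-CA', 'de-DE']
```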
| {"golden_diff": "diff --git a/searx/engines/qwant.py b/searx/engines/qwant.py\n--- a/searx/engines/qwant.py\n+++ b/searx/engines/qwant.py\n@@ -23,7 +23,7 @@\n # engine dependent config\n categories = []\n paging = True\n-supported_languages_url = 'https://qwant.com/region'\n+supported_languages_url = about['website']\n \n category_to_keyword = {'general': 'web',\n 'images': 'images',\n@@ -125,15 +125,15 @@\n def _fetch_supported_languages(resp):\n # list of regions is embedded in page as a js object\n response_text = resp.text\n- response_text = response_text[response_text.find('regionalisation'):]\n- response_text = response_text[response_text.find('{'):response_text.find(');')]\n+ response_text = response_text[response_text.find('INITIAL_PROPS'):]\n+ response_text = response_text[response_text.find('{'):response_text.find('</script>')]\n \n regions_json = loads(response_text)\n \n- supported_languages = {}\n- for lang in regions_json['languages'].values():\n- for country in lang['countries']:\n- lang_code = \"{lang}-{country}\".format(lang=lang['code'], country=country)\n- supported_languages[lang_code] = {'name': lang['name']}\n+ supported_languages = []\n+ for country, langs in regions_json['locales'].items():\n+ for lang in langs['langs']:\n+ lang_code = \"{lang}-{country}\".format(lang=lang, country=country)\n+ supported_languages.append(lang_code)\n \n return supported_languages\n", "issue": "Updating languages of qwant is broken\nThe URL used for retrieving available languages has gone:\r\n\r\n```\r\n engines_languages = fetch_supported_languages()\r\n File \"/home/runner/work/searx/searx/./searx_extra/update/update_languages.py\", line 30, in fetch_supported_languages\r\n engines_languages[engine_name] = engines[engine_name].fetch_supported_languages()\r\n File \"/home/runner/work/searx/searx/searx/engines/__init__.py\", line 141, in <lambda>\r\n lambda: engine._fetch_supported_languages(get(engine.supported_languages_url, headers=headers)))\r\n File \"/home/runner/work/searx/searx/searx/poolrequests.py\", line 209, in get\r\n return request('get', url, **kwargs)\r\n File \"/home/runner/work/searx/searx/searx/poolrequests.py\", line 202, in request\r\n raise_for_httperror(response)\r\n File \"/home/runner/work/searx/searx/searx/raise_for_httperror.py\", line 66, in raise_for_httperror\r\n resp.raise_for_status()\r\n File \"/home/runner/work/searx/searx/local/py3/lib/python3.9/site-packages/requests/models.py\", line 943, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://www.qwant.com/region\r\n```\r\n\r\nFull run: https://github.com/searx/searx/runs/2487497774?check_suite_focus=true\r\n\r\n@MarcAbonce If you have time could you please look into it? 
Or at least give some pointers how it can be fixed?\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Qwant (Web, Images, News, Social)\n\"\"\"\n\nfrom datetime import datetime\nfrom json import loads\nfrom urllib.parse import urlencode\nfrom searx.utils import html_to_text, match_language\nfrom searx.exceptions import SearxEngineAPIException, SearxEngineCaptchaException\nfrom searx.raise_for_httperror import raise_for_httperror\n\n# about\nabout = {\n \"website\": 'https://www.qwant.com/',\n \"wikidata_id\": 'Q14657870',\n \"official_api_documentation\": None,\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\n# engine dependent config\ncategories = []\npaging = True\nsupported_languages_url = 'https://qwant.com/region'\n\ncategory_to_keyword = {'general': 'web',\n 'images': 'images',\n 'news': 'news'}\n\n# search-url\nurl = 'https://api.qwant.com/api/search/{keyword}?count=10&offset={offset}&f=&{query}&t={keyword}&uiv=4'\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * 10\n\n if categories[0] and categories[0] in category_to_keyword:\n\n params['url'] = url.format(keyword=category_to_keyword[categories[0]],\n query=urlencode({'q': query}),\n offset=offset)\n else:\n params['url'] = url.format(keyword='web',\n query=urlencode({'q': query}),\n offset=offset)\n\n # add language tag\n if params['language'] != 'all':\n language = match_language(params['language'], supported_languages, language_aliases)\n params['url'] += '&locale=' + language.replace('-', '_').lower()\n\n params['headers']['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64; rv:69.0) Gecko/20100101 Firefox/69.0'\n params['raise_for_httperror'] = False\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n # According to https://www.qwant.com/js/app.js\n if resp.status_code == 429:\n raise SearxEngineCaptchaException()\n\n # raise for other errors\n raise_for_httperror(resp)\n\n # load JSON result\n search_results = loads(resp.text)\n\n # check for an API error\n if search_results.get('status') != 'success':\n raise SearxEngineAPIException('API error ' + str(search_results.get('error', '')))\n\n # return empty array if there are no results\n if 'data' not in search_results:\n return []\n\n data = search_results.get('data', {})\n\n res = data.get('result', {})\n\n # parse results\n for result in res.get('items', {}):\n\n title = html_to_text(result['title'])\n res_url = result['url']\n content = html_to_text(result['desc'])\n\n if category_to_keyword.get(categories[0], '') == 'web':\n results.append({'title': title,\n 'content': content,\n 'url': res_url})\n\n elif category_to_keyword.get(categories[0], '') == 'images':\n thumbnail_src = result['thumbnail']\n img_src = result['media']\n results.append({'template': 'images.html',\n 'url': res_url,\n 'title': title,\n 'content': '',\n 'thumbnail_src': thumbnail_src,\n 'img_src': img_src})\n\n elif category_to_keyword.get(categories[0], '') == 'news':\n published_date = datetime.fromtimestamp(result['date'], None)\n media = result.get('media', [])\n if len(media) > 0:\n img_src = media[0].get('pict', {}).get('url', None)\n else:\n img_src = None\n results.append({'url': res_url,\n 'title': title,\n 'publishedDate': published_date,\n 'content': content,\n 'img_src': img_src})\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n # list of regions is embedded in page as a js 
object\n response_text = resp.text\n response_text = response_text[response_text.find('regionalisation'):]\n response_text = response_text[response_text.find('{'):response_text.find(');')]\n\n regions_json = loads(response_text)\n\n supported_languages = {}\n for lang in regions_json['languages'].values():\n for country in lang['countries']:\n lang_code = \"{lang}-{country}\".format(lang=lang['code'], country=country)\n supported_languages[lang_code] = {'name': lang['name']}\n\n return supported_languages\n", "path": "searx/engines/qwant.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Qwant (Web, Images, News, Social)\n\"\"\"\n\nfrom datetime import datetime\nfrom json import loads\nfrom urllib.parse import urlencode\nfrom searx.utils import html_to_text, match_language\nfrom searx.exceptions import SearxEngineAPIException, SearxEngineCaptchaException\nfrom searx.raise_for_httperror import raise_for_httperror\n\n# about\nabout = {\n \"website\": 'https://www.qwant.com/',\n \"wikidata_id\": 'Q14657870',\n \"official_api_documentation\": None,\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\n# engine dependent config\ncategories = []\npaging = True\nsupported_languages_url = about['website']\n\ncategory_to_keyword = {'general': 'web',\n 'images': 'images',\n 'news': 'news'}\n\n# search-url\nurl = 'https://api.qwant.com/api/search/{keyword}?count=10&offset={offset}&f=&{query}&t={keyword}&uiv=4'\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * 10\n\n if categories[0] and categories[0] in category_to_keyword:\n\n params['url'] = url.format(keyword=category_to_keyword[categories[0]],\n query=urlencode({'q': query}),\n offset=offset)\n else:\n params['url'] = url.format(keyword='web',\n query=urlencode({'q': query}),\n offset=offset)\n\n # add language tag\n if params['language'] != 'all':\n language = match_language(params['language'], supported_languages, language_aliases)\n params['url'] += '&locale=' + language.replace('-', '_').lower()\n\n params['headers']['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64; rv:69.0) Gecko/20100101 Firefox/69.0'\n params['raise_for_httperror'] = False\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n # According to https://www.qwant.com/js/app.js\n if resp.status_code == 429:\n raise SearxEngineCaptchaException()\n\n # raise for other errors\n raise_for_httperror(resp)\n\n # load JSON result\n search_results = loads(resp.text)\n\n # check for an API error\n if search_results.get('status') != 'success':\n raise SearxEngineAPIException('API error ' + str(search_results.get('error', '')))\n\n # return empty array if there are no results\n if 'data' not in search_results:\n return []\n\n data = search_results.get('data', {})\n\n res = data.get('result', {})\n\n # parse results\n for result in res.get('items', {}):\n\n title = html_to_text(result['title'])\n res_url = result['url']\n content = html_to_text(result['desc'])\n\n if category_to_keyword.get(categories[0], '') == 'web':\n results.append({'title': title,\n 'content': content,\n 'url': res_url})\n\n elif category_to_keyword.get(categories[0], '') == 'images':\n thumbnail_src = result['thumbnail']\n img_src = result['media']\n results.append({'template': 'images.html',\n 'url': res_url,\n 'title': title,\n 'content': '',\n 'thumbnail_src': thumbnail_src,\n 'img_src': img_src})\n\n elif category_to_keyword.get(categories[0], '') == 'news':\n 
published_date = datetime.fromtimestamp(result['date'], None)\n media = result.get('media', [])\n if len(media) > 0:\n img_src = media[0].get('pict', {}).get('url', None)\n else:\n img_src = None\n results.append({'url': res_url,\n 'title': title,\n 'publishedDate': published_date,\n 'content': content,\n 'img_src': img_src})\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n # list of regions is embedded in page as a js object\n response_text = resp.text\n response_text = response_text[response_text.find('INITIAL_PROPS'):]\n response_text = response_text[response_text.find('{'):response_text.find('</script>')]\n\n regions_json = loads(response_text)\n\n supported_languages = []\n for country, langs in regions_json['locales'].items():\n for lang in langs['langs']:\n lang_code = \"{lang}-{country}\".format(lang=lang, country=country)\n supported_languages.append(lang_code)\n\n return supported_languages\n", "path": "searx/engines/qwant.py"}]} | 2,041 | 366 |
gh_patches_debug_23898 | rasdani/github-patches | git_diff | netbox-community__netbox-15807 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HTMX: Bulk actions in ObjectListView without function
### Deployment Type
Self-hosted
### NetBox Version
v4.0-beta1
### Python Version
3.11
### Steps to Reproduce
1. Set user preferences to HTMX Navigation enabled
2. Create two or more IP addresses
3. Select the IP addresses in list view
4. Click on "Edit Selected" or "Delete Selected"
### Expected Behavior
For "Edit Selected", the bulk edit form opens. For "Delete Selected", the usual "xxx Objects will be deleted" confirmation dialog appears.
### Observed Behavior
Nothing at all happens.
With "HTMX Navigation" set to "disabled", everything works as usual.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/utilities/templatetags/buttons.py`
Content:
```
1 from django import template
2 from django.contrib.contenttypes.models import ContentType
3 from django.urls import NoReverseMatch, reverse
4
5 from core.models import ObjectType
6 from extras.models import Bookmark, ExportTemplate
7 from utilities.querydict import prepare_cloned_fields
8 from utilities.views import get_viewname
9
10 __all__ = (
11 'add_button',
12 'bookmark_button',
13 'bulk_delete_button',
14 'bulk_edit_button',
15 'clone_button',
16 'delete_button',
17 'edit_button',
18 'export_button',
19 'import_button',
20 'sync_button',
21 )
22
23 register = template.Library()
24
25
26 #
27 # Instance buttons
28 #
29
30 @register.inclusion_tag('buttons/bookmark.html', takes_context=True)
31 def bookmark_button(context, instance):
32 # Check if this user has already bookmarked the object
33 content_type = ContentType.objects.get_for_model(instance)
34 bookmark = Bookmark.objects.filter(
35 object_type=content_type,
36 object_id=instance.pk,
37 user=context['request'].user
38 ).first()
39
40 # Compile form URL & data
41 if bookmark:
42 form_url = reverse('extras:bookmark_delete', kwargs={'pk': bookmark.pk})
43 form_data = {
44 'confirm': 'true',
45 }
46 else:
47 form_url = reverse('extras:bookmark_add')
48 form_data = {
49 'object_type': content_type.pk,
50 'object_id': instance.pk,
51 }
52
53 return {
54 'bookmark': bookmark,
55 'form_url': form_url,
56 'form_data': form_data,
57 'return_url': instance.get_absolute_url(),
58 }
59
60
61 @register.inclusion_tag('buttons/clone.html')
62 def clone_button(instance):
63 url = reverse(get_viewname(instance, 'add'))
64
65 # Populate cloned field values
66 param_string = prepare_cloned_fields(instance).urlencode()
67 if param_string:
68 url = f'{url}?{param_string}'
69 else:
70 url = None
71
72 return {
73 'url': url,
74 }
75
76
77 @register.inclusion_tag('buttons/edit.html')
78 def edit_button(instance):
79 viewname = get_viewname(instance, 'edit')
80 url = reverse(viewname, kwargs={'pk': instance.pk})
81
82 return {
83 'url': url,
84 }
85
86
87 @register.inclusion_tag('buttons/delete.html')
88 def delete_button(instance):
89 viewname = get_viewname(instance, 'delete')
90 url = reverse(viewname, kwargs={'pk': instance.pk})
91
92 return {
93 'url': url,
94 }
95
96
97 @register.inclusion_tag('buttons/sync.html')
98 def sync_button(instance):
99 viewname = get_viewname(instance, 'sync')
100 url = reverse(viewname, kwargs={'pk': instance.pk})
101
102 return {
103 'url': url,
104 }
105
106
107 #
108 # List buttons
109 #
110
111 @register.inclusion_tag('buttons/add.html')
112 def add_button(model, action='add'):
113 try:
114 url = reverse(get_viewname(model, action))
115 except NoReverseMatch:
116 url = None
117
118 return {
119 'url': url,
120 }
121
122
123 @register.inclusion_tag('buttons/import.html')
124 def import_button(model, action='import'):
125 try:
126 url = reverse(get_viewname(model, action))
127 except NoReverseMatch:
128 url = None
129
130 return {
131 'url': url,
132 }
133
134
135 @register.inclusion_tag('buttons/export.html', takes_context=True)
136 def export_button(context, model):
137 object_type = ObjectType.objects.get_for_model(model)
138 user = context['request'].user
139
140 # Determine if the "all data" export returns CSV or YAML
141 data_format = 'YAML' if hasattr(object_type.model_class(), 'to_yaml') else 'CSV'
142
143 # Retrieve all export templates for this model
144 export_templates = ExportTemplate.objects.restrict(user, 'view').filter(object_types=object_type)
145
146 return {
147 'perms': context['perms'],
148 'object_type': object_type,
149 'url_params': context['request'].GET.urlencode() if context['request'].GET else '',
150 'export_templates': export_templates,
151 'data_format': data_format,
152 }
153
154
155 @register.inclusion_tag('buttons/bulk_edit.html')
156 def bulk_edit_button(model, action='bulk_edit', query_params=None):
157 try:
158 url = reverse(get_viewname(model, action))
159 if query_params:
160 url = f'{url}?{query_params.urlencode()}'
161 except NoReverseMatch:
162 url = None
163
164 return {
165 'url': url,
166 }
167
168
169 @register.inclusion_tag('buttons/bulk_delete.html')
170 def bulk_delete_button(model, action='bulk_delete', query_params=None):
171 try:
172 url = reverse(get_viewname(model, action))
173 if query_params:
174 url = f'{url}?{query_params.urlencode()}'
175 except NoReverseMatch:
176 url = None
177
178 return {
179 'url': url,
180 }
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/utilities/templatetags/buttons.py b/netbox/utilities/templatetags/buttons.py
--- a/netbox/utilities/templatetags/buttons.py
+++ b/netbox/utilities/templatetags/buttons.py
@@ -152,8 +152,8 @@
}
[email protected]_tag('buttons/bulk_edit.html')
-def bulk_edit_button(model, action='bulk_edit', query_params=None):
[email protected]_tag('buttons/bulk_edit.html', takes_context=True)
+def bulk_edit_button(context, model, action='bulk_edit', query_params=None):
try:
url = reverse(get_viewname(model, action))
if query_params:
@@ -162,12 +162,13 @@
url = None
return {
+ 'htmx_navigation': context.get('htmx_navigation'),
'url': url,
}
[email protected]_tag('buttons/bulk_delete.html')
-def bulk_delete_button(model, action='bulk_delete', query_params=None):
[email protected]_tag('buttons/bulk_delete.html', takes_context=True)
+def bulk_delete_button(context, model, action='bulk_delete', query_params=None):
try:
url = reverse(get_viewname(model, action))
if query_params:
@@ -176,5 +177,6 @@
url = None
return {
+ 'htmx_navigation': context.get('htmx_navigation'),
'url': url,
}
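The fix leans on a standard Django feature: passing `takes_context=True` to `inclusion_tag` makes the template engine hand the current rendering context to the tag as its first argument. A minimal, self-contained sketch of that mechanism (the template name and model argument are placeholders, not NetBox code):

```python
from django import template

register = template.Library()

@register.inclusion_tag('snippets/bulk_button.html', takes_context=True)
def bulk_button(context, model):
    # With takes_context=True the tag can read per-request values such as
    # the htmx_navigation flag. Without it, the included button template
    # never learns whether HTMX navigation is active, which is why the
    # bulk buttons were inert in that mode.
    return {
        'htmx_navigation': context.get('htmx_navigation'),
        'model': model,
    }
```

The included `buttons/bulk_edit.html` and `buttons/bulk_delete.html` templates can then branch on `htmx_navigation`, which is presumably what restores the buttons when HTMX navigation is enabled.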
| {"golden_diff": "diff --git a/netbox/utilities/templatetags/buttons.py b/netbox/utilities/templatetags/buttons.py\n--- a/netbox/utilities/templatetags/buttons.py\n+++ b/netbox/utilities/templatetags/buttons.py\n@@ -152,8 +152,8 @@\n }\n \n \[email protected]_tag('buttons/bulk_edit.html')\n-def bulk_edit_button(model, action='bulk_edit', query_params=None):\[email protected]_tag('buttons/bulk_edit.html', takes_context=True)\n+def bulk_edit_button(context, model, action='bulk_edit', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n@@ -162,12 +162,13 @@\n url = None\n \n return {\n+ 'htmx_navigation': context.get('htmx_navigation'),\n 'url': url,\n }\n \n \[email protected]_tag('buttons/bulk_delete.html')\n-def bulk_delete_button(model, action='bulk_delete', query_params=None):\[email protected]_tag('buttons/bulk_delete.html', takes_context=True)\n+def bulk_delete_button(context, model, action='bulk_delete', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n@@ -176,5 +177,6 @@\n url = None\n \n return {\n+ 'htmx_navigation': context.get('htmx_navigation'),\n 'url': url,\n }\n", "issue": "HTMX: Bulk actions in ObjectListView without function\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv4.0-beta1\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\n1. Set user preferences to HTMX Navigation enabled\r\n2. Create two or more IP addresses\r\n3. Select the IP addresses in list view\r\n4. Click on \"Edit Selected\" or \"Delete Selected\"\n\n### Expected Behavior\n\nFor \"Edit Selected\", the bulk edit form opens. For \"Delete Selected\", the usual \"xxx Objects will be deleted\" confirmation dialog appears.\n\n### Observed Behavior\n\nNothing at all happens.\r\n\r\nWith \"HTMX Navigation\" set to \"disabled\", everything works as usual.\n", "before_files": [{"content": "from django import template\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.urls import NoReverseMatch, reverse\n\nfrom core.models import ObjectType\nfrom extras.models import Bookmark, ExportTemplate\nfrom utilities.querydict import prepare_cloned_fields\nfrom utilities.views import get_viewname\n\n__all__ = (\n 'add_button',\n 'bookmark_button',\n 'bulk_delete_button',\n 'bulk_edit_button',\n 'clone_button',\n 'delete_button',\n 'edit_button',\n 'export_button',\n 'import_button',\n 'sync_button',\n)\n\nregister = template.Library()\n\n\n#\n# Instance buttons\n#\n\[email protected]_tag('buttons/bookmark.html', takes_context=True)\ndef bookmark_button(context, instance):\n # Check if this user has already bookmarked the object\n content_type = ContentType.objects.get_for_model(instance)\n bookmark = Bookmark.objects.filter(\n object_type=content_type,\n object_id=instance.pk,\n user=context['request'].user\n ).first()\n\n # Compile form URL & data\n if bookmark:\n form_url = reverse('extras:bookmark_delete', kwargs={'pk': bookmark.pk})\n form_data = {\n 'confirm': 'true',\n }\n else:\n form_url = reverse('extras:bookmark_add')\n form_data = {\n 'object_type': content_type.pk,\n 'object_id': instance.pk,\n }\n\n return {\n 'bookmark': bookmark,\n 'form_url': form_url,\n 'form_data': form_data,\n 'return_url': instance.get_absolute_url(),\n }\n\n\[email protected]_tag('buttons/clone.html')\ndef clone_button(instance):\n url = reverse(get_viewname(instance, 'add'))\n\n # Populate cloned field values\n param_string = prepare_cloned_fields(instance).urlencode()\n if param_string:\n url = 
f'{url}?{param_string}'\n else:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/edit.html')\ndef edit_button(instance):\n viewname = get_viewname(instance, 'edit')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/delete.html')\ndef delete_button(instance):\n viewname = get_viewname(instance, 'delete')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/sync.html')\ndef sync_button(instance):\n viewname = get_viewname(instance, 'sync')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\n#\n# List buttons\n#\n\[email protected]_tag('buttons/add.html')\ndef add_button(model, action='add'):\n try:\n url = reverse(get_viewname(model, action))\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/import.html')\ndef import_button(model, action='import'):\n try:\n url = reverse(get_viewname(model, action))\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/export.html', takes_context=True)\ndef export_button(context, model):\n object_type = ObjectType.objects.get_for_model(model)\n user = context['request'].user\n\n # Determine if the \"all data\" export returns CSV or YAML\n data_format = 'YAML' if hasattr(object_type.model_class(), 'to_yaml') else 'CSV'\n\n # Retrieve all export templates for this model\n export_templates = ExportTemplate.objects.restrict(user, 'view').filter(object_types=object_type)\n\n return {\n 'perms': context['perms'],\n 'object_type': object_type,\n 'url_params': context['request'].GET.urlencode() if context['request'].GET else '',\n 'export_templates': export_templates,\n 'data_format': data_format,\n }\n\n\[email protected]_tag('buttons/bulk_edit.html')\ndef bulk_edit_button(model, action='bulk_edit', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n url = f'{url}?{query_params.urlencode()}'\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/bulk_delete.html')\ndef bulk_delete_button(model, action='bulk_delete', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n url = f'{url}?{query_params.urlencode()}'\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n", "path": "netbox/utilities/templatetags/buttons.py"}], "after_files": [{"content": "from django import template\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.urls import NoReverseMatch, reverse\n\nfrom core.models import ObjectType\nfrom extras.models import Bookmark, ExportTemplate\nfrom utilities.querydict import prepare_cloned_fields\nfrom utilities.views import get_viewname\n\n__all__ = (\n 'add_button',\n 'bookmark_button',\n 'bulk_delete_button',\n 'bulk_edit_button',\n 'clone_button',\n 'delete_button',\n 'edit_button',\n 'export_button',\n 'import_button',\n 'sync_button',\n)\n\nregister = template.Library()\n\n\n#\n# Instance buttons\n#\n\[email protected]_tag('buttons/bookmark.html', takes_context=True)\ndef bookmark_button(context, instance):\n # Check if this user has already bookmarked the object\n content_type = ContentType.objects.get_for_model(instance)\n bookmark = Bookmark.objects.filter(\n object_type=content_type,\n object_id=instance.pk,\n user=context['request'].user\n ).first()\n\n # Compile form URL & data\n 
if bookmark:\n form_url = reverse('extras:bookmark_delete', kwargs={'pk': bookmark.pk})\n form_data = {\n 'confirm': 'true',\n }\n else:\n form_url = reverse('extras:bookmark_add')\n form_data = {\n 'object_type': content_type.pk,\n 'object_id': instance.pk,\n }\n\n return {\n 'bookmark': bookmark,\n 'form_url': form_url,\n 'form_data': form_data,\n 'return_url': instance.get_absolute_url(),\n }\n\n\[email protected]_tag('buttons/clone.html')\ndef clone_button(instance):\n url = reverse(get_viewname(instance, 'add'))\n\n # Populate cloned field values\n param_string = prepare_cloned_fields(instance).urlencode()\n if param_string:\n url = f'{url}?{param_string}'\n else:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/edit.html')\ndef edit_button(instance):\n viewname = get_viewname(instance, 'edit')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/delete.html')\ndef delete_button(instance):\n viewname = get_viewname(instance, 'delete')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/sync.html')\ndef sync_button(instance):\n viewname = get_viewname(instance, 'sync')\n url = reverse(viewname, kwargs={'pk': instance.pk})\n\n return {\n 'url': url,\n }\n\n\n#\n# List buttons\n#\n\[email protected]_tag('buttons/add.html')\ndef add_button(model, action='add'):\n try:\n url = reverse(get_viewname(model, action))\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/import.html')\ndef import_button(model, action='import'):\n try:\n url = reverse(get_viewname(model, action))\n except NoReverseMatch:\n url = None\n\n return {\n 'url': url,\n }\n\n\[email protected]_tag('buttons/export.html', takes_context=True)\ndef export_button(context, model):\n object_type = ObjectType.objects.get_for_model(model)\n user = context['request'].user\n\n # Determine if the \"all data\" export returns CSV or YAML\n data_format = 'YAML' if hasattr(object_type.model_class(), 'to_yaml') else 'CSV'\n\n # Retrieve all export templates for this model\n export_templates = ExportTemplate.objects.restrict(user, 'view').filter(object_types=object_type)\n\n return {\n 'perms': context['perms'],\n 'object_type': object_type,\n 'url_params': context['request'].GET.urlencode() if context['request'].GET else '',\n 'export_templates': export_templates,\n 'data_format': data_format,\n }\n\n\[email protected]_tag('buttons/bulk_edit.html', takes_context=True)\ndef bulk_edit_button(context, model, action='bulk_edit', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n url = f'{url}?{query_params.urlencode()}'\n except NoReverseMatch:\n url = None\n\n return {\n 'htmx_navigation': context.get('htmx_navigation'),\n 'url': url,\n }\n\n\[email protected]_tag('buttons/bulk_delete.html', takes_context=True)\ndef bulk_delete_button(context, model, action='bulk_delete', query_params=None):\n try:\n url = reverse(get_viewname(model, action))\n if query_params:\n url = f'{url}?{query_params.urlencode()}'\n except NoReverseMatch:\n url = None\n\n return {\n 'htmx_navigation': context.get('htmx_navigation'),\n 'url': url,\n }\n", "path": "netbox/utilities/templatetags/buttons.py"}]} | 1,898 | 328 |
gh_patches_debug_10868 | rasdani/github-patches | git_diff | coreruleset__coreruleset-2615 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regex assembler seems to be broken again.
### Describe the bug
`util/regexp-assemble/regexp-assemble.py --log-level DEBUG update 942521`
```
Processing 942521, chain offset 0
detected processor: <class 'NoneType'>
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found simple comment
Found regular input '##!^ ^\n'
Found regular input '##!+ i\n'
Found simple comment
Found simple comment
Found regular input "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b\n"
Found regular input '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b\n'
Found regular input '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b\n'
processor will process: ['##!^ ^\n', '##!+ i\n', "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b\n", '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b\n']
preprocessed lines: ['##!^ ^\n', '##!+ i\n', "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b\n", '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b\n']
Completing assembly
Running assembler with lines: ['##!^ ^', '##!+ i', "[^']*?(?:'[^']*?'[^']*?)*?'\\s*(\\w+)\\b", '[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"\\s*(\\w+)\\b', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\s*(\\w+)\\b']
Assembler errors: b'', output b'(?i)^(?:[^\\"]*?(?:\\"[^\\"]*?\\"[^\\"]*?)*?\\"|[^\']*?(?:\'[^\']*?\'[^\']*?)*?\'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\\s*(\\w+)\\b\n'
Completed assembly: (?i)^(?:[^\"]*?(?:\"[^\"]*?\"[^\"]*?)*?\"|[^']*?(?:'[^']*?'[^']*?)*?'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\s*(\w+)\b
Traceback (most recent call last):
File "util/regexp-assemble/regexp-assemble.py", line 227, in <module>
namespace.func(namespace)
File "util/regexp-assemble/regexp-assemble.py", line 146, in handle_update
updater.run(False)
File "/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py", line 8, in run
self.write_updates()
File "/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py", line 24, in write_updates
with open(file_path, "w") as handle:
FileNotFoundError: [Errno 2] No such file or directory: 'rules/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf'
```
It seems that the path contains a doubled `rules/rules` segment, so updating a rule doesn't work.
@theseion
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `util/regexp-assemble/lib/operators/updater.py`
Content:
```
1 from msc_pyparser import MSCWriter
2 from lib.operators.parser import Parser
3
4
5 class Updater(Parser):
6 def run(self, process_all: bool):
7 self.perform_compare_or_update(process_all, self.update_regex)
8 self.write_updates()
9
10 def update_regex(
11 self,
12 rule_id: str,
13 generated_regex: str,
14 current_regex: str,
15 config: dict,
16 config_key: str,
17 ):
18 config[config_key] = generated_regex
19
20 def write_updates(self):
21 for rule_prefix, parser in self.parsers.items():
22 writer = MSCWriter(parser.configlines)
23 file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]
24 with open(file_path, "w") as handle:
25 writer.generate()
26 # add extra new line at the end of file
27 writer.output.append("")
28 handle.write("\n".join(writer.output))
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/util/regexp-assemble/lib/operators/updater.py b/util/regexp-assemble/lib/operators/updater.py
--- a/util/regexp-assemble/lib/operators/updater.py
+++ b/util/regexp-assemble/lib/operators/updater.py
@@ -20,8 +20,7 @@
def write_updates(self):
for rule_prefix, parser in self.parsers.items():
writer = MSCWriter(parser.configlines)
- file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]
- with open(file_path, "w") as handle:
+ with self.prefix_to_file_map[rule_prefix].open("w") as handle:
writer.generate()
# add extra new line at the end of file
writer.output.append("")
| {"golden_diff": "diff --git a/util/regexp-assemble/lib/operators/updater.py b/util/regexp-assemble/lib/operators/updater.py\n--- a/util/regexp-assemble/lib/operators/updater.py\n+++ b/util/regexp-assemble/lib/operators/updater.py\n@@ -20,8 +20,7 @@\n def write_updates(self):\n for rule_prefix, parser in self.parsers.items():\n writer = MSCWriter(parser.configlines)\n- file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]\n- with open(file_path, \"w\") as handle:\n+ with self.prefix_to_file_map[rule_prefix].open(\"w\") as handle:\n writer.generate()\n # add extra new line at the end of file\n writer.output.append(\"\")\n", "issue": "Regex assembler seems to be broken again.\n### Describe the bug\r\n`util/regexp-assemble/regexp-assemble.py --log-level DEBUG update 942521`\r\n\r\n```\r\nProcessing 942521, chain offset 0\r\ndetected processor: <class 'NoneType'>\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound simple comment\r\nFound regular input '##!^ ^\\n'\r\nFound regular input '##!+ i\\n'\r\nFound simple comment\r\nFound simple comment\r\nFound regular input \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\\n\"\r\nFound regular input '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b\\n'\r\nFound regular input '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b\\n'\r\nprocessor will process: ['##!^ ^\\n', '##!+ i\\n', \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\\n\", '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b\\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b\\n']\r\npreprocessed lines: ['##!^ ^\\n', '##!+ i\\n', \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\\n\", '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b\\n', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b\\n']\r\nCompleting assembly\r\nRunning assembler with lines: ['##!^ ^', '##!+ i', \"[^']*?(?:'[^']*?'[^']*?)*?'\\\\s*(\\\\w+)\\\\b\", '[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"\\\\s*(\\\\w+)\\\\b', '[^`]*?(?:`[^`]*?`[^`]*?)*?`\\\\s*(\\\\w+)\\\\b']\r\nAssembler errors: b'', output b'(?i)^(?:[^\\\\\"]*?(?:\\\\\"[^\\\\\"]*?\\\\\"[^\\\\\"]*?)*?\\\\\"|[^\\']*?(?:\\'[^\\']*?\\'[^\\']*?)*?\\'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\\\\s*(\\\\w+)\\\\b\\n'\r\nCompleted assembly: (?i)^(?:[^\\\"]*?(?:\\\"[^\\\"]*?\\\"[^\\\"]*?)*?\\\"|[^']*?(?:'[^']*?'[^']*?)*?'|[^`]*?(?:`[^`]*?`[^`]*?)*?`)\\s*(\\w+)\\b\r\nTraceback (most recent call last):\r\n File \"util/regexp-assemble/regexp-assemble.py\", line 227, in <module>\r\n namespace.func(namespace)\r\n File \"util/regexp-assemble/regexp-assemble.py\", line 146, in handle_update\r\n updater.run(False)\r\n File \"/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py\", line 8, in run\r\n self.write_updates()\r\n File \"/home/janek/crs-waf/coreruleset/util/regexp-assemble/lib/operators/updater.py\", line 24, in write_updates\r\n with open(file_path, \"w\") as handle:\r\nFileNotFoundError: [Errno 2] No such file or directory: 
'rules/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf'\r\n```\r\n\r\nIt seems that there is double `rules/rules` so updating a rule doesn't work. \r\n\r\n@theseion \n", "before_files": [{"content": "from msc_pyparser import MSCWriter\nfrom lib.operators.parser import Parser\n\n\nclass Updater(Parser):\n def run(self, process_all: bool):\n self.perform_compare_or_update(process_all, self.update_regex)\n self.write_updates()\n\n def update_regex(\n self,\n rule_id: str,\n generated_regex: str,\n current_regex: str,\n config: dict,\n config_key: str,\n ):\n config[config_key] = generated_regex\n\n def write_updates(self):\n for rule_prefix, parser in self.parsers.items():\n writer = MSCWriter(parser.configlines)\n file_path = self.context.rules_directory / self.prefix_to_file_map[rule_prefix]\n with open(file_path, \"w\") as handle:\n writer.generate()\n # add extra new line at the end of file\n writer.output.append(\"\")\n handle.write(\"\\n\".join(writer.output))\n", "path": "util/regexp-assemble/lib/operators/updater.py"}], "after_files": [{"content": "from msc_pyparser import MSCWriter\nfrom lib.operators.parser import Parser\n\n\nclass Updater(Parser):\n def run(self, process_all: bool):\n self.perform_compare_or_update(process_all, self.update_regex)\n self.write_updates()\n\n def update_regex(\n self,\n rule_id: str,\n generated_regex: str,\n current_regex: str,\n config: dict,\n config_key: str,\n ):\n config[config_key] = generated_regex\n\n def write_updates(self):\n for rule_prefix, parser in self.parsers.items():\n writer = MSCWriter(parser.configlines)\n with self.prefix_to_file_map[rule_prefix].open(\"w\") as handle:\n writer.generate()\n # add extra new line at the end of file\n writer.output.append(\"\")\n handle.write(\"\\n\".join(writer.output))\n", "path": "util/regexp-assemble/lib/operators/updater.py"}]} | 1,455 | 162 |
gh_patches_debug_15905 | rasdani/github-patches | git_diff | conan-io__conan-center-index-1085 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LibibertyConan: 404: Not Found
```bash
conans.errors.NotFoundException: Not found: https://github.com/gcc-mirror/gcc/archive/gcc-9_1_0-release.tar.gz
```
When I open the URL above in a browser:
```bash
404: Not Found
```
Provide more mirrors, please
Related to https://github.com/conan-io/conan-center-index/blob/master/recipes/libiberty/all/conanfile.py#L40
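For illustration only, a hedged sketch of how the recipe could fall back across several mirror URLs — the URLs and the helper are assumptions, not the actual recipe or its conandata:

```python
from conans import tools

# Illustrative candidate URLs; the real ones would live in conandata.yml.
CANDIDATE_URLS = [
    "https://ftp.gnu.org/gnu/gcc/gcc-9.1.0/gcc-9.1.0.tar.xz",
    "https://github.com/gcc-mirror/gcc/archive/releases/gcc-9.1.0.tar.gz",
]

def fetch_sources():
    last_error = None
    for url in CANDIDATE_URLS:
        try:
            tools.get(url)  # download and extract from the first mirror that responds
            return
        except Exception as err:  # e.g. conans.errors.NotFoundException on a 404
            last_error = err
    raise last_error
```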
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/libiberty/all/conanfile.py`
Content:
```
1 from conans import ConanFile, tools, AutoToolsBuildEnvironment
2 from conans.errors import ConanInvalidConfiguration
3 import os
4
5
6 class LibibertyConan(ConanFile):
7 name = "libiberty"
8 version = "9.1.0"
9 description = "A collection of subroutines used by various GNU programs"
10 topics = ("conan", "libiberty", "gnu", "gnu-collection")
11 url = "https://github.com/conan-io/conan-center-index"
12 homepage = "https://gcc.gnu.org/onlinedocs/libiberty"
13 license = "LGPL-2.1"
14 settings = "os", "arch", "compiler", "build_type"
15 options = {"fPIC": [True, False]}
16 default_options = {"fPIC": True}
17 _autotools = None
18
19 @property
20 def _source_subfolder(self):
21 return "source_subfolder"
22
23 @property
24 def _libiberty_folder(self):
25 return os.path.join(self._source_subfolder, self.name)
26
27 def config_options(self):
28 if self.settings.os == 'Windows':
29 del self.options.fPIC
30
31 def configure(self):
32 if self.settings.compiler == "Visual Studio":
33 raise ConanInvalidConfiguration("libiberty can not be built by Visual Studio.")
34 del self.settings.compiler.libcxx
35 del self.settings.compiler.cppstd
36
37 def source(self):
38 tools.get(**self.conan_data["sources"][self.version])
39 pkg_version = self.version.replace('.', '_')
40 extracted_dir = "gcc-gcc-{}-release".format(pkg_version)
41 os.rename(extracted_dir, self._source_subfolder)
42 tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))
43 tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))
44
45 def _configure_autotools(self):
46 if not self._autotools:
47 args = ["--enable-install-libiberty"]
48 self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
49 self._autotools.configure(args=args, configure_dir=self._libiberty_folder)
50 return self._autotools
51
52 def build(self):
53 autotools = self._configure_autotools()
54 autotools.make()
55
56 def package(self):
57 self.copy(pattern="COPYING.LIB", src=self._libiberty_folder, dst="licenses")
58 autotools = self._configure_autotools()
59 autotools.install()
60 self._package_x86()
61
62 def _package_x86(self):
63 lib32dir = os.path.join(self.package_folder, "lib32")
64 if os.path.exists(lib32dir):
65 libdir = os.path.join(self.package_folder, "lib")
66 tools.rmdir(libdir)
67 os.rename(lib32dir, libdir)
68
69 def package_info(self):
70 self.cpp_info.libs = tools.collect_libs(self)
71
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/libiberty/all/conanfile.py b/recipes/libiberty/all/conanfile.py
--- a/recipes/libiberty/all/conanfile.py
+++ b/recipes/libiberty/all/conanfile.py
@@ -36,8 +36,7 @@
def source(self):
tools.get(**self.conan_data["sources"][self.version])
- pkg_version = self.version.replace('.', '_')
- extracted_dir = "gcc-gcc-{}-release".format(pkg_version)
+ extracted_dir = "gcc-" + self.version
os.rename(extracted_dir, self._source_subfolder)
tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))
tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))
@@ -68,4 +67,3 @@
def package_info(self):
self.cpp_info.libs = tools.collect_libs(self)
-
| {"golden_diff": "diff --git a/recipes/libiberty/all/conanfile.py b/recipes/libiberty/all/conanfile.py\n--- a/recipes/libiberty/all/conanfile.py\n+++ b/recipes/libiberty/all/conanfile.py\n@@ -36,8 +36,7 @@\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n- pkg_version = self.version.replace('.', '_')\n- extracted_dir = \"gcc-gcc-{}-release\".format(pkg_version)\n+ extracted_dir = \"gcc-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))\n tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))\n@@ -68,4 +67,3 @@\n \n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n-\n", "issue": "LibibertyConan: 404: Not Found\n```bash\r\nconans.errors.NotFoundException: Not found: https://github.com/gcc-mirror/gcc/archive/gcc-9_1_0-release.tar.gz\r\n```\r\n\r\nwhen i open url above in browser:\r\n\r\n```bash\r\n404: Not Found\r\n```\r\n\r\nProvide more mirrors, please\r\n\r\nRelated to https://github.com/conan-io/conan-center-index/blob/master/recipes/libiberty/all/conanfile.py#L40\n", "before_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment\nfrom conans.errors import ConanInvalidConfiguration\nimport os\n\n\nclass LibibertyConan(ConanFile):\n name = \"libiberty\"\n version = \"9.1.0\"\n description = \"A collection of subroutines used by various GNU programs\"\n topics = (\"conan\", \"libiberty\", \"gnu\", \"gnu-collection\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://gcc.gnu.org/onlinedocs/libiberty\"\n license = \"LGPL-2.1\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"fPIC\": [True, False]}\n default_options = {\"fPIC\": True}\n _autotools = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _libiberty_folder(self):\n return os.path.join(self._source_subfolder, self.name)\n\n def config_options(self):\n if self.settings.os == 'Windows':\n del self.options.fPIC\n\n def configure(self):\n if self.settings.compiler == \"Visual Studio\":\n raise ConanInvalidConfiguration(\"libiberty can not be built by Visual Studio.\")\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n pkg_version = self.version.replace('.', '_')\n extracted_dir = \"gcc-gcc-{}-release\".format(pkg_version)\n os.rename(extracted_dir, self._source_subfolder)\n tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))\n tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))\n\n def _configure_autotools(self):\n if not self._autotools:\n args = [\"--enable-install-libiberty\"]\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.configure(args=args, configure_dir=self._libiberty_folder)\n return self._autotools\n\n def build(self):\n autotools = self._configure_autotools()\n autotools.make()\n\n def package(self):\n self.copy(pattern=\"COPYING.LIB\", src=self._libiberty_folder, dst=\"licenses\")\n autotools = self._configure_autotools()\n autotools.install()\n self._package_x86()\n\n def _package_x86(self):\n lib32dir = os.path.join(self.package_folder, \"lib32\")\n if os.path.exists(lib32dir):\n libdir = os.path.join(self.package_folder, \"lib\")\n tools.rmdir(libdir)\n os.rename(lib32dir, libdir)\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n\n", "path": 
"recipes/libiberty/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment\nfrom conans.errors import ConanInvalidConfiguration\nimport os\n\n\nclass LibibertyConan(ConanFile):\n name = \"libiberty\"\n version = \"9.1.0\"\n description = \"A collection of subroutines used by various GNU programs\"\n topics = (\"conan\", \"libiberty\", \"gnu\", \"gnu-collection\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://gcc.gnu.org/onlinedocs/libiberty\"\n license = \"LGPL-2.1\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"fPIC\": [True, False]}\n default_options = {\"fPIC\": True}\n _autotools = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _libiberty_folder(self):\n return os.path.join(self._source_subfolder, self.name)\n\n def config_options(self):\n if self.settings.os == 'Windows':\n del self.options.fPIC\n\n def configure(self):\n if self.settings.compiler == \"Visual Studio\":\n raise ConanInvalidConfiguration(\"libiberty can not be built by Visual Studio.\")\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"gcc-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n tools.rmdir(os.path.join(self._source_subfolder, 'gcc'))\n tools.rmdir(os.path.join(self._source_subfolder, 'libstdc++-v3'))\n\n def _configure_autotools(self):\n if not self._autotools:\n args = [\"--enable-install-libiberty\"]\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.configure(args=args, configure_dir=self._libiberty_folder)\n return self._autotools\n\n def build(self):\n autotools = self._configure_autotools()\n autotools.make()\n\n def package(self):\n self.copy(pattern=\"COPYING.LIB\", src=self._libiberty_folder, dst=\"licenses\")\n autotools = self._configure_autotools()\n autotools.install()\n self._package_x86()\n\n def _package_x86(self):\n lib32dir = os.path.join(self.package_folder, \"lib32\")\n if os.path.exists(lib32dir):\n libdir = os.path.join(self.package_folder, \"lib\")\n tools.rmdir(libdir)\n os.rename(lib32dir, libdir)\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n", "path": "recipes/libiberty/all/conanfile.py"}]} | 1,149 | 208 |
gh_patches_debug_9199 | rasdani/github-patches | git_diff | ephios-dev__ephios-166 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Display the calendar URL more nicely
As a user, I want to see a read-only field on my profile page containing the URL of my calendar feed. The link should not be clickable, so that the ics file does not get downloaded. Next to the link, there should be a button for copying the URL.
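A rough sketch of what the read-only field could look like on the profile form — the form and field names are hypothetical, not ephios' actual code, and the copy button itself would be template/JavaScript and is not shown:

```python
from django import forms

class ProfileCalendarForm(forms.Form):
    # Hypothetical read-only field, so clicking does not download the ics file.
    calendar_url = forms.CharField(
        label="Calendar feed URL",
        required=False,
        disabled=True,
        widget=forms.TextInput(attrs={"readonly": "readonly"}),
    )
```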
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/extra/context.py`
Content:
```
1 import importlib
2 import subprocess
3
4 from django.templatetags.static import static
5 from django.utils.translation import get_language
6
7 from ephios.extra.signals import footer_link
8
9 try:
10 EPHIOS_VERSION = (
11 subprocess.check_output(["git", "rev-parse", "--short", "HEAD"]).decode().strip()
12 )
13 except (subprocess.CalledProcessError, FileNotFoundError):
14 # suggested in https://github.com/python-poetry/poetry/issues/273
15 EPHIOS_VERSION = "v" + importlib.metadata.version("ephios")
16
17
18 def ephios_base_context(request):
19 footer = {}
20 for receiver, result in footer_link.send(None, request=request):
21 for label, url in result.items():
22 footer[label] = url
23
24 datatables_translation_url = None
25 if get_language() == "de-de":
26 datatables_translation_url = static("datatables/german.json")
27
28 return {
29 "footer": footer,
30 "datatables_translation_url": datatables_translation_url,
31 "ephios_version": EPHIOS_VERSION,
32 }
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/extra/context.py b/ephios/extra/context.py
--- a/ephios/extra/context.py
+++ b/ephios/extra/context.py
@@ -5,6 +5,7 @@
from django.utils.translation import get_language
from ephios.extra.signals import footer_link
+from ephios.settings import SITE_URL
try:
EPHIOS_VERSION = (
@@ -29,4 +30,5 @@
"footer": footer,
"datatables_translation_url": datatables_translation_url,
"ephios_version": EPHIOS_VERSION,
+ "SITE_URL": SITE_URL,
}
| {"golden_diff": "diff --git a/ephios/extra/context.py b/ephios/extra/context.py\n--- a/ephios/extra/context.py\n+++ b/ephios/extra/context.py\n@@ -5,6 +5,7 @@\n from django.utils.translation import get_language\n \n from ephios.extra.signals import footer_link\n+from ephios.settings import SITE_URL\n \n try:\n EPHIOS_VERSION = (\n@@ -29,4 +30,5 @@\n \"footer\": footer,\n \"datatables_translation_url\": datatables_translation_url,\n \"ephios_version\": EPHIOS_VERSION,\n+ \"SITE_URL\": SITE_URL,\n }\n", "issue": "Kalender-URL sch\u00f6ner anzeigen\nAls Nutzer m\u00f6chte ich auf meiner Profilseite ein read-only-Feld sehen, in dem die URL zu meinem Kalender-Feed steht. Der Link soll nicht klickbar sein, damit die ics-Datei nicht heruntergeladen wird. Neben dem Link soll ein Button zum Kopieren der URL vorhanden sein.\n", "before_files": [{"content": "import importlib\nimport subprocess\n\nfrom django.templatetags.static import static\nfrom django.utils.translation import get_language\n\nfrom ephios.extra.signals import footer_link\n\ntry:\n EPHIOS_VERSION = (\n subprocess.check_output([\"git\", \"rev-parse\", \"--short\", \"HEAD\"]).decode().strip()\n )\nexcept (subprocess.CalledProcessError, FileNotFoundError):\n # suggested in https://github.com/python-poetry/poetry/issues/273\n EPHIOS_VERSION = \"v\" + importlib.metadata.version(\"ephios\")\n\n\ndef ephios_base_context(request):\n footer = {}\n for receiver, result in footer_link.send(None, request=request):\n for label, url in result.items():\n footer[label] = url\n\n datatables_translation_url = None\n if get_language() == \"de-de\":\n datatables_translation_url = static(\"datatables/german.json\")\n\n return {\n \"footer\": footer,\n \"datatables_translation_url\": datatables_translation_url,\n \"ephios_version\": EPHIOS_VERSION,\n }\n", "path": "ephios/extra/context.py"}], "after_files": [{"content": "import importlib\nimport subprocess\n\nfrom django.templatetags.static import static\nfrom django.utils.translation import get_language\n\nfrom ephios.extra.signals import footer_link\nfrom ephios.settings import SITE_URL\n\ntry:\n EPHIOS_VERSION = (\n subprocess.check_output([\"git\", \"rev-parse\", \"--short\", \"HEAD\"]).decode().strip()\n )\nexcept (subprocess.CalledProcessError, FileNotFoundError):\n # suggested in https://github.com/python-poetry/poetry/issues/273\n EPHIOS_VERSION = \"v\" + importlib.metadata.version(\"ephios\")\n\n\ndef ephios_base_context(request):\n footer = {}\n for receiver, result in footer_link.send(None, request=request):\n for label, url in result.items():\n footer[label] = url\n\n datatables_translation_url = None\n if get_language() == \"de-de\":\n datatables_translation_url = static(\"datatables/german.json\")\n\n return {\n \"footer\": footer,\n \"datatables_translation_url\": datatables_translation_url,\n \"ephios_version\": EPHIOS_VERSION,\n \"SITE_URL\": SITE_URL,\n }\n", "path": "ephios/extra/context.py"}]} | 626 | 139 |
gh_patches_debug_2639 | rasdani/github-patches | git_diff | pypa__pip-10009 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update quickstart guide to reflect user research
Updates quickstart guide to reflect most common tasks as discovered in our "buy a feature" user research.
Preview: https://pip--9137.org.readthedocs.build/en/9137/quickstart/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/html/conf.py`
Content:
```
1 """Sphinx configuration file for pip's documentation."""
2
3 import glob
4 import os
5 import pathlib
6 import re
7 import sys
8 from typing import List, Tuple
9
10 # Add the docs/ directory to sys.path, because pip_sphinxext.py is there.
11 docs_dir = os.path.dirname(os.path.dirname(__file__))
12 sys.path.insert(0, docs_dir)
13
14 # -- General configuration ------------------------------------------------------------
15
16 extensions = [
17 # first-party extensions
18 "sphinx.ext.autodoc",
19 "sphinx.ext.todo",
20 "sphinx.ext.extlinks",
21 "sphinx.ext.intersphinx",
22 # our extensions
23 "pip_sphinxext",
24 # third-party extensions
25 "myst_parser",
26 "sphinx_copybutton",
27 "sphinx_inline_tabs",
28 "sphinxcontrib.towncrier",
29 ]
30
31 # General information about the project.
32 project = "pip"
33 copyright = "2008-2020, PyPA"
34
35 # Find the version and release information.
36 # We have a single source of truth for our version number: pip's __init__.py file.
37 # This next bit of code reads from it.
38 file_with_version = os.path.join(docs_dir, "..", "src", "pip", "__init__.py")
39 with open(file_with_version) as f:
40 for line in f:
41 m = re.match(r'__version__ = "(.*)"', line)
42 if m:
43 __version__ = m.group(1)
44 # The short X.Y version.
45 version = ".".join(__version__.split(".")[:2])
46 # The full version, including alpha/beta/rc tags.
47 release = __version__
48 break
49 else: # AKA no-break
50 version = release = "dev"
51
52 print("pip version:", version)
53 print("pip release:", release)
54
55 # -- Options for smartquotes ----------------------------------------------------------
56
57 # Disable the conversion of dashes so that long options like "--find-links" won't
58 # render as "-find-links" if included in the text.The default of "qDe" converts normal
59 # quote characters ('"' and "'"), en and em dashes ("--" and "---"), and ellipses "..."
60 smartquotes_action = "qe"
61
62 # -- Options for intersphinx ----------------------------------------------------------
63
64 intersphinx_mapping = {
65 "python": ("https://docs.python.org/3", None),
66 "pypug": ("https://packaging.python.org", None),
67 }
68
69 # -- Options for extlinks -------------------------------------------------------------
70
71 extlinks = {
72 "issue": ("https://github.com/pypa/pip/issues/%s", "#"),
73 "pull": ("https://github.com/pypa/pip/pull/%s", "PR #"),
74 "pypi": ("https://pypi.org/project/%s/", ""),
75 }
76
77 # -- Options for towncrier_draft extension --------------------------------------------
78
79 towncrier_draft_autoversion_mode = "draft" # or: 'sphinx-release', 'sphinx-version'
80 towncrier_draft_include_empty = True
81 towncrier_draft_working_directory = pathlib.Path(docs_dir).parent
82 # Not yet supported: towncrier_draft_config_path = 'pyproject.toml' # relative to cwd
83
84 # -- Options for HTML -----------------------------------------------------------------
85
86 html_theme = "furo"
87 html_title = f"{project} documentation v{release}"
88
89 # Disable the generation of the various indexes
90 html_use_modindex = False
91 html_use_index = False
92
93 # -- Options for Manual Pages ---------------------------------------------------------
94
95
96 # List of manual pages generated
97 def determine_man_pages() -> List[Tuple[str, str, str, str, int]]:
98 """Determine which man pages need to be generated."""
99
100 def to_document_name(path: str, base_dir: str) -> str:
101 """Convert a provided path to a Sphinx "document name"."""
102 relative_path = os.path.relpath(path, base_dir)
103 root, _ = os.path.splitext(relative_path)
104 return root.replace(os.sep, "/")
105
106 # Crawl the entire man/commands/ directory and list every file with appropriate
107 # name and details.
108 man_dir = os.path.join(docs_dir, "man")
109 raw_subcommands = glob.glob(os.path.join(man_dir, "commands/*.rst"))
110 if not raw_subcommands:
111 raise FileNotFoundError(
112 "The individual subcommand manpages could not be found!"
113 )
114
115 retval = [
116 ("index", "pip", "package manager for Python packages", "pip developers", 1),
117 ]
118 for fname in raw_subcommands:
119 fname_base = to_document_name(fname, man_dir)
120 outname = "pip-" + fname_base.split("/")[1]
121 description = "description of {} command".format(outname.replace("-", " "))
122
123 retval.append((fname_base, outname, description, "pip developers", 1))
124
125 return retval
126
127
128 man_pages = determine_man_pages()
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/html/conf.py b/docs/html/conf.py
--- a/docs/html/conf.py
+++ b/docs/html/conf.py
@@ -30,7 +30,7 @@
# General information about the project.
project = "pip"
-copyright = "2008-2020, PyPA"
+copyright = "The pip developers"
# Find the version and release information.
# We have a single source of truth for our version number: pip's __init__.py file.
| {"golden_diff": "diff --git a/docs/html/conf.py b/docs/html/conf.py\n--- a/docs/html/conf.py\n+++ b/docs/html/conf.py\n@@ -30,7 +30,7 @@\n \n # General information about the project.\n project = \"pip\"\n-copyright = \"2008-2020, PyPA\"\n+copyright = \"The pip developers\"\n \n # Find the version and release information.\n # We have a single source of truth for our version number: pip's __init__.py file.\n", "issue": "Update quickstart guide to reflect user research\nUpdates quickstart guide to reflect most common tasks as discovered in our \"buy a feature\" user research.\r\n\r\nPreview: https://pip--9137.org.readthedocs.build/en/9137/quickstart/\n", "before_files": [{"content": "\"\"\"Sphinx configuration file for pip's documentation.\"\"\"\n\nimport glob\nimport os\nimport pathlib\nimport re\nimport sys\nfrom typing import List, Tuple\n\n# Add the docs/ directory to sys.path, because pip_sphinxext.py is there.\ndocs_dir = os.path.dirname(os.path.dirname(__file__))\nsys.path.insert(0, docs_dir)\n\n# -- General configuration ------------------------------------------------------------\n\nextensions = [\n # first-party extensions\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.extlinks\",\n \"sphinx.ext.intersphinx\",\n # our extensions\n \"pip_sphinxext\",\n # third-party extensions\n \"myst_parser\",\n \"sphinx_copybutton\",\n \"sphinx_inline_tabs\",\n \"sphinxcontrib.towncrier\",\n]\n\n# General information about the project.\nproject = \"pip\"\ncopyright = \"2008-2020, PyPA\"\n\n# Find the version and release information.\n# We have a single source of truth for our version number: pip's __init__.py file.\n# This next bit of code reads from it.\nfile_with_version = os.path.join(docs_dir, \"..\", \"src\", \"pip\", \"__init__.py\")\nwith open(file_with_version) as f:\n for line in f:\n m = re.match(r'__version__ = \"(.*)\"', line)\n if m:\n __version__ = m.group(1)\n # The short X.Y version.\n version = \".\".join(__version__.split(\".\")[:2])\n # The full version, including alpha/beta/rc tags.\n release = __version__\n break\n else: # AKA no-break\n version = release = \"dev\"\n\nprint(\"pip version:\", version)\nprint(\"pip release:\", release)\n\n# -- Options for smartquotes ----------------------------------------------------------\n\n# Disable the conversion of dashes so that long options like \"--find-links\" won't\n# render as \"-find-links\" if included in the text.The default of \"qDe\" converts normal\n# quote characters ('\"' and \"'\"), en and em dashes (\"--\" and \"---\"), and ellipses \"...\"\nsmartquotes_action = \"qe\"\n\n# -- Options for intersphinx ----------------------------------------------------------\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"pypug\": (\"https://packaging.python.org\", None),\n}\n\n# -- Options for extlinks -------------------------------------------------------------\n\nextlinks = {\n \"issue\": (\"https://github.com/pypa/pip/issues/%s\", \"#\"),\n \"pull\": (\"https://github.com/pypa/pip/pull/%s\", \"PR #\"),\n \"pypi\": (\"https://pypi.org/project/%s/\", \"\"),\n}\n\n# -- Options for towncrier_draft extension --------------------------------------------\n\ntowncrier_draft_autoversion_mode = \"draft\" # or: 'sphinx-release', 'sphinx-version'\ntowncrier_draft_include_empty = True\ntowncrier_draft_working_directory = pathlib.Path(docs_dir).parent\n# Not yet supported: towncrier_draft_config_path = 'pyproject.toml' # relative to cwd\n\n# -- Options for HTML 
-----------------------------------------------------------------\n\nhtml_theme = \"furo\"\nhtml_title = f\"{project} documentation v{release}\"\n\n# Disable the generation of the various indexes\nhtml_use_modindex = False\nhtml_use_index = False\n\n# -- Options for Manual Pages ---------------------------------------------------------\n\n\n# List of manual pages generated\ndef determine_man_pages() -> List[Tuple[str, str, str, str, int]]:\n \"\"\"Determine which man pages need to be generated.\"\"\"\n\n def to_document_name(path: str, base_dir: str) -> str:\n \"\"\"Convert a provided path to a Sphinx \"document name\".\"\"\"\n relative_path = os.path.relpath(path, base_dir)\n root, _ = os.path.splitext(relative_path)\n return root.replace(os.sep, \"/\")\n\n # Crawl the entire man/commands/ directory and list every file with appropriate\n # name and details.\n man_dir = os.path.join(docs_dir, \"man\")\n raw_subcommands = glob.glob(os.path.join(man_dir, \"commands/*.rst\"))\n if not raw_subcommands:\n raise FileNotFoundError(\n \"The individual subcommand manpages could not be found!\"\n )\n\n retval = [\n (\"index\", \"pip\", \"package manager for Python packages\", \"pip developers\", 1),\n ]\n for fname in raw_subcommands:\n fname_base = to_document_name(fname, man_dir)\n outname = \"pip-\" + fname_base.split(\"/\")[1]\n description = \"description of {} command\".format(outname.replace(\"-\", \" \"))\n\n retval.append((fname_base, outname, description, \"pip developers\", 1))\n\n return retval\n\n\nman_pages = determine_man_pages()\n", "path": "docs/html/conf.py"}], "after_files": [{"content": "\"\"\"Sphinx configuration file for pip's documentation.\"\"\"\n\nimport glob\nimport os\nimport pathlib\nimport re\nimport sys\nfrom typing import List, Tuple\n\n# Add the docs/ directory to sys.path, because pip_sphinxext.py is there.\ndocs_dir = os.path.dirname(os.path.dirname(__file__))\nsys.path.insert(0, docs_dir)\n\n# -- General configuration ------------------------------------------------------------\n\nextensions = [\n # first-party extensions\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.extlinks\",\n \"sphinx.ext.intersphinx\",\n # our extensions\n \"pip_sphinxext\",\n # third-party extensions\n \"myst_parser\",\n \"sphinx_copybutton\",\n \"sphinx_inline_tabs\",\n \"sphinxcontrib.towncrier\",\n]\n\n# General information about the project.\nproject = \"pip\"\ncopyright = \"The pip developers\"\n\n# Find the version and release information.\n# We have a single source of truth for our version number: pip's __init__.py file.\n# This next bit of code reads from it.\nfile_with_version = os.path.join(docs_dir, \"..\", \"src\", \"pip\", \"__init__.py\")\nwith open(file_with_version) as f:\n for line in f:\n m = re.match(r'__version__ = \"(.*)\"', line)\n if m:\n __version__ = m.group(1)\n # The short X.Y version.\n version = \".\".join(__version__.split(\".\")[:2])\n # The full version, including alpha/beta/rc tags.\n release = __version__\n break\n else: # AKA no-break\n version = release = \"dev\"\n\nprint(\"pip version:\", version)\nprint(\"pip release:\", release)\n\n# -- Options for smartquotes ----------------------------------------------------------\n\n# Disable the conversion of dashes so that long options like \"--find-links\" won't\n# render as \"-find-links\" if included in the text.The default of \"qDe\" converts normal\n# quote characters ('\"' and \"'\"), en and em dashes (\"--\" and \"---\"), and ellipses \"...\"\nsmartquotes_action = \"qe\"\n\n# -- Options 
for intersphinx ----------------------------------------------------------\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"pypug\": (\"https://packaging.python.org\", None),\n}\n\n# -- Options for extlinks -------------------------------------------------------------\n\nextlinks = {\n \"issue\": (\"https://github.com/pypa/pip/issues/%s\", \"#\"),\n \"pull\": (\"https://github.com/pypa/pip/pull/%s\", \"PR #\"),\n \"pypi\": (\"https://pypi.org/project/%s/\", \"\"),\n}\n\n# -- Options for towncrier_draft extension --------------------------------------------\n\ntowncrier_draft_autoversion_mode = \"draft\" # or: 'sphinx-release', 'sphinx-version'\ntowncrier_draft_include_empty = True\ntowncrier_draft_working_directory = pathlib.Path(docs_dir).parent\n# Not yet supported: towncrier_draft_config_path = 'pyproject.toml' # relative to cwd\n\n# -- Options for HTML -----------------------------------------------------------------\n\nhtml_theme = \"furo\"\nhtml_title = f\"{project} documentation v{release}\"\n\n# Disable the generation of the various indexes\nhtml_use_modindex = False\nhtml_use_index = False\n\n# -- Options for Manual Pages ---------------------------------------------------------\n\n\n# List of manual pages generated\ndef determine_man_pages() -> List[Tuple[str, str, str, str, int]]:\n \"\"\"Determine which man pages need to be generated.\"\"\"\n\n def to_document_name(path: str, base_dir: str) -> str:\n \"\"\"Convert a provided path to a Sphinx \"document name\".\"\"\"\n relative_path = os.path.relpath(path, base_dir)\n root, _ = os.path.splitext(relative_path)\n return root.replace(os.sep, \"/\")\n\n # Crawl the entire man/commands/ directory and list every file with appropriate\n # name and details.\n man_dir = os.path.join(docs_dir, \"man\")\n raw_subcommands = glob.glob(os.path.join(man_dir, \"commands/*.rst\"))\n if not raw_subcommands:\n raise FileNotFoundError(\n \"The individual subcommand manpages could not be found!\"\n )\n\n retval = [\n (\"index\", \"pip\", \"package manager for Python packages\", \"pip developers\", 1),\n ]\n for fname in raw_subcommands:\n fname_base = to_document_name(fname, man_dir)\n outname = \"pip-\" + fname_base.split(\"/\")[1]\n description = \"description of {} command\".format(outname.replace(\"-\", \" \"))\n\n retval.append((fname_base, outname, description, \"pip developers\", 1))\n\n return retval\n\n\nman_pages = determine_man_pages()\n", "path": "docs/html/conf.py"}]} | 1,627 | 108 |
gh_patches_debug_3962 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1874 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: fetch failed for source Stavanger Kommune
### I Have A Problem With:
A specific source
### What's Your Problem
Unable to fetch data for Stavanger, Norway. It has worked flawlessly for a while, but now there is no data.
### Source (if relevant)
stavanger_no
### Logs
```Shell
This error originated from a custom integration.
Logger: waste_collection_schedule.source_shell
Source: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136
Integration: waste_collection_schedule (documentation)
First occurred: 11:08:21 (2 occurrences)
Last logged: 11:13:26
fetch failed for source Stavanger Kommune: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py", line 49, in fetch r.raise_for_status() File "/usr/local/lib/python3.12/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show?id=###removed###
```
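A minimal reproduction of the failing request outside Home Assistant, using the test-case values from the source file (the `id` is redacted here, as in the log above):

```python
import requests

URL = "https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show"
params = {
    "id": "<calendar-id>",  # redacted
    "municipality": "Stavanger",
    "gnumber": "57",
    "bnumber": "922",
    "snumber": "0",
}
r = requests.get(URL, params=params, headers={"referer": "https://www.stavanger.kommune.no"})
print(r.status_code)  # 500 at the time of this report, per the traceback above
```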
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py`
Content:
```
1 from datetime import datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "Stavanger Kommune"
8 DESCRIPTION = "Source for Stavanger Kommune, Norway"
9 URL = "https://www.stavanger.kommune.no/"
10 TEST_CASES = {
11 "TestcaseI": {
12 "id": "57bf9d36-722e-400b-ae93-d80f8e354724",
13 "municipality": "Stavanger",
14 "gnumber": "57",
15 "bnumber": "922",
16 "snumber": "0",
17 },
18 }
19
20 ICON_MAP = {
21 "Restavfall": "mdi:trash-can",
22 "Papp/papir": "mdi:recycle",
23 "Bio": "mdi:leaf",
24 "Juletre": "mdi:pine-tree",
25 }
26
27
28 class Source:
29 def __init__(self, id, municipality, gnumber, bnumber, snumber):
30 self._id = id
31 self._municipality = municipality
32 self._gnumber = gnumber
33 self._bnumber = bnumber
34 self._snumber = snumber
35
36 def fetch(self):
37 url = "https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show"
38 headers = {"referer": "https://www.stavanger.kommune.no"}
39
40 params = {
41 "id": self._id,
42 "municipality": self._municipality,
43 "gnumber": self._gnumber,
44 "bnumber": self._bnumber,
45 "snumber": self._snumber,
46 }
47
48 r = requests.get(url, params=params, headers=headers)
49 r.raise_for_status()
50
51 soup = BeautifulSoup(r.text, "html.parser")
52
53 tag = soup.find_all("option")
54 entries = []
55 for tag in soup.find_all("tr", {"class": "waste-calendar__item"}):
56 if tag.text.strip() == "Dato og dag\nAvfallstype":
57 continue
58
59 year = tag.parent.attrs["data-month"].split("-")[1]
60 date = tag.text.strip().split(" - ")
61 date = datetime.strptime(date[0] + "." + year, "%d.%m.%Y").date()
62
63 for img in tag.find_all("img"):
64 waste_type = img.get("title")
65 entries.append(
66 Collection(date, waste_type, icon=ICON_MAP.get(waste_type))
67 )
68
69 return entries
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
@@ -38,6 +38,7 @@
headers = {"referer": "https://www.stavanger.kommune.no"}
params = {
+ "ids": self._id,
"id": self._id,
"municipality": self._municipality,
"gnumber": self._gnumber,
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n@@ -38,6 +38,7 @@\n headers = {\"referer\": \"https://www.stavanger.kommune.no\"}\n \n params = {\n+ \"ids\": self._id,\n \"id\": self._id,\n \"municipality\": self._municipality,\n \"gnumber\": self._gnumber,\n", "issue": "[Bug]: fetch failed for source Stavanger Kommune\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nUnable to fetch data for Stavanger Norway. Have worked flawless a while, but no there is no data.\n\n### Source (if relevant)\n\nstavanger_no\n\n### Logs\n\n```Shell\nThis error originated from a custom integration.\r\n\r\nLogger: waste_collection_schedule.source_shell\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136\r\nIntegration: waste_collection_schedule (documentation)\r\nFirst occurred: 11:08:21 (2 occurrences)\r\nLast logged: 11:13:26\r\n\r\nfetch failed for source Stavanger Kommune: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\", line 49, in fetch r.raise_for_status() File \"/usr/local/lib/python3.12/site-packages/requests/models.py\", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show?id=###removed###\n```\n\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Stavanger Kommune\"\nDESCRIPTION = \"Source for Stavanger Kommune, Norway\"\nURL = \"https://www.stavanger.kommune.no/\"\nTEST_CASES = {\n \"TestcaseI\": {\n \"id\": \"57bf9d36-722e-400b-ae93-d80f8e354724\",\n \"municipality\": \"Stavanger\",\n \"gnumber\": \"57\",\n \"bnumber\": \"922\",\n \"snumber\": \"0\",\n 
},\n}\n\nICON_MAP = {\n \"Restavfall\": \"mdi:trash-can\",\n \"Papp/papir\": \"mdi:recycle\",\n \"Bio\": \"mdi:leaf\",\n \"Juletre\": \"mdi:pine-tree\",\n}\n\n\nclass Source:\n def __init__(self, id, municipality, gnumber, bnumber, snumber):\n self._id = id\n self._municipality = municipality\n self._gnumber = gnumber\n self._bnumber = bnumber\n self._snumber = snumber\n\n def fetch(self):\n url = \"https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show\"\n headers = {\"referer\": \"https://www.stavanger.kommune.no\"}\n\n params = {\n \"id\": self._id,\n \"municipality\": self._municipality,\n \"gnumber\": self._gnumber,\n \"bnumber\": self._bnumber,\n \"snumber\": self._snumber,\n }\n\n r = requests.get(url, params=params, headers=headers)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n tag = soup.find_all(\"option\")\n entries = []\n for tag in soup.find_all(\"tr\", {\"class\": \"waste-calendar__item\"}):\n if tag.text.strip() == \"Dato og dag\\nAvfallstype\":\n continue\n\n year = tag.parent.attrs[\"data-month\"].split(\"-\")[1]\n date = tag.text.strip().split(\" - \")\n date = datetime.strptime(date[0] + \".\" + year, \"%d.%m.%Y\").date()\n\n for img in tag.find_all(\"img\"):\n waste_type = img.get(\"title\")\n entries.append(\n Collection(date, waste_type, icon=ICON_MAP.get(waste_type))\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py"}], "after_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Stavanger Kommune\"\nDESCRIPTION = \"Source for Stavanger Kommune, Norway\"\nURL = \"https://www.stavanger.kommune.no/\"\nTEST_CASES = {\n \"TestcaseI\": {\n \"id\": \"57bf9d36-722e-400b-ae93-d80f8e354724\",\n \"municipality\": \"Stavanger\",\n \"gnumber\": \"57\",\n \"bnumber\": \"922\",\n \"snumber\": \"0\",\n },\n}\n\nICON_MAP = {\n \"Restavfall\": \"mdi:trash-can\",\n \"Papp/papir\": \"mdi:recycle\",\n \"Bio\": \"mdi:leaf\",\n \"Juletre\": \"mdi:pine-tree\",\n}\n\n\nclass Source:\n def __init__(self, id, municipality, gnumber, bnumber, snumber):\n self._id = id\n self._municipality = municipality\n self._gnumber = gnumber\n self._bnumber = bnumber\n self._snumber = snumber\n\n def fetch(self):\n url = \"https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show\"\n headers = {\"referer\": \"https://www.stavanger.kommune.no\"}\n\n params = {\n \"ids\": self._id,\n \"id\": self._id,\n \"municipality\": self._municipality,\n \"gnumber\": self._gnumber,\n \"bnumber\": self._bnumber,\n \"snumber\": self._snumber,\n }\n\n r = requests.get(url, params=params, headers=headers)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n tag = soup.find_all(\"option\")\n entries = []\n for tag in soup.find_all(\"tr\", {\"class\": \"waste-calendar__item\"}):\n if tag.text.strip() == \"Dato og dag\\nAvfallstype\":\n continue\n\n year = tag.parent.attrs[\"data-month\"].split(\"-\")[1]\n date = tag.text.strip().split(\" - \")\n date = datetime.strptime(date[0] + \".\" + year, \"%d.%m.%Y\").date()\n\n for img in tag.find_all(\"img\"):\n waste_type = img.get(\"title\")\n entries.append(\n Collection(date, waste_type, icon=ICON_MAP.get(waste_type))\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py"}]} | 
1,522 | 150 |
gh_patches_debug_14847 | rasdani/github-patches | git_diff | open-mmlab__mmsegmentation-261 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
an unexpected keyword argument 'drop_out_ratio' when using config file ocrnet_r50-d8.py
**Describe the bug**
It seems that there is a small error in the config file:
https://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L26
and
https://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L38
**Error traceback**
```
Traceback (most recent call last):
File "./tools/train.py", line 161, in <module>
main()
File "./tools/train.py", line 131, in main
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 56, in build_segmentor
return build(cfg, SEGMENTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 31, in build
return build_from_cfg(cfg, registry, default_args)
File "/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py", line 36, in __init__
pretrained=pretrained)
File "/root/userfolder/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py", line 34, in __init__
self._init_decode_head(decode_head)
File "/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py", line 44, in _init_decode_head
self.decode_head.append(builder.build_head(decode_head[i]))
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 46, in build_head
return build(cfg, HEADS)
File "/root/userfolder/mmsegmentation/mmseg/models/builder.py", line 31, in build
return build_from_cfg(cfg, registry, default_args)
File "/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/root/userfolder/mmsegmentation/mmseg/models/decode_heads/fcn_head.py", line 31, in __init__
super(FCNHead, self).__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'drop_out_ratio'
```
**Bug fix**
Changing the keyword `drop_out_ratio=0.1` to `dropout_ratio=0.1` will fix this.
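For clarity, a sketch of the corrected decode-head entry (abridged — `norm_cfg` and the other unchanged keys are omitted; only the renamed keyword differs from the config shown below):

```python
corrected_fcn_head = dict(
    type='FCNHead',
    in_channels=1024,
    in_index=2,
    channels=256,
    num_convs=1,
    concat_input=False,
    dropout_ratio=0.1,  # renamed from drop_out_ratio
    num_classes=19,
    align_corners=False,
)
```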
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `configs/_base_/models/ocrnet_r50-d8.py`
Content:
```
1 # model settings
2 norm_cfg = dict(type='SyncBN', requires_grad=True)
3 model = dict(
4 type='CascadeEncoderDecoder',
5 num_stages=2,
6 pretrained='open-mmlab://resnet50_v1c',
7 backbone=dict(
8 type='ResNetV1c',
9 depth=50,
10 num_stages=4,
11 out_indices=(0, 1, 2, 3),
12 dilations=(1, 1, 2, 4),
13 strides=(1, 2, 1, 1),
14 norm_cfg=norm_cfg,
15 norm_eval=False,
16 style='pytorch',
17 contract_dilation=True),
18 decode_head=[
19 dict(
20 type='FCNHead',
21 in_channels=1024,
22 in_index=2,
23 channels=256,
24 num_convs=1,
25 concat_input=False,
26 drop_out_ratio=0.1,
27 num_classes=19,
28 norm_cfg=norm_cfg,
29 align_corners=False,
30 loss_decode=dict(
31 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
32 dict(
33 type='OCRHead',
34 in_channels=2048,
35 in_index=3,
36 channels=512,
37 ocr_channels=256,
38 drop_out_ratio=0.1,
39 num_classes=19,
40 norm_cfg=norm_cfg,
41 align_corners=False,
42 loss_decode=dict(
43 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
44 ])
45 # model training and testing settings
46 train_cfg = dict()
47 test_cfg = dict(mode='whole')
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/configs/_base_/models/ocrnet_r50-d8.py b/configs/_base_/models/ocrnet_r50-d8.py
--- a/configs/_base_/models/ocrnet_r50-d8.py
+++ b/configs/_base_/models/ocrnet_r50-d8.py
@@ -23,7 +23,7 @@
channels=256,
num_convs=1,
concat_input=False,
- drop_out_ratio=0.1,
+ dropout_ratio=0.1,
num_classes=19,
norm_cfg=norm_cfg,
align_corners=False,
@@ -35,7 +35,7 @@
in_index=3,
channels=512,
ocr_channels=256,
- drop_out_ratio=0.1,
+ dropout_ratio=0.1,
num_classes=19,
norm_cfg=norm_cfg,
align_corners=False,
| {"golden_diff": "diff --git a/configs/_base_/models/ocrnet_r50-d8.py b/configs/_base_/models/ocrnet_r50-d8.py\n--- a/configs/_base_/models/ocrnet_r50-d8.py\n+++ b/configs/_base_/models/ocrnet_r50-d8.py\n@@ -23,7 +23,7 @@\n channels=256,\n num_convs=1,\n concat_input=False,\n- drop_out_ratio=0.1,\n+ dropout_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n@@ -35,7 +35,7 @@\n in_index=3,\n channels=512,\n ocr_channels=256,\n- drop_out_ratio=0.1,\n+ dropout_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n", "issue": "an unexpected keyword argument 'drop_out_ratio' when using config file ocrnet_r50-d8.py\n**Describe the bug**\r\n\r\nIt seems that there is a small error in the config file: \r\nhttps://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L26\r\nand\r\nhttps://github.com/open-mmlab/mmsegmentation/blob/381eacb9a5e0e8eb475e456845f1d4c55f3c0339/configs/_base_/models/ocrnet_r50-d8.py#L38\r\n\r\n**Error traceback**\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"./tools/train.py\", line 161, in <module>\r\n main()\r\n File \"./tools/train.py\", line 131, in main\r\n cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 56, in build_segmentor\r\n return build(cfg, SEGMENTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 31, in build\r\n return build_from_cfg(cfg, registry, default_args)\r\n File \"/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py\", line 171, in build_from_cfg\r\n return obj_cls(**args)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py\", line 36, in __init__\r\n pretrained=pretrained)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py\", line 34, in __init__\r\n self._init_decode_head(decode_head)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/segmentors/cascade_encoder_decoder.py\", line 44, in _init_decode_head\r\n self.decode_head.append(builder.build_head(decode_head[i]))\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 46, in build_head\r\n return build(cfg, HEADS)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/builder.py\", line 31, in build\r\n return build_from_cfg(cfg, registry, default_args)\r\n File \"/root/userfolder/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py\", line 171, in build_from_cfg\r\n return obj_cls(**args)\r\n File \"/root/userfolder/mmsegmentation/mmseg/models/decode_heads/fcn_head.py\", line 31, in __init__\r\n super(FCNHead, self).__init__(**kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'drop_out_ratio'\r\n```\r\n\r\n**Bug fix**\r\nchange the keyword `drop_out_ratio=0.1` to `dropout_ratio=0.1` will fix this.\r\n\r\n\n", "before_files": [{"content": "# model settings\nnorm_cfg = dict(type='SyncBN', requires_grad=True)\nmodel = dict(\n type='CascadeEncoderDecoder',\n num_stages=2,\n pretrained='open-mmlab://resnet50_v1c',\n backbone=dict(\n type='ResNetV1c',\n depth=50,\n num_stages=4,\n out_indices=(0, 1, 2, 3),\n dilations=(1, 1, 2, 4),\n strides=(1, 2, 1, 1),\n norm_cfg=norm_cfg,\n norm_eval=False,\n style='pytorch',\n contract_dilation=True),\n decode_head=[\n dict(\n type='FCNHead',\n in_channels=1024,\n in_index=2,\n channels=256,\n 
num_convs=1,\n concat_input=False,\n drop_out_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),\n dict(\n type='OCRHead',\n in_channels=2048,\n in_index=3,\n channels=512,\n ocr_channels=256,\n drop_out_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))\n ])\n# model training and testing settings\ntrain_cfg = dict()\ntest_cfg = dict(mode='whole')\n", "path": "configs/_base_/models/ocrnet_r50-d8.py"}], "after_files": [{"content": "# model settings\nnorm_cfg = dict(type='SyncBN', requires_grad=True)\nmodel = dict(\n type='CascadeEncoderDecoder',\n num_stages=2,\n pretrained='open-mmlab://resnet50_v1c',\n backbone=dict(\n type='ResNetV1c',\n depth=50,\n num_stages=4,\n out_indices=(0, 1, 2, 3),\n dilations=(1, 1, 2, 4),\n strides=(1, 2, 1, 1),\n norm_cfg=norm_cfg,\n norm_eval=False,\n style='pytorch',\n contract_dilation=True),\n decode_head=[\n dict(\n type='FCNHead',\n in_channels=1024,\n in_index=2,\n channels=256,\n num_convs=1,\n concat_input=False,\n dropout_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),\n dict(\n type='OCRHead',\n in_channels=2048,\n in_index=3,\n channels=512,\n ocr_channels=256,\n dropout_ratio=0.1,\n num_classes=19,\n norm_cfg=norm_cfg,\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))\n ])\n# model training and testing settings\ntrain_cfg = dict()\ntest_cfg = dict(mode='whole')\n", "path": "configs/_base_/models/ocrnet_r50-d8.py"}]} | 1,428 | 213 |
gh_patches_debug_19626 | rasdani/github-patches | git_diff | Flexget__Flexget-2391 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tasks failing after migration from cfscraper to cloudscraper
Since #2381 one of my tasks stopped returning new entries.
After checking the log I noticed `cloudscraper` returns a ReCaptcha related error during the `rmz` automatic url-rewrite (a problem `cfscrape` doesn't have).
Upon further investigation, it seems that `cloudscraper` has this issue when importing an existing requests session generated by Flexget, since removing the last `task.requests` (for testing purposes) from https://github.com/Flexget/Flexget/blob/a4037da07f7d7707650596670c467ab3b7e748c9/flexget/plugins/operate/cfscraper.py#L39 results in a successful processing of the task.
Another finding: Disabling `urlrewriting` also did not trigger a ReCaptcha error (therefore completing the task) even without meddling with any of Flexget's .py's; however, it also doesn't return a download URL if [rmz.py](https://github.com/Flexget/Flexget/blob/master/flexget/components/sites/sites/rmz.py) is not processed.
### Expected behaviour:
Task returning proper entries with populated urls
### Actual behaviour:
Task failing due to `cloudscraper` bumping into ReCaptcha during `rmz`url-rewrite
### Steps to reproduce:
- Step 1:
#### Config:
```
tasks:
cftask:
disable:
- seen
- seen_info_hash
- retry_failed
# - urlrewriting
cfscraper: yes
rss: http://rmz.cr/feed
series:
# Random entry from the feed below#
- Days of our lives
```
#### Log:
<details>
<summary>(click to expand)</summary>
```
$ flexget execute --tasks cftask
2019-05-09 00:34 VERBOSE task_queue There are 1 tasks to execute. Shutdown will commence when they have completed.
2019-05-09 00:34 VERBOSE details cftask Produced 100 entries.
2019-05-09 00:34 VERBOSE series.db cftask identified by is currently on `auto` for Days of our lives. Multiple id types may be accepted until it locks in on the appropriate type.
2019-05-09 00:34 VERBOSE task cftask ACCEPTED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by series plugin because choosing first acceptable match
2019-05-09 00:35 WARNING urlrewriter cftask URL rewriting rmz failed: Captcha
2019-05-09 00:35 ERROR entry cftask Failed [RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB) (None)
2019-05-09 00:35 VERBOSE task cftask FAILED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by urlrewriting plugin
2019-05-09 00:35 VERBOSE details cftask Summary - Accepted: 0 (Rejected: 0 Undecided: 99 Failed: 1)
```
</details>
### Additional information:
- FlexGet version: 2.20.26
- Python version: 2.7.14
- Installation method: pip
- Using daemon (yes/no): yes
- OS and version: Windows 10 (1809)
--- END ISSUE ---
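As a minimal sketch of the suspected trigger (assuming cloudscraper's `create_scraper` accepts an existing session, which is how the Flexget wrapper calls it), a scraper built from a pre-populated `requests.Session` keeps that session's headers instead of cloudscraper's browser-like defaults:

```python
# Minimal sketch, not Flexget code: compare headers from a fresh scraper with
# one wrapped around an existing session (a stand-in for task.requests).
import cloudscraper
import requests

fresh = cloudscraper.create_scraper()
print(fresh.headers["User-Agent"])          # browser-like UA set by cloudscraper

existing = requests.Session()
existing.headers["User-Agent"] = "Python-Requests/2.21"   # hypothetical Flexget UA
wrapped = cloudscraper.create_scraper(sess=existing)
print(wrapped.headers["User-Agent"])        # inherited from the existing session
```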
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/operate/cfscraper.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3
4 import logging
5
6 from flexget import plugin
7 from flexget.event import event
8 from flexget.utils.requests import Session
9
10 log = logging.getLogger('cfscraper')
11
12
13 class CFScraper(object):
14 """
15 Plugin that enables scraping of cloudflare protected sites.
16
17 Example::
18 cfscraper: yes
19 """
20
21 schema = {'type': 'boolean'}
22
23 @plugin.priority(253)
24 def on_task_start(self, task, config):
25 try:
26 import cloudscraper
27 except ImportError as e:
28 log.debug('Error importing cloudscraper: %s' % e)
29 raise plugin.DependencyError(
30 'cfscraper', 'cloudscraper', 'cloudscraper module required. ImportError: %s' % e
31 )
32
33 class CFScrapeWrapper(Session, cloudscraper.CloudScraper):
34 """
35 This class allows the FlexGet session to inherit from CloudScraper instead of the requests.Session directly.
36 """
37
38 if config is True:
39 task.requests = CFScrapeWrapper.create_scraper(task.requests)
40
41
42 @event('plugin.register')
43 def register_plugin():
44 plugin.register(CFScraper, 'cfscraper', api_ver=2)
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/plugins/operate/cfscraper.py b/flexget/plugins/operate/cfscraper.py
--- a/flexget/plugins/operate/cfscraper.py
+++ b/flexget/plugins/operate/cfscraper.py
@@ -6,6 +6,7 @@
from flexget import plugin
from flexget.event import event
from flexget.utils.requests import Session
+from collections import OrderedDict
log = logging.getLogger('cfscraper')
@@ -36,6 +37,18 @@
"""
if config is True:
+ task.requests.headers = (
+ OrderedDict(
+ [
+ ('User-Agent', task.requests.headers['User-Agent']),
+ ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
+ ('Accept-Language', 'en-US,en;q=0.5'),
+ ('Accept-Encoding', 'gzip, deflate'),
+ ('Connection', 'close'),
+ ('Upgrade-Insecure-Requests', '1')
+ ]
+ )
+ )
task.requests = CFScrapeWrapper.create_scraper(task.requests)
| {"golden_diff": "diff --git a/flexget/plugins/operate/cfscraper.py b/flexget/plugins/operate/cfscraper.py\n--- a/flexget/plugins/operate/cfscraper.py\n+++ b/flexget/plugins/operate/cfscraper.py\n@@ -6,6 +6,7 @@\n from flexget import plugin\n from flexget.event import event\n from flexget.utils.requests import Session\n+from collections import OrderedDict\n \n log = logging.getLogger('cfscraper')\n \n@@ -36,6 +37,18 @@\n \"\"\"\n \n if config is True:\n+ task.requests.headers = (\n+ OrderedDict(\n+ [\n+ ('User-Agent', task.requests.headers['User-Agent']),\n+ ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),\n+ ('Accept-Language', 'en-US,en;q=0.5'),\n+ ('Accept-Encoding', 'gzip, deflate'),\n+ ('Connection', 'close'),\n+ ('Upgrade-Insecure-Requests', '1')\n+ ]\n+ )\n+ )\n task.requests = CFScrapeWrapper.create_scraper(task.requests)\n", "issue": "Tasks failing after migration from cfscraper to cloudscraper\nSince #2381 one of my tasks stopped returning new entries.\r\nAfter checking the log I noticed `cloudscraper` returns a ReCaptcha related error during the `rmz` automatic url-rewrite (a problem `cfscrape` doesn't have).\r\nUpon further investigation, it seems that `cloudscraper` has this issue when importing an existing requests session generated by Flexget, since removing the last `task.requests` (for testing purposes) from https://github.com/Flexget/Flexget/blob/a4037da07f7d7707650596670c467ab3b7e748c9/flexget/plugins/operate/cfscraper.py#L39 results in a successful processing of the task.\r\n\r\nAnother finding: Disabling `urlrewriting` also did not trigger a ReCaptcha error (therefore completing the task) even without meddling with any of Flexget's .py's, however it also doesn't return a download URL if [rmz.py](https://github.com/Flexget/Flexget/blob/master/flexget/components/sites/sites/rmz.py) is not processed.\r\n### Expected behaviour:\r\nTask returning proper entries with populated urls\r\n\r\n### Actual behaviour:\r\nTask failing due to `cloudscraper` bumping into ReCaptcha during `rmz`url-rewrite\r\n### Steps to reproduce:\r\n- Step 1: \r\n#### Config:\r\n```\r\ntasks:\r\n cftask:\r\n disable:\r\n - seen\r\n - seen_info_hash\r\n - retry_failed\r\n # - urlrewriting\r\n cfscraper: yes\r\n rss: http://rmz.cr/feed\r\n series:\r\n # Random entry from the feed below#\r\n - Days of our lives\r\n```\r\n \r\n#### Log:\r\n<details>\r\n <summary>(click to expand)</summary>\r\n\r\n```\r\n$ flexget execute --tasks cftask\r\n2019-05-09 00:34 VERBOSE task_queue There are 1 tasks to execute. Shutdown will commence when they have completed.\r\n2019-05-09 00:34 VERBOSE details cftask Produced 100 entries.\r\n2019-05-09 00:34 VERBOSE series.db cftask identified by is currently on `auto` for Days of our lives. 
Multiple id types may be accepted until it locks in on the appropriate type.\r\n2019-05-09 00:34 VERBOSE task cftask ACCEPTED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by series plugin because choosing first acceptable match\r\n2019-05-09 00:35 WARNING urlrewriter cftask URL rewriting rmz failed: Captcha\r\n2019-05-09 00:35 ERROR entry cftask Failed [RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB) (None)\r\n2019-05-09 00:35 VERBOSE task cftask FAILED: `[RR/NF/UL/OL/CU] Days of our Lives S54E160 720p WEB HEVC x265-RMTeam (198MB)` by urlrewriting plugin\r\n2019-05-09 00:35 VERBOSE details cftask Summary - Accepted: 0 (Rejected: 0 Undecided: 99 Failed: 1)\r\n```\r\n</details>\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 2.20.26\r\n- Python version: 2.7.14\r\n- Installation method: pip\r\n- Using daemon (yes/no): yes\r\n- OS and version: Windows 10 (1809)\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.requests import Session\n\nlog = logging.getLogger('cfscraper')\n\n\nclass CFScraper(object):\n \"\"\"\n Plugin that enables scraping of cloudflare protected sites.\n\n Example::\n cfscraper: yes\n \"\"\"\n\n schema = {'type': 'boolean'}\n\n @plugin.priority(253)\n def on_task_start(self, task, config):\n try:\n import cloudscraper\n except ImportError as e:\n log.debug('Error importing cloudscraper: %s' % e)\n raise plugin.DependencyError(\n 'cfscraper', 'cloudscraper', 'cloudscraper module required. ImportError: %s' % e\n )\n\n class CFScrapeWrapper(Session, cloudscraper.CloudScraper):\n \"\"\"\n This class allows the FlexGet session to inherit from CloudScraper instead of the requests.Session directly.\n \"\"\"\n\n if config is True:\n task.requests = CFScrapeWrapper.create_scraper(task.requests)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(CFScraper, 'cfscraper', api_ver=2)\n", "path": "flexget/plugins/operate/cfscraper.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.requests import Session\nfrom collections import OrderedDict\n\nlog = logging.getLogger('cfscraper')\n\n\nclass CFScraper(object):\n \"\"\"\n Plugin that enables scraping of cloudflare protected sites.\n\n Example::\n cfscraper: yes\n \"\"\"\n\n schema = {'type': 'boolean'}\n\n @plugin.priority(253)\n def on_task_start(self, task, config):\n try:\n import cloudscraper\n except ImportError as e:\n log.debug('Error importing cloudscraper: %s' % e)\n raise plugin.DependencyError(\n 'cfscraper', 'cloudscraper', 'cloudscraper module required. 
ImportError: %s' % e\n )\n\n class CFScrapeWrapper(Session, cloudscraper.CloudScraper):\n \"\"\"\n This class allows the FlexGet session to inherit from CloudScraper instead of the requests.Session directly.\n \"\"\"\n\n if config is True:\n task.requests.headers = (\n OrderedDict(\n [\n ('User-Agent', task.requests.headers['User-Agent']),\n ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),\n ('Accept-Language', 'en-US,en;q=0.5'),\n ('Accept-Encoding', 'gzip, deflate'),\n ('Connection', 'close'),\n ('Upgrade-Insecure-Requests', '1')\n ]\n )\n )\n task.requests = CFScrapeWrapper.create_scraper(task.requests)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(CFScraper, 'cfscraper', api_ver=2)\n", "path": "flexget/plugins/operate/cfscraper.py"}]} | 1,547 | 250 |
gh_patches_debug_11241 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1942 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use string form for type checking until Python 3.7 dropped
What the if here? The string form is fine until you drop Pythons that don’t support it.
_Originally posted by @henryiii in https://github.com/scikit-hep/pyhf/pull/1909#discussion_r944456765_
This was in reference to
```python
if T.TYPE_CHECKING:
PathOrStr = T.Union[str, os.PathLike[str]]
else:
PathOrStr = T.Union[str, "os.PathLike[str]"]
```
in PR #190 now in
https://github.com/scikit-hep/pyhf/blob/ad1dd86f1d7c1bcbf737805b6821e07c4ef75fca/src/pyhf/typing.py#L30-L33
So until Python 3.7 is dropped (I think this is the version; I am not very up to date with my type checking knowledge), we could instead just drop the `if TYPE_CHECKING` and use
```python
#TODO: Switch to os.PathLike[str] once Python 3.7 dropped
PathOrStr = Union[str, "os.PathLike[str]"]
```
This would also allow for reverting PR #1937.
--- END ISSUE ---
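A minimal sketch of why the quoted form works on older interpreters: the string is only resolved by static type checkers, while the unquoted subscript is evaluated at import time, and `os.PathLike` only supports `[...]` at runtime from Python 3.9 onwards.

```python
# Sketch only; mirrors the alias discussed above.
import os
from typing import Union

# Unquoted form: evaluated eagerly, so on Python 3.7/3.8 this line raises
# TypeError because os.PathLike cannot be subscripted at runtime.
# PathOrStr = Union[str, os.PathLike[str]]

# Quoted form: stored as a plain string and resolved only by type checkers,
# so the module imports cleanly on every supported version.
PathOrStr = Union[str, "os.PathLike[str]"]

def normalize(path: PathOrStr) -> str:   # hypothetical helper, for illustration
    return os.fspath(path)
```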
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/typing.py`
Content:
```
1 import os
2 import sys
3 from typing import TYPE_CHECKING, MutableSequence, Sequence, Union
4
5 if sys.version_info >= (3, 8):
6 from typing import Literal, TypedDict
7 else:
8 from typing_extensions import Literal, TypedDict
9
10 __all__ = (
11 "PathOrStr",
12 "ParameterBase",
13 "Parameter",
14 "Measurement",
15 "ModifierBase",
16 "NormSys",
17 "NormFactor",
18 "HistoSys",
19 "StatError",
20 "ShapeSys",
21 "ShapeFactor",
22 "LumiSys",
23 "Modifier",
24 "Sample",
25 "Channel",
26 "Observation",
27 "Workspace",
28 )
29
30 if TYPE_CHECKING:
31 PathOrStr = Union[str, os.PathLike[str]]
32 else:
33 PathOrStr = Union[str, "os.PathLike[str]"]
34
35
36 class ParameterBase(TypedDict, total=False):
37 auxdata: Sequence[float]
38 bounds: Sequence[Sequence[float]]
39 inits: Sequence[float]
40 sigmas: Sequence[float]
41 fixed: bool
42
43
44 class Parameter(ParameterBase):
45 name: str
46
47
48 class Config(TypedDict):
49 poi: str
50 parameters: MutableSequence[Parameter]
51
52
53 class Measurement(TypedDict):
54 name: str
55 config: Config
56
57
58 class ModifierBase(TypedDict):
59 name: str
60
61
62 class NormSysData(TypedDict):
63 lo: float
64 hi: float
65
66
67 class NormSys(ModifierBase):
68 type: Literal['normsys']
69 data: NormSysData
70
71
72 class NormFactor(ModifierBase):
73 type: Literal['normfactor']
74 data: None
75
76
77 class HistoSysData(TypedDict):
78 lo_data: Sequence[float]
79 hi_data: Sequence[float]
80
81
82 class HistoSys(ModifierBase):
83 type: Literal['histosys']
84 data: HistoSysData
85
86
87 class StatError(ModifierBase):
88 type: Literal['staterror']
89 data: Sequence[float]
90
91
92 class ShapeSys(ModifierBase):
93 type: Literal['shapesys']
94 data: Sequence[float]
95
96
97 class ShapeFactor(ModifierBase):
98 type: Literal['shapefactor']
99 data: None
100
101
102 class LumiSys(TypedDict):
103 name: Literal['lumi']
104 type: Literal['lumi']
105 data: None
106
107
108 Modifier = Union[
109 NormSys, NormFactor, HistoSys, StatError, ShapeSys, ShapeFactor, LumiSys
110 ]
111
112
113 class SampleBase(TypedDict, total=False):
114 parameter_configs: Sequence[Parameter]
115
116
117 class Sample(SampleBase):
118 name: str
119 data: Sequence[float]
120 modifiers: Sequence[Modifier]
121
122
123 class Channel(TypedDict):
124 name: str
125 samples: Sequence[Sample]
126
127
128 class Observation(TypedDict):
129 name: str
130 data: Sequence[float]
131
132
133 class Workspace(TypedDict):
134 measurements: Sequence[Measurement]
135 channels: Sequence[Channel]
136 observations: Sequence[Observation]
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyhf/typing.py b/src/pyhf/typing.py
--- a/src/pyhf/typing.py
+++ b/src/pyhf/typing.py
@@ -1,6 +1,6 @@
import os
import sys
-from typing import TYPE_CHECKING, MutableSequence, Sequence, Union
+from typing import MutableSequence, Sequence, Union
if sys.version_info >= (3, 8):
from typing import Literal, TypedDict
@@ -27,10 +27,8 @@
"Workspace",
)
-if TYPE_CHECKING:
- PathOrStr = Union[str, os.PathLike[str]]
-else:
- PathOrStr = Union[str, "os.PathLike[str]"]
+# TODO: Switch to os.PathLike[str] once Python 3.8 support dropped
+PathOrStr = Union[str, "os.PathLike[str]"]
class ParameterBase(TypedDict, total=False):
| {"golden_diff": "diff --git a/src/pyhf/typing.py b/src/pyhf/typing.py\n--- a/src/pyhf/typing.py\n+++ b/src/pyhf/typing.py\n@@ -1,6 +1,6 @@\n import os\n import sys\n-from typing import TYPE_CHECKING, MutableSequence, Sequence, Union\n+from typing import MutableSequence, Sequence, Union\n \n if sys.version_info >= (3, 8):\n from typing import Literal, TypedDict\n@@ -27,10 +27,8 @@\n \"Workspace\",\n )\n \n-if TYPE_CHECKING:\n- PathOrStr = Union[str, os.PathLike[str]]\n-else:\n- PathOrStr = Union[str, \"os.PathLike[str]\"]\n+# TODO: Switch to os.PathLike[str] once Python 3.8 support dropped\n+PathOrStr = Union[str, \"os.PathLike[str]\"]\n \n \n class ParameterBase(TypedDict, total=False):\n", "issue": "Use string form for type checking until Python 3.7 dropped\nWhat the if here? The string form is fine until you drop Pythons that don\u2019t support it.\r\n\r\n_Originally posted by @henryiii in https://github.com/scikit-hep/pyhf/pull/1909#discussion_r944456765_\r\n\r\nThis was in reference to \r\n\r\n```python\r\nif T.TYPE_CHECKING:\r\n PathOrStr = T.Union[str, os.PathLike[str]]\r\nelse:\r\n PathOrStr = T.Union[str, \"os.PathLike[str]\"]\r\n```\r\n\r\nin PR #190 now in \r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/ad1dd86f1d7c1bcbf737805b6821e07c4ef75fca/src/pyhf/typing.py#L30-L33\r\n\r\nSo until Python 3.7 is dropped (I think this is the version I am not very up to date with my type checking knowledge) we could instead just drop the `if TYPE_CHECKING` and use\r\n\r\n```python\r\n#TODO: Switch to os.PathLike[str] once Python 3.7 dropped\r\nPathOrStr = Union[str, \"os.PathLike[str]\"]\r\n```\r\n\r\nThis would also allow for reverting PR #1937.\n", "before_files": [{"content": "import os\nimport sys\nfrom typing import TYPE_CHECKING, MutableSequence, Sequence, Union\n\nif sys.version_info >= (3, 8):\n from typing import Literal, TypedDict\nelse:\n from typing_extensions import Literal, TypedDict\n\n__all__ = (\n \"PathOrStr\",\n \"ParameterBase\",\n \"Parameter\",\n \"Measurement\",\n \"ModifierBase\",\n \"NormSys\",\n \"NormFactor\",\n \"HistoSys\",\n \"StatError\",\n \"ShapeSys\",\n \"ShapeFactor\",\n \"LumiSys\",\n \"Modifier\",\n \"Sample\",\n \"Channel\",\n \"Observation\",\n \"Workspace\",\n)\n\nif TYPE_CHECKING:\n PathOrStr = Union[str, os.PathLike[str]]\nelse:\n PathOrStr = Union[str, \"os.PathLike[str]\"]\n\n\nclass ParameterBase(TypedDict, total=False):\n auxdata: Sequence[float]\n bounds: Sequence[Sequence[float]]\n inits: Sequence[float]\n sigmas: Sequence[float]\n fixed: bool\n\n\nclass Parameter(ParameterBase):\n name: str\n\n\nclass Config(TypedDict):\n poi: str\n parameters: MutableSequence[Parameter]\n\n\nclass Measurement(TypedDict):\n name: str\n config: Config\n\n\nclass ModifierBase(TypedDict):\n name: str\n\n\nclass NormSysData(TypedDict):\n lo: float\n hi: float\n\n\nclass NormSys(ModifierBase):\n type: Literal['normsys']\n data: NormSysData\n\n\nclass NormFactor(ModifierBase):\n type: Literal['normfactor']\n data: None\n\n\nclass HistoSysData(TypedDict):\n lo_data: Sequence[float]\n hi_data: Sequence[float]\n\n\nclass HistoSys(ModifierBase):\n type: Literal['histosys']\n data: HistoSysData\n\n\nclass StatError(ModifierBase):\n type: Literal['staterror']\n data: Sequence[float]\n\n\nclass ShapeSys(ModifierBase):\n type: Literal['shapesys']\n data: Sequence[float]\n\n\nclass ShapeFactor(ModifierBase):\n type: Literal['shapefactor']\n data: None\n\n\nclass LumiSys(TypedDict):\n name: Literal['lumi']\n type: Literal['lumi']\n data: None\n\n\nModifier = Union[\n NormSys, 
NormFactor, HistoSys, StatError, ShapeSys, ShapeFactor, LumiSys\n]\n\n\nclass SampleBase(TypedDict, total=False):\n parameter_configs: Sequence[Parameter]\n\n\nclass Sample(SampleBase):\n name: str\n data: Sequence[float]\n modifiers: Sequence[Modifier]\n\n\nclass Channel(TypedDict):\n name: str\n samples: Sequence[Sample]\n\n\nclass Observation(TypedDict):\n name: str\n data: Sequence[float]\n\n\nclass Workspace(TypedDict):\n measurements: Sequence[Measurement]\n channels: Sequence[Channel]\n observations: Sequence[Observation]\n", "path": "src/pyhf/typing.py"}], "after_files": [{"content": "import os\nimport sys\nfrom typing import MutableSequence, Sequence, Union\n\nif sys.version_info >= (3, 8):\n from typing import Literal, TypedDict\nelse:\n from typing_extensions import Literal, TypedDict\n\n__all__ = (\n \"PathOrStr\",\n \"ParameterBase\",\n \"Parameter\",\n \"Measurement\",\n \"ModifierBase\",\n \"NormSys\",\n \"NormFactor\",\n \"HistoSys\",\n \"StatError\",\n \"ShapeSys\",\n \"ShapeFactor\",\n \"LumiSys\",\n \"Modifier\",\n \"Sample\",\n \"Channel\",\n \"Observation\",\n \"Workspace\",\n)\n\n# TODO: Switch to os.PathLike[str] once Python 3.8 support dropped\nPathOrStr = Union[str, \"os.PathLike[str]\"]\n\n\nclass ParameterBase(TypedDict, total=False):\n auxdata: Sequence[float]\n bounds: Sequence[Sequence[float]]\n inits: Sequence[float]\n sigmas: Sequence[float]\n fixed: bool\n\n\nclass Parameter(ParameterBase):\n name: str\n\n\nclass Config(TypedDict):\n poi: str\n parameters: MutableSequence[Parameter]\n\n\nclass Measurement(TypedDict):\n name: str\n config: Config\n\n\nclass ModifierBase(TypedDict):\n name: str\n\n\nclass NormSysData(TypedDict):\n lo: float\n hi: float\n\n\nclass NormSys(ModifierBase):\n type: Literal['normsys']\n data: NormSysData\n\n\nclass NormFactor(ModifierBase):\n type: Literal['normfactor']\n data: None\n\n\nclass HistoSysData(TypedDict):\n lo_data: Sequence[float]\n hi_data: Sequence[float]\n\n\nclass HistoSys(ModifierBase):\n type: Literal['histosys']\n data: HistoSysData\n\n\nclass StatError(ModifierBase):\n type: Literal['staterror']\n data: Sequence[float]\n\n\nclass ShapeSys(ModifierBase):\n type: Literal['shapesys']\n data: Sequence[float]\n\n\nclass ShapeFactor(ModifierBase):\n type: Literal['shapefactor']\n data: None\n\n\nclass LumiSys(TypedDict):\n name: Literal['lumi']\n type: Literal['lumi']\n data: None\n\n\nModifier = Union[\n NormSys, NormFactor, HistoSys, StatError, ShapeSys, ShapeFactor, LumiSys\n]\n\n\nclass SampleBase(TypedDict, total=False):\n parameter_configs: Sequence[Parameter]\n\n\nclass Sample(SampleBase):\n name: str\n data: Sequence[float]\n modifiers: Sequence[Modifier]\n\n\nclass Channel(TypedDict):\n name: str\n samples: Sequence[Sample]\n\n\nclass Observation(TypedDict):\n name: str\n data: Sequence[float]\n\n\nclass Workspace(TypedDict):\n measurements: Sequence[Measurement]\n channels: Sequence[Channel]\n observations: Sequence[Observation]\n", "path": "src/pyhf/typing.py"}]} | 1,505 | 199 |
gh_patches_debug_31813 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-206 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Capture Celery Time-in-queue
--- END ISSUE ---
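A rough sketch of the usual signal-based approach (illustrative only; the header name and the reporting call are assumptions, not Scout's API): stamp a timestamp into the message headers when the task is published, then read it back when the worker picks the task up.

```python
# Hedged sketch: measure time-in-queue with Celery signals.
import time

from celery.signals import before_task_publish, task_prerun

@before_task_publish.connect
def stamp_publish_time(headers=None, **kwargs):
    if headers is not None and "x_publish_ts" not in headers:
        headers["x_publish_ts"] = time.time()            # hypothetical header name

@task_prerun.connect
def record_queue_time(task=None, **kwargs):
    sent = getattr(task.request, "x_publish_ts", None)   # custom headers surface here
    if sent is not None:
        print(f"time-in-queue: {time.time() - sent:.3f}s")  # report to APM instead
```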
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/celery.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from celery.signals import task_postrun, task_prerun
5
6 import scout_apm.core
7 from scout_apm.core.tracked_request import TrackedRequest
8
9
10 def prerun_callback(task=None, **kwargs):
11 tracked_request = TrackedRequest.instance()
12 tracked_request.mark_real_request()
13
14 delivery_info = task.request.delivery_info
15 tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
16 tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
17 tracked_request.tag("routing_key", delivery_info.get("routing_key", "unknown"))
18 tracked_request.tag("queue", delivery_info.get("queue", "unknown"))
19
20 tracked_request.start_span(operation=("Job/" + task.name))
21
22
23 def postrun_callback(task=None, **kwargs):
24 tracked_request = TrackedRequest.instance()
25 tracked_request.stop_span()
26
27
28 def install():
29 installed = scout_apm.core.install()
30 if not installed:
31 return
32
33 task_prerun.connect(prerun_callback)
34 task_postrun.connect(postrun_callback)
35
36
37 def uninstall():
38 task_prerun.disconnect(prerun_callback)
39 task_postrun.disconnect(postrun_callback)
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py
--- a/src/scout_apm/celery.py
+++ b/src/scout_apm/celery.py
@@ -1,16 +1,34 @@
# coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
-from celery.signals import task_postrun, task_prerun
+import datetime as dt
+
+from celery.signals import before_task_publish, task_postrun, task_prerun
import scout_apm.core
+from scout_apm.compat import datetime_to_timestamp
from scout_apm.core.tracked_request import TrackedRequest
+def before_publish_callback(headers=None, properties=None, **kwargs):
+ if "scout_task_start" not in headers:
+ headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
+
+
def prerun_callback(task=None, **kwargs):
tracked_request = TrackedRequest.instance()
tracked_request.mark_real_request()
+ start = getattr(task.request, "scout_task_start", None)
+ if start is not None:
+ now = datetime_to_timestamp(dt.datetime.utcnow())
+ try:
+ queue_time = now - start
+ except TypeError:
+ pass
+ else:
+ tracked_request.tag("queue_time", queue_time)
+
delivery_info = task.request.delivery_info
tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
@@ -30,10 +48,12 @@
if not installed:
return
+ before_task_publish.connect(before_publish_callback)
task_prerun.connect(prerun_callback)
task_postrun.connect(postrun_callback)
def uninstall():
+ before_task_publish.disconnect(before_publish_callback)
task_prerun.disconnect(prerun_callback)
task_postrun.disconnect(postrun_callback)
| {"golden_diff": "diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py\n--- a/src/scout_apm/celery.py\n+++ b/src/scout_apm/celery.py\n@@ -1,16 +1,34 @@\n # coding=utf-8\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n-from celery.signals import task_postrun, task_prerun\n+import datetime as dt\n+\n+from celery.signals import before_task_publish, task_postrun, task_prerun\n \n import scout_apm.core\n+from scout_apm.compat import datetime_to_timestamp\n from scout_apm.core.tracked_request import TrackedRequest\n \n \n+def before_publish_callback(headers=None, properties=None, **kwargs):\n+ if \"scout_task_start\" not in headers:\n+ headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n+\n+\n def prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.mark_real_request()\n \n+ start = getattr(task.request, \"scout_task_start\", None)\n+ if start is not None:\n+ now = datetime_to_timestamp(dt.datetime.utcnow())\n+ try:\n+ queue_time = now - start\n+ except TypeError:\n+ pass\n+ else:\n+ tracked_request.tag(\"queue_time\", queue_time)\n+\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n@@ -30,10 +48,12 @@\n if not installed:\n return\n \n+ before_task_publish.connect(before_publish_callback)\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n \n \n def uninstall():\n+ before_task_publish.disconnect(before_publish_callback)\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "issue": "Capture Celery Time-in-queue\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom celery.signals import task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.mark_real_request()\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install():\n installed = scout_apm.core.install()\n if not installed:\n return\n\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n\n\ndef uninstall():\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "path": "src/scout_apm/celery.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nfrom celery.signals import before_task_publish, task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n 
headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.mark_real_request()\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install():\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_publish_callback)\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_publish_callback)\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "path": "src/scout_apm/celery.py"}]} | 620 | 434 |
gh_patches_debug_19538 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2418 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Script Reloader does not unload old script
##### Steps to reproduce the problem:
1. mitmdump -s script.py
2. Modify script on disk.
##### System information
Mitmproxy version: 3.0.0 (2.0.0dev0407-0x315daa0)
Python version: 3.5.2
Platform: Linux-4.4.0-43-Microsoft-x86_64-with-Ubuntu-16.04-xenial
SSL version: OpenSSL 1.0.2g 1 Mar 2016
Linux distro: Ubuntu 16.04 xenial
--- END ISSUE ---
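A self-contained sketch of the underlying behaviour (an assumed mechanism, not mitmproxy code): once the script's module object sits in `sys.modules`, re-running the loader hands back that same object, so old state is never unloaded unless the cache entry is dropped first.

```python
# Illustration only; uses the same deprecated loader API as the addon code of
# that era, so run it on a Python version that still provides load_module().
import importlib.machinery
import pathlib
import sys

script = pathlib.Path("demo_addon.py")
script.write_text("greeting = 'hello'\n")
name = script.name                      # mirrors os.path.basename(path)

def load():
    return importlib.machinery.SourceFileLoader(name, str(script)).load_module()

first = load()
second = load()
print(second is first)                  # True: the cached module object is reused

sys.modules.pop(name, None)             # drop the cache entry before reloading
third = load()
print(third is first)                   # False: a genuinely fresh module object
```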
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/addons/script.py`
Content:
```
1 import os
2 import importlib
3 import time
4 import sys
5 import typing
6
7 from mitmproxy import addonmanager
8 from mitmproxy import exceptions
9 from mitmproxy import flow
10 from mitmproxy import command
11 from mitmproxy import eventsequence
12 from mitmproxy import ctx
13
14
15 def load_script(actx, path):
16 if not os.path.exists(path):
17 ctx.log.info("No such file: %s" % path)
18 return
19 loader = importlib.machinery.SourceFileLoader(os.path.basename(path), path)
20 try:
21 oldpath = sys.path
22 sys.path.insert(0, os.path.dirname(path))
23 with addonmanager.safecall():
24 m = loader.load_module()
25 if not getattr(m, "name", None):
26 m.name = path
27 return m
28 finally:
29 sys.path[:] = oldpath
30
31
32 class Script:
33 """
34 An addon that manages a single script.
35 """
36 ReloadInterval = 2
37
38 def __init__(self, path):
39 self.name = "scriptmanager:" + path
40 self.path = path
41 self.fullpath = os.path.expanduser(path)
42 self.ns = None
43
44 self.last_load = 0
45 self.last_mtime = 0
46 if not os.path.isfile(self.fullpath):
47 raise exceptions.OptionsError("No such script: %s" % path)
48
49 @property
50 def addons(self):
51 return [self.ns] if self.ns else []
52
53 def tick(self):
54 if time.time() - self.last_load > self.ReloadInterval:
55 mtime = os.stat(self.fullpath).st_mtime
56 if mtime > self.last_mtime:
57 ctx.log.info("Loading script: %s" % self.path)
58 if self.ns:
59 ctx.master.addons.remove(self.ns)
60 self.ns = load_script(ctx, self.fullpath)
61 if self.ns:
62 # We're already running, so we have to explicitly register and
63 # configure the addon
64 ctx.master.addons.register(self.ns)
65 ctx.master.addons.invoke_addon(self.ns, "running")
66 ctx.master.addons.invoke_addon(
67 self.ns,
68 "configure",
69 ctx.options.keys()
70 )
71 self.last_load = time.time()
72 self.last_mtime = mtime
73
74
75 class ScriptLoader:
76 """
77 An addon that manages loading scripts from options.
78 """
79 def __init__(self):
80 self.is_running = False
81 self.addons = []
82
83 def running(self):
84 self.is_running = True
85
86 @command.command("script.run")
87 def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:
88 """
89 Run a script on the specified flows. The script is loaded with
90 default options, and all lifecycle events for each flow are
91 simulated.
92 """
93 try:
94 s = Script(path)
95 l = addonmanager.Loader(ctx.master)
96 ctx.master.addons.invoke_addon(s, "load", l)
97 ctx.master.addons.invoke_addon(s, "configure", ctx.options.keys())
98 # Script is loaded on the first tick
99 ctx.master.addons.invoke_addon(s, "tick")
100 for f in flows:
101 for evt, arg in eventsequence.iterate(f):
102 ctx.master.addons.invoke_addon(s, evt, arg)
103 except exceptions.OptionsError as e:
104 raise exceptions.CommandError("Error running script: %s" % e) from e
105
106 def configure(self, updated):
107 if "scripts" in updated:
108 for s in ctx.options.scripts:
109 if ctx.options.scripts.count(s) > 1:
110 raise exceptions.OptionsError("Duplicate script: %s" % s)
111
112 for a in self.addons[:]:
113 if a.path not in ctx.options.scripts:
114 ctx.log.info("Un-loading script: %s" % a.name)
115 ctx.master.addons.remove(a)
116 self.addons.remove(a)
117
118 # The machinations below are to ensure that:
119 # - Scripts remain in the same order
120 # - Scripts are not initialized un-necessarily. If only a
121 # script's order in the script list has changed, it is just
122 # moved.
123
124 current = {}
125 for a in self.addons:
126 current[a.path] = a
127
128 ordered = []
129 newscripts = []
130 for s in ctx.options.scripts:
131 if s in current:
132 ordered.append(current[s])
133 else:
134 sc = Script(s)
135 ordered.append(sc)
136 newscripts.append(sc)
137
138 self.addons = ordered
139
140 for s in newscripts:
141 ctx.master.addons.register(s)
142 if self.is_running:
143 # If we're already running, we configure and tell the addon
144 # we're up and running.
145 ctx.master.addons.invoke_addon(s, "running")
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py
--- a/mitmproxy/addons/script.py
+++ b/mitmproxy/addons/script.py
@@ -52,11 +52,19 @@
def tick(self):
if time.time() - self.last_load > self.ReloadInterval:
- mtime = os.stat(self.fullpath).st_mtime
+ try:
+ mtime = os.stat(self.fullpath).st_mtime
+ except FileNotFoundError:
+ scripts = ctx.options.scripts
+ scripts.remove(self.path)
+ ctx.options.update(scripts=scripts)
+ return
+
if mtime > self.last_mtime:
ctx.log.info("Loading script: %s" % self.path)
if self.ns:
ctx.master.addons.remove(self.ns)
+ del sys.modules[self.ns.__name__]
self.ns = load_script(ctx, self.fullpath)
if self.ns:
# We're already running, so we have to explicitly register and
| {"golden_diff": "diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py\n--- a/mitmproxy/addons/script.py\n+++ b/mitmproxy/addons/script.py\n@@ -52,11 +52,19 @@\n \n def tick(self):\n if time.time() - self.last_load > self.ReloadInterval:\n- mtime = os.stat(self.fullpath).st_mtime\n+ try:\n+ mtime = os.stat(self.fullpath).st_mtime\n+ except FileNotFoundError:\n+ scripts = ctx.options.scripts\n+ scripts.remove(self.path)\n+ ctx.options.update(scripts=scripts)\n+ return\n+\n if mtime > self.last_mtime:\n ctx.log.info(\"Loading script: %s\" % self.path)\n if self.ns:\n ctx.master.addons.remove(self.ns)\n+ del sys.modules[self.ns.__name__]\n self.ns = load_script(ctx, self.fullpath)\n if self.ns:\n # We're already running, so we have to explicitly register and\n", "issue": "Script Reloader does not unload old script\n##### Steps to reproduce the problem:\r\n\r\n1. mitmdump -s script.py\r\n2. Modify script on disk.\r\n\r\n##### System information\r\n\r\nMitmproxy version: 3.0.0 (2.0.0dev0407-0x315daa0)\r\nPython version: 3.5.2\r\nPlatform: Linux-4.4.0-43-Microsoft-x86_64-with-Ubuntu-16.04-xenial\r\nSSL version: OpenSSL 1.0.2g 1 Mar 2016\r\nLinux distro: Ubuntu 16.04 xenial\n", "before_files": [{"content": "import os\nimport importlib\nimport time\nimport sys\nimport typing\n\nfrom mitmproxy import addonmanager\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import command\nfrom mitmproxy import eventsequence\nfrom mitmproxy import ctx\n\n\ndef load_script(actx, path):\n if not os.path.exists(path):\n ctx.log.info(\"No such file: %s\" % path)\n return\n loader = importlib.machinery.SourceFileLoader(os.path.basename(path), path)\n try:\n oldpath = sys.path\n sys.path.insert(0, os.path.dirname(path))\n with addonmanager.safecall():\n m = loader.load_module()\n if not getattr(m, \"name\", None):\n m.name = path\n return m\n finally:\n sys.path[:] = oldpath\n\n\nclass Script:\n \"\"\"\n An addon that manages a single script.\n \"\"\"\n ReloadInterval = 2\n\n def __init__(self, path):\n self.name = \"scriptmanager:\" + path\n self.path = path\n self.fullpath = os.path.expanduser(path)\n self.ns = None\n\n self.last_load = 0\n self.last_mtime = 0\n if not os.path.isfile(self.fullpath):\n raise exceptions.OptionsError(\"No such script: %s\" % path)\n\n @property\n def addons(self):\n return [self.ns] if self.ns else []\n\n def tick(self):\n if time.time() - self.last_load > self.ReloadInterval:\n mtime = os.stat(self.fullpath).st_mtime\n if mtime > self.last_mtime:\n ctx.log.info(\"Loading script: %s\" % self.path)\n if self.ns:\n ctx.master.addons.remove(self.ns)\n self.ns = load_script(ctx, self.fullpath)\n if self.ns:\n # We're already running, so we have to explicitly register and\n # configure the addon\n ctx.master.addons.register(self.ns)\n ctx.master.addons.invoke_addon(self.ns, \"running\")\n ctx.master.addons.invoke_addon(\n self.ns,\n \"configure\",\n ctx.options.keys()\n )\n self.last_load = time.time()\n self.last_mtime = mtime\n\n\nclass ScriptLoader:\n \"\"\"\n An addon that manages loading scripts from options.\n \"\"\"\n def __init__(self):\n self.is_running = False\n self.addons = []\n\n def running(self):\n self.is_running = True\n\n @command.command(\"script.run\")\n def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:\n \"\"\"\n Run a script on the specified flows. 
The script is loaded with\n default options, and all lifecycle events for each flow are\n simulated.\n \"\"\"\n try:\n s = Script(path)\n l = addonmanager.Loader(ctx.master)\n ctx.master.addons.invoke_addon(s, \"load\", l)\n ctx.master.addons.invoke_addon(s, \"configure\", ctx.options.keys())\n # Script is loaded on the first tick\n ctx.master.addons.invoke_addon(s, \"tick\")\n for f in flows:\n for evt, arg in eventsequence.iterate(f):\n ctx.master.addons.invoke_addon(s, evt, arg)\n except exceptions.OptionsError as e:\n raise exceptions.CommandError(\"Error running script: %s\" % e) from e\n\n def configure(self, updated):\n if \"scripts\" in updated:\n for s in ctx.options.scripts:\n if ctx.options.scripts.count(s) > 1:\n raise exceptions.OptionsError(\"Duplicate script: %s\" % s)\n\n for a in self.addons[:]:\n if a.path not in ctx.options.scripts:\n ctx.log.info(\"Un-loading script: %s\" % a.name)\n ctx.master.addons.remove(a)\n self.addons.remove(a)\n\n # The machinations below are to ensure that:\n # - Scripts remain in the same order\n # - Scripts are not initialized un-necessarily. If only a\n # script's order in the script list has changed, it is just\n # moved.\n\n current = {}\n for a in self.addons:\n current[a.path] = a\n\n ordered = []\n newscripts = []\n for s in ctx.options.scripts:\n if s in current:\n ordered.append(current[s])\n else:\n sc = Script(s)\n ordered.append(sc)\n newscripts.append(sc)\n\n self.addons = ordered\n\n for s in newscripts:\n ctx.master.addons.register(s)\n if self.is_running:\n # If we're already running, we configure and tell the addon\n # we're up and running.\n ctx.master.addons.invoke_addon(s, \"running\")\n", "path": "mitmproxy/addons/script.py"}], "after_files": [{"content": "import os\nimport importlib\nimport time\nimport sys\nimport typing\n\nfrom mitmproxy import addonmanager\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import command\nfrom mitmproxy import eventsequence\nfrom mitmproxy import ctx\n\n\ndef load_script(actx, path):\n if not os.path.exists(path):\n ctx.log.info(\"No such file: %s\" % path)\n return\n loader = importlib.machinery.SourceFileLoader(os.path.basename(path), path)\n try:\n oldpath = sys.path\n sys.path.insert(0, os.path.dirname(path))\n with addonmanager.safecall():\n m = loader.load_module()\n if not getattr(m, \"name\", None):\n m.name = path\n return m\n finally:\n sys.path[:] = oldpath\n\n\nclass Script:\n \"\"\"\n An addon that manages a single script.\n \"\"\"\n ReloadInterval = 2\n\n def __init__(self, path):\n self.name = \"scriptmanager:\" + path\n self.path = path\n self.fullpath = os.path.expanduser(path)\n self.ns = None\n\n self.last_load = 0\n self.last_mtime = 0\n if not os.path.isfile(self.fullpath):\n raise exceptions.OptionsError(\"No such script: %s\" % path)\n\n @property\n def addons(self):\n return [self.ns] if self.ns else []\n\n def tick(self):\n if time.time() - self.last_load > self.ReloadInterval:\n try:\n mtime = os.stat(self.fullpath).st_mtime\n except FileNotFoundError:\n scripts = ctx.options.scripts\n scripts.remove(self.path)\n ctx.options.update(scripts=scripts)\n return\n\n if mtime > self.last_mtime:\n ctx.log.info(\"Loading script: %s\" % self.path)\n if self.ns:\n ctx.master.addons.remove(self.ns)\n del sys.modules[self.ns.__name__]\n self.ns = load_script(ctx, self.fullpath)\n if self.ns:\n # We're already running, so we have to explicitly register and\n # configure the addon\n ctx.master.addons.register(self.ns)\n 
ctx.master.addons.invoke_addon(self.ns, \"running\")\n ctx.master.addons.invoke_addon(\n self.ns,\n \"configure\",\n ctx.options.keys()\n )\n self.last_load = time.time()\n self.last_mtime = mtime\n\n\nclass ScriptLoader:\n \"\"\"\n An addon that manages loading scripts from options.\n \"\"\"\n def __init__(self):\n self.is_running = False\n self.addons = []\n\n def running(self):\n self.is_running = True\n\n @command.command(\"script.run\")\n def script_run(self, flows: typing.Sequence[flow.Flow], path: str) -> None:\n \"\"\"\n Run a script on the specified flows. The script is loaded with\n default options, and all lifecycle events for each flow are\n simulated.\n \"\"\"\n try:\n s = Script(path)\n l = addonmanager.Loader(ctx.master)\n ctx.master.addons.invoke_addon(s, \"load\", l)\n ctx.master.addons.invoke_addon(s, \"configure\", ctx.options.keys())\n # Script is loaded on the first tick\n ctx.master.addons.invoke_addon(s, \"tick\")\n for f in flows:\n for evt, arg in eventsequence.iterate(f):\n ctx.master.addons.invoke_addon(s, evt, arg)\n except exceptions.OptionsError as e:\n raise exceptions.CommandError(\"Error running script: %s\" % e) from e\n\n def configure(self, updated):\n if \"scripts\" in updated:\n for s in ctx.options.scripts:\n if ctx.options.scripts.count(s) > 1:\n raise exceptions.OptionsError(\"Duplicate script: %s\" % s)\n\n for a in self.addons[:]:\n if a.path not in ctx.options.scripts:\n ctx.log.info(\"Un-loading script: %s\" % a.name)\n ctx.master.addons.remove(a)\n self.addons.remove(a)\n\n # The machinations below are to ensure that:\n # - Scripts remain in the same order\n # - Scripts are not initialized un-necessarily. If only a\n # script's order in the script list has changed, it is just\n # moved.\n\n current = {}\n for a in self.addons:\n current[a.path] = a\n\n ordered = []\n newscripts = []\n for s in ctx.options.scripts:\n if s in current:\n ordered.append(current[s])\n else:\n sc = Script(s)\n ordered.append(sc)\n newscripts.append(sc)\n\n self.addons = ordered\n\n for s in newscripts:\n ctx.master.addons.register(s)\n if self.is_running:\n # If we're already running, we configure and tell the addon\n # we're up and running.\n ctx.master.addons.invoke_addon(s, \"running\")\n", "path": "mitmproxy/addons/script.py"}]} | 1,783 | 224 |
gh_patches_debug_13458 | rasdani/github-patches | git_diff | explosion__spaCy-3499 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect lemma from lemmatizer
**Right:**
`[w.lemma_ for w in nlp('funnier')]` -> `['funny']`
**Wrong:**
`[w.lemma_ for w in nlp('faster')]` ->`['faster']`
I think for the word _faster_ the lemma should be _fast_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spacy/lang/en/lemmatizer/_adverbs_irreg.py`
Content:
```
1 # coding: utf8
2 from __future__ import unicode_literals
3
4
5 ADVERBS_IRREG = {
6 "best": ("well",),
7 "better": ("well",),
8 "deeper": ("deeply",),
9 "farther": ("far",),
10 "further": ("far",),
11 "harder": ("hard",),
12 "hardest": ("hard",),
13 }
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spacy/lang/en/lemmatizer/_adverbs_irreg.py b/spacy/lang/en/lemmatizer/_adverbs_irreg.py
--- a/spacy/lang/en/lemmatizer/_adverbs_irreg.py
+++ b/spacy/lang/en/lemmatizer/_adverbs_irreg.py
@@ -5,9 +5,27 @@
ADVERBS_IRREG = {
"best": ("well",),
"better": ("well",),
+ "closer": ("close",),
+ "closest": ("close",),
"deeper": ("deeply",),
+ "earlier": ("early",),
+ "earliest": ("early",),
"farther": ("far",),
"further": ("far",),
+ "faster": ("fast",),
+ "fastest": ("fast",),
"harder": ("hard",),
"hardest": ("hard",),
+ "longer": ("long",),
+ "longest": ("long",),
+ "nearer": ("near",),
+ "nearest": ("near",),
+ "nigher": ("nigh",),
+ "nighest": ("nigh",),
+ "quicker": ("quick",),
+ "quickest": ("quick",),
+ "slower": ("slow",),
+ "slowest": ("slowest",),
+ "sooner": ("soon",),
+ "soonest": ("soon",)
}
| {"golden_diff": "diff --git a/spacy/lang/en/lemmatizer/_adverbs_irreg.py b/spacy/lang/en/lemmatizer/_adverbs_irreg.py\n--- a/spacy/lang/en/lemmatizer/_adverbs_irreg.py\n+++ b/spacy/lang/en/lemmatizer/_adverbs_irreg.py\n@@ -5,9 +5,27 @@\n ADVERBS_IRREG = {\n \"best\": (\"well\",),\n \"better\": (\"well\",),\n+ \"closer\": (\"close\",),\n+ \"closest\": (\"close\",),\n \"deeper\": (\"deeply\",),\n+ \"earlier\": (\"early\",),\n+ \"earliest\": (\"early\",),\n \"farther\": (\"far\",),\n \"further\": (\"far\",),\n+ \"faster\": (\"fast\",),\n+ \"fastest\": (\"fast\",),\n \"harder\": (\"hard\",),\n \"hardest\": (\"hard\",),\n+ \"longer\": (\"long\",),\n+ \"longest\": (\"long\",),\n+ \"nearer\": (\"near\",),\n+ \"nearest\": (\"near\",),\n+ \"nigher\": (\"nigh\",),\n+ \"nighest\": (\"nigh\",),\n+ \"quicker\": (\"quick\",),\n+ \"quickest\": (\"quick\",),\n+ \"slower\": (\"slow\",),\n+ \"slowest\": (\"slowest\",),\n+ \"sooner\": (\"soon\",),\n+ \"soonest\": (\"soon\",)\n }\n", "issue": "Incorrect lemma from lemmatizer\n**Right:**\r\n`[w.lemma_ for w in nlp('funnier')]` -> `['funny']` \r\n\r\n**Wrong:**\r\n`[w.lemma_ for w in nlp('faster')]` ->`['faster']`\r\n\r\nI think for word _faster_ lemma should be _fast_\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\n\nADVERBS_IRREG = {\n \"best\": (\"well\",),\n \"better\": (\"well\",),\n \"deeper\": (\"deeply\",),\n \"farther\": (\"far\",),\n \"further\": (\"far\",),\n \"harder\": (\"hard\",),\n \"hardest\": (\"hard\",),\n}\n", "path": "spacy/lang/en/lemmatizer/_adverbs_irreg.py"}], "after_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\n\nADVERBS_IRREG = {\n \"best\": (\"well\",),\n \"better\": (\"well\",),\n \"closer\": (\"close\",),\n \"closest\": (\"close\",),\n \"deeper\": (\"deeply\",),\n \"earlier\": (\"early\",),\n \"earliest\": (\"early\",),\n \"farther\": (\"far\",),\n \"further\": (\"far\",),\n \"faster\": (\"fast\",),\n \"fastest\": (\"fast\",),\n \"harder\": (\"hard\",),\n \"hardest\": (\"hard\",),\n \"longer\": (\"long\",),\n \"longest\": (\"long\",),\n \"nearer\": (\"near\",),\n \"nearest\": (\"near\",),\n \"nigher\": (\"nigh\",),\n \"nighest\": (\"nigh\",),\n \"quicker\": (\"quick\",),\n \"quickest\": (\"quick\",),\n \"slower\": (\"slow\",),\n \"slowest\": (\"slowest\",),\n \"sooner\": (\"soon\",),\n \"soonest\": (\"soon\",)\n}\n", "path": "spacy/lang/en/lemmatizer/_adverbs_irreg.py"}]} | 451 | 331 |
gh_patches_debug_67335 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
base_url is used twice in redirects
For jupyterhub running at _http://address/prefix/_, almost every redirect has `base_url` twice. It was caused by PR #276. Reverting changes to jupyterhub/handlers/pages.py:27 [1] fixes the issue.
[1] https://github.com/Crunch-io/jupyterhub/commit/05e4ab41fe6691dd697eecc27d9abee2ac4e23b6
--- END ISSUE ---
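To make the symptom concrete (a sketch using a simplified stand-in for `jupyterhub.utils.url_path_join`, not JupyterHub's exact implementation): the user's `server.base_url` already carries the deployment prefix, so joining it onto the hub's base URL repeats the prefix.

```python
# Sketch only: a simplified join helper for illustration.
def url_path_join(*pieces):
    """Join URL path fragments with single slashes."""
    cleaned = [p.strip("/") for p in pieces if p.strip("/")]
    return "/" + "/".join(cleaned) + "/"

hub_base = "/prefix/hub/"
user_base = "/prefix/user/alice/"            # already includes the prefix

print(url_path_join(hub_base, user_base))    # /prefix/hub/prefix/user/alice/ (doubled)
print(user_base)                             # what the handler should redirect to
```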
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/handlers/pages.py`
Content:
```
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from tornado import web
7
8 from .. import orm
9 from ..utils import admin_only, url_path_join
10 from .base import BaseHandler
11
12
13 class RootHandler(BaseHandler):
14 """Render the Hub root page.
15
16 If logged in, redirects to:
17
18 - single-user server if running
19 - hub home, otherwise
20
21 Otherwise, renders login page.
22 """
23 def get(self):
24 user = self.get_current_user()
25 if user:
26 if user.running:
27 url = url_path_join(self.hub.server.base_url, user.server.base_url)
28 self.log.debug("User is running: %s", url)
29 else:
30 url = url_path_join(self.hub.server.base_url, 'home')
31 self.log.debug("User is not running: %s", url)
32 self.redirect(url, permanent=False)
33 return
34 # Redirect to the authenticator login page instead of rendering the
35 # login html page
36 url = self.authenticator.login_url(self.hub.server.base_url)
37 self.log.debug("No user logged in: %s", url)
38 self.redirect(url, permanent=False)
39
40 class HomeHandler(BaseHandler):
41 """Render the user's home page."""
42
43 @web.authenticated
44 def get(self):
45 html = self.render_template('home.html',
46 user=self.get_current_user(),
47 )
48 self.finish(html)
49
50
51 class AdminHandler(BaseHandler):
52 """Render the admin page."""
53
54 @admin_only
55 def get(self):
56 available = {'name', 'admin', 'running', 'last_activity'}
57 default_sort = ['admin', 'name']
58 mapping = {
59 'running': '_server_id'
60 }
61 default_order = {
62 'name': 'asc',
63 'last_activity': 'desc',
64 'admin': 'desc',
65 'running': 'desc',
66 }
67 sorts = self.get_arguments('sort') or default_sort
68 orders = self.get_arguments('order')
69
70 for bad in set(sorts).difference(available):
71 self.log.warn("ignoring invalid sort: %r", bad)
72 sorts.remove(bad)
73 for bad in set(orders).difference({'asc', 'desc'}):
74 self.log.warn("ignoring invalid order: %r", bad)
75 orders.remove(bad)
76
77 # add default sort as secondary
78 for s in default_sort:
79 if s not in sorts:
80 sorts.append(s)
81 if len(orders) < len(sorts):
82 for col in sorts[len(orders):]:
83 orders.append(default_order[col])
84 else:
85 orders = orders[:len(sorts)]
86
87 # this could be one incomprehensible nested list comprehension
88 # get User columns
89 cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
90 # get User.col.desc() order objects
91 ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
92
93 users = self.db.query(orm.User).order_by(*ordered)
94 running = users.filter(orm.User.server != None)
95
96 html = self.render_template('admin.html',
97 user=self.get_current_user(),
98 admin_access=self.settings.get('admin_access', False),
99 users=users,
100 running=running,
101 sort={s:o for s,o in zip(sorts, orders)},
102 )
103 self.finish(html)
104
105
106 default_handlers = [
107 (r'/', RootHandler),
108 (r'/home', HomeHandler),
109 (r'/admin', AdminHandler),
110 ]
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -24,7 +24,7 @@
user = self.get_current_user()
if user:
if user.running:
- url = url_path_join(self.hub.server.base_url, user.server.base_url)
+ url = user.server.base_url
self.log.debug("User is running: %s", url)
else:
url = url_path_join(self.hub.server.base_url, 'home')
| {"golden_diff": "diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -24,7 +24,7 @@\n user = self.get_current_user()\n if user:\n if user.running:\n- url = url_path_join(self.hub.server.base_url, user.server.base_url)\n+ url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n", "issue": "base_url is used twice in redirects\nFor jupyterhub running at _http://address/prefix/_ almost every redirect has `base_url` twice. It was caused by PR #276. Reverting changes to jupyterhub/handlers/pages.py:27 [1] fixes the issue.\n\n[1] https://github.com/Crunch-io/jupyterhub/commit/05e4ab41fe6691dd697eecc27d9abee2ac4e23b6\n\n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = url_path_join(self.hub.server.base_url, user.server.base_url)\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url, permanent=False)\n return\n # Redirect to the authenticator login page instead of rendering the\n # login html page\n url = self.authenticator.login_url(self.hub.server.base_url)\n self.log.debug(\"No user logged in: %s\", url)\n self.redirect(url, permanent=False)\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n running = users.filter(orm.User.server != None)\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n 
admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}], "after_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url, permanent=False)\n return\n # Redirect to the authenticator login page instead of rendering the\n # login html page\n url = self.authenticator.login_url(self.hub.server.base_url)\n self.log.debug(\"No user logged in: %s\", url)\n self.redirect(url, permanent=False)\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n running = users.filter(orm.User.server != None)\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}]} | 1,376 | 134 |
gh_patches_debug_1591 | rasdani/github-patches | git_diff | evennia__evennia-600 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'<' character is an "escape" character when messaging characters?
Hi! I'm not sure if anyone else has stumbled across this issue. I first discovered this using a custom character typeclass that didn't have its msg method overloaded, and then tested it again with the default character typeclass. I haven't messed with or overridden any functions found in /src. It seems as though the '<' character works like an escape character when messaging characters, as of the latest Evennia patch.
Examples of testing in-game with @py (using the default character typeclass):

It's pretty weird, and it likely doesn't affect anyone sorely, but I was using the '<' and '>' character for my prompt, so I was surprised and baffled when half of my prompt disappeared! It used to work as of last night, before I pulled in the latest Evennia changes. I was browsing through Evennia's latest commits, but I haven't found anything that I thought would affect this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/server/portal/mxp.py`
Content:
```
1 """
2 MXP - Mud eXtension Protocol.
3
4 Partial implementation of the MXP protocol.
5 The MXP protocol allows more advanced formatting options for telnet clients
6 that supports it (mudlet, zmud, mushclient are a few)
7
8 This only implements the SEND tag.
9
10 More information can be found on the following links:
11 http://www.zuggsoft.com/zmud/mxp.htm
12 http://www.mushclient.com/mushclient/mxp.htm
13 http://www.gammon.com.au/mushclient/addingservermxp.htm
14 """
15 import re
16
17 LINKS_SUB = re.compile(r'\{lc(.*?)\{lt(.*?)\{le', re.DOTALL)
18
19 MXP = "\x5B"
20 MXP_TEMPSECURE = "\x1B[4z"
21 MXP_SEND = MXP_TEMPSECURE + \
22 "<SEND HREF='\\1'>" + \
23 "\\2" + \
24 MXP_TEMPSECURE + \
25 "</SEND>"
26
27 def mxp_parse(text):
28 """
29 Replaces links to the correct format for MXP.
30 """
31 text = LINKS_SUB.sub(MXP_SEND, text)
32 return text
33
34 class Mxp(object):
35 """
36 Implements the MXP protocol.
37 """
38
39 def __init__(self, protocol):
40 """Initializes the protocol by checking if the client supports it."""
41 self.protocol = protocol
42 self.protocol.protocol_flags["MXP"] = False
43 self.protocol.will(MXP).addCallbacks(self.do_mxp, self.no_mxp)
44
45 def no_mxp(self, option):
46 """
47 Client does not support MXP.
48 """
49 self.protocol.protocol_flags["MXP"] = False
50
51 def do_mxp(self, option):
52 """
53 Client does support MXP.
54 """
55 self.protocol.protocol_flags["MXP"] = True
56 self.protocol.handshake_done()
57 self.protocol.requestNegotiation(MXP, '')
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/server/portal/mxp.py b/src/server/portal/mxp.py
--- a/src/server/portal/mxp.py
+++ b/src/server/portal/mxp.py
@@ -28,6 +28,10 @@
"""
Replaces links to the correct format for MXP.
"""
+ text = text.replace("&", "&") \
+ .replace("<", "<") \
+ .replace(">", ">")
+
text = LINKS_SUB.sub(MXP_SEND, text)
return text
| {"golden_diff": "diff --git a/src/server/portal/mxp.py b/src/server/portal/mxp.py\n--- a/src/server/portal/mxp.py\n+++ b/src/server/portal/mxp.py\n@@ -28,6 +28,10 @@\n \"\"\"\n Replaces links to the correct format for MXP.\n \"\"\"\n+ text = text.replace(\"&\", \"&\") \\\n+ .replace(\"<\", \"<\") \\\n+ .replace(\">\", \">\")\n+\n text = LINKS_SUB.sub(MXP_SEND, text)\n return text\n", "issue": "'<' character is an \"escape\" character when messaging characters?\nHi! I'm not sure if anyone else has stumbled across this issue. I first discovered this using a custom character typeclass that didn't have its msg method overloaded, and then tested it again with the default character typeclass. I haven't messed/overrode any functions found in /src. It seems as though the character '<' works like an escape character, as of the latest Evennia patch when messaging characters.\n\nExamples of testing in-game with @py (using the default character typeclass):\n\n\nIt's pretty weird, and it likely doesn't affect anyone sorely, but I was using the '<' and '>' character for my prompt, so I was surprised and baffled when half of my prompt disappeared! It used to work as of last night, before I pulled in the latest Evennia changes. I was browsing through Evennia's latest commits, but I haven't found anything that I thought would affect this.\n\n", "before_files": [{"content": "\"\"\"\nMXP - Mud eXtension Protocol.\n\nPartial implementation of the MXP protocol.\nThe MXP protocol allows more advanced formatting options for telnet clients\nthat supports it (mudlet, zmud, mushclient are a few)\n\nThis only implements the SEND tag.\n\nMore information can be found on the following links:\nhttp://www.zuggsoft.com/zmud/mxp.htm\nhttp://www.mushclient.com/mushclient/mxp.htm\nhttp://www.gammon.com.au/mushclient/addingservermxp.htm\n\"\"\"\nimport re\n\nLINKS_SUB = re.compile(r'\\{lc(.*?)\\{lt(.*?)\\{le', re.DOTALL)\n\nMXP = \"\\x5B\"\nMXP_TEMPSECURE = \"\\x1B[4z\"\nMXP_SEND = MXP_TEMPSECURE + \\\n \"<SEND HREF='\\\\1'>\" + \\\n \"\\\\2\" + \\\n MXP_TEMPSECURE + \\\n \"</SEND>\"\n\ndef mxp_parse(text):\n \"\"\"\n Replaces links to the correct format for MXP.\n \"\"\"\n text = LINKS_SUB.sub(MXP_SEND, text)\n return text\n\nclass Mxp(object):\n \"\"\"\n Implements the MXP protocol.\n \"\"\"\n\n def __init__(self, protocol):\n \"\"\"Initializes the protocol by checking if the client supports it.\"\"\"\n self.protocol = protocol\n self.protocol.protocol_flags[\"MXP\"] = False\n self.protocol.will(MXP).addCallbacks(self.do_mxp, self.no_mxp)\n\n def no_mxp(self, option):\n \"\"\"\n Client does not support MXP.\n \"\"\"\n self.protocol.protocol_flags[\"MXP\"] = False\n\n def do_mxp(self, option):\n \"\"\"\n Client does support MXP.\n \"\"\"\n self.protocol.protocol_flags[\"MXP\"] = True\n self.protocol.handshake_done()\n self.protocol.requestNegotiation(MXP, '')\n", "path": "src/server/portal/mxp.py"}], "after_files": [{"content": "\"\"\"\nMXP - Mud eXtension Protocol.\n\nPartial implementation of the MXP protocol.\nThe MXP protocol allows more advanced formatting options for telnet clients\nthat supports it (mudlet, zmud, mushclient are a few)\n\nThis only implements the SEND tag.\n\nMore information can be found on the following links:\nhttp://www.zuggsoft.com/zmud/mxp.htm\nhttp://www.mushclient.com/mushclient/mxp.htm\nhttp://www.gammon.com.au/mushclient/addingservermxp.htm\n\"\"\"\nimport re\n\nLINKS_SUB = re.compile(r'\\{lc(.*?)\\{lt(.*?)\\{le', re.DOTALL)\n\nMXP = \"\\x5B\"\nMXP_TEMPSECURE = \"\\x1B[4z\"\nMXP_SEND = 
MXP_TEMPSECURE + \\\n \"<SEND HREF='\\\\1'>\" + \\\n \"\\\\2\" + \\\n MXP_TEMPSECURE + \\\n \"</SEND>\"\n\ndef mxp_parse(text):\n \"\"\"\n Replaces links to the correct format for MXP.\n \"\"\"\n text = text.replace(\"&\", \"&\") \\\n .replace(\"<\", \"<\") \\\n .replace(\">\", \">\")\n\n text = LINKS_SUB.sub(MXP_SEND, text)\n return text\n\nclass Mxp(object):\n \"\"\"\n Implements the MXP protocol.\n \"\"\"\n\n def __init__(self, protocol):\n \"\"\"Initializes the protocol by checking if the client supports it.\"\"\"\n self.protocol = protocol\n self.protocol.protocol_flags[\"MXP\"] = False\n self.protocol.will(MXP).addCallbacks(self.do_mxp, self.no_mxp)\n\n def no_mxp(self, option):\n \"\"\"\n Client does not support MXP.\n \"\"\"\n self.protocol.protocol_flags[\"MXP\"] = False\n\n def do_mxp(self, option):\n \"\"\"\n Client does support MXP.\n \"\"\"\n self.protocol.protocol_flags[\"MXP\"] = True\n self.protocol.handshake_done()\n self.protocol.requestNegotiation(MXP, '')\n", "path": "src/server/portal/mxp.py"}]} | 1,046 | 120 |
gh_patches_debug_1718 | rasdani/github-patches | git_diff | iterative__dvc-6683 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
alias `list` as `ls`?
I have been reaching for `dvc ls` out of habit instead of `dvc list`. Should we introduce an alias for `dvc list`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/command/ls/__init__.py`
Content:
```
1 import argparse
2 import logging
3
4 from dvc.command import completion
5 from dvc.command.base import CmdBaseNoRepo, append_doc_link
6 from dvc.command.ls.ls_colors import LsColors
7 from dvc.exceptions import DvcException
8 from dvc.ui import ui
9
10 logger = logging.getLogger(__name__)
11
12
13 def _prettify(entries, with_color=False):
14 if with_color:
15 ls_colors = LsColors()
16 fmt = ls_colors.format
17 else:
18
19 def fmt(entry):
20 return entry["path"]
21
22 return [fmt(entry) for entry in entries]
23
24
25 class CmdList(CmdBaseNoRepo):
26 def run(self):
27 from dvc.repo import Repo
28
29 try:
30 entries = Repo.ls(
31 self.args.url,
32 self.args.path,
33 rev=self.args.rev,
34 recursive=self.args.recursive,
35 dvc_only=self.args.dvc_only,
36 )
37 if self.args.show_json:
38 import json
39
40 ui.write(json.dumps(entries))
41 elif entries:
42 entries = _prettify(entries, with_color=True)
43 ui.write("\n".join(entries))
44 return 0
45 except DvcException:
46 logger.exception(f"failed to list '{self.args.url}'")
47 return 1
48
49
50 def add_parser(subparsers, parent_parser):
51 LIST_HELP = (
52 "List repository contents, including files"
53 " and directories tracked by DVC and by Git."
54 )
55 list_parser = subparsers.add_parser(
56 "list",
57 parents=[parent_parser],
58 description=append_doc_link(LIST_HELP, "list"),
59 help=LIST_HELP,
60 formatter_class=argparse.RawTextHelpFormatter,
61 )
62 list_parser.add_argument("url", help="Location of DVC repository to list")
63 list_parser.add_argument(
64 "-R",
65 "--recursive",
66 action="store_true",
67 help="Recursively list files.",
68 )
69 list_parser.add_argument(
70 "--dvc-only", action="store_true", help="Show only DVC outputs."
71 )
72 list_parser.add_argument(
73 "--show-json", action="store_true", help="Show output in JSON format."
74 )
75 list_parser.add_argument(
76 "--rev",
77 nargs="?",
78 help="Git revision (e.g. SHA, branch, tag)",
79 metavar="<commit>",
80 )
81 list_parser.add_argument(
82 "path",
83 nargs="?",
84 help="Path to directory within the repository to list outputs for",
85 ).complete = completion.DIR
86 list_parser.set_defaults(func=CmdList)
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py
--- a/dvc/command/ls/__init__.py
+++ b/dvc/command/ls/__init__.py
@@ -54,6 +54,7 @@
)
list_parser = subparsers.add_parser(
"list",
+ aliases=["ls"],
parents=[parent_parser],
description=append_doc_link(LIST_HELP, "list"),
help=LIST_HELP,
| {"golden_diff": "diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py\n--- a/dvc/command/ls/__init__.py\n+++ b/dvc/command/ls/__init__.py\n@@ -54,6 +54,7 @@\n )\n list_parser = subparsers.add_parser(\n \"list\",\n+ aliases=[\"ls\"],\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n", "issue": "alias `list` as `ls`?\nI have been reaching for `dvc ls` out of habit instead of `dvc list`. Should we introduce an alias for `dvc list`?\n", "before_files": [{"content": "import argparse\nimport logging\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.command.ls.ls_colors import LsColors\nfrom dvc.exceptions import DvcException\nfrom dvc.ui import ui\n\nlogger = logging.getLogger(__name__)\n\n\ndef _prettify(entries, with_color=False):\n if with_color:\n ls_colors = LsColors()\n fmt = ls_colors.format\n else:\n\n def fmt(entry):\n return entry[\"path\"]\n\n return [fmt(entry) for entry in entries]\n\n\nclass CmdList(CmdBaseNoRepo):\n def run(self):\n from dvc.repo import Repo\n\n try:\n entries = Repo.ls(\n self.args.url,\n self.args.path,\n rev=self.args.rev,\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n if self.args.show_json:\n import json\n\n ui.write(json.dumps(entries))\n elif entries:\n entries = _prettify(entries, with_color=True)\n ui.write(\"\\n\".join(entries))\n return 0\n except DvcException:\n logger.exception(f\"failed to list '{self.args.url}'\")\n return 1\n\n\ndef add_parser(subparsers, parent_parser):\n LIST_HELP = (\n \"List repository contents, including files\"\n \" and directories tracked by DVC and by Git.\"\n )\n list_parser = subparsers.add_parser(\n \"list\",\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n list_parser.add_argument(\"url\", help=\"Location of DVC repository to list\")\n list_parser.add_argument(\n \"-R\",\n \"--recursive\",\n action=\"store_true\",\n help=\"Recursively list files.\",\n )\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n list_parser.add_argument(\n \"--show-json\", action=\"store_true\", help=\"Show output in JSON format.\"\n )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n help=\"Git revision (e.g. 
SHA, branch, tag)\",\n metavar=\"<commit>\",\n )\n list_parser.add_argument(\n \"path\",\n nargs=\"?\",\n help=\"Path to directory within the repository to list outputs for\",\n ).complete = completion.DIR\n list_parser.set_defaults(func=CmdList)\n", "path": "dvc/command/ls/__init__.py"}], "after_files": [{"content": "import argparse\nimport logging\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.command.ls.ls_colors import LsColors\nfrom dvc.exceptions import DvcException\nfrom dvc.ui import ui\n\nlogger = logging.getLogger(__name__)\n\n\ndef _prettify(entries, with_color=False):\n if with_color:\n ls_colors = LsColors()\n fmt = ls_colors.format\n else:\n\n def fmt(entry):\n return entry[\"path\"]\n\n return [fmt(entry) for entry in entries]\n\n\nclass CmdList(CmdBaseNoRepo):\n def run(self):\n from dvc.repo import Repo\n\n try:\n entries = Repo.ls(\n self.args.url,\n self.args.path,\n rev=self.args.rev,\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n if self.args.show_json:\n import json\n\n ui.write(json.dumps(entries))\n elif entries:\n entries = _prettify(entries, with_color=True)\n ui.write(\"\\n\".join(entries))\n return 0\n except DvcException:\n logger.exception(f\"failed to list '{self.args.url}'\")\n return 1\n\n\ndef add_parser(subparsers, parent_parser):\n LIST_HELP = (\n \"List repository contents, including files\"\n \" and directories tracked by DVC and by Git.\"\n )\n list_parser = subparsers.add_parser(\n \"list\",\n aliases=[\"ls\"],\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n list_parser.add_argument(\"url\", help=\"Location of DVC repository to list\")\n list_parser.add_argument(\n \"-R\",\n \"--recursive\",\n action=\"store_true\",\n help=\"Recursively list files.\",\n )\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n list_parser.add_argument(\n \"--show-json\", action=\"store_true\", help=\"Show output in JSON format.\"\n )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n help=\"Git revision (e.g. SHA, branch, tag)\",\n metavar=\"<commit>\",\n )\n list_parser.add_argument(\n \"path\",\n nargs=\"?\",\n help=\"Path to directory within the repository to list outputs for\",\n ).complete = completion.DIR\n list_parser.set_defaults(func=CmdList)\n", "path": "dvc/command/ls/__init__.py"}]} | 1,011 | 106 |
gh_patches_debug_35244 | rasdani/github-patches | git_diff | Zeroto521__my-data-toolkit-567 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ENH: New accessor `fillna_regression`
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [x] whatsnew entry
Fill na value with regression method
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dtoolkit/accessor/dataframe/fillna_regression.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 import pandas as pd
6
7 from dtoolkit._typing import IntOrStr
8 from dtoolkit.accessor.register import register_dataframe_method
9
10 if TYPE_CHECKING:
11 from sklearn.base import RegressorMixin
12
13
14 @register_dataframe_method
15 def fillna_regression(
16 df: pd.DataFrame,
17 method: RegressorMixin,
18 X: IntOrStr | list[IntOrStr] | pd.Index,
19 y: IntOrStr,
20 how: str = "na",
21 **kwargs,
22 ) -> pd.DataFrame:
23 """
24 Fill na value with regression algorithm.
25
26 Parameters
27 ----------
28 method : RegressorMixin
29 Regression transformer.
30
31 X : int or str, list of int or str, Index
32 Feature columns.
33
34 y : int or str
35 Target column.
36
37 how : {'na', 'all'}, default 'na'
38 Only fill na value or apply regression to entire target column.
39
40 **kwargs
41 See the documentation for ``method`` for complete details on
42 the keyword arguments.
43
44 See Also
45 --------
46 sklearn.kernel_ridge
47 sklearn.linear_model
48 sklearn.dummy.DummyRegressor
49 sklearn.ensemble.AdaBoostRegressor
50 sklearn.ensemble.BaggingRegressor
51 sklearn.ensemble.ExtraTreesRegressor
52 sklearn.ensemble.GradientBoostingRegressor
53 sklearn.ensemble.RandomForestRegressor
54 sklearn.ensemble.StackingRegressor
55 sklearn.ensemble.VotingRegressor
56 sklearn.ensemble.HistGradientBoostingRegressor
57 sklearn.gaussian_process.GaussianProcessRegressor
58 sklearn.isotonic.IsotonicRegression
59 sklearn.kernel_ridge.KernelRidge
60 sklearn.neighbors.KNeighborsRegressor
61 sklearn.neighbors.RadiusNeighborsRegressor
62 sklearn.neural_network.MLPRegressor
63 sklearn.svm.LinearSVR
64 sklearn.svm.NuSVR
65 sklearn.svm.SVR
66 sklearn.tree.DecisionTreeRegressor
67 sklearn.tree.ExtraTreeRegressor
68
69 Examples
70 --------
71 >>> import dtoolkit.accessor
72 >>> import pandas as pd
73 >>> from sklearn.linear_model import LinearRegression
74
75 .. math:: y = 1 \\times x_0 + 2 \\times x_1 + 3
76
77 >>> df = pd.DataFrame(
78 ... [
79 ... [1, 1, 6],
80 ... [1, 2, 8],
81 ... [2, 2, 9],
82 ... [2, 3, 11],
83 ... [3, 5, None],
84 ... ],
85 ... columns=['x1', 'x2', 'y'],
86 ... )
87 >>> df
88 x1 x2 y
89 0 1 1 6.0
90 1 1 2 8.0
91 2 2 2 9.0
92 3 2 3 11.0
93 4 3 5 NaN
94
95 Use 'x1' and 'x2' columns to fit 'y' column and fill the value.
96
97 >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')
98 x1 x2 y
99 0 1 1 6.0
100 1 1 2 8.0
101 2 2 2 9.0
102 3 2 3 11.0
103 4 3 5 16.0
104 """
105
106 if how not in {"na", "all"}:
107 raise ValueError(f"invalid inf option: {how!r}")
108
109 if isinstance(X, (str, int)):
110 X = [X]
111
112 index_notnull = df[df[y].notnull()].index
113 model = method(**kwargs).fit(
114 df.loc[index_notnull, X],
115 df.loc[index_notnull, y],
116 )
117
118 if how == "all":
119 df[y] = model.predict(df[X])
120 elif how == "na":
121 index_null = df[df[y].isnull()].index
122 df.loc[index_null, y] = model.predict(df.loc[index_null, X])
123
124 return df
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dtoolkit/accessor/dataframe/fillna_regression.py b/dtoolkit/accessor/dataframe/fillna_regression.py
--- a/dtoolkit/accessor/dataframe/fillna_regression.py
+++ b/dtoolkit/accessor/dataframe/fillna_regression.py
@@ -15,8 +15,7 @@
def fillna_regression(
df: pd.DataFrame,
method: RegressorMixin,
- X: IntOrStr | list[IntOrStr] | pd.Index,
- y: IntOrStr,
+ columns: dict[IntOrStr, IntOrStr | list[IntOrStr] | pd.Index],
how: str = "na",
**kwargs,
) -> pd.DataFrame:
@@ -28,11 +27,9 @@
method : RegressorMixin
Regression transformer.
- X : int or str, list of int or str, Index
- Feature columns.
-
- y : int or str
- Target column.
+ columns : dict, ``{y: X}``
+ A series of column names pairs. The key is the y (or target) column name, and
+ values are X (or feature) column names.
how : {'na', 'all'}, default 'na'
Only fill na value or apply regression to entire target column.
@@ -41,6 +38,10 @@
See the documentation for ``method`` for complete details on
the keyword arguments.
+ Returns
+ -------
+ DataFrame
+
See Also
--------
sklearn.kernel_ridge
@@ -94,7 +95,7 @@
Use 'x1' and 'x2' columns to fit 'y' column and fill the value.
- >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')
+ >>> df.fillna_regression(LinearRegression, {'y': ['x1', 'x2']})
x1 x2 y
0 1 1 6.0
1 1 2 8.0
@@ -106,6 +107,22 @@
if how not in {"na", "all"}:
raise ValueError(f"invalid inf option: {how!r}")
+ for y, X in columns.items():
+ df = _fillna_regression(df, method, y, X, how=how, **kwargs)
+
+ return df
+
+
+def _fillna_regression(
+ df: pd.DataFrame,
+ method: RegressorMixin,
+ y: IntOrStr,
+ X: IntOrStr | list[IntOrStr] | pd.Index,
+ how: str = "na",
+ **kwargs,
+):
+ """Fill single na column at once."""
+
if isinstance(X, (str, int)):
X = [X]
| {"golden_diff": "diff --git a/dtoolkit/accessor/dataframe/fillna_regression.py b/dtoolkit/accessor/dataframe/fillna_regression.py\n--- a/dtoolkit/accessor/dataframe/fillna_regression.py\n+++ b/dtoolkit/accessor/dataframe/fillna_regression.py\n@@ -15,8 +15,7 @@\n def fillna_regression(\n df: pd.DataFrame,\n method: RegressorMixin,\n- X: IntOrStr | list[IntOrStr] | pd.Index,\n- y: IntOrStr,\n+ columns: dict[IntOrStr, IntOrStr | list[IntOrStr] | pd.Index],\n how: str = \"na\",\n **kwargs,\n ) -> pd.DataFrame:\n@@ -28,11 +27,9 @@\n method : RegressorMixin\n Regression transformer.\n \n- X : int or str, list of int or str, Index\n- Feature columns.\n-\n- y : int or str\n- Target column.\n+ columns : dict, ``{y: X}``\n+ A series of column names pairs. The key is the y (or target) column name, and\n+ values are X (or feature) column names.\n \n how : {'na', 'all'}, default 'na'\n Only fill na value or apply regression to entire target column.\n@@ -41,6 +38,10 @@\n See the documentation for ``method`` for complete details on\n the keyword arguments.\n \n+ Returns\n+ -------\n+ DataFrame\n+\n See Also\n --------\n sklearn.kernel_ridge\n@@ -94,7 +95,7 @@\n \n Use 'x1' and 'x2' columns to fit 'y' column and fill the value.\n \n- >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')\n+ >>> df.fillna_regression(LinearRegression, {'y': ['x1', 'x2']})\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n@@ -106,6 +107,22 @@\n if how not in {\"na\", \"all\"}:\n raise ValueError(f\"invalid inf option: {how!r}\")\n \n+ for y, X in columns.items():\n+ df = _fillna_regression(df, method, y, X, how=how, **kwargs)\n+\n+ return df\n+\n+\n+def _fillna_regression(\n+ df: pd.DataFrame,\n+ method: RegressorMixin,\n+ y: IntOrStr,\n+ X: IntOrStr | list[IntOrStr] | pd.Index,\n+ how: str = \"na\",\n+ **kwargs,\n+):\n+ \"\"\"Fill single na column at once.\"\"\"\n+\n if isinstance(X, (str, int)):\n X = [X]\n", "issue": "EHN: New accessor `fillna_regresssion`\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [x] whatsnew entry\r\n\r\nFill na value with regression method\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport pandas as pd\n\nfrom dtoolkit._typing import IntOrStr\nfrom dtoolkit.accessor.register import register_dataframe_method\n\nif TYPE_CHECKING:\n from sklearn.base import RegressorMixin\n\n\n@register_dataframe_method\ndef fillna_regression(\n df: pd.DataFrame,\n method: RegressorMixin,\n X: IntOrStr | list[IntOrStr] | pd.Index,\n y: IntOrStr,\n how: str = \"na\",\n **kwargs,\n) -> pd.DataFrame:\n \"\"\"\n Fill na value with regression algorithm.\n\n Parameters\n ----------\n method : RegressorMixin\n Regression transformer.\n\n X : int or str, list of int or str, Index\n Feature columns.\n\n y : int or str\n Target column.\n\n how : {'na', 'all'}, default 'na'\n Only fill na value or apply 
regression to entire target column.\n\n **kwargs\n See the documentation for ``method`` for complete details on\n the keyword arguments.\n\n See Also\n --------\n sklearn.kernel_ridge\n sklearn.linear_model\n sklearn.dummy.DummyRegressor\n sklearn.ensemble.AdaBoostRegressor\n sklearn.ensemble.BaggingRegressor\n sklearn.ensemble.ExtraTreesRegressor\n sklearn.ensemble.GradientBoostingRegressor\n sklearn.ensemble.RandomForestRegressor\n sklearn.ensemble.StackingRegressor\n sklearn.ensemble.VotingRegressor\n sklearn.ensemble.HistGradientBoostingRegressor\n sklearn.gaussian_process.GaussianProcessRegressor\n sklearn.isotonic.IsotonicRegression\n sklearn.kernel_ridge.KernelRidge\n sklearn.neighbors.KNeighborsRegressor\n sklearn.neighbors.RadiusNeighborsRegressor\n sklearn.neural_network.MLPRegressor\n sklearn.svm.LinearSVR\n sklearn.svm.NuSVR\n sklearn.svm.SVR\n sklearn.tree.DecisionTreeRegressor\n sklearn.tree.ExtraTreeRegressor\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> from sklearn.linear_model import LinearRegression\n\n .. math:: y = 1 \\\\times x_0 + 2 \\\\times x_1 + 3\n\n >>> df = pd.DataFrame(\n ... [\n ... [1, 1, 6],\n ... [1, 2, 8],\n ... [2, 2, 9],\n ... [2, 3, 11],\n ... [3, 5, None],\n ... ],\n ... columns=['x1', 'x2', 'y'],\n ... )\n >>> df\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n 2 2 2 9.0\n 3 2 3 11.0\n 4 3 5 NaN\n\n Use 'x1' and 'x2' columns to fit 'y' column and fill the value.\n\n >>> df.fillna_regression(LinearRegression, ['x1', 'x2'], 'y')\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n 2 2 2 9.0\n 3 2 3 11.0\n 4 3 5 16.0\n \"\"\"\n\n if how not in {\"na\", \"all\"}:\n raise ValueError(f\"invalid inf option: {how!r}\")\n\n if isinstance(X, (str, int)):\n X = [X]\n\n index_notnull = df[df[y].notnull()].index\n model = method(**kwargs).fit(\n df.loc[index_notnull, X],\n df.loc[index_notnull, y],\n )\n\n if how == \"all\":\n df[y] = model.predict(df[X])\n elif how == \"na\":\n index_null = df[df[y].isnull()].index\n df.loc[index_null, y] = model.predict(df.loc[index_null, X])\n\n return df\n", "path": "dtoolkit/accessor/dataframe/fillna_regression.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport pandas as pd\n\nfrom dtoolkit._typing import IntOrStr\nfrom dtoolkit.accessor.register import register_dataframe_method\n\nif TYPE_CHECKING:\n from sklearn.base import RegressorMixin\n\n\n@register_dataframe_method\ndef fillna_regression(\n df: pd.DataFrame,\n method: RegressorMixin,\n columns: dict[IntOrStr, IntOrStr | list[IntOrStr] | pd.Index],\n how: str = \"na\",\n **kwargs,\n) -> pd.DataFrame:\n \"\"\"\n Fill na value with regression algorithm.\n\n Parameters\n ----------\n method : RegressorMixin\n Regression transformer.\n\n columns : dict, ``{y: X}``\n A series of column names pairs. 
The key is the y (or target) column name, and\n values are X (or feature) column names.\n\n how : {'na', 'all'}, default 'na'\n Only fill na value or apply regression to entire target column.\n\n **kwargs\n See the documentation for ``method`` for complete details on\n the keyword arguments.\n\n Returns\n -------\n DataFrame\n\n See Also\n --------\n sklearn.kernel_ridge\n sklearn.linear_model\n sklearn.dummy.DummyRegressor\n sklearn.ensemble.AdaBoostRegressor\n sklearn.ensemble.BaggingRegressor\n sklearn.ensemble.ExtraTreesRegressor\n sklearn.ensemble.GradientBoostingRegressor\n sklearn.ensemble.RandomForestRegressor\n sklearn.ensemble.StackingRegressor\n sklearn.ensemble.VotingRegressor\n sklearn.ensemble.HistGradientBoostingRegressor\n sklearn.gaussian_process.GaussianProcessRegressor\n sklearn.isotonic.IsotonicRegression\n sklearn.kernel_ridge.KernelRidge\n sklearn.neighbors.KNeighborsRegressor\n sklearn.neighbors.RadiusNeighborsRegressor\n sklearn.neural_network.MLPRegressor\n sklearn.svm.LinearSVR\n sklearn.svm.NuSVR\n sklearn.svm.SVR\n sklearn.tree.DecisionTreeRegressor\n sklearn.tree.ExtraTreeRegressor\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> from sklearn.linear_model import LinearRegression\n\n .. math:: y = 1 \\\\times x_0 + 2 \\\\times x_1 + 3\n\n >>> df = pd.DataFrame(\n ... [\n ... [1, 1, 6],\n ... [1, 2, 8],\n ... [2, 2, 9],\n ... [2, 3, 11],\n ... [3, 5, None],\n ... ],\n ... columns=['x1', 'x2', 'y'],\n ... )\n >>> df\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n 2 2 2 9.0\n 3 2 3 11.0\n 4 3 5 NaN\n\n Use 'x1' and 'x2' columns to fit 'y' column and fill the value.\n\n >>> df.fillna_regression(LinearRegression, {'y': ['x1', 'x2']})\n x1 x2 y\n 0 1 1 6.0\n 1 1 2 8.0\n 2 2 2 9.0\n 3 2 3 11.0\n 4 3 5 16.0\n \"\"\"\n\n if how not in {\"na\", \"all\"}:\n raise ValueError(f\"invalid inf option: {how!r}\")\n\n for y, X in columns.items():\n df = _fillna_regression(df, method, y, X, how=how, **kwargs)\n\n return df\n\n\ndef _fillna_regression(\n df: pd.DataFrame,\n method: RegressorMixin,\n y: IntOrStr,\n X: IntOrStr | list[IntOrStr] | pd.Index,\n how: str = \"na\",\n **kwargs,\n):\n \"\"\"Fill single na column at once.\"\"\"\n\n if isinstance(X, (str, int)):\n X = [X]\n\n index_notnull = df[df[y].notnull()].index\n model = method(**kwargs).fit(\n df.loc[index_notnull, X],\n df.loc[index_notnull, y],\n )\n\n if how == \"all\":\n df[y] = model.predict(df[X])\n elif how == \"na\":\n index_null = df[df[y].isnull()].index\n df.loc[index_null, y] = model.predict(df.loc[index_null, X])\n\n return df\n", "path": "dtoolkit/accessor/dataframe/fillna_regression.py"}]} | 1,681 | 640 |
gh_patches_debug_18756 | rasdani/github-patches | git_diff | Textualize__textual-3678 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mouse movements example does not work
When I run the example at https://textual.textualize.io/guide/input/#mouse-movements in gnome-terminal, all I can see is a black screen, and nothing I do (apart from CTRL+C) seems to have any effect.
```
textual diagnose
```
# Textual Diagnostics
## Versions
| Name | Value |
|---------|--------|
| Textual | 0.41.0 |
| Rich | 13.6.0 |
## Python
| Name | Value |
|----------------|---------------------------------------------------------------------------|
| Version | 3.11.2 |
| Implementation | CPython |
| Compiler | GCC 12.2.0 |
| Executable | /srv/home/porridge/.local/share/virtualenvs/reconcile-Vnvz65ja/bin/python |
## Operating System
| Name | Value |
|---------|-----------------------------------------------------|
| System | Linux |
| Release | 6.1.0-13-amd64 |
| Version | #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) |
## Terminal
| Name | Value |
|----------------------|----------------|
| Terminal Application | *Unknown* |
| TERM | xterm-256color |
| COLORTERM | truecolor |
| FORCE_COLOR | *Not set* |
| NO_COLOR | *Not set* |
## Rich Console options
| Name | Value |
|----------------|---------------------|
| size | width=87, height=27 |
| legacy_windows | False |
| min_width | 1 |
| max_width | 87 |
| is_terminal | True |
| encoding | utf-8 |
| max_height | 27 |
| justify | None |
| overflow | None |
| no_wrap | False |
| highlight | None |
| markup | None |
| height | None |
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/examples/guide/input/mouse01.py`
Content:
```
1 from textual import events
2 from textual.app import App, ComposeResult
3 from textual.containers import Container
4 from textual.widgets import RichLog, Static
5
6
7 class PlayArea(Container):
8 def on_mount(self) -> None:
9 self.capture_mouse()
10
11 def on_mouse_move(self, event: events.MouseMove) -> None:
12 self.screen.query_one(RichLog).write(event)
13 self.query_one(Ball).offset = event.offset - (8, 2)
14
15
16 class Ball(Static):
17 pass
18
19
20 class MouseApp(App):
21 CSS_PATH = "mouse01.tcss"
22
23 def compose(self) -> ComposeResult:
24 yield RichLog()
25 yield PlayArea(Ball("Textual"))
26
27
28 if __name__ == "__main__":
29 app = MouseApp()
30 app.run()
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/examples/guide/input/mouse01.py b/docs/examples/guide/input/mouse01.py
--- a/docs/examples/guide/input/mouse01.py
+++ b/docs/examples/guide/input/mouse01.py
@@ -1,18 +1,8 @@
from textual import events
from textual.app import App, ComposeResult
-from textual.containers import Container
from textual.widgets import RichLog, Static
-class PlayArea(Container):
- def on_mount(self) -> None:
- self.capture_mouse()
-
- def on_mouse_move(self, event: events.MouseMove) -> None:
- self.screen.query_one(RichLog).write(event)
- self.query_one(Ball).offset = event.offset - (8, 2)
-
-
class Ball(Static):
pass
@@ -22,7 +12,11 @@
def compose(self) -> ComposeResult:
yield RichLog()
- yield PlayArea(Ball("Textual"))
+ yield Ball("Textual")
+
+ def on_mouse_move(self, event: events.MouseMove) -> None:
+ self.screen.query_one(RichLog).write(event)
+ self.query_one(Ball).offset = event.screen_offset - (8, 2)
if __name__ == "__main__":
| {"golden_diff": "diff --git a/docs/examples/guide/input/mouse01.py b/docs/examples/guide/input/mouse01.py\n--- a/docs/examples/guide/input/mouse01.py\n+++ b/docs/examples/guide/input/mouse01.py\n@@ -1,18 +1,8 @@\n from textual import events\n from textual.app import App, ComposeResult\n-from textual.containers import Container\n from textual.widgets import RichLog, Static\n \n \n-class PlayArea(Container):\n- def on_mount(self) -> None:\n- self.capture_mouse()\n-\n- def on_mouse_move(self, event: events.MouseMove) -> None:\n- self.screen.query_one(RichLog).write(event)\n- self.query_one(Ball).offset = event.offset - (8, 2)\n-\n-\n class Ball(Static):\n pass\n \n@@ -22,7 +12,11 @@\n \n def compose(self) -> ComposeResult:\n yield RichLog()\n- yield PlayArea(Ball(\"Textual\"))\n+ yield Ball(\"Textual\")\n+\n+ def on_mouse_move(self, event: events.MouseMove) -> None:\n+ self.screen.query_one(RichLog).write(event)\n+ self.query_one(Ball).offset = event.screen_offset - (8, 2)\n \n \n if __name__ == \"__main__\":\n", "issue": "Mouse movements example does not work\nWhen I run the example at https://textual.textualize.io/guide/input/#mouse-movements in gnome-terminal all I can see is black screen, and nothing I do (apart from CTRL+C) seems to have any effect.\r\n\r\n```\r\ntextual diagnose\r\n```\r\n\r\n# Textual Diagnostics\r\n\r\n## Versions\r\n\r\n| Name | Value |\r\n|---------|--------|\r\n| Textual | 0.41.0 |\r\n| Rich | 13.6.0 |\r\n\r\n## Python\r\n\r\n| Name | Value |\r\n|----------------|---------------------------------------------------------------------------|\r\n| Version | 3.11.2 |\r\n| Implementation | CPython |\r\n| Compiler | GCC 12.2.0 |\r\n| Executable | /srv/home/porridge/.local/share/virtualenvs/reconcile-Vnvz65ja/bin/python |\r\n\r\n## Operating System\r\n\r\n| Name | Value |\r\n|---------|-----------------------------------------------------|\r\n| System | Linux |\r\n| Release | 6.1.0-13-amd64 |\r\n| Version | #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) |\r\n\r\n## Terminal\r\n\r\n| Name | Value |\r\n|----------------------|----------------|\r\n| Terminal Application | *Unknown* |\r\n| TERM | xterm-256color |\r\n| COLORTERM | truecolor |\r\n| FORCE_COLOR | *Not set* |\r\n| NO_COLOR | *Not set* |\r\n\r\n## Rich Console options\r\n\r\n| Name | Value |\r\n|----------------|---------------------|\r\n| size | width=87, height=27 |\r\n| legacy_windows | False |\r\n| min_width | 1 |\r\n| max_width | 87 |\r\n| is_terminal | True |\r\n| encoding | utf-8 |\r\n| max_height | 27 |\r\n| justify | None |\r\n| overflow | None |\r\n| no_wrap | False |\r\n| highlight | None |\r\n| markup | None |\r\n| height | None |\r\n\r\n\r\n\n", "before_files": [{"content": "from textual import events\nfrom textual.app import App, ComposeResult\nfrom textual.containers import Container\nfrom textual.widgets import RichLog, Static\n\n\nclass PlayArea(Container):\n def on_mount(self) -> None:\n self.capture_mouse()\n\n def on_mouse_move(self, event: events.MouseMove) -> None:\n self.screen.query_one(RichLog).write(event)\n self.query_one(Ball).offset = event.offset - (8, 2)\n\n\nclass Ball(Static):\n pass\n\n\nclass MouseApp(App):\n CSS_PATH = \"mouse01.tcss\"\n\n def compose(self) -> ComposeResult:\n yield RichLog()\n yield PlayArea(Ball(\"Textual\"))\n\n\nif __name__ == \"__main__\":\n app = MouseApp()\n app.run()\n", "path": "docs/examples/guide/input/mouse01.py"}], "after_files": [{"content": "from textual import events\nfrom textual.app import App, ComposeResult\nfrom textual.widgets import 
RichLog, Static\n\n\nclass Ball(Static):\n pass\n\n\nclass MouseApp(App):\n CSS_PATH = \"mouse01.tcss\"\n\n def compose(self) -> ComposeResult:\n yield RichLog()\n yield Ball(\"Textual\")\n\n def on_mouse_move(self, event: events.MouseMove) -> None:\n self.screen.query_one(RichLog).write(event)\n self.query_one(Ball).offset = event.screen_offset - (8, 2)\n\n\nif __name__ == \"__main__\":\n app = MouseApp()\n app.run()\n", "path": "docs/examples/guide/input/mouse01.py"}]} | 959 | 284 |
gh_patches_debug_16171 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1992 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken date handling in careeropportunity display date
## What kind of an issue is this?
- [x] Bug report
- [ ] Feature request
## What is the expected behaviour?
A careeropportunity should be displayed from the start date.
## What is the current behaviour?
If the start date is in the past, but sufficiently close to the current date, the careeropportunity is not displayed. Setting the start date to an earlier date fixes the problem.
<!-- if this is a bug report -->
## How do you reproduce this problem?
Set start date to the middle of the night the day before the current day.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/careeropportunity/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from django.shortcuts import render
4 from django.utils import timezone
5 # API v1
6 from rest_framework import mixins, viewsets
7 from rest_framework.pagination import PageNumberPagination
8 from rest_framework.permissions import AllowAny
9
10 from apps.careeropportunity.models import CareerOpportunity
11 from apps.careeropportunity.serializers import CareerSerializer
12
13
14 def index(request, id=None):
15 return render(request, 'careeropportunity/index.html')
16
17
18 class HundredItemsPaginator(PageNumberPagination):
19 page_size = 100
20
21
22 class CareerViewSet(viewsets.GenericViewSet, mixins.RetrieveModelMixin, mixins.ListModelMixin):
23 """
24 Viewset for Career serializer
25 """
26
27 queryset = CareerOpportunity.objects.filter(
28 start__lte=timezone.now(),
29 end__gte=timezone.now()
30 ).order_by('-featured', '-start')
31 serializer_class = CareerSerializer
32 permission_classes = (AllowAny,)
33 pagination_class = HundredItemsPaginator
34
```
Path: `apps/careeropportunity/urls.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from django.conf.urls import url
4
5 from apps.api.utils import SharedAPIRootRouter
6 from apps.careeropportunity import views
7
8 urlpatterns = [
9 url(r'^$', views.index, name='careeropportunity_index'),
10 url(r'^(\d+)/$', views.index, name='careeropportunity_details'),
11 ]
12
13 # API v1
14 router = SharedAPIRootRouter()
15 router.register(r'career', views.CareerViewSet)
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/careeropportunity/urls.py b/apps/careeropportunity/urls.py
--- a/apps/careeropportunity/urls.py
+++ b/apps/careeropportunity/urls.py
@@ -12,4 +12,4 @@
# API v1
router = SharedAPIRootRouter()
-router.register(r'career', views.CareerViewSet)
+router.register(r'career', views.CareerViewSet, base_name='careeropportunity')
diff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py
--- a/apps/careeropportunity/views.py
+++ b/apps/careeropportunity/views.py
@@ -24,10 +24,12 @@
Viewset for Career serializer
"""
- queryset = CareerOpportunity.objects.filter(
- start__lte=timezone.now(),
- end__gte=timezone.now()
- ).order_by('-featured', '-start')
serializer_class = CareerSerializer
permission_classes = (AllowAny,)
pagination_class = HundredItemsPaginator
+
+ def get_queryset(self, *args, **kwargs):
+ return CareerOpportunity.objects.filter(
+ start__lte=timezone.now(),
+ end__gte=timezone.now()
+ ).order_by('-featured', '-start')
| {"golden_diff": "diff --git a/apps/careeropportunity/urls.py b/apps/careeropportunity/urls.py\n--- a/apps/careeropportunity/urls.py\n+++ b/apps/careeropportunity/urls.py\n@@ -12,4 +12,4 @@\n \n # API v1\n router = SharedAPIRootRouter()\n-router.register(r'career', views.CareerViewSet)\n+router.register(r'career', views.CareerViewSet, base_name='careeropportunity')\ndiff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py\n--- a/apps/careeropportunity/views.py\n+++ b/apps/careeropportunity/views.py\n@@ -24,10 +24,12 @@\n Viewset for Career serializer\n \"\"\"\n \n- queryset = CareerOpportunity.objects.filter(\n- start__lte=timezone.now(),\n- end__gte=timezone.now()\n- ).order_by('-featured', '-start')\n serializer_class = CareerSerializer\n permission_classes = (AllowAny,)\n pagination_class = HundredItemsPaginator\n+\n+ def get_queryset(self, *args, **kwargs):\n+ return CareerOpportunity.objects.filter(\n+ start__lte=timezone.now(),\n+ end__gte=timezone.now()\n+ ).order_by('-featured', '-start')\n", "issue": "Broken date handling in careeropportunity display date\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n- [ ] Feature request\r\n\r\n\r\n## What is the expected behaviour?\r\nA careeropportunity should be displayed from the start date.\r\n\r\n## What is the current behaviour?\r\nIf the start date is in the past, but sufficiently close to the current date, the careeropportunity is not displayed. Setting the start date to an earlier date fixes the problem.\r\n\r\n<!-- if this is a bug report -->\r\n\r\n\r\n## How do you reproduce this problem? \r\n\r\nSet start date to the middle of the night the day before the current day.\nBroken date handling in careeropportunity display date\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n- [ ] Feature request\r\n\r\n\r\n## What is the expected behaviour?\r\nA careeropportunity should be displayed from the start date.\r\n\r\n## What is the current behaviour?\r\nIf the start date is in the past, but sufficiently close to the current date, the careeropportunity is not displayed. Setting the start date to an earlier date fixes the problem.\r\n\r\n<!-- if this is a bug report -->\r\n\r\n\r\n## How do you reproduce this problem? 
\r\n\r\nSet start date to the middle of the night the day before the current day.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.shortcuts import render\nfrom django.utils import timezone\n# API v1\nfrom rest_framework import mixins, viewsets\nfrom rest_framework.pagination import PageNumberPagination\nfrom rest_framework.permissions import AllowAny\n\nfrom apps.careeropportunity.models import CareerOpportunity\nfrom apps.careeropportunity.serializers import CareerSerializer\n\n\ndef index(request, id=None):\n return render(request, 'careeropportunity/index.html')\n\n\nclass HundredItemsPaginator(PageNumberPagination):\n page_size = 100\n\n\nclass CareerViewSet(viewsets.GenericViewSet, mixins.RetrieveModelMixin, mixins.ListModelMixin):\n \"\"\"\n Viewset for Career serializer\n \"\"\"\n\n queryset = CareerOpportunity.objects.filter(\n start__lte=timezone.now(),\n end__gte=timezone.now()\n ).order_by('-featured', '-start')\n serializer_class = CareerSerializer\n permission_classes = (AllowAny,)\n pagination_class = HundredItemsPaginator\n", "path": "apps/careeropportunity/views.py"}, {"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf.urls import url\n\nfrom apps.api.utils import SharedAPIRootRouter\nfrom apps.careeropportunity import views\n\nurlpatterns = [\n url(r'^$', views.index, name='careeropportunity_index'),\n url(r'^(\\d+)/$', views.index, name='careeropportunity_details'),\n]\n\n# API v1\nrouter = SharedAPIRootRouter()\nrouter.register(r'career', views.CareerViewSet)\n", "path": "apps/careeropportunity/urls.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.shortcuts import render\nfrom django.utils import timezone\n# API v1\nfrom rest_framework import mixins, viewsets\nfrom rest_framework.pagination import PageNumberPagination\nfrom rest_framework.permissions import AllowAny\n\nfrom apps.careeropportunity.models import CareerOpportunity\nfrom apps.careeropportunity.serializers import CareerSerializer\n\n\ndef index(request, id=None):\n return render(request, 'careeropportunity/index.html')\n\n\nclass HundredItemsPaginator(PageNumberPagination):\n page_size = 100\n\n\nclass CareerViewSet(viewsets.GenericViewSet, mixins.RetrieveModelMixin, mixins.ListModelMixin):\n \"\"\"\n Viewset for Career serializer\n \"\"\"\n\n serializer_class = CareerSerializer\n permission_classes = (AllowAny,)\n pagination_class = HundredItemsPaginator\n\n def get_queryset(self, *args, **kwargs):\n return CareerOpportunity.objects.filter(\n start__lte=timezone.now(),\n end__gte=timezone.now()\n ).order_by('-featured', '-start')\n", "path": "apps/careeropportunity/views.py"}, {"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf.urls import url\n\nfrom apps.api.utils import SharedAPIRootRouter\nfrom apps.careeropportunity import views\n\nurlpatterns = [\n url(r'^$', views.index, name='careeropportunity_index'),\n url(r'^(\\d+)/$', views.index, name='careeropportunity_details'),\n]\n\n# API v1\nrouter = SharedAPIRootRouter()\nrouter.register(r'career', views.CareerViewSet, base_name='careeropportunity')\n", "path": "apps/careeropportunity/urls.py"}]} | 933 | 287 |
gh_patches_debug_26083 | rasdani/github-patches | git_diff | spotify__luigi-559 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Some Docs Have Disappeared
Not sure if this is the right place to report this, but it looks like package documentation has disappeared from readthedocs. For example:
http://luigi.readthedocs.org/en/latest/api/luigi.html#luigi-file-module
I swear there used to be a lot of useful information here, now there is nothing.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) 2012 Spotify AB
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may not
4 # use this file except in compliance with the License. You may obtain a copy of
5 # the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations under
13 # the License.
14
15 import os
16 import sys
17
18 try:
19 from setuptools import setup
20 from setuptools.command.test import test as TestCommand
21 except:
22 from distutils.core import setup
23 from distutils.cmd import Command as TestCommand
24
25
26 class Tox(TestCommand):
27 user_options = [('tox-args=', None, "Arguments to pass to tox")]
28 def initialize_options(self):
29 TestCommand.initialize_options(self)
30 self.tox_args = ''
31 def finalize_options(self):
32 TestCommand.finalize_options(self)
33 self.test_args = []
34 self.test_suite = True
35 def run_tests(self):
36 #import here, cause outside the eggs aren't loaded
37 import tox
38 errno = tox.cmdline(args=self.tox_args.split())
39 sys.exit(errno)
40
41
42 def get_static_files(path):
43 return [os.path.join(dirpath.replace("luigi/", ""), ext)
44 for (dirpath, dirnames, filenames) in os.walk(path)
45 for ext in ["*.html", "*.js", "*.css", "*.png"]]
46
47
48 luigi_package_data = sum(map(get_static_files, ["luigi/static", "luigi/templates"]), [])
49
50 readme_note = """\
51 .. note::
52
53 For the latest source, discussion, etc, please visit the
54 `GitHub repository <https://github.com/spotify/luigi>`_\n\n
55 """
56
57 with open('README.rst') as fobj:
58 long_description = readme_note + fobj.read()
59
60
61 setup(
62 name='luigi',
63 version='1.0.19',
64 description='Workflow mgmgt + task scheduling + dependency resolution',
65 long_description=long_description,
66 author='Erik Bernhardsson',
67 author_email='[email protected]',
68 url='https://github.com/spotify/luigi',
69 license='Apache License 2.0',
70 packages=[
71 'luigi',
72 'luigi.contrib',
73 'luigi.tools'
74 ],
75 package_data={
76 'luigi': luigi_package_data
77 },
78 scripts=[
79 'bin/luigid',
80 'bin/luigi'
81 ],
82 tests_require=['tox', 'virtualenv'],
83 cmdclass={'test': Tox},
84 )
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,26 +17,8 @@
try:
from setuptools import setup
- from setuptools.command.test import test as TestCommand
except:
from distutils.core import setup
- from distutils.cmd import Command as TestCommand
-
-
-class Tox(TestCommand):
- user_options = [('tox-args=', None, "Arguments to pass to tox")]
- def initialize_options(self):
- TestCommand.initialize_options(self)
- self.tox_args = ''
- def finalize_options(self):
- TestCommand.finalize_options(self)
- self.test_args = []
- self.test_suite = True
- def run_tests(self):
- #import here, cause outside the eggs aren't loaded
- import tox
- errno = tox.cmdline(args=self.tox_args.split())
- sys.exit(errno)
def get_static_files(path):
@@ -57,6 +39,18 @@
with open('README.rst') as fobj:
long_description = readme_note + fobj.read()
+install_requires = [
+ 'boto',
+ 'pyparsing',
+ 'requests',
+ 'sqlalchemy',
+ 'tornado',
+ 'whoops',
+ 'snakebite>=2.4.10',
+]
+
+if sys.version_info[:2] < (2, 7):
+ install_requires.extend(['argparse', 'ordereddict'])
setup(
name='luigi',
@@ -79,6 +73,5 @@
'bin/luigid',
'bin/luigi'
],
- tests_require=['tox', 'virtualenv'],
- cmdclass={'test': Tox},
+ install_requires=install_requires,
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,26 +17,8 @@\n \n try:\n from setuptools import setup\n- from setuptools.command.test import test as TestCommand\n except:\n from distutils.core import setup\n- from distutils.cmd import Command as TestCommand\n-\n-\n-class Tox(TestCommand):\n- user_options = [('tox-args=', None, \"Arguments to pass to tox\")]\n- def initialize_options(self):\n- TestCommand.initialize_options(self)\n- self.tox_args = ''\n- def finalize_options(self):\n- TestCommand.finalize_options(self)\n- self.test_args = []\n- self.test_suite = True\n- def run_tests(self):\n- #import here, cause outside the eggs aren't loaded\n- import tox\n- errno = tox.cmdline(args=self.tox_args.split())\n- sys.exit(errno)\n \n \n def get_static_files(path):\n@@ -57,6 +39,18 @@\n with open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n \n+install_requires = [\n+ 'boto',\n+ 'pyparsing',\n+ 'requests',\n+ 'sqlalchemy',\n+ 'tornado',\n+ 'whoops',\n+ 'snakebite>=2.4.10',\n+]\n+\n+if sys.version_info[:2] < (2, 7):\n+ install_requires.extend(['argparse', 'ordereddict'])\n \n setup(\n name='luigi',\n@@ -79,6 +73,5 @@\n 'bin/luigid',\n 'bin/luigi'\n ],\n- tests_require=['tox', 'virtualenv'],\n- cmdclass={'test': Tox},\n+ install_requires=install_requires,\n )\n", "issue": "Some Docs Have Disappeared\nNot sure if this is the right place to report this, but it looks like package documentation has disappeared from readthedocs. For example:\n\nhttp://luigi.readthedocs.org/en/latest/api/luigi.html#luigi-file-module\n\nI swear there used to be a lot of useful information here, now there is nothing.\n\n", "before_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\nimport sys\n\ntry:\n from setuptools import setup\n from setuptools.command.test import test as TestCommand\nexcept:\n from distutils.core import setup\n from distutils.cmd import Command as TestCommand\n\n\nclass Tox(TestCommand):\n user_options = [('tox-args=', None, \"Arguments to pass to tox\")]\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.tox_args = ''\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n def run_tests(self):\n #import here, cause outside the eggs aren't loaded\n import tox\n errno = tox.cmdline(args=self.tox_args.split())\n sys.exit(errno)\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext) \n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. 
note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\n\nsetup(\n name='luigi',\n version='1.0.19',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='Erik Bernhardsson',\n author_email='[email protected]',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.contrib',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n scripts=[\n 'bin/luigid',\n 'bin/luigi'\n ],\n tests_require=['tox', 'virtualenv'],\n cmdclass={'test': Tox},\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\nimport sys\n\ntry:\n from setuptools import setup\nexcept:\n from distutils.core import setup\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext) \n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\ninstall_requires = [\n 'boto',\n 'pyparsing',\n 'requests',\n 'sqlalchemy',\n 'tornado',\n 'whoops',\n 'snakebite>=2.4.10',\n]\n\nif sys.version_info[:2] < (2, 7):\n install_requires.extend(['argparse', 'ordereddict'])\n\nsetup(\n name='luigi',\n version='1.0.19',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='Erik Bernhardsson',\n author_email='[email protected]',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.contrib',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n scripts=[\n 'bin/luigid',\n 'bin/luigi'\n ],\n install_requires=install_requires,\n)\n", "path": "setup.py"}]} | 1,101 | 400 |
gh_patches_debug_26630 | rasdani/github-patches | git_diff | spack__spack-14473 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
spack extensions behavior when no extendable package is specified
If you run `spack extensions` with no extendable package, it prints an error message telling you that a package spec is required. It would be nice if Spack also printed a list of installed extendable packages to choose from.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/spack/spack/cmd/extensions.py`
Content:
```
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import argparse
7
8 import llnl.util.tty as tty
9 from llnl.util.tty.colify import colify
10
11 import spack.environment as ev
12 import spack.cmd as cmd
13 import spack.cmd.common.arguments as arguments
14 import spack.repo
15 import spack.store
16 from spack.filesystem_view import YamlFilesystemView
17
18 description = "list extensions for package"
19 section = "extensions"
20 level = "long"
21
22
23 def setup_parser(subparser):
24 arguments.add_common_arguments(subparser, ['long', 'very_long'])
25 subparser.add_argument('-d', '--deps', action='store_true',
26 help='output dependencies along with found specs')
27
28 subparser.add_argument('-p', '--paths', action='store_true',
29 help='show paths to package install directories')
30 subparser.add_argument(
31 '-s', '--show', action='store', default='all',
32 choices=("packages", "installed", "activated", "all"),
33 help="show only part of output")
34 subparser.add_argument(
35 '-v', '--view', metavar='VIEW', type=str,
36 help="the view to operate on")
37
38 subparser.add_argument(
39 'spec', nargs=argparse.REMAINDER,
40 help='spec of package to list extensions for', metavar='extendable')
41
42
43 def extensions(parser, args):
44 if not args.spec:
45 tty.die("extensions requires a package spec.")
46
47 # Checks
48 spec = cmd.parse_specs(args.spec)
49 if len(spec) > 1:
50 tty.die("Can only list extensions for one package.")
51
52 if not spec[0].package.extendable:
53 tty.die("%s is not an extendable package." % spec[0].name)
54
55 env = ev.get_env(args, 'extensions')
56 spec = cmd.disambiguate_spec(spec[0], env)
57
58 if not spec.package.extendable:
59 tty.die("%s does not have extensions." % spec.short_spec)
60
61 if args.show in ("packages", "all"):
62 # List package names of extensions
63 extensions = spack.repo.path.extensions_for(spec)
64 if not extensions:
65 tty.msg("%s has no extensions." % spec.cshort_spec)
66 else:
67 tty.msg(spec.cshort_spec)
68 tty.msg("%d extensions:" % len(extensions))
69 colify(ext.name for ext in extensions)
70
71 if args.view:
72 target = args.view
73 else:
74 target = spec.prefix
75
76 view = YamlFilesystemView(target, spack.store.layout)
77
78 if args.show in ("installed", "all"):
79 # List specs of installed extensions.
80 installed = [
81 s.spec for s in spack.store.db.installed_extensions_for(spec)]
82
83 if args.show == "all":
84 print
85 if not installed:
86 tty.msg("None installed.")
87 else:
88 tty.msg("%d installed:" % len(installed))
89 cmd.display_specs(installed, args)
90
91 if args.show in ("activated", "all"):
92 # List specs of activated extensions.
93 activated = view.extensions_layout.extension_map(spec)
94 if args.show == "all":
95 print
96 if not activated:
97 tty.msg("None activated.")
98 else:
99 tty.msg("%d activated:" % len(activated))
100 cmd.display_specs(activated.values(), args)
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/spack/spack/cmd/extensions.py b/lib/spack/spack/cmd/extensions.py
--- a/lib/spack/spack/cmd/extensions.py
+++ b/lib/spack/spack/cmd/extensions.py
@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
+import sys
import llnl.util.tty as tty
from llnl.util.tty.colify import colify
@@ -21,6 +22,8 @@
def setup_parser(subparser):
+ subparser.epilog = 'If called without argument returns ' \
+ 'the list of all valid extendable packages'
arguments.add_common_arguments(subparser, ['long', 'very_long'])
subparser.add_argument('-d', '--deps', action='store_true',
help='output dependencies along with found specs')
@@ -42,7 +45,19 @@
def extensions(parser, args):
if not args.spec:
- tty.die("extensions requires a package spec.")
+ # If called without arguments, list all the extendable packages
+ isatty = sys.stdout.isatty()
+ if isatty:
+ tty.info('Extendable packages:')
+
+ extendable_pkgs = []
+ for name in spack.repo.all_package_names():
+ pkg = spack.repo.get(name)
+ if pkg.extendable:
+ extendable_pkgs.append(name)
+
+ colify(extendable_pkgs, indent=4)
+ return
# Checks
spec = cmd.parse_specs(args.spec)
| {"golden_diff": "diff --git a/lib/spack/spack/cmd/extensions.py b/lib/spack/spack/cmd/extensions.py\n--- a/lib/spack/spack/cmd/extensions.py\n+++ b/lib/spack/spack/cmd/extensions.py\n@@ -4,6 +4,7 @@\n # SPDX-License-Identifier: (Apache-2.0 OR MIT)\n \n import argparse\n+import sys\n \n import llnl.util.tty as tty\n from llnl.util.tty.colify import colify\n@@ -21,6 +22,8 @@\n \n \n def setup_parser(subparser):\n+ subparser.epilog = 'If called without argument returns ' \\\n+ 'the list of all valid extendable packages'\n arguments.add_common_arguments(subparser, ['long', 'very_long'])\n subparser.add_argument('-d', '--deps', action='store_true',\n help='output dependencies along with found specs')\n@@ -42,7 +45,19 @@\n \n def extensions(parser, args):\n if not args.spec:\n- tty.die(\"extensions requires a package spec.\")\n+ # If called without arguments, list all the extendable packages\n+ isatty = sys.stdout.isatty()\n+ if isatty:\n+ tty.info('Extendable packages:')\n+\n+ extendable_pkgs = []\n+ for name in spack.repo.all_package_names():\n+ pkg = spack.repo.get(name)\n+ if pkg.extendable:\n+ extendable_pkgs.append(name)\n+\n+ colify(extendable_pkgs, indent=4)\n+ return\n \n # Checks\n spec = cmd.parse_specs(args.spec)\n", "issue": "spack extensions behavior when no extendable package is specified\nIf you run `spack extensions` with no extendable package, it prints an error message telling you that a package spec is required. It would be nice if Spack also printed a list of installed extendable packages to choose from.\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport argparse\n\nimport llnl.util.tty as tty\nfrom llnl.util.tty.colify import colify\n\nimport spack.environment as ev\nimport spack.cmd as cmd\nimport spack.cmd.common.arguments as arguments\nimport spack.repo\nimport spack.store\nfrom spack.filesystem_view import YamlFilesystemView\n\ndescription = \"list extensions for package\"\nsection = \"extensions\"\nlevel = \"long\"\n\n\ndef setup_parser(subparser):\n arguments.add_common_arguments(subparser, ['long', 'very_long'])\n subparser.add_argument('-d', '--deps', action='store_true',\n help='output dependencies along with found specs')\n\n subparser.add_argument('-p', '--paths', action='store_true',\n help='show paths to package install directories')\n subparser.add_argument(\n '-s', '--show', action='store', default='all',\n choices=(\"packages\", \"installed\", \"activated\", \"all\"),\n help=\"show only part of output\")\n subparser.add_argument(\n '-v', '--view', metavar='VIEW', type=str,\n help=\"the view to operate on\")\n\n subparser.add_argument(\n 'spec', nargs=argparse.REMAINDER,\n help='spec of package to list extensions for', metavar='extendable')\n\n\ndef extensions(parser, args):\n if not args.spec:\n tty.die(\"extensions requires a package spec.\")\n\n # Checks\n spec = cmd.parse_specs(args.spec)\n if len(spec) > 1:\n tty.die(\"Can only list extensions for one package.\")\n\n if not spec[0].package.extendable:\n tty.die(\"%s is not an extendable package.\" % spec[0].name)\n\n env = ev.get_env(args, 'extensions')\n spec = cmd.disambiguate_spec(spec[0], env)\n\n if not spec.package.extendable:\n tty.die(\"%s does not have extensions.\" % spec.short_spec)\n\n if args.show in (\"packages\", \"all\"):\n # List package names of extensions\n extensions = 
spack.repo.path.extensions_for(spec)\n if not extensions:\n tty.msg(\"%s has no extensions.\" % spec.cshort_spec)\n else:\n tty.msg(spec.cshort_spec)\n tty.msg(\"%d extensions:\" % len(extensions))\n colify(ext.name for ext in extensions)\n\n if args.view:\n target = args.view\n else:\n target = spec.prefix\n\n view = YamlFilesystemView(target, spack.store.layout)\n\n if args.show in (\"installed\", \"all\"):\n # List specs of installed extensions.\n installed = [\n s.spec for s in spack.store.db.installed_extensions_for(spec)]\n\n if args.show == \"all\":\n print\n if not installed:\n tty.msg(\"None installed.\")\n else:\n tty.msg(\"%d installed:\" % len(installed))\n cmd.display_specs(installed, args)\n\n if args.show in (\"activated\", \"all\"):\n # List specs of activated extensions.\n activated = view.extensions_layout.extension_map(spec)\n if args.show == \"all\":\n print\n if not activated:\n tty.msg(\"None activated.\")\n else:\n tty.msg(\"%d activated:\" % len(activated))\n cmd.display_specs(activated.values(), args)\n", "path": "lib/spack/spack/cmd/extensions.py"}], "after_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport argparse\nimport sys\n\nimport llnl.util.tty as tty\nfrom llnl.util.tty.colify import colify\n\nimport spack.environment as ev\nimport spack.cmd as cmd\nimport spack.cmd.common.arguments as arguments\nimport spack.repo\nimport spack.store\nfrom spack.filesystem_view import YamlFilesystemView\n\ndescription = \"list extensions for package\"\nsection = \"extensions\"\nlevel = \"long\"\n\n\ndef setup_parser(subparser):\n subparser.epilog = 'If called without argument returns ' \\\n 'the list of all valid extendable packages'\n arguments.add_common_arguments(subparser, ['long', 'very_long'])\n subparser.add_argument('-d', '--deps', action='store_true',\n help='output dependencies along with found specs')\n\n subparser.add_argument('-p', '--paths', action='store_true',\n help='show paths to package install directories')\n subparser.add_argument(\n '-s', '--show', action='store', default='all',\n choices=(\"packages\", \"installed\", \"activated\", \"all\"),\n help=\"show only part of output\")\n subparser.add_argument(\n '-v', '--view', metavar='VIEW', type=str,\n help=\"the view to operate on\")\n\n subparser.add_argument(\n 'spec', nargs=argparse.REMAINDER,\n help='spec of package to list extensions for', metavar='extendable')\n\n\ndef extensions(parser, args):\n if not args.spec:\n # If called without arguments, list all the extendable packages\n isatty = sys.stdout.isatty()\n if isatty:\n tty.info('Extendable packages:')\n\n extendable_pkgs = []\n for name in spack.repo.all_package_names():\n pkg = spack.repo.get(name)\n if pkg.extendable:\n extendable_pkgs.append(name)\n\n colify(extendable_pkgs, indent=4)\n return\n\n # Checks\n spec = cmd.parse_specs(args.spec)\n if len(spec) > 1:\n tty.die(\"Can only list extensions for one package.\")\n\n if not spec[0].package.extendable:\n tty.die(\"%s is not an extendable package.\" % spec[0].name)\n\n env = ev.get_env(args, 'extensions')\n spec = cmd.disambiguate_spec(spec[0], env)\n\n if not spec.package.extendable:\n tty.die(\"%s does not have extensions.\" % spec.short_spec)\n\n if args.show in (\"packages\", \"all\"):\n # List package names of extensions\n extensions = spack.repo.path.extensions_for(spec)\n if not extensions:\n 
tty.msg(\"%s has no extensions.\" % spec.cshort_spec)\n else:\n tty.msg(spec.cshort_spec)\n tty.msg(\"%d extensions:\" % len(extensions))\n colify(ext.name for ext in extensions)\n\n if args.view:\n target = args.view\n else:\n target = spec.prefix\n\n view = YamlFilesystemView(target, spack.store.layout)\n\n if args.show in (\"installed\", \"all\"):\n # List specs of installed extensions.\n installed = [\n s.spec for s in spack.store.db.installed_extensions_for(spec)]\n\n if args.show == \"all\":\n print\n if not installed:\n tty.msg(\"None installed.\")\n else:\n tty.msg(\"%d installed:\" % len(installed))\n cmd.display_specs(installed, args)\n\n if args.show in (\"activated\", \"all\"):\n # List specs of activated extensions.\n activated = view.extensions_layout.extension_map(spec)\n if args.show == \"all\":\n print\n if not activated:\n tty.msg(\"None activated.\")\n else:\n tty.msg(\"%d activated:\" % len(activated))\n cmd.display_specs(activated.values(), args)\n", "path": "lib/spack/spack/cmd/extensions.py"}]} | 1,279 | 343 |
gh_patches_debug_10265 | rasdani/github-patches | git_diff | yt-project__yt-3278 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reminder: remove dead module mods
As discussed in #3083, the `mods.py` module should be removed **after the 4.0 release**.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yt/mods.py`
Content:
```
1 #
2 # ALL IMPORTS GO HERE
3 #
4
5 import os
6
7 import numpy as np
8
9 # This next item will handle most of the actual startup procedures, but it will
10 # also attempt to parse the command line and set up the global state of various
11 # operations. The variable unparsed_args is not used internally but is
12 # provided as a convenience for users who wish to parse arguments in scripts.
13 # https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/
14 import yt.startup_tasks as __startup_tasks
15 from yt import *
16 from yt.config import ytcfg, ytcfg_defaults
17 from yt.utilities.logger import _level
18
19 unparsed_args = __startup_tasks.unparsed_args
20
21
22 if _level >= int(ytcfg_defaults["yt"]["log_level"]):
23 # This won't get displayed.
24 mylog.debug("Turning off NumPy error reporting")
25 np.seterr(all="ignore")
26
27 # We load plugins. Keep in mind, this can be fairly dangerous -
28 # the primary purpose is to allow people to have a set of functions
29 # that get used every time that they don't have to *define* every time.
30 # This way, other command-line tools can be used very simply.
31 # Unfortunately, for now, I think the easiest and simplest way of doing
32 # this is also the most dangerous way.
33 if ytcfg.get("yt", "load_field_plugins"):
34 enable_plugins()
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/yt/mods.py b/yt/mods.py
--- a/yt/mods.py
+++ b/yt/mods.py
@@ -13,9 +13,14 @@
# https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/
import yt.startup_tasks as __startup_tasks
from yt import *
+from yt._maintenance.deprecation import issue_deprecation_warning
from yt.config import ytcfg, ytcfg_defaults
from yt.utilities.logger import _level
+issue_deprecation_warning(
+ "The yt.mods module is deprecated.", since="4.1.0", removal="4.2.0"
+)
+
unparsed_args = __startup_tasks.unparsed_args
| {"golden_diff": "diff --git a/yt/mods.py b/yt/mods.py\n--- a/yt/mods.py\n+++ b/yt/mods.py\n@@ -13,9 +13,14 @@\n # https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/\n import yt.startup_tasks as __startup_tasks\n from yt import *\n+from yt._maintenance.deprecation import issue_deprecation_warning\n from yt.config import ytcfg, ytcfg_defaults\n from yt.utilities.logger import _level\n \n+issue_deprecation_warning(\n+ \"The yt.mods module is deprecated.\", since=\"4.1.0\", removal=\"4.2.0\"\n+)\n+\n unparsed_args = __startup_tasks.unparsed_args\n", "issue": "Reminder: remove dead module mods\nAs discussed in #3083, the `mods.py` module should be removed **after the 4.0 release**.\r\n\n", "before_files": [{"content": "#\n# ALL IMPORTS GO HERE\n#\n\nimport os\n\nimport numpy as np\n\n# This next item will handle most of the actual startup procedures, but it will\n# also attempt to parse the command line and set up the global state of various\n# operations. The variable unparsed_args is not used internally but is\n# provided as a convenience for users who wish to parse arguments in scripts.\n# https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/\nimport yt.startup_tasks as __startup_tasks\nfrom yt import *\nfrom yt.config import ytcfg, ytcfg_defaults\nfrom yt.utilities.logger import _level\n\nunparsed_args = __startup_tasks.unparsed_args\n\n\nif _level >= int(ytcfg_defaults[\"yt\"][\"log_level\"]):\n # This won't get displayed.\n mylog.debug(\"Turning off NumPy error reporting\")\n np.seterr(all=\"ignore\")\n\n# We load plugins. Keep in mind, this can be fairly dangerous -\n# the primary purpose is to allow people to have a set of functions\n# that get used every time that they don't have to *define* every time.\n# This way, other command-line tools can be used very simply.\n# Unfortunately, for now, I think the easiest and simplest way of doing\n# this is also the most dangerous way.\nif ytcfg.get(\"yt\", \"load_field_plugins\"):\n enable_plugins()\n", "path": "yt/mods.py"}], "after_files": [{"content": "#\n# ALL IMPORTS GO HERE\n#\n\nimport os\n\nimport numpy as np\n\n# This next item will handle most of the actual startup procedures, but it will\n# also attempt to parse the command line and set up the global state of various\n# operations. The variable unparsed_args is not used internally but is\n# provided as a convenience for users who wish to parse arguments in scripts.\n# https://mail.python.org/archives/list/[email protected]/thread/L6AQPJ3OIMJC5SNKVM7CJG32YVQZRJWA/\nimport yt.startup_tasks as __startup_tasks\nfrom yt import *\nfrom yt._maintenance.deprecation import issue_deprecation_warning\nfrom yt.config import ytcfg, ytcfg_defaults\nfrom yt.utilities.logger import _level\n\nissue_deprecation_warning(\n \"The yt.mods module is deprecated.\", since=\"4.1.0\", removal=\"4.2.0\"\n)\n\nunparsed_args = __startup_tasks.unparsed_args\n\n\nif _level >= int(ytcfg_defaults[\"yt\"][\"log_level\"]):\n # This won't get displayed.\n mylog.debug(\"Turning off NumPy error reporting\")\n np.seterr(all=\"ignore\")\n\n# We load plugins. 
Keep in mind, this can be fairly dangerous -\n# the primary purpose is to allow people to have a set of functions\n# that get used every time that they don't have to *define* every time.\n# This way, other command-line tools can be used very simply.\n# Unfortunately, for now, I think the easiest and simplest way of doing\n# this is also the most dangerous way.\nif ytcfg.get(\"yt\", \"load_field_plugins\"):\n enable_plugins()\n", "path": "yt/mods.py"}]} | 682 | 181 |
gh_patches_debug_67260 | rasdani/github-patches | git_diff | freqtrade__freqtrade-5487 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hyperoptable parameter type: CategoricalParameter is not returning correctly.
## Describe your environment
* Operating system: MacOS 11.2.3 (20D91)
* Python Version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)
* CCXT version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)
* Freqtrade Version: freqtrade 2021.4
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
Hi! It appears the Hyperoptable parameter type: `CategoricalParameter` is not returning correctly.
If I run the example as per the Freqtrade Docs [here](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters), namely setting a `CategoricalParameter` like so:
```
buy_rsi_enabled = CategoricalParameter([True, False]),
```
...then when running the Hyperopt tool there is an error in the `populate_buy_trend` as below:
```
if self.buy_adx_enabled.value:
AttributeError: 'tuple' object has no attribute 'value'
```
It would appear that the `CategoricalParameter` is not actually returning one of the categories (even a default) but instead returning a Python Tuple.
### Steps to reproduce:
1. Follow the example in the [Docs](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters)
### Observed Results:
* What happened? There was an AttributeError: 'tuple' object has no attribute 'value'.
* What did you expect to happen? The 'value' property to exist and be set to either True or False
### Relevant code exceptions or logs
Note: Please copy/paste text of the messages, no screenshots of logs please.
```
2021-05-02 09:48:02,421 - freqtrade - ERROR - Fatal exception!
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
r = call_item()
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 285, in __call__
return self.fn(*self.args, **self.kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 595, in __call__
return self.func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/cloudpickle_wrapper.py", line 38, in __call__
return self._obj(*args, **kwargs)
File "/freqtrade/freqtrade/optimize/hyperopt.py", line 288, in generate_optimizer
backtesting_results = self.backtesting.backtest(
File "/freqtrade/freqtrade/optimize/backtesting.py", line 352, in backtest
data: Dict = self._get_ohlcv_as_lists(processed)
File "/freqtrade/freqtrade/optimize/backtesting.py", line 196, in _get_ohlcv_as_lists
self.strategy.advise_buy(pair_data, {'pair': pair}), {'pair': pair})[headers].copy()
File "/freqtrade/freqtrade/optimize/hyperopt_auto.py", line 31, in populate_buy_trend
return self.strategy.populate_buy_trend(dataframe, metadata)
File "/freqtrade/user_data/strategies/Strategy004.py", line 149, in populate_buy_trend
if self.buy_adx_enabled.value:
AttributeError: 'tuple' object has no attribute 'value'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/__init__.py`
Content:
```
1 """ Freqtrade bot """
2 __version__ = 'develop'
3
4 if __version__ == 'develop':
5
6 try:
7 import subprocess
8
9 __version__ = 'develop-' + subprocess.check_output(
10 ['git', 'log', '--format="%h"', '-n 1'],
11 stderr=subprocess.DEVNULL).decode("utf-8").rstrip().strip('"')
12
13 # from datetime import datetime
14 # last_release = subprocess.check_output(
15 # ['git', 'tag']
16 # ).decode('utf-8').split()[-1].split(".")
17 # # Releases are in the format "2020.1" - we increment the latest version for dev.
18 # prefix = f"{last_release[0]}.{int(last_release[1]) + 1}"
19 # dev_version = int(datetime.now().timestamp() // 1000)
20 # __version__ = f"{prefix}.dev{dev_version}"
21
22 # subprocess.check_output(
23 # ['git', 'log', '--format="%h"', '-n 1'],
24 # stderr=subprocess.DEVNULL).decode("utf-8").rstrip().strip('"')
25 except Exception:
26 # git not available, ignore
27 try:
28 # Try Fallback to freqtrade_commit file (created by CI while building docker image)
29 from pathlib import Path
30 versionfile = Path('./freqtrade_commit')
31 if versionfile.is_file():
32 __version__ = f"docker-{versionfile.read_text()[:8]}"
33 except Exception:
34 pass
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/__init__.py b/freqtrade/__init__.py
--- a/freqtrade/__init__.py
+++ b/freqtrade/__init__.py
@@ -1,5 +1,5 @@
""" Freqtrade bot """
-__version__ = 'develop'
+__version__ = '2021.8'
if __version__ == 'develop':
| {"golden_diff": "diff --git a/freqtrade/__init__.py b/freqtrade/__init__.py\n--- a/freqtrade/__init__.py\n+++ b/freqtrade/__init__.py\n@@ -1,5 +1,5 @@\n \"\"\" Freqtrade bot \"\"\"\n-__version__ = 'develop'\n+__version__ = '2021.8'\n \n if __version__ == 'develop':\n", "issue": "Hyperoptable parameter type: CategoricalParameter is not returning correctly.\n## Describe your environment\r\n\r\n * Operating system: MacOS 11.2.3 (20D91)\r\n * Python Version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)\r\n * CCXT version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)\r\n * Freqtrade Version: freqtrade 2021.4\r\n \r\nNote: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.\r\n\r\n## Describe the problem:\r\n\r\nHi! It appears the Hyperoptable parameter type: `CategoricalParameter` is not returning correctly.\r\n\r\nIf I run the example as per the Freqtrade Docs [here](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters), namely setting a `CategoricalParameter` like so:\r\n\r\n```\r\nbuy_rsi_enabled = CategoricalParameter([True, False]),\r\n```\r\n\r\n...then when running the Hyperopt tool there is an error in the `populate_buy_trend` as below:\r\n\r\n```\r\nif self.buy_adx_enabled.value:\r\nAttributeError: 'tuple' object has no attribute 'value'\r\n```\r\n\r\nIt would appear that the `CategoricalParameter` is not actually returning one of the categories (even a default) but instead returning a Python Tuple.\r\n\r\n### Steps to reproduce:\r\n\r\n 1. Follow the example in the [Docs](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters)\r\n \r\n### Observed Results:\r\n\r\n * What happened? There was an AttributeError: 'tuple' object has no attribute 'value'. \r\n * What did you expect to happen? 
The 'value' property to exist and be set to either True or False\r\n\r\n### Relevant code exceptions or logs\r\n\r\nNote: Please copy/paste text of the messages, no screenshots of logs please.\r\n\r\n ```\r\n2021-05-02 09:48:02,421 - freqtrade - ERROR - Fatal exception!\r\njoblib.externals.loky.process_executor._RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py\", line 431, in _process_worker\r\n r = call_item()\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py\", line 285, in __call__\r\n return self.fn(*self.args, **self.kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/_parallel_backends.py\", line 595, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py\", line 262, in __call__\r\n return [func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py\", line 262, in <listcomp>\r\n return [func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/cloudpickle_wrapper.py\", line 38, in __call__\r\n return self._obj(*args, **kwargs)\r\n File \"/freqtrade/freqtrade/optimize/hyperopt.py\", line 288, in generate_optimizer\r\n backtesting_results = self.backtesting.backtest(\r\n File \"/freqtrade/freqtrade/optimize/backtesting.py\", line 352, in backtest\r\n data: Dict = self._get_ohlcv_as_lists(processed)\r\n File \"/freqtrade/freqtrade/optimize/backtesting.py\", line 196, in _get_ohlcv_as_lists\r\n self.strategy.advise_buy(pair_data, {'pair': pair}), {'pair': pair})[headers].copy()\r\n File \"/freqtrade/freqtrade/optimize/hyperopt_auto.py\", line 31, in populate_buy_trend\r\n return self.strategy.populate_buy_trend(dataframe, metadata)\r\n File \"/freqtrade/user_data/strategies/Strategy004.py\", line 149, in populate_buy_trend\r\n if self.buy_adx_enabled.value:\r\nAttributeError: 'tuple' object has no attribute 'value'\r\n ```\r\n\n", "before_files": [{"content": "\"\"\" Freqtrade bot \"\"\"\n__version__ = 'develop'\n\nif __version__ == 'develop':\n\n try:\n import subprocess\n\n __version__ = 'develop-' + subprocess.check_output(\n ['git', 'log', '--format=\"%h\"', '-n 1'],\n stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n\n # from datetime import datetime\n # last_release = subprocess.check_output(\n # ['git', 'tag']\n # ).decode('utf-8').split()[-1].split(\".\")\n # # Releases are in the format \"2020.1\" - we increment the latest version for dev.\n # prefix = f\"{last_release[0]}.{int(last_release[1]) + 1}\"\n # dev_version = int(datetime.now().timestamp() // 1000)\n # __version__ = f\"{prefix}.dev{dev_version}\"\n\n # subprocess.check_output(\n # ['git', 'log', '--format=\"%h\"', '-n 1'],\n # stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n except Exception:\n # git not available, ignore\n try:\n # Try Fallback to freqtrade_commit file (created by CI while building docker image)\n from pathlib import Path\n versionfile = Path('./freqtrade_commit')\n if versionfile.is_file():\n __version__ = f\"docker-{versionfile.read_text()[:8]}\"\n except Exception:\n pass\n", "path": "freqtrade/__init__.py"}], "after_files": [{"content": "\"\"\" Freqtrade bot \"\"\"\n__version__ = '2021.8'\n\nif __version__ == 'develop':\n\n try:\n import subprocess\n\n __version__ = 'develop-' + subprocess.check_output(\n ['git', 
'log', '--format=\"%h\"', '-n 1'],\n stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n\n # from datetime import datetime\n # last_release = subprocess.check_output(\n # ['git', 'tag']\n # ).decode('utf-8').split()[-1].split(\".\")\n # # Releases are in the format \"2020.1\" - we increment the latest version for dev.\n # prefix = f\"{last_release[0]}.{int(last_release[1]) + 1}\"\n # dev_version = int(datetime.now().timestamp() // 1000)\n # __version__ = f\"{prefix}.dev{dev_version}\"\n\n # subprocess.check_output(\n # ['git', 'log', '--format=\"%h\"', '-n 1'],\n # stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n except Exception:\n # git not available, ignore\n try:\n # Try Fallback to freqtrade_commit file (created by CI while building docker image)\n from pathlib import Path\n versionfile = Path('./freqtrade_commit')\n if versionfile.is_file():\n __version__ = f\"docker-{versionfile.read_text()[:8]}\"\n except Exception:\n pass\n", "path": "freqtrade/__init__.py"}]} | 1,638 | 86 |
gh_patches_debug_17333 | rasdani/github-patches | git_diff | sanic-org__sanic-2651 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ASGI websocket must pass thru bytes as is
_Originally posted by @Tronic in https://github.com/sanic-org/sanic/pull/2640#discussion_r1058027028_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/server/websockets/connection.py`
Content:
```
1 from typing import (
2 Any,
3 Awaitable,
4 Callable,
5 Dict,
6 List,
7 MutableMapping,
8 Optional,
9 Union,
10 )
11
12 from sanic.exceptions import InvalidUsage
13
14
15 ASGIMessage = MutableMapping[str, Any]
16
17
18 class WebSocketConnection:
19 """
20 This is for ASGI Connections.
21 It provides an interface similar to WebsocketProtocol, but
22 sends/receives over an ASGI connection.
23 """
24
25 # TODO
26 # - Implement ping/pong
27
28 def __init__(
29 self,
30 send: Callable[[ASGIMessage], Awaitable[None]],
31 receive: Callable[[], Awaitable[ASGIMessage]],
32 subprotocols: Optional[List[str]] = None,
33 ) -> None:
34 self._send = send
35 self._receive = receive
36 self._subprotocols = subprotocols or []
37
38 async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:
39 message: Dict[str, Union[str, bytes]] = {"type": "websocket.send"}
40
41 if isinstance(data, bytes):
42 message.update({"bytes": data})
43 else:
44 message.update({"text": str(data)})
45
46 await self._send(message)
47
48 async def recv(self, *args, **kwargs) -> Optional[str]:
49 message = await self._receive()
50
51 if message["type"] == "websocket.receive":
52 try:
53 return message["text"]
54 except KeyError:
55 try:
56 return message["bytes"].decode()
57 except KeyError:
58 raise InvalidUsage("Bad ASGI message received")
59 elif message["type"] == "websocket.disconnect":
60 pass
61
62 return None
63
64 receive = recv
65
66 async def accept(self, subprotocols: Optional[List[str]] = None) -> None:
67 subprotocol = None
68 if subprotocols:
69 for subp in subprotocols:
70 if subp in self.subprotocols:
71 subprotocol = subp
72 break
73
74 await self._send(
75 {
76 "type": "websocket.accept",
77 "subprotocol": subprotocol,
78 }
79 )
80
81 async def close(self, code: int = 1000, reason: str = "") -> None:
82 pass
83
84 @property
85 def subprotocols(self):
86 return self._subprotocols
87
88 @subprotocols.setter
89 def subprotocols(self, subprotocols: Optional[List[str]] = None):
90 self._subprotocols = subprotocols or []
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/server/websockets/connection.py b/sanic/server/websockets/connection.py
--- a/sanic/server/websockets/connection.py
+++ b/sanic/server/websockets/connection.py
@@ -45,7 +45,7 @@
await self._send(message)
- async def recv(self, *args, **kwargs) -> Optional[str]:
+ async def recv(self, *args, **kwargs) -> Optional[Union[str, bytes]]:
message = await self._receive()
if message["type"] == "websocket.receive":
@@ -53,7 +53,7 @@
return message["text"]
except KeyError:
try:
- return message["bytes"].decode()
+ return message["bytes"]
except KeyError:
raise InvalidUsage("Bad ASGI message received")
elif message["type"] == "websocket.disconnect":
| {"golden_diff": "diff --git a/sanic/server/websockets/connection.py b/sanic/server/websockets/connection.py\n--- a/sanic/server/websockets/connection.py\n+++ b/sanic/server/websockets/connection.py\n@@ -45,7 +45,7 @@\n \n await self._send(message)\n \n- async def recv(self, *args, **kwargs) -> Optional[str]:\n+ async def recv(self, *args, **kwargs) -> Optional[Union[str, bytes]]:\n message = await self._receive()\n \n if message[\"type\"] == \"websocket.receive\":\n@@ -53,7 +53,7 @@\n return message[\"text\"]\n except KeyError:\n try:\n- return message[\"bytes\"].decode()\n+ return message[\"bytes\"]\n except KeyError:\n raise InvalidUsage(\"Bad ASGI message received\")\n elif message[\"type\"] == \"websocket.disconnect\":\n", "issue": "ASGI websocket must pass thru bytes as is\n\r\n\r\n_Originally posted by @Tronic in https://github.com/sanic-org/sanic/pull/2640#discussion_r1058027028_\r\n \n", "before_files": [{"content": "from typing import (\n Any,\n Awaitable,\n Callable,\n Dict,\n List,\n MutableMapping,\n Optional,\n Union,\n)\n\nfrom sanic.exceptions import InvalidUsage\n\n\nASGIMessage = MutableMapping[str, Any]\n\n\nclass WebSocketConnection:\n \"\"\"\n This is for ASGI Connections.\n It provides an interface similar to WebsocketProtocol, but\n sends/receives over an ASGI connection.\n \"\"\"\n\n # TODO\n # - Implement ping/pong\n\n def __init__(\n self,\n send: Callable[[ASGIMessage], Awaitable[None]],\n receive: Callable[[], Awaitable[ASGIMessage]],\n subprotocols: Optional[List[str]] = None,\n ) -> None:\n self._send = send\n self._receive = receive\n self._subprotocols = subprotocols or []\n\n async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:\n message: Dict[str, Union[str, bytes]] = {\"type\": \"websocket.send\"}\n\n if isinstance(data, bytes):\n message.update({\"bytes\": data})\n else:\n message.update({\"text\": str(data)})\n\n await self._send(message)\n\n async def recv(self, *args, **kwargs) -> Optional[str]:\n message = await self._receive()\n\n if message[\"type\"] == \"websocket.receive\":\n try:\n return message[\"text\"]\n except KeyError:\n try:\n return message[\"bytes\"].decode()\n except KeyError:\n raise InvalidUsage(\"Bad ASGI message received\")\n elif message[\"type\"] == \"websocket.disconnect\":\n pass\n\n return None\n\n receive = recv\n\n async def accept(self, subprotocols: Optional[List[str]] = None) -> None:\n subprotocol = None\n if subprotocols:\n for subp in subprotocols:\n if subp in self.subprotocols:\n subprotocol = subp\n break\n\n await self._send(\n {\n \"type\": \"websocket.accept\",\n \"subprotocol\": subprotocol,\n }\n )\n\n async def close(self, code: int = 1000, reason: str = \"\") -> None:\n pass\n\n @property\n def subprotocols(self):\n return self._subprotocols\n\n @subprotocols.setter\n def subprotocols(self, subprotocols: Optional[List[str]] = None):\n self._subprotocols = subprotocols or []\n", "path": "sanic/server/websockets/connection.py"}], "after_files": [{"content": "from typing import (\n Any,\n Awaitable,\n Callable,\n Dict,\n List,\n MutableMapping,\n Optional,\n Union,\n)\n\nfrom sanic.exceptions import InvalidUsage\n\n\nASGIMessage = MutableMapping[str, Any]\n\n\nclass WebSocketConnection:\n \"\"\"\n This is for ASGI Connections.\n It provides an interface similar to WebsocketProtocol, but\n sends/receives over an ASGI connection.\n \"\"\"\n\n # TODO\n # - Implement ping/pong\n\n def __init__(\n self,\n send: Callable[[ASGIMessage], Awaitable[None]],\n receive: Callable[[], 
Awaitable[ASGIMessage]],\n subprotocols: Optional[List[str]] = None,\n ) -> None:\n self._send = send\n self._receive = receive\n self._subprotocols = subprotocols or []\n\n async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:\n message: Dict[str, Union[str, bytes]] = {\"type\": \"websocket.send\"}\n\n if isinstance(data, bytes):\n message.update({\"bytes\": data})\n else:\n message.update({\"text\": str(data)})\n\n await self._send(message)\n\n async def recv(self, *args, **kwargs) -> Optional[Union[str, bytes]]:\n message = await self._receive()\n\n if message[\"type\"] == \"websocket.receive\":\n try:\n return message[\"text\"]\n except KeyError:\n try:\n return message[\"bytes\"]\n except KeyError:\n raise InvalidUsage(\"Bad ASGI message received\")\n elif message[\"type\"] == \"websocket.disconnect\":\n pass\n\n return None\n\n receive = recv\n\n async def accept(self, subprotocols: Optional[List[str]] = None) -> None:\n subprotocol = None\n if subprotocols:\n for subp in subprotocols:\n if subp in self.subprotocols:\n subprotocol = subp\n break\n\n await self._send(\n {\n \"type\": \"websocket.accept\",\n \"subprotocol\": subprotocol,\n }\n )\n\n async def close(self, code: int = 1000, reason: str = \"\") -> None:\n pass\n\n @property\n def subprotocols(self):\n return self._subprotocols\n\n @subprotocols.setter\n def subprotocols(self, subprotocols: Optional[List[str]] = None):\n self._subprotocols = subprotocols or []\n", "path": "sanic/server/websockets/connection.py"}]} | 1,015 | 186 |
gh_patches_debug_11443 | rasdani/github-patches | git_diff | pytorch__text-1067 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip install torchtext==0.7.0 installs incompatible PyTorch 1.7.0
## 🐛 Bug
**Describe the bug**
Recently, after I do `pip install torchtext==0.7.0`, import torchtext would cause segmentation fault. I found that degrading pytorch to 1.6.0 fixes this issue.
**To Reproduce**
Steps to reproduce the behavior:
1. `pip install torchtext==0.7.0` (assuming that pytorch is not installed yet, and this command will install the latest pytorch)
2. python -c "import torchtext"
**Expected behavior**
Segmentation Fault
**Environment**
- PyTorch Version (e.g., 1.0): 1.7.0
- OS (e.g., Linux): Linux/MacOS
- Python: 3.8.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import io
3 import os
4 import shutil
5 import subprocess
6 from pathlib import Path
7 import distutils.command.clean
8 from setuptools import setup, find_packages
9
10 from build_tools import setup_helpers
11
12 ROOT_DIR = Path(__file__).parent.resolve()
13
14
15 def read(*names, **kwargs):
16 with io.open(ROOT_DIR.joinpath(*names), encoding=kwargs.get("encoding", "utf8")) as fp:
17 return fp.read()
18
19
20 def _get_version():
21 version = '0.9.0a0'
22 sha = None
23
24 try:
25 cmd = ['git', 'rev-parse', 'HEAD']
26 sha = subprocess.check_output(cmd, cwd=str(ROOT_DIR)).decode('ascii').strip()
27 except Exception:
28 pass
29
30 if os.getenv('BUILD_VERSION'):
31 version = os.getenv('BUILD_VERSION')
32 elif sha is not None:
33 version += '+' + sha[:7]
34
35 if sha is None:
36 sha = 'Unknown'
37 return version, sha
38
39
40 def _export_version(version, sha):
41 version_path = ROOT_DIR / 'torchtext' / 'version.py'
42 with open(version_path, 'w') as fileobj:
43 fileobj.write("__version__ = '{}'\n".format(version))
44 fileobj.write("git_version = {}\n".format(repr(sha)))
45
46
47 VERSION, SHA = _get_version()
48 _export_version(VERSION, SHA)
49
50 print('-- Building version ' + VERSION)
51
52
53 class clean(distutils.command.clean.clean):
54 def run(self):
55 # Run default behavior first
56 distutils.command.clean.clean.run(self)
57
58 # Remove torchtext extension
59 for path in (ROOT_DIR / 'torchtext').glob('**/*.so'):
60 print(f'removing \'{path}\'')
61 path.unlink()
62 # Remove build directory
63 build_dirs = [
64 ROOT_DIR / 'build',
65 ROOT_DIR / 'third_party' / 'build',
66 ]
67 for path in build_dirs:
68 if path.exists():
69 print(f'removing \'{path}\' (and everything under it)')
70 shutil.rmtree(str(path), ignore_errors=True)
71
72
73 setup_info = dict(
74 # Metadata
75 name='torchtext',
76 version=VERSION,
77 author='PyTorch core devs and James Bradbury',
78 author_email='[email protected]',
79 url='https://github.com/pytorch/text',
80 description='Text utilities and datasets for PyTorch',
81 long_description=read('README.rst'),
82 license='BSD',
83
84 install_requires=[
85 'tqdm', 'requests', 'torch', 'numpy'
86 ],
87 python_requires='>=3.5',
88 classifiers=[
89 'Programming Language :: Python :: 3',
90 'Programming Language :: Python :: 3.5',
91 'Programming Language :: Python :: 3.6',
92 'Programming Language :: Python :: 3.7',
93 'Programming Language :: Python :: 3.8',
94 'Programming Language :: Python :: 3 :: Only',
95 ],
96 # Package info
97 packages=find_packages(exclude=('test*', 'build_tools*')),
98 zip_safe=False,
99 # Extension info
100 # If you are trying to use torchtext.so and see no registered op.
101 # See here: https://github.com/pytorch/vision/issues/2134"
102 ext_modules=setup_helpers.get_ext_modules(),
103 cmdclass={
104 'build_ext': setup_helpers.BuildExtension.with_options(no_python_abi_suffix=True),
105 'clean': clean,
106 },
107 )
108
109 setup(**setup_info)
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,6 +49,12 @@
print('-- Building version ' + VERSION)
+pytorch_package_version = os.getenv('PYTORCH_VERSION')
+
+pytorch_package_dep = 'torch'
+if pytorch_package_version is not None:
+ pytorch_package_dep += "==" + pytorch_package_version
+
class clean(distutils.command.clean.clean):
def run(self):
@@ -82,7 +88,7 @@
license='BSD',
install_requires=[
- 'tqdm', 'requests', 'torch', 'numpy'
+ 'tqdm', 'requests', pytorch_package_dep, 'numpy'
],
python_requires='>=3.5',
classifiers=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,6 +49,12 @@\n \n print('-- Building version ' + VERSION)\n \n+pytorch_package_version = os.getenv('PYTORCH_VERSION')\n+\n+pytorch_package_dep = 'torch'\n+if pytorch_package_version is not None:\n+ pytorch_package_dep += \"==\" + pytorch_package_version\n+\n \n class clean(distutils.command.clean.clean):\n def run(self):\n@@ -82,7 +88,7 @@\n license='BSD',\n \n install_requires=[\n- 'tqdm', 'requests', 'torch', 'numpy'\n+ 'tqdm', 'requests', pytorch_package_dep, 'numpy'\n ],\n python_requires='>=3.5',\n classifiers=[\n", "issue": "pip install torchtext==0.7.0 installs incompatible PyTorch 1.7.0\n## \ud83d\udc1b Bug\r\n**Describe the bug**\r\nRecently, after I do `pip install torchtext==0.7.0`, import torchtext would cause segmentation fault. I found that degrading pytorch to 1.6.0 fixes this issue. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. `pip install torchtext==0.7.0` (assuming that pytorch is not installed yet, and this command will install the latest pytorch)\r\n2. python -c \"import torchtext\"\r\n\r\n**Expected behavior**\r\nSegmentation Fault\r\n\r\n**Environment**\r\n\r\n - PyTorch Version (e.g., 1.0): 1.7.0\r\n - OS (e.g., Linux): Linux/MacOS\r\n - Python: 3.8.3\n", "before_files": [{"content": "#!/usr/bin/env python\nimport io\nimport os\nimport shutil\nimport subprocess\nfrom pathlib import Path\nimport distutils.command.clean\nfrom setuptools import setup, find_packages\n\nfrom build_tools import setup_helpers\n\nROOT_DIR = Path(__file__).parent.resolve()\n\n\ndef read(*names, **kwargs):\n with io.open(ROOT_DIR.joinpath(*names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef _get_version():\n version = '0.9.0a0'\n sha = None\n\n try:\n cmd = ['git', 'rev-parse', 'HEAD']\n sha = subprocess.check_output(cmd, cwd=str(ROOT_DIR)).decode('ascii').strip()\n except Exception:\n pass\n\n if os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\n elif sha is not None:\n version += '+' + sha[:7]\n\n if sha is None:\n sha = 'Unknown'\n return version, sha\n\n\ndef _export_version(version, sha):\n version_path = ROOT_DIR / 'torchtext' / 'version.py'\n with open(version_path, 'w') as fileobj:\n fileobj.write(\"__version__ = '{}'\\n\".format(version))\n fileobj.write(\"git_version = {}\\n\".format(repr(sha)))\n\n\nVERSION, SHA = _get_version()\n_export_version(VERSION, SHA)\n\nprint('-- Building version ' + VERSION)\n\n\nclass clean(distutils.command.clean.clean):\n def run(self):\n # Run default behavior first\n distutils.command.clean.clean.run(self)\n\n # Remove torchtext extension\n for path in (ROOT_DIR / 'torchtext').glob('**/*.so'):\n print(f'removing \\'{path}\\'')\n path.unlink()\n # Remove build directory\n build_dirs = [\n ROOT_DIR / 'build',\n ROOT_DIR / 'third_party' / 'build',\n ]\n for path in build_dirs:\n if path.exists():\n print(f'removing \\'{path}\\' (and everything under it)')\n shutil.rmtree(str(path), ignore_errors=True)\n\n\nsetup_info = dict(\n # Metadata\n name='torchtext',\n version=VERSION,\n author='PyTorch core devs and James Bradbury',\n author_email='[email protected]',\n url='https://github.com/pytorch/text',\n description='Text utilities and datasets for PyTorch',\n long_description=read('README.rst'),\n license='BSD',\n\n install_requires=[\n 'tqdm', 'requests', 'torch', 'numpy'\n ],\n python_requires='>=3.5',\n classifiers=[\n 'Programming Language :: Python :: 3',\n 'Programming Language :: 
Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3 :: Only',\n ],\n # Package info\n packages=find_packages(exclude=('test*', 'build_tools*')),\n zip_safe=False,\n # Extension info\n # If you are trying to use torchtext.so and see no registered op.\n # See here: https://github.com/pytorch/vision/issues/2134\"\n ext_modules=setup_helpers.get_ext_modules(),\n cmdclass={\n 'build_ext': setup_helpers.BuildExtension.with_options(no_python_abi_suffix=True),\n 'clean': clean,\n },\n)\n\nsetup(**setup_info)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport io\nimport os\nimport shutil\nimport subprocess\nfrom pathlib import Path\nimport distutils.command.clean\nfrom setuptools import setup, find_packages\n\nfrom build_tools import setup_helpers\n\nROOT_DIR = Path(__file__).parent.resolve()\n\n\ndef read(*names, **kwargs):\n with io.open(ROOT_DIR.joinpath(*names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef _get_version():\n version = '0.9.0a0'\n sha = None\n\n try:\n cmd = ['git', 'rev-parse', 'HEAD']\n sha = subprocess.check_output(cmd, cwd=str(ROOT_DIR)).decode('ascii').strip()\n except Exception:\n pass\n\n if os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\n elif sha is not None:\n version += '+' + sha[:7]\n\n if sha is None:\n sha = 'Unknown'\n return version, sha\n\n\ndef _export_version(version, sha):\n version_path = ROOT_DIR / 'torchtext' / 'version.py'\n with open(version_path, 'w') as fileobj:\n fileobj.write(\"__version__ = '{}'\\n\".format(version))\n fileobj.write(\"git_version = {}\\n\".format(repr(sha)))\n\n\nVERSION, SHA = _get_version()\n_export_version(VERSION, SHA)\n\nprint('-- Building version ' + VERSION)\n\npytorch_package_version = os.getenv('PYTORCH_VERSION')\n\npytorch_package_dep = 'torch'\nif pytorch_package_version is not None:\n pytorch_package_dep += \"==\" + pytorch_package_version\n\n\nclass clean(distutils.command.clean.clean):\n def run(self):\n # Run default behavior first\n distutils.command.clean.clean.run(self)\n\n # Remove torchtext extension\n for path in (ROOT_DIR / 'torchtext').glob('**/*.so'):\n print(f'removing \\'{path}\\'')\n path.unlink()\n # Remove build directory\n build_dirs = [\n ROOT_DIR / 'build',\n ROOT_DIR / 'third_party' / 'build',\n ]\n for path in build_dirs:\n if path.exists():\n print(f'removing \\'{path}\\' (and everything under it)')\n shutil.rmtree(str(path), ignore_errors=True)\n\n\nsetup_info = dict(\n # Metadata\n name='torchtext',\n version=VERSION,\n author='PyTorch core devs and James Bradbury',\n author_email='[email protected]',\n url='https://github.com/pytorch/text',\n description='Text utilities and datasets for PyTorch',\n long_description=read('README.rst'),\n license='BSD',\n\n install_requires=[\n 'tqdm', 'requests', pytorch_package_dep, 'numpy'\n ],\n python_requires='>=3.5',\n classifiers=[\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3 :: Only',\n ],\n # Package info\n packages=find_packages(exclude=('test*', 'build_tools*')),\n zip_safe=False,\n # Extension info\n # If you are trying to use torchtext.so and see no registered op.\n # See here: https://github.com/pytorch/vision/issues/2134\"\n 
ext_modules=setup_helpers.get_ext_modules(),\n cmdclass={\n 'build_ext': setup_helpers.BuildExtension.with_options(no_python_abi_suffix=True),\n 'clean': clean,\n },\n)\n\nsetup(**setup_info)\n", "path": "setup.py"}]} | 1,426 | 177 |
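
The patch above makes the `torch` entry in `install_requires` pin to whatever `PYTORCH_VERSION` the build environment exports, so a torchtext build can declare the PyTorch release it was actually built against. The snippet below is a small, hypothetical reproduction of just that requirement-string logic, kept separate from any real packaging code so it can be run and checked on its own.

```python
import os
from typing import Optional


def pytorch_requirement(env: Optional[dict] = None) -> str:
    """Return the install_requires entry for torch, pinned only when
    PYTORCH_VERSION is set (mirrors the patched setup.py logic)."""
    env = os.environ if env is None else env
    version = env.get("PYTORCH_VERSION")
    dep = "torch"
    if version is not None:
        dep += "==" + version
    return dep


# Unpinned build: any installed torch satisfies the requirement.
assert pytorch_requirement({}) == "torch"
# A build matching the issue's report would export PYTORCH_VERSION=1.6.0
# (illustrative value taken from the issue, not an actual CI config).
assert pytorch_requirement({"PYTORCH_VERSION": "1.6.0"}) == "torch==1.6.0"
print(pytorch_requirement())
```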
gh_patches_debug_10135 | rasdani/github-patches | git_diff | tensorflow__addons-1878 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation: Focal loss example misleading
In the example of the focal loss (`tfa.losses.SigmoidFocalCrossEntropy`), the label and the prediction data seem to be swapped. 
This is misleading as `sigmoid_focal_crossentropy()` expects `y_true, y_pred`, whereas the example presents `y_pred, y_true`.
https://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L42-L47
https://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L90-L92
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/losses/focal_loss.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements Focal loss."""
16
17 import tensorflow as tf
18 import tensorflow.keras.backend as K
19
20 from tensorflow_addons.utils.keras_utils import LossFunctionWrapper
21 from tensorflow_addons.utils.types import FloatTensorLike, TensorLike
22 from typeguard import typechecked
23
24
25 @tf.keras.utils.register_keras_serializable(package="Addons")
26 class SigmoidFocalCrossEntropy(LossFunctionWrapper):
27 """Implements the focal loss function.
28
29 Focal loss was first introduced in the RetinaNet paper
30 (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for
31 classification when you have highly imbalanced classes. It down-weights
32 well-classified examples and focuses on hard examples. The loss value is
33 much high for a sample which is misclassified by the classifier as compared
34 to the loss value corresponding to a well-classified example. One of the
35 best use-cases of focal loss is its usage in object detection where the
36 imbalance between the background class and other classes is extremely high.
37
38 Usage:
39
40 ```python
41 fl = tfa.losses.SigmoidFocalCrossEntropy()
42 loss = fl(
43 [[0.97], [0.91], [0.03]],
44 [[1.0], [1.0], [0.0]])
45 print('Loss: ', loss.numpy()) # Loss: [0.00010971,
46 0.0032975,
47 0.00030611]
48 ```
49 Usage with tf.keras API:
50
51 ```python
52 model = tf.keras.Model(inputs, outputs)
53 model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())
54 ```
55
56 Args
57 alpha: balancing factor, default value is 0.25
58 gamma: modulating factor, default value is 2.0
59
60 Returns:
61 Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same
62 shape as `y_true`; otherwise, it is scalar.
63
64 Raises:
65 ValueError: If the shape of `sample_weight` is invalid or value of
66 `gamma` is less than zero
67 """
68
69 @typechecked
70 def __init__(
71 self,
72 from_logits: bool = False,
73 alpha: FloatTensorLike = 0.25,
74 gamma: FloatTensorLike = 2.0,
75 reduction: str = tf.keras.losses.Reduction.NONE,
76 name: str = "sigmoid_focal_crossentropy",
77 ):
78 super().__init__(
79 sigmoid_focal_crossentropy,
80 name=name,
81 reduction=reduction,
82 from_logits=from_logits,
83 alpha=alpha,
84 gamma=gamma,
85 )
86
87
88 @tf.keras.utils.register_keras_serializable(package="Addons")
89 @tf.function
90 def sigmoid_focal_crossentropy(
91 y_true: TensorLike,
92 y_pred: TensorLike,
93 alpha: FloatTensorLike = 0.25,
94 gamma: FloatTensorLike = 2.0,
95 from_logits: bool = False,
96 ) -> tf.Tensor:
97 """
98 Args
99 y_true: true targets tensor.
100 y_pred: predictions tensor.
101 alpha: balancing factor.
102 gamma: modulating factor.
103
104 Returns:
105 Weighted loss float `Tensor`. If `reduction` is `NONE`,this has the
106 same shape as `y_true`; otherwise, it is scalar.
107 """
108 if gamma and gamma < 0:
109 raise ValueError("Value of gamma should be greater than or equal to zero")
110
111 y_pred = tf.convert_to_tensor(y_pred)
112 y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)
113
114 # Get the cross_entropy for each entry
115 ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)
116
117 # If logits are provided then convert the predictions into probabilities
118 if from_logits:
119 pred_prob = tf.sigmoid(y_pred)
120 else:
121 pred_prob = y_pred
122
123 p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))
124 alpha_factor = 1.0
125 modulating_factor = 1.0
126
127 if alpha:
128 alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())
129 alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)
130
131 if gamma:
132 gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())
133 modulating_factor = tf.pow((1.0 - p_t), gamma)
134
135 # compute the final loss and return
136 return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensorflow_addons/losses/focal_loss.py b/tensorflow_addons/losses/focal_loss.py
--- a/tensorflow_addons/losses/focal_loss.py
+++ b/tensorflow_addons/losses/focal_loss.py
@@ -40,11 +40,11 @@
```python
fl = tfa.losses.SigmoidFocalCrossEntropy()
loss = fl(
- [[0.97], [0.91], [0.03]],
- [[1.0], [1.0], [0.0]])
- print('Loss: ', loss.numpy()) # Loss: [0.00010971,
- 0.0032975,
- 0.00030611]
+ y_true = [[1.0], [1.0], [0.0]],
+ y_pred = [[0.97], [0.91], [0.03]])
+ print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,
+ 1.9097870e-04,
+ 2.0559824e-05]
```
Usage with tf.keras API:
| {"golden_diff": "diff --git a/tensorflow_addons/losses/focal_loss.py b/tensorflow_addons/losses/focal_loss.py\n--- a/tensorflow_addons/losses/focal_loss.py\n+++ b/tensorflow_addons/losses/focal_loss.py\n@@ -40,11 +40,11 @@\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n- [[0.97], [0.91], [0.03]],\n- [[1.0], [1.0], [0.0]])\n- print('Loss: ', loss.numpy()) # Loss: [0.00010971,\n- 0.0032975,\n- 0.00030611]\n+ y_true = [[1.0], [1.0], [0.0]],\n+ y_pred = [[0.97], [0.91], [0.03]])\n+ print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,\n+ 1.9097870e-04,\n+ 2.0559824e-05]\n ```\n Usage with tf.keras API:\n", "issue": "Documentation: Focal loss example misleading\nIn the example of the focal loss (`tfa.losses.SigmoidFocalCrossEntropy`) the label and the prediction data seems to be swapped. \r\nThis is misleading as `sigmoid_focal_crossentropy()` expects `y_true, y_pred`, whereas the example presents `y_pred, y_true`.\r\n\r\nhttps://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L42-L47\r\n\r\nhttps://github.com/tensorflow/addons/blob/efa0cd9c86578f53a2eb9d80849e186bffa4d64a/tensorflow_addons/losses/focal_loss.py#L90-L92\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Focal loss.\"\"\"\n\nimport tensorflow as tf\nimport tensorflow.keras.backend as K\n\nfrom tensorflow_addons.utils.keras_utils import LossFunctionWrapper\nfrom tensorflow_addons.utils.types import FloatTensorLike, TensorLike\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass SigmoidFocalCrossEntropy(LossFunctionWrapper):\n \"\"\"Implements the focal loss function.\n\n Focal loss was first introduced in the RetinaNet paper\n (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for\n classification when you have highly imbalanced classes. It down-weights\n well-classified examples and focuses on hard examples. The loss value is\n much high for a sample which is misclassified by the classifier as compared\n to the loss value corresponding to a well-classified example. One of the\n best use-cases of focal loss is its usage in object detection where the\n imbalance between the background class and other classes is extremely high.\n\n Usage:\n\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n [[0.97], [0.91], [0.03]],\n [[1.0], [1.0], [0.0]])\n print('Loss: ', loss.numpy()) # Loss: [0.00010971,\n 0.0032975,\n 0.00030611]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())\n ```\n\n Args\n alpha: balancing factor, default value is 0.25\n gamma: modulating factor, default value is 2.0\n\n Returns:\n Weighted loss float `Tensor`. 
If `reduction` is `NONE`, this has the same\n shape as `y_true`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `sample_weight` is invalid or value of\n `gamma` is less than zero\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n from_logits: bool = False,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n reduction: str = tf.keras.losses.Reduction.NONE,\n name: str = \"sigmoid_focal_crossentropy\",\n ):\n super().__init__(\n sigmoid_focal_crossentropy,\n name=name,\n reduction=reduction,\n from_logits=from_logits,\n alpha=alpha,\n gamma=gamma,\n )\n\n\[email protected]_keras_serializable(package=\"Addons\")\[email protected]\ndef sigmoid_focal_crossentropy(\n y_true: TensorLike,\n y_pred: TensorLike,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n from_logits: bool = False,\n) -> tf.Tensor:\n \"\"\"\n Args\n y_true: true targets tensor.\n y_pred: predictions tensor.\n alpha: balancing factor.\n gamma: modulating factor.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`,this has the\n same shape as `y_true`; otherwise, it is scalar.\n \"\"\"\n if gamma and gamma < 0:\n raise ValueError(\"Value of gamma should be greater than or equal to zero\")\n\n y_pred = tf.convert_to_tensor(y_pred)\n y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)\n\n # Get the cross_entropy for each entry\n ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)\n\n # If logits are provided then convert the predictions into probabilities\n if from_logits:\n pred_prob = tf.sigmoid(y_pred)\n else:\n pred_prob = y_pred\n\n p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))\n alpha_factor = 1.0\n modulating_factor = 1.0\n\n if alpha:\n alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())\n alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)\n\n if gamma:\n gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())\n modulating_factor = tf.pow((1.0 - p_t), gamma)\n\n # compute the final loss and return\n return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)\n", "path": "tensorflow_addons/losses/focal_loss.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Focal loss.\"\"\"\n\nimport tensorflow as tf\nimport tensorflow.keras.backend as K\n\nfrom tensorflow_addons.utils.keras_utils import LossFunctionWrapper\nfrom tensorflow_addons.utils.types import FloatTensorLike, TensorLike\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass SigmoidFocalCrossEntropy(LossFunctionWrapper):\n \"\"\"Implements the focal loss function.\n\n Focal loss was first introduced in the RetinaNet paper\n (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for\n classification when you have highly imbalanced classes. 
It down-weights\n well-classified examples and focuses on hard examples. The loss value is\n much high for a sample which is misclassified by the classifier as compared\n to the loss value corresponding to a well-classified example. One of the\n best use-cases of focal loss is its usage in object detection where the\n imbalance between the background class and other classes is extremely high.\n\n Usage:\n\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n y_true = [[1.0], [1.0], [0.0]],\n y_pred = [[0.97], [0.91], [0.03]])\n print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,\n 1.9097870e-04,\n 2.0559824e-05]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())\n ```\n\n Args\n alpha: balancing factor, default value is 0.25\n gamma: modulating factor, default value is 2.0\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `y_true`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `sample_weight` is invalid or value of\n `gamma` is less than zero\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n from_logits: bool = False,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n reduction: str = tf.keras.losses.Reduction.NONE,\n name: str = \"sigmoid_focal_crossentropy\",\n ):\n super().__init__(\n sigmoid_focal_crossentropy,\n name=name,\n reduction=reduction,\n from_logits=from_logits,\n alpha=alpha,\n gamma=gamma,\n )\n\n\[email protected]_keras_serializable(package=\"Addons\")\[email protected]\ndef sigmoid_focal_crossentropy(\n y_true: TensorLike,\n y_pred: TensorLike,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n from_logits: bool = False,\n) -> tf.Tensor:\n \"\"\"\n Args\n y_true: true targets tensor.\n y_pred: predictions tensor.\n alpha: balancing factor.\n gamma: modulating factor.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`,this has the\n same shape as `y_true`; otherwise, it is scalar.\n \"\"\"\n if gamma and gamma < 0:\n raise ValueError(\"Value of gamma should be greater than or equal to zero\")\n\n y_pred = tf.convert_to_tensor(y_pred)\n y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)\n\n # Get the cross_entropy for each entry\n ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)\n\n # If logits are provided then convert the predictions into probabilities\n if from_logits:\n pred_prob = tf.sigmoid(y_pred)\n else:\n pred_prob = y_pred\n\n p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))\n alpha_factor = 1.0\n modulating_factor = 1.0\n\n if alpha:\n alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())\n alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)\n\n if gamma:\n gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())\n modulating_factor = tf.pow((1.0 - p_t), gamma)\n\n # compute the final loss and return\n return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)\n", "path": "tensorflow_addons/losses/focal_loss.py"}]} | 1,932 | 302 |
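
The corrected docstring passes targets first and predictions second. To sanity-check the new example values without needing TensorFlow Addons installed, here is a NumPy re-derivation of the same formula used in `focal_loss.py`, assuming probability inputs (`from_logits=False`); it is a sketch for verification, not the library implementation.

```python
import numpy as np


def sigmoid_focal_crossentropy(y_true, y_pred, alpha=0.25, gamma=2.0):
    """NumPy restatement of the loss in focal_loss.py for probability inputs,
    summed over the last axis like the TF implementation."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    ce = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
    alpha_factor = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
    modulating_factor = (1.0 - p_t) ** gamma
    return np.sum(alpha_factor * modulating_factor * ce, axis=-1)


# Targets first, predictions second -- the order the function actually expects.
y_true = [[1.0], [1.0], [0.0]]
y_pred = [[0.97], [0.91], [0.03]]
loss = sigmoid_focal_crossentropy(y_true, y_pred)
print(loss)
# Matches the corrected docstring values from the golden diff.
assert np.allclose(loss, [6.8532745e-06, 1.9097870e-04, 2.0559824e-05], rtol=1e-4)
```

Calling the function with the arguments swapped (predictions first), as the old example did, yields entirely different numbers and `log(0)` terms for the hard 0/1 targets, which is exactly why the original documentation was misleading.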
gh_patches_debug_14685 | rasdani/github-patches | git_diff | pre-commit__pre-commit-370 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ruby hooks failing with rbenv installed
Pre-commit has been failing for the past few weeks.
https://gist.github.com/ThatGerber/d6533155848076b25e5e0d5cb02e20eb
Seems to be an issue with the ruby (rbenv) environment.
Tried running `pre-commit clean && pre-commit` but it returns the same issue. Setting `rbenv global 2.2.4` and `rbenv shell 2.2.4` does not help either.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/ruby.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import os.path
6 import shutil
7
8 from pre_commit.envcontext import envcontext
9 from pre_commit.envcontext import Var
10 from pre_commit.languages import helpers
11 from pre_commit.util import CalledProcessError
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.util import resource_filename
14 from pre_commit.util import tarfile_open
15 from pre_commit.xargs import xargs
16
17
18 ENVIRONMENT_DIR = 'rbenv'
19
20
21 def get_env_patch(venv, language_version):
22 return (
23 ('GEM_HOME', os.path.join(venv, 'gems')),
24 ('RBENV_ROOT', venv),
25 ('RBENV_VERSION', language_version),
26 ('PATH', (
27 os.path.join(venv, 'gems', 'bin'), os.pathsep,
28 os.path.join(venv, 'shims'), os.pathsep,
29 os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
30 )),
31 )
32
33
34 @contextlib.contextmanager
35 def in_env(repo_cmd_runner, language_version):
36 envdir = os.path.join(
37 repo_cmd_runner.prefix_dir,
38 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
39 )
40 with envcontext(get_env_patch(envdir, language_version)):
41 yield
42
43
44 def _install_rbenv(repo_cmd_runner, version='default'):
45 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
46
47 with tarfile_open(resource_filename('rbenv.tar.gz')) as tf:
48 tf.extractall(repo_cmd_runner.path('.'))
49 shutil.move(
50 repo_cmd_runner.path('rbenv'), repo_cmd_runner.path(directory),
51 )
52
53 # Only install ruby-build if the version is specified
54 if version != 'default':
55 # ruby-download
56 with tarfile_open(resource_filename('ruby-download.tar.gz')) as tf:
57 tf.extractall(repo_cmd_runner.path(directory, 'plugins'))
58
59 # ruby-build
60 with tarfile_open(resource_filename('ruby-build.tar.gz')) as tf:
61 tf.extractall(repo_cmd_runner.path(directory, 'plugins'))
62
63 activate_path = repo_cmd_runner.path(directory, 'bin', 'activate')
64 with io.open(activate_path, 'w') as activate_file:
65 # This is similar to how you would install rbenv to your home directory
66 # However we do a couple things to make the executables exposed and
67 # configure it to work in our directory.
68 # We also modify the PS1 variable for manual debugging sake.
69 activate_file.write(
70 '#!/usr/bin/env bash\n'
71 "export RBENV_ROOT='{0}'\n"
72 'export PATH="$RBENV_ROOT/bin:$PATH"\n'
73 'eval "$(rbenv init -)"\n'
74 'export PS1="(rbenv)$PS1"\n'
75 # This lets us install gems in an isolated and repeatable
76 # directory
77 "export GEM_HOME='{0}/gems'\n"
78 'export PATH="$GEM_HOME/bin:$PATH"\n'
79 '\n'.format(repo_cmd_runner.path(directory))
80 )
81
82 # If we aren't using the system ruby, add a version here
83 if version != 'default':
84 activate_file.write('export RBENV_VERSION="{0}"\n'.format(version))
85
86
87 def _install_ruby(runner, version):
88 try:
89 helpers.run_setup_cmd(runner, ('rbenv', 'download', version))
90 except CalledProcessError: # pragma: no cover (usually find with download)
91 # Failed to download from mirror for some reason, build it instead
92 helpers.run_setup_cmd(runner, ('rbenv', 'install', version))
93
94
95 def install_environment(
96 repo_cmd_runner,
97 version='default',
98 additional_dependencies=(),
99 ):
100 additional_dependencies = tuple(additional_dependencies)
101 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
102 with clean_path_on_failure(repo_cmd_runner.path(directory)):
103 # TODO: this currently will fail if there's no version specified and
104 # there's no system ruby installed. Is this ok?
105 _install_rbenv(repo_cmd_runner, version=version)
106 with in_env(repo_cmd_runner, version):
107 # Need to call this before installing so rbenv's directories are
108 # set up
109 helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'init', '-'))
110 if version != 'default':
111 _install_ruby(repo_cmd_runner, version)
112 # Need to call this after installing to set up the shims
113 helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'rehash'))
114 helpers.run_setup_cmd(
115 repo_cmd_runner,
116 ('gem', 'build') + repo_cmd_runner.star('.gemspec'),
117 )
118 helpers.run_setup_cmd(
119 repo_cmd_runner,
120 (
121 ('gem', 'install', '--no-ri', '--no-rdoc') +
122 repo_cmd_runner.star('.gem') + additional_dependencies
123 ),
124 )
125
126
127 def run_hook(repo_cmd_runner, hook, file_args):
128 with in_env(repo_cmd_runner, hook['language_version']):
129 return xargs((hook['entry'],) + tuple(hook['args']), file_args)
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -19,16 +19,18 @@
def get_env_patch(venv, language_version):
- return (
+ patches = (
('GEM_HOME', os.path.join(venv, 'gems')),
('RBENV_ROOT', venv),
- ('RBENV_VERSION', language_version),
('PATH', (
os.path.join(venv, 'gems', 'bin'), os.pathsep,
os.path.join(venv, 'shims'), os.pathsep,
os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
)),
)
+ if language_version != 'default':
+ patches += (('RBENV_VERSION', language_version),)
+ return patches
@contextlib.contextmanager
| {"golden_diff": "diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py\n--- a/pre_commit/languages/ruby.py\n+++ b/pre_commit/languages/ruby.py\n@@ -19,16 +19,18 @@\n \n \n def get_env_patch(venv, language_version):\n- return (\n+ patches = (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('RBENV_ROOT', venv),\n- ('RBENV_VERSION', language_version),\n ('PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n )),\n )\n+ if language_version != 'default':\n+ patches += (('RBENV_VERSION', language_version),)\n+ return patches\n \n \n @contextlib.contextmanager\n", "issue": "Ruby hooks failing with rbenv installed\nPre-commit has been failing for the past few weeks.\n\nhttps://gist.github.com/ThatGerber/d6533155848076b25e5e0d5cb02e20eb\n\nSeems to be an issue with the ruby (rbenv) environment.\n\nTried running `pre-commit clean && pre-commit` but it returns the same issue. Setting `rbenv global 2.2.4` and `rbenv shell 2.2.4` does help either.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport os.path\nimport shutil\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import resource_filename\nfrom pre_commit.util import tarfile_open\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'rbenv'\n\n\ndef get_env_patch(venv, language_version):\n return (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('RBENV_ROOT', venv),\n ('RBENV_VERSION', language_version),\n ('PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n )),\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n envdir = os.path.join(\n repo_cmd_runner.prefix_dir,\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir, language_version)):\n yield\n\n\ndef _install_rbenv(repo_cmd_runner, version='default'):\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n with tarfile_open(resource_filename('rbenv.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path('.'))\n shutil.move(\n repo_cmd_runner.path('rbenv'), repo_cmd_runner.path(directory),\n )\n\n # Only install ruby-build if the version is specified\n if version != 'default':\n # ruby-download\n with tarfile_open(resource_filename('ruby-download.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path(directory, 'plugins'))\n\n # ruby-build\n with tarfile_open(resource_filename('ruby-build.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path(directory, 'plugins'))\n\n activate_path = repo_cmd_runner.path(directory, 'bin', 'activate')\n with io.open(activate_path, 'w') as activate_file:\n # This is similar to how you would install rbenv to your home directory\n # However we do a couple things to make the executables exposed and\n # configure it to work in our directory.\n # We also modify the PS1 variable for manual debugging sake.\n activate_file.write(\n '#!/usr/bin/env bash\\n'\n \"export RBENV_ROOT='{0}'\\n\"\n 'export PATH=\"$RBENV_ROOT/bin:$PATH\"\\n'\n 'eval \"$(rbenv init -)\"\\n'\n 'export PS1=\"(rbenv)$PS1\"\\n'\n # This lets us install gems in an isolated and repeatable\n # directory\n \"export 
GEM_HOME='{0}/gems'\\n\"\n 'export PATH=\"$GEM_HOME/bin:$PATH\"\\n'\n '\\n'.format(repo_cmd_runner.path(directory))\n )\n\n # If we aren't using the system ruby, add a version here\n if version != 'default':\n activate_file.write('export RBENV_VERSION=\"{0}\"\\n'.format(version))\n\n\ndef _install_ruby(runner, version):\n try:\n helpers.run_setup_cmd(runner, ('rbenv', 'download', version))\n except CalledProcessError: # pragma: no cover (usually find with download)\n # Failed to download from mirror for some reason, build it instead\n helpers.run_setup_cmd(runner, ('rbenv', 'install', version))\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n # TODO: this currently will fail if there's no version specified and\n # there's no system ruby installed. Is this ok?\n _install_rbenv(repo_cmd_runner, version=version)\n with in_env(repo_cmd_runner, version):\n # Need to call this before installing so rbenv's directories are\n # set up\n helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'init', '-'))\n if version != 'default':\n _install_ruby(repo_cmd_runner, version)\n # Need to call this after installing to set up the shims\n helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'rehash'))\n helpers.run_setup_cmd(\n repo_cmd_runner,\n ('gem', 'build') + repo_cmd_runner.star('.gemspec'),\n )\n helpers.run_setup_cmd(\n repo_cmd_runner,\n (\n ('gem', 'install', '--no-ri', '--no-rdoc') +\n repo_cmd_runner.star('.gem') + additional_dependencies\n ),\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']):\n return xargs((hook['entry'],) + tuple(hook['args']), file_args)\n", "path": "pre_commit/languages/ruby.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport os.path\nimport shutil\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import resource_filename\nfrom pre_commit.util import tarfile_open\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'rbenv'\n\n\ndef get_env_patch(venv, language_version):\n patches = (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('RBENV_ROOT', venv),\n ('PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n )),\n )\n if language_version != 'default':\n patches += (('RBENV_VERSION', language_version),)\n return patches\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n envdir = os.path.join(\n repo_cmd_runner.prefix_dir,\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir, language_version)):\n yield\n\n\ndef _install_rbenv(repo_cmd_runner, version='default'):\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n with tarfile_open(resource_filename('rbenv.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path('.'))\n shutil.move(\n repo_cmd_runner.path('rbenv'), repo_cmd_runner.path(directory),\n )\n\n # Only install ruby-build if the version is specified\n if version != 'default':\n # ruby-download\n with 
tarfile_open(resource_filename('ruby-download.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path(directory, 'plugins'))\n\n # ruby-build\n with tarfile_open(resource_filename('ruby-build.tar.gz')) as tf:\n tf.extractall(repo_cmd_runner.path(directory, 'plugins'))\n\n activate_path = repo_cmd_runner.path(directory, 'bin', 'activate')\n with io.open(activate_path, 'w') as activate_file:\n # This is similar to how you would install rbenv to your home directory\n # However we do a couple things to make the executables exposed and\n # configure it to work in our directory.\n # We also modify the PS1 variable for manual debugging sake.\n activate_file.write(\n '#!/usr/bin/env bash\\n'\n \"export RBENV_ROOT='{0}'\\n\"\n 'export PATH=\"$RBENV_ROOT/bin:$PATH\"\\n'\n 'eval \"$(rbenv init -)\"\\n'\n 'export PS1=\"(rbenv)$PS1\"\\n'\n # This lets us install gems in an isolated and repeatable\n # directory\n \"export GEM_HOME='{0}/gems'\\n\"\n 'export PATH=\"$GEM_HOME/bin:$PATH\"\\n'\n '\\n'.format(repo_cmd_runner.path(directory))\n )\n\n # If we aren't using the system ruby, add a version here\n if version != 'default':\n activate_file.write('export RBENV_VERSION=\"{0}\"\\n'.format(version))\n\n\ndef _install_ruby(runner, version):\n try:\n helpers.run_setup_cmd(runner, ('rbenv', 'download', version))\n except CalledProcessError: # pragma: no cover (usually find with download)\n # Failed to download from mirror for some reason, build it instead\n helpers.run_setup_cmd(runner, ('rbenv', 'install', version))\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n # TODO: this currently will fail if there's no version specified and\n # there's no system ruby installed. Is this ok?\n _install_rbenv(repo_cmd_runner, version=version)\n with in_env(repo_cmd_runner, version):\n # Need to call this before installing so rbenv's directories are\n # set up\n helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'init', '-'))\n if version != 'default':\n _install_ruby(repo_cmd_runner, version)\n # Need to call this after installing to set up the shims\n helpers.run_setup_cmd(repo_cmd_runner, ('rbenv', 'rehash'))\n helpers.run_setup_cmd(\n repo_cmd_runner,\n ('gem', 'build') + repo_cmd_runner.star('.gemspec'),\n )\n helpers.run_setup_cmd(\n repo_cmd_runner,\n (\n ('gem', 'install', '--no-ri', '--no-rdoc') +\n repo_cmd_runner.star('.gem') + additional_dependencies\n ),\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']):\n return xargs((hook['entry'],) + tuple(hook['args']), file_args)\n", "path": "pre_commit/languages/ruby.py"}]} | 1,763 | 206 |
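
The fix above stops exporting `RBENV_VERSION=default` into the hook environment: the variable is only set when a specific Ruby version was requested, so an unpinned hook falls back to the user's existing rbenv/system Ruby. A minimal standalone sketch of that patched `get_env_patch()` behaviour follows; pre-commit's `Var('PATH')` helper is replaced with a plain `"$PATH"` placeholder purely so the snippet runs on its own.

```python
import os


def get_env_patch(venv: str, language_version: str):
    """Sketch of the patched get_env_patch(): RBENV_VERSION is only pinned
    when the hook asks for a specific Ruby, so a 'default' install defers
    to whatever rbenv/system Ruby is already available."""
    patches = (
        ("GEM_HOME", os.path.join(venv, "gems")),
        ("RBENV_ROOT", venv),
        ("PATH", (os.path.join(venv, "gems", "bin"), os.pathsep,
                  os.path.join(venv, "shims"), os.pathsep,
                  os.path.join(venv, "bin"), os.pathsep, "$PATH")),
    )
    if language_version != "default":
        patches += (("RBENV_VERSION", language_version),)
    return patches


default_env = dict(get_env_patch("/tmp/rbenv", "default"))
pinned_env = dict(get_env_patch("/tmp/rbenv", "2.2.4"))
# Pre-patch, RBENV_VERSION was exported as the literal string "default",
# which rbenv cannot resolve -- the failure described in the issue.
assert "RBENV_VERSION" not in default_env
assert pinned_env["RBENV_VERSION"] == "2.2.4"
print(sorted(pinned_env))
```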
gh_patches_debug_7386 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3499 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CSRF Failed Error
## Description
The error message
"CSRF Failed: Origin checking failed - https://internal.mathesar.org/ does not match any trusted origins."
is appearing when performing certain actions in Mathesar like creating a new internal db or saving a record.
## Additional context
This happened while testing internal.mathesar.org
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/settings/production.py`
Content:
```
1 from config.settings.common_settings import * # noqa
2
3 # Override default settings
4 DEBUG = False
5 MATHESAR_MODE = 'PRODUCTION'
6 # Use a local.py module for settings that shouldn't be version tracked
7 try:
8 from .local import * # noqa
9 except ImportError:
10 pass
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/config/settings/production.py b/config/settings/production.py
--- a/config/settings/production.py
+++ b/config/settings/production.py
@@ -3,6 +3,14 @@
# Override default settings
DEBUG = False
MATHESAR_MODE = 'PRODUCTION'
+
+'''
+This tells Django to trust the X-Forwarded-Proto header that comes from our proxy,
+and any time its value is 'https', then the request is guaranteed to be secure
+(i.e., it originally came in via HTTPS).
+'''
+SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
+
# Use a local.py module for settings that shouldn't be version tracked
try:
from .local import * # noqa
| {"golden_diff": "diff --git a/config/settings/production.py b/config/settings/production.py\n--- a/config/settings/production.py\n+++ b/config/settings/production.py\n@@ -3,6 +3,14 @@\n # Override default settings\n DEBUG = False\n MATHESAR_MODE = 'PRODUCTION'\n+\n+'''\n+This tells Django to trust the X-Forwarded-Proto header that comes from our proxy,\n+and any time its value is 'https', then the request is guaranteed to be secure\n+(i.e., it originally came in via HTTPS).\n+'''\n+SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\n+\n # Use a local.py module for settings that shouldn't be version tracked\n try:\n from .local import * # noqa\n", "issue": "CSRF Failed Error\n## Description\r\nThe error message \r\n\"CSRF Failed: Origin checking failed - https://internal.mathesar.org/ does not match any trusted origins.\"\r\nis appearing when performing certain actions in Mathesar like creating a new internal db or saving a record. \r\n\r\n## Additional context\r\nThis happened while testing internal.mathesar.org\n", "before_files": [{"content": "from config.settings.common_settings import * # noqa\n\n# Override default settings\nDEBUG = False\nMATHESAR_MODE = 'PRODUCTION'\n# Use a local.py module for settings that shouldn't be version tracked\ntry:\n from .local import * # noqa \nexcept ImportError:\n pass\n", "path": "config/settings/production.py"}], "after_files": [{"content": "from config.settings.common_settings import * # noqa\n\n# Override default settings\nDEBUG = False\nMATHESAR_MODE = 'PRODUCTION'\n\n'''\nThis tells Django to trust the X-Forwarded-Proto header that comes from our proxy,\nand any time its value is 'https', then the request is guaranteed to be secure\n(i.e., it originally came in via HTTPS).\n'''\nSECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\n\n# Use a local.py module for settings that shouldn't be version tracked\ntry:\n from .local import * # noqa \nexcept ImportError:\n pass\n", "path": "config/settings/production.py"}]} | 403 | 161 |
gh_patches_debug_23677 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2429 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OTLP Exporter: Add support for Histogram point types
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import logging
15 from os import environ
16 from typing import Optional, Sequence
17 from grpc import ChannelCredentials, Compression
18 from opentelemetry.exporter.otlp.proto.grpc.exporter import (
19 OTLPExporterMixin,
20 get_resource_data,
21 )
22 from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (
23 ExportMetricsServiceRequest,
24 )
25 from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2_grpc import (
26 MetricsServiceStub,
27 )
28 from opentelemetry.proto.common.v1.common_pb2 import InstrumentationLibrary
29 from opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2
30 from opentelemetry.sdk.environment_variables import (
31 OTEL_EXPORTER_OTLP_METRICS_INSECURE,
32 )
33 from opentelemetry.sdk._metrics.point import (
34 Gauge,
35 Histogram,
36 Metric,
37 Sum,
38 )
39
40 from opentelemetry.sdk._metrics.export import (
41 MetricExporter,
42 MetricExportResult,
43 )
44
45 logger = logging.getLogger(__name__)
46
47
48 class OTLPMetricExporter(
49 MetricExporter,
50 OTLPExporterMixin[Metric, ExportMetricsServiceRequest, MetricExportResult],
51 ):
52 _result = MetricExportResult
53 _stub = MetricsServiceStub
54
55 def __init__(
56 self,
57 endpoint: Optional[str] = None,
58 insecure: Optional[bool] = None,
59 credentials: Optional[ChannelCredentials] = None,
60 headers: Optional[Sequence] = None,
61 timeout: Optional[int] = None,
62 compression: Optional[Compression] = None,
63 ):
64
65 if insecure is None:
66 insecure = environ.get(OTEL_EXPORTER_OTLP_METRICS_INSECURE)
67 if insecure is not None:
68 insecure = insecure.lower() == "true"
69
70 super().__init__(
71 **{
72 "endpoint": endpoint,
73 "insecure": insecure,
74 "credentials": credentials,
75 "headers": headers,
76 "timeout": timeout,
77 "compression": compression,
78 }
79 )
80
81 def _translate_data(
82 self, data: Sequence[Metric]
83 ) -> ExportMetricsServiceRequest:
84 sdk_resource_instrumentation_library_metrics = {}
85
86 for metric in data:
87 resource = metric.resource
88 instrumentation_library_map = (
89 sdk_resource_instrumentation_library_metrics.get(resource, {})
90 )
91 if not instrumentation_library_map:
92 sdk_resource_instrumentation_library_metrics[
93 resource
94 ] = instrumentation_library_map
95
96 instrumentation_library_metrics = instrumentation_library_map.get(
97 metric.instrumentation_info
98 )
99
100 if not instrumentation_library_metrics:
101 if metric.instrumentation_info is not None:
102 instrumentation_library_map[
103 metric.instrumentation_info
104 ] = pb2.InstrumentationLibraryMetrics(
105 instrumentation_library=InstrumentationLibrary(
106 name=metric.instrumentation_info.name,
107 version=metric.instrumentation_info.version,
108 )
109 )
110 else:
111 instrumentation_library_map[
112 metric.instrumentation_info
113 ] = pb2.InstrumentationLibraryMetrics()
114
115 instrumentation_library_metrics = instrumentation_library_map.get(
116 metric.instrumentation_info
117 )
118
119 pbmetric = pb2.Metric(
120 name=metric.name,
121 description=metric.description,
122 unit=metric.unit,
123 )
124 if isinstance(metric.point, Gauge):
125 pt = pb2.NumberDataPoint(
126 attributes=self._translate_attributes(metric.attributes),
127 time_unix_nano=metric.point.time_unix_nano,
128 )
129 if isinstance(metric.point.value, int):
130 pt.as_int = metric.point.value
131 else:
132 pt.as_double = metric.point.value
133 pbmetric.gauge.data_points.append(pt)
134 elif isinstance(metric.point, Histogram):
135 # TODO: implement histogram
136 pbmetric.histogram = pb2.Histogram(
137 data_points=[],
138 )
139 elif isinstance(metric.point, Sum):
140 pt = pb2.NumberDataPoint(
141 attributes=self._translate_attributes(metric.attributes),
142 start_time_unix_nano=metric.point.start_time_unix_nano,
143 time_unix_nano=metric.point.time_unix_nano,
144 )
145 if isinstance(metric.point.value, int):
146 pt.as_int = metric.point.value
147 else:
148 pt.as_double = metric.point.value
149 # note that because sum is a message type, the fields must be
150 # set individually rather than instantiating a pb2.Sum and setting
151 # it once
152 pbmetric.sum.aggregation_temporality = (
153 metric.point.aggregation_temporality
154 )
155 pbmetric.sum.is_monotonic = metric.point.is_monotonic
156 pbmetric.sum.data_points.append(pt)
157 else:
158 logger.warn("unsupported datapoint type %s", metric.point)
159 continue
160
161 instrumentation_library_metrics.metrics.append(
162 pbmetric,
163 )
164 return ExportMetricsServiceRequest(
165 resource_metrics=get_resource_data(
166 sdk_resource_instrumentation_library_metrics,
167 pb2.ResourceMetrics,
168 "metrics",
169 )
170 )
171
172 def export(self, metrics: Sequence[Metric]) -> MetricExportResult:
173 return self._export(metrics)
174
175 def shutdown(self):
176 pass
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py
--- a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py
@@ -132,10 +132,18 @@
pt.as_double = metric.point.value
pbmetric.gauge.data_points.append(pt)
elif isinstance(metric.point, Histogram):
- # TODO: implement histogram
- pbmetric.histogram = pb2.Histogram(
- data_points=[],
+ pt = pb2.HistogramDataPoint(
+ attributes=self._translate_attributes(metric.attributes),
+ time_unix_nano=metric.point.time_unix_nano,
+ start_time_unix_nano=metric.point.start_time_unix_nano,
+ count=sum(metric.point.bucket_counts),
+ bucket_counts=metric.point.bucket_counts,
+ explicit_bounds=metric.point.explicit_bounds,
+ )
+ pbmetric.histogram.aggregation_temporality = (
+ metric.point.aggregation_temporality
)
+ pbmetric.histogram.data_points.append(pt)
elif isinstance(metric.point, Sum):
pt = pb2.NumberDataPoint(
attributes=self._translate_attributes(metric.attributes),
| {"golden_diff": "diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py\n--- a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py\n+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py\n@@ -132,10 +132,18 @@\n pt.as_double = metric.point.value\n pbmetric.gauge.data_points.append(pt)\n elif isinstance(metric.point, Histogram):\n- # TODO: implement histogram\n- pbmetric.histogram = pb2.Histogram(\n- data_points=[],\n+ pt = pb2.HistogramDataPoint(\n+ attributes=self._translate_attributes(metric.attributes),\n+ time_unix_nano=metric.point.time_unix_nano,\n+ start_time_unix_nano=metric.point.start_time_unix_nano,\n+ count=sum(metric.point.bucket_counts),\n+ bucket_counts=metric.point.bucket_counts,\n+ explicit_bounds=metric.point.explicit_bounds,\n+ )\n+ pbmetric.histogram.aggregation_temporality = (\n+ metric.point.aggregation_temporality\n )\n+ pbmetric.histogram.data_points.append(pt)\n elif isinstance(metric.point, Sum):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n", "issue": "OTLP Exporter: Add support for Histogram point types\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom os import environ\nfrom typing import Optional, Sequence\nfrom grpc import ChannelCredentials, Compression\nfrom opentelemetry.exporter.otlp.proto.grpc.exporter import (\n OTLPExporterMixin,\n get_resource_data,\n)\nfrom opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (\n ExportMetricsServiceRequest,\n)\nfrom opentelemetry.proto.collector.metrics.v1.metrics_service_pb2_grpc import (\n MetricsServiceStub,\n)\nfrom opentelemetry.proto.common.v1.common_pb2 import InstrumentationLibrary\nfrom opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2\nfrom opentelemetry.sdk.environment_variables import (\n OTEL_EXPORTER_OTLP_METRICS_INSECURE,\n)\nfrom opentelemetry.sdk._metrics.point import (\n Gauge,\n Histogram,\n Metric,\n Sum,\n)\n\nfrom opentelemetry.sdk._metrics.export import (\n MetricExporter,\n MetricExportResult,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass OTLPMetricExporter(\n MetricExporter,\n OTLPExporterMixin[Metric, ExportMetricsServiceRequest, MetricExportResult],\n):\n _result = MetricExportResult\n _stub = MetricsServiceStub\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n insecure: Optional[bool] = None,\n credentials: Optional[ChannelCredentials] = None,\n headers: Optional[Sequence] = None,\n timeout: Optional[int] = None,\n compression: Optional[Compression] = None,\n ):\n\n if insecure is None:\n insecure = environ.get(OTEL_EXPORTER_OTLP_METRICS_INSECURE)\n if insecure is not None:\n 
insecure = insecure.lower() == \"true\"\n\n super().__init__(\n **{\n \"endpoint\": endpoint,\n \"insecure\": insecure,\n \"credentials\": credentials,\n \"headers\": headers,\n \"timeout\": timeout,\n \"compression\": compression,\n }\n )\n\n def _translate_data(\n self, data: Sequence[Metric]\n ) -> ExportMetricsServiceRequest:\n sdk_resource_instrumentation_library_metrics = {}\n\n for metric in data:\n resource = metric.resource\n instrumentation_library_map = (\n sdk_resource_instrumentation_library_metrics.get(resource, {})\n )\n if not instrumentation_library_map:\n sdk_resource_instrumentation_library_metrics[\n resource\n ] = instrumentation_library_map\n\n instrumentation_library_metrics = instrumentation_library_map.get(\n metric.instrumentation_info\n )\n\n if not instrumentation_library_metrics:\n if metric.instrumentation_info is not None:\n instrumentation_library_map[\n metric.instrumentation_info\n ] = pb2.InstrumentationLibraryMetrics(\n instrumentation_library=InstrumentationLibrary(\n name=metric.instrumentation_info.name,\n version=metric.instrumentation_info.version,\n )\n )\n else:\n instrumentation_library_map[\n metric.instrumentation_info\n ] = pb2.InstrumentationLibraryMetrics()\n\n instrumentation_library_metrics = instrumentation_library_map.get(\n metric.instrumentation_info\n )\n\n pbmetric = pb2.Metric(\n name=metric.name,\n description=metric.description,\n unit=metric.unit,\n )\n if isinstance(metric.point, Gauge):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n time_unix_nano=metric.point.time_unix_nano,\n )\n if isinstance(metric.point.value, int):\n pt.as_int = metric.point.value\n else:\n pt.as_double = metric.point.value\n pbmetric.gauge.data_points.append(pt)\n elif isinstance(metric.point, Histogram):\n # TODO: implement histogram\n pbmetric.histogram = pb2.Histogram(\n data_points=[],\n )\n elif isinstance(metric.point, Sum):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n start_time_unix_nano=metric.point.start_time_unix_nano,\n time_unix_nano=metric.point.time_unix_nano,\n )\n if isinstance(metric.point.value, int):\n pt.as_int = metric.point.value\n else:\n pt.as_double = metric.point.value\n # note that because sum is a message type, the fields must be\n # set individually rather than instantiating a pb2.Sum and setting\n # it once\n pbmetric.sum.aggregation_temporality = (\n metric.point.aggregation_temporality\n )\n pbmetric.sum.is_monotonic = metric.point.is_monotonic\n pbmetric.sum.data_points.append(pt)\n else:\n logger.warn(\"unsupported datapoint type %s\", metric.point)\n continue\n\n instrumentation_library_metrics.metrics.append(\n pbmetric,\n )\n return ExportMetricsServiceRequest(\n resource_metrics=get_resource_data(\n sdk_resource_instrumentation_library_metrics,\n pb2.ResourceMetrics,\n \"metrics\",\n )\n )\n\n def export(self, metrics: Sequence[Metric]) -> MetricExportResult:\n return self._export(metrics)\n\n def shutdown(self):\n pass\n", "path": "exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# 
distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom os import environ\nfrom typing import Optional, Sequence\nfrom grpc import ChannelCredentials, Compression\nfrom opentelemetry.exporter.otlp.proto.grpc.exporter import (\n OTLPExporterMixin,\n get_resource_data,\n)\nfrom opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (\n ExportMetricsServiceRequest,\n)\nfrom opentelemetry.proto.collector.metrics.v1.metrics_service_pb2_grpc import (\n MetricsServiceStub,\n)\nfrom opentelemetry.proto.common.v1.common_pb2 import InstrumentationLibrary\nfrom opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2\nfrom opentelemetry.sdk.environment_variables import (\n OTEL_EXPORTER_OTLP_METRICS_INSECURE,\n)\nfrom opentelemetry.sdk._metrics.point import (\n Gauge,\n Histogram,\n Metric,\n Sum,\n)\n\nfrom opentelemetry.sdk._metrics.export import (\n MetricExporter,\n MetricExportResult,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass OTLPMetricExporter(\n MetricExporter,\n OTLPExporterMixin[Metric, ExportMetricsServiceRequest, MetricExportResult],\n):\n _result = MetricExportResult\n _stub = MetricsServiceStub\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n insecure: Optional[bool] = None,\n credentials: Optional[ChannelCredentials] = None,\n headers: Optional[Sequence] = None,\n timeout: Optional[int] = None,\n compression: Optional[Compression] = None,\n ):\n\n if insecure is None:\n insecure = environ.get(OTEL_EXPORTER_OTLP_METRICS_INSECURE)\n if insecure is not None:\n insecure = insecure.lower() == \"true\"\n\n super().__init__(\n **{\n \"endpoint\": endpoint,\n \"insecure\": insecure,\n \"credentials\": credentials,\n \"headers\": headers,\n \"timeout\": timeout,\n \"compression\": compression,\n }\n )\n\n def _translate_data(\n self, data: Sequence[Metric]\n ) -> ExportMetricsServiceRequest:\n sdk_resource_instrumentation_library_metrics = {}\n\n for metric in data:\n resource = metric.resource\n instrumentation_library_map = (\n sdk_resource_instrumentation_library_metrics.get(resource, {})\n )\n if not instrumentation_library_map:\n sdk_resource_instrumentation_library_metrics[\n resource\n ] = instrumentation_library_map\n\n instrumentation_library_metrics = instrumentation_library_map.get(\n metric.instrumentation_info\n )\n\n if not instrumentation_library_metrics:\n if metric.instrumentation_info is not None:\n instrumentation_library_map[\n metric.instrumentation_info\n ] = pb2.InstrumentationLibraryMetrics(\n instrumentation_library=InstrumentationLibrary(\n name=metric.instrumentation_info.name,\n version=metric.instrumentation_info.version,\n )\n )\n else:\n instrumentation_library_map[\n metric.instrumentation_info\n ] = pb2.InstrumentationLibraryMetrics()\n\n instrumentation_library_metrics = instrumentation_library_map.get(\n metric.instrumentation_info\n )\n\n pbmetric = pb2.Metric(\n name=metric.name,\n description=metric.description,\n unit=metric.unit,\n )\n if isinstance(metric.point, Gauge):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n time_unix_nano=metric.point.time_unix_nano,\n )\n if isinstance(metric.point.value, int):\n pt.as_int = metric.point.value\n else:\n pt.as_double = metric.point.value\n pbmetric.gauge.data_points.append(pt)\n elif isinstance(metric.point, 
Histogram):\n pt = pb2.HistogramDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n time_unix_nano=metric.point.time_unix_nano,\n start_time_unix_nano=metric.point.start_time_unix_nano,\n count=sum(metric.point.bucket_counts),\n bucket_counts=metric.point.bucket_counts,\n explicit_bounds=metric.point.explicit_bounds,\n )\n pbmetric.histogram.aggregation_temporality = (\n metric.point.aggregation_temporality\n )\n pbmetric.histogram.data_points.append(pt)\n elif isinstance(metric.point, Sum):\n pt = pb2.NumberDataPoint(\n attributes=self._translate_attributes(metric.attributes),\n start_time_unix_nano=metric.point.start_time_unix_nano,\n time_unix_nano=metric.point.time_unix_nano,\n )\n if isinstance(metric.point.value, int):\n pt.as_int = metric.point.value\n else:\n pt.as_double = metric.point.value\n # note that because sum is a message type, the fields must be\n # set individually rather than instantiating a pb2.Sum and setting\n # it once\n pbmetric.sum.aggregation_temporality = (\n metric.point.aggregation_temporality\n )\n pbmetric.sum.is_monotonic = metric.point.is_monotonic\n pbmetric.sum.data_points.append(pt)\n else:\n logger.warn(\"unsupported datapoint type %s\", metric.point)\n continue\n\n instrumentation_library_metrics.metrics.append(\n pbmetric,\n )\n return ExportMetricsServiceRequest(\n resource_metrics=get_resource_data(\n sdk_resource_instrumentation_library_metrics,\n pb2.ResourceMetrics,\n \"metrics\",\n )\n )\n\n def export(self, metrics: Sequence[Metric]) -> MetricExportResult:\n return self._export(metrics)\n\n def shutdown(self):\n pass\n", "path": "exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_metric_exporter/__init__.py"}]} | 1,904 | 356 |
gh_patches_debug_2722 | rasdani/github-patches | git_diff | ivy-llc__ivy-13420 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
standard_gamma
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/numpy/random/functions.py`
Content:
```
1 # local
2 import ivy
3 from ivy.functional.frontends.numpy.func_wrapper import (
4 to_ivy_arrays_and_back,
5 from_zero_dim_arrays_to_scalar,
6 )
7
8
9 @to_ivy_arrays_and_back
10 @from_zero_dim_arrays_to_scalar
11 def random_sample(size=None):
12 return ivy.random_uniform(low=0.0, high=1.0, shape=size, dtype="float64")
13
14
15 @to_ivy_arrays_and_back
16 @from_zero_dim_arrays_to_scalar
17 def dirichlet(alpha, size=None):
18 return ivy.dirichlet(alpha, size=size)
19
20
21 @to_ivy_arrays_and_back
22 @from_zero_dim_arrays_to_scalar
23 def uniform(low=0.0, high=1.0, size=None):
24 return ivy.random_uniform(low=low, high=high, shape=size, dtype="float64")
25
26
27 @to_ivy_arrays_and_back
28 @from_zero_dim_arrays_to_scalar
29 def geometric(p, size=None):
30 if p < 0 or p > 1:
31 raise ValueError("p must be in the interval [0, 1]")
32 oneMinusP = ivy.subtract(1, p)
33 sizeMinusOne = ivy.subtract(size, 1)
34
35 return ivy.multiply(ivy.pow(oneMinusP, sizeMinusOne), p)
36
37
38 @to_ivy_arrays_and_back
39 @from_zero_dim_arrays_to_scalar
40 def normal(loc=0.0, scale=1.0, size=None):
41 return ivy.random_normal(mean=loc, std=scale, shape=size, dtype="float64")
42
43
44 @to_ivy_arrays_and_back
45 @from_zero_dim_arrays_to_scalar
46 def poisson(lam=1.0, size=None):
47 return ivy.poisson(lam=lam, shape=size)
48
49
50 @to_ivy_arrays_and_back
51 @from_zero_dim_arrays_to_scalar
52 def multinomial(n, pvals, size=None):
53 assert not ivy.exists(size) or (len(size) > 0 and len(size) < 3)
54 batch_size = 1
55 if ivy.exists(size):
56 if len(size) == 2:
57 batch_size = size[0]
58 num_samples = size[1]
59 else:
60 num_samples = size[0]
61 else:
62 num_samples = len(pvals)
63 return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals)
64
65
66 @to_ivy_arrays_and_back
67 @from_zero_dim_arrays_to_scalar
68 def permutation(x, /):
69 if isinstance(x, int):
70 x = ivy.arange(x)
71 return ivy.shuffle(x)
72
73
74 @to_ivy_arrays_and_back
75 @from_zero_dim_arrays_to_scalar
76 def beta(a, b, size=None):
77 return ivy.beta(a, b, shape=size)
78
79
80 @to_ivy_arrays_and_back
81 @from_zero_dim_arrays_to_scalar
82 def shuffle(x, /):
83 if isinstance(x, int):
84 x = ivy.arange(x)
85 return ivy.shuffle(x)
86
87
88 @to_ivy_arrays_and_back
89 @from_zero_dim_arrays_to_scalar
90 def standard_normal(size=None):
91 return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype="float64")
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/numpy/random/functions.py b/ivy/functional/frontends/numpy/random/functions.py
--- a/ivy/functional/frontends/numpy/random/functions.py
+++ b/ivy/functional/frontends/numpy/random/functions.py
@@ -89,3 +89,9 @@
@from_zero_dim_arrays_to_scalar
def standard_normal(size=None):
return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype="float64")
+
+
+@to_ivy_arrays_and_back
+@from_zero_dim_arrays_to_scalar
+def standard_gamma(alpha):
+ return ivy.gamma(alpha, beta=1.0, dtype="float64")
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/random/functions.py b/ivy/functional/frontends/numpy/random/functions.py\n--- a/ivy/functional/frontends/numpy/random/functions.py\n+++ b/ivy/functional/frontends/numpy/random/functions.py\n@@ -89,3 +89,9 @@\n @from_zero_dim_arrays_to_scalar\n def standard_normal(size=None):\n return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype=\"float64\")\n+\n+\n+@to_ivy_arrays_and_back\n+@from_zero_dim_arrays_to_scalar\n+def standard_gamma(alpha):\n+ return ivy.gamma(alpha, beta=1.0, dtype=\"float64\")\n", "issue": "standard_gamma\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n from_zero_dim_arrays_to_scalar,\n)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef random_sample(size=None):\n return ivy.random_uniform(low=0.0, high=1.0, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef dirichlet(alpha, size=None):\n return ivy.dirichlet(alpha, size=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef uniform(low=0.0, high=1.0, size=None):\n return ivy.random_uniform(low=low, high=high, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef geometric(p, size=None):\n if p < 0 or p > 1:\n raise ValueError(\"p must be in the interval [0, 1]\")\n oneMinusP = ivy.subtract(1, p)\n sizeMinusOne = ivy.subtract(size, 1)\n\n return ivy.multiply(ivy.pow(oneMinusP, sizeMinusOne), p)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef normal(loc=0.0, scale=1.0, size=None):\n return ivy.random_normal(mean=loc, std=scale, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef poisson(lam=1.0, size=None):\n return ivy.poisson(lam=lam, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef multinomial(n, pvals, size=None):\n assert not ivy.exists(size) or (len(size) > 0 and len(size) < 3)\n batch_size = 1\n if ivy.exists(size):\n if len(size) == 2:\n batch_size = size[0]\n num_samples = size[1]\n else:\n num_samples = size[0]\n else:\n num_samples = len(pvals)\n return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef permutation(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef beta(a, b, size=None):\n return ivy.beta(a, b, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef shuffle(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef standard_normal(size=None):\n return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype=\"float64\")\n", "path": "ivy/functional/frontends/numpy/random/functions.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n from_zero_dim_arrays_to_scalar,\n)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef random_sample(size=None):\n return ivy.random_uniform(low=0.0, high=1.0, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef dirichlet(alpha, size=None):\n return ivy.dirichlet(alpha, size=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef uniform(low=0.0, 
high=1.0, size=None):\n return ivy.random_uniform(low=low, high=high, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef geometric(p, size=None):\n if p < 0 or p > 1:\n raise ValueError(\"p must be in the interval [0, 1]\")\n oneMinusP = ivy.subtract(1, p)\n sizeMinusOne = ivy.subtract(size, 1)\n\n return ivy.multiply(ivy.pow(oneMinusP, sizeMinusOne), p)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef normal(loc=0.0, scale=1.0, size=None):\n return ivy.random_normal(mean=loc, std=scale, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef poisson(lam=1.0, size=None):\n return ivy.poisson(lam=lam, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef multinomial(n, pvals, size=None):\n assert not ivy.exists(size) or (len(size) > 0 and len(size) < 3)\n batch_size = 1\n if ivy.exists(size):\n if len(size) == 2:\n batch_size = size[0]\n num_samples = size[1]\n else:\n num_samples = size[0]\n else:\n num_samples = len(pvals)\n return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef permutation(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef beta(a, b, size=None):\n return ivy.beta(a, b, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef shuffle(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef standard_normal(size=None):\n return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef standard_gamma(alpha):\n return ivy.gamma(alpha, beta=1.0, dtype=\"float64\")\n", "path": "ivy/functional/frontends/numpy/random/functions.py"}]} | 1,144 | 155 |
gh_patches_debug_17898 | rasdani/github-patches | git_diff | OCA__social-531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[12.0][mail_tracking] many MailTracking email not found warnings
This code snippet is polluting my logs with warning messages.
https://github.com/OCA/social/blob/8d985d8da9fa864113f87cd59a2b3173f7f89193/mail_tracking/controllers/main.py#L69-L78
How can this work if the state is marked 'opened' the first time? Each successive open will trigger the above warning, because the state is now 'opened' and no longer matches the domain `('state', 'in', ['sent', 'delivered'])`.
Is it intended to generate warnings each time mail is opened? Or what am I missing?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mail_tracking/controllers/main.py`
Content:
```
1 # Copyright 2016 Antonio Espinosa - <[email protected]>
2 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
3
4 import werkzeug
5 import odoo
6 from contextlib import contextmanager
7 from odoo import api, http, SUPERUSER_ID
8
9 from odoo.addons.mail.controllers.main import MailController
10 import logging
11 import base64
12 _logger = logging.getLogger(__name__)
13
14 BLANK = 'R0lGODlhAQABAIAAANvf7wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw=='
15
16
17 @contextmanager
18 def db_env(dbname):
19 if not http.db_filter([dbname]):
20 raise werkzeug.exceptions.BadRequest()
21 cr = None
22 if dbname == http.request.db:
23 cr = http.request.cr
24 if not cr:
25 cr = odoo.sql_db.db_connect(dbname).cursor()
26 with api.Environment.manage():
27 yield api.Environment(cr, SUPERUSER_ID, {})
28
29
30 class MailTrackingController(MailController):
31
32 def _request_metadata(self):
33 """Prepare remote info metadata"""
34 request = http.request.httprequest
35 return {
36 'ip': request.remote_addr or False,
37 'user_agent': request.user_agent or False,
38 'os_family': request.user_agent.platform or False,
39 'ua_family': request.user_agent.browser or False,
40 }
41
42 @http.route(['/mail/tracking/all/<string:db>',
43 '/mail/tracking/event/<string:db>/<string:event_type>'],
44 type='http', auth='none', csrf=False)
45 def mail_tracking_event(self, db, event_type=None, **kw):
46 """Route used by external mail service"""
47 metadata = self._request_metadata()
48 res = None
49 with db_env(db) as env:
50 try:
51 res = env['mail.tracking.email'].event_process(
52 http.request, kw, metadata, event_type=event_type)
53 except Exception:
54 pass
55 if not res or res == 'NOT FOUND':
56 return werkzeug.exceptions.NotAcceptable()
57 return res
58
59 @http.route(['/mail/tracking/open/<string:db>'
60 '/<int:tracking_email_id>/blank.gif',
61 '/mail/tracking/open/<string:db>'
62 '/<int:tracking_email_id>/<string:token>/blank.gif'],
63 type='http', auth='none', methods=['GET'])
64 def mail_tracking_open(self, db, tracking_email_id, token=False, **kw):
65 """Route used to track mail openned (With & Without Token)"""
66 metadata = self._request_metadata()
67 with db_env(db) as env:
68 try:
69 tracking_email = env['mail.tracking.email'].search([
70 ('id', '=', tracking_email_id),
71 ('state', 'in', ['sent', 'delivered']),
72 ('token', '=', token),
73 ])
74 if tracking_email:
75 tracking_email.event_create('open', metadata)
76 else:
77 _logger.warning(
78 "MailTracking email '%s' not found", tracking_email_id)
79 except Exception:
80 pass
81
82 # Always return GIF blank image
83 response = werkzeug.wrappers.Response()
84 response.mimetype = 'image/gif'
85 response.data = base64.b64decode(BLANK)
86 return response
87
88 @http.route()
89 def mail_init_messaging(self):
90 """Route used to initial values of Discuss app"""
91 values = super().mail_init_messaging()
92 values.update({
93 'failed_counter':
94 http.request.env['mail.message'].get_failed_count(),
95 })
96 return values
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mail_tracking/controllers/main.py b/mail_tracking/controllers/main.py
--- a/mail_tracking/controllers/main.py
+++ b/mail_tracking/controllers/main.py
@@ -68,14 +68,13 @@
try:
tracking_email = env['mail.tracking.email'].search([
('id', '=', tracking_email_id),
- ('state', 'in', ['sent', 'delivered']),
('token', '=', token),
])
- if tracking_email:
- tracking_email.event_create('open', metadata)
- else:
+ if not tracking_email:
_logger.warning(
"MailTracking email '%s' not found", tracking_email_id)
+ elif tracking_email.state in ('sent', 'delivered'):
+ tracking_email.event_create('open', metadata)
except Exception:
pass
| {"golden_diff": "diff --git a/mail_tracking/controllers/main.py b/mail_tracking/controllers/main.py\n--- a/mail_tracking/controllers/main.py\n+++ b/mail_tracking/controllers/main.py\n@@ -68,14 +68,13 @@\n try:\n tracking_email = env['mail.tracking.email'].search([\n ('id', '=', tracking_email_id),\n- ('state', 'in', ['sent', 'delivered']),\n ('token', '=', token),\n ])\n- if tracking_email:\n- tracking_email.event_create('open', metadata)\n- else:\n+ if not tracking_email:\n _logger.warning(\n \"MailTracking email '%s' not found\", tracking_email_id)\n+ elif tracking_email.state in ('sent', 'delivered'):\n+ tracking_email.event_create('open', metadata)\n except Exception:\n pass\n", "issue": "[12.0][mail_tracking] many MailTracking email not found warnings\nThis code snippet is polluting my logs with warning messages.\r\n\r\nhttps://github.com/OCA/social/blob/8d985d8da9fa864113f87cd59a2b3173f7f89193/mail_tracking/controllers/main.py#L69-L78\r\n\r\nHow can this work if state will be marked 'opened' the first time? Each successive open will trigger above warning, because state is now 'opened' and will not match the above domain `('state', 'in', ['sent', 'delivered'])`.\r\n\r\n Is it intended to generate warnings each time mail is opened? Or what am I missing?\n", "before_files": [{"content": "# Copyright 2016 Antonio Espinosa - <[email protected]>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\n\nimport werkzeug\nimport odoo\nfrom contextlib import contextmanager\nfrom odoo import api, http, SUPERUSER_ID\n\nfrom odoo.addons.mail.controllers.main import MailController\nimport logging\nimport base64\n_logger = logging.getLogger(__name__)\n\nBLANK = 'R0lGODlhAQABAIAAANvf7wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw=='\n\n\n@contextmanager\ndef db_env(dbname):\n if not http.db_filter([dbname]):\n raise werkzeug.exceptions.BadRequest()\n cr = None\n if dbname == http.request.db:\n cr = http.request.cr\n if not cr:\n cr = odoo.sql_db.db_connect(dbname).cursor()\n with api.Environment.manage():\n yield api.Environment(cr, SUPERUSER_ID, {})\n\n\nclass MailTrackingController(MailController):\n\n def _request_metadata(self):\n \"\"\"Prepare remote info metadata\"\"\"\n request = http.request.httprequest\n return {\n 'ip': request.remote_addr or False,\n 'user_agent': request.user_agent or False,\n 'os_family': request.user_agent.platform or False,\n 'ua_family': request.user_agent.browser or False,\n }\n\n @http.route(['/mail/tracking/all/<string:db>',\n '/mail/tracking/event/<string:db>/<string:event_type>'],\n type='http', auth='none', csrf=False)\n def mail_tracking_event(self, db, event_type=None, **kw):\n \"\"\"Route used by external mail service\"\"\"\n metadata = self._request_metadata()\n res = None\n with db_env(db) as env:\n try:\n res = env['mail.tracking.email'].event_process(\n http.request, kw, metadata, event_type=event_type)\n except Exception:\n pass\n if not res or res == 'NOT FOUND':\n return werkzeug.exceptions.NotAcceptable()\n return res\n\n @http.route(['/mail/tracking/open/<string:db>'\n '/<int:tracking_email_id>/blank.gif',\n '/mail/tracking/open/<string:db>'\n '/<int:tracking_email_id>/<string:token>/blank.gif'],\n type='http', auth='none', methods=['GET'])\n def mail_tracking_open(self, db, tracking_email_id, token=False, **kw):\n \"\"\"Route used to track mail openned (With & Without Token)\"\"\"\n metadata = self._request_metadata()\n with db_env(db) as env:\n try:\n tracking_email = env['mail.tracking.email'].search([\n ('id', '=', tracking_email_id),\n 
('state', 'in', ['sent', 'delivered']),\n ('token', '=', token),\n ])\n if tracking_email:\n tracking_email.event_create('open', metadata)\n else:\n _logger.warning(\n \"MailTracking email '%s' not found\", tracking_email_id)\n except Exception:\n pass\n\n # Always return GIF blank image\n response = werkzeug.wrappers.Response()\n response.mimetype = 'image/gif'\n response.data = base64.b64decode(BLANK)\n return response\n\n @http.route()\n def mail_init_messaging(self):\n \"\"\"Route used to initial values of Discuss app\"\"\"\n values = super().mail_init_messaging()\n values.update({\n 'failed_counter':\n http.request.env['mail.message'].get_failed_count(),\n })\n return values\n", "path": "mail_tracking/controllers/main.py"}], "after_files": [{"content": "# Copyright 2016 Antonio Espinosa - <[email protected]>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\n\nimport werkzeug\nimport odoo\nfrom contextlib import contextmanager\nfrom odoo import api, http, SUPERUSER_ID\n\nfrom odoo.addons.mail.controllers.main import MailController\nimport logging\nimport base64\n_logger = logging.getLogger(__name__)\n\nBLANK = 'R0lGODlhAQABAIAAANvf7wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw=='\n\n\n@contextmanager\ndef db_env(dbname):\n if not http.db_filter([dbname]):\n raise werkzeug.exceptions.BadRequest()\n cr = None\n if dbname == http.request.db:\n cr = http.request.cr\n if not cr:\n cr = odoo.sql_db.db_connect(dbname).cursor()\n with api.Environment.manage():\n yield api.Environment(cr, SUPERUSER_ID, {})\n\n\nclass MailTrackingController(MailController):\n\n def _request_metadata(self):\n \"\"\"Prepare remote info metadata\"\"\"\n request = http.request.httprequest\n return {\n 'ip': request.remote_addr or False,\n 'user_agent': request.user_agent or False,\n 'os_family': request.user_agent.platform or False,\n 'ua_family': request.user_agent.browser or False,\n }\n\n @http.route(['/mail/tracking/all/<string:db>',\n '/mail/tracking/event/<string:db>/<string:event_type>'],\n type='http', auth='none', csrf=False)\n def mail_tracking_event(self, db, event_type=None, **kw):\n \"\"\"Route used by external mail service\"\"\"\n metadata = self._request_metadata()\n res = None\n with db_env(db) as env:\n try:\n res = env['mail.tracking.email'].event_process(\n http.request, kw, metadata, event_type=event_type)\n except Exception:\n pass\n if not res or res == 'NOT FOUND':\n return werkzeug.exceptions.NotAcceptable()\n return res\n\n @http.route(['/mail/tracking/open/<string:db>'\n '/<int:tracking_email_id>/blank.gif',\n '/mail/tracking/open/<string:db>'\n '/<int:tracking_email_id>/<string:token>/blank.gif'],\n type='http', auth='none', methods=['GET'])\n def mail_tracking_open(self, db, tracking_email_id, token=False, **kw):\n \"\"\"Route used to track mail openned (With & Without Token)\"\"\"\n metadata = self._request_metadata()\n with db_env(db) as env:\n try:\n tracking_email = env['mail.tracking.email'].search([\n ('id', '=', tracking_email_id),\n ('token', '=', token),\n ])\n if not tracking_email:\n _logger.warning(\n \"MailTracking email '%s' not found\", tracking_email_id)\n elif tracking_email.state in ('sent', 'delivered'):\n tracking_email.event_create('open', metadata)\n except Exception:\n pass\n\n # Always return GIF blank image\n response = werkzeug.wrappers.Response()\n response.mimetype = 'image/gif'\n response.data = base64.b64decode(BLANK)\n return response\n\n @http.route()\n def mail_init_messaging(self):\n \"\"\"Route used to initial values of Discuss app\"\"\"\n 
values = super().mail_init_messaging()\n values.update({\n 'failed_counter':\n http.request.env['mail.message'].get_failed_count(),\n })\n return values\n", "path": "mail_tracking/controllers/main.py"}]} | 1,382 | 175 |
gh_patches_debug_19190 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-9709 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Using filter list with Jinja on metrics and metric input measures causes error when running `dbt parse`
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
When you define a metric or metric input measure with a filter list and include Jinja in the list, `dbt parse` triggers an error. Example YAML:
```
metrics:
- name: new_customer
description: Unique count of new customers.
label: New Customers
type: simple
type_params:
measure: customers_with_orders
filter:
- "{{ Dimension('customer__customer_type') }} = 'new'"
```
Running `dbt parse` in a repo with that metric definition triggers this error:
```
Compilation Error
Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined
```
Meanwhile, if you apply the same filter as a string (example below), no error will be triggered.
```
metrics:
- name: new_customer
description: Unique count of new customers.
label: New Customers
type: simple
type_params:
measure: customers_with_orders
filter: "{{ Dimension('customer__customer_type') }} = 'new'"
```
### Expected Behavior
Applying the filter in a list should have the same impact as applying it as a string. Currently this limitation will prevent users from using multiple filters on metrics and metric input measures.
### Steps To Reproduce
1. Add a list-type filter on a metric or a metric input measure in a dbt project
2. Run `dbt parse` in the repo
### Relevant log output
```shell
Output when running dbt parse:
21:05:56 Running with dbt=1.7.3
21:05:56 Registered adapter: duckdb=1.7.0
21:05:56 Encountered an error:
Compilation Error
Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined
```
### Environment
```markdown
- OS: Mac OSX 13.4.1
- Python: 3.9.16
- dbt: 1.7.3
```
### Which database adapter are you using with dbt?
other (mention it in "Additional Context")
### Additional Context
DuckDB
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/parser/schema_renderer.py`
Content:
```
1 from typing import Dict, Any
2
3 from dbt.config.renderer import BaseRenderer, Keypath
4
5
6 # This class renders dictionaries derived from "schema" yaml files.
7 # It calls Jinja on strings (in deep_map_render), except for certain
8 # keys which are skipped because they need to be rendered later
9 # (tests and description). Test configs are rendered in the
10 # generic test builder code, but skips the keyword args. The test
11 # keyword args are rendered to capture refs in render_test_update.
12 # Keyword args are finally rendered at compilation time.
13 # Descriptions are not rendered until 'process_docs'.
14 class SchemaYamlRenderer(BaseRenderer):
15 def __init__(self, context: Dict[str, Any], key: str) -> None:
16 super().__init__(context)
17 self.key = key
18
19 @property
20 def name(self):
21 return "Rendering yaml"
22
23 def _is_norender_key(self, keypath: Keypath) -> bool:
24 """
25 models:
26 - name: blah
27 description: blah
28 data_tests: ...
29 columns:
30 - name:
31 description: blah
32 data_tests: ...
33
34 Return True if it's tests, data_tests or description - those aren't rendered now
35 because they're rendered later in parse_generic_tests or process_docs.
36 "tests" and "data_tests" are both currently supported but "tests" has been deprecated
37 """
38 # top level descriptions and data_tests
39 if len(keypath) >= 1 and keypath[0] in ("tests", "data_tests", "description"):
40 return True
41
42 # columns descriptions and data_tests
43 if len(keypath) == 2 and keypath[1] in ("tests", "data_tests", "description"):
44 return True
45
46 # versions
47 if len(keypath) == 5 and keypath[4] == "description":
48 return True
49
50 if (
51 len(keypath) >= 3
52 and keypath[0] in ("columns", "dimensions", "measures", "entities")
53 and keypath[2] in ("tests", "data_tests", "description")
54 ):
55 return True
56
57 return False
58
59 # don't render descriptions or test keyword arguments
60 def should_render_keypath(self, keypath: Keypath) -> bool:
61 if len(keypath) < 1:
62 return True
63
64 if self.key == "sources":
65 if keypath[0] == "description":
66 return False
67 if keypath[0] == "tables":
68 if self._is_norender_key(keypath[2:]):
69 return False
70 elif self.key == "macros":
71 if keypath[0] == "arguments":
72 if self._is_norender_key(keypath[1:]):
73 return False
74 elif self._is_norender_key(keypath[0:]):
75 return False
76 elif self.key == "metrics":
77 # This ensures all key paths that end in 'filter' for a metric are skipped
78 if keypath[-1] == "filter":
79 return False
80 elif self._is_norender_key(keypath[0:]):
81 return False
82 elif self.key == "saved_queries":
83 if keypath[0] == "query_params" and len(keypath) > 1 and keypath[1] == "where":
84 return False
85 elif self._is_norender_key(keypath[0:]):
86 return False
87 else: # models, seeds, snapshots, analyses
88 if self._is_norender_key(keypath[0:]):
89 return False
90 return True
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/dbt/parser/schema_renderer.py b/core/dbt/parser/schema_renderer.py
--- a/core/dbt/parser/schema_renderer.py
+++ b/core/dbt/parser/schema_renderer.py
@@ -74,12 +74,13 @@
elif self._is_norender_key(keypath[0:]):
return False
elif self.key == "metrics":
- # This ensures all key paths that end in 'filter' for a metric are skipped
- if keypath[-1] == "filter":
+ # This ensures that metric filters are skipped
+ if keypath[-1] == "filter" or len(keypath) > 1 and keypath[-2] == "filter":
return False
elif self._is_norender_key(keypath[0:]):
return False
elif self.key == "saved_queries":
+ # This ensures that saved query filters are skipped
if keypath[0] == "query_params" and len(keypath) > 1 and keypath[1] == "where":
return False
elif self._is_norender_key(keypath[0:]):
| {"golden_diff": "diff --git a/core/dbt/parser/schema_renderer.py b/core/dbt/parser/schema_renderer.py\n--- a/core/dbt/parser/schema_renderer.py\n+++ b/core/dbt/parser/schema_renderer.py\n@@ -74,12 +74,13 @@\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"metrics\":\n- # This ensures all key paths that end in 'filter' for a metric are skipped\n- if keypath[-1] == \"filter\":\n+ # This ensures that metric filters are skipped\n+ if keypath[-1] == \"filter\" or len(keypath) > 1 and keypath[-2] == \"filter\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"saved_queries\":\n+ # This ensures that saved query filters are skipped\n if keypath[0] == \"query_params\" and len(keypath) > 1 and keypath[1] == \"where\":\n return False\n elif self._is_norender_key(keypath[0:]):\n", "issue": "[Bug] Using filter list with Jinja on metrics and metric input measures causes error when running `dbt parse`\n### Is this a new bug in dbt-core?\n\n- [X] I believe this is a new bug in dbt-core\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\n\n### Current Behavior\n\nWhen you define a metric or metric input measure with a filter list and include Jinja in the list, `dbt parse` triggers an error. Example YAML:\r\n```\r\nmetrics:\r\n - name: new_customer\r\n description: Unique count of new customers.\r\n label: New Customers\r\n type: simple\r\n type_params:\r\n measure: customers_with_orders\r\n filter:\r\n - \"{{ Dimension('customer__customer_type') }} = 'new'\"\r\n```\r\nRunning `dbt parse` in a repo with that metric definition triggers this error:\r\n```\r\nCompilation Error\r\n Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined\r\n```\r\nMeanwhile, if you apply the same filter as a string (example below), no error will be triggered.\r\n```\r\nmetrics:\r\n - name: new_customer\r\n description: Unique count of new customers.\r\n label: New Customers\r\n type: simple\r\n type_params:\r\n measure: customers_with_orders\r\n filter: \"{{ Dimension('customer__customer_type') }} = 'new'\"\r\n```\n\n### Expected Behavior\n\nApplying the filter in a list should have the same impact as applying it as a string. Currently this limitation will prevent users from using multiple filters on metrics and metric input measures.\n\n### Steps To Reproduce\n\n1. Add a list-type filter on a metric or a metric input measure in a dbt project\r\n2. Run `dbt parse` in the repo\n\n### Relevant log output\n\n```shell\nOutput when running dbt parse:\r\n\r\n21:05:56 Running with dbt=1.7.3\r\n21:05:56 Registered adapter: duckdb=1.7.0\r\n21:05:56 Encountered an error:\r\nCompilation Error\r\n Could not render {{ Dimension('customer__customer_type') }} = 'new': 'Dimension' is undefined\n```\n\n\n### Environment\n\n```markdown\n- OS: Mac OSX 13.4.1\r\n- Python: 3.9.16\r\n- dbt: 1.7.3\n```\n\n\n### Which database adapter are you using with dbt?\n\nother (mention it in \"Additional Context\")\n\n### Additional Context\n\nDuckDB\n", "before_files": [{"content": "from typing import Dict, Any\n\nfrom dbt.config.renderer import BaseRenderer, Keypath\n\n\n# This class renders dictionaries derived from \"schema\" yaml files.\n# It calls Jinja on strings (in deep_map_render), except for certain\n# keys which are skipped because they need to be rendered later\n# (tests and description). Test configs are rendered in the\n# generic test builder code, but skips the keyword args. 
The test\n# keyword args are rendered to capture refs in render_test_update.\n# Keyword args are finally rendered at compilation time.\n# Descriptions are not rendered until 'process_docs'.\nclass SchemaYamlRenderer(BaseRenderer):\n def __init__(self, context: Dict[str, Any], key: str) -> None:\n super().__init__(context)\n self.key = key\n\n @property\n def name(self):\n return \"Rendering yaml\"\n\n def _is_norender_key(self, keypath: Keypath) -> bool:\n \"\"\"\n models:\n - name: blah\n description: blah\n data_tests: ...\n columns:\n - name:\n description: blah\n data_tests: ...\n\n Return True if it's tests, data_tests or description - those aren't rendered now\n because they're rendered later in parse_generic_tests or process_docs.\n \"tests\" and \"data_tests\" are both currently supported but \"tests\" has been deprecated\n \"\"\"\n # top level descriptions and data_tests\n if len(keypath) >= 1 and keypath[0] in (\"tests\", \"data_tests\", \"description\"):\n return True\n\n # columns descriptions and data_tests\n if len(keypath) == 2 and keypath[1] in (\"tests\", \"data_tests\", \"description\"):\n return True\n\n # versions\n if len(keypath) == 5 and keypath[4] == \"description\":\n return True\n\n if (\n len(keypath) >= 3\n and keypath[0] in (\"columns\", \"dimensions\", \"measures\", \"entities\")\n and keypath[2] in (\"tests\", \"data_tests\", \"description\")\n ):\n return True\n\n return False\n\n # don't render descriptions or test keyword arguments\n def should_render_keypath(self, keypath: Keypath) -> bool:\n if len(keypath) < 1:\n return True\n\n if self.key == \"sources\":\n if keypath[0] == \"description\":\n return False\n if keypath[0] == \"tables\":\n if self._is_norender_key(keypath[2:]):\n return False\n elif self.key == \"macros\":\n if keypath[0] == \"arguments\":\n if self._is_norender_key(keypath[1:]):\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"metrics\":\n # This ensures all key paths that end in 'filter' for a metric are skipped\n if keypath[-1] == \"filter\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"saved_queries\":\n if keypath[0] == \"query_params\" and len(keypath) > 1 and keypath[1] == \"where\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n else: # models, seeds, snapshots, analyses\n if self._is_norender_key(keypath[0:]):\n return False\n return True\n", "path": "core/dbt/parser/schema_renderer.py"}], "after_files": [{"content": "from typing import Dict, Any\n\nfrom dbt.config.renderer import BaseRenderer, Keypath\n\n\n# This class renders dictionaries derived from \"schema\" yaml files.\n# It calls Jinja on strings (in deep_map_render), except for certain\n# keys which are skipped because they need to be rendered later\n# (tests and description). Test configs are rendered in the\n# generic test builder code, but skips the keyword args. 
The test\n# keyword args are rendered to capture refs in render_test_update.\n# Keyword args are finally rendered at compilation time.\n# Descriptions are not rendered until 'process_docs'.\nclass SchemaYamlRenderer(BaseRenderer):\n def __init__(self, context: Dict[str, Any], key: str) -> None:\n super().__init__(context)\n self.key = key\n\n @property\n def name(self):\n return \"Rendering yaml\"\n\n def _is_norender_key(self, keypath: Keypath) -> bool:\n \"\"\"\n models:\n - name: blah\n description: blah\n data_tests: ...\n columns:\n - name:\n description: blah\n data_tests: ...\n\n Return True if it's tests, data_tests or description - those aren't rendered now\n because they're rendered later in parse_generic_tests or process_docs.\n \"tests\" and \"data_tests\" are both currently supported but \"tests\" has been deprecated\n \"\"\"\n # top level descriptions and data_tests\n if len(keypath) >= 1 and keypath[0] in (\"tests\", \"data_tests\", \"description\"):\n return True\n\n # columns descriptions and data_tests\n if len(keypath) == 2 and keypath[1] in (\"tests\", \"data_tests\", \"description\"):\n return True\n\n # versions\n if len(keypath) == 5 and keypath[4] == \"description\":\n return True\n\n if (\n len(keypath) >= 3\n and keypath[0] in (\"columns\", \"dimensions\", \"measures\", \"entities\")\n and keypath[2] in (\"tests\", \"data_tests\", \"description\")\n ):\n return True\n\n return False\n\n # don't render descriptions or test keyword arguments\n def should_render_keypath(self, keypath: Keypath) -> bool:\n if len(keypath) < 1:\n return True\n\n if self.key == \"sources\":\n if keypath[0] == \"description\":\n return False\n if keypath[0] == \"tables\":\n if self._is_norender_key(keypath[2:]):\n return False\n elif self.key == \"macros\":\n if keypath[0] == \"arguments\":\n if self._is_norender_key(keypath[1:]):\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"metrics\":\n # This ensures that metric filters are skipped\n if keypath[-1] == \"filter\" or len(keypath) > 1 and keypath[-2] == \"filter\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n elif self.key == \"saved_queries\":\n # This ensures that saved query filters are skipped\n if keypath[0] == \"query_params\" and len(keypath) > 1 and keypath[1] == \"where\":\n return False\n elif self._is_norender_key(keypath[0:]):\n return False\n else: # models, seeds, snapshots, analyses\n if self._is_norender_key(keypath[0:]):\n return False\n return True\n", "path": "core/dbt/parser/schema_renderer.py"}]} | 1,775 | 247 |
gh_patches_debug_29976 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-1275 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add release meta issue template for 1.1.1
Signed-off-by: Peter Zhu <[email protected]>
### Description
Add release meta issue template for 1.1.1
### Issues Resolved
[List any issues this PR will resolve]
### Check List
- [x] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check [here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/ci_workflow/ci_input_manifest.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import logging
8
9 from ci_workflow.ci_check_lists import CiCheckLists
10 from ci_workflow.ci_manifest import CiManifest
11 from ci_workflow.ci_target import CiTarget
12 from manifests.input_manifest import InputManifest
13 from system.temporary_directory import TemporaryDirectory
14
15
16 class CiInputManifest(CiManifest):
17 def __init__(self, file, args):
18 super().__init__(InputManifest.from_file(file), args)
19
20 def __check__(self):
21
22 target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)
23
24 with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:
25 logging.info(f"Sanity-testing in {work_dir.name}")
26
27 logging.info(f"Sanity testing {self.manifest.build.name}")
28
29 for component in self.manifest.components.select(focus=self.args.component):
30 logging.info(f"Sanity testing {component.name}")
31
32 ci_check_list = CiCheckLists.from_component(component, target)
33 ci_check_list.checkout(work_dir.name)
34 ci_check_list.check()
35 logging.info("Done.")
36
```
Path: `src/ci_workflow/ci_target.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7
8 class CiTarget:
9 version: str
10 snapshot: bool
11
12 def __init__(self, version, snapshot=True):
13 self.version = version
14 self.snapshot = snapshot
15
16 @property
17 def opensearch_version(self):
18 return self.version + "-SNAPSHOT" if self.snapshot else self.version
19
20 @property
21 def component_version(self):
22 # BUG: the 4th digit is dictated by the component, it's not .0, this will break for 1.1.0.1
23 return self.version + ".0-SNAPSHOT" if self.snapshot else f"{self.version}.0"
24
```
Path: `src/ci_workflow/ci_check_manifest_component.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import logging
8
9 from build_workflow.build_args import BuildArgs
10 from ci_workflow.ci_check import CiCheckDist
11 from manifests.build_manifest import BuildManifest
12
13
14 class CiCheckManifestComponent(CiCheckDist):
15 class MissingComponentError(Exception):
16 def __init__(self, component, url):
17 super().__init__(f"Missing {component} in {url}.")
18
19 def check(self):
20 for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:
21 url = "/".join([self.component.dist, architecture, "manifest.yml"])
22 self.build_manifest = BuildManifest.from_url(url)
23 if self.component.name in self.build_manifest.components:
24 logging.info(f"Found {self.component.name} in {url}.")
25 else:
26 raise CiCheckManifestComponent.MissingComponentError(self.component.name, url)
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/ci_workflow/ci_check_manifest_component.py b/src/ci_workflow/ci_check_manifest_component.py
--- a/src/ci_workflow/ci_check_manifest_component.py
+++ b/src/ci_workflow/ci_check_manifest_component.py
@@ -18,7 +18,9 @@
def check(self):
for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:
- url = "/".join([self.component.dist, architecture, "manifest.yml"])
+ # Since we only have 'linux' builds now we hard code it to 'linux'
+ # Once we have all platform builds on S3 we can then add a second loop for 'BuildArgs.SUPPORTED_PLATFORMS'
+ url = "/".join([self.component.dist, "linux", architecture, "builds", self.target.name, "manifest.yml"])
self.build_manifest = BuildManifest.from_url(url)
if self.component.name in self.build_manifest.components:
logging.info(f"Found {self.component.name} in {url}.")
diff --git a/src/ci_workflow/ci_input_manifest.py b/src/ci_workflow/ci_input_manifest.py
--- a/src/ci_workflow/ci_input_manifest.py
+++ b/src/ci_workflow/ci_input_manifest.py
@@ -19,7 +19,7 @@
def __check__(self):
- target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)
+ target = CiTarget(version=self.manifest.build.version, name=self.manifest.build.filename, snapshot=self.args.snapshot)
with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:
logging.info(f"Sanity-testing in {work_dir.name}")
diff --git a/src/ci_workflow/ci_target.py b/src/ci_workflow/ci_target.py
--- a/src/ci_workflow/ci_target.py
+++ b/src/ci_workflow/ci_target.py
@@ -7,10 +7,12 @@
class CiTarget:
version: str
+ name: str
snapshot: bool
- def __init__(self, version, snapshot=True):
+ def __init__(self, version, name, snapshot=True):
self.version = version
+ self.name = name
self.snapshot = snapshot
@property
| {"golden_diff": "diff --git a/src/ci_workflow/ci_check_manifest_component.py b/src/ci_workflow/ci_check_manifest_component.py\n--- a/src/ci_workflow/ci_check_manifest_component.py\n+++ b/src/ci_workflow/ci_check_manifest_component.py\n@@ -18,7 +18,9 @@\n \n def check(self):\n for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:\n- url = \"/\".join([self.component.dist, architecture, \"manifest.yml\"])\n+ # Since we only have 'linux' builds now we hard code it to 'linux'\n+ # Once we have all platform builds on S3 we can then add a second loop for 'BuildArgs.SUPPORTED_PLATFORMS'\n+ url = \"/\".join([self.component.dist, \"linux\", architecture, \"builds\", self.target.name, \"manifest.yml\"])\n self.build_manifest = BuildManifest.from_url(url)\n if self.component.name in self.build_manifest.components:\n logging.info(f\"Found {self.component.name} in {url}.\")\ndiff --git a/src/ci_workflow/ci_input_manifest.py b/src/ci_workflow/ci_input_manifest.py\n--- a/src/ci_workflow/ci_input_manifest.py\n+++ b/src/ci_workflow/ci_input_manifest.py\n@@ -19,7 +19,7 @@\n \n def __check__(self):\n \n- target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)\n+ target = CiTarget(version=self.manifest.build.version, name=self.manifest.build.filename, snapshot=self.args.snapshot)\n \n with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:\n logging.info(f\"Sanity-testing in {work_dir.name}\")\ndiff --git a/src/ci_workflow/ci_target.py b/src/ci_workflow/ci_target.py\n--- a/src/ci_workflow/ci_target.py\n+++ b/src/ci_workflow/ci_target.py\n@@ -7,10 +7,12 @@\n \n class CiTarget:\n version: str\n+ name: str\n snapshot: bool\n \n- def __init__(self, version, snapshot=True):\n+ def __init__(self, version, name, snapshot=True):\n self.version = version\n+ self.name = name\n self.snapshot = snapshot\n \n @property\n", "issue": "Add release meta issue template for 1.1.1\nSigned-off-by: Peter Zhu <[email protected]>\r\n\r\n### Description\r\nAdd release meta issue template for 1.1.1\r\n \r\n### Issues Resolved\r\n[List any issues this PR will resolve]\r\n \r\n### Check List\r\n- [x] Commits are signed per the DCO using --signoff \r\n\r\nBy submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.\r\nFor more information on following Developer Certificate of Origin and signing off your commits, please check [here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\n\nfrom ci_workflow.ci_check_lists import CiCheckLists\nfrom ci_workflow.ci_manifest import CiManifest\nfrom ci_workflow.ci_target import CiTarget\nfrom manifests.input_manifest import InputManifest\nfrom system.temporary_directory import TemporaryDirectory\n\n\nclass CiInputManifest(CiManifest):\n def __init__(self, file, args):\n super().__init__(InputManifest.from_file(file), args)\n\n def __check__(self):\n\n target = CiTarget(version=self.manifest.build.version, snapshot=self.args.snapshot)\n\n with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:\n logging.info(f\"Sanity-testing in {work_dir.name}\")\n\n logging.info(f\"Sanity testing {self.manifest.build.name}\")\n\n for component in 
self.manifest.components.select(focus=self.args.component):\n logging.info(f\"Sanity testing {component.name}\")\n\n ci_check_list = CiCheckLists.from_component(component, target)\n ci_check_list.checkout(work_dir.name)\n ci_check_list.check()\n logging.info(\"Done.\")\n", "path": "src/ci_workflow/ci_input_manifest.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\n\nclass CiTarget:\n version: str\n snapshot: bool\n\n def __init__(self, version, snapshot=True):\n self.version = version\n self.snapshot = snapshot\n\n @property\n def opensearch_version(self):\n return self.version + \"-SNAPSHOT\" if self.snapshot else self.version\n\n @property\n def component_version(self):\n # BUG: the 4th digit is dictated by the component, it's not .0, this will break for 1.1.0.1\n return self.version + \".0-SNAPSHOT\" if self.snapshot else f\"{self.version}.0\"\n", "path": "src/ci_workflow/ci_target.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\n\nfrom build_workflow.build_args import BuildArgs\nfrom ci_workflow.ci_check import CiCheckDist\nfrom manifests.build_manifest import BuildManifest\n\n\nclass CiCheckManifestComponent(CiCheckDist):\n class MissingComponentError(Exception):\n def __init__(self, component, url):\n super().__init__(f\"Missing {component} in {url}.\")\n\n def check(self):\n for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:\n url = \"/\".join([self.component.dist, architecture, \"manifest.yml\"])\n self.build_manifest = BuildManifest.from_url(url)\n if self.component.name in self.build_manifest.components:\n logging.info(f\"Found {self.component.name} in {url}.\")\n else:\n raise CiCheckManifestComponent.MissingComponentError(self.component.name, url)\n", "path": "src/ci_workflow/ci_check_manifest_component.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\n\nfrom ci_workflow.ci_check_lists import CiCheckLists\nfrom ci_workflow.ci_manifest import CiManifest\nfrom ci_workflow.ci_target import CiTarget\nfrom manifests.input_manifest import InputManifest\nfrom system.temporary_directory import TemporaryDirectory\n\n\nclass CiInputManifest(CiManifest):\n def __init__(self, file, args):\n super().__init__(InputManifest.from_file(file), args)\n\n def __check__(self):\n\n target = CiTarget(version=self.manifest.build.version, name=self.manifest.build.filename, snapshot=self.args.snapshot)\n\n with TemporaryDirectory(keep=self.args.keep, chdir=True) as work_dir:\n logging.info(f\"Sanity-testing in {work_dir.name}\")\n\n logging.info(f\"Sanity testing {self.manifest.build.name}\")\n\n for component in self.manifest.components.select(focus=self.args.component):\n logging.info(f\"Sanity testing {component.name}\")\n\n ci_check_list = CiCheckLists.from_component(component, target)\n ci_check_list.checkout(work_dir.name)\n ci_check_list.check()\n logging.info(\"Done.\")\n", "path": "src/ci_workflow/ci_input_manifest.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file 
be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\n\nclass CiTarget:\n version: str\n name: str\n snapshot: bool\n\n def __init__(self, version, name, snapshot=True):\n self.version = version\n self.name = name\n self.snapshot = snapshot\n\n @property\n def opensearch_version(self):\n return self.version + \"-SNAPSHOT\" if self.snapshot else self.version\n\n @property\n def component_version(self):\n # BUG: the 4th digit is dictated by the component, it's not .0, this will break for 1.1.0.1\n return self.version + \".0-SNAPSHOT\" if self.snapshot else f\"{self.version}.0\"\n", "path": "src/ci_workflow/ci_target.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\n\nfrom build_workflow.build_args import BuildArgs\nfrom ci_workflow.ci_check import CiCheckDist\nfrom manifests.build_manifest import BuildManifest\n\n\nclass CiCheckManifestComponent(CiCheckDist):\n class MissingComponentError(Exception):\n def __init__(self, component, url):\n super().__init__(f\"Missing {component} in {url}.\")\n\n def check(self):\n for architecture in BuildArgs.SUPPORTED_ARCHITECTURES:\n # Since we only have 'linux' builds now we hard code it to 'linux'\n # Once we have all platform builds on S3 we can then add a second loop for 'BuildArgs.SUPPORTED_PLATFORMS'\n url = \"/\".join([self.component.dist, \"linux\", architecture, \"builds\", self.target.name, \"manifest.yml\"])\n self.build_manifest = BuildManifest.from_url(url)\n if self.component.name in self.build_manifest.components:\n logging.info(f\"Found {self.component.name} in {url}.\")\n else:\n raise CiCheckManifestComponent.MissingComponentError(self.component.name, url)\n", "path": "src/ci_workflow/ci_check_manifest_component.py"}]} | 1,267 | 489 |
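The golden diff in the entry above threads the build manifest's filename through `CiTarget` as `name` so that the distribution-manifest URL can include it. A minimal sketch of the resulting URL layout, using placeholder values for the distribution root, architecture, and target name (none of these literals come from the record above):

```python
# Sketch of the URL layout produced by the patched CiCheckManifestComponent.check():
#   <dist root>/linux/<architecture>/builds/<target name>/manifest.yml
dist_root = "https://example.org/dist/1.1.0"  # placeholder
architecture = "x64"                          # placeholder
target_name = "opensearch"                    # placeholder for the build manifest filename

url = "/".join([dist_root, "linux", architecture, "builds", target_name, "manifest.yml"])
print(url)  # https://example.org/dist/1.1.0/linux/x64/builds/opensearch/manifest.yml
```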
gh_patches_debug_40657 | rasdani/github-patches | git_diff | keras-team__keras-core-439 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Torch GPU test failures
Install PyTorch via -
```
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
```
We still have to install TF (for tf.data) and JAX (for import - though this could be avoided) in the environment via -
```
pip install tensorflow_cpu
pip install jax[cpu]
```
Test via -
```
KERAS_BACKEND=torch pytest keras_core --ignore keras_core/applications
```
Result summary:
```
FAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d0 - AssertionError:
FAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d1 - AssertionError:
FAILED keras_core/layers/convolutional/conv_transpose_test.py::ConvTransposeCorrectnessTest::test_conv3d_transpose2 - AssertionError:
FAILED keras_core/layers/preprocessing/random_brightness_test.py::RandomBrightnessTest::test_tf_data_compatibility - TypeError: in user code:
FAILED keras_core/layers/preprocessing/random_contrast_test.py::RandomContrastTest::test_tf_data_compatibility - TypeError: in user code:
```
ConvCorrectnessTest, ConvTransposeCorrectnessTest - Failed due to mismatch in test array values.
Preprocessing RandomBrightnessTest, RandomContrastTest failed on `numpy()` being called on a GPU tensor. It needs to be `cpu().numpy()` for torch.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_core/layers/preprocessing/tf_data_layer.py`
Content:
```
1 from tensorflow import nest
2
3 from keras_core import backend
4 from keras_core.layers.layer import Layer
5 from keras_core.random.seed_generator import SeedGenerator
6 from keras_core.utils import backend_utils
7 from keras_core.utils import tracking
8
9
10 class TFDataLayer(Layer):
11 """Layer that can safely used in a tf.data pipeline.
12
13 The `call()` method must solely rely on `self.backend` ops.
14
15 Only supports a single input tensor argument.
16 """
17
18 def __init__(self, **kwargs):
19 super().__init__(**kwargs)
20 self.backend = backend_utils.DynamicBackend()
21 self._allow_non_tensor_positional_args = True
22
23 def __call__(self, inputs, **kwargs):
24 if backend_utils.in_tf_graph() and not isinstance(
25 inputs, backend.KerasTensor
26 ):
27 # We're in a TF graph, e.g. a tf.data pipeline.
28 self.backend.set_backend("tensorflow")
29 inputs = nest.map_structure(
30 lambda x: self.backend.convert_to_tensor(
31 x, dtype=self.compute_dtype
32 ),
33 inputs,
34 )
35 switch_convert_input_args = False
36 if self._convert_input_args:
37 self._convert_input_args = False
38 switch_convert_input_args = True
39 try:
40 outputs = super().__call__(inputs, **kwargs)
41 finally:
42 self.backend.reset()
43 if switch_convert_input_args:
44 self._convert_input_args = True
45 return outputs
46 return super().__call__(inputs, **kwargs)
47
48 @tracking.no_automatic_dependency_tracking
49 def _get_seed_generator(self, backend=None):
50 if backend is None or backend == self.backend._backend:
51 return self.generator
52 if not hasattr(self, "_backend_generators"):
53 self._backend_generators = {}
54 if backend in self._backend_generators:
55 return self._backend_generators[backend]
56 seed_generator = SeedGenerator(self.seed, backend=self.backend)
57 self._backend_generators[backend] = seed_generator
58 return seed_generator
59
```
Path: `keras_core/random/seed_generator.py`
Content:
```
1 import random as python_random
2
3 import numpy as np
4
5 from keras_core.api_export import keras_core_export
6
7
8 @keras_core_export("keras_core.random.SeedGenerator")
9 class SeedGenerator:
10 """Generates variable seeds upon each call to a RNG-using function.
11
12 In Keras, all RNG-using methods (such as `keras_core.random.normal()`)
13 are stateless, meaning that if you pass an integer seed to them
14 (such as `seed=42`), they will return the same values at each call.
15 In order to get different values at each call, you must use a
16 `SeedGenerator` instead as the seed argument. The `SeedGenerator`
17 instead is stateful.
18
19 Example:
20
21 ```python
22 seed_gen = keras_core.random.SeedGenerator(seed=42)
23 values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)
24 new_values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)
25 ```
26 """
27
28 def __init__(self, seed, **kwargs):
29 custom_backend = kwargs.pop("backend", None)
30 if kwargs:
31 raise ValueError(f"Unrecognized keyword arguments: {kwargs}")
32 if custom_backend is not None:
33 backend = custom_backend
34 else:
35 from keras_core import backend
36
37 if seed is None:
38 seed = make_default_seed()
39 if not isinstance(seed, int):
40 raise ValueError(
41 "Argument `seed` must be an integer. " f"Received: seed={seed}"
42 )
43
44 def seed_initializer(*args, **kwargs):
45 dtype = kwargs.get("dtype", None)
46 return backend.convert_to_tensor([seed, 0], dtype=dtype)
47
48 self.state = backend.Variable(
49 seed_initializer,
50 shape=(2,),
51 dtype="uint32",
52 trainable=False,
53 name="seed_generator_state",
54 )
55
56
57 def make_default_seed():
58 return python_random.randint(1, int(1e9))
59
60
61 def draw_seed(seed):
62 from keras_core.backend import convert_to_tensor
63
64 if isinstance(seed, SeedGenerator):
65 seed_state = seed.state
66 # Use * 1 to create a copy
67 new_seed_value = seed_state.value * 1
68 increment = convert_to_tensor(np.array([0, 1]), dtype="uint32")
69 seed.state.assign(seed_state + increment)
70 return new_seed_value
71 elif isinstance(seed, int):
72 return convert_to_tensor([seed, 0], dtype="uint32")
73 elif seed is None:
74 return convert_to_tensor([make_default_seed(), 0], dtype="uint32")
75 raise ValueError(
76 "Argument `seed` must be either an integer "
77 "or an instance of `SeedGenerator`. "
78 f"Received: seed={seed} (of type {type(seed)})"
79 )
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras_core/layers/preprocessing/tf_data_layer.py b/keras_core/layers/preprocessing/tf_data_layer.py
--- a/keras_core/layers/preprocessing/tf_data_layer.py
+++ b/keras_core/layers/preprocessing/tf_data_layer.py
@@ -1,6 +1,6 @@
from tensorflow import nest
-from keras_core import backend
+import keras_core.backend
from keras_core.layers.layer import Layer
from keras_core.random.seed_generator import SeedGenerator
from keras_core.utils import backend_utils
@@ -22,7 +22,7 @@
def __call__(self, inputs, **kwargs):
if backend_utils.in_tf_graph() and not isinstance(
- inputs, backend.KerasTensor
+ inputs, keras_core.backend.KerasTensor
):
# We're in a TF graph, e.g. a tf.data pipeline.
self.backend.set_backend("tensorflow")
@@ -47,7 +47,7 @@
@tracking.no_automatic_dependency_tracking
def _get_seed_generator(self, backend=None):
- if backend is None or backend == self.backend._backend:
+ if backend is None or backend == keras_core.backend.backend():
return self.generator
if not hasattr(self, "_backend_generators"):
self._backend_generators = {}
diff --git a/keras_core/random/seed_generator.py b/keras_core/random/seed_generator.py
--- a/keras_core/random/seed_generator.py
+++ b/keras_core/random/seed_generator.py
@@ -2,6 +2,7 @@
import numpy as np
+import keras_core.backend
from keras_core.api_export import keras_core_export
@@ -30,9 +31,9 @@
if kwargs:
raise ValueError(f"Unrecognized keyword arguments: {kwargs}")
if custom_backend is not None:
- backend = custom_backend
+ self.backend = custom_backend
else:
- from keras_core import backend
+ self.backend = keras_core.backend
if seed is None:
seed = make_default_seed()
@@ -43,9 +44,9 @@
def seed_initializer(*args, **kwargs):
dtype = kwargs.get("dtype", None)
- return backend.convert_to_tensor([seed, 0], dtype=dtype)
+ return self.backend.convert_to_tensor([seed, 0], dtype=dtype)
- self.state = backend.Variable(
+ self.state = self.backend.Variable(
seed_initializer,
shape=(2,),
dtype="uint32",
@@ -65,7 +66,9 @@
seed_state = seed.state
# Use * 1 to create a copy
new_seed_value = seed_state.value * 1
- increment = convert_to_tensor(np.array([0, 1]), dtype="uint32")
+ increment = seed.backend.convert_to_tensor(
+ np.array([0, 1]), dtype="uint32"
+ )
seed.state.assign(seed_state + increment)
return new_seed_value
elif isinstance(seed, int):
| {"golden_diff": "diff --git a/keras_core/layers/preprocessing/tf_data_layer.py b/keras_core/layers/preprocessing/tf_data_layer.py\n--- a/keras_core/layers/preprocessing/tf_data_layer.py\n+++ b/keras_core/layers/preprocessing/tf_data_layer.py\n@@ -1,6 +1,6 @@\n from tensorflow import nest\n \n-from keras_core import backend\n+import keras_core.backend\n from keras_core.layers.layer import Layer\n from keras_core.random.seed_generator import SeedGenerator\n from keras_core.utils import backend_utils\n@@ -22,7 +22,7 @@\n \n def __call__(self, inputs, **kwargs):\n if backend_utils.in_tf_graph() and not isinstance(\n- inputs, backend.KerasTensor\n+ inputs, keras_core.backend.KerasTensor\n ):\n # We're in a TF graph, e.g. a tf.data pipeline.\n self.backend.set_backend(\"tensorflow\")\n@@ -47,7 +47,7 @@\n \n @tracking.no_automatic_dependency_tracking\n def _get_seed_generator(self, backend=None):\n- if backend is None or backend == self.backend._backend:\n+ if backend is None or backend == keras_core.backend.backend():\n return self.generator\n if not hasattr(self, \"_backend_generators\"):\n self._backend_generators = {}\ndiff --git a/keras_core/random/seed_generator.py b/keras_core/random/seed_generator.py\n--- a/keras_core/random/seed_generator.py\n+++ b/keras_core/random/seed_generator.py\n@@ -2,6 +2,7 @@\n \n import numpy as np\n \n+import keras_core.backend\n from keras_core.api_export import keras_core_export\n \n \n@@ -30,9 +31,9 @@\n if kwargs:\n raise ValueError(f\"Unrecognized keyword arguments: {kwargs}\")\n if custom_backend is not None:\n- backend = custom_backend\n+ self.backend = custom_backend\n else:\n- from keras_core import backend\n+ self.backend = keras_core.backend\n \n if seed is None:\n seed = make_default_seed()\n@@ -43,9 +44,9 @@\n \n def seed_initializer(*args, **kwargs):\n dtype = kwargs.get(\"dtype\", None)\n- return backend.convert_to_tensor([seed, 0], dtype=dtype)\n+ return self.backend.convert_to_tensor([seed, 0], dtype=dtype)\n \n- self.state = backend.Variable(\n+ self.state = self.backend.Variable(\n seed_initializer,\n shape=(2,),\n dtype=\"uint32\",\n@@ -65,7 +66,9 @@\n seed_state = seed.state\n # Use * 1 to create a copy\n new_seed_value = seed_state.value * 1\n- increment = convert_to_tensor(np.array([0, 1]), dtype=\"uint32\")\n+ increment = seed.backend.convert_to_tensor(\n+ np.array([0, 1]), dtype=\"uint32\"\n+ )\n seed.state.assign(seed_state + increment)\n return new_seed_value\n elif isinstance(seed, int):\n", "issue": "Torch GPU test failures\nInstall PyTorch via - \r\n```\r\nconda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia\r\n```\r\n\r\nWe still have to install TF (for tf.data) and JAX (for import - though this could be avoided) in the environment via - \r\n```\r\npip install tensorflow_cpu\r\npip install jax[cpu]\r\n```\r\n\r\nTest via -\r\n```\r\n KERAS_BACKEND=torch pytest keras_core --ignore keras_core/applications\r\n```\r\nResult summary:\r\n```\r\nFAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d0 - AssertionError:\r\nFAILED keras_core/layers/convolutional/conv_test.py::ConvCorrectnessTest::test_conv3d1 - AssertionError:\r\nFAILED keras_core/layers/convolutional/conv_transpose_test.py::ConvTransposeCorrectnessTest::test_conv3d_transpose2 - AssertionError:\r\nFAILED keras_core/layers/preprocessing/random_brightness_test.py::RandomBrightnessTest::test_tf_data_compatibility - TypeError: in user code:\r\nFAILED 
keras_core/layers/preprocessing/random_contrast_test.py::RandomContrastTest::test_tf_data_compatibility - TypeError: in user code:\r\n```\r\n\r\nConvCorrectnessTest, ConvTransposeCorrectnessTest - Failed due to mismatch in test array values.\r\nPreprpcoessing RandomBrightnessTest, RandomContrastTest failed on `numpy()` being calld on GPU tensor. It needs to be `cpu().numpy()` for torch.\n", "before_files": [{"content": "from tensorflow import nest\n\nfrom keras_core import backend\nfrom keras_core.layers.layer import Layer\nfrom keras_core.random.seed_generator import SeedGenerator\nfrom keras_core.utils import backend_utils\nfrom keras_core.utils import tracking\n\n\nclass TFDataLayer(Layer):\n \"\"\"Layer that can safely used in a tf.data pipeline.\n\n The `call()` method must solely rely on `self.backend` ops.\n\n Only supports a single input tensor argument.\n \"\"\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.backend = backend_utils.DynamicBackend()\n self._allow_non_tensor_positional_args = True\n\n def __call__(self, inputs, **kwargs):\n if backend_utils.in_tf_graph() and not isinstance(\n inputs, backend.KerasTensor\n ):\n # We're in a TF graph, e.g. a tf.data pipeline.\n self.backend.set_backend(\"tensorflow\")\n inputs = nest.map_structure(\n lambda x: self.backend.convert_to_tensor(\n x, dtype=self.compute_dtype\n ),\n inputs,\n )\n switch_convert_input_args = False\n if self._convert_input_args:\n self._convert_input_args = False\n switch_convert_input_args = True\n try:\n outputs = super().__call__(inputs, **kwargs)\n finally:\n self.backend.reset()\n if switch_convert_input_args:\n self._convert_input_args = True\n return outputs\n return super().__call__(inputs, **kwargs)\n\n @tracking.no_automatic_dependency_tracking\n def _get_seed_generator(self, backend=None):\n if backend is None or backend == self.backend._backend:\n return self.generator\n if not hasattr(self, \"_backend_generators\"):\n self._backend_generators = {}\n if backend in self._backend_generators:\n return self._backend_generators[backend]\n seed_generator = SeedGenerator(self.seed, backend=self.backend)\n self._backend_generators[backend] = seed_generator\n return seed_generator\n", "path": "keras_core/layers/preprocessing/tf_data_layer.py"}, {"content": "import random as python_random\n\nimport numpy as np\n\nfrom keras_core.api_export import keras_core_export\n\n\n@keras_core_export(\"keras_core.random.SeedGenerator\")\nclass SeedGenerator:\n \"\"\"Generates variable seeds upon each call to a RNG-using function.\n\n In Keras, all RNG-using methods (such as `keras_core.random.normal()`)\n are stateless, meaning that if you pass an integer seed to them\n (such as `seed=42`), they will return the same values at each call.\n In order to get different values at each call, you must use a\n `SeedGenerator` instead as the seed argument. 
The `SeedGenerator`\n instead is stateful.\n\n Example:\n\n ```python\n seed_gen = keras_core.random.SeedGenerator(seed=42)\n values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)\n new_values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)\n ```\n \"\"\"\n\n def __init__(self, seed, **kwargs):\n custom_backend = kwargs.pop(\"backend\", None)\n if kwargs:\n raise ValueError(f\"Unrecognized keyword arguments: {kwargs}\")\n if custom_backend is not None:\n backend = custom_backend\n else:\n from keras_core import backend\n\n if seed is None:\n seed = make_default_seed()\n if not isinstance(seed, int):\n raise ValueError(\n \"Argument `seed` must be an integer. \" f\"Received: seed={seed}\"\n )\n\n def seed_initializer(*args, **kwargs):\n dtype = kwargs.get(\"dtype\", None)\n return backend.convert_to_tensor([seed, 0], dtype=dtype)\n\n self.state = backend.Variable(\n seed_initializer,\n shape=(2,),\n dtype=\"uint32\",\n trainable=False,\n name=\"seed_generator_state\",\n )\n\n\ndef make_default_seed():\n return python_random.randint(1, int(1e9))\n\n\ndef draw_seed(seed):\n from keras_core.backend import convert_to_tensor\n\n if isinstance(seed, SeedGenerator):\n seed_state = seed.state\n # Use * 1 to create a copy\n new_seed_value = seed_state.value * 1\n increment = convert_to_tensor(np.array([0, 1]), dtype=\"uint32\")\n seed.state.assign(seed_state + increment)\n return new_seed_value\n elif isinstance(seed, int):\n return convert_to_tensor([seed, 0], dtype=\"uint32\")\n elif seed is None:\n return convert_to_tensor([make_default_seed(), 0], dtype=\"uint32\")\n raise ValueError(\n \"Argument `seed` must be either an integer \"\n \"or an instance of `SeedGenerator`. \"\n f\"Received: seed={seed} (of type {type(seed)})\"\n )\n", "path": "keras_core/random/seed_generator.py"}], "after_files": [{"content": "from tensorflow import nest\n\nimport keras_core.backend\nfrom keras_core.layers.layer import Layer\nfrom keras_core.random.seed_generator import SeedGenerator\nfrom keras_core.utils import backend_utils\nfrom keras_core.utils import tracking\n\n\nclass TFDataLayer(Layer):\n \"\"\"Layer that can safely used in a tf.data pipeline.\n\n The `call()` method must solely rely on `self.backend` ops.\n\n Only supports a single input tensor argument.\n \"\"\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.backend = backend_utils.DynamicBackend()\n self._allow_non_tensor_positional_args = True\n\n def __call__(self, inputs, **kwargs):\n if backend_utils.in_tf_graph() and not isinstance(\n inputs, keras_core.backend.KerasTensor\n ):\n # We're in a TF graph, e.g. 
a tf.data pipeline.\n self.backend.set_backend(\"tensorflow\")\n inputs = nest.map_structure(\n lambda x: self.backend.convert_to_tensor(\n x, dtype=self.compute_dtype\n ),\n inputs,\n )\n switch_convert_input_args = False\n if self._convert_input_args:\n self._convert_input_args = False\n switch_convert_input_args = True\n try:\n outputs = super().__call__(inputs, **kwargs)\n finally:\n self.backend.reset()\n if switch_convert_input_args:\n self._convert_input_args = True\n return outputs\n return super().__call__(inputs, **kwargs)\n\n @tracking.no_automatic_dependency_tracking\n def _get_seed_generator(self, backend=None):\n if backend is None or backend == keras_core.backend.backend():\n return self.generator\n if not hasattr(self, \"_backend_generators\"):\n self._backend_generators = {}\n if backend in self._backend_generators:\n return self._backend_generators[backend]\n seed_generator = SeedGenerator(self.seed, backend=self.backend)\n self._backend_generators[backend] = seed_generator\n return seed_generator\n", "path": "keras_core/layers/preprocessing/tf_data_layer.py"}, {"content": "import random as python_random\n\nimport numpy as np\n\nimport keras_core.backend\nfrom keras_core.api_export import keras_core_export\n\n\n@keras_core_export(\"keras_core.random.SeedGenerator\")\nclass SeedGenerator:\n \"\"\"Generates variable seeds upon each call to a RNG-using function.\n\n In Keras, all RNG-using methods (such as `keras_core.random.normal()`)\n are stateless, meaning that if you pass an integer seed to them\n (such as `seed=42`), they will return the same values at each call.\n In order to get different values at each call, you must use a\n `SeedGenerator` instead as the seed argument. The `SeedGenerator`\n instead is stateful.\n\n Example:\n\n ```python\n seed_gen = keras_core.random.SeedGenerator(seed=42)\n values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)\n new_values = keras_core.random.normal(shape=(2, 3), seed=seed_gen)\n ```\n \"\"\"\n\n def __init__(self, seed, **kwargs):\n custom_backend = kwargs.pop(\"backend\", None)\n if kwargs:\n raise ValueError(f\"Unrecognized keyword arguments: {kwargs}\")\n if custom_backend is not None:\n self.backend = custom_backend\n else:\n self.backend = keras_core.backend\n\n if seed is None:\n seed = make_default_seed()\n if not isinstance(seed, int):\n raise ValueError(\n \"Argument `seed` must be an integer. \" f\"Received: seed={seed}\"\n )\n\n def seed_initializer(*args, **kwargs):\n dtype = kwargs.get(\"dtype\", None)\n return self.backend.convert_to_tensor([seed, 0], dtype=dtype)\n\n self.state = self.backend.Variable(\n seed_initializer,\n shape=(2,),\n dtype=\"uint32\",\n trainable=False,\n name=\"seed_generator_state\",\n )\n\n\ndef make_default_seed():\n return python_random.randint(1, int(1e9))\n\n\ndef draw_seed(seed):\n from keras_core.backend import convert_to_tensor\n\n if isinstance(seed, SeedGenerator):\n seed_state = seed.state\n # Use * 1 to create a copy\n new_seed_value = seed_state.value * 1\n increment = seed.backend.convert_to_tensor(\n np.array([0, 1]), dtype=\"uint32\"\n )\n seed.state.assign(seed_state + increment)\n return new_seed_value\n elif isinstance(seed, int):\n return convert_to_tensor([seed, 0], dtype=\"uint32\")\n elif seed is None:\n return convert_to_tensor([make_default_seed(), 0], dtype=\"uint32\")\n raise ValueError(\n \"Argument `seed` must be either an integer \"\n \"or an instance of `SeedGenerator`. 
\"\n f\"Received: seed={seed} (of type {type(seed)})\"\n )\n", "path": "keras_core/random/seed_generator.py"}]} | 1,903 | 668 |
gh_patches_debug_27608 | rasdani/github-patches | git_diff | pypa__virtualenv-2324 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Virtualenv 20.14.0 bundles broken setuptools
**Issue**
Virtualenv 20.14.0 has a broken version of setuptools (61.0.0) bundled in. This causes broken installations of packages, especially missing package data files.
**Environment**
Various
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/virtualenv/seed/wheels/embed/__init__.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 from virtualenv.seed.wheels.util import Wheel
4 from virtualenv.util.path import Path
5
6 BUNDLE_FOLDER = Path(__file__).absolute().parent
7 BUNDLE_SUPPORT = {
8 "3.11": {
9 "pip": "pip-22.0.4-py3-none-any.whl",
10 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
11 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
12 },
13 "3.10": {
14 "pip": "pip-22.0.4-py3-none-any.whl",
15 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
16 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
17 },
18 "3.9": {
19 "pip": "pip-22.0.4-py3-none-any.whl",
20 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
21 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
22 },
23 "3.8": {
24 "pip": "pip-22.0.4-py3-none-any.whl",
25 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
26 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
27 },
28 "3.7": {
29 "pip": "pip-22.0.4-py3-none-any.whl",
30 "setuptools": "setuptools-61.0.0-py3-none-any.whl",
31 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
32 },
33 "3.6": {
34 "pip": "pip-21.3.1-py3-none-any.whl",
35 "setuptools": "setuptools-59.6.0-py3-none-any.whl",
36 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
37 },
38 "3.5": {
39 "pip": "pip-20.3.4-py2.py3-none-any.whl",
40 "setuptools": "setuptools-50.3.2-py3-none-any.whl",
41 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
42 },
43 "2.7": {
44 "pip": "pip-20.3.4-py2.py3-none-any.whl",
45 "setuptools": "setuptools-44.1.1-py2.py3-none-any.whl",
46 "wheel": "wheel-0.37.1-py2.py3-none-any.whl",
47 },
48 }
49 MAX = "3.11"
50
51
52 def get_embed_wheel(distribution, for_py_version):
53 path = BUNDLE_FOLDER / (BUNDLE_SUPPORT.get(for_py_version, {}) or BUNDLE_SUPPORT[MAX]).get(distribution)
54 return Wheel.from_path(path)
55
56
57 __all__ = (
58 "get_embed_wheel",
59 "BUNDLE_SUPPORT",
60 "MAX",
61 "BUNDLE_FOLDER",
62 )
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/virtualenv/seed/wheels/embed/__init__.py b/src/virtualenv/seed/wheels/embed/__init__.py
--- a/src/virtualenv/seed/wheels/embed/__init__.py
+++ b/src/virtualenv/seed/wheels/embed/__init__.py
@@ -7,27 +7,27 @@
BUNDLE_SUPPORT = {
"3.11": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.10": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.9": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.8": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.7": {
"pip": "pip-22.0.4-py3-none-any.whl",
- "setuptools": "setuptools-61.0.0-py3-none-any.whl",
+ "setuptools": "setuptools-62.1.0-py3-none-any.whl",
"wheel": "wheel-0.37.1-py2.py3-none-any.whl",
},
"3.6": {
| {"golden_diff": "diff --git a/src/virtualenv/seed/wheels/embed/__init__.py b/src/virtualenv/seed/wheels/embed/__init__.py\n--- a/src/virtualenv/seed/wheels/embed/__init__.py\n+++ b/src/virtualenv/seed/wheels/embed/__init__.py\n@@ -7,27 +7,27 @@\n BUNDLE_SUPPORT = {\n \"3.11\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.10\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.9\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.8\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.7\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n- \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n+ \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.6\": {\n", "issue": "Virtualenv 20.14.0 bundles broken setuptools\n**Issue**\r\n\r\nVirtualenv 20.14.0 has broken version of setuptools 61.0.0 bundled in. This causes broken installations of packages, especially missing package data files.\r\n\r\n**Environment**\r\n\r\nVarious\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom virtualenv.seed.wheels.util import Wheel\nfrom virtualenv.util.path import Path\n\nBUNDLE_FOLDER = Path(__file__).absolute().parent\nBUNDLE_SUPPORT = {\n \"3.11\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.10\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.9\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.8\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.7\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-61.0.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.6\": {\n \"pip\": \"pip-21.3.1-py3-none-any.whl\",\n \"setuptools\": \"setuptools-59.6.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.5\": {\n \"pip\": \"pip-20.3.4-py2.py3-none-any.whl\",\n \"setuptools\": \"setuptools-50.3.2-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"2.7\": {\n \"pip\": \"pip-20.3.4-py2.py3-none-any.whl\",\n \"setuptools\": \"setuptools-44.1.1-py2.py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n}\nMAX = \"3.11\"\n\n\ndef get_embed_wheel(distribution, for_py_version):\n path = BUNDLE_FOLDER / (BUNDLE_SUPPORT.get(for_py_version, {}) or BUNDLE_SUPPORT[MAX]).get(distribution)\n return 
Wheel.from_path(path)\n\n\n__all__ = (\n \"get_embed_wheel\",\n \"BUNDLE_SUPPORT\",\n \"MAX\",\n \"BUNDLE_FOLDER\",\n)\n", "path": "src/virtualenv/seed/wheels/embed/__init__.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom virtualenv.seed.wheels.util import Wheel\nfrom virtualenv.util.path import Path\n\nBUNDLE_FOLDER = Path(__file__).absolute().parent\nBUNDLE_SUPPORT = {\n \"3.11\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.10\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.9\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.8\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.7\": {\n \"pip\": \"pip-22.0.4-py3-none-any.whl\",\n \"setuptools\": \"setuptools-62.1.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.6\": {\n \"pip\": \"pip-21.3.1-py3-none-any.whl\",\n \"setuptools\": \"setuptools-59.6.0-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"3.5\": {\n \"pip\": \"pip-20.3.4-py2.py3-none-any.whl\",\n \"setuptools\": \"setuptools-50.3.2-py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n \"2.7\": {\n \"pip\": \"pip-20.3.4-py2.py3-none-any.whl\",\n \"setuptools\": \"setuptools-44.1.1-py2.py3-none-any.whl\",\n \"wheel\": \"wheel-0.37.1-py2.py3-none-any.whl\",\n },\n}\nMAX = \"3.11\"\n\n\ndef get_embed_wheel(distribution, for_py_version):\n path = BUNDLE_FOLDER / (BUNDLE_SUPPORT.get(for_py_version, {}) or BUNDLE_SUPPORT[MAX]).get(distribution)\n return Wheel.from_path(path)\n\n\n__all__ = (\n \"get_embed_wheel\",\n \"BUNDLE_SUPPORT\",\n \"MAX\",\n \"BUNDLE_FOLDER\",\n)\n", "path": "src/virtualenv/seed/wheels/embed/__init__.py"}]} | 1,196 | 595 |
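The virtualenv fix above only swaps the setuptools wheel filenames recorded in `BUNDLE_SUPPORT`; the lookup logic in `get_embed_wheel` is unchanged. A small usage sketch of that lookup, assuming a virtualenv release with the module layout shown in the record is installed (the version strings are illustrative):

```python
from virtualenv.seed.wheels.embed import BUNDLE_SUPPORT, MAX, get_embed_wheel

# A Python version listed in BUNDLE_SUPPORT resolves to its pinned wheel filename.
print(BUNDLE_SUPPORT["3.10"]["setuptools"])

# An unlisted version falls back to the BUNDLE_SUPPORT[MAX] entry.
print(MAX)
print(get_embed_wheel("setuptools", "3.12"))
```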
gh_patches_debug_26898 | rasdani/github-patches | git_diff | rotki__rotki-3042 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bitcoin legacy xpub with too many used addresses may fail to query balances after restart
## Problem Definition
If a user adds a legacy xpub that generates too many used addresses, rotki will properly add the xpub and all its addresses. But then, after a restart, it will fail to query all balances again.
Error is:
```
Error at querying blockchain balances: bitcoin external API request for balances failed due to https://blockchain.info/multiaddr?active=XXXXXX
```
The failure is due to attempting to query all addresses from blockchain.info using the multiaddr endpoint, which seems to have an undocumented limit.
blockchain.info errors with a 502 if too many bitcoin addresses are sent.
## Task
Fix it by experimenting and figuring out a safe limit.
Then apply that limit where we query for multiple legacy bitcoin addresses.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/chain/bitcoin/__init__.py`
Content:
```
1 from typing import Dict, List, Tuple
2
3 import requests
4
5 from rotkehlchen.errors import RemoteError, UnableToDecryptRemoteData
6 from rotkehlchen.fval import FVal
7 from rotkehlchen.typing import BTCAddress
8 from rotkehlchen.utils.misc import satoshis_to_btc
9 from rotkehlchen.utils.network import request_get_dict
10
11
12 def _have_bc1_accounts(accounts: List[BTCAddress]) -> bool:
13 return any(account.lower()[0:3] == 'bc1' for account in accounts)
14
15
16 def get_bitcoin_addresses_balances(accounts: List[BTCAddress]) -> Dict[BTCAddress, FVal]:
17 """Queries blockchain.info or blockstream for the balances of accounts
18
19 May raise:
20 - RemotError if there is a problem querying blockchain.info or blockstream
21 """
22 source = 'blockchain.info'
23 balances: Dict[BTCAddress, FVal] = {}
24 try:
25 if _have_bc1_accounts(accounts):
26 # if 1 account is bech32 we have to query blockstream. blockchaininfo won't work
27 source = 'blockstream'
28 balances = {}
29 for account in accounts:
30 url = f'https://blockstream.info/api/address/{account}'
31 response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)
32 stats = response_data['chain_stats']
33 balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])
34 balances[account] = satoshis_to_btc(balance)
35 else:
36 params = '|'.join(accounts)
37 btc_resp = request_get_dict(
38 url=f'https://blockchain.info/multiaddr?active={params}',
39 handle_429=True,
40 # If we get a 429 then their docs suggest 10 seconds
41 # https://blockchain.info/q
42 backoff_in_seconds=10,
43 )
44 for entry in btc_resp['addresses']:
45 balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))
46 except (
47 requests.exceptions.RequestException,
48 UnableToDecryptRemoteData,
49 requests.exceptions.Timeout,
50 ) as e:
51 raise RemoteError(f'bitcoin external API request for balances failed due to {str(e)}') from e # noqa: E501
52 except KeyError as e:
53 raise RemoteError(
54 f'Malformed response when querying bitcoin blockchain via {source}.'
55 f'Did not find key {e}',
56 ) from e
57
58 return balances
59
60
61 def _check_blockstream_for_transactions(
62 accounts: List[BTCAddress],
63 ) -> Dict[BTCAddress, Tuple[bool, FVal]]:
64 """May raise connection errors or KeyError"""
65 have_transactions = {}
66 for account in accounts:
67 url = f'https://blockstream.info/api/address/{account}'
68 response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)
69 stats = response_data['chain_stats']
70 balance = satoshis_to_btc(int(stats['funded_txo_sum']) - int(stats['spent_txo_sum']))
71 have_txs = stats['tx_count'] != 0
72 have_transactions[account] = (have_txs, balance)
73
74 return have_transactions
75
76
77 def _check_blockchaininfo_for_transactions(
78 accounts: List[BTCAddress],
79 ) -> Dict[BTCAddress, Tuple[bool, FVal]]:
80 """May raise RemotError or KeyError"""
81 have_transactions = {}
82 params = '|'.join(accounts)
83 btc_resp = request_get_dict(
84 url=f'https://blockchain.info/multiaddr?active={params}',
85 handle_429=True,
86 # If we get a 429 then their docs suggest 10 seconds
87 # https://blockchain.infoq/
88 backoff_in_seconds=15,
89 )
90 for entry in btc_resp['addresses']:
91 balance = satoshis_to_btc(entry['final_balance'])
92 have_transactions[entry['address']] = (entry['n_tx'] != 0, balance)
93
94 return have_transactions
95
96
97 def have_bitcoin_transactions(accounts: List[BTCAddress]) -> Dict[BTCAddress, Tuple[bool, FVal]]:
98 """
99 Takes a list of addresses and returns a mapping of which addresses have had transactions
100 and also their current balance
101
102 May raise:
103 - RemoteError if any of the queried websites fail to be queried
104 """
105 try:
106 if _have_bc1_accounts(accounts):
107 source = 'blockstream'
108 have_transactions = _check_blockstream_for_transactions(accounts)
109 else:
110 source = 'blockchain.info'
111 have_transactions = _check_blockchaininfo_for_transactions(accounts)
112 except (
113 requests.exceptions.RequestException,
114 UnableToDecryptRemoteData,
115 requests.exceptions.Timeout,
116 ) as e:
117 raise RemoteError(f'bitcoin external API request for transactions failed due to {str(e)}') from e # noqa: E501
118 except KeyError as e:
119 raise RemoteError(
120 f'Malformed response when querying bitcoin blockchain via {source}.'
121 f'Did not find key {str(e)}',
122 ) from e
123
124 return have_transactions
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rotkehlchen/chain/bitcoin/__init__.py b/rotkehlchen/chain/bitcoin/__init__.py
--- a/rotkehlchen/chain/bitcoin/__init__.py
+++ b/rotkehlchen/chain/bitcoin/__init__.py
@@ -33,16 +33,20 @@
balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])
balances[account] = satoshis_to_btc(balance)
else:
- params = '|'.join(accounts)
- btc_resp = request_get_dict(
- url=f'https://blockchain.info/multiaddr?active={params}',
- handle_429=True,
- # If we get a 429 then their docs suggest 10 seconds
- # https://blockchain.info/q
- backoff_in_seconds=10,
- )
- for entry in btc_resp['addresses']:
- balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))
+ # split the list of accounts into sublists of 80 addresses per list to overcome:
+ # https://github.com/rotki/rotki/issues/3037
+ accounts_chunks = [accounts[x:x + 80] for x in range(0, len(accounts), 80)]
+ for accounts_chunk in accounts_chunks:
+ params = '|'.join(accounts_chunk)
+ btc_resp = request_get_dict(
+ url=f'https://blockchain.info/multiaddr?active={params}',
+ handle_429=True,
+ # If we get a 429 then their docs suggest 10 seconds
+ # https://blockchain.info/q
+ backoff_in_seconds=10,
+ )
+ for entry in btc_resp['addresses']:
+ balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))
except (
requests.exceptions.RequestException,
UnableToDecryptRemoteData,
| {"golden_diff": "diff --git a/rotkehlchen/chain/bitcoin/__init__.py b/rotkehlchen/chain/bitcoin/__init__.py\n--- a/rotkehlchen/chain/bitcoin/__init__.py\n+++ b/rotkehlchen/chain/bitcoin/__init__.py\n@@ -33,16 +33,20 @@\n balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])\n balances[account] = satoshis_to_btc(balance)\n else:\n- params = '|'.join(accounts)\n- btc_resp = request_get_dict(\n- url=f'https://blockchain.info/multiaddr?active={params}',\n- handle_429=True,\n- # If we get a 429 then their docs suggest 10 seconds\n- # https://blockchain.info/q\n- backoff_in_seconds=10,\n- )\n- for entry in btc_resp['addresses']:\n- balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))\n+ # split the list of accounts into sublists of 80 addresses per list to overcome:\n+ # https://github.com/rotki/rotki/issues/3037\n+ accounts_chunks = [accounts[x:x + 80] for x in range(0, len(accounts), 80)]\n+ for accounts_chunk in accounts_chunks:\n+ params = '|'.join(accounts_chunk)\n+ btc_resp = request_get_dict(\n+ url=f'https://blockchain.info/multiaddr?active={params}',\n+ handle_429=True,\n+ # If we get a 429 then their docs suggest 10 seconds\n+ # https://blockchain.info/q\n+ backoff_in_seconds=10,\n+ )\n+ for entry in btc_resp['addresses']:\n+ balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n", "issue": "Bitcoin legacy xpub with too many used address may fail to query balances after restart\n## Problem Definition\r\n\r\nIf a user adds a legacy xpub that generates too many used addresses rotki will properly add the xpub and all its addresses. But then after a restart it will fail to query all balances again.\r\n\r\nError is:\r\n\r\n```\r\nError at querying blockchain balances: bitcoin external API request for balances failed due to https://blockchain.info/multiaddr?active=XXXXXX\r\n```\r\n\r\nThe failure is due to attempting to query all addresses from blockchain.info using the multiaddr endpoint which seems to have a non-documented limit.\r\n\r\nblockchain.info errors with a 502 if too many bitcoin addresses are sent.\r\n\r\n## Task\r\n\r\nFix it by experimenting and figuring out a safe limit.\r\nThen apply that limit where we query for multiple legacy bitcoin addresses.\n", "before_files": [{"content": "from typing import Dict, List, Tuple\n\nimport requests\n\nfrom rotkehlchen.errors import RemoteError, UnableToDecryptRemoteData\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.typing import BTCAddress\nfrom rotkehlchen.utils.misc import satoshis_to_btc\nfrom rotkehlchen.utils.network import request_get_dict\n\n\ndef _have_bc1_accounts(accounts: List[BTCAddress]) -> bool:\n return any(account.lower()[0:3] == 'bc1' for account in accounts)\n\n\ndef get_bitcoin_addresses_balances(accounts: List[BTCAddress]) -> Dict[BTCAddress, FVal]:\n \"\"\"Queries blockchain.info or blockstream for the balances of accounts\n\n May raise:\n - RemotError if there is a problem querying blockchain.info or blockstream\n \"\"\"\n source = 'blockchain.info'\n balances: Dict[BTCAddress, FVal] = {}\n try:\n if _have_bc1_accounts(accounts):\n # if 1 account is bech32 we have to query blockstream. 
blockchaininfo won't work\n source = 'blockstream'\n balances = {}\n for account in accounts:\n url = f'https://blockstream.info/api/address/{account}'\n response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)\n stats = response_data['chain_stats']\n balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])\n balances[account] = satoshis_to_btc(balance)\n else:\n params = '|'.join(accounts)\n btc_resp = request_get_dict(\n url=f'https://blockchain.info/multiaddr?active={params}',\n handle_429=True,\n # If we get a 429 then their docs suggest 10 seconds\n # https://blockchain.info/q\n backoff_in_seconds=10,\n )\n for entry in btc_resp['addresses']:\n balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n requests.exceptions.Timeout,\n ) as e:\n raise RemoteError(f'bitcoin external API request for balances failed due to {str(e)}') from e # noqa: E501\n except KeyError as e:\n raise RemoteError(\n f'Malformed response when querying bitcoin blockchain via {source}.'\n f'Did not find key {e}',\n ) from e\n\n return balances\n\n\ndef _check_blockstream_for_transactions(\n accounts: List[BTCAddress],\n) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"May raise connection errors or KeyError\"\"\"\n have_transactions = {}\n for account in accounts:\n url = f'https://blockstream.info/api/address/{account}'\n response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)\n stats = response_data['chain_stats']\n balance = satoshis_to_btc(int(stats['funded_txo_sum']) - int(stats['spent_txo_sum']))\n have_txs = stats['tx_count'] != 0\n have_transactions[account] = (have_txs, balance)\n\n return have_transactions\n\n\ndef _check_blockchaininfo_for_transactions(\n accounts: List[BTCAddress],\n) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"May raise RemotError or KeyError\"\"\"\n have_transactions = {}\n params = '|'.join(accounts)\n btc_resp = request_get_dict(\n url=f'https://blockchain.info/multiaddr?active={params}',\n handle_429=True,\n # If we get a 429 then their docs suggest 10 seconds\n # https://blockchain.infoq/\n backoff_in_seconds=15,\n )\n for entry in btc_resp['addresses']:\n balance = satoshis_to_btc(entry['final_balance'])\n have_transactions[entry['address']] = (entry['n_tx'] != 0, balance)\n\n return have_transactions\n\n\ndef have_bitcoin_transactions(accounts: List[BTCAddress]) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"\n Takes a list of addresses and returns a mapping of which addresses have had transactions\n and also their current balance\n\n May raise:\n - RemoteError if any of the queried websites fail to be queried\n \"\"\"\n try:\n if _have_bc1_accounts(accounts):\n source = 'blockstream'\n have_transactions = _check_blockstream_for_transactions(accounts)\n else:\n source = 'blockchain.info'\n have_transactions = _check_blockchaininfo_for_transactions(accounts)\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n requests.exceptions.Timeout,\n ) as e:\n raise RemoteError(f'bitcoin external API request for transactions failed due to {str(e)}') from e # noqa: E501\n except KeyError as e:\n raise RemoteError(\n f'Malformed response when querying bitcoin blockchain via {source}.'\n f'Did not find key {str(e)}',\n ) from e\n\n return have_transactions\n", "path": "rotkehlchen/chain/bitcoin/__init__.py"}], "after_files": [{"content": "from typing import Dict, List, Tuple\n\nimport 
requests\n\nfrom rotkehlchen.errors import RemoteError, UnableToDecryptRemoteData\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.typing import BTCAddress\nfrom rotkehlchen.utils.misc import satoshis_to_btc\nfrom rotkehlchen.utils.network import request_get_dict\n\n\ndef _have_bc1_accounts(accounts: List[BTCAddress]) -> bool:\n return any(account.lower()[0:3] == 'bc1' for account in accounts)\n\n\ndef get_bitcoin_addresses_balances(accounts: List[BTCAddress]) -> Dict[BTCAddress, FVal]:\n \"\"\"Queries blockchain.info or blockstream for the balances of accounts\n\n May raise:\n - RemotError if there is a problem querying blockchain.info or blockstream\n \"\"\"\n source = 'blockchain.info'\n balances: Dict[BTCAddress, FVal] = {}\n try:\n if _have_bc1_accounts(accounts):\n # if 1 account is bech32 we have to query blockstream. blockchaininfo won't work\n source = 'blockstream'\n balances = {}\n for account in accounts:\n url = f'https://blockstream.info/api/address/{account}'\n response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)\n stats = response_data['chain_stats']\n balance = int(stats['funded_txo_sum']) - int(stats['spent_txo_sum'])\n balances[account] = satoshis_to_btc(balance)\n else:\n # split the list of accounts into sublists of 80 addresses per list to overcome:\n # https://github.com/rotki/rotki/issues/3037\n accounts_chunks = [accounts[x:x + 80] for x in range(0, len(accounts), 80)]\n for accounts_chunk in accounts_chunks:\n params = '|'.join(accounts_chunk)\n btc_resp = request_get_dict(\n url=f'https://blockchain.info/multiaddr?active={params}',\n handle_429=True,\n # If we get a 429 then their docs suggest 10 seconds\n # https://blockchain.info/q\n backoff_in_seconds=10,\n )\n for entry in btc_resp['addresses']:\n balances[entry['address']] = satoshis_to_btc(FVal(entry['final_balance']))\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n requests.exceptions.Timeout,\n ) as e:\n raise RemoteError(f'bitcoin external API request for balances failed due to {str(e)}') from e # noqa: E501\n except KeyError as e:\n raise RemoteError(\n f'Malformed response when querying bitcoin blockchain via {source}.'\n f'Did not find key {e}',\n ) from e\n\n return balances\n\n\ndef _check_blockstream_for_transactions(\n accounts: List[BTCAddress],\n) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"May raise connection errors or KeyError\"\"\"\n have_transactions = {}\n for account in accounts:\n url = f'https://blockstream.info/api/address/{account}'\n response_data = request_get_dict(url=url, handle_429=True, backoff_in_seconds=4)\n stats = response_data['chain_stats']\n balance = satoshis_to_btc(int(stats['funded_txo_sum']) - int(stats['spent_txo_sum']))\n have_txs = stats['tx_count'] != 0\n have_transactions[account] = (have_txs, balance)\n\n return have_transactions\n\n\ndef _check_blockchaininfo_for_transactions(\n accounts: List[BTCAddress],\n) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"May raise RemotError or KeyError\"\"\"\n have_transactions = {}\n params = '|'.join(accounts)\n btc_resp = request_get_dict(\n url=f'https://blockchain.info/multiaddr?active={params}',\n handle_429=True,\n # If we get a 429 then their docs suggest 10 seconds\n # https://blockchain.infoq/\n backoff_in_seconds=15,\n )\n for entry in btc_resp['addresses']:\n balance = satoshis_to_btc(entry['final_balance'])\n have_transactions[entry['address']] = (entry['n_tx'] != 0, balance)\n\n return have_transactions\n\n\ndef 
have_bitcoin_transactions(accounts: List[BTCAddress]) -> Dict[BTCAddress, Tuple[bool, FVal]]:\n \"\"\"\n Takes a list of addresses and returns a mapping of which addresses have had transactions\n and also their current balance\n\n May raise:\n - RemoteError if any of the queried websites fail to be queried\n \"\"\"\n try:\n if _have_bc1_accounts(accounts):\n source = 'blockstream'\n have_transactions = _check_blockstream_for_transactions(accounts)\n else:\n source = 'blockchain.info'\n have_transactions = _check_blockchaininfo_for_transactions(accounts)\n except (\n requests.exceptions.RequestException,\n UnableToDecryptRemoteData,\n requests.exceptions.Timeout,\n ) as e:\n raise RemoteError(f'bitcoin external API request for transactions failed due to {str(e)}') from e # noqa: E501\n except KeyError as e:\n raise RemoteError(\n f'Malformed response when querying bitcoin blockchain via {source}.'\n f'Did not find key {str(e)}',\n ) from e\n\n return have_transactions\n", "path": "rotkehlchen/chain/bitcoin/__init__.py"}]} | 1,828 | 449 |
gh_patches_debug_21141 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-7505 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
matplotlib import issue: ImportError DLL load failed while importing _path...
Hey,
From searching around, matplotlib import errors occur here and there, usually the fix is to downgrade the matplotlib version, or to change the hook-matplotlib.py file. This did not resolve the issue.
matplotlib import is to blame because running the same `pyinstaller --onefile Script.py` command where matplotlib import is commented out works as expected.
Otherwise opening the newly created exe file writes these few lines before exiting (excuse me for not copying everything from the screenshot I took moments before it closed):

The error:
```
ImportError: DLL load failed while importing _path: The specific module could not be found.
[5780] Failed to execute script 'Script' due to unhandled exception!
```
During the pyinstaller run, the WARNINGS regarding matplotlib packages:
```
2149 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\backends\_backend_agg.cp39-win_amd64.pyd
2189 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\_path.cp39-win_amd64.pyd
2209 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\backends\_tkagg.cp39-win_amd64.pyd
2239 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\_tri.cp39-win_amd64.pyd
2249 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\_qhull.cp39-win_amd64.pyd
2259 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\Users\...\venv\lib\site-packages\matplotlib\ft2font.cp39-win_amd64.pyd
```
A thing to note is that my Python is an Anaconda distribution, so I also get this warning:
`WARNING: Assuming this is not an Anaconda environment or an additional venv/pipenv/... environment manager is being used on top, because the conda-meta folder C:\Users\...\venv\conda-meta does not exist.`
But the warning comes up with matplotlib import commented out and everything works as expected, so I doubt this has something to do with the issue.
I tried:
- Reinstalling matplotlib and pyinstaller
- I'm using the Anaconda Prompt, otherwise it doesn't even get to creating the Script.exe file.
- Installing Microsoft Visual C++ 2015-2022 Redistribution.
- Changing the hook-matplotlib.py in hooks folder (it is already set right - the suggestion is old).
Info:
- Windows 10 Enterprise 64-bit
- python --version
'Python 3.9.7'
- pyinstaller --version
'5.9.0'
- matplotlib.__version__
'3.7.1'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-matplotlib.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2013-2023, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 from PyInstaller import isolated
13
14
15 @isolated.decorate
16 def mpl_data_dir():
17 import matplotlib
18 return matplotlib.get_data_path()
19
20
21 datas = [
22 (mpl_data_dir(), "matplotlib/mpl-data"),
23 ]
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/hook-matplotlib.py b/PyInstaller/hooks/hook-matplotlib.py
--- a/PyInstaller/hooks/hook-matplotlib.py
+++ b/PyInstaller/hooks/hook-matplotlib.py
@@ -10,6 +10,8 @@
#-----------------------------------------------------------------------------
from PyInstaller import isolated
+from PyInstaller import compat
+from PyInstaller.utils import hooks as hookutils
@isolated.decorate
@@ -21,3 +23,16 @@
datas = [
(mpl_data_dir(), "matplotlib/mpl-data"),
]
+
+binaries = []
+
+# Windows PyPI wheels for `matplotlib` >= 3.7.0 use `delvewheel`.
+# In addition to DLLs from `matplotlib.libs` directory, which should be picked up automatically by dependency analysis
+# in contemporary PyInstaller versions, we also need to collect the load-order file. This used to be required for
+# python <= 3.7 (that lacked `os.add_dll_directory`), but is also needed for Anaconda python 3.8 and 3.9, where
+# `delvewheel` falls back to load-order file codepath due to Anaconda breaking `os.add_dll_directory` implementation.
+if compat.is_win and hookutils.is_module_satisfies('matplotlib >= 3.7.0'):
+ delvewheel_datas, delvewheel_binaries = hookutils.collect_delvewheel_libs_directory('matplotlib')
+
+ datas += delvewheel_datas
+ binaries += delvewheel_binaries
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-matplotlib.py b/PyInstaller/hooks/hook-matplotlib.py\n--- a/PyInstaller/hooks/hook-matplotlib.py\n+++ b/PyInstaller/hooks/hook-matplotlib.py\n@@ -10,6 +10,8 @@\n #-----------------------------------------------------------------------------\n \n from PyInstaller import isolated\n+from PyInstaller import compat\n+from PyInstaller.utils import hooks as hookutils\n \n \n @isolated.decorate\n@@ -21,3 +23,16 @@\n datas = [\n (mpl_data_dir(), \"matplotlib/mpl-data\"),\n ]\n+\n+binaries = []\n+\n+# Windows PyPI wheels for `matplotlib` >= 3.7.0 use `delvewheel`.\n+# In addition to DLLs from `matplotlib.libs` directory, which should be picked up automatically by dependency analysis\n+# in contemporary PyInstaller versions, we also need to collect the load-order file. This used to be required for\n+# python <= 3.7 (that lacked `os.add_dll_directory`), but is also needed for Anaconda python 3.8 and 3.9, where\n+# `delvewheel` falls back to load-order file codepath due to Anaconda breaking `os.add_dll_directory` implementation.\n+if compat.is_win and hookutils.is_module_satisfies('matplotlib >= 3.7.0'):\n+ delvewheel_datas, delvewheel_binaries = hookutils.collect_delvewheel_libs_directory('matplotlib')\n+\n+ datas += delvewheel_datas\n+ binaries += delvewheel_binaries\n", "issue": "matplotlib import issue: ImportError DLL load failed while importing _path...\nHey,\r\nFrom searching around, matplotlib import errors occur here and there, usually the fix is to downgrade the matplotlib version, or to change the hook-matplotlib.py file. This did not resolve the issue.\r\n\r\nmatplotlib import is to blame because running the same `pyinstaller --onefile Script.py` command where matplotlib import is commented out works as expected.\r\nOtherwise opening the newly created exe file writes these few lines before exiting (excuse me for not copying everything from the screenshot I took moments before it closed):\r\n\r\nThe error:\r\n```\r\nImportError: DLL load failed while importing _path: The specific module could not be found.\r\n[5780] Failed to execute script 'Script' due to unhandled exeption!\r\n\r\n```\r\n\r\nDuring the pyinstaller run, the WARNINGS regarding matplotlib packages:\r\n```\r\n2149 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\backends\\_backend_agg.cp39-win_amd64.pyd\r\n2189 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\_path.cp39-win_amd64.pyd\r\n2209 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\backends\\_tkagg.cp39-win_amd64.pyd\r\n2239 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\_tri.cp39-win_amd64.pyd\r\n2249 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\_qhull.cp39-win_amd64.pyd\r\n2259 WARNING: lib not found: msvcp140-e78ebc24b6ffa690be9375aacad743a7.dll dependency of C:\\Users\\...\\venv\\lib\\site-packages\\matplotlib\\ft2font.cp39-win_amd64.pyd\r\n```\r\n\r\nA thing to note is that my python is Anaconda distribution, so I also get this warning:\r\n`WARNING: Assuming this is not an Anaconda environment or an additional venv/pipenv/... 
environment manager is being used on top, because the conda-meta folder C:\\Users\\...\\venv\\conda-meta does not exist.`\r\nBut the warning comes up with matplotlib import commented out and everything works as expected, so I doubt this has something to do with the issue.\r\n\r\nI tried:\r\n- Reinstalling matplotlib and pyinstaller\r\n- Im using the Anaconda Prompt, otherwise it doesn't even get to creating the Script.exe file.\r\n- Installing Microsoft Visual C++ 2015-2022 Redistribution.\r\n- Changing the hook-matplotlib.py in hooks folder (it is already set right - the suggestion is old).\r\n\r\nInfo:\r\n- Windows 10 Enterprise 64-bit\r\n- python --version\r\n'Python 3.9.7'\r\n- pyinstaller --version\r\n'5.9.0'\r\n- matplotlib.__version__\r\n'3.7.1'\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2013-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller import isolated\n\n\[email protected]\ndef mpl_data_dir():\n import matplotlib\n return matplotlib.get_data_path()\n\n\ndatas = [\n (mpl_data_dir(), \"matplotlib/mpl-data\"),\n]\n", "path": "PyInstaller/hooks/hook-matplotlib.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2013-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller import isolated\nfrom PyInstaller import compat\nfrom PyInstaller.utils import hooks as hookutils\n\n\[email protected]\ndef mpl_data_dir():\n import matplotlib\n return matplotlib.get_data_path()\n\n\ndatas = [\n (mpl_data_dir(), \"matplotlib/mpl-data\"),\n]\n\nbinaries = []\n\n# Windows PyPI wheels for `matplotlib` >= 3.7.0 use `delvewheel`.\n# In addition to DLLs from `matplotlib.libs` directory, which should be picked up automatically by dependency analysis\n# in contemporary PyInstaller versions, we also need to collect the load-order file. This used to be required for\n# python <= 3.7 (that lacked `os.add_dll_directory`), but is also needed for Anaconda python 3.8 and 3.9, where\n# `delvewheel` falls back to load-order file codepath due to Anaconda breaking `os.add_dll_directory` implementation.\nif compat.is_win and hookutils.is_module_satisfies('matplotlib >= 3.7.0'):\n delvewheel_datas, delvewheel_binaries = hookutils.collect_delvewheel_libs_directory('matplotlib')\n\n datas += delvewheel_datas\n binaries += delvewheel_binaries\n", "path": "PyInstaller/hooks/hook-matplotlib.py"}]} | 1,349 | 342 |
gh_patches_debug_21548 | rasdani/github-patches | git_diff | ManimCommunity__manim-3329 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature request: optionally suppress "Manim Community v{version}"
## Description of proposed feature
Hello! As of recent versions of Manim, it is not possible to suppress the `Manim Community v{version}` message that is printed out when importing `manim`.
Actually, this is because importing `manim` imports `manim.__init__`, which in turn imports `manim.__main__` with this: https://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__init__.py#L104-L113
Before, I could suppress the message using some redirect, but now it fails
```python
>>> import os
>>> from contextlib import redirect_stdout
>>>
>>> with redirect_stdout(open(os.devnull, "w")):
>>> print("HELLO")
>>> import manim
Manim Community v0.17.2
```
because Rich will still print to `stdout` (since the console was created before `stdout` was redirected).
Rich's console has an optional `file` argument, but there currently seems to be a bug with it, reported in https://github.com/Textualize/rich/issues/3083, that makes Rich's consoles only print to stdout.
A **nice feature** would be to make the behavior optional.
## How can the new feature be used?
There are a few possible solutions:
- Not printing the version at all;
- Moving the print message inside the `__main__`:
https://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__main__.py#L59-L60
- Optionally print (or disable print) with some environ variable (a rough sketch is given after this list)
- ...
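To illustrate the environ-variable option above, here is a minimal sketch; the variable name `MANIM_SUPPRESS_BANNER` is purely hypothetical, and `console` and `__version__` are assumed to be the objects already imported at the top of `manim/__main__.py`:

```python
import os

# Hypothetical sketch only: skip the banner when a (made-up) environment
# variable is set; otherwise keep the current behaviour.
if not os.environ.get("MANIM_SUPPRESS_BANNER"):
    console.print(f"Manim Community [green]v{__version__}[/green]\n")
```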
## Additional comments
I develop Manim Slides, a Manim plugin, and I'd like to avoid printing the Manim Community version every time I import this package, because the user does not really need to know that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/__main__.py`
Content:
```
1 from __future__ import annotations
2
3 import sys
4
5 import cloup
6
7 from . import __version__, cli_ctx_settings, console
8 from .cli.cfg.group import cfg
9 from .cli.checkhealth.commands import checkhealth
10 from .cli.default_group import DefaultGroup
11 from .cli.init.commands import init
12 from .cli.plugins.commands import plugins
13 from .cli.render.commands import render
14 from .constants import EPILOG
15
16
17 def exit_early(ctx, param, value):
18 if value:
19 sys.exit()
20
21
22 console.print(f"Manim Community [green]v{__version__}[/green]\n")
23
24
25 @cloup.group(
26 context_settings=cli_ctx_settings,
27 cls=DefaultGroup,
28 default="render",
29 no_args_is_help=True,
30 help="Animation engine for explanatory math videos.",
31 epilog="See 'manim <command>' to read about a specific subcommand.\n\n"
32 "Note: the subcommand 'manim render' is called if no other subcommand "
33 "is specified. Run 'manim render --help' if you would like to know what the "
34 f"'-ql' or '-p' flags do, for example.\n\n{EPILOG}",
35 )
36 @cloup.option(
37 "--version",
38 is_flag=True,
39 help="Show version and exit.",
40 callback=exit_early,
41 is_eager=True,
42 expose_value=False,
43 )
44 @cloup.pass_context
45 def main(ctx):
46 """The entry point for manim."""
47 pass
48
49
50 main.add_command(checkhealth)
51 main.add_command(cfg)
52 main.add_command(plugins)
53 main.add_command(init)
54 main.add_command(render)
55
56 if __name__ == "__main__":
57 main()
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/manim/__main__.py b/manim/__main__.py
--- a/manim/__main__.py
+++ b/manim/__main__.py
@@ -2,6 +2,7 @@
import sys
+import click
import cloup
from . import __version__, cli_ctx_settings, console
@@ -14,12 +15,15 @@
from .constants import EPILOG
-def exit_early(ctx, param, value):
+def show_splash(ctx, param, value):
if value:
- sys.exit()
+ console.print(f"Manim Community [green]v{__version__}[/green]\n")
-console.print(f"Manim Community [green]v{__version__}[/green]\n")
+def print_version_and_exit(ctx, param, value):
+ show_splash(ctx, param, value)
+ if value:
+ ctx.exit()
@cloup.group(
@@ -37,7 +41,16 @@
"--version",
is_flag=True,
help="Show version and exit.",
- callback=exit_early,
+ callback=print_version_and_exit,
+ is_eager=True,
+ expose_value=False,
+)
[email protected](
+ "--show-splash/--hide-splash",
+ is_flag=True,
+ default=True,
+ help="Print splash message with version information.",
+ callback=show_splash,
is_eager=True,
expose_value=False,
)
| {"golden_diff": "diff --git a/manim/__main__.py b/manim/__main__.py\n--- a/manim/__main__.py\n+++ b/manim/__main__.py\n@@ -2,6 +2,7 @@\n \n import sys\n \n+import click\n import cloup\n \n from . import __version__, cli_ctx_settings, console\n@@ -14,12 +15,15 @@\n from .constants import EPILOG\n \n \n-def exit_early(ctx, param, value):\n+def show_splash(ctx, param, value):\n if value:\n- sys.exit()\n+ console.print(f\"Manim Community [green]v{__version__}[/green]\\n\")\n \n \n-console.print(f\"Manim Community [green]v{__version__}[/green]\\n\")\n+def print_version_and_exit(ctx, param, value):\n+ show_splash(ctx, param, value)\n+ if value:\n+ ctx.exit()\n \n \n @cloup.group(\n@@ -37,7 +41,16 @@\n \"--version\",\n is_flag=True,\n help=\"Show version and exit.\",\n- callback=exit_early,\n+ callback=print_version_and_exit,\n+ is_eager=True,\n+ expose_value=False,\n+)\[email protected](\n+ \"--show-splash/--hide-splash\",\n+ is_flag=True,\n+ default=True,\n+ help=\"Print splash message with version information.\",\n+ callback=show_splash,\n is_eager=True,\n expose_value=False,\n )\n", "issue": "Feature request: optionally suppress \"Manim Comunity v{version}\"\n## Description of proposed feature\r\n\r\nHello! As of recent versions of Manim, it is not possible to suppress the `Manim Community v{version}` message that is printed out when importing `manim`.\r\n\r\nActually, this is because importing `manim` imports `manim.__init__`, which in turn imports `manim.__main__` with this: https://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__init__.py#L104-L113\r\n\r\nBefore, I could suppress the message using some redirect, but now if fails\r\n\r\n```python\r\n>>> import os\r\n>>> from contextlib import redirect_stdout\r\n>>> \r\n>>> with redirect_stdout(open(os.devnull, \"w\")):\r\n>>> print(\"HELLO\")\r\n>>> import manim\r\nManim Community v0.17.2\r\n```\r\nbecause Rich will still print to `stdout` (since the console was created before `stdout` was redirected). \r\n\r\nRich's console has an optional `file` argument, but there currently seem to be a bug with it, reported in https://github.com/Textualize/rich/issues/3083, that makes Rich's consoles only print to stdout.\r\n\r\nA **nice feature** would be to make the behavior optional.\r\n\r\n## How can the new feature be used?\r\n\r\nThere are a few possible solutions:\r\n\r\n- Not printing the version at all;\r\n- Moving the print message inside the `__main__`:\r\nhttps://github.com/ManimCommunity/manim/blob/88836df8ab1ea153ed57848a74a694c517962163/manim/__main__.py#L59-L60\r\n- Optionally print (or disable print) with some environ variable\r\n- ...\r\n\r\n## Additional comments\r\n\r\nI develop Manim Slides, a Manim plugin, and I'd like to avoid printing the Manim Community version every time I import this package, because the user does not really need to know that.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport sys\n\nimport cloup\n\nfrom . 
import __version__, cli_ctx_settings, console\nfrom .cli.cfg.group import cfg\nfrom .cli.checkhealth.commands import checkhealth\nfrom .cli.default_group import DefaultGroup\nfrom .cli.init.commands import init\nfrom .cli.plugins.commands import plugins\nfrom .cli.render.commands import render\nfrom .constants import EPILOG\n\n\ndef exit_early(ctx, param, value):\n if value:\n sys.exit()\n\n\nconsole.print(f\"Manim Community [green]v{__version__}[/green]\\n\")\n\n\[email protected](\n context_settings=cli_ctx_settings,\n cls=DefaultGroup,\n default=\"render\",\n no_args_is_help=True,\n help=\"Animation engine for explanatory math videos.\",\n epilog=\"See 'manim <command>' to read about a specific subcommand.\\n\\n\"\n \"Note: the subcommand 'manim render' is called if no other subcommand \"\n \"is specified. Run 'manim render --help' if you would like to know what the \"\n f\"'-ql' or '-p' flags do, for example.\\n\\n{EPILOG}\",\n)\[email protected](\n \"--version\",\n is_flag=True,\n help=\"Show version and exit.\",\n callback=exit_early,\n is_eager=True,\n expose_value=False,\n)\[email protected]_context\ndef main(ctx):\n \"\"\"The entry point for manim.\"\"\"\n pass\n\n\nmain.add_command(checkhealth)\nmain.add_command(cfg)\nmain.add_command(plugins)\nmain.add_command(init)\nmain.add_command(render)\n\nif __name__ == \"__main__\":\n main()\n", "path": "manim/__main__.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport sys\n\nimport click\nimport cloup\n\nfrom . import __version__, cli_ctx_settings, console\nfrom .cli.cfg.group import cfg\nfrom .cli.checkhealth.commands import checkhealth\nfrom .cli.default_group import DefaultGroup\nfrom .cli.init.commands import init\nfrom .cli.plugins.commands import plugins\nfrom .cli.render.commands import render\nfrom .constants import EPILOG\n\n\ndef show_splash(ctx, param, value):\n if value:\n console.print(f\"Manim Community [green]v{__version__}[/green]\\n\")\n\n\ndef print_version_and_exit(ctx, param, value):\n show_splash(ctx, param, value)\n if value:\n ctx.exit()\n\n\[email protected](\n context_settings=cli_ctx_settings,\n cls=DefaultGroup,\n default=\"render\",\n no_args_is_help=True,\n help=\"Animation engine for explanatory math videos.\",\n epilog=\"See 'manim <command>' to read about a specific subcommand.\\n\\n\"\n \"Note: the subcommand 'manim render' is called if no other subcommand \"\n \"is specified. Run 'manim render --help' if you would like to know what the \"\n f\"'-ql' or '-p' flags do, for example.\\n\\n{EPILOG}\",\n)\[email protected](\n \"--version\",\n is_flag=True,\n help=\"Show version and exit.\",\n callback=print_version_and_exit,\n is_eager=True,\n expose_value=False,\n)\[email protected](\n \"--show-splash/--hide-splash\",\n is_flag=True,\n default=True,\n help=\"Print splash message with version information.\",\n callback=show_splash,\n is_eager=True,\n expose_value=False,\n)\[email protected]_context\ndef main(ctx):\n \"\"\"The entry point for manim.\"\"\"\n pass\n\n\nmain.add_command(checkhealth)\nmain.add_command(cfg)\nmain.add_command(plugins)\nmain.add_command(init)\nmain.add_command(render)\n\nif __name__ == \"__main__\":\n main()\n", "path": "manim/__main__.py"}]} | 1,209 | 327 |
gh_patches_debug_708 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-2879 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'TypeError' object has no attribute 'tb_frame'
While fetching object details from insights inspect, I get kicked out of the IPython console with the following error.
'TypeError' object has no attribute 'tb_frame'
(gss-rules) ⌊gss-rules⌋»$ insights inspect insights.parsers.installed_rpms.InstalledRpms ~/scripts/rhel7_sosreport/
IPython Console Usage Info:
Enter 'InstalledRpms.' and tab to get a list of properties
Example:
In [1]: InstalledRpms.<property_name>
Out[1]: <property value>
To exit ipython enter 'exit' and hit enter or use 'CTL D'
Starting IPython Interpreter Now
In [1]: InstalledRpms
'TypeError' object has no attribute 'tb_frame'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3 from setuptools import setup, find_packages
4
5 __here__ = os.path.dirname(os.path.abspath(__file__))
6
7 package_info = dict.fromkeys(["RELEASE", "COMMIT", "VERSION", "NAME"])
8
9 for name in package_info:
10 with open(os.path.join(__here__, "insights", name)) as f:
11 package_info[name] = f.read().strip()
12
13 entry_points = {
14 'console_scripts': [
15 'insights-collect = insights.collect:main',
16 'insights-run = insights:main',
17 'insights = insights.command_parser:main',
18 'insights-cat = insights.tools.cat:main',
19 'insights-dupkeycheck = insights.tools.dupkeycheck:main',
20 'insights-inspect = insights.tools.insights_inspect:main',
21 'insights-info = insights.tools.query:main',
22 'insights-ocpshell= insights.ocpshell:main',
23 'client = insights.client:run',
24 'mangle = insights.util.mangle:main'
25 ]
26 }
27
28 runtime = set([
29 'six',
30 'requests',
31 'redis',
32 'cachecontrol',
33 'cachecontrol[redis]',
34 'cachecontrol[filecache]',
35 'defusedxml',
36 'lockfile',
37 'jinja2',
38 ])
39
40 if (sys.version_info < (2, 7)):
41 runtime.add('pyyaml>=3.10,<=3.13')
42 else:
43 runtime.add('pyyaml')
44
45
46 def maybe_require(pkg):
47 try:
48 __import__(pkg)
49 except ImportError:
50 runtime.add(pkg)
51
52
53 maybe_require("importlib")
54 maybe_require("argparse")
55
56
57 client = set([
58 'requests'
59 ])
60
61 develop = set([
62 'futures==3.0.5',
63 'wheel',
64 ])
65
66 docs = set([
67 'Sphinx<=3.0.2',
68 'nbsphinx',
69 'sphinx_rtd_theme',
70 'ipython',
71 'colorama',
72 'jinja2',
73 'Pygments'
74 ])
75
76 testing = set([
77 'coverage==4.3.4',
78 'pytest==3.0.6',
79 'pytest-cov==2.4.0',
80 'mock==2.0.0',
81 ])
82
83 cluster = set([
84 'ansible',
85 'pandas',
86 'colorama',
87 ])
88
89 openshift = set([
90 'openshift'
91 ])
92
93 linting = set([
94 'flake8==2.6.2',
95 ])
96
97 optional = set([
98 'python-cjson',
99 'python-logstash',
100 'python-statsd',
101 'watchdog',
102 ])
103
104 if __name__ == "__main__":
105 # allows for runtime modification of rpm name
106 name = os.environ.get("INSIGHTS_CORE_NAME", package_info["NAME"])
107
108 setup(
109 name=name,
110 version=package_info["VERSION"],
111 description="Insights Core is a data collection and analysis framework",
112 long_description=open("README.rst").read(),
113 url="https://github.com/redhatinsights/insights-core",
114 author="Red Hat, Inc.",
115 author_email="[email protected]",
116 packages=find_packages(),
117 install_requires=list(runtime),
118 package_data={'': ['LICENSE']},
119 license='Apache 2.0',
120 extras_require={
121 'develop': list(runtime | develop | client | docs | linting | testing | cluster),
122 'develop26': list(runtime | develop | client | linting | testing | cluster),
123 'client': list(runtime | client),
124 'client-develop': list(runtime | develop | client | linting | testing),
125 'cluster': list(runtime | cluster),
126 'openshift': list(runtime | openshift),
127 'optional': list(optional),
128 'docs': list(docs),
129 'linting': list(linting | client),
130 'testing': list(testing | client)
131 },
132 classifiers=[
133 'Development Status :: 5 - Production/Stable',
134 'Intended Audience :: Developers',
135 'Natural Language :: English',
136 'License :: OSI Approved :: Apache Software License',
137 'Programming Language :: Python',
138 'Programming Language :: Python :: 2.6',
139 'Programming Language :: Python :: 2.7',
140 'Programming Language :: Python :: 3.3',
141 'Programming Language :: Python :: 3.4',
142 'Programming Language :: Python :: 3.5',
143 'Programming Language :: Python :: 3.6'
144 ],
145 entry_points=entry_points,
146 include_package_data=True
147 )
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -70,7 +70,9 @@
'ipython',
'colorama',
'jinja2',
- 'Pygments'
+ 'Pygments',
+ 'jedi<0.18.0' # Open issue with jedi 0.18.0 and iPython <= 7.19
+ # https://github.com/davidhalter/jedi/issues/1714
])
testing = set([
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -70,7 +70,9 @@\n 'ipython',\n 'colorama',\n 'jinja2',\n- 'Pygments'\n+ 'Pygments',\n+ 'jedi<0.18.0' # Open issue with jedi 0.18.0 and iPython <= 7.19\n+ # https://github.com/davidhalter/jedi/issues/1714\n ])\n \n testing = set([\n", "issue": "'TypeError' object has no attribute 'tb_frame'\nWhile fetching object details from insight inspect, getting kicked out from the ipython console with the following error. \r\n 'TypeError' object has no attribute 'tb_frame'\r\n\r\n(gss-rules) \u230agss-rules\u230b\u00bb$ insights inspect insights.parsers.installed_rpms.InstalledRpms ~/scripts/rhel7_sosreport/\r\n\r\nIPython Console Usage Info:\r\n\r\nEnter 'InstalledRpms.' and tab to get a list of properties \r\nExample:\r\nIn [1]: InstalledRpms.<property_name>\r\nOut[1]: <property value>\r\n\r\nTo exit ipython enter 'exit' and hit enter or use 'CTL D'\r\n\r\nStarting IPython Interpreter Now \r\n\r\nIn [1]: InstalledRpms\r\n'TypeError' object has no attribute 'tb_frame'\r\n\n", "before_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-collect = insights.collect:main',\n 'insights-run = insights:main',\n 'insights = insights.command_parser:main',\n 'insights-cat = insights.tools.cat:main',\n 'insights-dupkeycheck = insights.tools.dupkeycheck:main',\n 'insights-inspect = insights.tools.insights_inspect:main',\n 'insights-info = insights.tools.query:main',\n 'insights-ocpshell= insights.ocpshell:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'six',\n 'requests',\n 'redis',\n 'cachecontrol',\n 'cachecontrol[redis]',\n 'cachecontrol[filecache]',\n 'defusedxml',\n 'lockfile',\n 'jinja2',\n])\n\nif (sys.version_info < (2, 7)):\n runtime.add('pyyaml>=3.10,<=3.13')\nelse:\n runtime.add('pyyaml')\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests'\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx<=3.0.2',\n 'nbsphinx',\n 'sphinx_rtd_theme',\n 'ipython',\n 'colorama',\n 'jinja2',\n 'Pygments'\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\ncluster = set([\n 'ansible',\n 'pandas',\n 'colorama',\n])\n\nopenshift = set([\n 'openshift'\n])\n\nlinting = set([\n 'flake8==2.6.2',\n])\n\noptional = set([\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': 
list(runtime | develop | client | docs | linting | testing | cluster),\n 'develop26': list(runtime | develop | client | linting | testing | cluster),\n 'client': list(runtime | client),\n 'client-develop': list(runtime | develop | client | linting | testing),\n 'cluster': list(runtime | cluster),\n 'openshift': list(runtime | openshift),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-collect = insights.collect:main',\n 'insights-run = insights:main',\n 'insights = insights.command_parser:main',\n 'insights-cat = insights.tools.cat:main',\n 'insights-dupkeycheck = insights.tools.dupkeycheck:main',\n 'insights-inspect = insights.tools.insights_inspect:main',\n 'insights-info = insights.tools.query:main',\n 'insights-ocpshell= insights.ocpshell:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'six',\n 'requests',\n 'redis',\n 'cachecontrol',\n 'cachecontrol[redis]',\n 'cachecontrol[filecache]',\n 'defusedxml',\n 'lockfile',\n 'jinja2',\n])\n\nif (sys.version_info < (2, 7)):\n runtime.add('pyyaml>=3.10,<=3.13')\nelse:\n runtime.add('pyyaml')\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests'\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx<=3.0.2',\n 'nbsphinx',\n 'sphinx_rtd_theme',\n 'ipython',\n 'colorama',\n 'jinja2',\n 'Pygments',\n 'jedi<0.18.0' # Open issue with jedi 0.18.0 and iPython <= 7.19\n # https://github.com/davidhalter/jedi/issues/1714\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\ncluster = set([\n 'ansible',\n 'pandas',\n 'colorama',\n])\n\nopenshift = set([\n 'openshift'\n])\n\nlinting = set([\n 'flake8==2.6.2',\n])\n\noptional = set([\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n 
install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': list(runtime | develop | client | docs | linting | testing | cluster),\n 'develop26': list(runtime | develop | client | linting | testing | cluster),\n 'client': list(runtime | client),\n 'client-develop': list(runtime | develop | client | linting | testing),\n 'cluster': list(runtime | cluster),\n 'openshift': list(runtime | openshift),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}]} | 1,736 | 123 |
gh_patches_debug_7449 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-140 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve lint messages on whitespace
The lint message about selector spacing doesn't make it clear that the problem is the number of spaces:
`Selectors are suggested to take a " # []" form.`
It would be clearer to have a message like: `<two spaces> # <one space> [selector]`
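For reference, a requirement line that would satisfy the suggested form (two spaces before the `#`, one space after it) is something like `    - enum34  # [py2k]`.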
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_smithy/lint_recipe.py`
Content:
```
1 import os
2 import re
3
4 import jinja2
5 import ruamel.yaml
6
7
8 EXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',
9 'test', 'app', 'about', 'extra']
10
11 REQUIREMENTS_ORDER = ['build', 'run']
12
13
14 class NullUndefined(jinja2.Undefined):
15 def __unicode__(self):
16 return unicode(self._undefined_name)
17
18
19 def get_section(parent, name, lints):
20 section = parent.get(name, {})
21 if not isinstance(section, dict):
22 lints.append('The "{}" section was expected to be a dictionary, but '
23 'got a {}.'.format(name, type(section).__name__))
24 section = {}
25 return section
26
27
28 def lintify(meta, recipe_dir=None):
29 lints = []
30 major_sections = list(meta.keys())
31
32 # If the recipe_dir exists (no guarantee within this function) , we can
33 # find the meta.yaml within it.
34 meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')
35
36 source_section = get_section(meta, 'source', lints)
37 build_section = get_section(meta, 'build', lints)
38 requirements_section = get_section(meta, 'requirements', lints)
39 about_section = get_section(meta, 'about', lints)
40 extra_section = get_section(meta, 'extra', lints)
41
42 # 1: Top level meta.yaml keys should have a specific order.
43 section_order_sorted = sorted(major_sections,
44 key=EXPECTED_SECTION_ORDER.index)
45 if major_sections != section_order_sorted:
46 lints.append('The top level meta keys are in an unexpected order. '
47 'Expecting {}.'.format(section_order_sorted))
48
49 # 2: The about section should have a home, license and summary.
50 for about_item in ['home', 'license', 'summary']:
51 # if the section doesn't exist, or is just empty, lint it.
52 if not about_section.get(about_item, ''):
53 lints.append('The {} item is expected in the about section.'
54 ''.format(about_item))
55
56 # 3: The recipe should have some maintainers.
57 if not extra_section.get('recipe-maintainers', []):
58 lints.append('The recipe could do with some maintainers listed in '
59 'the "extra/recipe-maintainers" section.')
60
61 # 4: The recipe should have some tests.
62 if 'test' not in major_sections:
63 test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',
64 'run_test.pl']
65 a_test_file_exists = (recipe_dir is not None and
66 any(os.path.exists(os.path.join(recipe_dir,
67 test_file))
68 for test_file in test_files))
69 if not a_test_file_exists:
70 lints.append('The recipe must have some tests.')
71
72 # 5: License cannot be 'unknown.'
73 license = about_section.get('license', '').lower()
74 if 'unknown' == license.strip():
75 lints.append('The recipe license cannot be unknown.')
76
77 # 6: Selectors should be in a tidy form.
78 if recipe_dir is not None and os.path.exists(meta_fname):
79 bad_selectors = []
80 # Good selectors look like ".*\s\s#\s[...]"
81 good_selectors_pat = re.compile(r'(.+?)\s{2,}#\s\[(.+)\](?(2).*)$')
82 with open(meta_fname, 'r') as fh:
83 for selector_line in selector_lines(fh):
84 if not good_selectors_pat.match(selector_line):
85 bad_selectors.append(selector_line)
86 if bad_selectors:
87 lints.append('Selectors are suggested to take a '
88 '" # [<selector>]" form.')
89
90 # 7: The build section should have a build number.
91 if build_section.get('number', None) is None:
92 lints.append('The recipe must have a `build/number` section.')
93
94 # 8: The build section should be before the run section in requirements.
95 requirements_order_sorted = sorted(requirements_section,
96 key=REQUIREMENTS_ORDER.index)
97 if requirements_section.keys() != requirements_order_sorted:
98 lints.append('The `requirements/build` section should be defined '
99 'before the `requirements/run` section.')
100
101 # 9: Files downloaded should have a hash.
102 if ('url' in source_section and
103 not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):
104 lints.append('When defining a source/url please add a sha256, sha1 '
105 'or md5 checksum (sha256 preferably).')
106
107 return lints
108
109
110 def selector_lines(lines):
111 # Using the same pattern defined in conda-build (metadata.py),
112 # we identify selectors.
113 sel_pat = re.compile(r'(.+?)\s*(#.*)?\[(.+)\](?(2).*)$')
114
115 for line in lines:
116 line = line.rstrip()
117 if line.lstrip().startswith('#'):
118 # Don't bother with comment only lines
119 continue
120 m = sel_pat.match(line)
121 if m:
122 m.group(3)
123 yield line
124
125
126 def main(recipe_dir):
127 recipe_dir = os.path.abspath(recipe_dir)
128 recipe_meta = os.path.join(recipe_dir, 'meta.yaml')
129 if not os.path.exists(recipe_dir):
130 raise IOError('Feedstock has no recipe/meta.yaml.')
131
132 env = jinja2.Environment(undefined=NullUndefined)
133
134 with open(recipe_meta, 'r') as fh:
135 content = env.from_string(''.join(fh)).render()
136 meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)
137 results = lintify(meta, recipe_dir)
138 return results
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py
--- a/conda_smithy/lint_recipe.py
+++ b/conda_smithy/lint_recipe.py
@@ -85,7 +85,7 @@
bad_selectors.append(selector_line)
if bad_selectors:
lints.append('Selectors are suggested to take a '
- '" # [<selector>]" form.')
+ '``<two spaces>#<one space>[<expression>]`` form.')
# 7: The build section should have a build number.
if build_section.get('number', None) is None:
| {"golden_diff": "diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py\n--- a/conda_smithy/lint_recipe.py\n+++ b/conda_smithy/lint_recipe.py\n@@ -85,7 +85,7 @@\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n- '\" # [<selector>]\" form.')\n+ '``<two spaces>#<one space>[<expression>]`` form.')\n \n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n", "issue": "Improve lint messages on whitespace\nThe message about the spacing for selectors isn't clear to see the problem in the number of spaces:\n\n`Selectors are suggested to take a \" # []\" form.`\n\nIt would be clearer to have a message like: `<two spaces> # <one space> [selector]`\n\n", "before_files": [{"content": "import os\nimport re\n\nimport jinja2\nimport ruamel.yaml\n\n\nEXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',\n 'test', 'app', 'about', 'extra']\n\nREQUIREMENTS_ORDER = ['build', 'run']\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return unicode(self._undefined_name)\n\n\ndef get_section(parent, name, lints):\n section = parent.get(name, {})\n if not isinstance(section, dict):\n lints.append('The \"{}\" section was expected to be a dictionary, but '\n 'got a {}.'.format(name, type(section).__name__))\n section = {}\n return section\n\n\ndef lintify(meta, recipe_dir=None):\n lints = []\n major_sections = list(meta.keys())\n\n # If the recipe_dir exists (no guarantee within this function) , we can\n # find the meta.yaml within it.\n meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')\n\n source_section = get_section(meta, 'source', lints)\n build_section = get_section(meta, 'build', lints)\n requirements_section = get_section(meta, 'requirements', lints)\n about_section = get_section(meta, 'about', lints)\n extra_section = get_section(meta, 'extra', lints)\n\n # 1: Top level meta.yaml keys should have a specific order.\n section_order_sorted = sorted(major_sections,\n key=EXPECTED_SECTION_ORDER.index)\n if major_sections != section_order_sorted:\n lints.append('The top level meta keys are in an unexpected order. 
'\n 'Expecting {}.'.format(section_order_sorted))\n\n # 2: The about section should have a home, license and summary.\n for about_item in ['home', 'license', 'summary']:\n # if the section doesn't exist, or is just empty, lint it.\n if not about_section.get(about_item, ''):\n lints.append('The {} item is expected in the about section.'\n ''.format(about_item))\n\n # 3: The recipe should have some maintainers.\n if not extra_section.get('recipe-maintainers', []):\n lints.append('The recipe could do with some maintainers listed in '\n 'the \"extra/recipe-maintainers\" section.')\n\n # 4: The recipe should have some tests.\n if 'test' not in major_sections:\n test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',\n 'run_test.pl']\n a_test_file_exists = (recipe_dir is not None and\n any(os.path.exists(os.path.join(recipe_dir,\n test_file))\n for test_file in test_files))\n if not a_test_file_exists:\n lints.append('The recipe must have some tests.')\n\n # 5: License cannot be 'unknown.'\n license = about_section.get('license', '').lower()\n if 'unknown' == license.strip():\n lints.append('The recipe license cannot be unknown.')\n\n # 6: Selectors should be in a tidy form.\n if recipe_dir is not None and os.path.exists(meta_fname):\n bad_selectors = []\n # Good selectors look like \".*\\s\\s#\\s[...]\"\n good_selectors_pat = re.compile(r'(.+?)\\s{2,}#\\s\\[(.+)\\](?(2).*)$')\n with open(meta_fname, 'r') as fh:\n for selector_line in selector_lines(fh):\n if not good_selectors_pat.match(selector_line):\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n '\" # [<selector>]\" form.')\n\n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n lints.append('The recipe must have a `build/number` section.')\n\n # 8: The build section should be before the run section in requirements.\n requirements_order_sorted = sorted(requirements_section,\n key=REQUIREMENTS_ORDER.index)\n if requirements_section.keys() != requirements_order_sorted:\n lints.append('The `requirements/build` section should be defined '\n 'before the `requirements/run` section.')\n\n # 9: Files downloaded should have a hash.\n if ('url' in source_section and\n not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):\n lints.append('When defining a source/url please add a sha256, sha1 '\n 'or md5 checksum (sha256 preferably).')\n\n return lints\n\n\ndef selector_lines(lines):\n # Using the same pattern defined in conda-build (metadata.py),\n # we identify selectors.\n sel_pat = re.compile(r'(.+?)\\s*(#.*)?\\[(.+)\\](?(2).*)$')\n\n for line in lines:\n line = line.rstrip()\n if line.lstrip().startswith('#'):\n # Don't bother with comment only lines\n continue\n m = sel_pat.match(line)\n if m:\n m.group(3)\n yield line\n\n\ndef main(recipe_dir):\n recipe_dir = os.path.abspath(recipe_dir)\n recipe_meta = os.path.join(recipe_dir, 'meta.yaml')\n if not os.path.exists(recipe_dir):\n raise IOError('Feedstock has no recipe/meta.yaml.')\n\n env = jinja2.Environment(undefined=NullUndefined)\n\n with open(recipe_meta, 'r') as fh:\n content = env.from_string(''.join(fh)).render()\n meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)\n results = lintify(meta, recipe_dir)\n return results\n", "path": "conda_smithy/lint_recipe.py"}], "after_files": [{"content": "import os\nimport re\n\nimport jinja2\nimport ruamel.yaml\n\n\nEXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',\n 'test', 'app', 'about', 
'extra']\n\nREQUIREMENTS_ORDER = ['build', 'run']\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return unicode(self._undefined_name)\n\n\ndef get_section(parent, name, lints):\n section = parent.get(name, {})\n if not isinstance(section, dict):\n lints.append('The \"{}\" section was expected to be a dictionary, but '\n 'got a {}.'.format(name, type(section).__name__))\n section = {}\n return section\n\n\ndef lintify(meta, recipe_dir=None):\n lints = []\n major_sections = list(meta.keys())\n\n # If the recipe_dir exists (no guarantee within this function) , we can\n # find the meta.yaml within it.\n meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')\n\n source_section = get_section(meta, 'source', lints)\n build_section = get_section(meta, 'build', lints)\n requirements_section = get_section(meta, 'requirements', lints)\n about_section = get_section(meta, 'about', lints)\n extra_section = get_section(meta, 'extra', lints)\n\n # 1: Top level meta.yaml keys should have a specific order.\n section_order_sorted = sorted(major_sections,\n key=EXPECTED_SECTION_ORDER.index)\n if major_sections != section_order_sorted:\n lints.append('The top level meta keys are in an unexpected order. '\n 'Expecting {}.'.format(section_order_sorted))\n\n # 2: The about section should have a home, license and summary.\n for about_item in ['home', 'license', 'summary']:\n # if the section doesn't exist, or is just empty, lint it.\n if not about_section.get(about_item, ''):\n lints.append('The {} item is expected in the about section.'\n ''.format(about_item))\n\n # 3: The recipe should have some maintainers.\n if not extra_section.get('recipe-maintainers', []):\n lints.append('The recipe could do with some maintainers listed in '\n 'the \"extra/recipe-maintainers\" section.')\n\n # 4: The recipe should have some tests.\n if 'test' not in major_sections:\n test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',\n 'run_test.pl']\n a_test_file_exists = (recipe_dir is not None and\n any(os.path.exists(os.path.join(recipe_dir,\n test_file))\n for test_file in test_files))\n if not a_test_file_exists:\n lints.append('The recipe must have some tests.')\n\n # 5: License cannot be 'unknown.'\n license = about_section.get('license', '').lower()\n if 'unknown' == license.strip():\n lints.append('The recipe license cannot be unknown.')\n\n # 6: Selectors should be in a tidy form.\n if recipe_dir is not None and os.path.exists(meta_fname):\n bad_selectors = []\n # Good selectors look like \".*\\s\\s#\\s[...]\"\n good_selectors_pat = re.compile(r'(.+?)\\s{2,}#\\s\\[(.+)\\](?(2).*)$')\n with open(meta_fname, 'r') as fh:\n for selector_line in selector_lines(fh):\n if not good_selectors_pat.match(selector_line):\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n '``<two spaces>#<one space>[<expression>]`` form.')\n\n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n lints.append('The recipe must have a `build/number` section.')\n\n # 8: The build section should be before the run section in requirements.\n requirements_order_sorted = sorted(requirements_section,\n key=REQUIREMENTS_ORDER.index)\n if requirements_section.keys() != requirements_order_sorted:\n lints.append('The `requirements/build` section should be defined '\n 'before the `requirements/run` section.')\n\n # 9: Files downloaded should have a hash.\n if ('url' in source_section and\n not ({'sha1', 'sha256', 'md5'} & 
set(source_section.keys()))):\n lints.append('When defining a source/url please add a sha256, sha1 '\n 'or md5 checksum (sha256 preferably).')\n\n return lints\n\n\ndef selector_lines(lines):\n # Using the same pattern defined in conda-build (metadata.py),\n # we identify selectors.\n sel_pat = re.compile(r'(.+?)\\s*(#.*)?\\[(.+)\\](?(2).*)$')\n\n for line in lines:\n line = line.rstrip()\n if line.lstrip().startswith('#'):\n # Don't bother with comment only lines\n continue\n m = sel_pat.match(line)\n if m:\n m.group(3)\n yield line\n\n\ndef main(recipe_dir):\n recipe_dir = os.path.abspath(recipe_dir)\n recipe_meta = os.path.join(recipe_dir, 'meta.yaml')\n if not os.path.exists(recipe_dir):\n raise IOError('Feedstock has no recipe/meta.yaml.')\n\n env = jinja2.Environment(undefined=NullUndefined)\n\n with open(recipe_meta, 'r') as fh:\n content = env.from_string(''.join(fh)).render()\n meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)\n results = lintify(meta, recipe_dir)\n return results\n", "path": "conda_smithy/lint_recipe.py"}]} | 1,881 | 143 |
gh_patches_debug_35500 | rasdani/github-patches | git_diff | mozilla__bugbug-328 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Store pipeline tasks in the index, and update Taskcluster URLs in scripts
The scripts are still using the old URLs from releng-services.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/trainer.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import argparse
4 import lzma
5 import os
6 import shutil
7 from logging import INFO, basicConfig, getLogger
8 from urllib.request import urlretrieve
9
10 from bugbug.models.component import ComponentModel
11 from bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel
12 from bugbug.models.regression import RegressionModel
13 from bugbug.models.tracking import TrackingModel
14
15 basicConfig(level=INFO)
16 logger = getLogger(__name__)
17
18 BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public"
19
20
21 class Trainer(object):
22 def decompress_file(self, path):
23 with lzma.open(f"{path}.xz", "rb") as input_f:
24 with open(path, "wb") as output_f:
25 shutil.copyfileobj(input_f, output_f)
26
27 def compress_file(self, path):
28 with open(path, "rb") as input_f:
29 with lzma.open(f"{path}.xz", "wb") as output_f:
30 shutil.copyfileobj(input_f, output_f)
31
32 def train_defect_enhancement_task(self):
33 logger.info("Training *defect vs enhancement vs task* model")
34 model = DefectEnhancementTaskModel()
35 model.train()
36 self.compress_file("defectenhancementtaskmodel")
37
38 def train_component(self):
39 logger.info("Training *component* model")
40 model = ComponentModel()
41 model.train()
42 self.compress_file("componentmodel")
43
44 def train_regression(self):
45 logger.info("Training *regression vs non-regression* model")
46 model = RegressionModel()
47 model.train()
48 self.compress_file("regressionmodel")
49
50 def train_tracking(self):
51 logger.info("Training *tracking* model")
52 model = TrackingModel()
53 model.train()
54 self.compress_file("trackingmodel")
55
56 def go(self, model):
57 # TODO: Stop hard-coding them
58 valid_models = ["defect", "component", "regression", "tracking"]
59
60 if model not in valid_models:
61 exception = (
62 f"Invalid model {model!r} name, use one of {valid_models!r} instead"
63 )
64 raise ValueError(exception)
65
66 # Download datasets that were built by bugbug_data.
67 os.makedirs("data", exist_ok=True)
68
69 # Bugs.json
70 logger.info("Downloading bugs database")
71 urlretrieve(f"{BASE_URL}/bugs.json.xz", "data/bugs.json.xz")
72 logger.info("Decompressing bugs database")
73 self.decompress_file("data/bugs.json")
74
75 # Commits.json
76 logger.info("Downloading commits database")
77 urlretrieve(f"{BASE_URL}/commits.json.xz", "data/commits.json.xz")
78 logger.info("Decompressing commits database")
79 self.decompress_file("data/commits.json")
80
81 if model == "defect":
82 # Train classifier for defect-vs-enhancement-vs-task.
83 self.train_defect_enhancement_task()
84 elif model == "component":
85 # Train classifier for the component of a bug.
86 self.train_component()
87 elif model == "regression":
88 # Train classifier for regression-vs-nonregression.
89 self.train_regression()
90 elif model == "tracking":
91 # Train classifier for tracking bugs.
92 self.train_tracking()
93 else:
94 # We shouldn't be here
95 raise Exception("valid_models is likely not up-to-date anymore")
96
97
98 def main():
99 description = "Train the models"
100 parser = argparse.ArgumentParser(description=description)
101
102 parser.add_argument("model", help="Which model to train.")
103
104 args = parser.parse_args()
105
106 retriever = Trainer()
107 retriever.go(args.model)
108
```
Path: `http_service/download_models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import logging
7 import lzma
8 import os
9 import shutil
10 from urllib.request import urlretrieve
11
12 import requests
13
14 logging.basicConfig(level=logging.INFO)
15 LOGGER = logging.getLogger()
16
17 BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public"
18
19 MODELS_NAMES = ("defectenhancementtask", "component", "regression")
20 MODELS_DIR = os.path.join(os.path.dirname(__file__), "models")
21
22
23 def retrieve_model(name):
24 os.makedirs(MODELS_DIR, exist_ok=True)
25
26 file_name = f"{name}model"
27 file_path = os.path.join(MODELS_DIR, file_name)
28
29 model_url = f"{BASE_URL}/{file_name}.xz"
30 LOGGER.info(f"Checking ETAG of {model_url}")
31 r = requests.head(model_url, allow_redirects=True)
32 r.raise_for_status()
33 new_etag = r.headers["ETag"]
34
35 try:
36 with open(f"{file_path}.etag", "r") as f:
37 old_etag = f.read()
38 except IOError:
39 old_etag = None
40
41 if old_etag != new_etag:
42 LOGGER.info(f"Downloading the model from {model_url}")
43 urlretrieve(model_url, f"{file_path}.xz")
44
45 with lzma.open(f"{file_path}.xz", "rb") as input_f:
46 with open(file_path, "wb") as output_f:
47 shutil.copyfileobj(input_f, output_f)
48 LOGGER.info(f"Written model in {file_path}")
49
50 with open(f"{file_path}.etag", "w") as f:
51 f.write(new_etag)
52 else:
53 LOGGER.info(f"ETAG for {model_url} is ok")
54
55 return file_path
56
57
58 def preload_models():
59 for model_name in MODELS_NAMES:
60 retrieve_model(model_name)
61
62
63 if __name__ == "__main__":
64 preload_models()
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/http_service/download_models.py b/http_service/download_models.py
--- a/http_service/download_models.py
+++ b/http_service/download_models.py
@@ -14,7 +14,7 @@
logging.basicConfig(level=logging.INFO)
LOGGER = logging.getLogger()
-BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public"
+BASE_URL = "https://index.taskcluster.net/v1/task/project.relman.bugbug.train_{}.latest/artifacts/public"
MODELS_NAMES = ("defectenhancementtask", "component", "regression")
MODELS_DIR = os.path.join(os.path.dirname(__file__), "models")
@@ -26,7 +26,8 @@
file_name = f"{name}model"
file_path = os.path.join(MODELS_DIR, file_name)
- model_url = f"{BASE_URL}/{file_name}.xz"
+ base_model_url = BASE_URL.format(name)
+ model_url = f"{base_model_url}/{file_name}.xz"
LOGGER.info(f"Checking ETAG of {model_url}")
r = requests.head(model_url, allow_redirects=True)
r.raise_for_status()
diff --git a/scripts/trainer.py b/scripts/trainer.py
--- a/scripts/trainer.py
+++ b/scripts/trainer.py
@@ -15,7 +15,7 @@
basicConfig(level=INFO)
logger = getLogger(__name__)
-BASE_URL = "https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public"
+BASE_URL = "https://index.taskcluster.net/v1/task/project.relman.bugbug.data_{}.latest/artifacts/public"
class Trainer(object):
@@ -68,13 +68,15 @@
# Bugs.json
logger.info("Downloading bugs database")
- urlretrieve(f"{BASE_URL}/bugs.json.xz", "data/bugs.json.xz")
+ bugs_url = BASE_URL.format("bugs")
+ urlretrieve(f"{bugs_url}/bugs.json.xz", "data/bugs.json.xz")
logger.info("Decompressing bugs database")
self.decompress_file("data/bugs.json")
# Commits.json
logger.info("Downloading commits database")
- urlretrieve(f"{BASE_URL}/commits.json.xz", "data/commits.json.xz")
+ commits_url = BASE_URL.format("commits")
+ urlretrieve(f"{commits_url}/commits.json.xz", "data/commits.json.xz")
logger.info("Decompressing commits database")
self.decompress_file("data/commits.json")
| {"golden_diff": "diff --git a/http_service/download_models.py b/http_service/download_models.py\n--- a/http_service/download_models.py\n+++ b/http_service/download_models.py\n@@ -14,7 +14,7 @@\n logging.basicConfig(level=logging.INFO)\n LOGGER = logging.getLogger()\n \n-BASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public\"\n+BASE_URL = \"https://index.taskcluster.net/v1/task/project.relman.bugbug.train_{}.latest/artifacts/public\"\n \n MODELS_NAMES = (\"defectenhancementtask\", \"component\", \"regression\")\n MODELS_DIR = os.path.join(os.path.dirname(__file__), \"models\")\n@@ -26,7 +26,8 @@\n file_name = f\"{name}model\"\n file_path = os.path.join(MODELS_DIR, file_name)\n \n- model_url = f\"{BASE_URL}/{file_name}.xz\"\n+ base_model_url = BASE_URL.format(name)\n+ model_url = f\"{base_model_url}/{file_name}.xz\"\n LOGGER.info(f\"Checking ETAG of {model_url}\")\n r = requests.head(model_url, allow_redirects=True)\n r.raise_for_status()\ndiff --git a/scripts/trainer.py b/scripts/trainer.py\n--- a/scripts/trainer.py\n+++ b/scripts/trainer.py\n@@ -15,7 +15,7 @@\n basicConfig(level=INFO)\n logger = getLogger(__name__)\n \n-BASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public\"\n+BASE_URL = \"https://index.taskcluster.net/v1/task/project.relman.bugbug.data_{}.latest/artifacts/public\"\n \n \n class Trainer(object):\n@@ -68,13 +68,15 @@\n \n # Bugs.json\n logger.info(\"Downloading bugs database\")\n- urlretrieve(f\"{BASE_URL}/bugs.json.xz\", \"data/bugs.json.xz\")\n+ bugs_url = BASE_URL.format(\"bugs\")\n+ urlretrieve(f\"{bugs_url}/bugs.json.xz\", \"data/bugs.json.xz\")\n logger.info(\"Decompressing bugs database\")\n self.decompress_file(\"data/bugs.json\")\n \n # Commits.json\n logger.info(\"Downloading commits database\")\n- urlretrieve(f\"{BASE_URL}/commits.json.xz\", \"data/commits.json.xz\")\n+ commits_url = BASE_URL.format(\"commits\")\n+ urlretrieve(f\"{commits_url}/commits.json.xz\", \"data/commits.json.xz\")\n logger.info(\"Decompressing commits database\")\n self.decompress_file(\"data/commits.json\")\n", "issue": "Store pipeline tasks in the index, and update Taskcluster URLs in scripts\nThe scripts are still using the old URLs from releng-services.\nStore pipeline tasks in the index, and update Taskcluster URLs in scripts\nThe scripts are still using the old URLs from releng-services.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport argparse\nimport lzma\nimport os\nimport shutil\nfrom logging import INFO, basicConfig, getLogger\nfrom urllib.request import urlretrieve\n\nfrom bugbug.models.component import ComponentModel\nfrom bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel\nfrom bugbug.models.regression import RegressionModel\nfrom bugbug.models.tracking import TrackingModel\n\nbasicConfig(level=INFO)\nlogger = getLogger(__name__)\n\nBASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_data.latest/artifacts/public\"\n\n\nclass Trainer(object):\n def decompress_file(self, path):\n with lzma.open(f\"{path}.xz\", \"rb\") as input_f:\n with open(path, \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def compress_file(self, path):\n with open(path, \"rb\") as input_f:\n with lzma.open(f\"{path}.xz\", \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def train_defect_enhancement_task(self):\n logger.info(\"Training *defect 
vs enhancement vs task* model\")\n model = DefectEnhancementTaskModel()\n model.train()\n self.compress_file(\"defectenhancementtaskmodel\")\n\n def train_component(self):\n logger.info(\"Training *component* model\")\n model = ComponentModel()\n model.train()\n self.compress_file(\"componentmodel\")\n\n def train_regression(self):\n logger.info(\"Training *regression vs non-regression* model\")\n model = RegressionModel()\n model.train()\n self.compress_file(\"regressionmodel\")\n\n def train_tracking(self):\n logger.info(\"Training *tracking* model\")\n model = TrackingModel()\n model.train()\n self.compress_file(\"trackingmodel\")\n\n def go(self, model):\n # TODO: Stop hard-coding them\n valid_models = [\"defect\", \"component\", \"regression\", \"tracking\"]\n\n if model not in valid_models:\n exception = (\n f\"Invalid model {model!r} name, use one of {valid_models!r} instead\"\n )\n raise ValueError(exception)\n\n # Download datasets that were built by bugbug_data.\n os.makedirs(\"data\", exist_ok=True)\n\n # Bugs.json\n logger.info(\"Downloading bugs database\")\n urlretrieve(f\"{BASE_URL}/bugs.json.xz\", \"data/bugs.json.xz\")\n logger.info(\"Decompressing bugs database\")\n self.decompress_file(\"data/bugs.json\")\n\n # Commits.json\n logger.info(\"Downloading commits database\")\n urlretrieve(f\"{BASE_URL}/commits.json.xz\", \"data/commits.json.xz\")\n logger.info(\"Decompressing commits database\")\n self.decompress_file(\"data/commits.json\")\n\n if model == \"defect\":\n # Train classifier for defect-vs-enhancement-vs-task.\n self.train_defect_enhancement_task()\n elif model == \"component\":\n # Train classifier for the component of a bug.\n self.train_component()\n elif model == \"regression\":\n # Train classifier for regression-vs-nonregression.\n self.train_regression()\n elif model == \"tracking\":\n # Train classifier for tracking bugs.\n self.train_tracking()\n else:\n # We shouldn't be here\n raise Exception(\"valid_models is likely not up-to-date anymore\")\n\n\ndef main():\n description = \"Train the models\"\n parser = argparse.ArgumentParser(description=description)\n\n parser.add_argument(\"model\", help=\"Which model to train.\")\n\n args = parser.parse_args()\n\n retriever = Trainer()\n retriever.go(args.model)\n", "path": "scripts/trainer.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nimport lzma\nimport os\nimport shutil\nfrom urllib.request import urlretrieve\n\nimport requests\n\nlogging.basicConfig(level=logging.INFO)\nLOGGER = logging.getLogger()\n\nBASE_URL = \"https://index.taskcluster.net/v1/task/project.releng.services.project.testing.bugbug_train.latest/artifacts/public\"\n\nMODELS_NAMES = (\"defectenhancementtask\", \"component\", \"regression\")\nMODELS_DIR = os.path.join(os.path.dirname(__file__), \"models\")\n\n\ndef retrieve_model(name):\n os.makedirs(MODELS_DIR, exist_ok=True)\n\n file_name = f\"{name}model\"\n file_path = os.path.join(MODELS_DIR, file_name)\n\n model_url = f\"{BASE_URL}/{file_name}.xz\"\n LOGGER.info(f\"Checking ETAG of {model_url}\")\n r = requests.head(model_url, allow_redirects=True)\n r.raise_for_status()\n new_etag = r.headers[\"ETag\"]\n\n try:\n with open(f\"{file_path}.etag\", \"r\") as f:\n old_etag = f.read()\n except IOError:\n old_etag = None\n\n if old_etag != new_etag:\n LOGGER.info(f\"Downloading the model from {model_url}\")\n urlretrieve(model_url, f\"{file_path}.xz\")\n\n with lzma.open(f\"{file_path}.xz\", \"rb\") as input_f:\n with open(file_path, \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n LOGGER.info(f\"Written model in {file_path}\")\n\n with open(f\"{file_path}.etag\", \"w\") as f:\n f.write(new_etag)\n else:\n LOGGER.info(f\"ETAG for {model_url} is ok\")\n\n return file_path\n\n\ndef preload_models():\n for model_name in MODELS_NAMES:\n retrieve_model(model_name)\n\n\nif __name__ == \"__main__\":\n preload_models()\n", "path": "http_service/download_models.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport argparse\nimport lzma\nimport os\nimport shutil\nfrom logging import INFO, basicConfig, getLogger\nfrom urllib.request import urlretrieve\n\nfrom bugbug.models.component import ComponentModel\nfrom bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel\nfrom bugbug.models.regression import RegressionModel\nfrom bugbug.models.tracking import TrackingModel\n\nbasicConfig(level=INFO)\nlogger = getLogger(__name__)\n\nBASE_URL = \"https://index.taskcluster.net/v1/task/project.relman.bugbug.data_{}.latest/artifacts/public\"\n\n\nclass Trainer(object):\n def decompress_file(self, path):\n with lzma.open(f\"{path}.xz\", \"rb\") as input_f:\n with open(path, \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def compress_file(self, path):\n with open(path, \"rb\") as input_f:\n with lzma.open(f\"{path}.xz\", \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def train_defect_enhancement_task(self):\n logger.info(\"Training *defect vs enhancement vs task* model\")\n model = DefectEnhancementTaskModel()\n model.train()\n self.compress_file(\"defectenhancementtaskmodel\")\n\n def train_component(self):\n logger.info(\"Training *component* model\")\n model = ComponentModel()\n model.train()\n self.compress_file(\"componentmodel\")\n\n def train_regression(self):\n logger.info(\"Training *regression vs non-regression* model\")\n model = RegressionModel()\n model.train()\n self.compress_file(\"regressionmodel\")\n\n def train_tracking(self):\n logger.info(\"Training *tracking* model\")\n model = TrackingModel()\n model.train()\n self.compress_file(\"trackingmodel\")\n\n def go(self, model):\n # TODO: Stop hard-coding them\n valid_models = [\"defect\", \"component\", \"regression\", \"tracking\"]\n\n if model not in 
valid_models:\n exception = (\n f\"Invalid model {model!r} name, use one of {valid_models!r} instead\"\n )\n raise ValueError(exception)\n\n # Download datasets that were built by bugbug_data.\n os.makedirs(\"data\", exist_ok=True)\n\n # Bugs.json\n logger.info(\"Downloading bugs database\")\n bugs_url = BASE_URL.format(\"bugs\")\n urlretrieve(f\"{bugs_url}/bugs.json.xz\", \"data/bugs.json.xz\")\n logger.info(\"Decompressing bugs database\")\n self.decompress_file(\"data/bugs.json\")\n\n # Commits.json\n logger.info(\"Downloading commits database\")\n commits_url = BASE_URL.format(\"commits\")\n urlretrieve(f\"{commits_url}/commits.json.xz\", \"data/commits.json.xz\")\n logger.info(\"Decompressing commits database\")\n self.decompress_file(\"data/commits.json\")\n\n if model == \"defect\":\n # Train classifier for defect-vs-enhancement-vs-task.\n self.train_defect_enhancement_task()\n elif model == \"component\":\n # Train classifier for the component of a bug.\n self.train_component()\n elif model == \"regression\":\n # Train classifier for regression-vs-nonregression.\n self.train_regression()\n elif model == \"tracking\":\n # Train classifier for tracking bugs.\n self.train_tracking()\n else:\n # We shouldn't be here\n raise Exception(\"valid_models is likely not up-to-date anymore\")\n\n\ndef main():\n description = \"Train the models\"\n parser = argparse.ArgumentParser(description=description)\n\n parser.add_argument(\"model\", help=\"Which model to train.\")\n\n args = parser.parse_args()\n\n retriever = Trainer()\n retriever.go(args.model)\n", "path": "scripts/trainer.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nimport lzma\nimport os\nimport shutil\nfrom urllib.request import urlretrieve\n\nimport requests\n\nlogging.basicConfig(level=logging.INFO)\nLOGGER = logging.getLogger()\n\nBASE_URL = \"https://index.taskcluster.net/v1/task/project.relman.bugbug.train_{}.latest/artifacts/public\"\n\nMODELS_NAMES = (\"defectenhancementtask\", \"component\", \"regression\")\nMODELS_DIR = os.path.join(os.path.dirname(__file__), \"models\")\n\n\ndef retrieve_model(name):\n os.makedirs(MODELS_DIR, exist_ok=True)\n\n file_name = f\"{name}model\"\n file_path = os.path.join(MODELS_DIR, file_name)\n\n base_model_url = BASE_URL.format(name)\n model_url = f\"{base_model_url}/{file_name}.xz\"\n LOGGER.info(f\"Checking ETAG of {model_url}\")\n r = requests.head(model_url, allow_redirects=True)\n r.raise_for_status()\n new_etag = r.headers[\"ETag\"]\n\n try:\n with open(f\"{file_path}.etag\", \"r\") as f:\n old_etag = f.read()\n except IOError:\n old_etag = None\n\n if old_etag != new_etag:\n LOGGER.info(f\"Downloading the model from {model_url}\")\n urlretrieve(model_url, f\"{file_path}.xz\")\n\n with lzma.open(f\"{file_path}.xz\", \"rb\") as input_f:\n with open(file_path, \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n LOGGER.info(f\"Written model in {file_path}\")\n\n with open(f\"{file_path}.etag\", \"w\") as f:\n f.write(new_etag)\n else:\n LOGGER.info(f\"ETAG for {model_url} is ok\")\n\n return file_path\n\n\ndef preload_models():\n for model_name in MODELS_NAMES:\n retrieve_model(model_name)\n\n\nif __name__ == \"__main__\":\n preload_models()\n", "path": "http_service/download_models.py"}]} | 1,949 | 579 |
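The golden diff in the record above replaces the single releng-services index route with a per-artifact Taskcluster index namespace that is filled in with `str.format`. The snippet below is a self-contained sketch of that pattern, not code from the bugbug repository: the namespace string is copied from the diff, while the function name and the commented usage lines are illustrative assumptions.

```python
# Minimal sketch (not bugbug code) of resolving a per-dataset Taskcluster
# index URL from a template, as the patched scripts/trainer.py does.
from urllib.request import urlretrieve

BASE_URL = (
    "https://index.taskcluster.net/v1/task/"
    "project.relman.bugbug.data_{}.latest/artifacts/public"
)

def download_dataset(name: str, dest: str) -> None:
    """Fetch one compressed dataset artifact, e.g. name='bugs' or 'commits'."""
    base = BASE_URL.format(name)  # -> ...bugbug.data_bugs.latest/artifacts/public
    urlretrieve(f"{base}/{name}.json.xz", dest)

# Hypothetical usage mirroring the patched trainer:
# download_dataset("bugs", "data/bugs.json.xz")
# download_dataset("commits", "data/commits.json.xz")
```

Formatting the index namespace itself (rather than only the file name) is what lets each dataset and trained model live under its own pipeline task in the index, which is what the issue asks for.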
gh_patches_debug_1302 | rasdani/github-patches | git_diff | pyca__cryptography-3803 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Signer/Verifier deprecation warning has wrong stacklevel
Seeing this with Cryptography 2.0:
```
.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:477: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
_warn_sign_verify_deprecated()
.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:382: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
_warn_sign_verify_deprecated()
```
I see a few open issues related to deprecations (e.g. #3794), but I'm not sure if any of them cover this particular message.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/backends/openssl/utils.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import warnings
8
9 from cryptography import utils
10 from cryptography.hazmat.primitives import hashes
11 from cryptography.hazmat.primitives.asymmetric.utils import Prehashed
12
13
14 def _calculate_digest_and_algorithm(backend, data, algorithm):
15 if not isinstance(algorithm, Prehashed):
16 hash_ctx = hashes.Hash(algorithm, backend)
17 hash_ctx.update(data)
18 data = hash_ctx.finalize()
19 else:
20 algorithm = algorithm._algorithm
21
22 if len(data) != algorithm.digest_size:
23 raise ValueError(
24 "The provided data must be the same length as the hash "
25 "algorithm's digest size."
26 )
27
28 return (data, algorithm)
29
30
31 def _check_not_prehashed(signature_algorithm):
32 if isinstance(signature_algorithm, Prehashed):
33 raise TypeError(
34 "Prehashed is only supported in the sign and verify methods. "
35 "It cannot be used with signer or verifier."
36 )
37
38
39 def _warn_sign_verify_deprecated():
40 warnings.warn(
41 "signer and verifier have been deprecated. Please use sign "
42 "and verify instead.",
43 utils.PersistentlyDeprecated,
44 stacklevel=2
45 )
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py
--- a/src/cryptography/hazmat/backends/openssl/utils.py
+++ b/src/cryptography/hazmat/backends/openssl/utils.py
@@ -41,5 +41,5 @@
"signer and verifier have been deprecated. Please use sign "
"and verify instead.",
utils.PersistentlyDeprecated,
- stacklevel=2
+ stacklevel=3
)
| {"golden_diff": "diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py\n--- a/src/cryptography/hazmat/backends/openssl/utils.py\n+++ b/src/cryptography/hazmat/backends/openssl/utils.py\n@@ -41,5 +41,5 @@\n \"signer and verifier have been deprecated. Please use sign \"\n \"and verify instead.\",\n utils.PersistentlyDeprecated,\n- stacklevel=2\n+ stacklevel=3\n )\n", "issue": "Signer/Verifier deprecation warning has wrong stacklevel\nSeeing this with Cryptography 2.0:\r\n\r\n```\r\n.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:477: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.\r\n _warn_sign_verify_deprecated()\r\n.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:382: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.\r\n _warn_sign_verify_deprecated()\r\n```\r\n\r\nI see a few open issues related to deprecations (e.g. #3794), but I'm not sure if any of them cover this particular message.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.utils import Prehashed\n\n\ndef _calculate_digest_and_algorithm(backend, data, algorithm):\n if not isinstance(algorithm, Prehashed):\n hash_ctx = hashes.Hash(algorithm, backend)\n hash_ctx.update(data)\n data = hash_ctx.finalize()\n else:\n algorithm = algorithm._algorithm\n\n if len(data) != algorithm.digest_size:\n raise ValueError(\n \"The provided data must be the same length as the hash \"\n \"algorithm's digest size.\"\n )\n\n return (data, algorithm)\n\n\ndef _check_not_prehashed(signature_algorithm):\n if isinstance(signature_algorithm, Prehashed):\n raise TypeError(\n \"Prehashed is only supported in the sign and verify methods. \"\n \"It cannot be used with signer or verifier.\"\n )\n\n\ndef _warn_sign_verify_deprecated():\n warnings.warn(\n \"signer and verifier have been deprecated. Please use sign \"\n \"and verify instead.\",\n utils.PersistentlyDeprecated,\n stacklevel=2\n )\n", "path": "src/cryptography/hazmat/backends/openssl/utils.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.utils import Prehashed\n\n\ndef _calculate_digest_and_algorithm(backend, data, algorithm):\n if not isinstance(algorithm, Prehashed):\n hash_ctx = hashes.Hash(algorithm, backend)\n hash_ctx.update(data)\n data = hash_ctx.finalize()\n else:\n algorithm = algorithm._algorithm\n\n if len(data) != algorithm.digest_size:\n raise ValueError(\n \"The provided data must be the same length as the hash \"\n \"algorithm's digest size.\"\n )\n\n return (data, algorithm)\n\n\ndef _check_not_prehashed(signature_algorithm):\n if isinstance(signature_algorithm, Prehashed):\n raise TypeError(\n \"Prehashed is only supported in the sign and verify methods. \"\n \"It cannot be used with signer or verifier.\"\n )\n\n\ndef _warn_sign_verify_deprecated():\n warnings.warn(\n \"signer and verifier have been deprecated. Please use sign \"\n \"and verify instead.\",\n utils.PersistentlyDeprecated,\n stacklevel=3\n )\n", "path": "src/cryptography/hazmat/backends/openssl/utils.py"}]} | 812 | 115 |
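The one-line fix in the record above bumps `stacklevel` from 2 to 3 because `_warn_sign_verify_deprecated()` is a helper called from inside the deprecated `signer`/`verifier` methods, so one extra frame must be skipped for the warning to point at user code instead of the library's own source. The standalone sketch below only illustrates how `stacklevel` selects the reported frame; the function names are invented and nothing here is cryptography's actual API.

```python
# Illustration of warnings.warn(..., stacklevel=N): N counts frames upward
# from the warn() call, so a helper one level below the deprecated method
# needs stacklevel=3 for the warning to be attributed to the caller's code.
import warnings

def _warn_deprecated():
    warnings.warn("signer is deprecated", DeprecationWarning, stacklevel=3)

def signer():           # stand-in for the deprecated public method
    _warn_deprecated()  # helper adds one extra frame, hence 3 instead of 2

if __name__ == "__main__":
    warnings.simplefilter("always")
    signer()  # with stacklevel=3 the warning is reported at this line
```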
gh_patches_debug_15389 | rasdani/github-patches | git_diff | weecology__retriever-381 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bioclim data missing hdr files when downloaded to path
retriever download Bioclim -p
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/bioclim_2pt5.py`
Content:
```
1 #retriever
2
3 """Retriever script for direct download of Bioclim data"""
4
5 from retriever.lib.templates import Script
6
7
8 class main(Script):
9 def __init__(self, **kwargs):
10 Script.__init__(self, **kwargs)
11 self.name = "Bioclim 2.5 Minute Climate Data"
12 self.shortname = "Bioclim"
13 self.ref = "http://worldclim.org/bioclim"
14 self.urls = {"climate": "http://biogeo.ucdavis.edu/data/climate/worldclim/1_4/grid/cur/bio_2-5m_bil.zip"}
15 self.description = "Bioclimatic variables that are derived from the monthly temperature and rainfall values in order to generate more biologically meaningful variables."
16 self.citation = "Hijmans, R.J., S.E. Cameron, J.L. Parra, P.G. Jones and A. Jarvis, 2005. Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology 25: 1965-1978."
17 self.tags = ["Data Type > Compilation"]
18
19 def download(self, engine=None, debug=False):
20 if engine.name != "Download Only":
21 raise Exception("The Bioclim dataset contains only non-tabular data files, and can only be used with the 'download only' engine.")
22 Script.download(self, engine, debug)
23 file_names = ["bio%s.bil" % file_num for file_num in range(1, 20)]
24 self.engine.download_files_from_archive(self.urls["climate"], file_names)
25 self.engine.register_files(file_names)
26
27 SCRIPT = main()
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/bioclim_2pt5.py b/scripts/bioclim_2pt5.py
--- a/scripts/bioclim_2pt5.py
+++ b/scripts/bioclim_2pt5.py
@@ -20,8 +20,12 @@
if engine.name != "Download Only":
raise Exception("The Bioclim dataset contains only non-tabular data files, and can only be used with the 'download only' engine.")
Script.download(self, engine, debug)
- file_names = ["bio%s.bil" % file_num for file_num in range(1, 20)]
+ file_names = []
+ for file_num in range(1, 20):
+ for ext in (['bil', 'hdr']):
+ file_names += ["bio{0}.{1}".format(file_num, ext)]
self.engine.download_files_from_archive(self.urls["climate"], file_names)
self.engine.register_files(file_names)
SCRIPT = main()
+
| {"golden_diff": "diff --git a/scripts/bioclim_2pt5.py b/scripts/bioclim_2pt5.py\n--- a/scripts/bioclim_2pt5.py\n+++ b/scripts/bioclim_2pt5.py\n@@ -20,8 +20,12 @@\n if engine.name != \"Download Only\":\n raise Exception(\"The Bioclim dataset contains only non-tabular data files, and can only be used with the 'download only' engine.\")\n Script.download(self, engine, debug)\n- file_names = [\"bio%s.bil\" % file_num for file_num in range(1, 20)]\n+ file_names = []\n+ for file_num in range(1, 20):\n+ for ext in (['bil', 'hdr']):\n+ file_names += [\"bio{0}.{1}\".format(file_num, ext)]\n self.engine.download_files_from_archive(self.urls[\"climate\"], file_names)\n self.engine.register_files(file_names)\n \n SCRIPT = main()\n+\n", "issue": "Bioclim data missing hdr files when downloaded to path\n retriever download Bioclim -p\n\n", "before_files": [{"content": "#retriever\n\n\"\"\"Retriever script for direct download of Bioclim data\"\"\"\n\nfrom retriever.lib.templates import Script\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.name = \"Bioclim 2.5 Minute Climate Data\"\n self.shortname = \"Bioclim\"\n self.ref = \"http://worldclim.org/bioclim\"\n self.urls = {\"climate\": \"http://biogeo.ucdavis.edu/data/climate/worldclim/1_4/grid/cur/bio_2-5m_bil.zip\"}\n self.description = \"Bioclimatic variables that are derived from the monthly temperature and rainfall values in order to generate more biologically meaningful variables.\"\n self.citation = \"Hijmans, R.J., S.E. Cameron, J.L. Parra, P.G. Jones and A. Jarvis, 2005. Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology 25: 1965-1978.\"\n self.tags = [\"Data Type > Compilation\"]\n \n def download(self, engine=None, debug=False):\n if engine.name != \"Download Only\":\n raise Exception(\"The Bioclim dataset contains only non-tabular data files, and can only be used with the 'download only' engine.\")\n Script.download(self, engine, debug)\n file_names = [\"bio%s.bil\" % file_num for file_num in range(1, 20)]\n self.engine.download_files_from_archive(self.urls[\"climate\"], file_names)\n self.engine.register_files(file_names)\n\nSCRIPT = main()\n", "path": "scripts/bioclim_2pt5.py"}], "after_files": [{"content": "#retriever\n\n\"\"\"Retriever script for direct download of Bioclim data\"\"\"\n\nfrom retriever.lib.templates import Script\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.name = \"Bioclim 2.5 Minute Climate Data\"\n self.shortname = \"Bioclim\"\n self.ref = \"http://worldclim.org/bioclim\"\n self.urls = {\"climate\": \"http://biogeo.ucdavis.edu/data/climate/worldclim/1_4/grid/cur/bio_2-5m_bil.zip\"}\n self.description = \"Bioclimatic variables that are derived from the monthly temperature and rainfall values in order to generate more biologically meaningful variables.\"\n self.citation = \"Hijmans, R.J., S.E. Cameron, J.L. Parra, P.G. Jones and A. Jarvis, 2005. Very high resolution interpolated climate surfaces for global land areas. 
International Journal of Climatology 25: 1965-1978.\"\n self.tags = [\"Data Type > Compilation\"]\n \n def download(self, engine=None, debug=False):\n if engine.name != \"Download Only\":\n raise Exception(\"The Bioclim dataset contains only non-tabular data files, and can only be used with the 'download only' engine.\")\n Script.download(self, engine, debug)\n file_names = []\n for file_num in range(1, 20):\n for ext in (['bil', 'hdr']):\n file_names += [\"bio{0}.{1}\".format(file_num, ext)]\n self.engine.download_files_from_archive(self.urls[\"climate\"], file_names)\n self.engine.register_files(file_names)\n\nSCRIPT = main()\n\n", "path": "scripts/bioclim_2pt5.py"}]} | 692 | 214 |
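The patch above widens the list of files pulled out of the WorldClim archive so that each `bioN.bil` raster is accompanied by its `bioN.hdr` header, which is what the "download only" engine was missing. A compact equivalent of the patched loop is sketched below; the 1–19 range and the two extensions come straight from the diff, and the `print` line is only there to show the resulting order.

```python
# Sketch of the patched file-name expansion in scripts/bioclim_2pt5.py:
# every ESRI BIL raster needs its matching .hdr header to be usable.
file_names = [
    "bio{0}.{1}".format(num, ext)
    for num in range(1, 20)       # bio1 .. bio19
    for ext in ("bil", "hdr")
]
print(file_names[:4])  # ['bio1.bil', 'bio1.hdr', 'bio2.bil', 'bio2.hdr']
```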
gh_patches_debug_64312 | rasdani/github-patches | git_diff | pex-tool__pex-1932 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.108
On the docket:
+ [x] Fix slow PEX boot time when there are many extras. #1929
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.107"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.107"
+__version__ = "2.1.108"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.107\"\n+__version__ = \"2.1.108\"\n", "issue": "Release 2.1.108\nOn the docket:\r\n+ [x] Fix slow PEX boot time when there are many extras. #1929\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.107\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.108\"\n", "path": "pex/version.py"}]} | 345 | 98 |
gh_patches_debug_35624 | rasdani/github-patches | git_diff | ResonantGeoData__ResonantGeoData-756 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 error on FMV Detail View page
I noticed this bug when trying to use the FMV module on Danesfield. I tested it on a fresh RGD instance and it still occurred.
To reproduce -
- Run the `rgd_fmv_wasabi` management command to populate your DB with FMV data
- Attempt to navigate to `http://localhost:8000/rgd_fmv/<:id>/` and it will return a 500 error with the error `string indices must be integers
`.
500 error on `/rgd_fmv/{spatial_id}` endpoint
I noticed this bug when trying to use the FMV module on Danesfield. I tested it on a fresh RGD instance and it still occurred.
To reproduce -
- Run the `rgd_fmv_wasabi` management command to populate your DB with FMV data
- Hit the `/rgd_fmv/{spatial_id}` API endpoint and a 500 will be returned with an error ```AttributeError at /api/rgd_fmv/1
Got AttributeError when attempting to get a value for field `file` on serializer `FMVSerializer`.
The serializer field might be named incorrectly and not match any attribute or key on the `FMVMeta` instance.
Original exception text was: 'FMVMeta' object has no attribute 'file'.```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django-rgd-fmv/rgd_fmv/rest/viewsets.py`
Content:
```
1 from rest_framework.decorators import action
2 from rgd.rest.base import ModelViewSet
3 from rgd_fmv import models, serializers
4
5
6 class FMVViewSet(ModelViewSet):
7 queryset = models.FMVMeta.objects.all()
8
9 def get_serializer_class(self):
10 if self.action in ['get', 'list']:
11 return serializers.FMVMetaSerializer
12 return serializers.FMVSerializer
13
14 @action(detail=True, serializer_class=serializers.FMVMetaDataSerializer)
15 def data(self, request, *args, **kwargs):
16 return self.retrieve(request, *args, **kwargs)
17
```
Path: `django-rgd-fmv/rgd_fmv/views.py`
Content:
```
1 import json
2
3 from rgd.views import SpatialDetailView
4
5 from . import models
6
7
8 class FMVMetaDetailView(SpatialDetailView):
9 model = models.FMVMeta
10
11 def get_context_data(self, *args, **kwargs):
12 context = super().get_context_data(*args, **kwargs)
13 context['frame_rate'] = json.dumps(self.object.fmv_file.frame_rate)
14 extents = context['extents']
15 if self.object.ground_union is not None:
16 # All or none of these will be set, only check one
17 extents['collect'] = self.object.ground_union.json
18 extents['ground_frames'] = self.object.ground_frames.json
19 extents['frame_numbers'] = self.object._blob_to_array(self.object.frame_numbers)
20 return context
21
```
Path: `django-rgd-fmv/rgd_fmv/urls.py`
Content:
```
1 from django.urls import path
2 from rest_framework.routers import SimpleRouter
3 from rgd_fmv import models, views
4 from rgd_fmv.rest import viewsets
5
6 router = SimpleRouter(trailing_slash=False)
7 router.register(r'api/rgd_fmv', viewsets.FMVViewSet)
8
9 urlpatterns = [
10 # Pages
11 path(
12 'rgd_fmv/<int:pk>/',
13 views.FMVMetaDetailView.as_view(),
14 name=models.FMVMeta.detail_view_name,
15 ),
16 ] + router.urls
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django-rgd-fmv/rgd_fmv/rest/viewsets.py b/django-rgd-fmv/rgd_fmv/rest/viewsets.py
--- a/django-rgd-fmv/rgd_fmv/rest/viewsets.py
+++ b/django-rgd-fmv/rgd_fmv/rest/viewsets.py
@@ -1,16 +1,17 @@
from rest_framework.decorators import action
-from rgd.rest.base import ModelViewSet
+from rgd.rest.base import ModelViewSet, ReadOnlyModelViewSet
from rgd_fmv import models, serializers
-class FMVViewSet(ModelViewSet):
+class FMVMetaViewSet(ReadOnlyModelViewSet):
queryset = models.FMVMeta.objects.all()
-
- def get_serializer_class(self):
- if self.action in ['get', 'list']:
- return serializers.FMVMetaSerializer
- return serializers.FMVSerializer
+ serializer_class = serializers.FMVMetaSerializer
@action(detail=True, serializer_class=serializers.FMVMetaDataSerializer)
def data(self, request, *args, **kwargs):
return self.retrieve(request, *args, **kwargs)
+
+
+class FMVViewSet(ModelViewSet):
+ queryset = models.FMV.objects.all()
+ serializer_class = serializers.FMVSerializer
diff --git a/django-rgd-fmv/rgd_fmv/urls.py b/django-rgd-fmv/rgd_fmv/urls.py
--- a/django-rgd-fmv/rgd_fmv/urls.py
+++ b/django-rgd-fmv/rgd_fmv/urls.py
@@ -4,7 +4,8 @@
from rgd_fmv.rest import viewsets
router = SimpleRouter(trailing_slash=False)
-router.register(r'api/rgd_fmv', viewsets.FMVViewSet)
+router.register(r'api/rgd_fmv', viewsets.FMVMetaViewSet, basename='fmv-meta')
+router.register(r'api/rgd_fmv/model', viewsets.FMVViewSet, basename='fmv')
urlpatterns = [
# Pages
diff --git a/django-rgd-fmv/rgd_fmv/views.py b/django-rgd-fmv/rgd_fmv/views.py
--- a/django-rgd-fmv/rgd_fmv/views.py
+++ b/django-rgd-fmv/rgd_fmv/views.py
@@ -11,10 +11,11 @@
def get_context_data(self, *args, **kwargs):
context = super().get_context_data(*args, **kwargs)
context['frame_rate'] = json.dumps(self.object.fmv_file.frame_rate)
- extents = context['extents']
+ extents = json.loads(context['extents'])
if self.object.ground_union is not None:
# All or none of these will be set, only check one
extents['collect'] = self.object.ground_union.json
extents['ground_frames'] = self.object.ground_frames.json
extents['frame_numbers'] = self.object._blob_to_array(self.object.frame_numbers)
+ context['extents'] = json.dumps(extents)
return context
| {"golden_diff": "diff --git a/django-rgd-fmv/rgd_fmv/rest/viewsets.py b/django-rgd-fmv/rgd_fmv/rest/viewsets.py\n--- a/django-rgd-fmv/rgd_fmv/rest/viewsets.py\n+++ b/django-rgd-fmv/rgd_fmv/rest/viewsets.py\n@@ -1,16 +1,17 @@\n from rest_framework.decorators import action\n-from rgd.rest.base import ModelViewSet\n+from rgd.rest.base import ModelViewSet, ReadOnlyModelViewSet\n from rgd_fmv import models, serializers\n \n \n-class FMVViewSet(ModelViewSet):\n+class FMVMetaViewSet(ReadOnlyModelViewSet):\n queryset = models.FMVMeta.objects.all()\n-\n- def get_serializer_class(self):\n- if self.action in ['get', 'list']:\n- return serializers.FMVMetaSerializer\n- return serializers.FMVSerializer\n+ serializer_class = serializers.FMVMetaSerializer\n \n @action(detail=True, serializer_class=serializers.FMVMetaDataSerializer)\n def data(self, request, *args, **kwargs):\n return self.retrieve(request, *args, **kwargs)\n+\n+\n+class FMVViewSet(ModelViewSet):\n+ queryset = models.FMV.objects.all()\n+ serializer_class = serializers.FMVSerializer\ndiff --git a/django-rgd-fmv/rgd_fmv/urls.py b/django-rgd-fmv/rgd_fmv/urls.py\n--- a/django-rgd-fmv/rgd_fmv/urls.py\n+++ b/django-rgd-fmv/rgd_fmv/urls.py\n@@ -4,7 +4,8 @@\n from rgd_fmv.rest import viewsets\n \n router = SimpleRouter(trailing_slash=False)\n-router.register(r'api/rgd_fmv', viewsets.FMVViewSet)\n+router.register(r'api/rgd_fmv', viewsets.FMVMetaViewSet, basename='fmv-meta')\n+router.register(r'api/rgd_fmv/model', viewsets.FMVViewSet, basename='fmv')\n \n urlpatterns = [\n # Pages\ndiff --git a/django-rgd-fmv/rgd_fmv/views.py b/django-rgd-fmv/rgd_fmv/views.py\n--- a/django-rgd-fmv/rgd_fmv/views.py\n+++ b/django-rgd-fmv/rgd_fmv/views.py\n@@ -11,10 +11,11 @@\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context['frame_rate'] = json.dumps(self.object.fmv_file.frame_rate)\n- extents = context['extents']\n+ extents = json.loads(context['extents'])\n if self.object.ground_union is not None:\n # All or none of these will be set, only check one\n extents['collect'] = self.object.ground_union.json\n extents['ground_frames'] = self.object.ground_frames.json\n extents['frame_numbers'] = self.object._blob_to_array(self.object.frame_numbers)\n+ context['extents'] = json.dumps(extents)\n return context\n", "issue": "500 error on FMV Detail View page\nI noticed this bug when trying to use the FMV module on Danesfield. I tested it on a fresh RGD instance and it still occured.\r\n\r\nTo reproduce -\r\n- Run the `rgd_fmv_wasabi` management command to populate your DB with FMV data\r\n- Attempt to navigate to `http://localhost:8000/rgd_fmv/<:id>/` and it will return a 500 error with the error `string indices must be integers\r\n`.\n500 error on `/rgd_fmv/{spatial_id}` endpoint\nI noticed this bug when trying to use the FMV module on Danesfield. 
I tested it on a fresh RGD instance and it still occurred.\r\n\r\nTo reproduce -\r\n\r\n- Run the `rgd_fmv_wasabi` management command to populate your DB with FMV data\r\n- Hit the `/rgd_fmv/{spatial_id}` API endpoint and a 500 will be returned with an error ```AttributeError at /api/rgd_fmv/1\r\nGot AttributeError when attempting to get a value for field `file` on serializer `FMVSerializer`.\r\nThe serializer field might be named incorrectly and not match any attribute or key on the `FMVMeta` instance.\r\nOriginal exception text was: 'FMVMeta' object has no attribute 'file'.```\n", "before_files": [{"content": "from rest_framework.decorators import action\nfrom rgd.rest.base import ModelViewSet\nfrom rgd_fmv import models, serializers\n\n\nclass FMVViewSet(ModelViewSet):\n queryset = models.FMVMeta.objects.all()\n\n def get_serializer_class(self):\n if self.action in ['get', 'list']:\n return serializers.FMVMetaSerializer\n return serializers.FMVSerializer\n\n @action(detail=True, serializer_class=serializers.FMVMetaDataSerializer)\n def data(self, request, *args, **kwargs):\n return self.retrieve(request, *args, **kwargs)\n", "path": "django-rgd-fmv/rgd_fmv/rest/viewsets.py"}, {"content": "import json\n\nfrom rgd.views import SpatialDetailView\n\nfrom . import models\n\n\nclass FMVMetaDetailView(SpatialDetailView):\n model = models.FMVMeta\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context['frame_rate'] = json.dumps(self.object.fmv_file.frame_rate)\n extents = context['extents']\n if self.object.ground_union is not None:\n # All or none of these will be set, only check one\n extents['collect'] = self.object.ground_union.json\n extents['ground_frames'] = self.object.ground_frames.json\n extents['frame_numbers'] = self.object._blob_to_array(self.object.frame_numbers)\n return context\n", "path": "django-rgd-fmv/rgd_fmv/views.py"}, {"content": "from django.urls import path\nfrom rest_framework.routers import SimpleRouter\nfrom rgd_fmv import models, views\nfrom rgd_fmv.rest import viewsets\n\nrouter = SimpleRouter(trailing_slash=False)\nrouter.register(r'api/rgd_fmv', viewsets.FMVViewSet)\n\nurlpatterns = [\n # Pages\n path(\n 'rgd_fmv/<int:pk>/',\n views.FMVMetaDetailView.as_view(),\n name=models.FMVMeta.detail_view_name,\n ),\n] + router.urls\n", "path": "django-rgd-fmv/rgd_fmv/urls.py"}], "after_files": [{"content": "from rest_framework.decorators import action\nfrom rgd.rest.base import ModelViewSet, ReadOnlyModelViewSet\nfrom rgd_fmv import models, serializers\n\n\nclass FMVMetaViewSet(ReadOnlyModelViewSet):\n queryset = models.FMVMeta.objects.all()\n serializer_class = serializers.FMVMetaSerializer\n\n @action(detail=True, serializer_class=serializers.FMVMetaDataSerializer)\n def data(self, request, *args, **kwargs):\n return self.retrieve(request, *args, **kwargs)\n\n\nclass FMVViewSet(ModelViewSet):\n queryset = models.FMV.objects.all()\n serializer_class = serializers.FMVSerializer\n", "path": "django-rgd-fmv/rgd_fmv/rest/viewsets.py"}, {"content": "import json\n\nfrom rgd.views import SpatialDetailView\n\nfrom . 
import models\n\n\nclass FMVMetaDetailView(SpatialDetailView):\n model = models.FMVMeta\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context['frame_rate'] = json.dumps(self.object.fmv_file.frame_rate)\n extents = json.loads(context['extents'])\n if self.object.ground_union is not None:\n # All or none of these will be set, only check one\n extents['collect'] = self.object.ground_union.json\n extents['ground_frames'] = self.object.ground_frames.json\n extents['frame_numbers'] = self.object._blob_to_array(self.object.frame_numbers)\n context['extents'] = json.dumps(extents)\n return context\n", "path": "django-rgd-fmv/rgd_fmv/views.py"}, {"content": "from django.urls import path\nfrom rest_framework.routers import SimpleRouter\nfrom rgd_fmv import models, views\nfrom rgd_fmv.rest import viewsets\n\nrouter = SimpleRouter(trailing_slash=False)\nrouter.register(r'api/rgd_fmv', viewsets.FMVMetaViewSet, basename='fmv-meta')\nrouter.register(r'api/rgd_fmv/model', viewsets.FMVViewSet, basename='fmv')\n\nurlpatterns = [\n # Pages\n path(\n 'rgd_fmv/<int:pk>/',\n views.FMVMetaDetailView.as_view(),\n name=models.FMVMeta.detail_view_name,\n ),\n] + router.urls\n", "path": "django-rgd-fmv/rgd_fmv/urls.py"}]} | 1,096 | 693 |
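Two separate fixes are bundled in the diff above: the REST layer gains a read-only `FMVMetaViewSet` alongside a writable `FMVViewSet`, so the serializer no longer looks for a `file` attribute on `FMVMeta`, and the detail view decodes `context['extents']` before adding keys to it. The sketch below reproduces only the second point, since that is where the `string indices must be integers` error comes from; the `extents` payload and the GeoJSON string are fabricated for illustration.

```python
# Why the detail view 500'd: context['extents'] is a JSON *string*, so
# indexing it with a key raises TypeError. The patch decodes it first and
# re-encodes it afterwards (keys follow the golden diff; values are made up).
import json

context = {"extents": json.dumps({"xmin": 0.0, "xmax": 1.0})}

try:
    context["extents"]["collect"]            # what the buggy view effectively did
except TypeError as exc:
    print(exc)                               # -> string indices must be integers

extents = json.loads(context["extents"])     # decode first
extents["collect"] = '{"type": "Polygon"}'   # hypothetical GeoJSON payload
context["extents"] = json.dumps(extents)     # re-encode for the template
print(context["extents"])
```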
gh_patches_debug_7468 | rasdani/github-patches | git_diff | Kinto__kinto-1661 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OpenID payload cache uses the wrong cache key
The `cache_key` becomes the same for every access token due to this bug: https://github.com/Kinto/kinto/blob/e1e0d6be0024418fd100210901f9d2ca06344fe1/kinto/plugins/openid/__init__.py#L51
No matter what the `hmac_tokens` variable is the `cache_key` always becomes `'openid:verify:%s'`.
OpenID payload cache uses the wrong cache key
The `cache_key` becomes the same for every access token due to this bug: https://github.com/Kinto/kinto/blob/e1e0d6be0024418fd100210901f9d2ca06344fe1/kinto/plugins/openid/__init__.py#L51
No matter what the `hmac_tokens` variable is the `cache_key` always becomes `'openid:verify:%s'`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/plugins/openid/__init__.py`
Content:
```
1 import re
2
3 import requests
4 from pyramid import authentication as base_auth
5 from pyramid.interfaces import IAuthenticationPolicy
6 from zope.interface import implementer
7
8 from kinto.core import logger
9 from kinto.core import utils as core_utils
10 from kinto.core.openapi import OpenAPI
11
12 from .utils import fetch_openid_config
13
14
15 @implementer(IAuthenticationPolicy)
16 class OpenIDConnectPolicy(base_auth.CallbackAuthenticationPolicy):
17 def __init__(self, issuer, client_id, realm='Realm', **kwargs):
18 self.realm = realm
19 self.issuer = issuer
20 self.client_id = client_id
21 self.client_secret = kwargs.get('client_secret', '')
22 self.header_type = kwargs.get('header_type', 'Bearer')
23 self.userid_field = kwargs.get('userid_field', 'sub')
24 self.verification_ttl = int(kwargs.get('verification_ttl_seconds', 86400))
25
26 # Fetch OpenID config (at instantiation, ie. startup)
27 self.oid_config = fetch_openid_config(issuer)
28
29 self._jwt_keys = None
30
31 def unauthenticated_userid(self, request):
32 """Return the userid or ``None`` if token could not be verified.
33 """
34 settings = request.registry.settings
35 hmac_secret = settings['userid_hmac_secret']
36
37 authorization = request.headers.get('Authorization', '')
38 try:
39 authmeth, access_token = authorization.split(' ', 1)
40 except ValueError:
41 return None
42
43 if authmeth.lower() != self.header_type.lower():
44 return None
45
46 # XXX JWT Access token
47 # https://auth0.com/docs/tokens/access-token#access-token-format
48
49 # Check cache if these tokens were already verified.
50 hmac_tokens = core_utils.hmac_digest(hmac_secret, access_token)
51 cache_key = 'openid:verify:%s'.format(hmac_tokens)
52 payload = request.registry.cache.get(cache_key)
53 if payload is None:
54 # This can take some time.
55 payload = self._verify_token(access_token)
56 if payload is None:
57 return None
58 # Save for next time / refresh ttl.
59 request.registry.cache.set(cache_key, payload, ttl=self.verification_ttl)
60 # Extract meaningful field from userinfo (eg. email or sub)
61 return payload.get(self.userid_field)
62
63 def forget(self, request):
64 """A no-op. Credentials are sent on every request.
65 Return WWW-Authenticate Realm header for Bearer token.
66 """
67 return [('WWW-Authenticate', '%s realm="%s"' % (self.header_type, self.realm))]
68
69 def _verify_token(self, access_token):
70 uri = self.oid_config['userinfo_endpoint']
71 # Opaque access token string. Fetch user info from profile.
72 try:
73 resp = requests.get(uri, headers={'Authorization': 'Bearer ' + access_token})
74 resp.raise_for_status()
75 userprofile = resp.json()
76 return userprofile
77
78 except (requests.exceptions.HTTPError, ValueError, KeyError) as e:
79 logger.debug('Unable to fetch user profile from %s (%s)' % (uri, e))
80 return None
81
82
83 def includeme(config):
84 # Activate end-points.
85 config.scan('kinto.plugins.openid.views')
86
87 settings = config.get_settings()
88
89 openid_policies = []
90 for k, v in settings.items():
91 m = re.match('multiauth\.policy\.(.*)\.use', k)
92 if m:
93 if v.endswith('OpenIDConnectPolicy'):
94 openid_policies.append(m.group(1))
95
96 if len(openid_policies) == 0:
97 # Do not add the capability if no policy is configured.
98 return
99
100 providers_infos = []
101 for name in openid_policies:
102 issuer = settings['multiauth.policy.%s.issuer' % name]
103 openid_config = fetch_openid_config(issuer)
104
105 client_id = settings['multiauth.policy.%s.client_id' % name]
106 header_type = settings.get('multiauth.policy.%s.header_type', 'Bearer')
107
108 providers_infos.append({
109 'name': name,
110 'issuer': openid_config['issuer'],
111 'auth_path': '/openid/%s/login' % name,
112 'client_id': client_id,
113 'header_type': header_type,
114 'userinfo_endpoint': openid_config['userinfo_endpoint'],
115 })
116
117 OpenAPI.expose_authentication_method(name, {
118 'type': 'oauth2',
119 'authorizationUrl': openid_config['authorization_endpoint'],
120 })
121
122 config.add_api_capability(
123 'openid',
124 description='OpenID connect support.',
125 url='http://kinto.readthedocs.io/en/stable/api/1.x/authentication.html',
126 providers=providers_infos)
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
```diff
diff --git a/kinto/plugins/openid/__init__.py b/kinto/plugins/openid/__init__.py
--- a/kinto/plugins/openid/__init__.py
+++ b/kinto/plugins/openid/__init__.py
@@ -48,7 +48,7 @@
 
 # Check cache if these tokens were already verified.
 hmac_tokens = core_utils.hmac_digest(hmac_secret, access_token)
- cache_key = 'openid:verify:%s'.format(hmac_tokens)
+ cache_key = 'openid:verify:{}'.format(hmac_tokens)
 payload = request.registry.cache.get(cache_key)
 if payload is None:
 # This can take some time.
```
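For context on the one-line fix above: the buggy line mixes a `%`-style placeholder with `str.format()`, so the `%s` is never substituted and every access token ends up under the same constant cache key, as described in the issue. A minimal, self-contained sketch of the difference — the digest value below is a made-up stand-in, and Kinto's cache API is not used here:

```python
# str.format() only fills {} fields; a %s placeholder is left untouched,
# so the "broken" key below is the same constant string for every token.
hmac_tokens = "deadbeef"  # hypothetical HMAC digest of an access token

broken_key = 'openid:verify:%s'.format(hmac_tokens)  # -> 'openid:verify:%s'
fixed_key = 'openid:verify:{}'.format(hmac_tokens)   # -> 'openid:verify:deadbeef'

print(broken_key)
print(fixed_key)
```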
| {"golden_diff": "diff --git a/kinto/plugins/openid/__init__.py b/kinto/plugins/openid/__init__.py\n--- a/kinto/plugins/openid/__init__.py\n+++ b/kinto/plugins/openid/__init__.py\n@@ -48,7 +48,7 @@\n \n # Check cache if these tokens were already verified.\n hmac_tokens = core_utils.hmac_digest(hmac_secret, access_token)\n- cache_key = 'openid:verify:%s'.format(hmac_tokens)\n+ cache_key = 'openid:verify:{}'.format(hmac_tokens)\n payload = request.registry.cache.get(cache_key)\n if payload is None:\n # This can take some time.\n", "issue": "OpenID payload cache uses the wrong cache key\nThe `cache_key` becomes the same for every access token due to this bug: https://github.com/Kinto/kinto/blob/e1e0d6be0024418fd100210901f9d2ca06344fe1/kinto/plugins/openid/__init__.py#L51\r\nNo matter what the `hmac_tokens` variable is the `cache_key` always becomes `'openid:verify:%s'`.\r\n\r\n\nOpenID payload cache uses the wrong cache key\nThe `cache_key` becomes the same for every access token due to this bug: https://github.com/Kinto/kinto/blob/e1e0d6be0024418fd100210901f9d2ca06344fe1/kinto/plugins/openid/__init__.py#L51\r\nNo matter what the `hmac_tokens` variable is the `cache_key` always becomes `'openid:verify:%s'`.\r\n\r\n\n", "before_files": [{"content": "import re\n\nimport requests\nfrom pyramid import authentication as base_auth\nfrom pyramid.interfaces import IAuthenticationPolicy\nfrom zope.interface import implementer\n\nfrom kinto.core import logger\nfrom kinto.core import utils as core_utils\nfrom kinto.core.openapi import OpenAPI\n\nfrom .utils import fetch_openid_config\n\n\n@implementer(IAuthenticationPolicy)\nclass OpenIDConnectPolicy(base_auth.CallbackAuthenticationPolicy):\n def __init__(self, issuer, client_id, realm='Realm', **kwargs):\n self.realm = realm\n self.issuer = issuer\n self.client_id = client_id\n self.client_secret = kwargs.get('client_secret', '')\n self.header_type = kwargs.get('header_type', 'Bearer')\n self.userid_field = kwargs.get('userid_field', 'sub')\n self.verification_ttl = int(kwargs.get('verification_ttl_seconds', 86400))\n\n # Fetch OpenID config (at instantiation, ie. startup)\n self.oid_config = fetch_openid_config(issuer)\n\n self._jwt_keys = None\n\n def unauthenticated_userid(self, request):\n \"\"\"Return the userid or ``None`` if token could not be verified.\n \"\"\"\n settings = request.registry.settings\n hmac_secret = settings['userid_hmac_secret']\n\n authorization = request.headers.get('Authorization', '')\n try:\n authmeth, access_token = authorization.split(' ', 1)\n except ValueError:\n return None\n\n if authmeth.lower() != self.header_type.lower():\n return None\n\n # XXX JWT Access token\n # https://auth0.com/docs/tokens/access-token#access-token-format\n\n # Check cache if these tokens were already verified.\n hmac_tokens = core_utils.hmac_digest(hmac_secret, access_token)\n cache_key = 'openid:verify:%s'.format(hmac_tokens)\n payload = request.registry.cache.get(cache_key)\n if payload is None:\n # This can take some time.\n payload = self._verify_token(access_token)\n if payload is None:\n return None\n # Save for next time / refresh ttl.\n request.registry.cache.set(cache_key, payload, ttl=self.verification_ttl)\n # Extract meaningful field from userinfo (eg. email or sub)\n return payload.get(self.userid_field)\n\n def forget(self, request):\n \"\"\"A no-op. 
Credentials are sent on every request.\n Return WWW-Authenticate Realm header for Bearer token.\n \"\"\"\n return [('WWW-Authenticate', '%s realm=\"%s\"' % (self.header_type, self.realm))]\n\n def _verify_token(self, access_token):\n uri = self.oid_config['userinfo_endpoint']\n # Opaque access token string. Fetch user info from profile.\n try:\n resp = requests.get(uri, headers={'Authorization': 'Bearer ' + access_token})\n resp.raise_for_status()\n userprofile = resp.json()\n return userprofile\n\n except (requests.exceptions.HTTPError, ValueError, KeyError) as e:\n logger.debug('Unable to fetch user profile from %s (%s)' % (uri, e))\n return None\n\n\ndef includeme(config):\n # Activate end-points.\n config.scan('kinto.plugins.openid.views')\n\n settings = config.get_settings()\n\n openid_policies = []\n for k, v in settings.items():\n m = re.match('multiauth\\.policy\\.(.*)\\.use', k)\n if m:\n if v.endswith('OpenIDConnectPolicy'):\n openid_policies.append(m.group(1))\n\n if len(openid_policies) == 0:\n # Do not add the capability if no policy is configured.\n return\n\n providers_infos = []\n for name in openid_policies:\n issuer = settings['multiauth.policy.%s.issuer' % name]\n openid_config = fetch_openid_config(issuer)\n\n client_id = settings['multiauth.policy.%s.client_id' % name]\n header_type = settings.get('multiauth.policy.%s.header_type', 'Bearer')\n\n providers_infos.append({\n 'name': name,\n 'issuer': openid_config['issuer'],\n 'auth_path': '/openid/%s/login' % name,\n 'client_id': client_id,\n 'header_type': header_type,\n 'userinfo_endpoint': openid_config['userinfo_endpoint'],\n })\n\n OpenAPI.expose_authentication_method(name, {\n 'type': 'oauth2',\n 'authorizationUrl': openid_config['authorization_endpoint'],\n })\n\n config.add_api_capability(\n 'openid',\n description='OpenID connect support.',\n url='http://kinto.readthedocs.io/en/stable/api/1.x/authentication.html',\n providers=providers_infos)\n", "path": "kinto/plugins/openid/__init__.py"}], "after_files": [{"content": "import re\n\nimport requests\nfrom pyramid import authentication as base_auth\nfrom pyramid.interfaces import IAuthenticationPolicy\nfrom zope.interface import implementer\n\nfrom kinto.core import logger\nfrom kinto.core import utils as core_utils\nfrom kinto.core.openapi import OpenAPI\n\nfrom .utils import fetch_openid_config\n\n\n@implementer(IAuthenticationPolicy)\nclass OpenIDConnectPolicy(base_auth.CallbackAuthenticationPolicy):\n def __init__(self, issuer, client_id, realm='Realm', **kwargs):\n self.realm = realm\n self.issuer = issuer\n self.client_id = client_id\n self.client_secret = kwargs.get('client_secret', '')\n self.header_type = kwargs.get('header_type', 'Bearer')\n self.userid_field = kwargs.get('userid_field', 'sub')\n self.verification_ttl = int(kwargs.get('verification_ttl_seconds', 86400))\n\n # Fetch OpenID config (at instantiation, ie. 
startup)\n self.oid_config = fetch_openid_config(issuer)\n\n self._jwt_keys = None\n\n def unauthenticated_userid(self, request):\n \"\"\"Return the userid or ``None`` if token could not be verified.\n \"\"\"\n settings = request.registry.settings\n hmac_secret = settings['userid_hmac_secret']\n\n authorization = request.headers.get('Authorization', '')\n try:\n authmeth, access_token = authorization.split(' ', 1)\n except ValueError:\n return None\n\n if authmeth.lower() != self.header_type.lower():\n return None\n\n # XXX JWT Access token\n # https://auth0.com/docs/tokens/access-token#access-token-format\n\n # Check cache if these tokens were already verified.\n hmac_tokens = core_utils.hmac_digest(hmac_secret, access_token)\n cache_key = 'openid:verify:{}'.format(hmac_tokens)\n payload = request.registry.cache.get(cache_key)\n if payload is None:\n # This can take some time.\n payload = self._verify_token(access_token)\n if payload is None:\n return None\n # Save for next time / refresh ttl.\n request.registry.cache.set(cache_key, payload, ttl=self.verification_ttl)\n # Extract meaningful field from userinfo (eg. email or sub)\n return payload.get(self.userid_field)\n\n def forget(self, request):\n \"\"\"A no-op. Credentials are sent on every request.\n Return WWW-Authenticate Realm header for Bearer token.\n \"\"\"\n return [('WWW-Authenticate', '%s realm=\"%s\"' % (self.header_type, self.realm))]\n\n def _verify_token(self, access_token):\n uri = self.oid_config['userinfo_endpoint']\n # Opaque access token string. Fetch user info from profile.\n try:\n resp = requests.get(uri, headers={'Authorization': 'Bearer ' + access_token})\n resp.raise_for_status()\n userprofile = resp.json()\n return userprofile\n\n except (requests.exceptions.HTTPError, ValueError, KeyError) as e:\n logger.debug('Unable to fetch user profile from %s (%s)' % (uri, e))\n return None\n\n\ndef includeme(config):\n # Activate end-points.\n config.scan('kinto.plugins.openid.views')\n\n settings = config.get_settings()\n\n openid_policies = []\n for k, v in settings.items():\n m = re.match('multiauth\\.policy\\.(.*)\\.use', k)\n if m:\n if v.endswith('OpenIDConnectPolicy'):\n openid_policies.append(m.group(1))\n\n if len(openid_policies) == 0:\n # Do not add the capability if no policy is configured.\n return\n\n providers_infos = []\n for name in openid_policies:\n issuer = settings['multiauth.policy.%s.issuer' % name]\n openid_config = fetch_openid_config(issuer)\n\n client_id = settings['multiauth.policy.%s.client_id' % name]\n header_type = settings.get('multiauth.policy.%s.header_type', 'Bearer')\n\n providers_infos.append({\n 'name': name,\n 'issuer': openid_config['issuer'],\n 'auth_path': '/openid/%s/login' % name,\n 'client_id': client_id,\n 'header_type': header_type,\n 'userinfo_endpoint': openid_config['userinfo_endpoint'],\n })\n\n OpenAPI.expose_authentication_method(name, {\n 'type': 'oauth2',\n 'authorizationUrl': openid_config['authorization_endpoint'],\n })\n\n config.add_api_capability(\n 'openid',\n description='OpenID connect support.',\n url='http://kinto.readthedocs.io/en/stable/api/1.x/authentication.html',\n providers=providers_infos)\n", "path": "kinto/plugins/openid/__init__.py"}]} | 1,766 | 145 |