problem_id (string, 18-22 chars) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, 13-58 chars) | prompt (string, 1.1k-10.2k chars) | golden_diff (string, 151-4.94k chars) | verification_info (string, 582-21k chars) | num_tokens (int64, 271-2.05k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_26748 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-966 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Example of groupby_agg shows wrong output
# Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the docs for `groupby_agg` use `new_column_name='count'`, but the result shown in the docs has the column name 'size'.
<img width="554" alt="image" src="https://user-images.githubusercontent.com/30731072/146632174-9b4a1022-d205-4ead-855e-76a3dccb7b7a.png">
The correct column name should be 'count'.
<img width="330" alt="image" src="https://user-images.githubusercontent.com/30731072/146632197-7607612a-8388-4383-846b-07ac3d7b522d.png">
```
group var1 count
0 1 1 4
1 1 1 4
2 1 1 4
3 1 1 4
4 1 2 1
5 2 1 1
6 2 2 3
7 2 2 3
8 2 2 3
9 2 3 1
```
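
For reference, here is a minimal sketch that reproduces the table above (assuming pyjanitor is installed; the data values are taken from the example output):

```python
import pandas as pd
import janitor  # noqa: F401 -- importing registers groupby_agg on DataFrame

df = pd.DataFrame(
    {
        "group": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
        "var1": [1, 1, 1, 1, 2, 1, 2, 2, 2, 3],
    }
)

# agg='size' counts the rows in each (group, var1) combination and
# broadcasts that count back onto every row, producing the 'count' column.
result = df.groupby_agg(
    by=["group", "var1"],
    agg="size",
    agg_column_name="var1",
    new_column_name="count",
)
print(result)
```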
This seems like a simple enough fix; I'm happy to open a PR if you'd like.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://pyjanitor-devs.github.io/pyjanitor/api/functions/#janitor.functions.groupby_agg)
- [Link to exact file to be edited](https://github.com/pyjanitor-devs/pyjanitor/blob/dev/janitor/functions/groupby_agg.py)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `janitor/functions/groupby_agg.py`
Content:
```
1 from typing import Callable, List, Union
2 import pandas_flavor as pf
3 import pandas as pd
4
5 from janitor.utils import deprecated_alias
6
7
8 @pf.register_dataframe_method
9 @deprecated_alias(new_column="new_column_name", agg_column="agg_column_name")
10 def groupby_agg(
11 df: pd.DataFrame,
12 by: Union[List, str],
13 new_column_name: str,
14 agg_column_name: str,
15 agg: Union[Callable, str],
16 dropna: bool = True,
17 ) -> pd.DataFrame:
18 """
19 Shortcut for assigning a groupby-transform to a new column.
20
21 This method does not mutate the original DataFrame.
22
23 Without this function, we would have to write a verbose line:
24
25 ```python
26 df = df.assign(...=df.groupby(...)[...].transform(...))
27 ```
28
29 Now, this function can be method-chained:
30
31 ```python
32 import pandas as pd
33 import janitor
34 df = pd.DataFrame(...).groupby_agg(by='group',
35 agg='mean',
36 agg_column_name="col1"
37 new_column_name='col1_mean_by_group',
38 dropna = True/False)
39 ```
40
41 Functional usage syntax:
42
43 ```python
44
45 import pandas as pd
46 import janitor as jn
47
48 jn.groupby_agg(
49 df,
50 by= column name/list of column names,
51 agg=aggregation function,
52 agg_column_name = col,
53 new_column_name= new column name,
54 dropna = True/False)
55 ```
56
57 Method chaining usage syntax:
58
59 ```python
60
61 df.groupby_agg(
62 by= column name/list of column names,
63 agg=aggregation function,
64 agg_column_name = col,
65 new_column_name= new column name,
66 dropna = True/False)
67 ```
68
69
70 :param df: A pandas DataFrame.
71 :param by: Column(s) to groupby on, either a `str` or
72 a `list` of `str`
73 :param new_column_name: Name of the aggregation output column.
74 :param agg_column_name: Name of the column to aggregate over.
75 :param agg: How to aggregate.
76 :param dropna: Whether or not to include null values,
77 if present in the `by` column(s). Default is True.
78 :returns: A pandas DataFrame.
79 """
80
81 return df.assign(
82 **{
83 new_column_name: df.groupby(by, dropna=dropna)[
84 agg_column_name
85 ].transform(agg)
86 }
87 )
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/janitor/functions/groupby_agg.py b/janitor/functions/groupby_agg.py
--- a/janitor/functions/groupby_agg.py
+++ b/janitor/functions/groupby_agg.py
@@ -35,7 +35,7 @@
agg='mean',
agg_column_name="col1"
new_column_name='col1_mean_by_group',
- dropna = True/False)
+ dropna=True/False)
```
Functional usage syntax:
@@ -56,6 +56,41 @@
Method chaining usage syntax:
+ ```python
+ df.groupby_agg(
+ by=['group', 'var1'],
+ agg='size',
+ agg_column_name='var1',
+ new_column_name='count',
+ )
+ ```
+
+ group var1 count
+ 0 1 1 4
+ 1 1 1 4
+ 2 1 1 4
+ 3 1 1 4
+ 4 1 2 1
+ 5 2 1 1
+ 6 2 2 3
+ 7 2 2 3
+ 8 2 2 3
+ 9 2 3 1
+
+ If the data has null values,
+ you can include the null values by passing `False` to `dropna`;
+ this feature was introduced in Pandas 1.1:
+
+ name type num nulls
+ 0 black chair 4 1.0
+ 1 black chair 5 1.0
+ 2 black sofa 12 NaN
+ 3 red sofa 4 NaN
+ 4 red plate 3 3.0
+
+ Let's get the count, including the null values,
+ grouping on `nulls` column:
+
```python
df.groupby_agg(
| {"golden_diff": "diff --git a/janitor/functions/groupby_agg.py b/janitor/functions/groupby_agg.py\n--- a/janitor/functions/groupby_agg.py\n+++ b/janitor/functions/groupby_agg.py\n@@ -35,7 +35,7 @@\n agg='mean',\n agg_column_name=\"col1\"\n new_column_name='col1_mean_by_group',\n- dropna = True/False)\n+ dropna=True/False)\n ```\n \n Functional usage syntax:\n@@ -56,6 +56,41 @@\n \n Method chaining usage syntax:\n \n+ ```python\n+ df.groupby_agg(\n+ by=['group', 'var1'],\n+ agg='size',\n+ agg_column_name='var1',\n+ new_column_name='count',\n+ )\n+ ```\n+\n+ group var1 count\n+ 0 1 1 4\n+ 1 1 1 4\n+ 2 1 1 4\n+ 3 1 1 4\n+ 4 1 2 1\n+ 5 2 1 1\n+ 6 2 2 3\n+ 7 2 2 3\n+ 8 2 2 3\n+ 9 2 3 1\n+\n+ If the data has null values,\n+ you can include the null values by passing `False` to `dropna`;\n+ this feature was introduced in Pandas 1.1:\n+\n+ name type num nulls\n+ 0 black chair 4 1.0\n+ 1 black chair 5 1.0\n+ 2 black sofa 12 NaN\n+ 3 red sofa 4 NaN\n+ 4 red plate 3 3.0\n+\n+ Let's get the count, including the null values,\n+ grouping on `nulls` column:\n+\n ```python\n \n df.groupby_agg(\n", "issue": "Example of groupby_agg shows wrong output\n# Brief Description of Fix\r\n\r\n<!-- Please describe the fix in terms of a \"before\" and \"after\". In other words, what's not so good about the current docs\r\npage, and what you would like to see it become.\r\n\r\nExample starter wording is provided. -->\r\n\r\nCurrently, the docs for `groupby_agg` uses `new_column_name='count'` but the result shown in the docs has column name 'size'.\r\n\r\n<img width=\"554\" alt=\"image\" src=\"https://user-images.githubusercontent.com/30731072/146632174-9b4a1022-d205-4ead-855e-76a3dccb7b7a.png\">\r\n\r\nThe correct column name should be 'count'.\r\n\r\n<img width=\"330\" alt=\"image\" src=\"https://user-images.githubusercontent.com/30731072/146632197-7607612a-8388-4383-846b-07ac3d7b522d.png\">\r\n\r\n```\r\n group var1 count\r\n0 1 1 4\r\n1 1 1 4\r\n2 1 1 4\r\n3 1 1 4\r\n4 1 2 1\r\n5 2 1 1\r\n6 2 2 3\r\n7 2 2 3\r\n8 2 2 3\r\n9 2 3 1\r\n```\r\n\r\nSeems like a simple enough fix, I'm happy to open a PR if you'ld like\r\n\r\n# Relevant Context\r\n\r\n<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available\r\nto get you started. 
-->\r\n\r\n- [Link to documentation page](https://pyjanitor-devs.github.io/pyjanitor/api/functions/#janitor.functions.groupby_agg)\r\n- [Link to exact file to be edited](https://github.com/pyjanitor-devs/pyjanitor/blob/dev/janitor/functions/groupby_agg.py)\r\n\n", "before_files": [{"content": "from typing import Callable, List, Union\nimport pandas_flavor as pf\nimport pandas as pd\n\nfrom janitor.utils import deprecated_alias\n\n\[email protected]_dataframe_method\n@deprecated_alias(new_column=\"new_column_name\", agg_column=\"agg_column_name\")\ndef groupby_agg(\n df: pd.DataFrame,\n by: Union[List, str],\n new_column_name: str,\n agg_column_name: str,\n agg: Union[Callable, str],\n dropna: bool = True,\n) -> pd.DataFrame:\n \"\"\"\n Shortcut for assigning a groupby-transform to a new column.\n\n This method does not mutate the original DataFrame.\n\n Without this function, we would have to write a verbose line:\n\n ```python\n df = df.assign(...=df.groupby(...)[...].transform(...))\n ```\n\n Now, this function can be method-chained:\n\n ```python\n import pandas as pd\n import janitor\n df = pd.DataFrame(...).groupby_agg(by='group',\n agg='mean',\n agg_column_name=\"col1\"\n new_column_name='col1_mean_by_group',\n dropna = True/False)\n ```\n\n Functional usage syntax:\n\n ```python\n\n import pandas as pd\n import janitor as jn\n\n jn.groupby_agg(\n df,\n by= column name/list of column names,\n agg=aggregation function,\n agg_column_name = col,\n new_column_name= new column name,\n dropna = True/False)\n ```\n\n Method chaining usage syntax:\n\n ```python\n\n df.groupby_agg(\n by= column name/list of column names,\n agg=aggregation function,\n agg_column_name = col,\n new_column_name= new column name,\n dropna = True/False)\n ```\n\n\n :param df: A pandas DataFrame.\n :param by: Column(s) to groupby on, either a `str` or\n a `list` of `str`\n :param new_column_name: Name of the aggregation output column.\n :param agg_column_name: Name of the column to aggregate over.\n :param agg: How to aggregate.\n :param dropna: Whether or not to include null values,\n if present in the `by` column(s). 
Default is True.\n :returns: A pandas DataFrame.\n \"\"\"\n\n return df.assign(\n **{\n new_column_name: df.groupby(by, dropna=dropna)[\n agg_column_name\n ].transform(agg)\n }\n )\n", "path": "janitor/functions/groupby_agg.py"}], "after_files": [{"content": "from typing import Callable, List, Union\nimport pandas_flavor as pf\nimport pandas as pd\n\nfrom janitor.utils import deprecated_alias\n\n\[email protected]_dataframe_method\n@deprecated_alias(new_column=\"new_column_name\", agg_column=\"agg_column_name\")\ndef groupby_agg(\n df: pd.DataFrame,\n by: Union[List, str],\n new_column_name: str,\n agg_column_name: str,\n agg: Union[Callable, str],\n dropna: bool = True,\n) -> pd.DataFrame:\n \"\"\"\n Shortcut for assigning a groupby-transform to a new column.\n\n This method does not mutate the original DataFrame.\n\n Without this function, we would have to write a verbose line:\n\n ```python\n df = df.assign(...=df.groupby(...)[...].transform(...))\n ```\n\n Now, this function can be method-chained:\n\n ```python\n import pandas as pd\n import janitor\n df = pd.DataFrame(...).groupby_agg(by='group',\n agg='mean',\n agg_column_name=\"col1\"\n new_column_name='col1_mean_by_group',\n dropna=True/False)\n ```\n\n Functional usage syntax:\n\n ```python\n\n import pandas as pd\n import janitor as jn\n\n jn.groupby_agg(\n df,\n by= column name/list of column names,\n agg=aggregation function,\n agg_column_name = col,\n new_column_name= new column name,\n dropna = True/False)\n ```\n\n Method chaining usage syntax:\n\n ```python\n df.groupby_agg(\n by=['group', 'var1'],\n agg='size',\n agg_column_name='var1',\n new_column_name='count',\n )\n ```\n\n group var1 count\n 0 1 1 4\n 1 1 1 4\n 2 1 1 4\n 3 1 1 4\n 4 1 2 1\n 5 2 1 1\n 6 2 2 3\n 7 2 2 3\n 8 2 2 3\n 9 2 3 1\n\n If the data has null values,\n you can include the null values by passing `False` to `dropna`;\n this feature was introduced in Pandas 1.1:\n\n name type num nulls\n 0 black chair 4 1.0\n 1 black chair 5 1.0\n 2 black sofa 12 NaN\n 3 red sofa 4 NaN\n 4 red plate 3 3.0\n\n Let's get the count, including the null values,\n grouping on `nulls` column:\n\n ```python\n\n df.groupby_agg(\n by= column name/list of column names,\n agg=aggregation function,\n agg_column_name = col,\n new_column_name= new column name,\n dropna = True/False)\n ```\n\n\n :param df: A pandas DataFrame.\n :param by: Column(s) to groupby on, either a `str` or\n a `list` of `str`\n :param new_column_name: Name of the aggregation output column.\n :param agg_column_name: Name of the column to aggregate over.\n :param agg: How to aggregate.\n :param dropna: Whether or not to include null values,\n if present in the `by` column(s). Default is True.\n :returns: A pandas DataFrame.\n \"\"\"\n\n return df.assign(\n **{\n new_column_name: df.groupby(by, dropna=dropna)[\n agg_column_name\n ].transform(agg)\n }\n )\n", "path": "janitor/functions/groupby_agg.py"}]} | 1,470 | 494 |
gh_patches_debug_48464 | rasdani/github-patches | git_diff | mirumee__ariadne-840 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update Starlette dependency to 0.19
Starlette 0.19 has been released; we should bump Ariadne's Starlette dependency to it before releasing 0.15.
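
For context, the pin lives in `install_requires` in `setup.py`; below is a sketch of how a widened PEP 440 range is written (the package name and exact bounds here are illustrative, not taken from the issue):

```python
from setuptools import setup

setup(
    name="example-package",  # hypothetical name, for illustration only
    install_requires=[
        # PEP 440 clauses are comma-separated; this accepts any Starlette
        # release from 0.17 up to, but not including, 0.20.
        "starlette>=0.17,<0.20",
    ],
)
```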
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Development Status :: 4 - Beta",
7 "Intended Audience :: Developers",
8 "License :: OSI Approved :: BSD License",
9 "Operating System :: OS Independent",
10 "Programming Language :: Python",
11 "Programming Language :: Python :: 3.7",
12 "Programming Language :: Python :: 3.8",
13 "Programming Language :: Python :: 3.9",
14 "Programming Language :: Python :: 3.10",
15 "Topic :: Software Development :: Libraries :: Python Modules",
16 ]
17
18 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
19 with open(README_PATH, "r", encoding="utf8") as f:
20 README = f.read()
21
22 setup(
23 name="ariadne",
24 author="Mirumee Software",
25 author_email="[email protected]",
26 description="Ariadne is a Python library for implementing GraphQL servers.",
27 long_description=README,
28 long_description_content_type="text/markdown",
29 license="BSD",
30 version="0.15.0.dev5",
31 url="https://github.com/mirumee/ariadne",
32 packages=["ariadne"],
33 include_package_data=True,
34 install_requires=[
35 "graphql-core>=3.2.0,<3.3",
36 "starlette<0.19",
37 "typing_extensions>=3.6.0",
38 ],
39 extras_require={"asgi-file-uploads": ["python-multipart>=0.0.5"]},
40 classifiers=CLASSIFIERS,
41 platforms=["any"],
42 zip_safe=False,
43 )
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,7 +33,7 @@
include_package_data=True,
install_requires=[
"graphql-core>=3.2.0,<3.3",
- "starlette<0.19",
+ "starlette>0.17<0.20",
"typing_extensions>=3.6.0",
],
extras_require={"asgi-file-uploads": ["python-multipart>=0.0.5"]},
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,7 +33,7 @@\n include_package_data=True,\n install_requires=[\n \"graphql-core>=3.2.0,<3.3\",\n- \"starlette<0.19\",\n+ \"starlette>0.17<0.20\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n", "issue": "Update Starlette dependency to 0.19\nStarlette 0.19 has been released, we should bump Ariadne's version to it before releasing 0.15 \n", "before_files": [{"content": "#! /usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\", encoding=\"utf8\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.15.0.dev5\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n include_package_data=True,\n install_requires=[\n \"graphql-core>=3.2.0,<3.3\",\n \"starlette<0.19\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "#! /usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\", encoding=\"utf8\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.15.0.dev5\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n include_package_data=True,\n install_requires=[\n \"graphql-core>=3.2.0,<3.3\",\n \"starlette>0.17<0.20\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 734 | 119 |
gh_patches_debug_6633 | rasdani/github-patches | git_diff | ESMCI__cime-544 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Get PFS test working.
PFS does not produce history files because it does not set HIST_N or HIST_OPTION. The test fails because it tries to move history files that aren't there.
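
Two plausible remedies follow from that description (a sketch only; that `suffix=None` disables the history-file copy/compare step is an assumption about the CIME test API):

```python
from CIME.SystemTests.system_tests_common import SystemTestsCommon


class PFS(SystemTestsCommon):
    def run_phase(self):
        # Option 1: make the test produce history files so the
        # post-run move has something to move.
        self._case.set_value("HIST_OPTION", "ndays")
        self._case.set_value("HIST_N", 20)
        # Option 2 (alternative): skip history-file handling entirely.
        self.run_indv(suffix=None)
```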
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/SystemTests/pfs.py`
Content:
```
1 """
2 CIME performance test This class inherits from SystemTestsCommon
3
4 20 days performance test, no restart files written
5 """
6
7 from CIME.XML.standard_module_setup import *
8 from CIME.SystemTests.system_tests_common import SystemTestsCommon
9
10 logger = logging.getLogger(__name__)
11
12 class PFS(SystemTestsCommon):
13
14 def __init__(self, case):
15 """
16 initialize an object interface to the PFS system test
17 """
18 SystemTestsCommon.__init__(self, case)
19
20 def run_phase(self):
21 self._case.set_value("STOP_OPTION", "ndays")
22 self._case.set_value("STOP_N", 20)
23 self._case.set_value("REST_OPTION","none")
24 self._case.set_value("CONTINUE_RUN", False)
25 self._case.flush()
26
27 logger.info("doing an 20 day initial test, no restarts written")
28 self.run_indv()
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/python/CIME/SystemTests/pfs.py b/utils/python/CIME/SystemTests/pfs.py
--- a/utils/python/CIME/SystemTests/pfs.py
+++ b/utils/python/CIME/SystemTests/pfs.py
@@ -18,11 +18,5 @@
SystemTestsCommon.__init__(self, case)
def run_phase(self):
- self._case.set_value("STOP_OPTION", "ndays")
- self._case.set_value("STOP_N", 20)
- self._case.set_value("REST_OPTION","none")
- self._case.set_value("CONTINUE_RUN", False)
- self._case.flush()
-
logger.info("doing an 20 day initial test, no restarts written")
- self.run_indv()
+ self.run_indv(suffix=None)
| {"golden_diff": "diff --git a/utils/python/CIME/SystemTests/pfs.py b/utils/python/CIME/SystemTests/pfs.py\n--- a/utils/python/CIME/SystemTests/pfs.py\n+++ b/utils/python/CIME/SystemTests/pfs.py\n@@ -18,11 +18,5 @@\n SystemTestsCommon.__init__(self, case)\n \n def run_phase(self):\n- self._case.set_value(\"STOP_OPTION\", \"ndays\")\n- self._case.set_value(\"STOP_N\", 20)\n- self._case.set_value(\"REST_OPTION\",\"none\")\n- self._case.set_value(\"CONTINUE_RUN\", False)\n- self._case.flush()\n-\n logger.info(\"doing an 20 day initial test, no restarts written\")\n- self.run_indv()\n+ self.run_indv(suffix=None)\n", "issue": "Get PFS test working.\nPFS does not produce history files because it does not set HIST_N or HIST_OPTION. The test fails because it tries to move history files that aren't there.\n\n", "before_files": [{"content": "\"\"\"\nCIME performance test This class inherits from SystemTestsCommon\n\n20 days performance test, no restart files written\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.SystemTests.system_tests_common import SystemTestsCommon\n\nlogger = logging.getLogger(__name__)\n\nclass PFS(SystemTestsCommon):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the PFS system test\n \"\"\"\n SystemTestsCommon.__init__(self, case)\n\n def run_phase(self):\n self._case.set_value(\"STOP_OPTION\", \"ndays\")\n self._case.set_value(\"STOP_N\", 20)\n self._case.set_value(\"REST_OPTION\",\"none\")\n self._case.set_value(\"CONTINUE_RUN\", False)\n self._case.flush()\n\n logger.info(\"doing an 20 day initial test, no restarts written\")\n self.run_indv()\n", "path": "utils/python/CIME/SystemTests/pfs.py"}], "after_files": [{"content": "\"\"\"\nCIME performance test This class inherits from SystemTestsCommon\n\n20 days performance test, no restart files written\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.SystemTests.system_tests_common import SystemTestsCommon\n\nlogger = logging.getLogger(__name__)\n\nclass PFS(SystemTestsCommon):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the PFS system test\n \"\"\"\n SystemTestsCommon.__init__(self, case)\n\n def run_phase(self):\n logger.info(\"doing an 20 day initial test, no restarts written\")\n self.run_indv(suffix=None)\n", "path": "utils/python/CIME/SystemTests/pfs.py"}]} | 548 | 178 |
gh_patches_debug_4430 | rasdani/github-patches | git_diff | jupyter__docker-stacks-1412 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
We fail to push our multi-arch images
We successfully log in with `docker login`, but we fail to push images. I'm unsure why; we only get 401 Unauthorized and that's it.
### From [CI run that failed](https://github.com/jupyter/docker-stacks/runs/3090573663?check_suite_focus=true)
```
#36 exporting to image
#36 exporting manifest sha256:f4885e2ee0a2d6f4484aa75b005e585414a56981e31ed2343f8a9f2dee27d2bf done
#36 exporting config sha256:125cf7a049b119220a812d64eeae4ed7e3c3fd61e92aed6fcb118251165062ba done
#36 exporting manifest sha256:e6b8d5dbd2a1cb3022cd214946f2290ac65aed7fc3787fa8fbdbd655c344ac28 done
#36 exporting config sha256:4ae4f82b528de65a8f301eef09d66007865189d8b405fe2f65ea6d10c7994760 done
#36 exporting manifest list sha256:30be78018e77875a79b18949ec26fcc8760399e82da17aefb0281bb177f0c2a0 done
#36 pushing layers
#36 ...
#38 [auth] library/40b549a81031:pull,push token for registry-1.docker.io
#38 DONE 0.0s
#39 [auth] library/40b549a81031:pull,push token for registry-1.docker.io
#39 DONE 0.0s
#40 [auth] library/40b549a81031:pull,push token for registry-1.docker.io
#40 DONE 0.0s
#41 [auth] library/40b549a81031:pull,push token for registry-1.docker.io
#41 DONE 0.0s
#42 [auth] library/40b549a81031:pull,push token for registry-1.docker.io
#42 DONE 0.0s
#43 [auth] library/40b549a81031:pull,push token for registry-1.docker.io
#43 DONE 0.0s
#36 exporting to image
#36 pushing layers 0.4s done
#36 ERROR: authorization status: 401: authorization failed
------
> exporting to image:
------
error: failed to solve: authorization status: 401: authorization failed
make: *** [Makefile:191: push-multi/base-notebook] Error 1
make: Leaving directory '/home/runner/work/docker-stacks/docker-stacks/main'
Error: Process completed with exit code 2.
```
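
The repository name in the auth lines above, `library/40b549a81031`, is telling: Docker Hub resolves an image name with no namespace into the `library/` (official images) namespace, which we have no rights to push to, hence the 401. That points to `-t` arguments carrying bare tags instead of `owner/image:tag`. A sketch of building fully qualified tag arguments (the variable values are illustrative):

```python
owner = "jupyter"                  # illustrative values
short_image_name = "base-notebook"
tags = ["latest", "2021-07-19"]

# Each -t must name the full repository; a bare tag would make
# `docker buildx build --push` target docker.io/library/<tag>.
docker_build_tag_args = " ".join(
    f"-t {owner}/{short_image_name}:{tag}" for tag in tags
)
print(docker_build_tag_args)
# -t jupyter/base-notebook:latest -t jupyter/base-notebook:2021-07-19
```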
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tagging/tag_image.py`
Content:
```
1 #!/usr/bin/env python3
2 # Copyright (c) Jupyter Development Team.
3 # Distributed under the terms of the Modified BSD License.
4 import argparse
5 import logging
6 from plumbum.cmd import docker
7 from .docker_runner import DockerRunner
8 from .get_taggers_and_manifests import get_taggers_and_manifests
9 from .github_set_env import github_set_env
10
11
12 logger = logging.getLogger(__name__)
13
14
15 def tag_image(short_image_name: str, owner: str) -> None:
16 """
17 Tags <owner>/<short_image_name>:latest with the tags reported by all taggers
18 for the given image.
19
20 Tags are in a GitHub Actions environment also saved to environment variables
21 in a format making it easy to append them.
22 """
23 logger.info(f"Tagging image: {short_image_name}")
24 taggers, _ = get_taggers_and_manifests(short_image_name)
25
26 image = f"{owner}/{short_image_name}:latest"
27
28 with DockerRunner(image) as container:
29 tags = []
30 for tagger in taggers:
31 tagger_name = tagger.__name__
32 tag_value = tagger.tag_value(container)
33 tags.append(tag_value)
34 logger.info(
35 f"Applying tag tagger_name: {tagger_name} tag_value: {tag_value}"
36 )
37 docker["tag", image, f"{owner}/{short_image_name}:{tag_value}"]()
38
39 if tags:
40 env_name = f'{short_image_name.replace("-", "_")}_EXTRA_TAG_ARGS'
41 docker_build_tag_args = "-t " + " -t ".join(tags)
42 github_set_env(env_name, docker_build_tag_args)
43
44
45 if __name__ == "__main__":
46 logging.basicConfig(level=logging.INFO)
47
48 arg_parser = argparse.ArgumentParser()
49 arg_parser.add_argument(
50 "--short-image-name",
51 required=True,
52 help="Short image name to apply tags for",
53 )
54 arg_parser.add_argument("--owner", required=True, help="Owner of the image")
55 args = arg_parser.parse_args()
56
57 tag_image(args.short_image_name, args.owner)
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tagging/tag_image.py b/tagging/tag_image.py
--- a/tagging/tag_image.py
+++ b/tagging/tag_image.py
@@ -38,7 +38,9 @@
if tags:
env_name = f'{short_image_name.replace("-", "_")}_EXTRA_TAG_ARGS'
- docker_build_tag_args = "-t " + " -t ".join(tags)
+ docker_build_tag_args = " ".join(
+ [f"-t {owner}/{short_image_name}:{tag}" for tag in tags]
+ )
github_set_env(env_name, docker_build_tag_args)
| {"golden_diff": "diff --git a/tagging/tag_image.py b/tagging/tag_image.py\n--- a/tagging/tag_image.py\n+++ b/tagging/tag_image.py\n@@ -38,7 +38,9 @@\n \n if tags:\n env_name = f'{short_image_name.replace(\"-\", \"_\")}_EXTRA_TAG_ARGS'\n- docker_build_tag_args = \"-t \" + \" -t \".join(tags)\n+ docker_build_tag_args = \" \".join(\n+ [f\"-t {owner}/{short_image_name}:{tag}\" for tag in tags]\n+ )\n github_set_env(env_name, docker_build_tag_args)\n", "issue": "We fail to push our multi-arch images\nWe successfully login with `docker login`, but we fail to push images. I'm unsure why, we only get 401 Unauthorized and that's it.\r\n\r\n### From [CI run that failed](https://github.com/jupyter/docker-stacks/runs/3090573663?check_suite_focus=true)\r\n\r\n```\r\n #36 exporting to image\r\n #36 exporting manifest sha256:f4885e2ee0a2d6f4484aa75b005e585414a56981e31ed2343f8a9f2dee27d2bf done\r\n #36 exporting config sha256:125cf7a049b119220a812d64eeae4ed7e3c3fd61e92aed6fcb118251165062ba done\r\n #36 exporting manifest sha256:e6b8d5dbd2a1cb3022cd214946f2290ac65aed7fc3787fa8fbdbd655c344ac28 done\r\n #36 exporting config sha256:4ae4f82b528de65a8f301eef09d66007865189d8b405fe2f65ea6d10c7994760 done\r\n #36 exporting manifest list sha256:30be78018e77875a79b18949ec26fcc8760399e82da17aefb0281bb177f0c2a0 done\r\n #36 pushing layers\r\n #36 ...\r\n \r\n #38 [auth] library/40b549a81031:pull,push token for registry-1.docker.io\r\n #38 DONE 0.0s\r\n \r\n #39 [auth] library/40b549a81031:pull,push token for registry-1.docker.io\r\n #39 DONE 0.0s\r\n \r\n #40 [auth] library/40b549a81031:pull,push token for registry-1.docker.io\r\n #40 DONE 0.0s\r\n \r\n #41 [auth] library/40b549a81031:pull,push token for registry-1.docker.io\r\n #41 DONE 0.0s\r\n \r\n #42 [auth] library/40b549a81031:pull,push token for registry-1.docker.io\r\n #42 DONE 0.0s\r\n \r\n #43 [auth] library/40b549a81031:pull,push token for registry-1.docker.io\r\n #43 DONE 0.0s\r\n \r\n #36 exporting to image\r\n #36 pushing layers 0.4s done\r\n #36 ERROR: authorization status: 401: authorization failed\r\n ------\r\n > exporting to image:\r\n ------\r\n error: failed to solve: authorization status: 401: authorization failed\r\n make: *** [Makefile:191: push-multi/base-notebook] Error 1\r\n make: Leaving directory '/home/runner/work/docker-stacks/docker-stacks/main'\r\n Error: Process completed with exit code 2.\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport argparse\nimport logging\nfrom plumbum.cmd import docker\nfrom .docker_runner import DockerRunner\nfrom .get_taggers_and_manifests import get_taggers_and_manifests\nfrom .github_set_env import github_set_env\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef tag_image(short_image_name: str, owner: str) -> None:\n \"\"\"\n Tags <owner>/<short_image_name>:latest with the tags reported by all taggers\n for the given image.\n\n Tags are in a GitHub Actions environment also saved to environment variables\n in a format making it easy to append them.\n \"\"\"\n logger.info(f\"Tagging image: {short_image_name}\")\n taggers, _ = get_taggers_and_manifests(short_image_name)\n\n image = f\"{owner}/{short_image_name}:latest\"\n\n with DockerRunner(image) as container:\n tags = []\n for tagger in taggers:\n tagger_name = tagger.__name__\n tag_value = tagger.tag_value(container)\n tags.append(tag_value)\n logger.info(\n f\"Applying tag tagger_name: {tagger_name} tag_value: {tag_value}\"\n 
)\n docker[\"tag\", image, f\"{owner}/{short_image_name}:{tag_value}\"]()\n\n if tags:\n env_name = f'{short_image_name.replace(\"-\", \"_\")}_EXTRA_TAG_ARGS'\n docker_build_tag_args = \"-t \" + \" -t \".join(tags)\n github_set_env(env_name, docker_build_tag_args)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig(level=logging.INFO)\n\n arg_parser = argparse.ArgumentParser()\n arg_parser.add_argument(\n \"--short-image-name\",\n required=True,\n help=\"Short image name to apply tags for\",\n )\n arg_parser.add_argument(\"--owner\", required=True, help=\"Owner of the image\")\n args = arg_parser.parse_args()\n\n tag_image(args.short_image_name, args.owner)\n", "path": "tagging/tag_image.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport argparse\nimport logging\nfrom plumbum.cmd import docker\nfrom .docker_runner import DockerRunner\nfrom .get_taggers_and_manifests import get_taggers_and_manifests\nfrom .github_set_env import github_set_env\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef tag_image(short_image_name: str, owner: str) -> None:\n \"\"\"\n Tags <owner>/<short_image_name>:latest with the tags reported by all taggers\n for the given image.\n\n Tags are in a GitHub Actions environment also saved to environment variables\n in a format making it easy to append them.\n \"\"\"\n logger.info(f\"Tagging image: {short_image_name}\")\n taggers, _ = get_taggers_and_manifests(short_image_name)\n\n image = f\"{owner}/{short_image_name}:latest\"\n\n with DockerRunner(image) as container:\n tags = []\n for tagger in taggers:\n tagger_name = tagger.__name__\n tag_value = tagger.tag_value(container)\n tags.append(tag_value)\n logger.info(\n f\"Applying tag tagger_name: {tagger_name} tag_value: {tag_value}\"\n )\n docker[\"tag\", image, f\"{owner}/{short_image_name}:{tag_value}\"]()\n\n if tags:\n env_name = f'{short_image_name.replace(\"-\", \"_\")}_EXTRA_TAG_ARGS'\n docker_build_tag_args = \" \".join(\n [f\"-t {owner}/{short_image_name}:{tag}\" for tag in tags]\n )\n github_set_env(env_name, docker_build_tag_args)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig(level=logging.INFO)\n\n arg_parser = argparse.ArgumentParser()\n arg_parser.add_argument(\n \"--short-image-name\",\n required=True,\n help=\"Short image name to apply tags for\",\n )\n arg_parser.add_argument(\"--owner\", required=True, help=\"Owner of the image\")\n args = arg_parser.parse_args()\n\n tag_image(args.short_image_name, args.owner)\n", "path": "tagging/tag_image.py"}]} | 1,649 | 132 |
gh_patches_debug_19274 | rasdani/github-patches | git_diff | nautobot__nautobot-5223 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Custom field date-type objects AssertionError in GraphQL
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Nautobot version (Docker tag too if applicable): 1.5.17b1, also tested in 1.5.8
* Python version: 3.9
* Database platform, version: psql 13.10
* Middleware(s): None
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Create a [custom field object](https://demo.nautobot.com/extras/custom-fields/last_backup/?tab=main) for the Device type template with the type as Date
2. Set a date on a [device](https://demo.nautobot.com/dcim/devices/4d7aecac-addd-4f3c-8efc-e102872b54e5/?tab=main) in `YYYY-MM-DD` (ISO 8601) format
3. You have to restart nautobot-server; we may need a post_save signal event for this as well, which would be a separate issue if so
4. Using GraphQL attempt to retrieve the value of this field from a device
```graphql
{
  devices(name: "ams01-dist-01") {
    name
    cf_last_backup
  }
}
```
<!-- What did you expect to happen? -->
### Expected Behavior
Query to return the value of the Date
<!-- What happened instead? -->
### Observed Behavior
* GraphQL returns `"message": "Received not compatible date \"'2023-04-26'\""`
* stdout from nautobot-server gets an exception
```
[27/Apr/2023 17:20:01] "POST /graphql/ HTTP/1.1" 200 4437611
Traceback (most recent call last):
  File "/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphql/execution/executor.py", line 480, in complete_value_catching_error
    completed = complete_value(
  File "/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphql/execution/executor.py", line 563, in complete_value
    return complete_leaf_value(return_type, path, result)
  File "/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphql/execution/executor.py", line 626, in complete_leaf_value
    serialized_result = return_type.serialize(result)
  File "/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphene/types/datetime.py", line 23, in serialize
    assert isinstance(
AssertionError: Received not compatible date "'2023-04-26'"
```
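
A sketch of a tolerant `serialize` (mirroring the intent of the custom `DateType` shown in the files below; raising a `GraphQLError` instead of an `AssertionError` is an assumption about the preferred behaviour):

```python
import datetime

from graphql import GraphQLError


def serialize(date):
    # datetime.datetime is a subclass of datetime.date, so check it first
    # and narrow to the date component.
    if isinstance(date, datetime.datetime):
        return date.date().isoformat()
    # Plain date objects, e.g. values deserialized from the database.
    if isinstance(date, datetime.date):
        return date.isoformat()
    # Custom-field values may already arrive as ISO 8601 strings.
    if isinstance(date, str):
        return date
    raise GraphQLError(f'Received not compatible date "{date!r}"')
```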
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/core/graphql/types.py`
Content:
```
1 import datetime
2
3 from django.contrib.contenttypes.models import ContentType
4 import graphene
5 import graphene_django_optimizer as gql_optimizer
6
7
8 class OptimizedNautobotObjectType(gql_optimizer.OptimizedDjangoObjectType):
9 url = graphene.String()
10
11 def resolve_url(self, info):
12 return self.get_absolute_url(api=True)
13
14 class Meta:
15 abstract = True
16
17
18 class ContentTypeType(OptimizedNautobotObjectType):
19 """
20 Graphene-Django object type for ContentType records.
21
22 Needed because ContentType is a built-in model, not one that we own and can auto-generate types for.
23 """
24
25 class Meta:
26 model = ContentType
27
28
29 class DateType(graphene.Date):
30 """
31 Overriding the default serialize method from https://github.com/graphql-python/graphene/blob/master/graphene/types/datetime.py
32 to handle the case where the date object is passed as a str object.
33 """
34
35 @staticmethod
36 def serialize(date):
37 if isinstance(date, datetime.datetime):
38 date = date.date()
39 return date.isoformat()
40 elif isinstance(date, str):
41 return date
42 else:
43 raise AssertionError(f'Received not compatible date "{date!r}"')
44
45
46 class JSON(graphene.Scalar):
47 @staticmethod
48 def serialize_data(dt):
49 return dt
50
51 serialize = serialize_data
52 parse_value = serialize_data
53 parse_literal = serialize_data
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/core/graphql/types.py b/nautobot/core/graphql/types.py
--- a/nautobot/core/graphql/types.py
+++ b/nautobot/core/graphql/types.py
@@ -3,6 +3,7 @@
from django.contrib.contenttypes.models import ContentType
import graphene
import graphene_django_optimizer as gql_optimizer
+from graphql import GraphQLError
class OptimizedNautobotObjectType(gql_optimizer.OptimizedDjangoObjectType):
@@ -37,10 +38,12 @@
if isinstance(date, datetime.datetime):
date = date.date()
return date.isoformat()
+ elif isinstance(date, datetime.date):
+ return date.isoformat()
elif isinstance(date, str):
return date
else:
- raise AssertionError(f'Received not compatible date "{date!r}"')
+ raise GraphQLError(f'Received not compatible date "{date!r}"')
class JSON(graphene.Scalar):
| {"golden_diff": "diff --git a/nautobot/core/graphql/types.py b/nautobot/core/graphql/types.py\n--- a/nautobot/core/graphql/types.py\n+++ b/nautobot/core/graphql/types.py\n@@ -3,6 +3,7 @@\n from django.contrib.contenttypes.models import ContentType\n import graphene\n import graphene_django_optimizer as gql_optimizer\n+from graphql import GraphQLError\n \n \n class OptimizedNautobotObjectType(gql_optimizer.OptimizedDjangoObjectType):\n@@ -37,10 +38,12 @@\n if isinstance(date, datetime.datetime):\n date = date.date()\n return date.isoformat()\n+ elif isinstance(date, datetime.date):\n+ return date.isoformat()\n elif isinstance(date, str):\n return date\n else:\n- raise AssertionError(f'Received not compatible date \"{date!r}\"')\n+ raise GraphQLError(f'Received not compatible date \"{date!r}\"')\n \n \n class JSON(graphene.Scalar):\n", "issue": "Custom field date-type objects AssertionError in GraphQL\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Nautobot version (Docker tag too if applicable): 1.5.17b1, also tested in 1.5.8\r\n* Python version: 3.9\r\n* Database platform, version: psql 13.10\r\n* Middleware(s): None\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Create a [custom field object](https://demo.nautobot.com/extras/custom-fields/last_backup/?tab=main) for the Device type template with the type as Date\r\n2. Set a date on a [device](https://demo.nautobot.com/dcim/devices/4d7aecac-addd-4f3c-8efc-e102872b54e5/?tab=main) in YYYY-MM-DD iso8601 format\r\n3. You have to restart nautobot-server; we may need a post_save signal event for this as well, which would be a separate issue if so\r\n4. Using GraphQL attempt to retrieve the value of this field from a device\r\n`\r\n{\r\n devices (name: \"ams01-dist-01\") {\r\n name,\r\n cf_last_backup\r\n }\r\n}\r\n`\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nQuery to return the value of the Date\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\n* GrapQL returns `\"message\": \"Received not compatible date \\\"'2023-04-26'\\\"\"`\r\n* stdout from nautobot-server gets an exception\r\n\r\n`[27/Apr/2023 17:20:01] \"POST /graphql/ HTTP/1.1\" 200 4437611\r\nTraceback (most recent call last):\r\n File \"/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphql/execution/executor.py\", line 480, in complete_value_catching_error\r\n completed = complete_value(\r\n File \"/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphql/execution/executor.py\", line 563, in complete_value\r\n return complete_leaf_value(return_type, path, result)\r\n File \"/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphql/execution/executor.py\", line 626, in complete_leaf_value\r\n serialized_result = return_type.serialize(result)\r\n File \"/home/meganerd/.cache/pypoetry/virtualenvs/nautobot-Sgl-QJQc-py3.9/lib/python3.9/site-packages/graphene/types/datetime.py\", line 23, in serialize\r\n assert isinstance(\r\nAssertionError: Received not compatible date \"'2023-04-26'\"`\r\n\r\n\n", "before_files": [{"content": "import datetime\n\nfrom django.contrib.contenttypes.models import ContentType\nimport graphene\nimport graphene_django_optimizer as gql_optimizer\n\n\nclass OptimizedNautobotObjectType(gql_optimizer.OptimizedDjangoObjectType):\n url = graphene.String()\n\n def resolve_url(self, info):\n return self.get_absolute_url(api=True)\n\n class Meta:\n abstract = True\n\n\nclass ContentTypeType(OptimizedNautobotObjectType):\n \"\"\"\n Graphene-Django object type for ContentType records.\n\n Needed because ContentType is a built-in model, not one that we own and can auto-generate types for.\n \"\"\"\n\n class Meta:\n model = ContentType\n\n\nclass DateType(graphene.Date):\n \"\"\"\n Overriding the default serialize method from https://github.com/graphql-python/graphene/blob/master/graphene/types/datetime.py\n to handle the case where the date object is passed as a str object.\n \"\"\"\n\n @staticmethod\n def serialize(date):\n if isinstance(date, datetime.datetime):\n date = date.date()\n return date.isoformat()\n elif isinstance(date, str):\n return date\n else:\n raise AssertionError(f'Received not compatible date \"{date!r}\"')\n\n\nclass JSON(graphene.Scalar):\n @staticmethod\n def serialize_data(dt):\n return dt\n\n serialize = serialize_data\n parse_value = serialize_data\n parse_literal = serialize_data\n", "path": "nautobot/core/graphql/types.py"}], "after_files": [{"content": "import datetime\n\nfrom django.contrib.contenttypes.models import ContentType\nimport graphene\nimport graphene_django_optimizer as gql_optimizer\nfrom graphql import GraphQLError\n\n\nclass OptimizedNautobotObjectType(gql_optimizer.OptimizedDjangoObjectType):\n url = graphene.String()\n\n def resolve_url(self, info):\n return self.get_absolute_url(api=True)\n\n class Meta:\n abstract = True\n\n\nclass ContentTypeType(OptimizedNautobotObjectType):\n \"\"\"\n Graphene-Django object type for ContentType records.\n\n Needed because ContentType is a built-in model, not one that we own and can auto-generate types for.\n \"\"\"\n\n class Meta:\n model = ContentType\n\n\nclass DateType(graphene.Date):\n \"\"\"\n Overriding the default serialize method from https://github.com/graphql-python/graphene/blob/master/graphene/types/datetime.py\n to handle the case where the date object is passed as a str object.\n \"\"\"\n\n 
@staticmethod\n def serialize(date):\n if isinstance(date, datetime.datetime):\n date = date.date()\n return date.isoformat()\n elif isinstance(date, datetime.date):\n return date.isoformat()\n elif isinstance(date, str):\n return date\n else:\n raise GraphQLError(f'Received not compatible date \"{date!r}\"')\n\n\nclass JSON(graphene.Scalar):\n @staticmethod\n def serialize_data(dt):\n return dt\n\n serialize = serialize_data\n parse_value = serialize_data\n parse_literal = serialize_data\n", "path": "nautobot/core/graphql/types.py"}]} | 1,530 | 203 |
gh_patches_debug_11157 | rasdani/github-patches | git_diff | kivy__python-for-android-800 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cryptography recipe does not compile
I'm trying to build a Kivy app with:
- sdl2 bootstrap
- recipes python2,kivy,cryptography
- buildozer 0.32
- latest python-for-android master branch
In the libffi recipe I had to replace [line](https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/libffi/__init__.py#L43) with:
`shprint(sh.Command('autoreconf'), '-vif', _env=env)`.
Without that change p4a raised an `sh.CommandNotFound` error.
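
That matches how the `sh` package resolves commands: the whole string passed to `sh.Command` is treated as a single program name, so embedding arguments in it makes the PATH lookup fail. A short illustration:

```python
import sh

# Fails: sh searches PATH for an executable literally named "autoreconf -vif".
# sh.Command("autoreconf -vif")      # raises sh.CommandNotFound

# Works: resolve the program first, then pass arguments separately.
autoreconf = sh.Command("autoreconf")
autoreconf("-vif")
```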
Now running `buildozer android_new debug` produces the following error log: http://pastebin.com/22qjBJwL
buildozer.spec file: http://pastebin.com/aLe7nLHG
main.py file: http://pastebin.com/xSReYe6u
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/libffi/__init__.py`
Content:
```
1 from pythonforandroid.recipe import Recipe
2 from pythonforandroid.logger import shprint
3 from pythonforandroid.util import current_directory
4 from os.path import exists, join
5 import sh
6 import glob
7
8
9 class LibffiRecipe(Recipe):
10 name = 'libffi'
11 version = 'v3.2.1'
12 url = 'https://github.com/atgreen/libffi/archive/{version}.zip'
13
14 patches = ['remove-version-info.patch']
15
16 def get_host(self, arch):
17 with current_directory(self.get_build_dir(arch.arch)):
18 host = None
19 with open('Makefile') as f:
20 for line in f:
21 if line.startswith('host = '):
22 host = line.strip()[7:]
23 break
24
25 if not host or not exists(host):
26 raise RuntimeError('failed to find build output! ({})'
27 .format(host))
28
29 return host
30
31 def should_build(self, arch):
32 # return not bool(glob.glob(join(self.ctx.get_libs_dir(arch.arch),
33 # 'libffi.so*')))
34 return not exists(join(self.ctx.get_libs_dir(arch.arch), 'libffi.so'))
35 # return not exists(join(self.ctx.get_python_install_dir(), 'lib',
36 # 'libffi.so'))
37
38 def build_arch(self, arch):
39 env = self.get_recipe_env(arch)
40 with current_directory(self.get_build_dir(arch.arch)):
41 if not exists('configure'):
42 shprint(sh.Command('./autogen.sh'), _env=env)
43 shprint(sh.Command('autoreconf -vif'), _env=env)
44 shprint(sh.Command('./configure'), '--host=' + arch.toolchain_prefix,
45 '--prefix=' + self.ctx.get_python_install_dir(),
46 '--enable-shared', _env=env)
47 shprint(sh.make, '-j5', 'libffi.la', _env=env)
48
49
50 # dlname = None
51 # with open(join(host, 'libffi.la')) as f:
52 # for line in f:
53 # if line.startswith('dlname='):
54 # dlname = line.strip()[8:-1]
55 # break
56 #
57 # if not dlname or not exists(join(host, '.libs', dlname)):
58 # raise RuntimeError('failed to locate shared object! ({})'
59 # .format(dlname))
60
61 # shprint(sh.sed, '-i', 's/^dlname=.*$/dlname=\'libffi.so\'/', join(host, 'libffi.la'))
62
63 shprint(sh.cp, '-t', self.ctx.get_libs_dir(arch.arch),
64 join(self.get_host(arch), '.libs', 'libffi.so')) #,
65 # join(host, 'libffi.la'))
66
67 def get_include_dirs(self, arch):
68 return [join(self.get_build_dir(arch.arch), self.get_host(arch), 'include')]
69
70
71 recipe = LibffiRecipe()
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/libffi/__init__.py b/pythonforandroid/recipes/libffi/__init__.py
--- a/pythonforandroid/recipes/libffi/__init__.py
+++ b/pythonforandroid/recipes/libffi/__init__.py
@@ -40,7 +40,7 @@
with current_directory(self.get_build_dir(arch.arch)):
if not exists('configure'):
shprint(sh.Command('./autogen.sh'), _env=env)
- shprint(sh.Command('autoreconf -vif'), _env=env)
+ shprint(sh.Command('autoreconf'), '-vif', _env=env)
shprint(sh.Command('./configure'), '--host=' + arch.toolchain_prefix,
'--prefix=' + self.ctx.get_python_install_dir(),
'--enable-shared', _env=env)
| {"golden_diff": "diff --git a/pythonforandroid/recipes/libffi/__init__.py b/pythonforandroid/recipes/libffi/__init__.py\n--- a/pythonforandroid/recipes/libffi/__init__.py\n+++ b/pythonforandroid/recipes/libffi/__init__.py\n@@ -40,7 +40,7 @@\n \t\twith current_directory(self.get_build_dir(arch.arch)):\n \t\t\tif not exists('configure'):\n \t\t\t\tshprint(sh.Command('./autogen.sh'), _env=env)\n-\t\t\tshprint(sh.Command('autoreconf -vif'), _env=env)\n+\t\t\tshprint(sh.Command('autoreconf'), '-vif', _env=env)\n \t\t\tshprint(sh.Command('./configure'), '--host=' + arch.toolchain_prefix,\n \t\t\t '--prefix=' + self.ctx.get_python_install_dir(),\n \t\t\t '--enable-shared', _env=env)\n", "issue": "Cryptography recipe does not compile\nI'm trying to build Kivy app with:\n- sdl2 bootstrap\n- recipes python2,kivy,cryptography \n- buildozer 0.32 \n- latest python-for-android master branch \n\nIn libffi recipe I had to replace [line](https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/libffi/__init__.py#L43) with:\n`shprint(sh.Command('autoreconf'), '-vif', _env=env)`.\nWithout that change p4a was raising sh.CommandNotFound error. \n\nNow running `buildozer android_new debug` produces the following error log: http://pastebin.com/22qjBJwL\nbuildozer.spec file: http://pastebin.com/aLe7nLHG\nmain.py file: http://pastebin.com/xSReYe6u\n\n", "before_files": [{"content": "from pythonforandroid.recipe import Recipe\nfrom pythonforandroid.logger import shprint\nfrom pythonforandroid.util import current_directory\nfrom os.path import exists, join\nimport sh\nimport glob\n\n\nclass LibffiRecipe(Recipe):\n\tname = 'libffi'\n\tversion = 'v3.2.1'\n\turl = 'https://github.com/atgreen/libffi/archive/{version}.zip'\n\n\tpatches = ['remove-version-info.patch']\n\n\tdef get_host(self, arch):\n\t\twith current_directory(self.get_build_dir(arch.arch)):\n\t\t\thost = None\n\t\t\twith open('Makefile') as f:\n\t\t\t\tfor line in f:\n\t\t\t\t\tif line.startswith('host = '):\n\t\t\t\t\t\thost = line.strip()[7:]\n\t\t\t\t\t\tbreak\n\n\t\t\tif not host or not exists(host):\n\t\t\t\traise RuntimeError('failed to find build output! ({})'\n\t\t\t\t .format(host))\n\t\t\t\n\t\t\treturn host\n\n\tdef should_build(self, arch):\n\t\t# return not bool(glob.glob(join(self.ctx.get_libs_dir(arch.arch),\n\t\t# 'libffi.so*')))\n\t\treturn not exists(join(self.ctx.get_libs_dir(arch.arch), 'libffi.so'))\n\t\t# return not exists(join(self.ctx.get_python_install_dir(), 'lib',\n\t\t# 'libffi.so'))\n\n\tdef build_arch(self, arch):\n\t\tenv = self.get_recipe_env(arch)\n\t\twith current_directory(self.get_build_dir(arch.arch)):\n\t\t\tif not exists('configure'):\n\t\t\t\tshprint(sh.Command('./autogen.sh'), _env=env)\n\t\t\tshprint(sh.Command('autoreconf -vif'), _env=env)\n\t\t\tshprint(sh.Command('./configure'), '--host=' + arch.toolchain_prefix,\n\t\t\t '--prefix=' + self.ctx.get_python_install_dir(),\n\t\t\t '--enable-shared', _env=env)\n\t\t\tshprint(sh.make, '-j5', 'libffi.la', _env=env)\n\n\n\t\t\t# dlname = None\n\t\t\t# with open(join(host, 'libffi.la')) as f:\n\t\t\t# \tfor line in f:\n\t\t\t# \t\tif line.startswith('dlname='):\n\t\t\t# \t\t\tdlname = line.strip()[8:-1]\n\t\t\t# \t\t\tbreak\n\t\t\t# \n\t\t\t# if not dlname or not exists(join(host, '.libs', dlname)):\n\t\t\t# \traise RuntimeError('failed to locate shared object! 
({})'\n\t\t\t# \t .format(dlname))\n\n\t\t\t# shprint(sh.sed, '-i', 's/^dlname=.*$/dlname=\\'libffi.so\\'/', join(host, 'libffi.la'))\n\n\t\t\tshprint(sh.cp, '-t', self.ctx.get_libs_dir(arch.arch),\n\t\t\t join(self.get_host(arch), '.libs', 'libffi.so')) #,\n\t\t\t # join(host, 'libffi.la'))\n\n\tdef get_include_dirs(self, arch):\n\t\treturn [join(self.get_build_dir(arch.arch), self.get_host(arch), 'include')]\n\n\nrecipe = LibffiRecipe()\n", "path": "pythonforandroid/recipes/libffi/__init__.py"}], "after_files": [{"content": "from pythonforandroid.recipe import Recipe\nfrom pythonforandroid.logger import shprint\nfrom pythonforandroid.util import current_directory\nfrom os.path import exists, join\nimport sh\nimport glob\n\n\nclass LibffiRecipe(Recipe):\n\tname = 'libffi'\n\tversion = 'v3.2.1'\n\turl = 'https://github.com/atgreen/libffi/archive/{version}.zip'\n\n\tpatches = ['remove-version-info.patch']\n\n\tdef get_host(self, arch):\n\t\twith current_directory(self.get_build_dir(arch.arch)):\n\t\t\thost = None\n\t\t\twith open('Makefile') as f:\n\t\t\t\tfor line in f:\n\t\t\t\t\tif line.startswith('host = '):\n\t\t\t\t\t\thost = line.strip()[7:]\n\t\t\t\t\t\tbreak\n\n\t\t\tif not host or not exists(host):\n\t\t\t\traise RuntimeError('failed to find build output! ({})'\n\t\t\t\t .format(host))\n\t\t\t\n\t\t\treturn host\n\n\tdef should_build(self, arch):\n\t\t# return not bool(glob.glob(join(self.ctx.get_libs_dir(arch.arch),\n\t\t# 'libffi.so*')))\n\t\treturn not exists(join(self.ctx.get_libs_dir(arch.arch), 'libffi.so'))\n\t\t# return not exists(join(self.ctx.get_python_install_dir(), 'lib',\n\t\t# 'libffi.so'))\n\n\tdef build_arch(self, arch):\n\t\tenv = self.get_recipe_env(arch)\n\t\twith current_directory(self.get_build_dir(arch.arch)):\n\t\t\tif not exists('configure'):\n\t\t\t\tshprint(sh.Command('./autogen.sh'), _env=env)\n\t\t\tshprint(sh.Command('autoreconf'), '-vif', _env=env)\n\t\t\tshprint(sh.Command('./configure'), '--host=' + arch.toolchain_prefix,\n\t\t\t '--prefix=' + self.ctx.get_python_install_dir(),\n\t\t\t '--enable-shared', _env=env)\n\t\t\tshprint(sh.make, '-j5', 'libffi.la', _env=env)\n\n\n\t\t\t# dlname = None\n\t\t\t# with open(join(host, 'libffi.la')) as f:\n\t\t\t# \tfor line in f:\n\t\t\t# \t\tif line.startswith('dlname='):\n\t\t\t# \t\t\tdlname = line.strip()[8:-1]\n\t\t\t# \t\t\tbreak\n\t\t\t# \n\t\t\t# if not dlname or not exists(join(host, '.libs', dlname)):\n\t\t\t# \traise RuntimeError('failed to locate shared object! ({})'\n\t\t\t# \t .format(dlname))\n\n\t\t\t# shprint(sh.sed, '-i', 's/^dlname=.*$/dlname=\\'libffi.so\\'/', join(host, 'libffi.la'))\n\n\t\t\tshprint(sh.cp, '-t', self.ctx.get_libs_dir(arch.arch),\n\t\t\t join(self.get_host(arch), '.libs', 'libffi.so')) #,\n\t\t\t # join(host, 'libffi.la'))\n\n\tdef get_include_dirs(self, arch):\n\t\treturn [join(self.get_build_dir(arch.arch), self.get_host(arch), 'include')]\n\n\nrecipe = LibffiRecipe()\n", "path": "pythonforandroid/recipes/libffi/__init__.py"}]} | 1,243 | 182 |
gh_patches_debug_19803 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1678 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Braintree error when selecting address
I have had the following error for the past couple of months:
> fetch failed for source Braintree District Council: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py", line 45, in fetch id = next( ^^^^^ StopIteration
I am seeing address["value"] on line 43 with a length of 11.
Removing this line cures the problem for me, and I am not sure what this line is achieving.
https://github.com/mampfes/hacs_waste_collection_schedule/blob/73baeef472c5db4c85db3a7b725df68f3a15a101/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py#L43
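For context, here is a minimal sketch (with hypothetical option values) of how the strict length check leads to the reported `StopIteration`:
```python
# Hypothetical option values: the council's address IDs vary in length,
# so an 11-character ID fails the strict 12-character check.
options = {"11012345678": "30 Boars Tye Road", "110123456789": "1 High Street"}

addresses = {}
for value, text in options.items():
    if len(value) == 12:  # "30 Boars Tye Road" is silently dropped here
        addresses[value] = text

# With the matching address filtered out, next() finds nothing and raises StopIteration.
house_number = "30"
id = next(a for a in addresses if addresses[a].startswith(house_number))
```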
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py`
Content:
```
1 import requests
2 from bs4 import BeautifulSoup
3 from dateutil import parser
4 from waste_collection_schedule import Collection
5
6 TITLE = "Braintree District Council"
7 DESCRIPTION = "Braintree District Council, UK - Waste Collection"
8 URL = "https://www.braintree.gov.uk"
9 TEST_CASES = {
10 "30 Boars Tye Road": {"house_number": "30", "post_code": "CM8 3QE"},
11 "64 Silver Street": {"house_number": "64", "post_code": "CM8 3QG"},
12 "18 St Mary's Road": {"house_number": "1", "post_code": "CM8 3PE"},
13 "20 Peel Crescent": {"house_number": "20", "post_code": "CM7 2RS"},
14 }
15
16 ICON_MAP = {
17 "Grey Bin": "mdi:trash-can",
18 "Clear Sack": "mdi:recycle",
19 "Green Bin": "mdi:leaf",
20 "Food Bin": "mdi:food-apple",
21 }
22
23
24 class Source:
25 def __init__(self, post_code: str, house_number: str):
26 self.post_code = post_code
27 self.house_number = house_number
28 self.url = f"{URL}/xfp/form/554"
29 self.form_data = {
30 "qe15dda0155d237d1ea161004d1839e3369ed4831_0_0": (None, post_code),
31 "page": (None, 5730),
32 }
33
34 def fetch(self):
35 address_lookup = requests.post(
36 "https://www.braintree.gov.uk/xfp/form/554", files=self.form_data
37 )
38 address_lookup.raise_for_status()
39 addresses = {}
40 for address in BeautifulSoup(address_lookup.text, "html.parser").find_all(
41 "option"
42 ):
43 if len(address["value"]) == 12:
44 addresses[address["value"]] = address.text.strip()
45 id = next(
46 address
47 for address in addresses
48 if addresses[address].startswith(self.house_number)
49 )
50 self.form_data["qe15dda0155d237d1ea161004d1839e3369ed4831_1_0"] = (None, id)
51 self.form_data["next"] = (None, "Next")
52 collection_lookup = requests.post(
53 "https://www.braintree.gov.uk/xfp/form/554", files=self.form_data
54 )
55 collection_lookup.raise_for_status()
56 entries = []
57 for results in BeautifulSoup(collection_lookup.text, "html.parser").find_all(
58 "div", class_="date_display"
59 ):
60 try:
61 collection_type, collection_date = results.text.strip().split("\n")
62 entries.append(
63 Collection(
64 date=parser.parse(collection_date, dayfirst=True).date(),
65 t=collection_type,
66 icon=ICON_MAP.get(collection_type),
67 )
68 )
69 except (StopIteration, TypeError):
70 pass
71 return entries
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py
@@ -1,7 +1,7 @@
import requests
from bs4 import BeautifulSoup
from dateutil import parser
-from waste_collection_schedule import Collection
+from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Braintree District Council"
DESCRIPTION = "Braintree District Council, UK - Waste Collection"
@@ -40,7 +40,7 @@
for address in BeautifulSoup(address_lookup.text, "html.parser").find_all(
"option"
):
- if len(address["value"]) == 12:
+ if len(address["value"]) > 5: # Skip the first option
addresses[address["value"]] = address.text.strip()
id = next(
address
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py\n@@ -1,7 +1,7 @@\n import requests\n from bs4 import BeautifulSoup\n from dateutil import parser\n-from waste_collection_schedule import Collection\n+from waste_collection_schedule import Collection # type: ignore[attr-defined]\n \n TITLE = \"Braintree District Council\"\n DESCRIPTION = \"Braintree District Council, UK - Waste Collection\"\n@@ -40,7 +40,7 @@\n for address in BeautifulSoup(address_lookup.text, \"html.parser\").find_all(\n \"option\"\n ):\n- if len(address[\"value\"]) == 12:\n+ if len(address[\"value\"]) > 5: # Skip the first option\n addresses[address[\"value\"]] = address.text.strip()\n id = next(\n address\n", "issue": "Braintree error when selecting address\nI have had the follow error for the past couple of months:\r\n\r\n> fetch failed for source Braintree District Council: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py\", line 45, in fetch id = next( ^^^^^ StopIteration\r\n\r\nI am seeing address[\"value\"] on line 43 having a length of 11.\r\nRemoving this line cures the problem for me, and I am not sure what this line is achieving?\r\n\r\nhttps://github.com/mampfes/hacs_waste_collection_schedule/blob/73baeef472c5db4c85db3a7b725df68f3a15a101/custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py#L43\n", "before_files": [{"content": "import requests\nfrom bs4 import BeautifulSoup\nfrom dateutil import parser\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Braintree District Council\"\nDESCRIPTION = \"Braintree District Council, UK - Waste Collection\"\nURL = \"https://www.braintree.gov.uk\"\nTEST_CASES = {\n \"30 Boars Tye Road\": {\"house_number\": \"30\", \"post_code\": \"CM8 3QE\"},\n \"64 Silver Street\": {\"house_number\": \"64\", \"post_code\": \"CM8 3QG\"},\n \"18 St Mary's Road\": {\"house_number\": \"1\", \"post_code\": \"CM8 3PE\"},\n \"20 Peel Crescent\": {\"house_number\": \"20\", \"post_code\": \"CM7 2RS\"},\n}\n\nICON_MAP = {\n \"Grey Bin\": \"mdi:trash-can\",\n \"Clear Sack\": \"mdi:recycle\",\n \"Green Bin\": \"mdi:leaf\",\n \"Food Bin\": \"mdi:food-apple\",\n}\n\n\nclass Source:\n def __init__(self, post_code: str, house_number: str):\n self.post_code = post_code\n self.house_number = house_number\n self.url = f\"{URL}/xfp/form/554\"\n self.form_data = {\n \"qe15dda0155d237d1ea161004d1839e3369ed4831_0_0\": (None, post_code),\n \"page\": (None, 5730),\n }\n\n def fetch(self):\n address_lookup = requests.post(\n \"https://www.braintree.gov.uk/xfp/form/554\", files=self.form_data\n )\n address_lookup.raise_for_status()\n addresses = {}\n for address in BeautifulSoup(address_lookup.text, \"html.parser\").find_all(\n \"option\"\n ):\n if len(address[\"value\"]) == 12:\n addresses[address[\"value\"]] = address.text.strip()\n id = next(\n address\n for address in addresses\n if 
addresses[address].startswith(self.house_number)\n )\n self.form_data[\"qe15dda0155d237d1ea161004d1839e3369ed4831_1_0\"] = (None, id)\n self.form_data[\"next\"] = (None, \"Next\")\n collection_lookup = requests.post(\n \"https://www.braintree.gov.uk/xfp/form/554\", files=self.form_data\n )\n collection_lookup.raise_for_status()\n entries = []\n for results in BeautifulSoup(collection_lookup.text, \"html.parser\").find_all(\n \"div\", class_=\"date_display\"\n ):\n try:\n collection_type, collection_date = results.text.strip().split(\"\\n\")\n entries.append(\n Collection(\n date=parser.parse(collection_date, dayfirst=True).date(),\n t=collection_type,\n icon=ICON_MAP.get(collection_type),\n )\n )\n except (StopIteration, TypeError):\n pass\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py"}], "after_files": [{"content": "import requests\nfrom bs4 import BeautifulSoup\nfrom dateutil import parser\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Braintree District Council\"\nDESCRIPTION = \"Braintree District Council, UK - Waste Collection\"\nURL = \"https://www.braintree.gov.uk\"\nTEST_CASES = {\n \"30 Boars Tye Road\": {\"house_number\": \"30\", \"post_code\": \"CM8 3QE\"},\n \"64 Silver Street\": {\"house_number\": \"64\", \"post_code\": \"CM8 3QG\"},\n \"18 St Mary's Road\": {\"house_number\": \"1\", \"post_code\": \"CM8 3PE\"},\n \"20 Peel Crescent\": {\"house_number\": \"20\", \"post_code\": \"CM7 2RS\"},\n}\n\nICON_MAP = {\n \"Grey Bin\": \"mdi:trash-can\",\n \"Clear Sack\": \"mdi:recycle\",\n \"Green Bin\": \"mdi:leaf\",\n \"Food Bin\": \"mdi:food-apple\",\n}\n\n\nclass Source:\n def __init__(self, post_code: str, house_number: str):\n self.post_code = post_code\n self.house_number = house_number\n self.url = f\"{URL}/xfp/form/554\"\n self.form_data = {\n \"qe15dda0155d237d1ea161004d1839e3369ed4831_0_0\": (None, post_code),\n \"page\": (None, 5730),\n }\n\n def fetch(self):\n address_lookup = requests.post(\n \"https://www.braintree.gov.uk/xfp/form/554\", files=self.form_data\n )\n address_lookup.raise_for_status()\n addresses = {}\n for address in BeautifulSoup(address_lookup.text, \"html.parser\").find_all(\n \"option\"\n ):\n if len(address[\"value\"]) > 5: # Skip the first option\n addresses[address[\"value\"]] = address.text.strip()\n id = next(\n address\n for address in addresses\n if addresses[address].startswith(self.house_number)\n )\n self.form_data[\"qe15dda0155d237d1ea161004d1839e3369ed4831_1_0\"] = (None, id)\n self.form_data[\"next\"] = (None, \"Next\")\n collection_lookup = requests.post(\n \"https://www.braintree.gov.uk/xfp/form/554\", files=self.form_data\n )\n collection_lookup.raise_for_status()\n entries = []\n for results in BeautifulSoup(collection_lookup.text, \"html.parser\").find_all(\n \"div\", class_=\"date_display\"\n ):\n try:\n collection_type, collection_date = results.text.strip().split(\"\\n\")\n entries.append(\n Collection(\n date=parser.parse(collection_date, dayfirst=True).date(),\n t=collection_type,\n icon=ICON_MAP.get(collection_type),\n )\n )\n except (StopIteration, TypeError):\n pass\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/braintree_gov_uk.py"}]} | 1,324 | 245 |
gh_patches_debug_30483 | rasdani/github-patches | git_diff | tobymao__sqlglot-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ClickHouse WITH statement
According to the ClickHouse [docs](https://clickhouse.com/docs/en/sql-reference/statements/select/with/), a `WITH` statement looks like `WITH <expression> AS <identifier>`. `sqlglot` expects the identifier to precede the expression. So if I try to parse SQL like:
```sql
WITH '2019-08-01 15:23:00' as ts_upper_bound
SELECT *
FROM hits
WHERE
EventDate = toDate(ts_upper_bound) AND
EventTime <= ts_upper_bound
```
I get an exception:
```python
sqlglot.errors.ParseError: Expected CTE to have alias. Line 2, Col: 6.
```
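For reference, a minimal reproduction sketch (assuming the ClickHouse dialect is selected via `read="clickhouse"`):
```python
import sqlglot

sql = """
WITH '2019-08-01 15:23:00' as ts_upper_bound
SELECT *
FROM hits
WHERE
    EventDate = toDate(ts_upper_bound) AND
    EventTime <= ts_upper_bound
"""

# Raises sqlglot.errors.ParseError("Expected CTE to have alias. ...") as shown above.
sqlglot.parse_one(sql, read="clickhouse")
```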
One more problem is SQL generation. If I try to generate something like:
```sql
WITH trim(' A ') AS val SELECT val
```
I expect to write something like:
```python
select("val").with_("val", as_="trim(' a ')").sql(dialect=Dialects.CLICKHOUSE)
```
but receive this SQL, which is invalid in ClickHouse:
```sql
WITH val AS (TRIM(' a ')) SELECT val
```
If I switch the arguments of the `with_` method, I almost achieve the expected result:
```python
select("val").with_(parse_one("trim(' a ')"), as_="val", dialect=Dialects.CLICKHOUSE).sql(dialect=Dialects.CLICKHOUSE)
```
But the resulting SQL is also invalid due to the parentheses around `val`:
```sql
WITH TRIM(' a ') AS (val) SELECT val
```
I guess the ClickHouse dialect should define specific behaviour for `expressions.CTE` parsing/generation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sqlglot/dialects/clickhouse.py`
Content:
```
1 from __future__ import annotations
2
3 import typing as t
4
5 from sqlglot import exp, generator, parser, tokens
6 from sqlglot.dialects.dialect import Dialect, inline_array_sql, var_map_sql
7 from sqlglot.parser import parse_var_map
8 from sqlglot.tokens import TokenType
9
10
11 def _lower_func(sql):
12 index = sql.index("(")
13 return sql[:index].lower() + sql[index:]
14
15
16 class ClickHouse(Dialect):
17 normalize_functions = None
18 null_ordering = "nulls_are_last"
19
20 class Tokenizer(tokens.Tokenizer):
21 COMMENTS = ["--", "#", "#!", ("/*", "*/")]
22 IDENTIFIERS = ['"', "`"]
23
24 KEYWORDS = {
25 **tokens.Tokenizer.KEYWORDS,
26 "ASOF": TokenType.ASOF,
27 "GLOBAL": TokenType.GLOBAL,
28 "DATETIME64": TokenType.DATETIME,
29 "FINAL": TokenType.FINAL,
30 "FLOAT32": TokenType.FLOAT,
31 "FLOAT64": TokenType.DOUBLE,
32 "INT16": TokenType.SMALLINT,
33 "INT32": TokenType.INT,
34 "INT64": TokenType.BIGINT,
35 "INT8": TokenType.TINYINT,
36 "TUPLE": TokenType.STRUCT,
37 }
38
39 class Parser(parser.Parser):
40 FUNCTIONS = {
41 **parser.Parser.FUNCTIONS, # type: ignore
42 "MAP": parse_var_map,
43 "QUANTILE": lambda params, args: exp.Quantile(this=args, quantile=params),
44 "QUANTILES": lambda params, args: exp.Quantiles(parameters=params, expressions=args),
45 "QUANTILEIF": lambda params, args: exp.QuantileIf(parameters=params, expressions=args),
46 }
47
48 RANGE_PARSERS = {
49 **parser.Parser.RANGE_PARSERS,
50 TokenType.GLOBAL: lambda self, this: self._match(TokenType.IN)
51 and self._parse_in(this, is_global=True),
52 }
53
54 JOIN_KINDS = {*parser.Parser.JOIN_KINDS, TokenType.ANY, TokenType.ASOF} # type: ignore
55
56 TABLE_ALIAS_TOKENS = {*parser.Parser.TABLE_ALIAS_TOKENS} - {TokenType.ANY} # type: ignore
57
58 def _parse_in(
59 self, this: t.Optional[exp.Expression], is_global: bool = False
60 ) -> exp.Expression:
61 this = super()._parse_in(this)
62 this.set("is_global", is_global)
63 return this
64
65 def _parse_table(
66 self, schema: bool = False, alias_tokens: t.Optional[t.Collection[TokenType]] = None
67 ) -> t.Optional[exp.Expression]:
68 this = super()._parse_table(schema=schema, alias_tokens=alias_tokens)
69
70 if self._match(TokenType.FINAL):
71 this = self.expression(exp.Final, this=this)
72
73 return this
74
75 def _parse_position(self) -> exp.Expression:
76 this = super()._parse_position()
77 # clickhouse position args are swapped
78 substr = this.this
79 this.args["this"] = this.args.get("substr")
80 this.args["substr"] = substr
81 return this
82
83 class Generator(generator.Generator):
84 STRUCT_DELIMITER = ("(", ")")
85
86 TYPE_MAPPING = {
87 **generator.Generator.TYPE_MAPPING, # type: ignore
88 exp.DataType.Type.NULLABLE: "Nullable",
89 exp.DataType.Type.DATETIME: "DateTime64",
90 exp.DataType.Type.MAP: "Map",
91 exp.DataType.Type.ARRAY: "Array",
92 exp.DataType.Type.STRUCT: "Tuple",
93 exp.DataType.Type.TINYINT: "Int8",
94 exp.DataType.Type.SMALLINT: "Int16",
95 exp.DataType.Type.INT: "Int32",
96 exp.DataType.Type.BIGINT: "Int64",
97 exp.DataType.Type.FLOAT: "Float32",
98 exp.DataType.Type.DOUBLE: "Float64",
99 }
100
101 TRANSFORMS = {
102 **generator.Generator.TRANSFORMS, # type: ignore
103 exp.Array: inline_array_sql,
104 exp.StrPosition: lambda self, e: f"position({self.format_args(e.this, e.args.get('substr'), e.args.get('position'))})",
105 exp.Final: lambda self, e: f"{self.sql(e, 'this')} FINAL",
106 exp.Map: lambda self, e: _lower_func(var_map_sql(self, e)),
107 exp.VarMap: lambda self, e: _lower_func(var_map_sql(self, e)),
108 exp.Quantile: lambda self, e: f"quantile{self._param_args_sql(e, 'quantile', 'this')}",
109 exp.Quantiles: lambda self, e: f"quantiles{self._param_args_sql(e, 'parameters', 'expressions')}",
110 exp.QuantileIf: lambda self, e: f"quantileIf{self._param_args_sql(e, 'parameters', 'expressions')}",
111 }
112
113 EXPLICIT_UNION = True
114
115 def _param_args_sql(
116 self, expression: exp.Expression, params_name: str, args_name: str
117 ) -> str:
118 params = self.format_args(self.expressions(expression, params_name))
119 args = self.format_args(self.expressions(expression, args_name))
120 return f"({params})({args})"
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py
--- a/sqlglot/dialects/clickhouse.py
+++ b/sqlglot/dialects/clickhouse.py
@@ -4,6 +4,7 @@
from sqlglot import exp, generator, parser, tokens
from sqlglot.dialects.dialect import Dialect, inline_array_sql, var_map_sql
+from sqlglot.errors import ParseError
from sqlglot.parser import parse_var_map
from sqlglot.tokens import TokenType
@@ -80,6 +81,22 @@
this.args["substr"] = substr
return this
+ # https://clickhouse.com/docs/en/sql-reference/statements/select/with/
+ def _parse_cte(self) -> exp.Expression:
+ index = self._index
+ try:
+ # WITH <identifier> AS <subquery expression>
+ return super()._parse_cte()
+ except ParseError:
+ # WITH <expression> AS <identifier>
+ self._retreat(index)
+ statement = self._parse_statement()
+
+ if statement and isinstance(statement.this, exp.Alias):
+ self.raise_error("Expected CTE to have alias")
+
+ return self.expression(exp.CTE, this=statement, alias=statement and statement.this)
+
class Generator(generator.Generator):
STRUCT_DELIMITER = ("(", ")")
@@ -118,3 +135,9 @@
params = self.format_args(self.expressions(expression, params_name))
args = self.format_args(self.expressions(expression, args_name))
return f"({params})({args})"
+
+ def cte_sql(self, expression: exp.CTE) -> str:
+ if isinstance(expression.this, exp.Alias):
+ return self.sql(expression, "this")
+
+ return super().cte_sql(expression)
| {"golden_diff": "diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py\n--- a/sqlglot/dialects/clickhouse.py\n+++ b/sqlglot/dialects/clickhouse.py\n@@ -4,6 +4,7 @@\n \n from sqlglot import exp, generator, parser, tokens\n from sqlglot.dialects.dialect import Dialect, inline_array_sql, var_map_sql\n+from sqlglot.errors import ParseError\n from sqlglot.parser import parse_var_map\n from sqlglot.tokens import TokenType\n \n@@ -80,6 +81,22 @@\n this.args[\"substr\"] = substr\n return this\n \n+ # https://clickhouse.com/docs/en/sql-reference/statements/select/with/\n+ def _parse_cte(self) -> exp.Expression:\n+ index = self._index\n+ try:\n+ # WITH <identifier> AS <subquery expression>\n+ return super()._parse_cte()\n+ except ParseError:\n+ # WITH <expression> AS <identifier>\n+ self._retreat(index)\n+ statement = self._parse_statement()\n+\n+ if statement and isinstance(statement.this, exp.Alias):\n+ self.raise_error(\"Expected CTE to have alias\")\n+\n+ return self.expression(exp.CTE, this=statement, alias=statement and statement.this)\n+\n class Generator(generator.Generator):\n STRUCT_DELIMITER = (\"(\", \")\")\n \n@@ -118,3 +135,9 @@\n params = self.format_args(self.expressions(expression, params_name))\n args = self.format_args(self.expressions(expression, args_name))\n return f\"({params})({args})\"\n+\n+ def cte_sql(self, expression: exp.CTE) -> str:\n+ if isinstance(expression.this, exp.Alias):\n+ return self.sql(expression, \"this\")\n+\n+ return super().cte_sql(expression)\n", "issue": "ClickHouse WITH statement\nAccording to ClickHouse [docs](https://clickhouse.com/docs/en/sql-reference/statements/select/with/), `WITH` statement looks like `WITH <expression> AS <identifier>`. `sqlglot` expects that identifier precedes expression. So if I'll try to parse SQL like:\r\n```sql\r\nWITH '2019-08-01 15:23:00' as ts_upper_bound\r\nSELECT *\r\nFROM hits\r\nWHERE\r\n EventDate = toDate(ts_upper_bound) AND\r\n EventTime <= ts_upper_bound\r\n```\r\nI have exception:\r\n```python\r\nsqlglot.errors.ParseError: Expected CTE to have alias. Line 2, Col: 6.\r\n```\r\nOne more problem is SQL generation. 
If I try to generate something like:\r\n```sql\r\nWITH trim(' A ') AS val SELECT val\r\n```\r\nI expect to write something like:\r\n```python\r\nselect(\"val\").with_(\"val\", as_=\"trim(' a ')\").sql(dialect=Dialects.CLICKHOUSE)\r\n```\r\nbut receive this SQL, which is invalid in ClickHouse:\r\n```sql\r\nWITH val AS (TRIM(' a ')) SELECT val\r\n```\r\nIf I switch arguments of `with_` method, I'll almost achieve expected:\r\n```python\r\nselect(\"val\").with_(parse_one(\"trim(' a ')\"), as_=\"val\", dialect=Dialects.CLICKHOUSE).sql(dialect=Dialects.CLICKHOUSE)\r\n```\r\nBut resulting SQL is also invalid due to parentheses around `val`:\r\n```sql\r\nWITH TRIM(' a ') AS (val) SELECT val\r\n```\r\nI guess ClickHouse dialect should have specified behaviour for `expressions.CTE` parsing/generation.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport typing as t\n\nfrom sqlglot import exp, generator, parser, tokens\nfrom sqlglot.dialects.dialect import Dialect, inline_array_sql, var_map_sql\nfrom sqlglot.parser import parse_var_map\nfrom sqlglot.tokens import TokenType\n\n\ndef _lower_func(sql):\n index = sql.index(\"(\")\n return sql[:index].lower() + sql[index:]\n\n\nclass ClickHouse(Dialect):\n normalize_functions = None\n null_ordering = \"nulls_are_last\"\n\n class Tokenizer(tokens.Tokenizer):\n COMMENTS = [\"--\", \"#\", \"#!\", (\"/*\", \"*/\")]\n IDENTIFIERS = ['\"', \"`\"]\n\n KEYWORDS = {\n **tokens.Tokenizer.KEYWORDS,\n \"ASOF\": TokenType.ASOF,\n \"GLOBAL\": TokenType.GLOBAL,\n \"DATETIME64\": TokenType.DATETIME,\n \"FINAL\": TokenType.FINAL,\n \"FLOAT32\": TokenType.FLOAT,\n \"FLOAT64\": TokenType.DOUBLE,\n \"INT16\": TokenType.SMALLINT,\n \"INT32\": TokenType.INT,\n \"INT64\": TokenType.BIGINT,\n \"INT8\": TokenType.TINYINT,\n \"TUPLE\": TokenType.STRUCT,\n }\n\n class Parser(parser.Parser):\n FUNCTIONS = {\n **parser.Parser.FUNCTIONS, # type: ignore\n \"MAP\": parse_var_map,\n \"QUANTILE\": lambda params, args: exp.Quantile(this=args, quantile=params),\n \"QUANTILES\": lambda params, args: exp.Quantiles(parameters=params, expressions=args),\n \"QUANTILEIF\": lambda params, args: exp.QuantileIf(parameters=params, expressions=args),\n }\n\n RANGE_PARSERS = {\n **parser.Parser.RANGE_PARSERS,\n TokenType.GLOBAL: lambda self, this: self._match(TokenType.IN)\n and self._parse_in(this, is_global=True),\n }\n\n JOIN_KINDS = {*parser.Parser.JOIN_KINDS, TokenType.ANY, TokenType.ASOF} # type: ignore\n\n TABLE_ALIAS_TOKENS = {*parser.Parser.TABLE_ALIAS_TOKENS} - {TokenType.ANY} # type: ignore\n\n def _parse_in(\n self, this: t.Optional[exp.Expression], is_global: bool = False\n ) -> exp.Expression:\n this = super()._parse_in(this)\n this.set(\"is_global\", is_global)\n return this\n\n def _parse_table(\n self, schema: bool = False, alias_tokens: t.Optional[t.Collection[TokenType]] = None\n ) -> t.Optional[exp.Expression]:\n this = super()._parse_table(schema=schema, alias_tokens=alias_tokens)\n\n if self._match(TokenType.FINAL):\n this = self.expression(exp.Final, this=this)\n\n return this\n\n def _parse_position(self) -> exp.Expression:\n this = super()._parse_position()\n # clickhouse position args are swapped\n substr = this.this\n this.args[\"this\"] = this.args.get(\"substr\")\n this.args[\"substr\"] = substr\n return this\n\n class Generator(generator.Generator):\n STRUCT_DELIMITER = (\"(\", \")\")\n\n TYPE_MAPPING = {\n **generator.Generator.TYPE_MAPPING, # type: ignore\n exp.DataType.Type.NULLABLE: \"Nullable\",\n exp.DataType.Type.DATETIME: 
\"DateTime64\",\n exp.DataType.Type.MAP: \"Map\",\n exp.DataType.Type.ARRAY: \"Array\",\n exp.DataType.Type.STRUCT: \"Tuple\",\n exp.DataType.Type.TINYINT: \"Int8\",\n exp.DataType.Type.SMALLINT: \"Int16\",\n exp.DataType.Type.INT: \"Int32\",\n exp.DataType.Type.BIGINT: \"Int64\",\n exp.DataType.Type.FLOAT: \"Float32\",\n exp.DataType.Type.DOUBLE: \"Float64\",\n }\n\n TRANSFORMS = {\n **generator.Generator.TRANSFORMS, # type: ignore\n exp.Array: inline_array_sql,\n exp.StrPosition: lambda self, e: f\"position({self.format_args(e.this, e.args.get('substr'), e.args.get('position'))})\",\n exp.Final: lambda self, e: f\"{self.sql(e, 'this')} FINAL\",\n exp.Map: lambda self, e: _lower_func(var_map_sql(self, e)),\n exp.VarMap: lambda self, e: _lower_func(var_map_sql(self, e)),\n exp.Quantile: lambda self, e: f\"quantile{self._param_args_sql(e, 'quantile', 'this')}\",\n exp.Quantiles: lambda self, e: f\"quantiles{self._param_args_sql(e, 'parameters', 'expressions')}\",\n exp.QuantileIf: lambda self, e: f\"quantileIf{self._param_args_sql(e, 'parameters', 'expressions')}\",\n }\n\n EXPLICIT_UNION = True\n\n def _param_args_sql(\n self, expression: exp.Expression, params_name: str, args_name: str\n ) -> str:\n params = self.format_args(self.expressions(expression, params_name))\n args = self.format_args(self.expressions(expression, args_name))\n return f\"({params})({args})\"\n", "path": "sqlglot/dialects/clickhouse.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport typing as t\n\nfrom sqlglot import exp, generator, parser, tokens\nfrom sqlglot.dialects.dialect import Dialect, inline_array_sql, var_map_sql\nfrom sqlglot.errors import ParseError\nfrom sqlglot.parser import parse_var_map\nfrom sqlglot.tokens import TokenType\n\n\ndef _lower_func(sql):\n index = sql.index(\"(\")\n return sql[:index].lower() + sql[index:]\n\n\nclass ClickHouse(Dialect):\n normalize_functions = None\n null_ordering = \"nulls_are_last\"\n\n class Tokenizer(tokens.Tokenizer):\n COMMENTS = [\"--\", \"#\", \"#!\", (\"/*\", \"*/\")]\n IDENTIFIERS = ['\"', \"`\"]\n\n KEYWORDS = {\n **tokens.Tokenizer.KEYWORDS,\n \"ASOF\": TokenType.ASOF,\n \"GLOBAL\": TokenType.GLOBAL,\n \"DATETIME64\": TokenType.DATETIME,\n \"FINAL\": TokenType.FINAL,\n \"FLOAT32\": TokenType.FLOAT,\n \"FLOAT64\": TokenType.DOUBLE,\n \"INT16\": TokenType.SMALLINT,\n \"INT32\": TokenType.INT,\n \"INT64\": TokenType.BIGINT,\n \"INT8\": TokenType.TINYINT,\n \"TUPLE\": TokenType.STRUCT,\n }\n\n class Parser(parser.Parser):\n FUNCTIONS = {\n **parser.Parser.FUNCTIONS, # type: ignore\n \"MAP\": parse_var_map,\n \"QUANTILE\": lambda params, args: exp.Quantile(this=args, quantile=params),\n \"QUANTILES\": lambda params, args: exp.Quantiles(parameters=params, expressions=args),\n \"QUANTILEIF\": lambda params, args: exp.QuantileIf(parameters=params, expressions=args),\n }\n\n RANGE_PARSERS = {\n **parser.Parser.RANGE_PARSERS,\n TokenType.GLOBAL: lambda self, this: self._match(TokenType.IN)\n and self._parse_in(this, is_global=True),\n }\n\n JOIN_KINDS = {*parser.Parser.JOIN_KINDS, TokenType.ANY, TokenType.ASOF} # type: ignore\n\n TABLE_ALIAS_TOKENS = {*parser.Parser.TABLE_ALIAS_TOKENS} - {TokenType.ANY} # type: ignore\n\n def _parse_in(\n self, this: t.Optional[exp.Expression], is_global: bool = False\n ) -> exp.Expression:\n this = super()._parse_in(this)\n this.set(\"is_global\", is_global)\n return this\n\n def _parse_table(\n self, schema: bool = False, alias_tokens: t.Optional[t.Collection[TokenType]] = None\n ) -> 
t.Optional[exp.Expression]:\n this = super()._parse_table(schema=schema, alias_tokens=alias_tokens)\n\n if self._match(TokenType.FINAL):\n this = self.expression(exp.Final, this=this)\n\n return this\n\n def _parse_position(self) -> exp.Expression:\n this = super()._parse_position()\n # clickhouse position args are swapped\n substr = this.this\n this.args[\"this\"] = this.args.get(\"substr\")\n this.args[\"substr\"] = substr\n return this\n\n # https://clickhouse.com/docs/en/sql-reference/statements/select/with/\n def _parse_cte(self) -> exp.Expression:\n index = self._index\n try:\n # WITH <identifier> AS <subquery expression>\n return super()._parse_cte()\n except ParseError:\n # WITH <expression> AS <identifier>\n self._retreat(index)\n statement = self._parse_statement()\n\n if statement and isinstance(statement.this, exp.Alias):\n self.raise_error(\"Expected CTE to have alias\")\n\n return self.expression(exp.CTE, this=statement, alias=statement and statement.this)\n\n class Generator(generator.Generator):\n STRUCT_DELIMITER = (\"(\", \")\")\n\n TYPE_MAPPING = {\n **generator.Generator.TYPE_MAPPING, # type: ignore\n exp.DataType.Type.NULLABLE: \"Nullable\",\n exp.DataType.Type.DATETIME: \"DateTime64\",\n exp.DataType.Type.MAP: \"Map\",\n exp.DataType.Type.ARRAY: \"Array\",\n exp.DataType.Type.STRUCT: \"Tuple\",\n exp.DataType.Type.TINYINT: \"Int8\",\n exp.DataType.Type.SMALLINT: \"Int16\",\n exp.DataType.Type.INT: \"Int32\",\n exp.DataType.Type.BIGINT: \"Int64\",\n exp.DataType.Type.FLOAT: \"Float32\",\n exp.DataType.Type.DOUBLE: \"Float64\",\n }\n\n TRANSFORMS = {\n **generator.Generator.TRANSFORMS, # type: ignore\n exp.Array: inline_array_sql,\n exp.StrPosition: lambda self, e: f\"position({self.format_args(e.this, e.args.get('substr'), e.args.get('position'))})\",\n exp.Final: lambda self, e: f\"{self.sql(e, 'this')} FINAL\",\n exp.Map: lambda self, e: _lower_func(var_map_sql(self, e)),\n exp.VarMap: lambda self, e: _lower_func(var_map_sql(self, e)),\n exp.Quantile: lambda self, e: f\"quantile{self._param_args_sql(e, 'quantile', 'this')}\",\n exp.Quantiles: lambda self, e: f\"quantiles{self._param_args_sql(e, 'parameters', 'expressions')}\",\n exp.QuantileIf: lambda self, e: f\"quantileIf{self._param_args_sql(e, 'parameters', 'expressions')}\",\n }\n\n EXPLICIT_UNION = True\n\n def _param_args_sql(\n self, expression: exp.Expression, params_name: str, args_name: str\n ) -> str:\n params = self.format_args(self.expressions(expression, params_name))\n args = self.format_args(self.expressions(expression, args_name))\n return f\"({params})({args})\"\n\n def cte_sql(self, expression: exp.CTE) -> str:\n if isinstance(expression.this, exp.Alias):\n return self.sql(expression, \"this\")\n\n return super().cte_sql(expression)\n", "path": "sqlglot/dialects/clickhouse.py"}]} | 2,036 | 418 |
gh_patches_debug_15643 | rasdani/github-patches | git_diff | platformsh__platformsh-docs-1859 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Poor search results
If I search on https://docs.platform.sh for "routes", I get the following results:
* https://docs.platform.sh/administration/web/configure-environment.html#routes
* https://docs.platform.sh/gettingstarted/introduction/own-code/routes-configuration.html
* https://docs.platform.sh/configuration/app/upgrading.html#platformroutesyaml
* https://docs.platform.sh/guides/gatsby/headless/drupal.html#platformroutesyaml
* https://docs.platform.sh/guides/gatsby/headless/strapi.html#platformroutesyaml
* https://docs.platform.sh/guides/gatsby/headless/wordpress.html#platformroutesyaml
* https://docs.platform.sh/guides/drupal9/deploy/configure.html#requests-configuration-routesyaml
Note the absence of the main resource on this topic: https://docs.platform.sh/configuration/routes.html
I've also tried "routes.yaml" and "configure routes", neither of which returns the main routing configuration page.
The same issue appears to afflict these pages as well, which seem near-impossible to locate by search:
* https://docs.platform.sh/configuration/services.html
* https://docs.platform.sh/configuration/app.html
Doing a bit of digging: if I make the same search query with a 1000-result limit, the routes URL does not show up in the results, so perhaps it's not being indexed.
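One way to test the indexing hypothesis, sketched against the local Meilisearch defaults used by `search/main.py` (the host and index name here are assumptions based on that file):
```python
import meilisearch

client = meilisearch.Client("http://127.0.0.1:7700")
index = client.get_index("docs")

# If the routes page were indexed, it should surface somewhere in a large
# result window; an empty list supports the "not indexed" theory.
hits = index.search("routes", {"limit": 1000})["hits"]
print([hit["url"] for hit in hits if "configuration/routes" in hit["url"]])
```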
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `search/main.py`
Content:
```
1 import os
2 import glob
3 import json
4 import meilisearch
5 from platformshconfig import Config
6
7 class Search:
8 def __init__(self):
9 self.default = {
10 "host": "http://127.0.0.1",
11 "key": None,
12 "port": 7700
13 }
14
15 self.scrape_dir = "output"
16 self.scrape_config = "config/scrape.json"
17 self.docs_index = "docs"
18 self.primaryKey = "documentId"
19 self.index_name = "Docs"
20
21 # Below are Platform.sh custom settings for how the search engine functions.
22
23 # Data available to the dropdown React app in docs, used to fill out autocomplete results.
24 self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']
25 # Data actually searchable by our queries.
26 self.searchable_attributes = ['title', 'text', 'section']
27
28 # Show results for one query with the listed pages, when they by default would not show up as best results. Note: these
29 # are not automatically two-way, so that's why they all appear to be defined twice.
30 self.synonyms = {
31 "routes.yaml": ["routes"],
32 "routes": ["routes.yaml"],
33 "services": ["services.yaml"],
34 "services.yaml": ["services"],
35 "application": [".platform.app.yaml", "app.yaml", "applications.yaml"],
36 ".platform.app.yaml": ["application"],
37 "app.yaml": ["application"],
38 "applications.yaml": ["application", "multi-app"],
39 "multi-app": ["applications.yaml"],
40 "regions": ["public ip addresses"],
41 "public ip addresses": ["regions"],
42 "ssl": ["https", "tls"],
43 "https": ["ssl"],
44 }
45
46 # Ranking rules:
47 #
48 # - Default order: ["typo", "words", "proximity", "attribute", "wordsPosition", "exactness"]
49 #
50 # - typo: fewer typos > more typos
51 # - words: number of times query is in document (greater number gets priority)
52 # - proximity: smaller distance between multiple occurences of query in same document > larger distances
53 # - attribute: sorted according to order of importance of attributes (searchable_attributes). terms in
54 # more important attributes first.
55 # - wordsPosition: query terms earlier in document > later in document
56 # - exactness: similarity of matched words in document with query
57
58 self.ranking_rules = ["asc(rank)", "attribute", "typo", "words", "proximity", "wordsPosition", "exactness"]
59
60 self.updated_settings = {
61 "rankingRules": self.ranking_rules,
62 "searchableAttributes": self.searchable_attributes,
63 "displayedAttributes": self.displayed_attributes
64 }
65
66 self.distinct_attribute = "url"
67
68 def getConnectionString(self):
69 """
70 Sets the Meilisearch host string, depending on the environment.
71
72 Returns:
73 string: Meilisearch host string.
74 """
75 if os.environ.get('PORT'):
76 return "{}:{}".format(self.default["host"], os.environ['PORT'])
77 else:
78 return "{}:{}".format(self.default["host"], self.default["port"])
79
80 def getMasterKey(self):
81 """
82 Retrieves the Meilisearch master key, either from the Platform.sh environment or locally.
83 """
84 config = Config()
85 if config.is_valid_platform():
86 return config.projectEntropy
87 elif os.environ.get("MEILI_MASTER_KEY"):
88 return os.environ["MEILI_MASTER_KEY"]
89 else:
90 return self.default["key"]
91
92 def add_documents(self, index):
93 """
94 Cycle through the individual site indexes in /outputs so their individual documents can be added to Meilisearch.
95 """
96 documents = [f for f in glob.glob("{}/*.json".format(self.scrape_dir))]
97 for doc in documents:
98 self.add(doc, index)
99
100 def add(self, doc, index):
101 """
102 Add an individual site's index to the Meilisearch service.
103 """
104 with open(doc) as scraped_index:
105 data = json.load(scraped_index)
106 index.add_documents(data)
107
108 def update(self):
109 """
110 Updates the Meilisearch index.
111 """
112 # Create a Meilisearch client.
113 client = meilisearch.Client(self.getConnectionString(), self.getMasterKey())
114
115 # Delete previous index
116 if len(client.get_indexes()):
117 client.get_index(self.docs_index).delete()
118
119 # Create a new index
120 index = client.create_index(uid=self.docs_index, options={'primaryKey': self.primaryKey, 'name': self.index_name})
121
122 # Add synonyms for the index
123 index.update_synonyms(self.synonyms)
124
125 # Update its settings: what can be searched, what's displayable, and how results should be ranked.
126 index.update_settings(self.updated_settings)
127
128 # Update distinct attribute.
129 index.update_distinct_attribute(self.distinct_attribute)
130
131 # Add documents to the index
132 self.add_documents(index)
133
134 if __name__ == "__main__":
135 meili = Search()
136 meili.update()
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/search/main.py b/search/main.py
--- a/search/main.py
+++ b/search/main.py
@@ -23,7 +23,7 @@
# Data available to the dropdown React app in docs, used to fill out autocomplete results.
self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']
# Data actually searchable by our queries.
- self.searchable_attributes = ['title', 'text', 'section']
+ self.searchable_attributes = ['title', 'text', 'url', 'section']
# Show results for one query with the listed pages, when they by default would not show up as best results. Note: these
# are not automatically two-way, so that's why they all appear to be defined twice.
| {"golden_diff": "diff --git a/search/main.py b/search/main.py\n--- a/search/main.py\n+++ b/search/main.py\n@@ -23,7 +23,7 @@\n # Data available to the dropdown React app in docs, used to fill out autocomplete results.\n self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']\n # Data actually searchable by our queries.\n- self.searchable_attributes = ['title', 'text', 'section']\n+ self.searchable_attributes = ['title', 'text', 'url', 'section']\n \n # Show results for one query with the listed pages, when they by default would not show up as best results. Note: these\n # are not automatically two-way, so that's why they all appear to be defined twice.\n", "issue": "Poor search results\nIf I search on https://docs.platform.sh for \"routes\", I get the following results:\r\n\r\n* https://docs.platform.sh/administration/web/configure-environment.html#routes\r\n* https://docs.platform.sh/gettingstarted/introduction/own-code/routes-configuration.html\r\n* https://docs.platform.sh/configuration/app/upgrading.html#platformroutesyaml\r\n* https://docs.platform.sh/guides/gatsby/headless/drupal.html#platformroutesyaml\r\n* https://docs.platform.sh/guides/gatsby/headless/strapi.html#platformroutesyaml\r\n* https://docs.platform.sh/guides/gatsby/headless/wordpress.html#platformroutesyaml\r\n* https://docs.platform.sh/guides/drupal9/deploy/configure.html#requests-configuration-routesyaml\r\n\r\nNote the absence of the main resource on this topic: https://docs.platform.sh/configuration/routes.html\r\n\r\nI've also tried \"routes.yaml\" and \"configure routes\" neither of which return the main routing configuration page.\r\n\r\nThe same issue appears to afflict these pages as well which seem near-impossible to locate by search:\r\n* https://docs.platform.sh/configuration/services.html\r\n* https://docs.platform.sh/configuration/app.html\r\n\r\nDoing a bit of digging if I make the same search query with 1000 results limit the routes URL does not show up in the results, so perhaps it's not being indexed.\n", "before_files": [{"content": "import os\nimport glob\nimport json\nimport meilisearch\nfrom platformshconfig import Config\n\nclass Search:\n def __init__(self):\n self.default = {\n \"host\": \"http://127.0.0.1\",\n \"key\": None,\n \"port\": 7700\n }\n\n self.scrape_dir = \"output\"\n self.scrape_config = \"config/scrape.json\"\n self.docs_index = \"docs\"\n self.primaryKey = \"documentId\"\n self.index_name = \"Docs\"\n\n # Below are Platform.sh custom settings for how the search engine functions.\n\n # Data available to the dropdown React app in docs, used to fill out autocomplete results.\n self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']\n # Data actually searchable by our queries.\n self.searchable_attributes = ['title', 'text', 'section']\n\n # Show results for one query with the listed pages, when they by default would not show up as best results. 
Note: these\n # are not automatically two-way, so that's why they all appear to be defined twice.\n self.synonyms = {\n \"routes.yaml\": [\"routes\"],\n \"routes\": [\"routes.yaml\"],\n \"services\": [\"services.yaml\"],\n \"services.yaml\": [\"services\"],\n \"application\": [\".platform.app.yaml\", \"app.yaml\", \"applications.yaml\"],\n \".platform.app.yaml\": [\"application\"],\n \"app.yaml\": [\"application\"],\n \"applications.yaml\": [\"application\", \"multi-app\"],\n \"multi-app\": [\"applications.yaml\"],\n \"regions\": [\"public ip addresses\"],\n \"public ip addresses\": [\"regions\"],\n \"ssl\": [\"https\", \"tls\"],\n \"https\": [\"ssl\"],\n }\n\n # Ranking rules:\n #\n # - Default order: [\"typo\", \"words\", \"proximity\", \"attribute\", \"wordsPosition\", \"exactness\"]\n #\n # - typo: fewer typos > more typos\n # - words: number of times query is in document (greater number gets priority)\n # - proximity: smaller distance between multiple occurences of query in same document > larger distances\n # - attribute: sorted according to order of importance of attributes (searchable_attributes). terms in\n # more important attributes first.\n # - wordsPosition: query terms earlier in document > later in document\n # - exactness: similarity of matched words in document with query\n\n self.ranking_rules = [\"asc(rank)\", \"attribute\", \"typo\", \"words\", \"proximity\", \"wordsPosition\", \"exactness\"]\n\n self.updated_settings = {\n \"rankingRules\": self.ranking_rules,\n \"searchableAttributes\": self.searchable_attributes,\n \"displayedAttributes\": self.displayed_attributes\n }\n\n self.distinct_attribute = \"url\"\n\n def getConnectionString(self):\n \"\"\"\n Sets the Meilisearch host string, depending on the environment.\n\n Returns:\n string: Meilisearch host string.\n \"\"\"\n if os.environ.get('PORT'):\n return \"{}:{}\".format(self.default[\"host\"], os.environ['PORT'])\n else:\n return \"{}:{}\".format(self.default[\"host\"], self.default[\"port\"])\n\n def getMasterKey(self):\n \"\"\"\n Retrieves the Meilisearch master key, either from the Platform.sh environment or locally.\n \"\"\"\n config = Config()\n if config.is_valid_platform():\n return config.projectEntropy\n elif os.environ.get(\"MEILI_MASTER_KEY\"):\n return os.environ[\"MEILI_MASTER_KEY\"]\n else:\n return self.default[\"key\"]\n\n def add_documents(self, index):\n \"\"\"\n Cycle through the individual site indexes in /outputs so their individual documents can be added to Meilisearch.\n \"\"\"\n documents = [f for f in glob.glob(\"{}/*.json\".format(self.scrape_dir))]\n for doc in documents:\n self.add(doc, index)\n\n def add(self, doc, index):\n \"\"\"\n Add an individual site's index to the Meilisearch service.\n \"\"\"\n with open(doc) as scraped_index:\n data = json.load(scraped_index)\n index.add_documents(data)\n\n def update(self):\n \"\"\"\n Updates the Meilisearch index.\n \"\"\"\n # Create a Meilisearch client.\n client = meilisearch.Client(self.getConnectionString(), self.getMasterKey())\n\n # Delete previous index\n if len(client.get_indexes()):\n client.get_index(self.docs_index).delete()\n\n # Create a new index\n index = client.create_index(uid=self.docs_index, options={'primaryKey': self.primaryKey, 'name': self.index_name})\n\n # Add synonyms for the index\n index.update_synonyms(self.synonyms)\n\n # Update its settings: what can be searched, what's displayable, and how results should be ranked.\n index.update_settings(self.updated_settings)\n\n # Update distinct attribute.\n 
index.update_distinct_attribute(self.distinct_attribute)\n\n # Add documents to the index\n self.add_documents(index)\n\nif __name__ == \"__main__\":\n meili = Search()\n meili.update()\n", "path": "search/main.py"}], "after_files": [{"content": "import os\nimport glob\nimport json\nimport meilisearch\nfrom platformshconfig import Config\n\nclass Search:\n def __init__(self):\n self.default = {\n \"host\": \"http://127.0.0.1\",\n \"key\": None,\n \"port\": 7700\n }\n\n self.scrape_dir = \"output\"\n self.scrape_config = \"config/scrape.json\"\n self.docs_index = \"docs\"\n self.primaryKey = \"documentId\"\n self.index_name = \"Docs\"\n\n # Below are Platform.sh custom settings for how the search engine functions.\n\n # Data available to the dropdown React app in docs, used to fill out autocomplete results.\n self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']\n # Data actually searchable by our queries.\n self.searchable_attributes = ['title', 'text', 'url', 'section']\n\n # Show results for one query with the listed pages, when they by default would not show up as best results. Note: these\n # are not automatically two-way, so that's why they all appear to be defined twice.\n self.synonyms = {\n \"routes.yaml\": [\"routes\"],\n \"routes\": [\"routes.yaml\"],\n \"services\": [\"services.yaml\"],\n \"services.yaml\": [\"services\"],\n \"application\": [\".platform.app.yaml\", \"app.yaml\", \"applications.yaml\"],\n \".platform.app.yaml\": [\"application\"],\n \"app.yaml\": [\"application\"],\n \"applications.yaml\": [\"application\", \"multi-app\"],\n \"multi-app\": [\"applications.yaml\"],\n \"regions\": [\"public ip addresses\"],\n \"public ip addresses\": [\"regions\"],\n \"ssl\": [\"https\", \"tls\"],\n \"https\": [\"ssl\"],\n }\n\n # Ranking rules:\n #\n # - Default order: [\"typo\", \"words\", \"proximity\", \"attribute\", \"wordsPosition\", \"exactness\"]\n #\n # - typo: fewer typos > more typos\n # - words: number of times query is in document (greater number gets priority)\n # - proximity: smaller distance between multiple occurences of query in same document > larger distances\n # - attribute: sorted according to order of importance of attributes (searchable_attributes). 
terms in\n # more important attributes first.\n # - wordsPosition: query terms earlier in document > later in document\n # - exactness: similarity of matched words in document with query\n\n self.ranking_rules = [\"asc(rank)\", \"attribute\", \"typo\", \"words\", \"proximity\", \"wordsPosition\", \"exactness\"]\n\n self.updated_settings = {\n \"rankingRules\": self.ranking_rules,\n \"searchableAttributes\": self.searchable_attributes,\n \"displayedAttributes\": self.displayed_attributes\n }\n\n self.distinct_attribute = \"url\"\n\n def getConnectionString(self):\n \"\"\"\n Sets the Meilisearch host string, depending on the environment.\n\n Returns:\n string: Meilisearch host string.\n \"\"\"\n if os.environ.get('PORT'):\n return \"{}:{}\".format(self.default[\"host\"], os.environ['PORT'])\n else:\n return \"{}:{}\".format(self.default[\"host\"], self.default[\"port\"])\n\n def getMasterKey(self):\n \"\"\"\n Retrieves the Meilisearch master key, either from the Platform.sh environment or locally.\n \"\"\"\n config = Config()\n if config.is_valid_platform():\n return config.projectEntropy\n elif os.environ.get(\"MEILI_MASTER_KEY\"):\n return os.environ[\"MEILI_MASTER_KEY\"]\n else:\n return self.default[\"key\"]\n\n def add_documents(self, index):\n \"\"\"\n Cycle through the individual site indexes in /outputs so their individual documents can be added to Meilisearch.\n \"\"\"\n documents = [f for f in glob.glob(\"{}/*.json\".format(self.scrape_dir))]\n for doc in documents:\n self.add(doc, index)\n\n def add(self, doc, index):\n \"\"\"\n Add an individual site's index to the Meilisearch service.\n \"\"\"\n with open(doc) as scraped_index:\n data = json.load(scraped_index)\n index.add_documents(data)\n\n def update(self):\n \"\"\"\n Updates the Meilisearch index.\n \"\"\"\n # Create a Meilisearch client.\n client = meilisearch.Client(self.getConnectionString(), self.getMasterKey())\n\n # Delete previous index\n if len(client.get_indexes()):\n client.get_index(self.docs_index).delete()\n\n # Create a new index\n index = client.create_index(uid=self.docs_index, options={'primaryKey': self.primaryKey, 'name': self.index_name})\n\n # Add synonyms for the index\n index.update_synonyms(self.synonyms)\n\n # Update its settings: what can be searched, what's displayable, and how results should be ranked.\n index.update_settings(self.updated_settings)\n\n # Update distinct attribute.\n index.update_distinct_attribute(self.distinct_attribute)\n\n # Add documents to the index\n self.add_documents(index)\n\nif __name__ == \"__main__\":\n meili = Search()\n meili.update()\n", "path": "search/main.py"}]} | 1,961 | 169 |
gh_patches_debug_3310 | rasdani/github-patches | git_diff | ansible-collections__community.general-6942 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
machinectl become plugin does not specify it requires a tty
### Summary
see https://github.com/ansible/ansible/issues/81254
If the plugin sets the class attribute:
```
require_tty = True
```
it would automatically disable pipelining and avoid such errors.
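A minimal sketch of where the suggested attribute would go, based on the plugin class in this collection (taking the issue's word that `BecomeBase` honours `require_tty`):
```python
from ansible.plugins.become import BecomeBase


class BecomeModule(BecomeBase):

    name = 'community.general.machinectl'

    # Marks this become method as needing a tty; Ansible would then disable
    # pipelining automatically instead of failing with "MODULE FAILURE".
    require_tty = True
```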
### Issue Type
Bug Report
### Component Name
become/machinectl
### Ansible Version
```console (paste below)
$ ansible --version
```
all
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
```
all
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
N/A
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
Use machinectl become plugin + pipelining
### Expected Results
it works TM
### Actual Results
```console (paste below)
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/become/machinectl.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2018, Ansible Project
3 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
4 # SPDX-License-Identifier: GPL-3.0-or-later
5 from __future__ import (absolute_import, division, print_function)
6 __metaclass__ = type
7
8 DOCUMENTATION = '''
9 name: machinectl
10 short_description: Systemd's machinectl privilege escalation
11 description:
12 - This become plugins allows your remote/login user to execute commands as another user via the machinectl utility.
13 author: Ansible Core Team
14 options:
15 become_user:
16 description: User you 'become' to execute the task
17 default: ''
18 ini:
19 - section: privilege_escalation
20 key: become_user
21 - section: machinectl_become_plugin
22 key: user
23 vars:
24 - name: ansible_become_user
25 - name: ansible_machinectl_user
26 env:
27 - name: ANSIBLE_BECOME_USER
28 - name: ANSIBLE_MACHINECTL_USER
29 become_exe:
30 description: Machinectl executable
31 default: machinectl
32 ini:
33 - section: privilege_escalation
34 key: become_exe
35 - section: machinectl_become_plugin
36 key: executable
37 vars:
38 - name: ansible_become_exe
39 - name: ansible_machinectl_exe
40 env:
41 - name: ANSIBLE_BECOME_EXE
42 - name: ANSIBLE_MACHINECTL_EXE
43 become_flags:
44 description: Options to pass to machinectl
45 default: ''
46 ini:
47 - section: privilege_escalation
48 key: become_flags
49 - section: machinectl_become_plugin
50 key: flags
51 vars:
52 - name: ansible_become_flags
53 - name: ansible_machinectl_flags
54 env:
55 - name: ANSIBLE_BECOME_FLAGS
56 - name: ANSIBLE_MACHINECTL_FLAGS
57 become_pass:
58 description: Password for machinectl
59 required: false
60 vars:
61 - name: ansible_become_password
62 - name: ansible_become_pass
63 - name: ansible_machinectl_pass
64 env:
65 - name: ANSIBLE_BECOME_PASS
66 - name: ANSIBLE_MACHINECTL_PASS
67 ini:
68 - section: machinectl_become_plugin
69 key: password
70 notes:
71 - When not using this plugin with user V(root), it only works correctly with a polkit rule which will alter
72 the behaviour of machinectl. This rule must alter the prompt behaviour to ask directly for the user credentials,
73 if the user is allowed to perform the action (take a look at the examples section).
74 If such a rule is not present the plugin only work if it is used in context with the root user,
75 because then no further prompt will be shown by machinectl.
76 '''
77
78 EXAMPLES = r'''
79 # A polkit rule needed to use the module with a non-root user.
80 # See the Notes section for details.
81 60-machinectl-fast-user-auth.rules: |
82 polkit.addRule(function(action, subject) {
83 if(action.id == "org.freedesktop.machine1.host-shell" && subject.isInGroup("wheel")) {
84 return polkit.Result.AUTH_SELF_KEEP;
85 }
86 });
87 '''
88
89 from re import compile as re_compile
90
91 from ansible.plugins.become import BecomeBase
92 from ansible.module_utils._text import to_bytes
93
94
95 ansi_color_codes = re_compile(to_bytes(r'\x1B\[[0-9;]+m'))
96
97
98 class BecomeModule(BecomeBase):
99
100 name = 'community.general.machinectl'
101
102 prompt = 'Password: '
103 fail = ('==== AUTHENTICATION FAILED ====',)
104 success = ('==== AUTHENTICATION COMPLETE ====',)
105
106 @staticmethod
107 def remove_ansi_codes(line):
108 return ansi_color_codes.sub(b"", line)
109
110 def build_become_command(self, cmd, shell):
111 super(BecomeModule, self).build_become_command(cmd, shell)
112
113 if not cmd:
114 return cmd
115
116 become = self.get_option('become_exe')
117
118 flags = self.get_option('become_flags')
119 user = self.get_option('become_user')
120 return '%s -q shell %s %s@ %s' % (become, flags, user, self._build_success_command(cmd, shell))
121
122 def check_success(self, b_output):
123 b_output = self.remove_ansi_codes(b_output)
124 return super().check_success(b_output)
125
126 def check_incorrect_password(self, b_output):
127 b_output = self.remove_ansi_codes(b_output)
128 return super().check_incorrect_password(b_output)
129
130 def check_missing_password(self, b_output):
131 b_output = self.remove_ansi_codes(b_output)
132 return super().check_missing_password(b_output)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/become/machinectl.py b/plugins/become/machinectl.py
--- a/plugins/become/machinectl.py
+++ b/plugins/become/machinectl.py
@@ -102,6 +102,7 @@
prompt = 'Password: '
fail = ('==== AUTHENTICATION FAILED ====',)
success = ('==== AUTHENTICATION COMPLETE ====',)
+ require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932
@staticmethod
def remove_ansi_codes(line):
| {"golden_diff": "diff --git a/plugins/become/machinectl.py b/plugins/become/machinectl.py\n--- a/plugins/become/machinectl.py\n+++ b/plugins/become/machinectl.py\n@@ -102,6 +102,7 @@\n prompt = 'Password: '\n fail = ('==== AUTHENTICATION FAILED ====',)\n success = ('==== AUTHENTICATION COMPLETE ====',)\n+ require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932\n \n @staticmethod\n def remove_ansi_codes(line):\n", "issue": "machinectl become plugin does not specify it requires a tty\n### Summary\n\nsee https://github.com/ansible/ansible/issues/81254\r\n\r\nif the plugin sets the class attribute:\r\n\r\n```\r\n require_tty = True\r\n```\r\n\r\nIt would automatically disable pipelining and avoid such errors\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nbecome/machinectl\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\n\r\n```\r\nall\n\n### Community.general Version\n\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\n\r\n```\r\nall\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\nN/A\n\n### OS / Environment\n\nN/A\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n\r\n```\r\nUse machinectl become plugin + pipelining\n\n### Expected Results\n\nit works TM\n\n### Actual Results\n\n```console (paste below)\r\n \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for the exact error\",\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2018, Ansible Project\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\n name: machinectl\n short_description: Systemd's machinectl privilege escalation\n description:\n - This become plugins allows your remote/login user to execute commands as another user via the machinectl utility.\n author: Ansible Core Team\n options:\n become_user:\n description: User you 'become' to execute the task\n default: ''\n ini:\n - section: privilege_escalation\n key: become_user\n - section: machinectl_become_plugin\n key: user\n vars:\n - name: ansible_become_user\n - name: ansible_machinectl_user\n env:\n - name: ANSIBLE_BECOME_USER\n - name: ANSIBLE_MACHINECTL_USER\n become_exe:\n description: Machinectl executable\n default: machinectl\n ini:\n - section: privilege_escalation\n key: become_exe\n - section: machinectl_become_plugin\n key: executable\n vars:\n - name: ansible_become_exe\n - name: ansible_machinectl_exe\n env:\n - name: ANSIBLE_BECOME_EXE\n - name: ANSIBLE_MACHINECTL_EXE\n become_flags:\n description: Options to pass to machinectl\n default: ''\n ini:\n - section: privilege_escalation\n key: become_flags\n - section: machinectl_become_plugin\n key: flags\n vars:\n - name: ansible_become_flags\n - name: ansible_machinectl_flags\n env:\n - name: ANSIBLE_BECOME_FLAGS\n - name: ANSIBLE_MACHINECTL_FLAGS\n become_pass:\n description: Password for machinectl\n required: false\n vars:\n - name: ansible_become_password\n - name: ansible_become_pass\n - name: ansible_machinectl_pass\n env:\n - name: ANSIBLE_BECOME_PASS\n - name: ANSIBLE_MACHINECTL_PASS\n ini:\n - section: machinectl_become_plugin\n key: password\n 
notes:\n - When not using this plugin with user V(root), it only works correctly with a polkit rule which will alter\n the behaviour of machinectl. This rule must alter the prompt behaviour to ask directly for the user credentials,\n if the user is allowed to perform the action (take a look at the examples section).\n If such a rule is not present the plugin only work if it is used in context with the root user,\n because then no further prompt will be shown by machinectl.\n'''\n\nEXAMPLES = r'''\n# A polkit rule needed to use the module with a non-root user.\n# See the Notes section for details.\n60-machinectl-fast-user-auth.rules: |\n polkit.addRule(function(action, subject) {\n if(action.id == \"org.freedesktop.machine1.host-shell\" && subject.isInGroup(\"wheel\")) {\n return polkit.Result.AUTH_SELF_KEEP;\n }\n });\n'''\n\nfrom re import compile as re_compile\n\nfrom ansible.plugins.become import BecomeBase\nfrom ansible.module_utils._text import to_bytes\n\n\nansi_color_codes = re_compile(to_bytes(r'\\x1B\\[[0-9;]+m'))\n\n\nclass BecomeModule(BecomeBase):\n\n name = 'community.general.machinectl'\n\n prompt = 'Password: '\n fail = ('==== AUTHENTICATION FAILED ====',)\n success = ('==== AUTHENTICATION COMPLETE ====',)\n\n @staticmethod\n def remove_ansi_codes(line):\n return ansi_color_codes.sub(b\"\", line)\n\n def build_become_command(self, cmd, shell):\n super(BecomeModule, self).build_become_command(cmd, shell)\n\n if not cmd:\n return cmd\n\n become = self.get_option('become_exe')\n\n flags = self.get_option('become_flags')\n user = self.get_option('become_user')\n return '%s -q shell %s %s@ %s' % (become, flags, user, self._build_success_command(cmd, shell))\n\n def check_success(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_success(b_output)\n\n def check_incorrect_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_incorrect_password(b_output)\n\n def check_missing_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_missing_password(b_output)\n", "path": "plugins/become/machinectl.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2018, Ansible Project\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\n name: machinectl\n short_description: Systemd's machinectl privilege escalation\n description:\n - This become plugins allows your remote/login user to execute commands as another user via the machinectl utility.\n author: Ansible Core Team\n options:\n become_user:\n description: User you 'become' to execute the task\n default: ''\n ini:\n - section: privilege_escalation\n key: become_user\n - section: machinectl_become_plugin\n key: user\n vars:\n - name: ansible_become_user\n - name: ansible_machinectl_user\n env:\n - name: ANSIBLE_BECOME_USER\n - name: ANSIBLE_MACHINECTL_USER\n become_exe:\n description: Machinectl executable\n default: machinectl\n ini:\n - section: privilege_escalation\n key: become_exe\n - section: machinectl_become_plugin\n key: executable\n vars:\n - name: ansible_become_exe\n - name: ansible_machinectl_exe\n env:\n - name: ANSIBLE_BECOME_EXE\n - name: ANSIBLE_MACHINECTL_EXE\n become_flags:\n description: Options to pass to machinectl\n default: ''\n ini:\n - section: 
privilege_escalation\n key: become_flags\n - section: machinectl_become_plugin\n key: flags\n vars:\n - name: ansible_become_flags\n - name: ansible_machinectl_flags\n env:\n - name: ANSIBLE_BECOME_FLAGS\n - name: ANSIBLE_MACHINECTL_FLAGS\n become_pass:\n description: Password for machinectl\n required: false\n vars:\n - name: ansible_become_password\n - name: ansible_become_pass\n - name: ansible_machinectl_pass\n env:\n - name: ANSIBLE_BECOME_PASS\n - name: ANSIBLE_MACHINECTL_PASS\n ini:\n - section: machinectl_become_plugin\n key: password\n notes:\n - When not using this plugin with user V(root), it only works correctly with a polkit rule which will alter\n the behaviour of machinectl. This rule must alter the prompt behaviour to ask directly for the user credentials,\n if the user is allowed to perform the action (take a look at the examples section).\n If such a rule is not present the plugin only work if it is used in context with the root user,\n because then no further prompt will be shown by machinectl.\n'''\n\nEXAMPLES = r'''\n# A polkit rule needed to use the module with a non-root user.\n# See the Notes section for details.\n60-machinectl-fast-user-auth.rules: |\n polkit.addRule(function(action, subject) {\n if(action.id == \"org.freedesktop.machine1.host-shell\" && subject.isInGroup(\"wheel\")) {\n return polkit.Result.AUTH_SELF_KEEP;\n }\n });\n'''\n\nfrom re import compile as re_compile\n\nfrom ansible.plugins.become import BecomeBase\nfrom ansible.module_utils._text import to_bytes\n\n\nansi_color_codes = re_compile(to_bytes(r'\\x1B\\[[0-9;]+m'))\n\n\nclass BecomeModule(BecomeBase):\n\n name = 'community.general.machinectl'\n\n prompt = 'Password: '\n fail = ('==== AUTHENTICATION FAILED ====',)\n success = ('==== AUTHENTICATION COMPLETE ====',)\n require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932\n\n @staticmethod\n def remove_ansi_codes(line):\n return ansi_color_codes.sub(b\"\", line)\n\n def build_become_command(self, cmd, shell):\n super(BecomeModule, self).build_become_command(cmd, shell)\n\n if not cmd:\n return cmd\n\n become = self.get_option('become_exe')\n\n flags = self.get_option('become_flags')\n user = self.get_option('become_user')\n return '%s -q shell %s %s@ %s' % (become, flags, user, self._build_success_command(cmd, shell))\n\n def check_success(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_success(b_output)\n\n def check_incorrect_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_incorrect_password(b_output)\n\n def check_missing_password(self, b_output):\n b_output = self.remove_ansi_codes(b_output)\n return super().check_missing_password(b_output)\n", "path": "plugins/become/machinectl.py"}]} | 1,903 | 126 |
gh_patches_debug_22741 | rasdani/github-patches | git_diff | aio-libs__aiohttp-5364 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch http_parser to llhttp
Nodejs decided to move from their hard-coded C parser to this one.
https://github.com/nodejs/http-parser/pull/285#issuecomment-456025694
*Upd:*
The proposed parser is https://llhttp.org
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import pathlib
3 import re
4 import sys
5
6 from setuptools import Extension, setup
7
8 if sys.version_info < (3, 7):
9 raise RuntimeError("aiohttp 4.x requires Python 3.7+")
10
11
12 NO_EXTENSIONS = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS")) # type: bool
13 HERE = pathlib.Path(__file__).parent
14 IS_GIT_REPO = (HERE / ".git").exists()
15
16
17 if sys.implementation.name != "cpython":
18 NO_EXTENSIONS = True
19
20
21 if IS_GIT_REPO and not (HERE / "vendor/http-parser/README.md").exists():
22 print("Install submodules when building from git clone", file=sys.stderr)
23 print("Hint:", file=sys.stderr)
24 print(" git submodule update --init", file=sys.stderr)
25 sys.exit(2)
26
27
28 # NOTE: makefile cythonizes all Cython modules
29
30 extensions = [
31 Extension("aiohttp._websocket", ["aiohttp/_websocket.c"]),
32 Extension(
33 "aiohttp._http_parser",
34 [
35 "aiohttp/_http_parser.c",
36 "vendor/http-parser/http_parser.c",
37 "aiohttp/_find_header.c",
38 ],
39 define_macros=[("HTTP_PARSER_STRICT", 0)],
40 ),
41 Extension("aiohttp._helpers", ["aiohttp/_helpers.c"]),
42 Extension("aiohttp._http_writer", ["aiohttp/_http_writer.c"]),
43 ]
44
45
46 txt = (HERE / "aiohttp" / "__init__.py").read_text("utf-8")
47 try:
48 version = re.findall(r'^__version__ = "([^"]+)"\r?$', txt, re.M)[0]
49 except IndexError:
50 raise RuntimeError("Unable to determine version.")
51
52 install_requires = [
53 "chardet>=2.0,<5.0",
54 "multidict>=4.5,<7.0",
55 "async_timeout>=4.0a2,<5.0",
56 'asynctest==0.13.0; python_version<"3.8"',
57 "yarl>=1.0,<2.0",
58 "typing_extensions>=3.7.4",
59 "frozenlist>=1.1.1",
60 "aiosignal>=1.1.2",
61 ]
62
63
64 def read(f):
65 return (HERE / f).read_text("utf-8").strip()
66
67
68 args = dict(
69 name="aiohttp",
70 version=version,
71 description="Async http client/server framework (asyncio)",
72 long_description="\n\n".join((read("README.rst"), read("CHANGES.rst"))),
73 long_description_content_type="text/x-rst",
74 classifiers=[
75 "License :: OSI Approved :: Apache Software License",
76 "Intended Audience :: Developers",
77 "Programming Language :: Python",
78 "Programming Language :: Python :: 3",
79 "Programming Language :: Python :: 3.7",
80 "Programming Language :: Python :: 3.8",
81 "Programming Language :: Python :: 3.9",
82 "Development Status :: 5 - Production/Stable",
83 "Operating System :: POSIX",
84 "Operating System :: MacOS :: MacOS X",
85 "Operating System :: Microsoft :: Windows",
86 "Topic :: Internet :: WWW/HTTP",
87 "Framework :: AsyncIO",
88 ],
89 author="Nikolay Kim",
90 author_email="[email protected]",
91 maintainer=", ".join(
92 (
93 "Nikolay Kim <[email protected]>",
94 "Andrew Svetlov <[email protected]>",
95 )
96 ),
97 maintainer_email="[email protected]",
98 url="https://github.com/aio-libs/aiohttp",
99 project_urls={
100 "Chat: Gitter": "https://gitter.im/aio-libs/Lobby",
101 "CI: GitHub Actions": "https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI", # noqa
102 "Coverage: codecov": "https://codecov.io/github/aio-libs/aiohttp",
103 "Docs: Changelog": "https://docs.aiohttp.org/en/stable/changes.html",
104 "Docs: RTD": "https://docs.aiohttp.org",
105 "GitHub: issues": "https://github.com/aio-libs/aiohttp/issues",
106 "GitHub: repo": "https://github.com/aio-libs/aiohttp",
107 },
108 license="Apache 2",
109 packages=["aiohttp"],
110 python_requires=">=3.7",
111 install_requires=install_requires,
112 extras_require={
113 "speedups": [
114 "aiodns>=1.1",
115 "Brotli",
116 "cchardet",
117 ],
118 },
119 include_package_data=True,
120 )
121
122 if not NO_EXTENSIONS:
123 print("*********************")
124 print("* Accelerated build *")
125 print("*********************")
126 setup(ext_modules=extensions, **args)
127 else:
128 print("*********************")
129 print("* Pure Python build *")
130 print("*********************")
131 setup(**args)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@
NO_EXTENSIONS = True
-if IS_GIT_REPO and not (HERE / "vendor/http-parser/README.md").exists():
+if IS_GIT_REPO and not (HERE / "vendor/llhttp/README.md").exists():
print("Install submodules when building from git clone", file=sys.stderr)
print("Hint:", file=sys.stderr)
print(" git submodule update --init", file=sys.stderr)
@@ -33,10 +33,13 @@
"aiohttp._http_parser",
[
"aiohttp/_http_parser.c",
- "vendor/http-parser/http_parser.c",
"aiohttp/_find_header.c",
+ "vendor/llhttp/build/c/llhttp.c",
+ "vendor/llhttp/src/native/api.c",
+ "vendor/llhttp/src/native/http.c",
],
- define_macros=[("HTTP_PARSER_STRICT", 0)],
+ define_macros=[("LLHTTP_STRICT_MODE", 0)],
+ include_dirs=["vendor/llhttp/build"],
),
Extension("aiohttp._helpers", ["aiohttp/_helpers.c"]),
Extension("aiohttp._http_writer", ["aiohttp/_http_writer.c"]),
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,7 +18,7 @@\n NO_EXTENSIONS = True\n \n \n-if IS_GIT_REPO and not (HERE / \"vendor/http-parser/README.md\").exists():\n+if IS_GIT_REPO and not (HERE / \"vendor/llhttp/README.md\").exists():\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n@@ -33,10 +33,13 @@\n \"aiohttp._http_parser\",\n [\n \"aiohttp/_http_parser.c\",\n- \"vendor/http-parser/http_parser.c\",\n \"aiohttp/_find_header.c\",\n+ \"vendor/llhttp/build/c/llhttp.c\",\n+ \"vendor/llhttp/src/native/api.c\",\n+ \"vendor/llhttp/src/native/http.c\",\n ],\n- define_macros=[(\"HTTP_PARSER_STRICT\", 0)],\n+ define_macros=[(\"LLHTTP_STRICT_MODE\", 0)],\n+ include_dirs=[\"vendor/llhttp/build\"],\n ),\n Extension(\"aiohttp._helpers\", [\"aiohttp/_helpers.c\"]),\n Extension(\"aiohttp._http_writer\", [\"aiohttp/_http_writer.c\"]),\n", "issue": "Switch http_parser to llhttp\nNodejs decide to move from their hard-coded C parser to this one.\r\n\r\nhttps://github.com/nodejs/http-parser/pull/285#issuecomment-456025694\r\n\r\n*Upd:* \r\nproposed parser is https://llhttp.org\r\n\n", "before_files": [{"content": "import os\nimport pathlib\nimport re\nimport sys\n\nfrom setuptools import Extension, setup\n\nif sys.version_info < (3, 7):\n raise RuntimeError(\"aiohttp 4.x requires Python 3.7+\")\n\n\nNO_EXTENSIONS = bool(os.environ.get(\"AIOHTTP_NO_EXTENSIONS\")) # type: bool\nHERE = pathlib.Path(__file__).parent\nIS_GIT_REPO = (HERE / \".git\").exists()\n\n\nif sys.implementation.name != \"cpython\":\n NO_EXTENSIONS = True\n\n\nif IS_GIT_REPO and not (HERE / \"vendor/http-parser/README.md\").exists():\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [\n Extension(\"aiohttp._websocket\", [\"aiohttp/_websocket.c\"]),\n Extension(\n \"aiohttp._http_parser\",\n [\n \"aiohttp/_http_parser.c\",\n \"vendor/http-parser/http_parser.c\",\n \"aiohttp/_find_header.c\",\n ],\n define_macros=[(\"HTTP_PARSER_STRICT\", 0)],\n ),\n Extension(\"aiohttp._helpers\", [\"aiohttp/_helpers.c\"]),\n Extension(\"aiohttp._http_writer\", [\"aiohttp/_http_writer.c\"]),\n]\n\n\ntxt = (HERE / \"aiohttp\" / \"__init__.py\").read_text(\"utf-8\")\ntry:\n version = re.findall(r'^__version__ = \"([^\"]+)\"\\r?$', txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\ninstall_requires = [\n \"chardet>=2.0,<5.0\",\n \"multidict>=4.5,<7.0\",\n \"async_timeout>=4.0a2,<5.0\",\n 'asynctest==0.13.0; python_version<\"3.8\"',\n \"yarl>=1.0,<2.0\",\n \"typing_extensions>=3.7.4\",\n \"frozenlist>=1.1.1\",\n \"aiosignal>=1.1.2\",\n]\n\n\ndef read(f):\n return (HERE / f).read_text(\"utf-8\").strip()\n\n\nargs = dict(\n name=\"aiohttp\",\n version=version,\n description=\"Async http client/server framework (asyncio)\",\n long_description=\"\\n\\n\".join((read(\"README.rst\"), read(\"CHANGES.rst\"))),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming 
Language :: Python :: 3.9\",\n \"Development Status :: 5 - Production/Stable\",\n \"Operating System :: POSIX\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Framework :: AsyncIO\",\n ],\n author=\"Nikolay Kim\",\n author_email=\"[email protected]\",\n maintainer=\", \".join(\n (\n \"Nikolay Kim <[email protected]>\",\n \"Andrew Svetlov <[email protected]>\",\n )\n ),\n maintainer_email=\"[email protected]\",\n url=\"https://github.com/aio-libs/aiohttp\",\n project_urls={\n \"Chat: Gitter\": \"https://gitter.im/aio-libs/Lobby\",\n \"CI: GitHub Actions\": \"https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI\", # noqa\n \"Coverage: codecov\": \"https://codecov.io/github/aio-libs/aiohttp\",\n \"Docs: Changelog\": \"https://docs.aiohttp.org/en/stable/changes.html\",\n \"Docs: RTD\": \"https://docs.aiohttp.org\",\n \"GitHub: issues\": \"https://github.com/aio-libs/aiohttp/issues\",\n \"GitHub: repo\": \"https://github.com/aio-libs/aiohttp\",\n },\n license=\"Apache 2\",\n packages=[\"aiohttp\"],\n python_requires=\">=3.7\",\n install_requires=install_requires,\n extras_require={\n \"speedups\": [\n \"aiodns>=1.1\",\n \"Brotli\",\n \"cchardet\",\n ],\n },\n include_package_data=True,\n)\n\nif not NO_EXTENSIONS:\n print(\"*********************\")\n print(\"* Accelerated build *\")\n print(\"*********************\")\n setup(ext_modules=extensions, **args)\nelse:\n print(\"*********************\")\n print(\"* Pure Python build *\")\n print(\"*********************\")\n setup(**args)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport pathlib\nimport re\nimport sys\n\nfrom setuptools import Extension, setup\n\nif sys.version_info < (3, 7):\n raise RuntimeError(\"aiohttp 4.x requires Python 3.7+\")\n\n\nNO_EXTENSIONS = bool(os.environ.get(\"AIOHTTP_NO_EXTENSIONS\")) # type: bool\nHERE = pathlib.Path(__file__).parent\nIS_GIT_REPO = (HERE / \".git\").exists()\n\n\nif sys.implementation.name != \"cpython\":\n NO_EXTENSIONS = True\n\n\nif IS_GIT_REPO and not (HERE / \"vendor/llhttp/README.md\").exists():\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [\n Extension(\"aiohttp._websocket\", [\"aiohttp/_websocket.c\"]),\n Extension(\n \"aiohttp._http_parser\",\n [\n \"aiohttp/_http_parser.c\",\n \"aiohttp/_find_header.c\",\n \"vendor/llhttp/build/c/llhttp.c\",\n \"vendor/llhttp/src/native/api.c\",\n \"vendor/llhttp/src/native/http.c\",\n ],\n define_macros=[(\"LLHTTP_STRICT_MODE\", 0)],\n include_dirs=[\"vendor/llhttp/build\"],\n ),\n Extension(\"aiohttp._helpers\", [\"aiohttp/_helpers.c\"]),\n Extension(\"aiohttp._http_writer\", [\"aiohttp/_http_writer.c\"]),\n]\n\n\ntxt = (HERE / \"aiohttp\" / \"__init__.py\").read_text(\"utf-8\")\ntry:\n version = re.findall(r'^__version__ = \"([^\"]+)\"\\r?$', txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\ninstall_requires = [\n \"chardet>=2.0,<5.0\",\n \"multidict>=4.5,<7.0\",\n \"async_timeout>=4.0a2,<5.0\",\n 'asynctest==0.13.0; python_version<\"3.8\"',\n \"yarl>=1.0,<2.0\",\n \"typing_extensions>=3.7.4\",\n \"frozenlist>=1.1.1\",\n \"aiosignal>=1.1.2\",\n]\n\n\ndef read(f):\n return (HERE / f).read_text(\"utf-8\").strip()\n\n\nargs = dict(\n name=\"aiohttp\",\n version=version,\n description=\"Async 
http client/server framework (asyncio)\",\n long_description=\"\\n\\n\".join((read(\"README.rst\"), read(\"CHANGES.rst\"))),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Development Status :: 5 - Production/Stable\",\n \"Operating System :: POSIX\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Framework :: AsyncIO\",\n ],\n author=\"Nikolay Kim\",\n author_email=\"[email protected]\",\n maintainer=\", \".join(\n (\n \"Nikolay Kim <[email protected]>\",\n \"Andrew Svetlov <[email protected]>\",\n )\n ),\n maintainer_email=\"[email protected]\",\n url=\"https://github.com/aio-libs/aiohttp\",\n project_urls={\n \"Chat: Gitter\": \"https://gitter.im/aio-libs/Lobby\",\n \"CI: GitHub Actions\": \"https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI\", # noqa\n \"Coverage: codecov\": \"https://codecov.io/github/aio-libs/aiohttp\",\n \"Docs: Changelog\": \"https://docs.aiohttp.org/en/stable/changes.html\",\n \"Docs: RTD\": \"https://docs.aiohttp.org\",\n \"GitHub: issues\": \"https://github.com/aio-libs/aiohttp/issues\",\n \"GitHub: repo\": \"https://github.com/aio-libs/aiohttp\",\n },\n license=\"Apache 2\",\n packages=[\"aiohttp\"],\n python_requires=\">=3.7\",\n install_requires=install_requires,\n extras_require={\n \"speedups\": [\n \"aiodns>=1.1\",\n \"Brotli\",\n \"cchardet\",\n ],\n },\n include_package_data=True,\n)\n\nif not NO_EXTENSIONS:\n print(\"*********************\")\n print(\"* Accelerated build *\")\n print(\"*********************\")\n setup(ext_modules=extensions, **args)\nelse:\n print(\"*********************\")\n print(\"* Pure Python build *\")\n print(\"*********************\")\n setup(**args)\n", "path": "setup.py"}]} | 1,713 | 293 |
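
For context on the build-side change, the sketch below isolates the same setuptools pattern: one extension compiled from a wrapper plus several vendored C sources, with an include directory for the generated header. The `myparser` package name is a placeholder; only the llhttp paths and the macro mirror the diff above.

```python
from setuptools import Extension, setup

llhttp_ext = Extension(
    "myparser._http_parser",
    sources=[
        "myparser/_http_parser.c",         # generated wrapper module
        "vendor/llhttp/build/c/llhttp.c",  # generated llhttp parser core
        "vendor/llhttp/src/native/api.c",  # hand-written C API glue
        "vendor/llhttp/src/native/http.c",
    ],
    define_macros=[("LLHTTP_STRICT_MODE", 0)],  # lenient parsing, as in the diff
    include_dirs=["vendor/llhttp/build"],       # where llhttp.h is generated
)

setup(name="myparser", version="0.1", ext_modules=[llhttp_ext])
```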
gh_patches_debug_5390 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-47 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tiled cubing doesn't convert the last slice.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wkcuber/utils.py`
Content:
```
1 import time
2 import wkw
3 import numpy as np
4 import logging
5 from glob import iglob
6 from collections import namedtuple
7 from multiprocessing import cpu_count, Lock
8 from concurrent.futures import ProcessPoolExecutor
9 from os import path
10 from platform import python_version
11 from math import floor, ceil
12
13
14 from .knossos import KnossosDataset, CUBE_EDGE_LEN
15
16
17 WkwDatasetInfo = namedtuple(
18 "WkwDatasetInfo", ("dataset_path", "layer_name", "dtype", "mag")
19 )
20 KnossosDatasetInfo = namedtuple("KnossosDatasetInfo", ("dataset_path", "dtype"))
21
22
23 def _open_wkw(info, **kwargs):
24 header = wkw.Header(np.dtype(info.dtype), **kwargs)
25 ds = wkw.Dataset.open(
26 path.join(info.dataset_path, info.layer_name, str(info.mag)), header
27 )
28 return ds
29
30
31 def open_wkw(info, lock=None, **kwargs):
32 if lock is None:
33 # Create dummy lock
34 lock = Lock()
35
36 with lock:
37 return _open_wkw(info, **kwargs)
38
39
40 def open_knossos(info):
41 return KnossosDataset.open(info.dataset_path, np.dtype(info.dtype))
42
43
44 def add_verbose_flag(parser):
45 parser.add_argument(
46 "--verbose", "-v", help="Verbose output", dest="verbose", action="store_true"
47 )
48
49 parser.set_defaults(verbose=False)
50
51
52 def find_files(source_path, extensions):
53 # Find all files with a matching file extension
54 return (
55 f
56 for f in iglob(source_path, recursive=True)
57 if any([f.endswith(suffix) for suffix in extensions])
58 )
59
60
61 def get_chunks(arr, chunk_size):
62 for i in range(0, len(arr), chunk_size):
63 yield arr[i : i + chunk_size]
64
65
66 def get_regular_chunks(min_z, max_z, chunk_size):
67 i = floor(min_z / chunk_size) * chunk_size
68 while i < ceil(max_z / chunk_size) * chunk_size:
69 yield range(i, i + chunk_size)
70 i += chunk_size
71
72
73 def add_jobs_flag(parser):
74 parser.add_argument(
75 "--jobs", "-j", help="Parallel jobs", type=int, default=cpu_count()
76 )
77
78
79 def pool_init(lock):
80 global process_pool_lock
81 process_pool_lock = lock
82
83
84 def pool_get_lock():
85 global process_pool_lock
86 try:
87 return process_pool_lock
88 except NameError:
89 return None
90
91
92 class ParallelExecutor:
93 def __init__(self, jobs):
94 self.lock = Lock()
95 if python_version() >= "3.7.0":
96 self.exec = ProcessPoolExecutor(
97 jobs, initializer=pool_init, initargs=(self.lock,)
98 )
99 else:
100 self.exec = ProcessPoolExecutor(jobs)
101 self.futures = []
102
103 def submit(self, fn, *args):
104 future = self.exec.submit(fn, *args)
105 self.futures.append(future)
106 return future
107
108 def __enter__(self):
109 self.exec.__enter__()
110 return self
111
112 def __exit__(self, type, value, tb):
113 [f.result() for f in self.futures]
114 self.exec.__exit__(type, value, tb)
115
116
117 times = {}
118
119
120 def time_start(identifier):
121 times[identifier] = time.time()
122
123
124 def time_stop(identifier):
125 _time = times.pop(identifier)
126 logging.debug("{} took {:.8f}s".format(identifier, time.time() - _time))
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wkcuber/utils.py b/wkcuber/utils.py
--- a/wkcuber/utils.py
+++ b/wkcuber/utils.py
@@ -63,9 +63,10 @@
yield arr[i : i + chunk_size]
+# min_z and max_z are both inclusive
def get_regular_chunks(min_z, max_z, chunk_size):
i = floor(min_z / chunk_size) * chunk_size
- while i < ceil(max_z / chunk_size) * chunk_size:
+ while i < ceil((max_z + 1) / chunk_size) * chunk_size:
yield range(i, i + chunk_size)
i += chunk_size
| {"golden_diff": "diff --git a/wkcuber/utils.py b/wkcuber/utils.py\n--- a/wkcuber/utils.py\n+++ b/wkcuber/utils.py\n@@ -63,9 +63,10 @@\n yield arr[i : i + chunk_size]\n \n \n+# min_z and max_z are both inclusive\n def get_regular_chunks(min_z, max_z, chunk_size):\n i = floor(min_z / chunk_size) * chunk_size\n- while i < ceil(max_z / chunk_size) * chunk_size:\n+ while i < ceil((max_z + 1) / chunk_size) * chunk_size:\n yield range(i, i + chunk_size)\n i += chunk_size\n", "issue": "Tiled cubing doesn't convert last slice\n\nTiled cubing doesn't convert last slice\n\n", "before_files": [{"content": "import time\nimport wkw\nimport numpy as np\nimport logging\nfrom glob import iglob\nfrom collections import namedtuple\nfrom multiprocessing import cpu_count, Lock\nfrom concurrent.futures import ProcessPoolExecutor\nfrom os import path\nfrom platform import python_version\nfrom math import floor, ceil\n\n\nfrom .knossos import KnossosDataset, CUBE_EDGE_LEN\n\n\nWkwDatasetInfo = namedtuple(\n \"WkwDatasetInfo\", (\"dataset_path\", \"layer_name\", \"dtype\", \"mag\")\n)\nKnossosDatasetInfo = namedtuple(\"KnossosDatasetInfo\", (\"dataset_path\", \"dtype\"))\n\n\ndef _open_wkw(info, **kwargs):\n header = wkw.Header(np.dtype(info.dtype), **kwargs)\n ds = wkw.Dataset.open(\n path.join(info.dataset_path, info.layer_name, str(info.mag)), header\n )\n return ds\n\n\ndef open_wkw(info, lock=None, **kwargs):\n if lock is None:\n # Create dummy lock\n lock = Lock()\n\n with lock:\n return _open_wkw(info, **kwargs)\n\n\ndef open_knossos(info):\n return KnossosDataset.open(info.dataset_path, np.dtype(info.dtype))\n\n\ndef add_verbose_flag(parser):\n parser.add_argument(\n \"--verbose\", \"-v\", help=\"Verbose output\", dest=\"verbose\", action=\"store_true\"\n )\n\n parser.set_defaults(verbose=False)\n\n\ndef find_files(source_path, extensions):\n # Find all files with a matching file extension\n return (\n f\n for f in iglob(source_path, recursive=True)\n if any([f.endswith(suffix) for suffix in extensions])\n )\n\n\ndef get_chunks(arr, chunk_size):\n for i in range(0, len(arr), chunk_size):\n yield arr[i : i + chunk_size]\n\n\ndef get_regular_chunks(min_z, max_z, chunk_size):\n i = floor(min_z / chunk_size) * chunk_size\n while i < ceil(max_z / chunk_size) * chunk_size:\n yield range(i, i + chunk_size)\n i += chunk_size\n\n\ndef add_jobs_flag(parser):\n parser.add_argument(\n \"--jobs\", \"-j\", help=\"Parallel jobs\", type=int, default=cpu_count()\n )\n\n\ndef pool_init(lock):\n global process_pool_lock\n process_pool_lock = lock\n\n\ndef pool_get_lock():\n global process_pool_lock\n try:\n return process_pool_lock\n except NameError:\n return None\n\n\nclass ParallelExecutor:\n def __init__(self, jobs):\n self.lock = Lock()\n if python_version() >= \"3.7.0\":\n self.exec = ProcessPoolExecutor(\n jobs, initializer=pool_init, initargs=(self.lock,)\n )\n else:\n self.exec = ProcessPoolExecutor(jobs)\n self.futures = []\n\n def submit(self, fn, *args):\n future = self.exec.submit(fn, *args)\n self.futures.append(future)\n return future\n\n def __enter__(self):\n self.exec.__enter__()\n return self\n\n def __exit__(self, type, value, tb):\n [f.result() for f in self.futures]\n self.exec.__exit__(type, value, tb)\n\n\ntimes = {}\n\n\ndef time_start(identifier):\n times[identifier] = time.time()\n\n\ndef time_stop(identifier):\n _time = times.pop(identifier)\n logging.debug(\"{} took {:.8f}s\".format(identifier, time.time() - _time))\n", "path": "wkcuber/utils.py"}], "after_files": [{"content": "import 
time\nimport wkw\nimport numpy as np\nimport logging\nfrom glob import iglob\nfrom collections import namedtuple\nfrom multiprocessing import cpu_count, Lock\nfrom concurrent.futures import ProcessPoolExecutor\nfrom os import path\nfrom platform import python_version\nfrom math import floor, ceil\n\n\nfrom .knossos import KnossosDataset, CUBE_EDGE_LEN\n\n\nWkwDatasetInfo = namedtuple(\n \"WkwDatasetInfo\", (\"dataset_path\", \"layer_name\", \"dtype\", \"mag\")\n)\nKnossosDatasetInfo = namedtuple(\"KnossosDatasetInfo\", (\"dataset_path\", \"dtype\"))\n\n\ndef _open_wkw(info, **kwargs):\n header = wkw.Header(np.dtype(info.dtype), **kwargs)\n ds = wkw.Dataset.open(\n path.join(info.dataset_path, info.layer_name, str(info.mag)), header\n )\n return ds\n\n\ndef open_wkw(info, lock=None, **kwargs):\n if lock is None:\n # Create dummy lock\n lock = Lock()\n\n with lock:\n return _open_wkw(info, **kwargs)\n\n\ndef open_knossos(info):\n return KnossosDataset.open(info.dataset_path, np.dtype(info.dtype))\n\n\ndef add_verbose_flag(parser):\n parser.add_argument(\n \"--verbose\", \"-v\", help=\"Verbose output\", dest=\"verbose\", action=\"store_true\"\n )\n\n parser.set_defaults(verbose=False)\n\n\ndef find_files(source_path, extensions):\n # Find all files with a matching file extension\n return (\n f\n for f in iglob(source_path, recursive=True)\n if any([f.endswith(suffix) for suffix in extensions])\n )\n\n\ndef get_chunks(arr, chunk_size):\n for i in range(0, len(arr), chunk_size):\n yield arr[i : i + chunk_size]\n\n\n# min_z and max_z are both inclusive\ndef get_regular_chunks(min_z, max_z, chunk_size):\n i = floor(min_z / chunk_size) * chunk_size\n while i < ceil((max_z + 1) / chunk_size) * chunk_size:\n yield range(i, i + chunk_size)\n i += chunk_size\n\n\ndef add_jobs_flag(parser):\n parser.add_argument(\n \"--jobs\", \"-j\", help=\"Parallel jobs\", type=int, default=cpu_count()\n )\n\n\ndef pool_init(lock):\n global process_pool_lock\n process_pool_lock = lock\n\n\ndef pool_get_lock():\n global process_pool_lock\n try:\n return process_pool_lock\n except NameError:\n return None\n\n\nclass ParallelExecutor:\n def __init__(self, jobs):\n self.lock = Lock()\n if python_version() >= \"3.7.0\":\n self.exec = ProcessPoolExecutor(\n jobs, initializer=pool_init, initargs=(self.lock,)\n )\n else:\n self.exec = ProcessPoolExecutor(jobs)\n self.futures = []\n\n def submit(self, fn, *args):\n future = self.exec.submit(fn, *args)\n self.futures.append(future)\n return future\n\n def __enter__(self):\n self.exec.__enter__()\n return self\n\n def __exit__(self, type, value, tb):\n [f.result() for f in self.futures]\n self.exec.__exit__(type, value, tb)\n\n\ntimes = {}\n\n\ndef time_start(identifier):\n times[identifier] = time.time()\n\n\ndef time_stop(identifier):\n _time = times.pop(identifier)\n logging.debug(\"{} took {:.8f}s\".format(identifier, time.time() - _time))\n", "path": "wkcuber/utils.py"}]} | 1,314 | 146 |
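
The off-by-one is easy to verify standalone. Assuming `max_z` names the last slice index (inclusive), the patched generator below yields the chunk containing that slice, which the old `ceil(max_z / chunk_size)` bound dropped whenever `max_z` fell exactly on a chunk boundary.

```python
from math import ceil, floor


def get_regular_chunks(min_z, max_z, chunk_size):
    # min_z and max_z are both inclusive (patched behaviour).
    i = floor(min_z / chunk_size) * chunk_size
    while i < ceil((max_z + 1) / chunk_size) * chunk_size:
        yield range(i, i + chunk_size)
        i += chunk_size


# Slices 0..64 with chunks of 32 (CUBE_EDGE_LEN-sized):
print([(r.start, r.stop) for r in get_regular_chunks(0, 64, 32)])
# [(0, 32), (32, 64), (64, 96)]; the old bound stopped after (32, 64),
# silently skipping slice 64, i.e. the last slice.
```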
gh_patches_debug_41984 | rasdani/github-patches | git_diff | Mailu__Mailu-769 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setup: generated files missing
Traefik is ending up in a looped redirect when trying to download the generated files. The Flask blueprint context needs to be modified to simplify the forwarding and eliminate the need for redirects.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup/server.py`
Content:
```
1 import flask
2 import flask_bootstrap
3 import redis
4 import json
5 import os
6 import jinja2
7 import uuid
8 import string
9 import random
10 import ipaddress
11
12
13 app = flask.Flask(__name__)
14 flask_bootstrap.Bootstrap(app)
15 db = redis.StrictRedis(host='redis', port=6379, db=0)
16
17
18 def render_flavor(flavor, template, data):
19 return flask.render_template(
20 os.path.join(flavor, template),
21 **data
22 )
23
24
25 @app.add_template_global
26 def secret(length=16):
27 charset = string.ascii_uppercase + string.digits
28 return ''.join(
29 random.SystemRandom().choice(charset)
30 for _ in range(length)
31 )
32
33
34 def build_app(path):
35
36 app.jinja_env.trim_blocks = True
37 app.jinja_env.lstrip_blocks = True
38
39 @app.context_processor
40 def app_context():
41 return dict(versions=os.getenv("VERSIONS","master").split(','))
42
43 version = os.getenv("this_version")
44
45 bp = flask.Blueprint(version, __name__)
46 bp.jinja_loader = jinja2.ChoiceLoader([
47 jinja2.FileSystemLoader(os.path.join(path, "templates")),
48 jinja2.FileSystemLoader(os.path.join(path, "flavors"))
49 ])
50
51 @bp.context_processor
52 def bp_context(version=version):
53 return dict(version=version)
54
55 @bp.route("/")
56 def wizard():
57 return flask.render_template('wizard.html')
58
59 @bp.route("/submit_flavor", methods=["POST"])
60 def submit_flavor():
61 data = flask.request.form.copy()
62 steps = sorted(os.listdir(os.path.join(path, "templates", "steps", data["flavor"])))
63 return flask.render_template('wizard.html', flavor=data["flavor"], steps=steps)
64
65 @bp.route("/submit", methods=["POST"])
66 def submit():
67 data = flask.request.form.copy()
68 data['uid'] = str(uuid.uuid4())
69 data['dns'] = str(ipaddress.IPv4Network(data['subnet'])[-2])
70 db.set(data['uid'], json.dumps(data))
71 return flask.redirect(flask.url_for('.setup', uid=data['uid']))
72
73 @bp.route("/setup/<uid>", methods=["GET"])
74 def setup(uid):
75 data = json.loads(db.get(uid))
76 flavor = data.get("flavor", "compose")
77 rendered = render_flavor(flavor, "setup.html", data)
78 return flask.render_template("setup.html", contents=rendered)
79
80 @bp.route("/file/<uid>/<filepath>", methods=["GET"])
81 def file(uid, filepath):
82 data = json.loads(db.get(uid))
83 flavor = data.get("flavor", "compose")
84 return flask.Response(
85 render_flavor(flavor, filepath, data),
86 mimetype="application/text"
87 )
88
89 app.register_blueprint(bp, url_prefix="/{}".format(version))
90
91
92 if __name__ == "__main__":
93 build_app("/tmp/mailutest")
94 app.run(debug=True)
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup/server.py b/setup/server.py
--- a/setup/server.py
+++ b/setup/server.py
@@ -10,7 +10,9 @@
import ipaddress
-app = flask.Flask(__name__)
+version = os.getenv("this_version")
+static_url_path = "/" + version + "/static"
+app = flask.Flask(__name__, static_url_path=static_url_path)
flask_bootstrap.Bootstrap(app)
db = redis.StrictRedis(host='redis', port=6379, db=0)
@@ -40,29 +42,37 @@
def app_context():
return dict(versions=os.getenv("VERSIONS","master").split(','))
- version = os.getenv("this_version")
+ prefix_bp = flask.Blueprint(version, __name__)
+ prefix_bp.jinja_loader = jinja2.ChoiceLoader([
+ jinja2.FileSystemLoader(os.path.join(path, "templates")),
+ jinja2.FileSystemLoader(os.path.join(path, "flavors"))
+ ])
- bp = flask.Blueprint(version, __name__)
- bp.jinja_loader = jinja2.ChoiceLoader([
+ root_bp = flask.Blueprint("root", __name__)
+ root_bp.jinja_loader = jinja2.ChoiceLoader([
jinja2.FileSystemLoader(os.path.join(path, "templates")),
jinja2.FileSystemLoader(os.path.join(path, "flavors"))
])
- @bp.context_processor
+ @prefix_bp.context_processor
+ @root_bp.context_processor
def bp_context(version=version):
return dict(version=version)
- @bp.route("/")
+ @prefix_bp.route("/")
+ @root_bp.route("/")
def wizard():
return flask.render_template('wizard.html')
- @bp.route("/submit_flavor", methods=["POST"])
+ @prefix_bp.route("/submit_flavor", methods=["POST"])
+ @root_bp.route("/submit_flavor", methods=["POST"])
def submit_flavor():
data = flask.request.form.copy()
steps = sorted(os.listdir(os.path.join(path, "templates", "steps", data["flavor"])))
return flask.render_template('wizard.html', flavor=data["flavor"], steps=steps)
- @bp.route("/submit", methods=["POST"])
+ @prefix_bp.route("/submit", methods=["POST"])
+ @root_bp.route("/submit", methods=["POST"])
def submit():
data = flask.request.form.copy()
data['uid'] = str(uuid.uuid4())
@@ -70,14 +80,16 @@
db.set(data['uid'], json.dumps(data))
return flask.redirect(flask.url_for('.setup', uid=data['uid']))
- @bp.route("/setup/<uid>", methods=["GET"])
+ @prefix_bp.route("/setup/<uid>", methods=["GET"])
+ @root_bp.route("/setup/<uid>", methods=["GET"])
def setup(uid):
data = json.loads(db.get(uid))
flavor = data.get("flavor", "compose")
rendered = render_flavor(flavor, "setup.html", data)
return flask.render_template("setup.html", contents=rendered)
- @bp.route("/file/<uid>/<filepath>", methods=["GET"])
+ @prefix_bp.route("/file/<uid>/<filepath>", methods=["GET"])
+ @root_bp.route("/file/<uid>/<filepath>", methods=["GET"])
def file(uid, filepath):
data = json.loads(db.get(uid))
flavor = data.get("flavor", "compose")
@@ -86,7 +98,8 @@
mimetype="application/text"
)
- app.register_blueprint(bp, url_prefix="/{}".format(version))
+ app.register_blueprint(prefix_bp, url_prefix="/{}".format(version))
+ app.register_blueprint(root_bp)
if __name__ == "__main__":
| {"golden_diff": "diff --git a/setup/server.py b/setup/server.py\n--- a/setup/server.py\n+++ b/setup/server.py\n@@ -10,7 +10,9 @@\n import ipaddress\n \n \n-app = flask.Flask(__name__)\n+version = os.getenv(\"this_version\")\n+static_url_path = \"/\" + version + \"/static\"\n+app = flask.Flask(__name__, static_url_path=static_url_path)\n flask_bootstrap.Bootstrap(app)\n db = redis.StrictRedis(host='redis', port=6379, db=0)\n \n@@ -40,29 +42,37 @@\n def app_context():\n return dict(versions=os.getenv(\"VERSIONS\",\"master\").split(','))\n \n- version = os.getenv(\"this_version\")\n+ prefix_bp = flask.Blueprint(version, __name__)\n+ prefix_bp.jinja_loader = jinja2.ChoiceLoader([\n+ jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n+ jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n+ ])\n \n- bp = flask.Blueprint(version, __name__)\n- bp.jinja_loader = jinja2.ChoiceLoader([\n+ root_bp = flask.Blueprint(\"root\", __name__)\n+ root_bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n \n- @bp.context_processor\n+ @prefix_bp.context_processor\n+ @root_bp.context_processor\n def bp_context(version=version):\n return dict(version=version)\n \n- @bp.route(\"/\")\n+ @prefix_bp.route(\"/\")\n+ @root_bp.route(\"/\")\n def wizard():\n return flask.render_template('wizard.html')\n \n- @bp.route(\"/submit_flavor\", methods=[\"POST\"])\n+ @prefix_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n+ @root_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n def submit_flavor():\n data = flask.request.form.copy()\n steps = sorted(os.listdir(os.path.join(path, \"templates\", \"steps\", data[\"flavor\"])))\n return flask.render_template('wizard.html', flavor=data[\"flavor\"], steps=steps)\n \n- @bp.route(\"/submit\", methods=[\"POST\"])\n+ @prefix_bp.route(\"/submit\", methods=[\"POST\"])\n+ @root_bp.route(\"/submit\", methods=[\"POST\"])\n def submit():\n data = flask.request.form.copy()\n data['uid'] = str(uuid.uuid4())\n@@ -70,14 +80,16 @@\n db.set(data['uid'], json.dumps(data))\n return flask.redirect(flask.url_for('.setup', uid=data['uid']))\n \n- @bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n+ @prefix_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n+ @root_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n def setup(uid):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n rendered = render_flavor(flavor, \"setup.html\", data)\n return flask.render_template(\"setup.html\", contents=rendered)\n \n- @bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n+ @prefix_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n+ @root_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n def file(uid, filepath):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n@@ -86,7 +98,8 @@\n mimetype=\"application/text\"\n )\n \n- app.register_blueprint(bp, url_prefix=\"/{}\".format(version))\n+ app.register_blueprint(prefix_bp, url_prefix=\"/{}\".format(version))\n+ app.register_blueprint(root_bp)\n \n \n if __name__ == \"__main__\":\n", "issue": "Setup: generated files missing\nTraefik is ending up in a looped redirect when trying to download the generated files. 
Flask blueprint context to be modified to simplify the forwarding and eliminate the need for redirects.\n", "before_files": [{"content": "import flask\nimport flask_bootstrap\nimport redis\nimport json\nimport os\nimport jinja2\nimport uuid\nimport string\nimport random\nimport ipaddress\n\n\napp = flask.Flask(__name__)\nflask_bootstrap.Bootstrap(app)\ndb = redis.StrictRedis(host='redis', port=6379, db=0)\n\n\ndef render_flavor(flavor, template, data):\n return flask.render_template(\n os.path.join(flavor, template),\n **data\n )\n\n\[email protected]_template_global\ndef secret(length=16):\n charset = string.ascii_uppercase + string.digits\n return ''.join(\n random.SystemRandom().choice(charset)\n for _ in range(length)\n )\n\n\ndef build_app(path):\n\n app.jinja_env.trim_blocks = True\n app.jinja_env.lstrip_blocks = True\n\n @app.context_processor\n def app_context():\n return dict(versions=os.getenv(\"VERSIONS\",\"master\").split(','))\n\n version = os.getenv(\"this_version\")\n\n bp = flask.Blueprint(version, __name__)\n bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n\n @bp.context_processor\n def bp_context(version=version):\n return dict(version=version)\n\n @bp.route(\"/\")\n def wizard():\n return flask.render_template('wizard.html')\n\n @bp.route(\"/submit_flavor\", methods=[\"POST\"])\n def submit_flavor():\n data = flask.request.form.copy()\n steps = sorted(os.listdir(os.path.join(path, \"templates\", \"steps\", data[\"flavor\"])))\n return flask.render_template('wizard.html', flavor=data[\"flavor\"], steps=steps)\n\n @bp.route(\"/submit\", methods=[\"POST\"])\n def submit():\n data = flask.request.form.copy()\n data['uid'] = str(uuid.uuid4())\n data['dns'] = str(ipaddress.IPv4Network(data['subnet'])[-2])\n db.set(data['uid'], json.dumps(data))\n return flask.redirect(flask.url_for('.setup', uid=data['uid']))\n\n @bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n def setup(uid):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n rendered = render_flavor(flavor, \"setup.html\", data)\n return flask.render_template(\"setup.html\", contents=rendered)\n\n @bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n def file(uid, filepath):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n return flask.Response(\n render_flavor(flavor, filepath, data),\n mimetype=\"application/text\"\n )\n\n app.register_blueprint(bp, url_prefix=\"/{}\".format(version))\n\n\nif __name__ == \"__main__\":\n build_app(\"/tmp/mailutest\")\n app.run(debug=True)\n", "path": "setup/server.py"}], "after_files": [{"content": "import flask\nimport flask_bootstrap\nimport redis\nimport json\nimport os\nimport jinja2\nimport uuid\nimport string\nimport random\nimport ipaddress\n\n\nversion = os.getenv(\"this_version\")\nstatic_url_path = \"/\" + version + \"/static\"\napp = flask.Flask(__name__, static_url_path=static_url_path)\nflask_bootstrap.Bootstrap(app)\ndb = redis.StrictRedis(host='redis', port=6379, db=0)\n\n\ndef render_flavor(flavor, template, data):\n return flask.render_template(\n os.path.join(flavor, template),\n **data\n )\n\n\[email protected]_template_global\ndef secret(length=16):\n charset = string.ascii_uppercase + string.digits\n return ''.join(\n random.SystemRandom().choice(charset)\n for _ in range(length)\n )\n\n\ndef build_app(path):\n\n app.jinja_env.trim_blocks = True\n app.jinja_env.lstrip_blocks = True\n\n 
@app.context_processor\n def app_context():\n return dict(versions=os.getenv(\"VERSIONS\",\"master\").split(','))\n\n prefix_bp = flask.Blueprint(version, __name__)\n prefix_bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n\n root_bp = flask.Blueprint(\"root\", __name__)\n root_bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n\n @prefix_bp.context_processor\n @root_bp.context_processor\n def bp_context(version=version):\n return dict(version=version)\n\n @prefix_bp.route(\"/\")\n @root_bp.route(\"/\")\n def wizard():\n return flask.render_template('wizard.html')\n\n @prefix_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n @root_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n def submit_flavor():\n data = flask.request.form.copy()\n steps = sorted(os.listdir(os.path.join(path, \"templates\", \"steps\", data[\"flavor\"])))\n return flask.render_template('wizard.html', flavor=data[\"flavor\"], steps=steps)\n\n @prefix_bp.route(\"/submit\", methods=[\"POST\"])\n @root_bp.route(\"/submit\", methods=[\"POST\"])\n def submit():\n data = flask.request.form.copy()\n data['uid'] = str(uuid.uuid4())\n data['dns'] = str(ipaddress.IPv4Network(data['subnet'])[-2])\n db.set(data['uid'], json.dumps(data))\n return flask.redirect(flask.url_for('.setup', uid=data['uid']))\n\n @prefix_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n @root_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n def setup(uid):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n rendered = render_flavor(flavor, \"setup.html\", data)\n return flask.render_template(\"setup.html\", contents=rendered)\n\n @prefix_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n @root_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n def file(uid, filepath):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n return flask.Response(\n render_flavor(flavor, filepath, data),\n mimetype=\"application/text\"\n )\n\n app.register_blueprint(prefix_bp, url_prefix=\"/{}\".format(version))\n app.register_blueprint(root_bp)\n\n\nif __name__ == \"__main__\":\n build_app(\"/tmp/mailutest\")\n app.run(debug=True)\n", "path": "setup/server.py"}]} | 1,126 | 832 |
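
The heart of the patch is registering every view on both a version-prefixed and an unprefixed blueprint, so each path is served directly instead of redirecting. A minimal, self-contained sketch of that decorator-stacking pattern (the blueprint names and the `/v1` prefix here are illustrative):

```python
import flask

app = flask.Flask(__name__)
prefix_bp = flask.Blueprint("v1", __name__)
root_bp = flask.Blueprint("root", __name__)


@prefix_bp.route("/file/<uid>/<filepath>")
@root_bp.route("/file/<uid>/<filepath>")
def file(uid, filepath):
    # Both blueprints register this same function, each under its own
    # endpoint ("v1.file" and "root.file"), so no redirect is needed.
    return f"{uid}/{filepath}"


app.register_blueprint(prefix_bp, url_prefix="/v1")
app.register_blueprint(root_bp)

with app.test_client() as client:
    assert client.get("/file/abc/mailu.env").status_code == 200
    assert client.get("/v1/file/abc/mailu.env").status_code == 200
```

Relative endpoints such as `flask.url_for('.setup', ...)` resolve within whichever blueprint handled the current request, which is why the post-submit redirect in the patched code keeps working under both prefixes.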
gh_patches_debug_16544 | rasdani/github-patches | git_diff | e-valuation__EvaP-1428 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Contributors missing in editor evaluation edit form
When editing an evaluation as an editor, contributors who have already been added (by managers) and who are marked as inactive or proxy users are not shown in the form anymore because they are not included in the contributor queryset. This leads to errors when saving the form.
The queryset should always include those people already added as contributors for this evaluation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/contributor/forms.py`
Content:
```
1 from datetime import datetime, timedelta
2 import logging
3
4 from django import forms
5 from django.conf import settings
6 from django.db.models import Q
7 from django.forms.widgets import CheckboxSelectMultiple
8 from django.utils.translation import ugettext_lazy as _
9 from evap.evaluation.forms import UserModelMultipleChoiceField, UserModelChoiceField
10 from evap.evaluation.models import Course, Evaluation, Questionnaire, UserProfile
11 from evap.evaluation.tools import date_to_datetime
12 from evap.staff.forms import ContributionForm
13
14 logger = logging.getLogger(__name__)
15
16
17 class EvaluationForm(forms.ModelForm):
18 general_questionnaires = forms.ModelMultipleChoiceField(queryset=None, widget=CheckboxSelectMultiple, label=_("General questionnaires"))
19 course = forms.ModelChoiceField(Course.objects.all(), disabled=True, required=False, widget=forms.HiddenInput())
20 name_de_field = forms.CharField(label=_("Name (German)"), disabled=True, required=False)
21 name_en_field = forms.CharField(label=_("Name (English)"), disabled=True, required=False)
22
23 class Meta:
24 model = Evaluation
25 fields = ('name_de_field', 'name_en_field', 'vote_start_datetime', 'vote_end_date', 'general_questionnaires', 'course')
26
27 def __init__(self, *args, **kwargs):
28 super().__init__(*args, **kwargs)
29
30 self.fields['name_de_field'].initial = self.instance.full_name_de
31 self.fields['name_en_field'].initial = self.instance.full_name_en
32
33 self.fields['general_questionnaires'].queryset = Questionnaire.objects.general_questionnaires().filter(
34 Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.instance)).distinct()
35
36 self.fields['vote_start_datetime'].localize = True
37 self.fields['vote_end_date'].localize = True
38
39 if self.instance.general_contribution:
40 self.fields['general_questionnaires'].initial = [q.pk for q in self.instance.general_contribution.questionnaires.all()]
41
42 def clean(self):
43 super().clean()
44
45 vote_start_datetime = self.cleaned_data.get('vote_start_datetime')
46 vote_end_date = self.cleaned_data.get('vote_end_date')
47 if vote_start_datetime and vote_end_date:
48 if vote_start_datetime.date() > vote_end_date:
49 self.add_error("vote_start_datetime", "")
50 self.add_error("vote_end_date", _("The first day of evaluation must be before the last one."))
51
52 def clean_vote_end_date(self):
53 vote_end_date = self.cleaned_data.get('vote_end_date')
54
55 # The actual deadline is EVALUATION_END_OFFSET_HOURS:00 AM of the day after vote_end_date.
56 # Therefore an evaluation date 24h + EVALUATION_END_OFFSET_HOURS in the past would technically still be in the future.
57 if vote_end_date and date_to_datetime(vote_end_date) + timedelta(hours=24 + settings.EVALUATION_END_OFFSET_HOURS) < datetime.now():
58 raise forms.ValidationError(_("The last day of evaluation must be in the future."))
59 return vote_end_date
60
61 def save(self, *args, **kw):
62 evaluation = super().save(*args, **kw)
63 evaluation.general_contribution.questionnaires.set(self.cleaned_data.get('general_questionnaires'))
64 return evaluation
65
66
67 class EditorContributionForm(ContributionForm):
68 def __init__(self, *args, **kwargs):
69 super().__init__(*args, **kwargs)
70
71 self.fields['questionnaires'].queryset = Questionnaire.objects.contributor_questionnaires().filter(
72 Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.evaluation)).distinct()
73 self.fields['contributor'].queryset = UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True)
74
75
76 class DelegatesForm(forms.ModelForm):
77 delegates = UserModelMultipleChoiceField(queryset=UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True),
78 required=False)
79
80 class Meta:
81 model = UserProfile
82 fields = ('delegates',)
83 field_classes = {
84 'delegates': UserModelMultipleChoiceField,
85 }
86
87 def __init__(self, *args, **kwargs):
88 super().__init__(*args, **kwargs)
89
90 def save(self, *args, **kw):
91 super().save(*args, **kw)
92 logger.info('User "{}" edited the settings.'.format(self.instance.username))
93
94
95 class DelegateSelectionForm(forms.Form):
96 delegate_to = UserModelChoiceField(label=_("Delegate to"),
97 queryset=UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True))
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/contributor/forms.py b/evap/contributor/forms.py
--- a/evap/contributor/forms.py
+++ b/evap/contributor/forms.py
@@ -68,9 +68,13 @@
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
+ existing_contributor_pk = self.instance.contributor.pk if self.instance.contributor else None
+
self.fields['questionnaires'].queryset = Questionnaire.objects.contributor_questionnaires().filter(
Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.evaluation)).distinct()
- self.fields['contributor'].queryset = UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True)
+ self.fields['contributor'].queryset = UserProfile.objects.filter(
+ (Q(is_active=True) & Q(is_proxy_user=False)) | Q(pk=existing_contributor_pk)
+ )
class DelegatesForm(forms.ModelForm):
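The essence of the patched queryset, as a minimal standalone sketch (this is not EvaP's actual form code; `UserProfile` is passed in so the snippet stays self-contained): the Q-object union keeps the usual restriction to active, non-proxy users but always re-admits the user already stored on the contribution, which is exactly what the issue asks for.

```python
from django.db.models import Q

def selectable_contributors(UserProfile, existing_contributor_pk=None):
    """Users an editor may pick as contributor for a given contribution."""
    eligible = Q(is_active=True) & Q(is_proxy_user=False)
    # The already-assigned user stays selectable even if inactive or a
    # proxy user, so re-saving the form cannot fail validation.
    return UserProfile.objects.filter(eligible | Q(pk=existing_contributor_pk))
```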
| {"golden_diff": "diff --git a/evap/contributor/forms.py b/evap/contributor/forms.py\n--- a/evap/contributor/forms.py\n+++ b/evap/contributor/forms.py\n@@ -68,9 +68,13 @@\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n \n+ existing_contributor_pk = self.instance.contributor.pk if self.instance.contributor else None\n+\n self.fields['questionnaires'].queryset = Questionnaire.objects.contributor_questionnaires().filter(\n Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.evaluation)).distinct()\n- self.fields['contributor'].queryset = UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True)\n+ self.fields['contributor'].queryset = UserProfile.objects.filter(\n+ (Q(is_active=True) & Q(is_proxy_user=False)) | Q(pk=existing_contributor_pk)\n+ )\n \n \n class DelegatesForm(forms.ModelForm):\n", "issue": "Contributors missing in editor evaluation edit form\nWhen editing an evaluation as an editor, contributors who have already been added (by managers) and who are marked as inactive or proxy users are not shown in the form anymore because they are not included in the contributor queryset. This leads to errors when saving the form.\r\nThe queryset should always include those people already added as contributors for this evaluation.\n", "before_files": [{"content": "from datetime import datetime, timedelta\nimport logging\n\nfrom django import forms\nfrom django.conf import settings\nfrom django.db.models import Q\nfrom django.forms.widgets import CheckboxSelectMultiple\nfrom django.utils.translation import ugettext_lazy as _\nfrom evap.evaluation.forms import UserModelMultipleChoiceField, UserModelChoiceField\nfrom evap.evaluation.models import Course, Evaluation, Questionnaire, UserProfile\nfrom evap.evaluation.tools import date_to_datetime\nfrom evap.staff.forms import ContributionForm\n\nlogger = logging.getLogger(__name__)\n\n\nclass EvaluationForm(forms.ModelForm):\n general_questionnaires = forms.ModelMultipleChoiceField(queryset=None, widget=CheckboxSelectMultiple, label=_(\"General questionnaires\"))\n course = forms.ModelChoiceField(Course.objects.all(), disabled=True, required=False, widget=forms.HiddenInput())\n name_de_field = forms.CharField(label=_(\"Name (German)\"), disabled=True, required=False)\n name_en_field = forms.CharField(label=_(\"Name (English)\"), disabled=True, required=False)\n\n class Meta:\n model = Evaluation\n fields = ('name_de_field', 'name_en_field', 'vote_start_datetime', 'vote_end_date', 'general_questionnaires', 'course')\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.fields['name_de_field'].initial = self.instance.full_name_de\n self.fields['name_en_field'].initial = self.instance.full_name_en\n\n self.fields['general_questionnaires'].queryset = Questionnaire.objects.general_questionnaires().filter(\n Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.instance)).distinct()\n\n self.fields['vote_start_datetime'].localize = True\n self.fields['vote_end_date'].localize = True\n\n if self.instance.general_contribution:\n self.fields['general_questionnaires'].initial = [q.pk for q in self.instance.general_contribution.questionnaires.all()]\n\n def clean(self):\n super().clean()\n\n vote_start_datetime = self.cleaned_data.get('vote_start_datetime')\n vote_end_date = self.cleaned_data.get('vote_end_date')\n if vote_start_datetime and vote_end_date:\n if vote_start_datetime.date() > vote_end_date:\n self.add_error(\"vote_start_datetime\", 
\"\")\n self.add_error(\"vote_end_date\", _(\"The first day of evaluation must be before the last one.\"))\n\n def clean_vote_end_date(self):\n vote_end_date = self.cleaned_data.get('vote_end_date')\n\n # The actual deadline is EVALUATION_END_OFFSET_HOURS:00 AM of the day after vote_end_date.\n # Therefore an evaluation date 24h + EVALUATION_END_OFFSET_HOURS in the past would technically still be in the future.\n if vote_end_date and date_to_datetime(vote_end_date) + timedelta(hours=24 + settings.EVALUATION_END_OFFSET_HOURS) < datetime.now():\n raise forms.ValidationError(_(\"The last day of evaluation must be in the future.\"))\n return vote_end_date\n\n def save(self, *args, **kw):\n evaluation = super().save(*args, **kw)\n evaluation.general_contribution.questionnaires.set(self.cleaned_data.get('general_questionnaires'))\n return evaluation\n\n\nclass EditorContributionForm(ContributionForm):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.fields['questionnaires'].queryset = Questionnaire.objects.contributor_questionnaires().filter(\n Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.evaluation)).distinct()\n self.fields['contributor'].queryset = UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True)\n\n\nclass DelegatesForm(forms.ModelForm):\n delegates = UserModelMultipleChoiceField(queryset=UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True),\n required=False)\n\n class Meta:\n model = UserProfile\n fields = ('delegates',)\n field_classes = {\n 'delegates': UserModelMultipleChoiceField,\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def save(self, *args, **kw):\n super().save(*args, **kw)\n logger.info('User \"{}\" edited the settings.'.format(self.instance.username))\n\n\nclass DelegateSelectionForm(forms.Form):\n delegate_to = UserModelChoiceField(label=_(\"Delegate to\"),\n queryset=UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True))\n", "path": "evap/contributor/forms.py"}], "after_files": [{"content": "from datetime import datetime, timedelta\nimport logging\n\nfrom django import forms\nfrom django.conf import settings\nfrom django.db.models import Q\nfrom django.forms.widgets import CheckboxSelectMultiple\nfrom django.utils.translation import ugettext_lazy as _\nfrom evap.evaluation.forms import UserModelMultipleChoiceField, UserModelChoiceField\nfrom evap.evaluation.models import Course, Evaluation, Questionnaire, UserProfile\nfrom evap.evaluation.tools import date_to_datetime\nfrom evap.staff.forms import ContributionForm\n\nlogger = logging.getLogger(__name__)\n\n\nclass EvaluationForm(forms.ModelForm):\n general_questionnaires = forms.ModelMultipleChoiceField(queryset=None, widget=CheckboxSelectMultiple, label=_(\"General questionnaires\"))\n course = forms.ModelChoiceField(Course.objects.all(), disabled=True, required=False, widget=forms.HiddenInput())\n name_de_field = forms.CharField(label=_(\"Name (German)\"), disabled=True, required=False)\n name_en_field = forms.CharField(label=_(\"Name (English)\"), disabled=True, required=False)\n\n class Meta:\n model = Evaluation\n fields = ('name_de_field', 'name_en_field', 'vote_start_datetime', 'vote_end_date', 'general_questionnaires', 'course')\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.fields['name_de_field'].initial = self.instance.full_name_de\n self.fields['name_en_field'].initial = self.instance.full_name_en\n\n 
self.fields['general_questionnaires'].queryset = Questionnaire.objects.general_questionnaires().filter(\n Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.instance)).distinct()\n\n self.fields['vote_start_datetime'].localize = True\n self.fields['vote_end_date'].localize = True\n\n if self.instance.general_contribution:\n self.fields['general_questionnaires'].initial = [q.pk for q in self.instance.general_contribution.questionnaires.all()]\n\n def clean(self):\n super().clean()\n\n vote_start_datetime = self.cleaned_data.get('vote_start_datetime')\n vote_end_date = self.cleaned_data.get('vote_end_date')\n if vote_start_datetime and vote_end_date:\n if vote_start_datetime.date() > vote_end_date:\n self.add_error(\"vote_start_datetime\", \"\")\n self.add_error(\"vote_end_date\", _(\"The first day of evaluation must be before the last one.\"))\n\n def clean_vote_end_date(self):\n vote_end_date = self.cleaned_data.get('vote_end_date')\n\n # The actual deadline is EVALUATION_END_OFFSET_HOURS:00 AM of the day after vote_end_date.\n # Therefore an evaluation date 24h + EVALUATION_END_OFFSET_HOURS in the past would technically still be in the future.\n if vote_end_date and date_to_datetime(vote_end_date) + timedelta(hours=24 + settings.EVALUATION_END_OFFSET_HOURS) < datetime.now():\n raise forms.ValidationError(_(\"The last day of evaluation must be in the future.\"))\n return vote_end_date\n\n def save(self, *args, **kw):\n evaluation = super().save(*args, **kw)\n evaluation.general_contribution.questionnaires.set(self.cleaned_data.get('general_questionnaires'))\n return evaluation\n\n\nclass EditorContributionForm(ContributionForm):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n existing_contributor_pk = self.instance.contributor.pk if self.instance.contributor else None\n\n self.fields['questionnaires'].queryset = Questionnaire.objects.contributor_questionnaires().filter(\n Q(visibility=Questionnaire.EDITORS) | Q(contributions__evaluation=self.evaluation)).distinct()\n self.fields['contributor'].queryset = UserProfile.objects.filter(\n (Q(is_active=True) & Q(is_proxy_user=False)) | Q(pk=existing_contributor_pk)\n )\n\n\nclass DelegatesForm(forms.ModelForm):\n delegates = UserModelMultipleChoiceField(queryset=UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True),\n required=False)\n\n class Meta:\n model = UserProfile\n fields = ('delegates',)\n field_classes = {\n 'delegates': UserModelMultipleChoiceField,\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def save(self, *args, **kw):\n super().save(*args, **kw)\n logger.info('User \"{}\" edited the settings.'.format(self.instance.username))\n\n\nclass DelegateSelectionForm(forms.Form):\n delegate_to = UserModelChoiceField(label=_(\"Delegate to\"),\n queryset=UserProfile.objects.exclude(is_active=False).exclude(is_proxy_user=True))\n", "path": "evap/contributor/forms.py"}]} | 1,481 | 220 |
gh_patches_debug_5473 | rasdani/github-patches | git_diff | napari__napari-1250 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
working on fixing tests
## 🐛 Bug
here are some crazy observations that started from looking into the test failure in #923, and may have some implications for general test fixes that we want to look into in #921. I'll patch it in a PR soon, but just want to document this somewhere, in case it becomes pertinent to #921
on a mac (but we observe it in linux tests too), with python 3.7, using pyqt5 (I'm on 5.14.1), running this code:
```python
import numpy as np
import pytest # <-- this unused import is required for the bug!
import napari
with napari.gui_qt():
data = np.random.random((10, 15, 20))
viewer = napari.view_image(data)
viewer = napari.view_image(data)
viewer.dims.ndisplay = 3
viewer.dims.ndisplay = 2
```
leads to an error deep in vispy that culminates in:
```
lib/python3.7/site-packages/vispy/gloo/glir.py in _validate(self)
1299 gl.GL_VALIDATE_STATUS):
1300 raise RuntimeError('Program validation error:\n%s'
-> 1301 % gl.glGetProgramInfoLog(self._handle))
1302
1303 def _post_draw(self):
RuntimeError: Program validation error:
```
observations:
- only happens with python 3.7; python 3.6 & 3.8 are fine
- only happens with pyqt5, not pyside2
- only happens with the **double** viewer instantiation (two calls to `view_image`), assigning both viewers to the same variable name. If you name the second viewer `viewer2`, it's fine
- only happens when you switch `ndisplay` from 2 to 3 and back to 2
- the error occurs only once per session. Running the same code a second time works fine.
- pytest is key. If you comment out the `import pytest` line, you don't get the error...
- I thought it might have something to do with our adding of 'viewer' to console globals... but removing the `self.console = QtConsole({'viewer': self.viewer})` from our `QtConsole` instantiation doesn't change anything.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/__init__.py`
Content:
```
1 from ._version import get_versions
2
3 __version__ = get_versions()['version']
4 del get_versions
5
6 import os
7 from distutils.version import StrictVersion
8 from pathlib import Path
9 from qtpy import API_NAME
10 from ._version import get_versions
11
12 # putting up higher due to circular imports if plugin exceptions are raised
13 # on startup (we need to be able to show the napari version in the traceback.)
14 __version__ = get_versions()['version']
15 del get_versions
16
17
18 if API_NAME == 'PySide2':
19 # Set plugin path appropriately if using PySide2. This is a bug fix
20 # for when both PyQt5 and Pyside2 are installed
21 import PySide2
22
23 os.environ['QT_PLUGIN_PATH'] = str(
24 Path(PySide2.__file__).parent / 'Qt' / 'plugins'
25 )
26
27 from qtpy import QtCore
28
29 # When QT is not the specific version, we raise a warning:
30 from warnings import warn
31
32 if StrictVersion(QtCore.__version__) < StrictVersion('5.12.3'):
33 warn_message = f"""
34 napari was tested with QT library `>=5.12.3`.
35 The version installed is {QtCore.__version__}. Please report any issues with this
36 specific QT version at https://github.com/Napari/napari/issues.
37 """
38 warn(message=warn_message)
39
40 from vispy import app
41 import logging
42
43 # set vispy application to the appropriate qt backend
44 app.use_app(API_NAME)
45 del app
46 # set vispy logger to show warning and errors only
47 vispy_logger = logging.getLogger('vispy')
48 vispy_logger.setLevel(logging.WARNING)
49
50 from .viewer import Viewer
51 from .plugins.io import save_layers
52
53 # Note that importing _viewer_key_bindings is needed as the Viewer gets
54 # decorated with keybindings during that process, but it is not directly needed
55 # by our users and so is deleted below
56 from . import _viewer_key_bindings # noqa: F401
57 from .view_layers import (
58 view_path,
59 view_image,
60 view_labels,
61 view_surface,
62 view_shapes,
63 view_points,
64 view_vectors,
65 )
66 from ._qt import gui_qt
67 from .utils import sys_info, _magicgui
68
69 # register napari object types with magicgui if it is installed
70 _magicgui.register_types_with_magicgui()
71
72
73 # this unused import is here to fix a very strange bug.
74 # there is some mysterious magical goodness in scipy stats that needs
75 # to be imported early.
76 # see: https://github.com/napari/napari/issues/925
77 from scipy import stats # noqa: F401
78
79 del _magicgui
80 del stats
81 del _viewer_key_bindings
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/__init__.py b/napari/__init__.py
--- a/napari/__init__.py
+++ b/napari/__init__.py
@@ -69,13 +69,5 @@
# register napari object types with magicgui if it is installed
_magicgui.register_types_with_magicgui()
-
-# this unused import is here to fix a very strange bug.
-# there is some mysterious magical goodness in scipy stats that needs
-# to be imported early.
-# see: https://github.com/napari/napari/issues/925
-from scipy import stats # noqa: F401
-
del _magicgui
-del stats
del _viewer_key_bindings
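The patch simply deletes the `scipy.stats` workaround import together with its `del stats` cleanup. A quick sanity check one might run after applying it, assuming napari is installed and that none of its remaining dependencies imports scipy at import time:

```python
import sys

import napari  # noqa: F401  (imported only for its import-time side effects)

# With the workaround import removed, importing napari alone should no
# longer pull in scipy; expect False unless another dependency imports it.
print("scipy.stats" in sys.modules)
```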
| {"golden_diff": "diff --git a/napari/__init__.py b/napari/__init__.py\n--- a/napari/__init__.py\n+++ b/napari/__init__.py\n@@ -69,13 +69,5 @@\n # register napari object types with magicgui if it is installed\n _magicgui.register_types_with_magicgui()\n \n-\n-# this unused import is here to fix a very strange bug.\n-# there is some mysterious magical goodness in scipy stats that needs\n-# to be imported early.\n-# see: https://github.com/napari/napari/issues/925\n-from scipy import stats # noqa: F401\n-\n del _magicgui\n-del stats\n del _viewer_key_bindings\n", "issue": "working on fixing tests\n## \ud83d\udc1b Bug\r\n\r\nhere are some crazy observations that started from looking into the test failure in #923, and may have some implications for general test fixes that we want to look into in #921. I'll patch it in a PR soon, but just want to document this somewhere, in case it becomes pertinent to #921\r\n\r\non a mac (but we observe it in linux tests too), with python 3.7, using pyqt5 (I'm on 5.14.1), running this code:\r\n```python\r\nimport numpy as np\r\nimport pytest # <-- this unused import is required for the bug!\r\nimport napari\r\n\r\nwith napari.gui_qt():\r\n data = np.random.random((10, 15, 20))\r\n viewer = napari.view_image(data)\r\n viewer = napari.view_image(data)\r\n viewer.dims.ndisplay = 3\r\n viewer.dims.ndisplay = 2\r\n```\r\nleads to an error deep in vispy that culminates in:\r\n```\r\nlib/python3.7/site-packages/vispy/gloo/glir.py in _validate(self)\r\n 1299 gl.GL_VALIDATE_STATUS):\r\n 1300 raise RuntimeError('Program validation error:\\n%s'\r\n-> 1301 % gl.glGetProgramInfoLog(self._handle))\r\n 1302 \r\n 1303 def _post_draw(self):\r\n\r\nRuntimeError: Program validation error:\r\n```\r\nobservations:\r\n- only happens with python 3.7; python 3.6 & 3.8 are fine\r\n- only happens with pyqt5, not pyside2\r\n- only happens with the **double** viewer instantiation (two calls to `view_image`), assigning both viewers to the same variable name. If you name the second viewer `viewer2`, it's fine\r\n- only happens when you switch from `ndisplay=` 2, then to 3, back to 2\r\n- the error occurs only once per session. running the same code a second time works fine.\r\n- pytest is key. if you comment out the `import pytest` line, you don't get the error...\r\n- i thought it might have something to do with our adding of 'viewer' to console globals... but removing the `self.console = QtConsole({'viewer': self.viewer})` from our `QtConsole` instantiation doesn't change anything.\r\n\n", "before_files": [{"content": "from ._version import get_versions\n\n__version__ = get_versions()['version']\ndel get_versions\n\nimport os\nfrom distutils.version import StrictVersion\nfrom pathlib import Path\nfrom qtpy import API_NAME\nfrom ._version import get_versions\n\n# putting up higher due to circular imports if plugin exceptions are raised\n# on startup (we need to be able to show the napari version in the traceback.)\n__version__ = get_versions()['version']\ndel get_versions\n\n\nif API_NAME == 'PySide2':\n # Set plugin path appropriately if using PySide2. 
This is a bug fix\n # for when both PyQt5 and Pyside2 are installed\n import PySide2\n\n os.environ['QT_PLUGIN_PATH'] = str(\n Path(PySide2.__file__).parent / 'Qt' / 'plugins'\n )\n\nfrom qtpy import QtCore\n\n# When QT is not the specific version, we raise a warning:\nfrom warnings import warn\n\nif StrictVersion(QtCore.__version__) < StrictVersion('5.12.3'):\n warn_message = f\"\"\"\n napari was tested with QT library `>=5.12.3`.\n The version installed is {QtCore.__version__}. Please report any issues with this\n specific QT version at https://github.com/Napari/napari/issues.\n \"\"\"\n warn(message=warn_message)\n\nfrom vispy import app\nimport logging\n\n# set vispy application to the appropriate qt backend\napp.use_app(API_NAME)\ndel app\n# set vispy logger to show warning and errors only\nvispy_logger = logging.getLogger('vispy')\nvispy_logger.setLevel(logging.WARNING)\n\nfrom .viewer import Viewer\nfrom .plugins.io import save_layers\n\n# Note that importing _viewer_key_bindings is needed as the Viewer gets\n# decorated with keybindings during that process, but it is not directly needed\n# by our users and so is deleted below\nfrom . import _viewer_key_bindings # noqa: F401\nfrom .view_layers import (\n view_path,\n view_image,\n view_labels,\n view_surface,\n view_shapes,\n view_points,\n view_vectors,\n)\nfrom ._qt import gui_qt\nfrom .utils import sys_info, _magicgui\n\n# register napari object types with magicgui if it is installed\n_magicgui.register_types_with_magicgui()\n\n\n# this unused import is here to fix a very strange bug.\n# there is some mysterious magical goodness in scipy stats that needs\n# to be imported early.\n# see: https://github.com/napari/napari/issues/925\nfrom scipy import stats # noqa: F401\n\ndel _magicgui\ndel stats\ndel _viewer_key_bindings\n", "path": "napari/__init__.py"}], "after_files": [{"content": "from ._version import get_versions\n\n__version__ = get_versions()['version']\ndel get_versions\n\nimport os\nfrom distutils.version import StrictVersion\nfrom pathlib import Path\nfrom qtpy import API_NAME\nfrom ._version import get_versions\n\n# putting up higher due to circular imports if plugin exceptions are raised\n# on startup (we need to be able to show the napari version in the traceback.)\n__version__ = get_versions()['version']\ndel get_versions\n\n\nif API_NAME == 'PySide2':\n # Set plugin path appropriately if using PySide2. This is a bug fix\n # for when both PyQt5 and Pyside2 are installed\n import PySide2\n\n os.environ['QT_PLUGIN_PATH'] = str(\n Path(PySide2.__file__).parent / 'Qt' / 'plugins'\n )\n\nfrom qtpy import QtCore\n\n# When QT is not the specific version, we raise a warning:\nfrom warnings import warn\n\nif StrictVersion(QtCore.__version__) < StrictVersion('5.12.3'):\n warn_message = f\"\"\"\n napari was tested with QT library `>=5.12.3`.\n The version installed is {QtCore.__version__}. 
Please report any issues with this\n specific QT version at https://github.com/Napari/napari/issues.\n \"\"\"\n warn(message=warn_message)\n\nfrom vispy import app\nimport logging\n\n# set vispy application to the appropriate qt backend\napp.use_app(API_NAME)\ndel app\n# set vispy logger to show warning and errors only\nvispy_logger = logging.getLogger('vispy')\nvispy_logger.setLevel(logging.WARNING)\n\nfrom .viewer import Viewer\nfrom .plugins.io import save_layers\n\n# Note that importing _viewer_key_bindings is needed as the Viewer gets\n# decorated with keybindings during that process, but it is not directly needed\n# by our users and so is deleted below\nfrom . import _viewer_key_bindings # noqa: F401\nfrom .view_layers import (\n view_path,\n view_image,\n view_labels,\n view_surface,\n view_shapes,\n view_points,\n view_vectors,\n)\nfrom ._qt import gui_qt\nfrom .utils import sys_info, _magicgui\n\n# register napari object types with magicgui if it is installed\n_magicgui.register_types_with_magicgui()\n\ndel _magicgui\ndel _viewer_key_bindings\n", "path": "napari/__init__.py"}]} | 1,524 | 161 |
gh_patches_debug_5727 | rasdani/github-patches | git_diff | secdev__scapy-1040 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problems while importing scapy on some environments (e.g. Eclipse, some peculiarly configured Windows boxes etc.)
Scapy Version: v2.3.3-1008
System: Windows10
Python Version: 2.7.14
Updated (closed) issue: [#1029](https://github.com/secdev/scapy/issues/1029)
File: scapy/\_\_init\_\_.py
There are problems using master/scapy in some environments (e.g. Eclipse, some peculiarly configured - mostly old - Windows boxes, etc.).
When scapy is started/imported and git is installed, console windows appear while _version() is performed.
This issue also slows down scapy's launch/import.
The problem (unexpected launching of git, git bash console window appearing) exists because
the condition in _version_from_git_describe() does not work as expected on my/our boxes (old Windows?, old buggy Python 2.7.8 where the '/' path separator is not treated the same way as '\\'? nevermind...).
The simple fix below fully avoids our problems:
```
diff --git a/scapy/__init__.py b/scapy/__init__.py
index 4893a9f..4f38578 100644
--- a/scapy/__init__.py
+++ b/scapy/__init__.py
@@ -38,7 +38,7 @@ def _version_from_git_describe():
>>> _version_from_git_describe()
'2.3.2.dev346'
"""
- if not os.path.isdir(os.path.join(_SCAPY_PKG_DIR, '../.git')):
+ if not os.path.isdir(os.path.join(os.path.dirname(_SCAPY_PKG_DIR), '.git')):
raise ValueError('not in scapy git repo')
p = subprocess.Popen(['git', 'describe', '--always'], cwd=_SCAPY_PKG_DIR,
```
Thanks,
Adam Karpierz
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/__init__.py`
Content:
```
1 ## This file is part of Scapy
2 ## See http://www.secdev.org/projects/scapy for more informations
3 ## Copyright (C) Philippe Biondi <[email protected]>
4 ## This program is published under a GPLv2 license
5
6 """
7 Scapy: create, send, sniff, dissect and manipulate network packets.
8
9 Usable either from an interactive console or as a Python library.
10 http://www.secdev.org/projects/scapy
11 """
12
13 import os
14 import re
15 import subprocess
16
17
18 _SCAPY_PKG_DIR = os.path.dirname(__file__)
19
20 def _version_from_git_describe():
21 """
22 Read the version from ``git describe``. It returns the latest tag with an
23 optional suffix if the current directory is not exactly on the tag.
24
25 Example::
26
27 $ git describe --always
28 v2.3.2-346-g164a52c075c8
29
30 The tag prefix (``v``) and the git commit sha1 (``-g164a52c075c8``) are
31 removed if present.
32
33 If the current directory is not exactly on the tag, a ``.devN`` suffix is
34 appended where N is the number of commits made after the last tag.
35
36 Example::
37
38 >>> _version_from_git_describe()
39 '2.3.2.dev346'
40 """
41 if not os.path.isdir(os.path.join(_SCAPY_PKG_DIR, '../.git')):
42 raise ValueError('not in scapy git repo')
43
44 p = subprocess.Popen(['git', 'describe', '--always'], cwd=_SCAPY_PKG_DIR,
45 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
46
47 out, err = p.communicate()
48
49 if p.returncode == 0:
50 tag = out.decode().strip()
51 match = re.match('^v?(.+?)-(\\d+)-g[a-f0-9]+$', tag)
52 if match:
53 # remove the 'v' prefix and add a '.devN' suffix
54 return '%s.dev%s' % (match.group(1), match.group(2))
55 else:
56 # just remove the 'v' prefix
57 return re.sub('^v', '', tag)
58 else:
59 raise subprocess.CalledProcessError(p.returncode, err)
60
61 def _version():
62 version_file = os.path.join(_SCAPY_PKG_DIR, 'VERSION')
63 try:
64 tag = _version_from_git_describe()
65 # successfully read the tag from git, write it in VERSION for
66 # installation and/or archive generation.
67 with open(version_file, 'w') as f:
68 f.write(tag)
69 return tag
70 except:
71 # failed to read the tag from git, try to read it from a VERSION file
72 try:
73 with open(version_file, 'r') as f:
74 tag = f.read()
75 return tag
76 except:
77 # Rely on git archive "export-subst" git attribute.
78 # See 'man gitattributes' for more details.
79 git_archive_id = '$Format:%h %d$'
80 sha1 = git_archive_id.strip().split()[0]
81 match = re.search('tag:(\\S+)', git_archive_id)
82 if match:
83 return "git-archive.dev" + match.group(1)
84 elif sha1:
85 return "git-archive.dev" + sha1
86 else:
87 return 'unknown.version'
88
89 VERSION = _version()
90
91 if __name__ == "__main__":
92 from scapy.main import interact
93 interact()
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scapy/__init__.py b/scapy/__init__.py
--- a/scapy/__init__.py
+++ b/scapy/__init__.py
@@ -38,7 +38,7 @@
>>> _version_from_git_describe()
'2.3.2.dev346'
"""
- if not os.path.isdir(os.path.join(_SCAPY_PKG_DIR, '../.git')):
+ if not os.path.isdir(os.path.join(os.path.dirname(_SCAPY_PKG_DIR), '.git')):
raise ValueError('not in scapy git repo')
p = subprocess.Popen(['git', 'describe', '--always'], cwd=_SCAPY_PKG_DIR,
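Why `os.path.dirname` is the safer spelling here, as a small illustration (hypothetical install path; `ntpath` is used so the Windows behaviour reproduces on any OS):

```python
import ntpath  # the Windows flavour of os.path, importable everywhere

pkg_dir = r"C:\Python27\Lib\site-packages\scapy"  # hypothetical location

# Old check: leaves an unresolved '..' and mixes path separators, which
# the issue reports being mishandled on some older Windows/Python setups:
print(ntpath.join(pkg_dir, '../.git'))
# -> C:\Python27\Lib\site-packages\scapy\../.git

# Patched check: resolves the parent first and keeps native separators:
print(ntpath.join(ntpath.dirname(pkg_dir), '.git'))
# -> C:\Python27\Lib\site-packages\.git
```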
| {"golden_diff": "diff --git a/scapy/__init__.py b/scapy/__init__.py\n--- a/scapy/__init__.py\n+++ b/scapy/__init__.py\n@@ -38,7 +38,7 @@\n >>> _version_from_git_describe()\n '2.3.2.dev346'\n \"\"\"\n- if not os.path.isdir(os.path.join(_SCAPY_PKG_DIR, '../.git')):\n+ if not os.path.isdir(os.path.join(os.path.dirname(_SCAPY_PKG_DIR), '.git')):\n raise ValueError('not in scapy git repo')\n \n p = subprocess.Popen(['git', 'describe', '--always'], cwd=_SCAPY_PKG_DIR,\n", "issue": "Problems while importing scapy on some environments (e.g. Eclipse, some peculiarly configured Windows boxes etc.) \nScapy Version: v2.3.3-1008\r\nSystem: Windows10\r\nPython Version: 2.7.14\r\n\r\nUpdated (closed) issue: [#1029](https://github.com/secdev/scapy/issues/1029l)\r\n\r\nFile: scapy/\\_\\_init\\_\\_.py\r\n\r\nThere are a problems of using master/scapy on some environments like e.g. Eclipse, some peculiarly configured Windows - mostly old - boxes etc.)\r\nWhen scapy is started/imported and git is installed, some console windows appears while _version() is performed.\r\nAlso this issue has impact on decrease of scapy launch/import speed.\r\n\r\nProblem (unexpected lauching of git, git bash console window appearing) exist because of\r\ncondition in _version_from_git_describe() does not works as expected on my/our boxes (old Windows?, old buggy Python 2.7.8 where '/' path separator is not treated in the same way as '\\\\'? nevermind...)\r\n\r\nSimple fix below fully avoids our problems:\r\n\r\n```\r\ndiff --git a/scapy/__init__.py b/scapy/__init__.py\r\nindex 4893a9f..4f38578 100644\r\n--- a/scapy/__init__.py\r\n+++ b/scapy/__init__.py\r\n@@ -38,7 +38,7 @@ def _version_from_git_describe():\r\n >>> _version_from_git_describe()\r\n '2.3.2.dev346'\r\n \"\"\"\r\n- if not os.path.isdir(os.path.join(_SCAPY_PKG_DIR, '../.git')):\r\n+ if not os.path.isdir(os.path.join(os.path.dirname(_SCAPY_PKG_DIR), '.git')):\r\n raise ValueError('not in scapy git repo')\r\n \r\n p = subprocess.Popen(['git', 'describe', '--always'], cwd=_SCAPY_PKG_DIR,\r\n```\r\n\r\nThanks,\r\nAdam Karpierz\r\n\n", "before_files": [{"content": "## This file is part of Scapy\n## See http://www.secdev.org/projects/scapy for more informations\n## Copyright (C) Philippe Biondi <[email protected]>\n## This program is published under a GPLv2 license\n\n\"\"\"\nScapy: create, send, sniff, dissect and manipulate network packets.\n\nUsable either from an interactive console or as a Python library.\nhttp://www.secdev.org/projects/scapy\n\"\"\"\n\nimport os\nimport re\nimport subprocess\n\n\n_SCAPY_PKG_DIR = os.path.dirname(__file__)\n\ndef _version_from_git_describe():\n \"\"\"\n Read the version from ``git describe``. 
It returns the latest tag with an\n optional suffix if the current directory is not exactly on the tag.\n\n Example::\n\n $ git describe --always\n v2.3.2-346-g164a52c075c8\n\n The tag prefix (``v``) and the git commit sha1 (``-g164a52c075c8``) are\n removed if present.\n\n If the current directory is not exactly on the tag, a ``.devN`` suffix is\n appended where N is the number of commits made after the last tag.\n\n Example::\n\n >>> _version_from_git_describe()\n '2.3.2.dev346'\n \"\"\"\n if not os.path.isdir(os.path.join(_SCAPY_PKG_DIR, '../.git')):\n raise ValueError('not in scapy git repo')\n\n p = subprocess.Popen(['git', 'describe', '--always'], cwd=_SCAPY_PKG_DIR,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n out, err = p.communicate()\n\n if p.returncode == 0:\n tag = out.decode().strip()\n match = re.match('^v?(.+?)-(\\\\d+)-g[a-f0-9]+$', tag)\n if match:\n # remove the 'v' prefix and add a '.devN' suffix\n return '%s.dev%s' % (match.group(1), match.group(2))\n else:\n # just remove the 'v' prefix\n return re.sub('^v', '', tag)\n else:\n raise subprocess.CalledProcessError(p.returncode, err)\n\ndef _version():\n version_file = os.path.join(_SCAPY_PKG_DIR, 'VERSION')\n try:\n tag = _version_from_git_describe()\n # successfully read the tag from git, write it in VERSION for\n # installation and/or archive generation.\n with open(version_file, 'w') as f:\n f.write(tag)\n return tag\n except:\n # failed to read the tag from git, try to read it from a VERSION file\n try:\n with open(version_file, 'r') as f:\n tag = f.read()\n return tag\n except:\n # Rely on git archive \"export-subst\" git attribute.\n # See 'man gitattributes' for more details.\n git_archive_id = '$Format:%h %d$'\n sha1 = git_archive_id.strip().split()[0]\n match = re.search('tag:(\\\\S+)', git_archive_id)\n if match:\n return \"git-archive.dev\" + match.group(1)\n elif sha1:\n return \"git-archive.dev\" + sha1\n else:\n return 'unknown.version'\n\nVERSION = _version()\n\nif __name__ == \"__main__\":\n from scapy.main import interact\n interact()\n", "path": "scapy/__init__.py"}], "after_files": [{"content": "## This file is part of Scapy\n## See http://www.secdev.org/projects/scapy for more informations\n## Copyright (C) Philippe Biondi <[email protected]>\n## This program is published under a GPLv2 license\n\n\"\"\"\nScapy: create, send, sniff, dissect and manipulate network packets.\n\nUsable either from an interactive console or as a Python library.\nhttp://www.secdev.org/projects/scapy\n\"\"\"\n\nimport os\nimport re\nimport subprocess\n\n\n_SCAPY_PKG_DIR = os.path.dirname(__file__)\n\ndef _version_from_git_describe():\n \"\"\"\n Read the version from ``git describe``. 
It returns the latest tag with an\n optional suffix if the current directory is not exactly on the tag.\n\n Example::\n\n $ git describe --always\n v2.3.2-346-g164a52c075c8\n\n The tag prefix (``v``) and the git commit sha1 (``-g164a52c075c8``) are\n removed if present.\n\n If the current directory is not exactly on the tag, a ``.devN`` suffix is\n appended where N is the number of commits made after the last tag.\n\n Example::\n\n >>> _version_from_git_describe()\n '2.3.2.dev346'\n \"\"\"\n if not os.path.isdir(os.path.join(os.path.dirname(_SCAPY_PKG_DIR), '.git')):\n raise ValueError('not in scapy git repo')\n\n p = subprocess.Popen(['git', 'describe', '--always'], cwd=_SCAPY_PKG_DIR,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n out, err = p.communicate()\n\n if p.returncode == 0:\n tag = out.decode().strip()\n match = re.match('^v?(.+?)-(\\\\d+)-g[a-f0-9]+$', tag)\n if match:\n # remove the 'v' prefix and add a '.devN' suffix\n return '%s.dev%s' % (match.group(1), match.group(2))\n else:\n # just remove the 'v' prefix\n return re.sub('^v', '', tag)\n else:\n raise subprocess.CalledProcessError(p.returncode, err)\n\ndef _version():\n version_file = os.path.join(_SCAPY_PKG_DIR, 'VERSION')\n try:\n tag = _version_from_git_describe()\n # successfully read the tag from git, write it in VERSION for\n # installation and/or archive generation.\n with open(version_file, 'w') as f:\n f.write(tag)\n return tag\n except:\n # failed to read the tag from git, try to read it from a VERSION file\n try:\n with open(version_file, 'r') as f:\n tag = f.read()\n return tag\n except:\n # Rely on git archive \"export-subst\" git attribute.\n # See 'man gitattributes' for more details.\n git_archive_id = '$Format:%h %d$'\n sha1 = git_archive_id.strip().split()[0]\n match = re.search('tag:(\\\\S+)', git_archive_id)\n if match:\n return \"git-archive.dev\" + match.group(1)\n elif sha1:\n return \"git-archive.dev\" + sha1\n else:\n return 'unknown.version'\n\nVERSION = _version()\n\nif __name__ == \"__main__\":\n from scapy.main import interact\n interact()\n", "path": "scapy/__init__.py"}]} | 1,645 | 149 |
gh_patches_debug_2167 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1122 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'module' object has no attribute 'TornadoAsyncNotifier'
Using Python 2.6.6 on CentOS, I'm unable to run the server with livereload:
```
[mkdocs@dev test-docs]$ python /home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/__main__.py serve --dev-addr=0.0.0.0:8080
WARNING: Support for Python 2.6 will be dropped in the 1.0.0 release of MkDocs
INFO - Building documentation...
INFO - Cleaning site directory
[I 161205 22:16:26 server:283] Serving on http://0.0.0.0:8080
[I 161205 22:16:26 handlers:60] Start watching changes
Traceback (most recent call last):
File "/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/__main__.py", line 227, in <module>
cli()
File "/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py", line 716, in __call__
return self.main(*args, **kwargs)
File "/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py", line 1060, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/__main__.py", line 127, in serve_command
livereload=livereload
File "/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/commands/serve.py", line 88, in serve
_livereload(host, port, config, builder, tempdir)
File "/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/commands/serve.py", line 27, in _livereload
server.serve(root=site_dir, host=host, port=int(port), restart_delay=0)
File "/home/mkdocs/.local/lib/python2.6/site-packages/livereload/server.py", line 300, in serve
LiveReloadHandler.start_tasks()
File "/home/mkdocs/.local/lib/python2.6/site-packages/livereload/handlers.py", line 61, in start_tasks
if not cls.watcher.start(cls.poll_tasks):
File "/home/mkdocs/.local/lib/python2.6/site-packages/livereload/watcher.py", line 160, in start
self.notifier = pyinotify.TornadoAsyncNotifier(
AttributeError: 'module' object has no attribute 'TornadoAsyncNotifier'
```
I can work around it by using `--no-livereload`, but is there a way to get livereload working?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10 PY26 = sys.version_info[:2] == (2, 6)
11
12
13 long_description = (
14 "MkDocs is a fast, simple and downright gorgeous static site generator "
15 "that's geared towards building project documentation. Documentation "
16 "source files are written in Markdown, and configured with a single YAML "
17 "configuration file."
18 )
19
20
21 def get_version(package):
22 """Return package version as listed in `__version__` in `init.py`."""
23 init_py = open(os.path.join(package, '__init__.py')).read()
24 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
25
26
27 def get_packages(package):
28 """Return root package and all sub-packages."""
29 return [dirpath
30 for dirpath, dirnames, filenames in os.walk(package)
31 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
32
33
34 if sys.argv[-1] == 'publish':
35 if os.system("pip freeze | grep wheel"):
36 print("wheel not installed.\nUse `pip install wheel`.\nExiting.")
37 sys.exit()
38 if os.system("pip freeze | grep twine"):
39 print("twine not installed.\nUse `pip install twine`.\nExiting.")
40 sys.exit()
41 os.system("python setup.py sdist bdist_wheel")
42 os.system("twine upload dist/*")
43 print("You probably want to also tag the version now:")
44 print(" git tag -a {0} -m 'version {0}'".format(get_version("mkdocs")))
45 print(" git push --tags")
46 sys.exit()
47
48
49 setup(
50 name="mkdocs",
51 version=get_version("mkdocs"),
52 url='http://www.mkdocs.org',
53 license='BSD',
54 description='Project documentation with Markdown.',
55 long_description=long_description,
56 author='Tom Christie',
57 author_email='[email protected]', # SEE NOTE BELOW (*)
58 packages=get_packages("mkdocs"),
59 include_package_data=True,
60 install_requires=[
61 'click>=3.3',
62 'Jinja2>=2.7.1',
63 'livereload>=2.3.2',
64 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',
65 'PyYAML>=3.10',
66 'tornado>=4.1',
67 ],
68 entry_points={
69 'console_scripts': [
70 'mkdocs = mkdocs.__main__:cli',
71 ],
72 'mkdocs.themes': [
73 'mkdocs = mkdocs.themes.mkdocs',
74 'readthedocs = mkdocs.themes.readthedocs',
75 ]
76 },
77 classifiers=[
78 'Development Status :: 5 - Production/Stable',
79 'Environment :: Console',
80 'Environment :: Web Environment',
81 'Intended Audience :: Developers',
82 'License :: OSI Approved :: BSD License',
83 'Operating System :: OS Independent',
84 'Programming Language :: Python',
85 'Programming Language :: Python :: 2',
86 'Programming Language :: Python :: 2.6',
87 'Programming Language :: Python :: 2.7',
88 'Programming Language :: Python :: 3',
89 'Programming Language :: Python :: 3.3',
90 'Programming Language :: Python :: 3.4',
91 'Programming Language :: Python :: 3.5',
92 "Programming Language :: Python :: Implementation :: CPython",
93 'Topic :: Documentation',
94 'Topic :: Text Processing',
95 ],
96 zip_safe=False,
97 )
98
99 # (*) Please direct queries to the discussion group:
100 # https://groups.google.com/forum/#!forum/mkdocs
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,7 +60,7 @@
install_requires=[
'click>=3.3',
'Jinja2>=2.7.1',
- 'livereload>=2.3.2',
+ 'livereload>=2.5.1',
'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',
'PyYAML>=3.10',
'tornado>=4.1',
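The patch only raises the minimum livereload version; per the fix, 2.5.1 ships a watcher that no longer trips over pyinotify builds lacking `TornadoAsyncNotifier`. A sketch of how one could assert that floor at runtime with setuptools' `pkg_resources` (not part of the patch, just a debugging aid):

```python
import pkg_resources

# Raises pkg_resources.VersionConflict if the installed version is older:
pkg_resources.require("livereload>=2.5.1")
print(pkg_resources.get_distribution("livereload").version)
```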
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,7 +60,7 @@\n install_requires=[\n 'click>=3.3',\n 'Jinja2>=2.7.1',\n- 'livereload>=2.3.2',\n+ 'livereload>=2.5.1',\n 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=4.1',\n", "issue": "AttributeError: 'module' object has no attribute 'TornadoAsyncNotifier'\nUsing Python 2.6.6 on CentOS, I'm unable to run the server with livereload:\r\n\r\n```\r\n[mkdocs@dev test-docs]$ python /home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/__main__.py serve --dev-addr=0.0.0.0:8080\r\nWARNING: Support for Python 2.6 will be dropped in the 1.0.0 release of MkDocs\r\n\r\nINFO - Building documentation... \r\nINFO - Cleaning site directory \r\n[I 161205 22:16:26 server:283] Serving on http://0.0.0.0:8080\r\n[I 161205 22:16:26 handlers:60] Start watching changes\r\nTraceback (most recent call last):\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/__main__.py\", line 227, in <module>\r\n cli()\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py\", line 716, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py\", line 696, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py\", line 1060, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py\", line 889, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/click/core.py\", line 534, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/__main__.py\", line 127, in serve_command\r\n livereload=livereload\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/commands/serve.py\", line 88, in serve\r\n _livereload(host, port, config, builder, tempdir)\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/mkdocs/commands/serve.py\", line 27, in _livereload\r\n server.serve(root=site_dir, host=host, port=int(port), restart_delay=0)\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/livereload/server.py\", line 300, in serve\r\n LiveReloadHandler.start_tasks()\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/livereload/handlers.py\", line 61, in start_tasks\r\n if not cls.watcher.start(cls.poll_tasks):\r\n File \"/home/mkdocs/.local/lib/python2.6/site-packages/livereload/watcher.py\", line 160, in start\r\n self.notifier = pyinotify.TornadoAsyncNotifier(\r\nAttributeError: 'module' object has no attribute 'TornadoAsyncNotifier'\r\n```\r\n\r\nI can workaround by using `--no-livereload`, but is there a way to get livereload working?\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\nPY26 = sys.version_info[:2] == (2, 6)\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. 
Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='http://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n include_package_data=True,\n install_requires=[\n 'click>=3.3',\n 'Jinja2>=2.7.1',\n 'livereload>=2.3.2',\n 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=4.1',\n ],\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n ],\n 'mkdocs.themes': [\n 'mkdocs = mkdocs.themes.mkdocs',\n 'readthedocs = mkdocs.themes.readthedocs',\n ]\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n \"Programming Language :: Python :: Implementation :: CPython\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False,\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\nPY26 = sys.version_info[:2] == (2, 6)\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. 
Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='http://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n include_package_data=True,\n install_requires=[\n 'click>=3.3',\n 'Jinja2>=2.7.1',\n 'livereload>=2.5.1',\n 'Markdown>=2.3.1,<2.5' if PY26 else 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=4.1',\n ],\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n ],\n 'mkdocs.themes': [\n 'mkdocs = mkdocs.themes.mkdocs',\n 'readthedocs = mkdocs.themes.readthedocs',\n ]\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n \"Programming Language :: Python :: Implementation :: CPython\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False,\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}]} | 2,034 | 130 |
gh_patches_debug_475 | rasdani/github-patches | git_diff | cupy__cupy-545 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cupy.broadcast_arrays returns tuple, while numpy returns list
CuPy: current master (e51b311)
```
>>> cupy.broadcast_arrays(cupy.ones((1,)), cupy.ones((2)))
(array([ 1., 1.]), array([ 1., 1.]))
>>> numpy.broadcast_arrays(numpy.ones((1,)), numpy.ones((2)))
[array([ 1., 1.]), array([ 1., 1.])]
```
--- END ISSUE ---
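For context, a minimal sketch of the behavior the fix should restore (assuming a CUDA-capable environment with `cupy` installed; the shapes are illustrative):
```python
import cupy

# After the fix, broadcast_arrays matches numpy and returns a list,
# so list-only operations such as item assignment work as expected.
arrays = cupy.broadcast_arrays(cupy.ones((1,)), cupy.ones((2,)))
assert isinstance(arrays, list)   # a tuple before the patch
arrays[0] = arrays[0] * 2         # would raise TypeError on a tuple
```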
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/manipulation/dims.py`
Content:
```
1 import six
2
3 import cupy
4 from cupy import core
5
6
7 zip_longest = six.moves.zip_longest
8 six_zip = six.moves.zip
9
10
11 # Shape map for atleast_nd functions
12 # (minimum dimension, input dimension) -> (output shape)
13 _atleast_nd_shape_map = {
14 (1, 0): lambda shape: (1,),
15 (2, 0): lambda shape: (1, 1),
16 (2, 1): lambda shape: (1,) + shape,
17 (3, 0): lambda shape: (1, 1, 1),
18 (3, 1): lambda shape: (1,) + shape + (1,),
19 (3, 2): lambda shape: shape + (1,),
20 }
21
22
23 def _atleast_nd_helper(n, arys):
24 """Helper function for atleast_nd functions."""
25
26 res = []
27 for a in arys:
28 if isinstance(a, cupy.ndarray):
29 if a.ndim < n:
30 new_shape = _atleast_nd_shape_map[(n, a.ndim)](a.shape)
31 a = a.reshape(*new_shape)
32 else:
33 raise TypeError('Unsupported type {}'.format(type(a)))
34 res.append(a)
35
36 if len(res) == 1:
37 res, = res
38 return res
39
40
41 def atleast_1d(*arys):
42 """Converts arrays to arrays with dimensions >= 1.
43
44 Args:
45 arys (tuple of arrays): Arrays to be converted. All arguments must be
46 :class:`cupy.ndarray` objects. Only zero-dimensional array is
47 affected.
48
49 Returns:
50 If there are only one input, then it returns its converted version.
51 Otherwise, it returns a list of converted arrays.
52
53 .. seealso:: :func:`numpy.atleast_1d`
54
55 """
56 return _atleast_nd_helper(1, arys)
57
58
59 def atleast_2d(*arys):
60 """Converts arrays to arrays with dimensions >= 2.
61
62 If an input array has dimensions less than two, then this function inserts
63 new axes at the head of dimensions to make it have two dimensions.
64
65 Args:
66 arys (tuple of arrays): Arrays to be converted. All arguments must be
67 :class:`cupy.ndarray` objects.
68
69 Returns:
70 If there are only one input, then it returns its converted version.
71 Otherwise, it returns a list of converted arrays.
72
73 .. seealso:: :func:`numpy.atleast_2d`
74
75 """
76 return _atleast_nd_helper(2, arys)
77
78
79 def atleast_3d(*arys):
80 """Converts arrays to arrays with dimensions >= 3.
81
82 If an input array has dimensions less than three, then this function
83 inserts new axes to make it have three dimensions. The place of the new
84 axes are following:
85
86 - If its shape is ``()``, then the shape of output is ``(1, 1, 1)``.
87 - If its shape is ``(N,)``, then the shape of output is ``(1, N, 1)``.
88 - If its shape is ``(M, N)``, then the shape of output is ``(M, N, 1)``.
89 - Otherwise, the output is the input array itself.
90
91 Args:
92 arys (tuple of arrays): Arrays to be converted. All arguments must be
93 :class:`cupy.ndarray` objects.
94
95 Returns:
96 If there are only one input, then it returns its converted version.
97 Otherwise, it returns a list of converted arrays.
98
99 .. seealso:: :func:`numpy.atleast_3d`
100
101 """
102 return _atleast_nd_helper(3, arys)
103
104
105 broadcast = core.broadcast
106
107
108 def broadcast_arrays(*args):
109 """Broadcasts given arrays.
110
111 Args:
112 args (tuple of arrays): Arrays to broadcast for each other.
113
114 Returns:
115 list: A list of broadcasted arrays.
116
117 .. seealso:: :func:`numpy.broadcast_arrays`
118
119 """
120 return broadcast(*args).values
121
122
123 def broadcast_to(array, shape):
124 """Broadcast an array to a given shape.
125
126 Args:
127 array (cupy.ndarray): Array to broadcast.
128 shape (tuple of int): The shape of the desired array.
129
130 Returns:
131 cupy.ndarray: Broadcasted view.
132
133 .. seealso:: :func:`numpy.broadcast_to`
134
135 """
136 return core.broadcast_to(array, shape)
137
138
139 def expand_dims(a, axis):
140 """Expands given arrays.
141
142 Args:
143 a (cupy.ndarray): Array to be expanded.
144 axis (int): Position where new axis is to be inserted.
145
146 Returns:
147 cupy.ndarray: The number of dimensions is one greater than that of
148 the input array.
149
150 .. seealso:: :func:`numpy.expand_dims`
151
152 """
153 # TODO(okuta): check type
154 shape = a.shape
155 if axis < 0:
156 axis = axis + len(shape) + 1
157 return a.reshape(shape[:axis] + (1,) + shape[axis:])
158
159
160 def squeeze(a, axis=None):
161 """Removes size-one axes from the shape of an array.
162
163 Args:
164 a (cupy.ndarray): Array to be reshaped.
165 axis (int or tuple of ints): Axes to be removed. This function removes
166 all size-one axes by default. If one of the specified axes is not
167 of size one, an exception is raised.
168
169 Returns:
170 cupy.ndarray: An array without (specified) size-one axes.
171
172 .. seealso:: :func:`numpy.squeeze`
173
174 """
175 # TODO(okuta): check type
176 return a.squeeze(axis)
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/manipulation/dims.py b/cupy/manipulation/dims.py
--- a/cupy/manipulation/dims.py
+++ b/cupy/manipulation/dims.py
@@ -117,7 +117,7 @@
.. seealso:: :func:`numpy.broadcast_arrays`
"""
- return broadcast(*args).values
+ return list(broadcast(*args).values)
def broadcast_to(array, shape):
| {"golden_diff": "diff --git a/cupy/manipulation/dims.py b/cupy/manipulation/dims.py\n--- a/cupy/manipulation/dims.py\n+++ b/cupy/manipulation/dims.py\n@@ -117,7 +117,7 @@\n .. seealso:: :func:`numpy.broadcast_arrays`\n \n \"\"\"\n- return broadcast(*args).values\n+ return list(broadcast(*args).values)\n \n \n def broadcast_to(array, shape):\n", "issue": "cupy.broadcast_arrays returns tuple, while numpy returns list\nCuPy: current master (e51b311)\r\n```\r\n>>> cupy.broadcast_arrays(cupy.ones((1,)), cupy.ones((2)))\r\n(array([ 1., 1.]), array([ 1., 1.]))\r\n>>> numpy.broadcast_arrays(numpy.ones((1,)), numpy.ones((2)))\r\n[array([ 1., 1.]), array([ 1., 1.])]\r\n```\n", "before_files": [{"content": "import six\n\nimport cupy\nfrom cupy import core\n\n\nzip_longest = six.moves.zip_longest\nsix_zip = six.moves.zip\n\n\n# Shape map for atleast_nd functions\n# (minimum dimension, input dimension) -> (output shape)\n_atleast_nd_shape_map = {\n (1, 0): lambda shape: (1,),\n (2, 0): lambda shape: (1, 1),\n (2, 1): lambda shape: (1,) + shape,\n (3, 0): lambda shape: (1, 1, 1),\n (3, 1): lambda shape: (1,) + shape + (1,),\n (3, 2): lambda shape: shape + (1,),\n}\n\n\ndef _atleast_nd_helper(n, arys):\n \"\"\"Helper function for atleast_nd functions.\"\"\"\n\n res = []\n for a in arys:\n if isinstance(a, cupy.ndarray):\n if a.ndim < n:\n new_shape = _atleast_nd_shape_map[(n, a.ndim)](a.shape)\n a = a.reshape(*new_shape)\n else:\n raise TypeError('Unsupported type {}'.format(type(a)))\n res.append(a)\n\n if len(res) == 1:\n res, = res\n return res\n\n\ndef atleast_1d(*arys):\n \"\"\"Converts arrays to arrays with dimensions >= 1.\n\n Args:\n arys (tuple of arrays): Arrays to be converted. All arguments must be\n :class:`cupy.ndarray` objects. Only zero-dimensional array is\n affected.\n\n Returns:\n If there are only one input, then it returns its converted version.\n Otherwise, it returns a list of converted arrays.\n\n .. seealso:: :func:`numpy.atleast_1d`\n\n \"\"\"\n return _atleast_nd_helper(1, arys)\n\n\ndef atleast_2d(*arys):\n \"\"\"Converts arrays to arrays with dimensions >= 2.\n\n If an input array has dimensions less than two, then this function inserts\n new axes at the head of dimensions to make it have two dimensions.\n\n Args:\n arys (tuple of arrays): Arrays to be converted. All arguments must be\n :class:`cupy.ndarray` objects.\n\n Returns:\n If there are only one input, then it returns its converted version.\n Otherwise, it returns a list of converted arrays.\n\n .. seealso:: :func:`numpy.atleast_2d`\n\n \"\"\"\n return _atleast_nd_helper(2, arys)\n\n\ndef atleast_3d(*arys):\n \"\"\"Converts arrays to arrays with dimensions >= 3.\n\n If an input array has dimensions less than three, then this function\n inserts new axes to make it have three dimensions. The place of the new\n axes are following:\n\n - If its shape is ``()``, then the shape of output is ``(1, 1, 1)``.\n - If its shape is ``(N,)``, then the shape of output is ``(1, N, 1)``.\n - If its shape is ``(M, N)``, then the shape of output is ``(M, N, 1)``.\n - Otherwise, the output is the input array itself.\n\n Args:\n arys (tuple of arrays): Arrays to be converted. All arguments must be\n :class:`cupy.ndarray` objects.\n\n Returns:\n If there are only one input, then it returns its converted version.\n Otherwise, it returns a list of converted arrays.\n\n .. 
seealso:: :func:`numpy.atleast_3d`\n\n \"\"\"\n return _atleast_nd_helper(3, arys)\n\n\nbroadcast = core.broadcast\n\n\ndef broadcast_arrays(*args):\n \"\"\"Broadcasts given arrays.\n\n Args:\n args (tuple of arrays): Arrays to broadcast for each other.\n\n Returns:\n list: A list of broadcasted arrays.\n\n .. seealso:: :func:`numpy.broadcast_arrays`\n\n \"\"\"\n return broadcast(*args).values\n\n\ndef broadcast_to(array, shape):\n \"\"\"Broadcast an array to a given shape.\n\n Args:\n array (cupy.ndarray): Array to broadcast.\n shape (tuple of int): The shape of the desired array.\n\n Returns:\n cupy.ndarray: Broadcasted view.\n\n .. seealso:: :func:`numpy.broadcast_to`\n\n \"\"\"\n return core.broadcast_to(array, shape)\n\n\ndef expand_dims(a, axis):\n \"\"\"Expands given arrays.\n\n Args:\n a (cupy.ndarray): Array to be expanded.\n axis (int): Position where new axis is to be inserted.\n\n Returns:\n cupy.ndarray: The number of dimensions is one greater than that of\n the input array.\n\n .. seealso:: :func:`numpy.expand_dims`\n\n \"\"\"\n # TODO(okuta): check type\n shape = a.shape\n if axis < 0:\n axis = axis + len(shape) + 1\n return a.reshape(shape[:axis] + (1,) + shape[axis:])\n\n\ndef squeeze(a, axis=None):\n \"\"\"Removes size-one axes from the shape of an array.\n\n Args:\n a (cupy.ndarray): Array to be reshaped.\n axis (int or tuple of ints): Axes to be removed. This function removes\n all size-one axes by default. If one of the specified axes is not\n of size one, an exception is raised.\n\n Returns:\n cupy.ndarray: An array without (specified) size-one axes.\n\n .. seealso:: :func:`numpy.squeeze`\n\n \"\"\"\n # TODO(okuta): check type\n return a.squeeze(axis)\n", "path": "cupy/manipulation/dims.py"}], "after_files": [{"content": "import six\n\nimport cupy\nfrom cupy import core\n\n\nzip_longest = six.moves.zip_longest\nsix_zip = six.moves.zip\n\n\n# Shape map for atleast_nd functions\n# (minimum dimension, input dimension) -> (output shape)\n_atleast_nd_shape_map = {\n (1, 0): lambda shape: (1,),\n (2, 0): lambda shape: (1, 1),\n (2, 1): lambda shape: (1,) + shape,\n (3, 0): lambda shape: (1, 1, 1),\n (3, 1): lambda shape: (1,) + shape + (1,),\n (3, 2): lambda shape: shape + (1,),\n}\n\n\ndef _atleast_nd_helper(n, arys):\n \"\"\"Helper function for atleast_nd functions.\"\"\"\n\n res = []\n for a in arys:\n if isinstance(a, cupy.ndarray):\n if a.ndim < n:\n new_shape = _atleast_nd_shape_map[(n, a.ndim)](a.shape)\n a = a.reshape(*new_shape)\n else:\n raise TypeError('Unsupported type {}'.format(type(a)))\n res.append(a)\n\n if len(res) == 1:\n res, = res\n return res\n\n\ndef atleast_1d(*arys):\n \"\"\"Converts arrays to arrays with dimensions >= 1.\n\n Args:\n arys (tuple of arrays): Arrays to be converted. All arguments must be\n :class:`cupy.ndarray` objects. Only zero-dimensional array is\n affected.\n\n Returns:\n If there are only one input, then it returns its converted version.\n Otherwise, it returns a list of converted arrays.\n\n .. seealso:: :func:`numpy.atleast_1d`\n\n \"\"\"\n return _atleast_nd_helper(1, arys)\n\n\ndef atleast_2d(*arys):\n \"\"\"Converts arrays to arrays with dimensions >= 2.\n\n If an input array has dimensions less than two, then this function inserts\n new axes at the head of dimensions to make it have two dimensions.\n\n Args:\n arys (tuple of arrays): Arrays to be converted. 
All arguments must be\n :class:`cupy.ndarray` objects.\n\n Returns:\n If there are only one input, then it returns its converted version.\n Otherwise, it returns a list of converted arrays.\n\n .. seealso:: :func:`numpy.atleast_2d`\n\n \"\"\"\n return _atleast_nd_helper(2, arys)\n\n\ndef atleast_3d(*arys):\n \"\"\"Converts arrays to arrays with dimensions >= 3.\n\n If an input array has dimensions less than three, then this function\n inserts new axes to make it have three dimensions. The place of the new\n axes are following:\n\n - If its shape is ``()``, then the shape of output is ``(1, 1, 1)``.\n - If its shape is ``(N,)``, then the shape of output is ``(1, N, 1)``.\n - If its shape is ``(M, N)``, then the shape of output is ``(M, N, 1)``.\n - Otherwise, the output is the input array itself.\n\n Args:\n arys (tuple of arrays): Arrays to be converted. All arguments must be\n :class:`cupy.ndarray` objects.\n\n Returns:\n If there are only one input, then it returns its converted version.\n Otherwise, it returns a list of converted arrays.\n\n .. seealso:: :func:`numpy.atleast_3d`\n\n \"\"\"\n return _atleast_nd_helper(3, arys)\n\n\nbroadcast = core.broadcast\n\n\ndef broadcast_arrays(*args):\n \"\"\"Broadcasts given arrays.\n\n Args:\n args (tuple of arrays): Arrays to broadcast for each other.\n\n Returns:\n list: A list of broadcasted arrays.\n\n .. seealso:: :func:`numpy.broadcast_arrays`\n\n \"\"\"\n return list(broadcast(*args).values)\n\n\ndef broadcast_to(array, shape):\n \"\"\"Broadcast an array to a given shape.\n\n Args:\n array (cupy.ndarray): Array to broadcast.\n shape (tuple of int): The shape of the desired array.\n\n Returns:\n cupy.ndarray: Broadcasted view.\n\n .. seealso:: :func:`numpy.broadcast_to`\n\n \"\"\"\n return core.broadcast_to(array, shape)\n\n\ndef expand_dims(a, axis):\n \"\"\"Expands given arrays.\n\n Args:\n a (cupy.ndarray): Array to be expanded.\n axis (int): Position where new axis is to be inserted.\n\n Returns:\n cupy.ndarray: The number of dimensions is one greater than that of\n the input array.\n\n .. seealso:: :func:`numpy.expand_dims`\n\n \"\"\"\n # TODO(okuta): check type\n shape = a.shape\n if axis < 0:\n axis = axis + len(shape) + 1\n return a.reshape(shape[:axis] + (1,) + shape[axis:])\n\n\ndef squeeze(a, axis=None):\n \"\"\"Removes size-one axes from the shape of an array.\n\n Args:\n a (cupy.ndarray): Array to be reshaped.\n axis (int or tuple of ints): Axes to be removed. This function removes\n all size-one axes by default. If one of the specified axes is not\n of size one, an exception is raised.\n\n Returns:\n cupy.ndarray: An array without (specified) size-one axes.\n\n .. seealso:: :func:`numpy.squeeze`\n\n \"\"\"\n # TODO(okuta): check type\n return a.squeeze(axis)\n", "path": "cupy/manipulation/dims.py"}]} | 2,031 | 101 |
gh_patches_debug_41105 | rasdani/github-patches | git_diff | aws__aws-cli-4159 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws eks get-token doesn't work with --profile
When I run `aws --profile profilename eks get-token --cluster-name eksClusterName` I receive the following error: `Unable to locate credentials. You can configure credentials by running "aws configure".`. However, when I set the AWS_DEFAULT_PROFILE environment variable to the profile I want, the command works fine.
I verified other eks commands work fine with the --profile option. For example: `aws --profile profilename eks describe-cluster --name eksClusterName` works as expected.
aws --version
aws-cli/1.16.160 Python/3.7.0 Darwin/18.2.0 botocore/1.12.150
--- END ISSUE ---
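For context, a minimal sketch of the suspected root cause (hypothetical names; `cli_session` stands for the session the CLI builds from `--profile`): creating a fresh botocore session discards profile-derived credentials, whereas reusing the CLI's own session keeps them.
```python
import botocore.session

# Buggy pattern: a brand-new session only sees environment variables
# such as AWS_DEFAULT_PROFILE, never the parsed --profile value.
fresh_session = botocore.session.get_session()

# Fixed pattern (sketch): thread the CLI's configured session through
# instead, so credentials resolved from --profile are reused.
def get_session(cli_session, region_name=None):
    if region_name is not None:
        cli_session.set_config_variable('region', region_name)
    return cli_session
```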
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/eks/get_token.py`
Content:
```
1 # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import base64
14 import botocore
15 import json
16
17 from botocore import session
18 from botocore.signers import RequestSigner
19 from botocore.model import ServiceId
20
21 from awscli.customizations.commands import BasicCommand
22 from awscli.customizations.utils import uni_print
23
24 AUTH_SERVICE = "sts"
25 AUTH_COMMAND = "GetCallerIdentity"
26 AUTH_API_VERSION = "2011-06-15"
27 AUTH_SIGNING_VERSION = "v4"
28
29 # Presigned url timeout in seconds
30 URL_TIMEOUT = 60
31
32 TOKEN_PREFIX = 'k8s-aws-v1.'
33
34 CLUSTER_NAME_HEADER = 'x-k8s-aws-id'
35
36
37 class GetTokenCommand(BasicCommand):
38 NAME = 'get-token'
39
40 DESCRIPTION = ("Get a token for authentication with an Amazon EKS cluster. "
41 "This can be used as an alternative to the "
42 "aws-iam-authenticator.")
43
44 ARG_TABLE = [
45 {
46 'name': 'cluster-name',
47 'help_text': ("Specify the name of the Amazon EKS cluster to create a token for."),
48 'required': True
49 },
50 {
51 'name': 'role-arn',
52 'help_text': ("Assume this role for credentials when signing the token."),
53 'required': False
54 }
55 ]
56
57 def _run_main(self, parsed_args, parsed_globals):
58 token_generator = TokenGenerator(parsed_globals.region)
59 token = token_generator.get_token(
60 parsed_args.cluster_name,
61 parsed_args.role_arn
62 )
63
64 full_object = {
65 "kind": "ExecCredential",
66 "apiVersion": "client.authentication.k8s.io/v1alpha1",
67 "spec": {},
68 "status": {
69 "token": token
70 }
71 }
72
73 uni_print(json.dumps(full_object))
74 uni_print('\n')
75
76 class TokenGenerator(object):
77 def __init__(self, region_name, session_handler=None):
78 if session_handler is None:
79 session_handler = SessionHandler()
80 self._session_handler = session_handler
81 self._region_name = region_name
82
83 def get_token(self, cluster_name, role_arn):
84 """ Generate a presigned url token to pass to kubectl. """
85 url = self._get_presigned_url(cluster_name, role_arn)
86 token = TOKEN_PREFIX + base64.urlsafe_b64encode(url.encode('utf-8')).decode('utf-8').rstrip('=')
87 return token
88
89 def _get_presigned_url(self, cluster_name, role_arn):
90 session = self._session_handler.get_session(
91 self._region_name,
92 role_arn
93 )
94
95 if self._region_name is None:
96 self._region_name = session.get_config_variable('region')
97
98 loader = botocore.loaders.create_loader()
99 data = loader.load_data("endpoints")
100 endpoint_resolver = botocore.regions.EndpointResolver(data)
101 endpoint = endpoint_resolver.construct_endpoint(
102 AUTH_SERVICE,
103 self._region_name
104 )
105 signer = RequestSigner(
106 ServiceId(AUTH_SERVICE),
107 self._region_name,
108 AUTH_SERVICE,
109 AUTH_SIGNING_VERSION,
110 session.get_credentials(),
111 session.get_component('event_emitter')
112 )
113 action_params='Action=' + AUTH_COMMAND + '&Version=' + AUTH_API_VERSION
114 params = {
115 'method': 'GET',
116 'url': 'https://' + endpoint["hostname"] + '/?' + action_params,
117 'body': {},
118 'headers': {CLUSTER_NAME_HEADER: cluster_name},
119 'context': {}
120 }
121
122 url=signer.generate_presigned_url(
123 params,
124 region_name=endpoint["credentialScope"]["region"],
125 operation_name='',
126 expires_in=URL_TIMEOUT
127 )
128 return url
129
130 class SessionHandler(object):
131 def get_session(self, region_name, role_arn):
132 """
133 Assumes the given role and returns a session object assuming said role.
134 """
135 session = botocore.session.get_session()
136 if region_name is not None:
137 session.set_config_variable('region', region_name)
138
139 if role_arn is not None:
140 sts = session.create_client(AUTH_SERVICE, region_name=region_name)
141 credentials_dict = sts.assume_role(
142 RoleArn=role_arn,
143 RoleSessionName='EKSGetTokenAuth'
144 )['Credentials']
145
146 session.set_credentials(credentials_dict['AccessKeyId'],
147 credentials_dict['SecretAccessKey'],
148 credentials_dict['SessionToken'])
149 return session
150 else:
151 return session
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awscli/customizations/eks/get_token.py b/awscli/customizations/eks/get_token.py
--- a/awscli/customizations/eks/get_token.py
+++ b/awscli/customizations/eks/get_token.py
@@ -54,11 +54,13 @@
}
]
- def _run_main(self, parsed_args, parsed_globals):
- token_generator = TokenGenerator(parsed_globals.region)
+ def _run_main(self, parsed_args, parsed_globals, token_generator=None):
+ if token_generator is None:
+ token_generator = TokenGenerator(self._session)
token = token_generator.get_token(
parsed_args.cluster_name,
- parsed_args.role_arn
+ parsed_args.role_arn,
+ parsed_globals.region,
)
full_object = {
@@ -74,37 +76,34 @@
uni_print('\n')
class TokenGenerator(object):
- def __init__(self, region_name, session_handler=None):
- if session_handler is None:
- session_handler = SessionHandler()
- self._session_handler = session_handler
- self._region_name = region_name
+ def __init__(self, botocore_session):
+ self._session_handler = SessionHandler(botocore_session)
- def get_token(self, cluster_name, role_arn):
+ def get_token(self, cluster_name, role_arn, region_name=None):
""" Generate a presigned url token to pass to kubectl. """
- url = self._get_presigned_url(cluster_name, role_arn)
+ url = self._get_presigned_url(cluster_name, role_arn, region_name)
token = TOKEN_PREFIX + base64.urlsafe_b64encode(url.encode('utf-8')).decode('utf-8').rstrip('=')
return token
- def _get_presigned_url(self, cluster_name, role_arn):
+ def _get_presigned_url(self, cluster_name, role_arn, region_name=None):
session = self._session_handler.get_session(
- self._region_name,
+ region_name,
role_arn
)
- if self._region_name is None:
- self._region_name = session.get_config_variable('region')
+ if region_name is None:
+ region_name = session.get_config_variable('region')
loader = botocore.loaders.create_loader()
data = loader.load_data("endpoints")
endpoint_resolver = botocore.regions.EndpointResolver(data)
endpoint = endpoint_resolver.construct_endpoint(
AUTH_SERVICE,
- self._region_name
+ region_name
)
signer = RequestSigner(
ServiceId(AUTH_SERVICE),
- self._region_name,
+ region_name,
AUTH_SERVICE,
AUTH_SIGNING_VERSION,
session.get_credentials(),
@@ -128,11 +127,14 @@
return url
class SessionHandler(object):
+ def __init__(self, botocore_session):
+ self._session = botocore_session
+
def get_session(self, region_name, role_arn):
"""
Assumes the given role and returns a session object assuming said role.
"""
- session = botocore.session.get_session()
+ session = self._session
if region_name is not None:
session.set_config_variable('region', region_name)
| {"golden_diff": "diff --git a/awscli/customizations/eks/get_token.py b/awscli/customizations/eks/get_token.py\n--- a/awscli/customizations/eks/get_token.py\n+++ b/awscli/customizations/eks/get_token.py\n@@ -54,11 +54,13 @@\n }\n ]\n \n- def _run_main(self, parsed_args, parsed_globals):\n- token_generator = TokenGenerator(parsed_globals.region)\n+ def _run_main(self, parsed_args, parsed_globals, token_generator=None):\n+ if token_generator is None:\n+ token_generator = TokenGenerator(self._session)\n token = token_generator.get_token(\n parsed_args.cluster_name,\n- parsed_args.role_arn\n+ parsed_args.role_arn,\n+ parsed_globals.region,\n )\n \n full_object = {\n@@ -74,37 +76,34 @@\n uni_print('\\n')\n \n class TokenGenerator(object):\n- def __init__(self, region_name, session_handler=None):\n- if session_handler is None:\n- session_handler = SessionHandler()\n- self._session_handler = session_handler\n- self._region_name = region_name\n+ def __init__(self, botocore_session):\n+ self._session_handler = SessionHandler(botocore_session)\n \n- def get_token(self, cluster_name, role_arn):\n+ def get_token(self, cluster_name, role_arn, region_name=None):\n \"\"\" Generate a presigned url token to pass to kubectl. \"\"\"\n- url = self._get_presigned_url(cluster_name, role_arn)\n+ url = self._get_presigned_url(cluster_name, role_arn, region_name)\n token = TOKEN_PREFIX + base64.urlsafe_b64encode(url.encode('utf-8')).decode('utf-8').rstrip('=')\n return token\n \n- def _get_presigned_url(self, cluster_name, role_arn):\n+ def _get_presigned_url(self, cluster_name, role_arn, region_name=None):\n session = self._session_handler.get_session(\n- self._region_name,\n+ region_name,\n role_arn\n )\n \n- if self._region_name is None:\n- self._region_name = session.get_config_variable('region')\n+ if region_name is None:\n+ region_name = session.get_config_variable('region')\n \n loader = botocore.loaders.create_loader()\n data = loader.load_data(\"endpoints\")\n endpoint_resolver = botocore.regions.EndpointResolver(data)\n endpoint = endpoint_resolver.construct_endpoint(\n AUTH_SERVICE,\n- self._region_name\n+ region_name\n )\n signer = RequestSigner(\n ServiceId(AUTH_SERVICE),\n- self._region_name,\n+ region_name,\n AUTH_SERVICE,\n AUTH_SIGNING_VERSION,\n session.get_credentials(),\n@@ -128,11 +127,14 @@\n return url\n \n class SessionHandler(object):\n+ def __init__(self, botocore_session):\n+ self._session = botocore_session\n+\n def get_session(self, region_name, role_arn):\n \"\"\"\n Assumes the given role and returns a session object assuming said role.\n \"\"\"\n- session = botocore.session.get_session()\n+ session = self._session\n if region_name is not None:\n session.set_config_variable('region', region_name)\n", "issue": "aws eks get-token doesn't work with --profile\nWhen going to use `aws --profile profilename eks get-token --cluster-name eksClusterName` I receive the following error `Unable to locate credentials. You can configure credentials by running \"aws configure\".`. However, when I set the ENVVAR of AWS_DEFAULT_PROFILE to the profile I want, the command works fine.\r\n\r\nI verified other eks commands work fine with the --profile option. For example: `aws --profile profilename eks describe-cluster --name eksClusterName` works as expected.\r\n\r\n\r\naws --version \r\naws-cli/1.16.160 Python/3.7.0 Darwin/18.2.0 botocore/1.12.150\n", "before_files": [{"content": "# Copyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport base64\nimport botocore\nimport json\n\nfrom botocore import session\nfrom botocore.signers import RequestSigner\nfrom botocore.model import ServiceId\n\nfrom awscli.customizations.commands import BasicCommand\nfrom awscli.customizations.utils import uni_print\n\nAUTH_SERVICE = \"sts\"\nAUTH_COMMAND = \"GetCallerIdentity\"\nAUTH_API_VERSION = \"2011-06-15\"\nAUTH_SIGNING_VERSION = \"v4\"\n\n# Presigned url timeout in seconds\nURL_TIMEOUT = 60\n\nTOKEN_PREFIX = 'k8s-aws-v1.'\n\nCLUSTER_NAME_HEADER = 'x-k8s-aws-id'\n\n\nclass GetTokenCommand(BasicCommand):\n NAME = 'get-token'\n\n DESCRIPTION = (\"Get a token for authentication with an Amazon EKS cluster. \"\n \"This can be used as an alternative to the \"\n \"aws-iam-authenticator.\")\n\n ARG_TABLE = [\n {\n 'name': 'cluster-name',\n 'help_text': (\"Specify the name of the Amazon EKS cluster to create a token for.\"),\n 'required': True\n },\n {\n 'name': 'role-arn',\n 'help_text': (\"Assume this role for credentials when signing the token.\"),\n 'required': False\n }\n ]\n\n def _run_main(self, parsed_args, parsed_globals):\n token_generator = TokenGenerator(parsed_globals.region)\n token = token_generator.get_token(\n parsed_args.cluster_name,\n parsed_args.role_arn\n )\n\n full_object = {\n \"kind\": \"ExecCredential\",\n \"apiVersion\": \"client.authentication.k8s.io/v1alpha1\",\n \"spec\": {},\n \"status\": {\n \"token\": token\n }\n }\n\n uni_print(json.dumps(full_object))\n uni_print('\\n')\n\nclass TokenGenerator(object):\n def __init__(self, region_name, session_handler=None):\n if session_handler is None:\n session_handler = SessionHandler()\n self._session_handler = session_handler\n self._region_name = region_name\n\n def get_token(self, cluster_name, role_arn):\n \"\"\" Generate a presigned url token to pass to kubectl. \"\"\"\n url = self._get_presigned_url(cluster_name, role_arn)\n token = TOKEN_PREFIX + base64.urlsafe_b64encode(url.encode('utf-8')).decode('utf-8').rstrip('=')\n return token\n\n def _get_presigned_url(self, cluster_name, role_arn):\n session = self._session_handler.get_session(\n self._region_name,\n role_arn\n )\n\n if self._region_name is None:\n self._region_name = session.get_config_variable('region')\n\n loader = botocore.loaders.create_loader()\n data = loader.load_data(\"endpoints\")\n endpoint_resolver = botocore.regions.EndpointResolver(data)\n endpoint = endpoint_resolver.construct_endpoint(\n AUTH_SERVICE,\n self._region_name\n )\n signer = RequestSigner(\n ServiceId(AUTH_SERVICE),\n self._region_name,\n AUTH_SERVICE,\n AUTH_SIGNING_VERSION,\n session.get_credentials(),\n session.get_component('event_emitter')\n )\n action_params='Action=' + AUTH_COMMAND + '&Version=' + AUTH_API_VERSION\n params = {\n 'method': 'GET',\n 'url': 'https://' + endpoint[\"hostname\"] + '/?' 
+ action_params,\n 'body': {},\n 'headers': {CLUSTER_NAME_HEADER: cluster_name},\n 'context': {}\n }\n\n url=signer.generate_presigned_url(\n params,\n region_name=endpoint[\"credentialScope\"][\"region\"],\n operation_name='',\n expires_in=URL_TIMEOUT\n )\n return url\n\nclass SessionHandler(object):\n def get_session(self, region_name, role_arn):\n \"\"\"\n Assumes the given role and returns a session object assuming said role.\n \"\"\"\n session = botocore.session.get_session()\n if region_name is not None:\n session.set_config_variable('region', region_name)\n\n if role_arn is not None:\n sts = session.create_client(AUTH_SERVICE, region_name=region_name)\n credentials_dict = sts.assume_role(\n RoleArn=role_arn,\n RoleSessionName='EKSGetTokenAuth'\n )['Credentials']\n\n session.set_credentials(credentials_dict['AccessKeyId'],\n credentials_dict['SecretAccessKey'],\n credentials_dict['SessionToken'])\n return session\n else:\n return session\n", "path": "awscli/customizations/eks/get_token.py"}], "after_files": [{"content": "# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport base64\nimport botocore\nimport json\n\nfrom botocore import session\nfrom botocore.signers import RequestSigner\nfrom botocore.model import ServiceId\n\nfrom awscli.customizations.commands import BasicCommand\nfrom awscli.customizations.utils import uni_print\n\nAUTH_SERVICE = \"sts\"\nAUTH_COMMAND = \"GetCallerIdentity\"\nAUTH_API_VERSION = \"2011-06-15\"\nAUTH_SIGNING_VERSION = \"v4\"\n\n# Presigned url timeout in seconds\nURL_TIMEOUT = 60\n\nTOKEN_PREFIX = 'k8s-aws-v1.'\n\nCLUSTER_NAME_HEADER = 'x-k8s-aws-id'\n\n\nclass GetTokenCommand(BasicCommand):\n NAME = 'get-token'\n\n DESCRIPTION = (\"Get a token for authentication with an Amazon EKS cluster. \"\n \"This can be used as an alternative to the \"\n \"aws-iam-authenticator.\")\n\n ARG_TABLE = [\n {\n 'name': 'cluster-name',\n 'help_text': (\"Specify the name of the Amazon EKS cluster to create a token for.\"),\n 'required': True\n },\n {\n 'name': 'role-arn',\n 'help_text': (\"Assume this role for credentials when signing the token.\"),\n 'required': False\n }\n ]\n\n def _run_main(self, parsed_args, parsed_globals, token_generator=None):\n if token_generator is None:\n token_generator = TokenGenerator(self._session)\n token = token_generator.get_token(\n parsed_args.cluster_name,\n parsed_args.role_arn,\n parsed_globals.region,\n )\n\n full_object = {\n \"kind\": \"ExecCredential\",\n \"apiVersion\": \"client.authentication.k8s.io/v1alpha1\",\n \"spec\": {},\n \"status\": {\n \"token\": token\n }\n }\n\n uni_print(json.dumps(full_object))\n uni_print('\\n')\n\nclass TokenGenerator(object):\n def __init__(self, botocore_session):\n self._session_handler = SessionHandler(botocore_session)\n\n def get_token(self, cluster_name, role_arn, region_name=None):\n \"\"\" Generate a presigned url token to pass to kubectl. 
\"\"\"\n url = self._get_presigned_url(cluster_name, role_arn, region_name)\n token = TOKEN_PREFIX + base64.urlsafe_b64encode(url.encode('utf-8')).decode('utf-8').rstrip('=')\n return token\n\n def _get_presigned_url(self, cluster_name, role_arn, region_name=None):\n session = self._session_handler.get_session(\n region_name,\n role_arn\n )\n\n if region_name is None:\n region_name = session.get_config_variable('region')\n\n loader = botocore.loaders.create_loader()\n data = loader.load_data(\"endpoints\")\n endpoint_resolver = botocore.regions.EndpointResolver(data)\n endpoint = endpoint_resolver.construct_endpoint(\n AUTH_SERVICE,\n region_name\n )\n signer = RequestSigner(\n ServiceId(AUTH_SERVICE),\n region_name,\n AUTH_SERVICE,\n AUTH_SIGNING_VERSION,\n session.get_credentials(),\n session.get_component('event_emitter')\n )\n action_params='Action=' + AUTH_COMMAND + '&Version=' + AUTH_API_VERSION\n params = {\n 'method': 'GET',\n 'url': 'https://' + endpoint[\"hostname\"] + '/?' + action_params,\n 'body': {},\n 'headers': {CLUSTER_NAME_HEADER: cluster_name},\n 'context': {}\n }\n\n url=signer.generate_presigned_url(\n params,\n region_name=endpoint[\"credentialScope\"][\"region\"],\n operation_name='',\n expires_in=URL_TIMEOUT\n )\n return url\n\nclass SessionHandler(object):\n def __init__(self, botocore_session):\n self._session = botocore_session\n\n def get_session(self, region_name, role_arn):\n \"\"\"\n Assumes the given role and returns a session object assuming said role.\n \"\"\"\n session = self._session\n if region_name is not None:\n session.set_config_variable('region', region_name)\n\n if role_arn is not None:\n sts = session.create_client(AUTH_SERVICE, region_name=region_name)\n credentials_dict = sts.assume_role(\n RoleArn=role_arn,\n RoleSessionName='EKSGetTokenAuth'\n )['Credentials']\n\n session.set_credentials(credentials_dict['AccessKeyId'],\n credentials_dict['SecretAccessKey'],\n credentials_dict['SessionToken'])\n return session\n else:\n return session\n", "path": "awscli/customizations/eks/get_token.py"}]} | 1,872 | 730 |
gh_patches_debug_10190 | rasdani/github-patches | git_diff | kivy__python-for-android-1112 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Websocket error: SSL not available
Getting this error when attempting to open a secure websocket ('wss://') using websocket-client.
09-02 09:12:04.037 10328 10719 I python : [ERROR ] [MyApp] Websocket error: SSL not available.
09-02 09:12:04.037 10328 10719 I python : [INFO ] [MyApp] Websocket closed
Still happens after trying:
* buildozer android clean
* deleting the .buildozer directory and re-building
* issuing `buildozer android update` and `buildozer android clean` and update
buildozer requirements:
requirements = kivy==1.10.0, sqlite3, openssl, pyopenssl, pygments, websocket-client
Other notes:
* To get websocket-client to load I had to copy websocket-client source locally into my project as recommended here: https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/websocket-client/__init__.py#L3
* I disabled the SSL check on websocket-client and found a lower level exception:
09-02 10:00:43.215 13478 13511 I python : [ERROR ] [MyApp] Websocket error: _ssl.c:331: No root certificates specified for verification of other-side certificates.
* SSL does work on my android build; elsewhere in our app we are successfully using https with urllib2.
--- END ISSUE ---
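For context, a minimal sketch of the debugging workaround implied by the lower-level error above (the URL is a placeholder, and disabling certificate verification is appropriate only for troubleshooting):
```python
import ssl
import websocket  # provided by the websocket-client package

# Skip certificate verification to confirm the failure is the missing
# root-certificate store rather than the socket itself.
ws = websocket.WebSocketApp("wss://example.com/socket")
ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
```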
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/websocket-client/__init__.py`
Content:
```
1 from pythonforandroid.toolchain import Recipe
2
3 # if android app crashes on start with "ImportError: No module named websocket"
4 #
5 # copy the 'websocket' directory into your app directory to force inclusion.
6 #
7 # see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe
8
9
10 class WebSocketClient(Recipe):
11
12 url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'
13
14 version = '0.40.0'
15 # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'
16
17 # patches = ['websocket.patch'] # Paths relative to the recipe dir
18
19 depends = ['kivy', 'python2', 'android', 'pyjnius',
20 'cryptography', 'pyasn1', 'pyopenssl']
21
22 recipe = WebSocketClient()
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/websocket-client/__init__.py b/pythonforandroid/recipes/websocket-client/__init__.py
--- a/pythonforandroid/recipes/websocket-client/__init__.py
+++ b/pythonforandroid/recipes/websocket-client/__init__.py
@@ -5,6 +5,12 @@
# copy the 'websocket' directory into your app directory to force inclusion.
#
# see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe
+#
+# If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname
+# in the buildozer requirements, since Kivy targets python 2.7.x
+#
+# You may also need sslopt={"cert_reqs": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to
+# host verification
class WebSocketClient(Recipe):
| {"golden_diff": "diff --git a/pythonforandroid/recipes/websocket-client/__init__.py b/pythonforandroid/recipes/websocket-client/__init__.py\n--- a/pythonforandroid/recipes/websocket-client/__init__.py\n+++ b/pythonforandroid/recipes/websocket-client/__init__.py\n@@ -5,6 +5,12 @@\n # copy the 'websocket' directory into your app directory to force inclusion.\n #\n # see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe\n+#\n+# If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname\n+# in the buildozer requirements, since Kivy targets python 2.7.x\n+#\n+# You may also need sslopt={\"cert_reqs\": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to\n+# host verification\n \n \n class WebSocketClient(Recipe):\n", "issue": "Websocket error: SSL not available \nGetting this error when attempting to open a secure websocket ('wss://') using websocket-client.\r\n\r\n09-02 09:12:04.037 10328 10719 I python : [ERROR ] [MyApp] Websocket error: SSL not available.\r\n09-02 09:12:04.037 10328 10719 I python : [INFO ] [MyApp] Websocket closed\r\n\r\nStill happens after trying: \r\n* buildozer android clean \r\n* deleting the .buildozer directory and re-building\r\n* issuing `buildozer android update` and `buildozer android clean` and update\r\n\r\nbuildozer requirements:\r\n\r\nrequirements = kivy==1.10.0, sqlite3, openssl, pyopenssl, pygments, websocket-client\r\n\r\nOther notes:\r\n\r\n* To get websocket-client to load I had to copy websocket-client source locally into my project as recommended here: https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/websocket-client/__init__.py#L3\r\n\r\n* I disabled the SSL check on websocket-client and found a lower level exception:\r\n\r\n09-02 10:00:43.215 13478 13511 I python : [ERROR ] [MyApp] Websocket error: _ssl.c:331: No root certificates specified for verification of other-side certificates.\r\n\r\n* SSL does work on my android build; elsewhere in our app we are successfully using https with urllib2. 
\r\n\r\n\r\n\n", "before_files": [{"content": "from pythonforandroid.toolchain import Recipe\n\n# if android app crashes on start with \"ImportError: No module named websocket\"\n#\n# copy the 'websocket' directory into your app directory to force inclusion.\n#\n# see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe\n\n\nclass WebSocketClient(Recipe):\n\n url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'\n\n version = '0.40.0'\n # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'\n\n # patches = ['websocket.patch'] # Paths relative to the recipe dir\n\n depends = ['kivy', 'python2', 'android', 'pyjnius',\n 'cryptography', 'pyasn1', 'pyopenssl']\n\nrecipe = WebSocketClient()\n", "path": "pythonforandroid/recipes/websocket-client/__init__.py"}], "after_files": [{"content": "from pythonforandroid.toolchain import Recipe\n\n# if android app crashes on start with \"ImportError: No module named websocket\"\n#\n# copy the 'websocket' directory into your app directory to force inclusion.\n#\n# see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe\n#\n# If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname\n# in the buildozer requirements, since Kivy targets python 2.7.x\n#\n# You may also need sslopt={\"cert_reqs\": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to\n# host verification\n\n\nclass WebSocketClient(Recipe):\n\n url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'\n\n version = '0.40.0'\n # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'\n\n # patches = ['websocket.patch'] # Paths relative to the recipe dir\n\n depends = ['kivy', 'python2', 'android', 'pyjnius',\n 'cryptography', 'pyasn1', 'pyopenssl']\n\nrecipe = WebSocketClient()\n", "path": "pythonforandroid/recipes/websocket-client/__init__.py"}]} | 870 | 204 |
gh_patches_debug_617 | rasdani/github-patches | git_diff | pex-tool__pex-1618 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.67
On the docket:
+ [x] Expand --platform syntax: support full versions. #1614
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.66"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.66"
+__version__ = "2.1.67"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.66\"\n+__version__ = \"2.1.67\"\n", "issue": "Release 2.1.67\nOn the docket:\r\n+ [x] Expand --platform syntax: support full versions. #1614 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.66\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.67\"\n", "path": "pex/version.py"}]} | 341 | 96 |
gh_patches_debug_22356 | rasdani/github-patches | git_diff | scrapy__scrapy-4375 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"DeprecationWarning: Use of 'SCRAPY_'-prefixed environment variables" always shows
`ScrapyDeprecationWarning: Use of 'SCRAPY_'-prefixed environment variables to override settings is deprecated.`
This deprecation warning can't be shut up inside a project.
https://github.com/scrapy/scrapy/blob/034e2c31c7d55333c3de208f80dcee1bf45ef9b9/scrapy/utils/project.py#L17-L19
This check is always true inside a project, because of
https://github.com/scrapy/scrapy/blob/034e2c31c7d55333c3de208f80dcee1bf45ef9b9/scrapy/utils/conf.py#L77-L84
We should probably stop "emulating" ENV vars there, and pass this somehow differently.
Additionally, the `ScrapyDeprecationWarning` for `env_overrides` is too generic: it never tells you _which_ variable(s) are actually set, which is plain annoying. Please make it more specific.
--- END ISSUE ---
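For context, a minimal sketch of why the warning always fires inside a project (the settings module name is a placeholder): `init_env()` exports `SCRAPY_SETTINGS_MODULE`, and the old check counts every `SCRAPY_`-prefixed variable as a settings override.
```python
import os

os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'myproject.settings')
env_overrides = {k[7:]: v for k, v in os.environ.items()
                 if k.startswith('SCRAPY_')}
assert env_overrides  # non-empty even with no real overrides -> warning
```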
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/project.py`
Content:
```
1 import os
2 import pickle
3 import warnings
4
5 from importlib import import_module
6 from os.path import join, dirname, abspath, isabs, exists
7
8 from scrapy.utils.conf import closest_scrapy_cfg, get_config, init_env
9 from scrapy.settings import Settings
10 from scrapy.exceptions import NotConfigured, ScrapyDeprecationWarning
11
12
13 ENVVAR = 'SCRAPY_SETTINGS_MODULE'
14 DATADIR_CFG_SECTION = 'datadir'
15
16
17 def inside_project():
18 scrapy_module = os.environ.get('SCRAPY_SETTINGS_MODULE')
19 if scrapy_module is not None:
20 try:
21 import_module(scrapy_module)
22 except ImportError as exc:
23 warnings.warn("Cannot import scrapy settings module %s: %s" % (scrapy_module, exc))
24 else:
25 return True
26 return bool(closest_scrapy_cfg())
27
28
29 def project_data_dir(project='default'):
30 """Return the current project data dir, creating it if it doesn't exist"""
31 if not inside_project():
32 raise NotConfigured("Not inside a project")
33 cfg = get_config()
34 if cfg.has_option(DATADIR_CFG_SECTION, project):
35 d = cfg.get(DATADIR_CFG_SECTION, project)
36 else:
37 scrapy_cfg = closest_scrapy_cfg()
38 if not scrapy_cfg:
39 raise NotConfigured("Unable to find scrapy.cfg file to infer project data dir")
40 d = abspath(join(dirname(scrapy_cfg), '.scrapy'))
41 if not exists(d):
42 os.makedirs(d)
43 return d
44
45
46 def data_path(path, createdir=False):
47 """
48 Return the given path joined with the .scrapy data directory.
49 If given an absolute path, return it unmodified.
50 """
51 if not isabs(path):
52 if inside_project():
53 path = join(project_data_dir(), path)
54 else:
55 path = join('.scrapy', path)
56 if createdir and not exists(path):
57 os.makedirs(path)
58 return path
59
60
61 def get_project_settings():
62 if ENVVAR not in os.environ:
63 project = os.environ.get('SCRAPY_PROJECT', 'default')
64 init_env(project)
65
66 settings = Settings()
67 settings_module_path = os.environ.get(ENVVAR)
68 if settings_module_path:
69 settings.setmodule(settings_module_path, priority='project')
70
71 pickled_settings = os.environ.get("SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE")
72 if pickled_settings:
73 warnings.warn("Use of environment variable "
74 "'SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE' "
75 "is deprecated.", ScrapyDeprecationWarning)
76 settings.setdict(pickle.loads(pickled_settings), priority='project')
77
78 env_overrides = {k[7:]: v for k, v in os.environ.items() if
79 k.startswith('SCRAPY_')}
80 if env_overrides:
81 warnings.warn("Use of 'SCRAPY_'-prefixed environment variables to override settings is deprecated.", ScrapyDeprecationWarning)
82 settings.setdict(env_overrides, priority='project')
83 return settings
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/utils/project.py b/scrapy/utils/project.py
--- a/scrapy/utils/project.py
+++ b/scrapy/utils/project.py
@@ -75,9 +75,24 @@
"is deprecated.", ScrapyDeprecationWarning)
settings.setdict(pickle.loads(pickled_settings), priority='project')
- env_overrides = {k[7:]: v for k, v in os.environ.items() if
- k.startswith('SCRAPY_')}
- if env_overrides:
- warnings.warn("Use of 'SCRAPY_'-prefixed environment variables to override settings is deprecated.", ScrapyDeprecationWarning)
- settings.setdict(env_overrides, priority='project')
+ scrapy_envvars = {k[7:]: v for k, v in os.environ.items() if
+ k.startswith('SCRAPY_')}
+ valid_envvars = {
+ 'CHECK',
+ 'PICKLED_SETTINGS_TO_OVERRIDE',
+ 'PROJECT',
+ 'PYTHON_SHELL',
+ 'SETTINGS_MODULE',
+ }
+ setting_envvars = {k for k in scrapy_envvars if k not in valid_envvars}
+ if setting_envvars:
+ setting_envvar_list = ', '.join(sorted(setting_envvars))
+ warnings.warn(
+ 'Use of environment variables prefixed with SCRAPY_ to override '
+ 'settings is deprecated. The following environment variables are '
+ 'currently defined: {}'.format(setting_envvar_list),
+ ScrapyDeprecationWarning
+ )
+ settings.setdict(scrapy_envvars, priority='project')
+
return settings
| {"golden_diff": "diff --git a/scrapy/utils/project.py b/scrapy/utils/project.py\n--- a/scrapy/utils/project.py\n+++ b/scrapy/utils/project.py\n@@ -75,9 +75,24 @@\n \"is deprecated.\", ScrapyDeprecationWarning)\n settings.setdict(pickle.loads(pickled_settings), priority='project')\n \n- env_overrides = {k[7:]: v for k, v in os.environ.items() if\n- k.startswith('SCRAPY_')}\n- if env_overrides:\n- warnings.warn(\"Use of 'SCRAPY_'-prefixed environment variables to override settings is deprecated.\", ScrapyDeprecationWarning)\n- settings.setdict(env_overrides, priority='project')\n+ scrapy_envvars = {k[7:]: v for k, v in os.environ.items() if\n+ k.startswith('SCRAPY_')}\n+ valid_envvars = {\n+ 'CHECK',\n+ 'PICKLED_SETTINGS_TO_OVERRIDE',\n+ 'PROJECT',\n+ 'PYTHON_SHELL',\n+ 'SETTINGS_MODULE',\n+ }\n+ setting_envvars = {k for k in scrapy_envvars if k not in valid_envvars}\n+ if setting_envvars:\n+ setting_envvar_list = ', '.join(sorted(setting_envvars))\n+ warnings.warn(\n+ 'Use of environment variables prefixed with SCRAPY_ to override '\n+ 'settings is deprecated. The following environment variables are '\n+ 'currently defined: {}'.format(setting_envvar_list),\n+ ScrapyDeprecationWarning\n+ )\n+ settings.setdict(scrapy_envvars, priority='project')\n+\n return settings\n", "issue": "\"DeprecationWarning: Use of 'SCRAPY_'-prefixed environment variables\" always shows\n`ScrapyDeprecationWarning: Use of 'SCRAPY_'-prefixed environment variables to override settings is deprecated.`\r\n\r\nThis deprecation warning can't be shut up inside a project.\r\n\r\nhttps://github.com/scrapy/scrapy/blob/034e2c31c7d55333c3de208f80dcee1bf45ef9b9/scrapy/utils/project.py#L17-L19\r\n\r\nThis is always true, if inside a project. Because of\r\nhttps://github.com/scrapy/scrapy/blob/034e2c31c7d55333c3de208f80dcee1bf45ef9b9/scrapy/utils/conf.py#L77-L84\r\nWe should probably stop \"emulating\" ENV vars there, and pass this somehow differently.\r\n\r\nAdditionally, the `ScrapyDeprecationWarning` for `env_overrides` is too generic: It never tells you _which_ variable(s) are actually set. This is plain annoying. 
Please make it more specific.\n", "before_files": [{"content": "import os\nimport pickle\nimport warnings\n\nfrom importlib import import_module\nfrom os.path import join, dirname, abspath, isabs, exists\n\nfrom scrapy.utils.conf import closest_scrapy_cfg, get_config, init_env\nfrom scrapy.settings import Settings\nfrom scrapy.exceptions import NotConfigured, ScrapyDeprecationWarning\n\n\nENVVAR = 'SCRAPY_SETTINGS_MODULE'\nDATADIR_CFG_SECTION = 'datadir'\n\n\ndef inside_project():\n scrapy_module = os.environ.get('SCRAPY_SETTINGS_MODULE')\n if scrapy_module is not None:\n try:\n import_module(scrapy_module)\n except ImportError as exc:\n warnings.warn(\"Cannot import scrapy settings module %s: %s\" % (scrapy_module, exc))\n else:\n return True\n return bool(closest_scrapy_cfg())\n\n\ndef project_data_dir(project='default'):\n \"\"\"Return the current project data dir, creating it if it doesn't exist\"\"\"\n if not inside_project():\n raise NotConfigured(\"Not inside a project\")\n cfg = get_config()\n if cfg.has_option(DATADIR_CFG_SECTION, project):\n d = cfg.get(DATADIR_CFG_SECTION, project)\n else:\n scrapy_cfg = closest_scrapy_cfg()\n if not scrapy_cfg:\n raise NotConfigured(\"Unable to find scrapy.cfg file to infer project data dir\")\n d = abspath(join(dirname(scrapy_cfg), '.scrapy'))\n if not exists(d):\n os.makedirs(d)\n return d\n\n\ndef data_path(path, createdir=False):\n \"\"\"\n Return the given path joined with the .scrapy data directory.\n If given an absolute path, return it unmodified.\n \"\"\"\n if not isabs(path):\n if inside_project():\n path = join(project_data_dir(), path)\n else:\n path = join('.scrapy', path)\n if createdir and not exists(path):\n os.makedirs(path)\n return path\n\n\ndef get_project_settings():\n if ENVVAR not in os.environ:\n project = os.environ.get('SCRAPY_PROJECT', 'default')\n init_env(project)\n\n settings = Settings()\n settings_module_path = os.environ.get(ENVVAR)\n if settings_module_path:\n settings.setmodule(settings_module_path, priority='project')\n\n pickled_settings = os.environ.get(\"SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE\")\n if pickled_settings:\n warnings.warn(\"Use of environment variable \"\n \"'SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE' \"\n \"is deprecated.\", ScrapyDeprecationWarning)\n settings.setdict(pickle.loads(pickled_settings), priority='project')\n\n env_overrides = {k[7:]: v for k, v in os.environ.items() if\n k.startswith('SCRAPY_')}\n if env_overrides:\n warnings.warn(\"Use of 'SCRAPY_'-prefixed environment variables to override settings is deprecated.\", ScrapyDeprecationWarning)\n settings.setdict(env_overrides, priority='project')\n return settings\n", "path": "scrapy/utils/project.py"}], "after_files": [{"content": "import os\nimport pickle\nimport warnings\n\nfrom importlib import import_module\nfrom os.path import join, dirname, abspath, isabs, exists\n\nfrom scrapy.utils.conf import closest_scrapy_cfg, get_config, init_env\nfrom scrapy.settings import Settings\nfrom scrapy.exceptions import NotConfigured, ScrapyDeprecationWarning\n\n\nENVVAR = 'SCRAPY_SETTINGS_MODULE'\nDATADIR_CFG_SECTION = 'datadir'\n\n\ndef inside_project():\n scrapy_module = os.environ.get('SCRAPY_SETTINGS_MODULE')\n if scrapy_module is not None:\n try:\n import_module(scrapy_module)\n except ImportError as exc:\n warnings.warn(\"Cannot import scrapy settings module %s: %s\" % (scrapy_module, exc))\n else:\n return True\n return bool(closest_scrapy_cfg())\n\n\ndef project_data_dir(project='default'):\n \"\"\"Return the current project data dir, 
creating it if it doesn't exist\"\"\"\n if not inside_project():\n raise NotConfigured(\"Not inside a project\")\n cfg = get_config()\n if cfg.has_option(DATADIR_CFG_SECTION, project):\n d = cfg.get(DATADIR_CFG_SECTION, project)\n else:\n scrapy_cfg = closest_scrapy_cfg()\n if not scrapy_cfg:\n raise NotConfigured(\"Unable to find scrapy.cfg file to infer project data dir\")\n d = abspath(join(dirname(scrapy_cfg), '.scrapy'))\n if not exists(d):\n os.makedirs(d)\n return d\n\n\ndef data_path(path, createdir=False):\n \"\"\"\n Return the given path joined with the .scrapy data directory.\n If given an absolute path, return it unmodified.\n \"\"\"\n if not isabs(path):\n if inside_project():\n path = join(project_data_dir(), path)\n else:\n path = join('.scrapy', path)\n if createdir and not exists(path):\n os.makedirs(path)\n return path\n\n\ndef get_project_settings():\n if ENVVAR not in os.environ:\n project = os.environ.get('SCRAPY_PROJECT', 'default')\n init_env(project)\n\n settings = Settings()\n settings_module_path = os.environ.get(ENVVAR)\n if settings_module_path:\n settings.setmodule(settings_module_path, priority='project')\n\n pickled_settings = os.environ.get(\"SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE\")\n if pickled_settings:\n warnings.warn(\"Use of environment variable \"\n \"'SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE' \"\n \"is deprecated.\", ScrapyDeprecationWarning)\n settings.setdict(pickle.loads(pickled_settings), priority='project')\n\n scrapy_envvars = {k[7:]: v for k, v in os.environ.items() if\n k.startswith('SCRAPY_')}\n valid_envvars = {\n 'CHECK',\n 'PICKLED_SETTINGS_TO_OVERRIDE',\n 'PROJECT',\n 'PYTHON_SHELL',\n 'SETTINGS_MODULE',\n }\n setting_envvars = {k for k in scrapy_envvars if k not in valid_envvars}\n if setting_envvars:\n setting_envvar_list = ', '.join(sorted(setting_envvars))\n warnings.warn(\n 'Use of environment variables prefixed with SCRAPY_ to override '\n 'settings is deprecated. The following environment variables are '\n 'currently defined: {}'.format(setting_envvar_list),\n ScrapyDeprecationWarning\n )\n settings.setdict(scrapy_envvars, priority='project')\n\n return settings\n", "path": "scrapy/utils/project.py"}]} | 1,317 | 357 |
gh_patches_debug_17901 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-386 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NY: Native American Relations committee is not being scraped correctly
http://www.nysenate.gov/committee/state-native-american-relations
http://openstates.org/admin/object_json/committees/NYC000116/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/ny/committees.py`
Content:
```
1 import re
2
3 from billy.scrape import NoDataForPeriod
4 from billy.scrape.committees import CommitteeScraper, Committee
5
6 import lxml.html
7
8
9 def parse_name(name):
10 """
11 Split a committee membership string into name and role.
12
13 >>> parse_name('Felix Ortiz')
14 ('Felix Ortiz', 'member')
15 >>> parse_name('Felix Ortiz (Chair)')
16 ('Felix Ortiz', 'chair')
17 >>> parse_name('Hon. Felix Ortiz, Co-Chair')
18 ('Felix Ortiz', 'co-chair')
19 >>> parse_name('Owen H.\\r\\nJohnson (Vice Chairperson)')
20 ('Owen H. Johnson', 'vice chairperson')
21 """
22 name = re.sub(r'^(Hon\.|Assemblyman|Assemblywoman)\s+', '', name)
23 name = re.sub(r'\s+', ' ', name)
24
25 roles = ["Chairwoman", "Chairperson", "Chair", "Secretary", "Treasurer",
26 "Parliamentarian", "Chaplain"]
27 match = re.match(
28 r'([^(]+),? \(?((Co|Vice)?-?\s*(%s))\)?' % '|'.join(roles),
29 name)
30
31 if match:
32 name = match.group(1).strip(' ,')
33 role = match.group(2).lower()
34 return (name, role)
35 return (name, 'member')
36
37
38 class NYCommitteeScraper(CommitteeScraper):
39 state = "ny"
40 latest_only = True
41
42 def scrape(self, chamber, term):
43 getattr(self, 'scrape_' + chamber)()
44
45 def scrape_lower(self, only_names=None):
46 committees = []
47 url = "http://assembly.state.ny.us/comm/"
48 page = self.urlopen(url)
49 page = lxml.html.fromstring(page)
50 page.make_links_absolute(url)
51
52 for link in page.xpath("//a[contains(@href, 'sec=mem')]"):
53 name = link.xpath("string(../strong)").strip()
54 if 'Caucus' in name:
55 continue
56
57 url = link.attrib['href']
58
59 committees.append(name)
60
61 self.scrape_lower_committee(name, url)
62 return committees
63
64 def scrape_lower_committee(self, name, url):
65 page = self.urlopen(url)
66 page = lxml.html.fromstring(page)
67
68 comm = Committee('lower', name)
69 comm.add_source(url)
70
71 for link in page.xpath("//div[@class='commlinks']//a[contains(@href, 'mem')]"):
72 member = link.text.strip()
73 member = re.sub(r'\s+', ' ', member)
74
75 name, role = parse_name(member)
76 comm.add_member(name, role)
77
78 self.save_committee(comm)
79
80 def scrape_upper(self):
81 committees = []
82 url = "http://www.nysenate.gov/committees"
83 page = self.urlopen(url)
84 page = lxml.html.fromstring(page)
85 page.make_links_absolute(url)
86
87 for link in page.xpath("//a[contains(@href, '/committee/')]"):
88 name = link.text.strip()
89
90 if name == 'New York State Conference of Black Senators':
91 # stop scraping once we reach the caucuses
92 break
93
94 committees.append(name)
95 self.scrape_upper_committee(name, link.attrib['href'])
96 return committees
97
98 def scrape_upper_committee(self, name, url):
99 page = self.urlopen(url)
100 page = lxml.html.fromstring(page)
101
102 comm = Committee('upper', name)
103 comm.add_source(url)
104
105 member_div = page.xpath("//div[@class = 'committee-members']")[0]
106
107 seen = set()
108 for link in member_div.xpath(".//a"):
109 if not link.text:
110 continue
111
112 member = link.text.strip()
113
114 next_elem = link.getnext()
115 if (next_elem is not None and
116 next_elem.tag == 'a' and
117 next_elem.attrib['href'] == link.attrib['href']):
118 # Sometimes NY is cool and splits names across a
119 # couple links
120 member = "%s %s" % (member, next_elem.text.strip())
121
122 member = re.sub(r'\s+', ' ', member)
123
124 if member in seen or not member:
125 continue
126 seen.add(member)
127
128 name, role = parse_name(member)
129 comm.add_member(name, role)
130
131 self.save_committee(comm)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/ny/committees.py b/openstates/ny/committees.py
--- a/openstates/ny/committees.py
+++ b/openstates/ny/committees.py
@@ -84,15 +84,17 @@
page = lxml.html.fromstring(page)
page.make_links_absolute(url)
- for link in page.xpath("//a[contains(@href, '/committee/')]"):
- name = link.text.strip()
+ for h2 in page.xpath("//h2"):
+ committee_types = ['Standing Committees','Temporary Committees']
+ if h2.text not in committee_types:
+ continue
+
+ for link in h2.getparent().xpath(".//a[contains(@href, '/committee/')]"):
+ name = link.text.strip()
- if name == 'New York State Conference of Black Senators':
- # stop scraping once we reach the caucuses
- break
+ committees.append(name)
+ self.scrape_upper_committee(name, link.attrib['href'])
- committees.append(name)
- self.scrape_upper_committee(name, link.attrib['href'])
return committees
def scrape_upper_committee(self, name, url):
| {"golden_diff": "diff --git a/openstates/ny/committees.py b/openstates/ny/committees.py\n--- a/openstates/ny/committees.py\n+++ b/openstates/ny/committees.py\n@@ -84,15 +84,17 @@\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n \n- for link in page.xpath(\"//a[contains(@href, '/committee/')]\"):\n- name = link.text.strip()\n+ for h2 in page.xpath(\"//h2\"):\n+ committee_types = ['Standing Committees','Temporary Committees']\n+ if h2.text not in committee_types:\n+ continue\n+ \n+ for link in h2.getparent().xpath(\".//a[contains(@href, '/committee/')]\"):\n+ name = link.text.strip()\n \n- if name == 'New York State Conference of Black Senators':\n- # stop scraping once we reach the caucuses\n- break\n+ committees.append(name)\n+ self.scrape_upper_committee(name, link.attrib['href'])\n \n- committees.append(name)\n- self.scrape_upper_committee(name, link.attrib['href'])\n return committees\n \n def scrape_upper_committee(self, name, url):\n", "issue": "NY: Native American Relations committee is not being scraped correctly\nhttp://www.nysenate.gov/committee/state-native-american-relations\nhttp://openstates.org/admin/object_json/committees/NYC000116/\n\n", "before_files": [{"content": "import re\n\nfrom billy.scrape import NoDataForPeriod\nfrom billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\n\ndef parse_name(name):\n \"\"\"\n Split a committee membership string into name and role.\n\n >>> parse_name('Felix Ortiz')\n ('Felix Ortiz', 'member')\n >>> parse_name('Felix Ortiz (Chair)')\n ('Felix Ortiz', 'chair')\n >>> parse_name('Hon. Felix Ortiz, Co-Chair')\n ('Felix Ortiz', 'co-chair')\n >>> parse_name('Owen H.\\\\r\\\\nJohnson (Vice Chairperson)')\n ('Owen H. Johnson', 'vice chairperson')\n \"\"\"\n name = re.sub(r'^(Hon\\.|Assemblyman|Assemblywoman)\\s+', '', name)\n name = re.sub(r'\\s+', ' ', name)\n\n roles = [\"Chairwoman\", \"Chairperson\", \"Chair\", \"Secretary\", \"Treasurer\",\n \"Parliamentarian\", \"Chaplain\"]\n match = re.match(\n r'([^(]+),? \\(?((Co|Vice)?-?\\s*(%s))\\)?' 
% '|'.join(roles),\n name)\n\n if match:\n name = match.group(1).strip(' ,')\n role = match.group(2).lower()\n return (name, role)\n return (name, 'member')\n\n\nclass NYCommitteeScraper(CommitteeScraper):\n state = \"ny\"\n latest_only = True\n\n def scrape(self, chamber, term):\n getattr(self, 'scrape_' + chamber)()\n\n def scrape_lower(self, only_names=None):\n committees = []\n url = \"http://assembly.state.ny.us/comm/\"\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n for link in page.xpath(\"//a[contains(@href, 'sec=mem')]\"):\n name = link.xpath(\"string(../strong)\").strip()\n if 'Caucus' in name:\n continue\n\n url = link.attrib['href']\n\n committees.append(name)\n\n self.scrape_lower_committee(name, url)\n return committees\n\n def scrape_lower_committee(self, name, url):\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n\n comm = Committee('lower', name)\n comm.add_source(url)\n\n for link in page.xpath(\"//div[@class='commlinks']//a[contains(@href, 'mem')]\"):\n member = link.text.strip()\n member = re.sub(r'\\s+', ' ', member)\n\n name, role = parse_name(member)\n comm.add_member(name, role)\n\n self.save_committee(comm)\n\n def scrape_upper(self):\n committees = []\n url = \"http://www.nysenate.gov/committees\"\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n for link in page.xpath(\"//a[contains(@href, '/committee/')]\"):\n name = link.text.strip()\n\n if name == 'New York State Conference of Black Senators':\n # stop scraping once we reach the caucuses\n break\n\n committees.append(name)\n self.scrape_upper_committee(name, link.attrib['href'])\n return committees\n\n def scrape_upper_committee(self, name, url):\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n\n comm = Committee('upper', name)\n comm.add_source(url)\n\n member_div = page.xpath(\"//div[@class = 'committee-members']\")[0]\n\n seen = set()\n for link in member_div.xpath(\".//a\"):\n if not link.text:\n continue\n\n member = link.text.strip()\n\n next_elem = link.getnext()\n if (next_elem is not None and\n next_elem.tag == 'a' and\n next_elem.attrib['href'] == link.attrib['href']):\n # Sometimes NY is cool and splits names across a\n # couple links\n member = \"%s %s\" % (member, next_elem.text.strip())\n\n member = re.sub(r'\\s+', ' ', member)\n\n if member in seen or not member:\n continue\n seen.add(member)\n\n name, role = parse_name(member)\n comm.add_member(name, role)\n\n self.save_committee(comm)\n", "path": "openstates/ny/committees.py"}], "after_files": [{"content": "import re\n\nfrom billy.scrape import NoDataForPeriod\nfrom billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\n\ndef parse_name(name):\n \"\"\"\n Split a committee membership string into name and role.\n\n >>> parse_name('Felix Ortiz')\n ('Felix Ortiz', 'member')\n >>> parse_name('Felix Ortiz (Chair)')\n ('Felix Ortiz', 'chair')\n >>> parse_name('Hon. Felix Ortiz, Co-Chair')\n ('Felix Ortiz', 'co-chair')\n >>> parse_name('Owen H.\\\\r\\\\nJohnson (Vice Chairperson)')\n ('Owen H. Johnson', 'vice chairperson')\n \"\"\"\n name = re.sub(r'^(Hon\\.|Assemblyman|Assemblywoman)\\s+', '', name)\n name = re.sub(r'\\s+', ' ', name)\n\n roles = [\"Chairwoman\", \"Chairperson\", \"Chair\", \"Secretary\", \"Treasurer\",\n \"Parliamentarian\", \"Chaplain\"]\n match = re.match(\n r'([^(]+),? \\(?((Co|Vice)?-?\\s*(%s))\\)?' 
% '|'.join(roles),\n name)\n\n if match:\n name = match.group(1).strip(' ,')\n role = match.group(2).lower()\n return (name, role)\n return (name, 'member')\n\n\nclass NYCommitteeScraper(CommitteeScraper):\n state = \"ny\"\n latest_only = True\n\n def scrape(self, chamber, term):\n getattr(self, 'scrape_' + chamber)()\n\n def scrape_lower(self, only_names=None):\n committees = []\n url = \"http://assembly.state.ny.us/comm/\"\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n for link in page.xpath(\"//a[contains(@href, 'sec=mem')]\"):\n name = link.xpath(\"string(../strong)\").strip()\n if 'Caucus' in name:\n continue\n\n url = link.attrib['href']\n\n committees.append(name)\n\n self.scrape_lower_committee(name, url)\n return committees\n\n def scrape_lower_committee(self, name, url):\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n\n comm = Committee('lower', name)\n comm.add_source(url)\n\n for link in page.xpath(\"//div[@class='commlinks']//a[contains(@href, 'mem')]\"):\n member = link.text.strip()\n member = re.sub(r'\\s+', ' ', member)\n\n name, role = parse_name(member)\n comm.add_member(name, role)\n\n self.save_committee(comm)\n\n def scrape_upper(self):\n committees = []\n url = \"http://www.nysenate.gov/committees\"\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n for h2 in page.xpath(\"//h2\"):\n committee_types = ['Standing Committees','Temporary Committees']\n if h2.text not in committee_types:\n continue\n \n for link in h2.getparent().xpath(\".//a[contains(@href, '/committee/')]\"):\n name = link.text.strip()\n\n committees.append(name)\n self.scrape_upper_committee(name, link.attrib['href'])\n\n return committees\n\n def scrape_upper_committee(self, name, url):\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n\n comm = Committee('upper', name)\n comm.add_source(url)\n\n member_div = page.xpath(\"//div[@class = 'committee-members']\")[0]\n\n seen = set()\n for link in member_div.xpath(\".//a\"):\n if not link.text:\n continue\n\n member = link.text.strip()\n\n next_elem = link.getnext()\n if (next_elem is not None and\n next_elem.tag == 'a' and\n next_elem.attrib['href'] == link.attrib['href']):\n # Sometimes NY is cool and splits names across a\n # couple links\n member = \"%s %s\" % (member, next_elem.text.strip())\n\n member = re.sub(r'\\s+', ' ', member)\n\n if member in seen or not member:\n continue\n seen.add(member)\n\n name, role = parse_name(member)\n comm.add_member(name, role)\n\n self.save_committee(comm)\n", "path": "openstates/ny/committees.py"}]} | 1,554 | 264 |
gh_patches_debug_4705 | rasdani/github-patches | git_diff | ultrabug__py3status-1549 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
default config location
I have installed Ubuntu 18.04, and it makes sense to put my `i3status.conf` in `~/.config/i3/`, but this path is not checked by default. I think we should add it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py3status/cli.py`
Content:
```
1 import argparse
2 import os
3
4
5 def parse_cli():
6 """
7 Parse the command line arguments
8 """
9
10 # FIXME do we really want to do this here?
11
12 # get home path
13 home_path = os.path.expanduser("~")
14
15 # i3status config file default detection
16 # respect i3status' file detection order wrt issue #43
17 i3status_config_file_candidates = [
18 "{}/.i3status.conf".format(home_path),
19 "{}/i3status/config".format(
20 os.environ.get("XDG_CONFIG_HOME", "{}/.config".format(home_path))
21 ),
22 "/etc/i3status.conf",
23 "{}/i3status/config".format(os.environ.get("XDG_CONFIG_DIRS", "/etc/xdg")),
24 ]
25 for fn in i3status_config_file_candidates:
26 if os.path.isfile(fn):
27 i3status_config_file_default = fn
28 break
29 else:
30 # if none of the default files exists, we will default
31 # to ~/.i3/i3status.conf
32 i3status_config_file_default = "{}/.i3/i3status.conf".format(home_path)
33
34 # command line options
35 parser = argparse.ArgumentParser(
36 description="The agile, python-powered, i3status wrapper"
37 )
38 parser = argparse.ArgumentParser(add_help=True)
39 parser.add_argument(
40 "-b",
41 "--dbus-notify",
42 action="store_true",
43 default=False,
44 dest="dbus_notify",
45 help=(
46 "use notify-send to send user notifications "
47 "rather than i3-nagbar, "
48 "requires a notification daemon eg dunst"
49 ),
50 )
51 parser.add_argument(
52 "-c",
53 "--config",
54 action="store",
55 dest="i3status_conf",
56 type=str,
57 default=i3status_config_file_default,
58 help="path to i3status config file",
59 )
60 parser.add_argument(
61 "-d", "--debug", action="store_true", help="be verbose in syslog"
62 )
63 parser.add_argument(
64 "-g",
65 "--gevent",
66 action="store_true",
67 default=False,
68 dest="gevent",
69 help="enable gevent monkey patching (default False)",
70 )
71 parser.add_argument(
72 "-i",
73 "--include",
74 action="append",
75 dest="include_paths",
76 help=(
77 "include user-written modules from those "
78 "directories (default ~/.i3/py3status)"
79 ),
80 )
81 parser.add_argument(
82 "-l",
83 "--log-file",
84 action="store",
85 dest="log_file",
86 type=str,
87 default=None,
88 help="path to py3status log file",
89 )
90 parser.add_argument(
91 "-n",
92 "--interval",
93 action="store",
94 dest="interval",
95 type=float,
96 default=1,
97 help="update interval in seconds (default 1 sec)",
98 )
99 parser.add_argument(
100 "-s",
101 "--standalone",
102 action="store_true",
103 help="standalone mode, do not use i3status",
104 )
105 parser.add_argument(
106 "-t",
107 "--timeout",
108 action="store",
109 dest="cache_timeout",
110 type=int,
111 default=60,
112 help="default injection cache timeout in seconds (default 60 sec)",
113 )
114 parser.add_argument(
115 "-m",
116 "--disable-click-events",
117 action="store_true",
118 dest="disable_click_events",
119 default=False,
120 help="disable all click events",
121 )
122 parser.add_argument(
123 "-v", "--version", action="store_true", help="show py3status version and exit"
124 )
125 # FIXME we should make all of these explicit so they self document etc
126 parser.add_argument("cli_command", nargs="*", help=argparse.SUPPRESS)
127
128 options = parser.parse_args()
129
130 # only asked for version
131 if options.version:
132 import sys
133 from platform import python_version
134 from py3status.version import version
135
136 print("py3status version {} (python {})".format(version, python_version()))
137 sys.exit(0)
138
139 # all done
140 return options
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py3status/cli.py b/py3status/cli.py
--- a/py3status/cli.py
+++ b/py3status/cli.py
@@ -19,6 +19,7 @@
"{}/i3status/config".format(
os.environ.get("XDG_CONFIG_HOME", "{}/.config".format(home_path))
),
+ "{}/.config/i3/".format(home_path),
"/etc/i3status.conf",
"{}/i3status/config".format(os.environ.get("XDG_CONFIG_DIRS", "/etc/xdg")),
]
| {"golden_diff": "diff --git a/py3status/cli.py b/py3status/cli.py\n--- a/py3status/cli.py\n+++ b/py3status/cli.py\n@@ -19,6 +19,7 @@\n \"{}/i3status/config\".format(\n os.environ.get(\"XDG_CONFIG_HOME\", \"{}/.config\".format(home_path))\n ),\n+ \"{}/.config/i3/\".format(home_path),\n \"/etc/i3status.conf\",\n \"{}/i3status/config\".format(os.environ.get(\"XDG_CONFIG_DIRS\", \"/etc/xdg\")),\n ]\n", "issue": "default config location\nI have installed ubuntu 18.04 and it makes sense to put my `i3status.conf` in `~/.config/i3/` but this is not found by default. I think we should add this path.\n", "before_files": [{"content": "import argparse\nimport os\n\n\ndef parse_cli():\n \"\"\"\n Parse the command line arguments\n \"\"\"\n\n # FIXME do we really want to do this here?\n\n # get home path\n home_path = os.path.expanduser(\"~\")\n\n # i3status config file default detection\n # respect i3status' file detection order wrt issue #43\n i3status_config_file_candidates = [\n \"{}/.i3status.conf\".format(home_path),\n \"{}/i3status/config\".format(\n os.environ.get(\"XDG_CONFIG_HOME\", \"{}/.config\".format(home_path))\n ),\n \"/etc/i3status.conf\",\n \"{}/i3status/config\".format(os.environ.get(\"XDG_CONFIG_DIRS\", \"/etc/xdg\")),\n ]\n for fn in i3status_config_file_candidates:\n if os.path.isfile(fn):\n i3status_config_file_default = fn\n break\n else:\n # if none of the default files exists, we will default\n # to ~/.i3/i3status.conf\n i3status_config_file_default = \"{}/.i3/i3status.conf\".format(home_path)\n\n # command line options\n parser = argparse.ArgumentParser(\n description=\"The agile, python-powered, i3status wrapper\"\n )\n parser = argparse.ArgumentParser(add_help=True)\n parser.add_argument(\n \"-b\",\n \"--dbus-notify\",\n action=\"store_true\",\n default=False,\n dest=\"dbus_notify\",\n help=(\n \"use notify-send to send user notifications \"\n \"rather than i3-nagbar, \"\n \"requires a notification daemon eg dunst\"\n ),\n )\n parser.add_argument(\n \"-c\",\n \"--config\",\n action=\"store\",\n dest=\"i3status_conf\",\n type=str,\n default=i3status_config_file_default,\n help=\"path to i3status config file\",\n )\n parser.add_argument(\n \"-d\", \"--debug\", action=\"store_true\", help=\"be verbose in syslog\"\n )\n parser.add_argument(\n \"-g\",\n \"--gevent\",\n action=\"store_true\",\n default=False,\n dest=\"gevent\",\n help=\"enable gevent monkey patching (default False)\",\n )\n parser.add_argument(\n \"-i\",\n \"--include\",\n action=\"append\",\n dest=\"include_paths\",\n help=(\n \"include user-written modules from those \"\n \"directories (default ~/.i3/py3status)\"\n ),\n )\n parser.add_argument(\n \"-l\",\n \"--log-file\",\n action=\"store\",\n dest=\"log_file\",\n type=str,\n default=None,\n help=\"path to py3status log file\",\n )\n parser.add_argument(\n \"-n\",\n \"--interval\",\n action=\"store\",\n dest=\"interval\",\n type=float,\n default=1,\n help=\"update interval in seconds (default 1 sec)\",\n )\n parser.add_argument(\n \"-s\",\n \"--standalone\",\n action=\"store_true\",\n help=\"standalone mode, do not use i3status\",\n )\n parser.add_argument(\n \"-t\",\n \"--timeout\",\n action=\"store\",\n dest=\"cache_timeout\",\n type=int,\n default=60,\n help=\"default injection cache timeout in seconds (default 60 sec)\",\n )\n parser.add_argument(\n \"-m\",\n \"--disable-click-events\",\n action=\"store_true\",\n dest=\"disable_click_events\",\n default=False,\n help=\"disable all click events\",\n )\n parser.add_argument(\n \"-v\", \"--version\", 
action=\"store_true\", help=\"show py3status version and exit\"\n )\n # FIXME we should make all of these explicit so they self document etc\n parser.add_argument(\"cli_command\", nargs=\"*\", help=argparse.SUPPRESS)\n\n options = parser.parse_args()\n\n # only asked for version\n if options.version:\n import sys\n from platform import python_version\n from py3status.version import version\n\n print(\"py3status version {} (python {})\".format(version, python_version()))\n sys.exit(0)\n\n # all done\n return options\n", "path": "py3status/cli.py"}], "after_files": [{"content": "import argparse\nimport os\n\n\ndef parse_cli():\n \"\"\"\n Parse the command line arguments\n \"\"\"\n\n # FIXME do we really want to do this here?\n\n # get home path\n home_path = os.path.expanduser(\"~\")\n\n # i3status config file default detection\n # respect i3status' file detection order wrt issue #43\n i3status_config_file_candidates = [\n \"{}/.i3status.conf\".format(home_path),\n \"{}/i3status/config\".format(\n os.environ.get(\"XDG_CONFIG_HOME\", \"{}/.config\".format(home_path))\n ),\n \"{}/.config/i3/\".format(home_path),\n \"/etc/i3status.conf\",\n \"{}/i3status/config\".format(os.environ.get(\"XDG_CONFIG_DIRS\", \"/etc/xdg\")),\n ]\n for fn in i3status_config_file_candidates:\n if os.path.isfile(fn):\n i3status_config_file_default = fn\n break\n else:\n # if none of the default files exists, we will default\n # to ~/.i3/i3status.conf\n i3status_config_file_default = \"{}/.i3/i3status.conf\".format(home_path)\n\n # command line options\n parser = argparse.ArgumentParser(\n description=\"The agile, python-powered, i3status wrapper\"\n )\n parser = argparse.ArgumentParser(add_help=True)\n parser.add_argument(\n \"-b\",\n \"--dbus-notify\",\n action=\"store_true\",\n default=False,\n dest=\"dbus_notify\",\n help=(\n \"use notify-send to send user notifications \"\n \"rather than i3-nagbar, \"\n \"requires a notification daemon eg dunst\"\n ),\n )\n parser.add_argument(\n \"-c\",\n \"--config\",\n action=\"store\",\n dest=\"i3status_conf\",\n type=str,\n default=i3status_config_file_default,\n help=\"path to i3status config file\",\n )\n parser.add_argument(\n \"-d\", \"--debug\", action=\"store_true\", help=\"be verbose in syslog\"\n )\n parser.add_argument(\n \"-g\",\n \"--gevent\",\n action=\"store_true\",\n default=False,\n dest=\"gevent\",\n help=\"enable gevent monkey patching (default False)\",\n )\n parser.add_argument(\n \"-i\",\n \"--include\",\n action=\"append\",\n dest=\"include_paths\",\n help=(\n \"include user-written modules from those \"\n \"directories (default ~/.i3/py3status)\"\n ),\n )\n parser.add_argument(\n \"-l\",\n \"--log-file\",\n action=\"store\",\n dest=\"log_file\",\n type=str,\n default=None,\n help=\"path to py3status log file\",\n )\n parser.add_argument(\n \"-n\",\n \"--interval\",\n action=\"store\",\n dest=\"interval\",\n type=float,\n default=1,\n help=\"update interval in seconds (default 1 sec)\",\n )\n parser.add_argument(\n \"-s\",\n \"--standalone\",\n action=\"store_true\",\n help=\"standalone mode, do not use i3status\",\n )\n parser.add_argument(\n \"-t\",\n \"--timeout\",\n action=\"store\",\n dest=\"cache_timeout\",\n type=int,\n default=60,\n help=\"default injection cache timeout in seconds (default 60 sec)\",\n )\n parser.add_argument(\n \"-m\",\n \"--disable-click-events\",\n action=\"store_true\",\n dest=\"disable_click_events\",\n default=False,\n help=\"disable all click events\",\n )\n parser.add_argument(\n \"-v\", \"--version\", 
action=\"store_true\", help=\"show py3status version and exit\"\n )\n # FIXME we should make all of these explicit so they self document etc\n parser.add_argument(\"cli_command\", nargs=\"*\", help=argparse.SUPPRESS)\n\n options = parser.parse_args()\n\n # only asked for version\n if options.version:\n import sys\n from platform import python_version\n from py3status.version import version\n\n print(\"py3status version {} (python {})\".format(version, python_version()))\n sys.exit(0)\n\n # all done\n return options\n", "path": "py3status/cli.py"}]} | 1,521 | 123 |
gh_patches_debug_36208 | rasdani/github-patches | git_diff | getsentry__sentry-4117 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Parse the email template contexts for identifiers and log them along with the mail.queued event.
We currently log the types of mail sent but do not attach any context, so while it is easy to see that someone got an email about an event, the `event_id` is not passed along and the log event is somewhat useless.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/plugins/bases/notify.py`
Content:
```
1 """
2 sentry.plugins.bases.notify
3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 :copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.
6 :license: BSD, see LICENSE for more details.
7 """
8 from __future__ import absolute_import, print_function
9
10 import logging
11 import six
12
13 from django import forms
14
15 from sentry.app import (
16 digests,
17 ratelimiter,
18 )
19 from sentry.digests import get_option_key as get_digest_option_key
20 from sentry.digests.notifications import (
21 event_to_record,
22 unsplit_key,
23 )
24 from sentry.plugins import Notification, Plugin
25 from sentry.models import (
26 ProjectOption,
27 UserOption,
28 )
29 from sentry.tasks.digests import deliver_digest
30
31
32 class NotificationConfigurationForm(forms.Form):
33 pass
34
35
36 class BaseNotificationUserOptionsForm(forms.Form):
37 def __init__(self, plugin, user, *args, **kwargs):
38 self.plugin = plugin
39 self.user = user
40 super(BaseNotificationUserOptionsForm, self).__init__(*args, **kwargs)
41
42 def get_title(self):
43 return self.plugin.get_conf_title()
44
45 def get_description(self):
46 return ""
47
48 def save(self):
49 raise NotImplementedError
50
51
52 class NotificationPlugin(Plugin):
53 description = ('Notify project members when a new event is seen for the first time, or when an '
54 'already resolved event has changed back to unresolved.')
55 # site_conf_form = NotificationConfigurationForm
56 project_conf_form = NotificationConfigurationForm
57
58 def get_plugin_type(self):
59 return 'notification'
60
61 def notify(self, notification):
62 event = notification.event
63 return self.notify_users(event.group, event)
64
65 def rule_notify(self, event, futures):
66 rules = []
67 for future in futures:
68 rules.append(future.rule)
69 if not future.kwargs:
70 continue
71 raise NotImplementedError('The default behavior for notification de-duplication does not support args')
72
73 project = event.group.project
74 if hasattr(self, 'notify_digest') and digests.enabled(project):
75 get_digest_option = lambda key: ProjectOption.objects.get_value(
76 project,
77 get_digest_option_key(self.get_conf_key(), key),
78 )
79 digest_key = unsplit_key(self, event.group.project)
80 immediate_delivery = digests.add(
81 digest_key,
82 event_to_record(event, rules),
83 increment_delay=get_digest_option('increment_delay'),
84 maximum_delay=get_digest_option('maximum_delay'),
85 )
86 if immediate_delivery:
87 deliver_digest.delay(digest_key)
88
89 else:
90 notification = Notification(
91 event=event,
92 rules=rules,
93 )
94 self.notify(notification)
95 self.logger.info('notification.dispatched', extra={
96 'event_id': event.id,
97 'plugin': self.slug,
98 'rule_id': rules[0].id if rules else None,
99 })
100
101 def notify_users(self, group, event, fail_silently=False):
102 raise NotImplementedError
103
104 def notify_about_activity(self, activity):
105 pass
106
107 def get_sendable_users(self, project):
108 """
109 Return a collection of user IDs that are eligible to receive
110 notifications for the provided project.
111 """
112 conf_key = self.get_conf_key()
113
114 alert_settings = dict(
115 (o.user_id, int(o.value))
116 for o in UserOption.objects.filter(
117 project=project,
118 key='%s:alert' % conf_key,
119 )
120 )
121
122 disabled = set(u for u, v in six.iteritems(alert_settings) if v == 0)
123
124 member_set = set(project.member_set.exclude(
125 user__in=disabled,
126 ).values_list('user', flat=True))
127
128 # determine members default settings
129 members_to_check = set(u for u in member_set if u not in alert_settings)
130 if members_to_check:
131 disabled = set(UserOption.objects.filter(
132 key='subscribe_by_default',
133 value='0',
134 user__in=members_to_check,
135 ).values_list('user', flat=True))
136 member_set = [x for x in member_set if x not in disabled]
137
138 return member_set
139
140 def __is_rate_limited(self, group, event):
141 return ratelimiter.is_limited(
142 project=group.project,
143 key=self.get_conf_key(),
144 limit=10,
145 )
146
147 def is_configured(self, project):
148 raise NotImplementedError
149
150 def should_notify(self, group, event):
151 project = event.project
152 if not self.is_configured(project=project):
153 return False
154
155 if group.is_muted():
156 return False
157
158 # If the plugin doesn't support digests or they are not enabled,
159 # perform rate limit checks to support backwards compatibility with
160 # older plugins.
161 if not (hasattr(self, 'notify_digest') and digests.enabled(project)) and self.__is_rate_limited(group, event):
162 logger = logging.getLogger('sentry.plugins.{0}'.format(self.get_conf_key()))
163 logger.info('notification.rate_limited', extra={'project_id': project.id})
164 return False
165
166 return True
167
168 def test_configuration(self, project):
169 from sentry.utils.samples import create_sample_event
170 event = create_sample_event(project, platform='python')
171 notification = Notification(event=event)
172 return self.notify(notification)
173
174 def get_notification_doc_html(self, **kwargs):
175 return ""
176
177
178 # Backwards-compatibility
179 NotifyConfigurationForm = NotificationConfigurationForm
180 NotifyPlugin = NotificationPlugin
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/plugins/bases/notify.py b/src/sentry/plugins/bases/notify.py
--- a/src/sentry/plugins/bases/notify.py
+++ b/src/sentry/plugins/bases/notify.py
@@ -64,19 +64,28 @@
def rule_notify(self, event, futures):
rules = []
+ extra = {
+ 'event_id': event.id,
+ 'group_id': event.group_id,
+ 'plugin': self.slug,
+ }
+ log_event = 'dispatched'
for future in futures:
rules.append(future.rule)
+ extra['rule_id'] = future.rule.id
if not future.kwargs:
continue
raise NotImplementedError('The default behavior for notification de-duplication does not support args')
project = event.group.project
+ extra['project_id'] = project.id
if hasattr(self, 'notify_digest') and digests.enabled(project):
get_digest_option = lambda key: ProjectOption.objects.get_value(
project,
get_digest_option_key(self.get_conf_key(), key),
)
digest_key = unsplit_key(self, event.group.project)
+ extra['digest_key'] = digest_key
immediate_delivery = digests.add(
digest_key,
event_to_record(event, rules),
@@ -85,6 +94,8 @@
)
if immediate_delivery:
deliver_digest.delay(digest_key)
+ else:
+ log_event = 'digested'
else:
notification = Notification(
@@ -92,11 +103,8 @@
rules=rules,
)
self.notify(notification)
- self.logger.info('notification.dispatched', extra={
- 'event_id': event.id,
- 'plugin': self.slug,
- 'rule_id': rules[0].id if rules else None,
- })
+
+ self.logger.info('notification.%s' % log_event, extra=extra)
def notify_users(self, group, event, fail_silently=False):
raise NotImplementedError
| {"golden_diff": "diff --git a/src/sentry/plugins/bases/notify.py b/src/sentry/plugins/bases/notify.py\n--- a/src/sentry/plugins/bases/notify.py\n+++ b/src/sentry/plugins/bases/notify.py\n@@ -64,19 +64,28 @@\n \n def rule_notify(self, event, futures):\n rules = []\n+ extra = {\n+ 'event_id': event.id,\n+ 'group_id': event.group_id,\n+ 'plugin': self.slug,\n+ }\n+ log_event = 'dispatched'\n for future in futures:\n rules.append(future.rule)\n+ extra['rule_id'] = future.rule.id\n if not future.kwargs:\n continue\n raise NotImplementedError('The default behavior for notification de-duplication does not support args')\n \n project = event.group.project\n+ extra['project_id'] = project.id\n if hasattr(self, 'notify_digest') and digests.enabled(project):\n get_digest_option = lambda key: ProjectOption.objects.get_value(\n project,\n get_digest_option_key(self.get_conf_key(), key),\n )\n digest_key = unsplit_key(self, event.group.project)\n+ extra['digest_key'] = digest_key\n immediate_delivery = digests.add(\n digest_key,\n event_to_record(event, rules),\n@@ -85,6 +94,8 @@\n )\n if immediate_delivery:\n deliver_digest.delay(digest_key)\n+ else:\n+ log_event = 'digested'\n \n else:\n notification = Notification(\n@@ -92,11 +103,8 @@\n rules=rules,\n )\n self.notify(notification)\n- self.logger.info('notification.dispatched', extra={\n- 'event_id': event.id,\n- 'plugin': self.slug,\n- 'rule_id': rules[0].id if rules else None,\n- })\n+\n+ self.logger.info('notification.%s' % log_event, extra=extra)\n \n def notify_users(self, group, event, fail_silently=False):\n raise NotImplementedError\n", "issue": "Parse the email template contexts for identifiers and log them along with the mail.queued event.\nWe currently log types of mail sent but do not add any context along with that, so it's easy to see that someone got an email about an event, but since the `event_id` is not passed along, the log event is somewhat useless.\n\n", "before_files": [{"content": "\"\"\"\nsentry.plugins.bases.notify\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import, print_function\n\nimport logging\nimport six\n\nfrom django import forms\n\nfrom sentry.app import (\n digests,\n ratelimiter,\n)\nfrom sentry.digests import get_option_key as get_digest_option_key\nfrom sentry.digests.notifications import (\n event_to_record,\n unsplit_key,\n)\nfrom sentry.plugins import Notification, Plugin\nfrom sentry.models import (\n ProjectOption,\n UserOption,\n)\nfrom sentry.tasks.digests import deliver_digest\n\n\nclass NotificationConfigurationForm(forms.Form):\n pass\n\n\nclass BaseNotificationUserOptionsForm(forms.Form):\n def __init__(self, plugin, user, *args, **kwargs):\n self.plugin = plugin\n self.user = user\n super(BaseNotificationUserOptionsForm, self).__init__(*args, **kwargs)\n\n def get_title(self):\n return self.plugin.get_conf_title()\n\n def get_description(self):\n return \"\"\n\n def save(self):\n raise NotImplementedError\n\n\nclass NotificationPlugin(Plugin):\n description = ('Notify project members when a new event is seen for the first time, or when an '\n 'already resolved event has changed back to unresolved.')\n # site_conf_form = NotificationConfigurationForm\n project_conf_form = NotificationConfigurationForm\n\n def get_plugin_type(self):\n return 'notification'\n\n def notify(self, notification):\n event = notification.event\n return 
self.notify_users(event.group, event)\n\n def rule_notify(self, event, futures):\n rules = []\n for future in futures:\n rules.append(future.rule)\n if not future.kwargs:\n continue\n raise NotImplementedError('The default behavior for notification de-duplication does not support args')\n\n project = event.group.project\n if hasattr(self, 'notify_digest') and digests.enabled(project):\n get_digest_option = lambda key: ProjectOption.objects.get_value(\n project,\n get_digest_option_key(self.get_conf_key(), key),\n )\n digest_key = unsplit_key(self, event.group.project)\n immediate_delivery = digests.add(\n digest_key,\n event_to_record(event, rules),\n increment_delay=get_digest_option('increment_delay'),\n maximum_delay=get_digest_option('maximum_delay'),\n )\n if immediate_delivery:\n deliver_digest.delay(digest_key)\n\n else:\n notification = Notification(\n event=event,\n rules=rules,\n )\n self.notify(notification)\n self.logger.info('notification.dispatched', extra={\n 'event_id': event.id,\n 'plugin': self.slug,\n 'rule_id': rules[0].id if rules else None,\n })\n\n def notify_users(self, group, event, fail_silently=False):\n raise NotImplementedError\n\n def notify_about_activity(self, activity):\n pass\n\n def get_sendable_users(self, project):\n \"\"\"\n Return a collection of user IDs that are eligible to receive\n notifications for the provided project.\n \"\"\"\n conf_key = self.get_conf_key()\n\n alert_settings = dict(\n (o.user_id, int(o.value))\n for o in UserOption.objects.filter(\n project=project,\n key='%s:alert' % conf_key,\n )\n )\n\n disabled = set(u for u, v in six.iteritems(alert_settings) if v == 0)\n\n member_set = set(project.member_set.exclude(\n user__in=disabled,\n ).values_list('user', flat=True))\n\n # determine members default settings\n members_to_check = set(u for u in member_set if u not in alert_settings)\n if members_to_check:\n disabled = set(UserOption.objects.filter(\n key='subscribe_by_default',\n value='0',\n user__in=members_to_check,\n ).values_list('user', flat=True))\n member_set = [x for x in member_set if x not in disabled]\n\n return member_set\n\n def __is_rate_limited(self, group, event):\n return ratelimiter.is_limited(\n project=group.project,\n key=self.get_conf_key(),\n limit=10,\n )\n\n def is_configured(self, project):\n raise NotImplementedError\n\n def should_notify(self, group, event):\n project = event.project\n if not self.is_configured(project=project):\n return False\n\n if group.is_muted():\n return False\n\n # If the plugin doesn't support digests or they are not enabled,\n # perform rate limit checks to support backwards compatibility with\n # older plugins.\n if not (hasattr(self, 'notify_digest') and digests.enabled(project)) and self.__is_rate_limited(group, event):\n logger = logging.getLogger('sentry.plugins.{0}'.format(self.get_conf_key()))\n logger.info('notification.rate_limited', extra={'project_id': project.id})\n return False\n\n return True\n\n def test_configuration(self, project):\n from sentry.utils.samples import create_sample_event\n event = create_sample_event(project, platform='python')\n notification = Notification(event=event)\n return self.notify(notification)\n\n def get_notification_doc_html(self, **kwargs):\n return \"\"\n\n\n# Backwards-compatibility\nNotifyConfigurationForm = NotificationConfigurationForm\nNotifyPlugin = NotificationPlugin\n", "path": "src/sentry/plugins/bases/notify.py"}], "after_files": [{"content": "\"\"\"\nsentry.plugins.bases.notify\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: 
(c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import, print_function\n\nimport logging\nimport six\n\nfrom django import forms\n\nfrom sentry.app import (\n digests,\n ratelimiter,\n)\nfrom sentry.digests import get_option_key as get_digest_option_key\nfrom sentry.digests.notifications import (\n event_to_record,\n unsplit_key,\n)\nfrom sentry.plugins import Notification, Plugin\nfrom sentry.models import (\n ProjectOption,\n UserOption,\n)\nfrom sentry.tasks.digests import deliver_digest\n\n\nclass NotificationConfigurationForm(forms.Form):\n pass\n\n\nclass BaseNotificationUserOptionsForm(forms.Form):\n def __init__(self, plugin, user, *args, **kwargs):\n self.plugin = plugin\n self.user = user\n super(BaseNotificationUserOptionsForm, self).__init__(*args, **kwargs)\n\n def get_title(self):\n return self.plugin.get_conf_title()\n\n def get_description(self):\n return \"\"\n\n def save(self):\n raise NotImplementedError\n\n\nclass NotificationPlugin(Plugin):\n description = ('Notify project members when a new event is seen for the first time, or when an '\n 'already resolved event has changed back to unresolved.')\n # site_conf_form = NotificationConfigurationForm\n project_conf_form = NotificationConfigurationForm\n\n def get_plugin_type(self):\n return 'notification'\n\n def notify(self, notification):\n event = notification.event\n return self.notify_users(event.group, event)\n\n def rule_notify(self, event, futures):\n rules = []\n extra = {\n 'event_id': event.id,\n 'group_id': event.group_id,\n 'plugin': self.slug,\n }\n log_event = 'dispatched'\n for future in futures:\n rules.append(future.rule)\n extra['rule_id'] = future.rule.id\n if not future.kwargs:\n continue\n raise NotImplementedError('The default behavior for notification de-duplication does not support args')\n\n project = event.group.project\n extra['project_id'] = project.id\n if hasattr(self, 'notify_digest') and digests.enabled(project):\n get_digest_option = lambda key: ProjectOption.objects.get_value(\n project,\n get_digest_option_key(self.get_conf_key(), key),\n )\n digest_key = unsplit_key(self, event.group.project)\n extra['digest_key'] = digest_key\n immediate_delivery = digests.add(\n digest_key,\n event_to_record(event, rules),\n increment_delay=get_digest_option('increment_delay'),\n maximum_delay=get_digest_option('maximum_delay'),\n )\n if immediate_delivery:\n deliver_digest.delay(digest_key)\n else:\n log_event = 'digested'\n\n else:\n notification = Notification(\n event=event,\n rules=rules,\n )\n self.notify(notification)\n\n self.logger.info('notification.%s' % log_event, extra=extra)\n\n def notify_users(self, group, event, fail_silently=False):\n raise NotImplementedError\n\n def notify_about_activity(self, activity):\n pass\n\n def get_sendable_users(self, project):\n \"\"\"\n Return a collection of user IDs that are eligible to receive\n notifications for the provided project.\n \"\"\"\n conf_key = self.get_conf_key()\n\n alert_settings = dict(\n (o.user_id, int(o.value))\n for o in UserOption.objects.filter(\n project=project,\n key='%s:alert' % conf_key,\n )\n )\n\n disabled = set(u for u, v in six.iteritems(alert_settings) if v == 0)\n\n member_set = set(project.member_set.exclude(\n user__in=disabled,\n ).values_list('user', flat=True))\n\n # determine members default settings\n members_to_check = set(u for u in member_set if u not in alert_settings)\n if members_to_check:\n disabled = 
set(UserOption.objects.filter(\n key='subscribe_by_default',\n value='0',\n user__in=members_to_check,\n ).values_list('user', flat=True))\n member_set = [x for x in member_set if x not in disabled]\n\n return member_set\n\n def __is_rate_limited(self, group, event):\n return ratelimiter.is_limited(\n project=group.project,\n key=self.get_conf_key(),\n limit=10,\n )\n\n def is_configured(self, project):\n raise NotImplementedError\n\n def should_notify(self, group, event):\n project = event.project\n if not self.is_configured(project=project):\n return False\n\n if group.is_muted():\n return False\n\n # If the plugin doesn't support digests or they are not enabled,\n # perform rate limit checks to support backwards compatibility with\n # older plugins.\n if not (hasattr(self, 'notify_digest') and digests.enabled(project)) and self.__is_rate_limited(group, event):\n logger = logging.getLogger('sentry.plugins.{0}'.format(self.get_conf_key()))\n logger.info('notification.rate_limited', extra={'project_id': project.id})\n return False\n\n return True\n\n def test_configuration(self, project):\n from sentry.utils.samples import create_sample_event\n event = create_sample_event(project, platform='python')\n notification = Notification(event=event)\n return self.notify(notification)\n\n def get_notification_doc_html(self, **kwargs):\n return \"\"\n\n\n# Backwards-compatibility\nNotifyConfigurationForm = NotificationConfigurationForm\nNotifyPlugin = NotificationPlugin\n", "path": "src/sentry/plugins/bases/notify.py"}]} | 1,936 | 445 |
gh_patches_debug_895 | rasdani/github-patches | git_diff | ESMCI__cime-993 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scripts_regression_tests.py O_TestTestScheduler
This test fails with the error `SystemExit: ERROR: Leftover threads?` when run as part of the full scripts_regression_tests.py, but it passes when run using ctest or as an individual test.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/code_checker.py`
Content:
```
1 """
2 Libraries for checking python code with pylint
3 """
4
5 from CIME.XML.standard_module_setup import *
6
7 from CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable
8
9 from multiprocessing.dummy import Pool as ThreadPool
10 from distutils.spawn import find_executable
11
12 logger = logging.getLogger(__name__)
13
14 ###############################################################################
15 def _run_pylint(on_file, interactive):
16 ###############################################################################
17 pylint = find_executable("pylint")
18
19 cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement"
20 cimeroot = get_cime_root()
21
22 if "scripts/Tools" in on_file:
23 cmd_options +=",relative-import"
24
25 # add init-hook option
26 cmd_options += " --init-hook='sys.path.extend((\"%s\",\"%s\"))'"%\
27 (os.path.join(cimeroot,"utils","python"),
28 os.path.join(cimeroot,"scripts","Tools"))
29
30 cmd = "%s %s %s" % (pylint, cmd_options, on_file)
31 logger.debug("pylint command is %s"%cmd)
32 stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)
33 if stat != 0:
34 if interactive:
35 logger.info("File %s has pylint problems, please fix\n Use command: %s" % (on_file, cmd))
36 logger.info(out + "\n" + err)
37 return (on_file, out + "\n" + err)
38 else:
39 if interactive:
40 logger.info("File %s has no pylint problems" % on_file)
41 return (on_file, "")
42
43 ###############################################################################
44 def _matches(file_path, file_ends):
45 ###############################################################################
46 for file_end in file_ends:
47 if file_path.endswith(file_end):
48 return True
49
50 return False
51
52 ###############################################################################
53 def _should_pylint_skip(filepath):
54 ###############################################################################
55 # TODO - get rid of this
56 list_of_directories_to_ignore = ("xmlconvertors", "pointclm", "point_clm", "tools", "machines", "apidocs", "unit_test")
57 for dir_to_skip in list_of_directories_to_ignore:
58 if dir_to_skip in filepath:
59 return True
60
61 return False
62
63 ###############################################################################
64 def get_all_checkable_files():
65 ###############################################################################
66 cimeroot = get_cime_root()
67 all_git_files = run_cmd_no_fail("git ls-files --full-name %s" % cimeroot, verbose=False).splitlines()
68 files_to_test = [item for item in all_git_files
69 if ((item.endswith(".py") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]
70 return files_to_test
71
72 ###############################################################################
73 def check_code(files, num_procs=10, interactive=False):
74 ###############################################################################
75 """
76 Check all python files in the given directory
77
78 Returns True if all files had no problems
79 """
80 # Get list of files to check, we look to see if user-provided file argument
81 # is a valid file, if not, we search the repo for a file with similar name.
82 repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()
83 files_to_check = []
84 if files:
85 for filearg in files:
86 if os.path.exists(filearg):
87 files_to_check.append(os.path.abspath(filearg))
88 else:
89 found = False
90 for repo_file in repo_files:
91 if repo_file.endswith(filearg):
92 found = True
93 files_to_check.append(repo_file) # could have multiple matches
94
95 if not found:
96 logger.warning("Could not find file matching argument '%s'" % filearg)
97 else:
98 # Check every python file
99 files_to_check = get_all_checkable_files()
100
101 expect(len(files_to_check) > 0, "No matching files found")
102
103 # No point in using more threads than files
104 if len(files_to_check) < num_procs:
105 num_procs = len(files_to_check)
106
107 pool = ThreadPool(num_procs)
108 results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)
109 return dict(results)
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/python/CIME/code_checker.py b/utils/python/CIME/code_checker.py
--- a/utils/python/CIME/code_checker.py
+++ b/utils/python/CIME/code_checker.py
@@ -106,4 +106,6 @@
pool = ThreadPool(num_procs)
results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)
+ pool.close()
+ pool.join()
return dict(results)
| {"golden_diff": "diff --git a/utils/python/CIME/code_checker.py b/utils/python/CIME/code_checker.py\n--- a/utils/python/CIME/code_checker.py\n+++ b/utils/python/CIME/code_checker.py\n@@ -106,4 +106,6 @@\n \n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n+ pool.close()\n+ pool.join()\n return dict(results)\n", "issue": "scripts_regression_tests.py O_TestTestScheduler\nThis test fails with error SystemExit: ERROR: Leftover threads?\r\nwhen run as part of the full scripts_regression_tests.py \r\nbut passes when run using ctest or when run as an individual test.\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable\n\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cimeroot = get_cime_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"utils\",\"python\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\", \"unit_test\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip in filepath:\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files --full-name %s\" % cimeroot, verbose=False).splitlines()\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n return 
files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()\n files_to_check = []\n if files:\n for filearg in files:\n if os.path.exists(filearg):\n files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n return dict(results)\n", "path": "utils/python/CIME/code_checker.py"}], "after_files": [{"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable\n\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cimeroot = get_cime_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"utils\",\"python\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of 
this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\", \"unit_test\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip in filepath:\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files --full-name %s\" % cimeroot, verbose=False).splitlines()\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n return files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()\n files_to_check = []\n if files:\n for filearg in files:\n if os.path.exists(filearg):\n files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n pool.close()\n pool.join()\n return dict(results)\n", "path": "utils/python/CIME/code_checker.py"}]} | 1,475 | 98 |
gh_patches_debug_19899 | rasdani/github-patches | git_diff | modin-project__modin-6123 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`pd.read_feather(file)` actually reads the file twice!
There's some really strange logic in our Feather dispatcher, introduced a long time ago, that reads the whole Feather file just to extract its column names:
https://github.com/modin-project/modin/blob/632d724adb013d6f4d18aa2a4f2faa9325d324e8/modin/core/io/column_stores/feather_dispatcher.py#L55-L64
Note that `.build_query_compiler(..)` launches parsing kernels that would also read the same Feather file, but in a distributed way.
We should be using a smarter way to extract the Feather file's metadata: https://issues.apache.org/jira/browse/ARROW-10344
--- END ISSUE ---
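For reference, a minimal sketch of the metadata-only read that ARROW-10344's resolution makes possible: open the Arrow IPC footer and pull the schema without materializing any record batches. This assumes a Feather v2 file, which is the Arrow IPC file format:

```python
import pyarrow.ipc as ipc

def feather_column_names(path):
    # Only the IPC footer/metadata is parsed here; no record batches are
    # materialized, unlike pyarrow.feather.read_feather(), which loads the
    # whole table just to expose df.columns.
    with open(path, "rb") as f:
        return ipc.open_file(f).schema.names
```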
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modin/core/io/column_stores/feather_dispatcher.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """Module houses `FeatherDispatcher` class, that is used for reading `.feather` files."""
15
16 from modin.core.io.column_stores.column_store_dispatcher import ColumnStoreDispatcher
17 from modin.utils import import_optional_dependency
18 from modin.core.io.file_dispatcher import OpenFile
19
20
21 class FeatherDispatcher(ColumnStoreDispatcher):
22 """Class handles utils for reading `.feather` files."""
23
24 @classmethod
25 def _read(cls, path, columns=None, **kwargs):
26 """
27 Read data from the file path, returning a query compiler.
28
29 Parameters
30 ----------
31 path : str or file-like object
32 The filepath of the feather file.
33 columns : array-like, optional
34 Columns to read from file. If not provided, all columns are read.
35 **kwargs : dict
36 `read_feather` function kwargs.
37
38 Returns
39 -------
40 BaseQueryCompiler
41 Query compiler with imported data for further processing.
42
43 Notes
44 -----
45 `PyArrow` engine and local files only are supported for now,
46 multi threading is set to False by default.
47 PyArrow feather is used. Please refer to the documentation here
48 https://arrow.apache.org/docs/python/api.html#feather-format
49 """
50 path = cls.get_path(path)
51 if columns is None:
52 import_optional_dependency(
53 "pyarrow", "pyarrow is required to read feather files."
54 )
55 from pyarrow.feather import read_feather
56
57 with OpenFile(
58 path,
59 **(kwargs.get("storage_options", None) or {}),
60 ) as file:
61 df = read_feather(file)
62 # pyarrow.feather.read_feather doesn't support columns as pandas.Index
63 columns = list(df.columns)
64 return cls.build_query_compiler(path, columns, use_threads=False)
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modin/core/io/column_stores/feather_dispatcher.py b/modin/core/io/column_stores/feather_dispatcher.py
--- a/modin/core/io/column_stores/feather_dispatcher.py
+++ b/modin/core/io/column_stores/feather_dispatcher.py
@@ -52,13 +52,15 @@
import_optional_dependency(
"pyarrow", "pyarrow is required to read feather files."
)
- from pyarrow.feather import read_feather
+ from pyarrow import ipc
with OpenFile(
path,
**(kwargs.get("storage_options", None) or {}),
) as file:
- df = read_feather(file)
- # pyarrow.feather.read_feather doesn't support columns as pandas.Index
- columns = list(df.columns)
+ # Opens the file to extract its metadata
+ reader = ipc.open_file(file)
+ # TODO: pyarrow's schema contains much more metadata than just column names, it also
+ # has dtypes and index information that we could use when building a dataframe
+ columns = reader.schema.names
return cls.build_query_compiler(path, columns, use_threads=False)
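As the TODO in the patch notes, the same schema object carries more than column names. A hedged sketch of what else could be pulled from it, using pyarrow's `Schema` properties:

```python
from pyarrow import ipc

def feather_metadata(path):
    with open(path, "rb") as f:
        schema = ipc.open_file(f).schema
        return {
            "columns": schema.names,           # what the patch uses today
            "types": schema.types,             # Arrow type per column
            "pandas": schema.pandas_metadata,  # pandas index/dtype info, if present
        }
```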
| {"golden_diff": "diff --git a/modin/core/io/column_stores/feather_dispatcher.py b/modin/core/io/column_stores/feather_dispatcher.py\n--- a/modin/core/io/column_stores/feather_dispatcher.py\n+++ b/modin/core/io/column_stores/feather_dispatcher.py\n@@ -52,13 +52,15 @@\n import_optional_dependency(\n \"pyarrow\", \"pyarrow is required to read feather files.\"\n )\n- from pyarrow.feather import read_feather\n+ from pyarrow import ipc\n \n with OpenFile(\n path,\n **(kwargs.get(\"storage_options\", None) or {}),\n ) as file:\n- df = read_feather(file)\n- # pyarrow.feather.read_feather doesn't support columns as pandas.Index\n- columns = list(df.columns)\n+ # Opens the file to extract its metadata\n+ reader = ipc.open_file(file)\n+ # TODO: pyarrow's schema contains much more metadata than just column names, it also\n+ # has dtypes and index information that we could use when building a dataframe\n+ columns = reader.schema.names\n return cls.build_query_compiler(path, columns, use_threads=False)\n", "issue": "`pd.read_feather(file)` actually reads the file twice!\nThere's a really strange logic in our Feather dispatcher introduced a long time ago that reads the whole Feather file just to extract its column names:\r\nhttps://github.com/modin-project/modin/blob/632d724adb013d6f4d18aa2a4f2faa9325d324e8/modin/core/io/column_stores/feather_dispatcher.py#L55-L64\r\n\r\nNote that `.build_query_compiler(..)` launch parsing kernels that would also be reading the same parquet file but in a distributed way.\r\n\r\nWe should be using more smarter way to extract feather's metadata: https://issues.apache.org/jira/browse/ARROW-10344\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"Module houses `FeatherDispatcher` class, that is used for reading `.feather` files.\"\"\"\n\nfrom modin.core.io.column_stores.column_store_dispatcher import ColumnStoreDispatcher\nfrom modin.utils import import_optional_dependency\nfrom modin.core.io.file_dispatcher import OpenFile\n\n\nclass FeatherDispatcher(ColumnStoreDispatcher):\n \"\"\"Class handles utils for reading `.feather` files.\"\"\"\n\n @classmethod\n def _read(cls, path, columns=None, **kwargs):\n \"\"\"\n Read data from the file path, returning a query compiler.\n\n Parameters\n ----------\n path : str or file-like object\n The filepath of the feather file.\n columns : array-like, optional\n Columns to read from file. If not provided, all columns are read.\n **kwargs : dict\n `read_feather` function kwargs.\n\n Returns\n -------\n BaseQueryCompiler\n Query compiler with imported data for further processing.\n\n Notes\n -----\n `PyArrow` engine and local files only are supported for now,\n multi threading is set to False by default.\n PyArrow feather is used. 
Please refer to the documentation here\n https://arrow.apache.org/docs/python/api.html#feather-format\n \"\"\"\n path = cls.get_path(path)\n if columns is None:\n import_optional_dependency(\n \"pyarrow\", \"pyarrow is required to read feather files.\"\n )\n from pyarrow.feather import read_feather\n\n with OpenFile(\n path,\n **(kwargs.get(\"storage_options\", None) or {}),\n ) as file:\n df = read_feather(file)\n # pyarrow.feather.read_feather doesn't support columns as pandas.Index\n columns = list(df.columns)\n return cls.build_query_compiler(path, columns, use_threads=False)\n", "path": "modin/core/io/column_stores/feather_dispatcher.py"}], "after_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"Module houses `FeatherDispatcher` class, that is used for reading `.feather` files.\"\"\"\n\nfrom modin.core.io.column_stores.column_store_dispatcher import ColumnStoreDispatcher\nfrom modin.utils import import_optional_dependency\nfrom modin.core.io.file_dispatcher import OpenFile\n\n\nclass FeatherDispatcher(ColumnStoreDispatcher):\n \"\"\"Class handles utils for reading `.feather` files.\"\"\"\n\n @classmethod\n def _read(cls, path, columns=None, **kwargs):\n \"\"\"\n Read data from the file path, returning a query compiler.\n\n Parameters\n ----------\n path : str or file-like object\n The filepath of the feather file.\n columns : array-like, optional\n Columns to read from file. If not provided, all columns are read.\n **kwargs : dict\n `read_feather` function kwargs.\n\n Returns\n -------\n BaseQueryCompiler\n Query compiler with imported data for further processing.\n\n Notes\n -----\n `PyArrow` engine and local files only are supported for now,\n multi threading is set to False by default.\n PyArrow feather is used. Please refer to the documentation here\n https://arrow.apache.org/docs/python/api.html#feather-format\n \"\"\"\n path = cls.get_path(path)\n if columns is None:\n import_optional_dependency(\n \"pyarrow\", \"pyarrow is required to read feather files.\"\n )\n from pyarrow import ipc\n\n with OpenFile(\n path,\n **(kwargs.get(\"storage_options\", None) or {}),\n ) as file:\n # Opens the file to extract its metadata\n reader = ipc.open_file(file)\n # TODO: pyarrow's schema contains much more metadata than just column names, it also\n # has dtypes and index information that we could use when building a dataframe\n columns = reader.schema.names\n return cls.build_query_compiler(path, columns, use_threads=False)\n", "path": "modin/core/io/column_stores/feather_dispatcher.py"}]} | 1,097 | 258 |
gh_patches_debug_13994 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-2057 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Tesseract checks function doesn't recognize Chinese language (and some other languages) correctly
### Description
Good morning, I am a new user of paperless-ngx. I tried to start a container using the lazy installation script.
`bash -c "$(curl -L https://raw.githubusercontent.com/paperless-ngx/paperless-ngx/main/install-paperless-ngx.sh)"`
**1. Problem Description**: when I set `PAPERLESS_OCR_LANGUAGE=eng+chi-sim`, the webserver cannot start correctly because it fails the tesseract language checks.
**2. Reason of the problem**:
In `docker/docker-entrypoint.sh`, the tesseract language packages are installed under the package name `pkg="tesseract-ocr-$lang"`; the naming convention can be seen in [Ubuntu tesseract-ocr-all](https://packages.ubuntu.com/focal/tesseract-ocr-all), for example. So to install the Chinese Simplified package, you need to specify `chi-sim` in `PAPERLESS_OCR_LANGUAGES`.
However, when paperless-ngx checks whether a tesseract language package exists, it calls `tesseract --list-langs`.
I logged into the web server container using `docker exec -it paperless-webserver-1 bash`.
```
root@c95f0258d544:/usr/src/paperless/src# tesseract --list-langs
List of available languages (7):
chi_sim
deu
eng
fra
ita
osd
spa
```
Here the `chi-sim` package appears as `chi_sim`.
**3. Solution**: for users of languages such as Chinese, Korean, or Arabic:
`PAPERLESS_OCR_LANGUAGES`: this env parameter tells which tesseract-ocr packages to install.
`PAPERLESS_OCR_LANGUAGE`: this env parameter tells which language from `tesseract --list-langs` will be used for OCR.
In this Chinese Simplified language situation, after setting:
```
PAPERLESS_OCR_LANGUAGE=eng+chi_sim
PAPERLESS_OCR_LANGUAGES=eng chi-sim
```
the webserver container could start correctly.
**4. Potential Update**
It would be better to make the output of the `get_tesseract_langs()` function (in `paperless-ngx/src/paperless_tesseract/checks.py`) consistent with the package names used by `apt-get install tesseract-ocr-$lang`.
### Steps to reproduce
1. Use the lazy installer `bash -c "$(curl -L https://raw.githubusercontent.com/paperless-ngx/paperless-ngx/main/install-paperless-ngx.sh)"` or set `PAPERLESS_OCR_LANGUAGE=eng+chi-sim` and `PAPERLESS_OCR_LANGUAGES=eng chi-sim`
2. The server cannot pass the language checks
### Webserver logs
```bash
paperless-webserver-1 | Paperless-ngx docker container starting...
paperless-webserver-1 | Installing languages...
paperless-webserver-1 | Hit:1 http://deb.debian.org/debian bullseye InRelease
paperless-webserver-1 | Hit:2 http://deb.debian.org/debian-security bullseye-security InRelease
paperless-webserver-1 | Hit:3 http://deb.debian.org/debian bullseye-updates InRelease
paperless-webserver-1 | Reading package lists...
paperless-webserver-1 | Package tesseract-ocr-eng already installed!
paperless-webserver-1 | Package tesseract-ocr-chi-sim already installed!
paperless-webserver-1 | Creating directory /tmp/paperless
paperless-webserver-1 | Adjusting permissions of paperless files. This may take a while.
paperless-webserver-1 | Waiting for PostgreSQL to start...
paperless-webserver-1 | Waiting for Redis...
paperless-webserver-1 | Connected to Redis broker.
paperless-webserver-1 | Apply database migrations...
paperless-webserver-1 | SystemCheckError: System check identified some issues:
paperless-webserver-1 |
paperless-webserver-1 | ERRORS:
paperless-
```
### Browser logs
_No response_
### Paperless-ngx version
1.9.2
### Host OS
Ubuntu 22.04
### Installation method
Docker - official image
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
--- END ISSUE ---
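The normalization the issue asks for is small; a sketch that mirrors the patch further below, assuming `tesseract` is on `PATH`:

```python
import shutil
import subprocess

def get_tesseract_langs():
    # `tesseract --list-langs` prints a header line, then one code per line
    # using underscores (chi_sim); Debian/Ubuntu packages use dashes
    # (tesseract-ocr-chi-sim), so convert to keep the two spellings aligned.
    proc = subprocess.run(
        [shutil.which("tesseract"), "--list-langs"],
        capture_output=True,
    )
    lines = proc.stdout.decode("utf8", errors="ignore").strip().split("\n")[1:]
    return [lang.replace("_", "-") for lang in lines]

# With the languages from the report above, this would return:
# ['chi-sim', 'deu', 'eng', 'fra', 'ita', 'osd', 'spa']
```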
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/paperless_tesseract/checks.py`
Content:
```
1 import subprocess
2
3 from django.conf import settings
4 from django.core.checks import Error
5 from django.core.checks import register
6 from django.core.checks import Warning
7
8
9 def get_tesseract_langs():
10 with subprocess.Popen(["tesseract", "--list-langs"], stdout=subprocess.PIPE) as p:
11 stdout, stderr = p.communicate()
12
13 return stdout.decode().strip().split("\n")[1:]
14
15
16 @register()
17 def check_default_language_available(app_configs, **kwargs):
18 installed_langs = get_tesseract_langs()
19
20 if not settings.OCR_LANGUAGE:
21 return [
22 Warning(
23 "No OCR language has been specified with PAPERLESS_OCR_LANGUAGE. "
24 "This means that tesseract will fallback to english.",
25 ),
26 ]
27
28 specified_langs = settings.OCR_LANGUAGE.split("+")
29
30 for lang in specified_langs:
31 if lang not in installed_langs:
32 return [
33 Error(
34 f"The selected ocr language {lang} is "
35 f"not installed. Paperless cannot OCR your documents "
36 f"without it. Please fix PAPERLESS_OCR_LANGUAGE.",
37 ),
38 ]
39
40 return []
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/paperless_tesseract/checks.py b/src/paperless_tesseract/checks.py
--- a/src/paperless_tesseract/checks.py
+++ b/src/paperless_tesseract/checks.py
@@ -1,3 +1,4 @@
+import shutil
import subprocess
from django.conf import settings
@@ -7,10 +8,16 @@
def get_tesseract_langs():
- with subprocess.Popen(["tesseract", "--list-langs"], stdout=subprocess.PIPE) as p:
- stdout, stderr = p.communicate()
+ proc = subprocess.run(
+ [shutil.which("tesseract"), "--list-langs"],
+ capture_output=True,
+ )
- return stdout.decode().strip().split("\n")[1:]
+ # Decode bytes to string, split on newlines, trim out the header
+ proc_lines = proc.stdout.decode("utf8", errors="ignore").strip().split("\n")[1:]
+
+ # Replace _ with - to convert two part languages to the expected code
+ return [x.replace("_", "-") for x in proc_lines]
@register()
| {"golden_diff": "diff --git a/src/paperless_tesseract/checks.py b/src/paperless_tesseract/checks.py\n--- a/src/paperless_tesseract/checks.py\n+++ b/src/paperless_tesseract/checks.py\n@@ -1,3 +1,4 @@\n+import shutil\n import subprocess\n \n from django.conf import settings\n@@ -7,10 +8,16 @@\n \n \n def get_tesseract_langs():\n- with subprocess.Popen([\"tesseract\", \"--list-langs\"], stdout=subprocess.PIPE) as p:\n- stdout, stderr = p.communicate()\n+ proc = subprocess.run(\n+ [shutil.which(\"tesseract\"), \"--list-langs\"],\n+ capture_output=True,\n+ )\n \n- return stdout.decode().strip().split(\"\\n\")[1:]\n+ # Decode bytes to string, split on newlines, trim out the header\n+ proc_lines = proc.stdout.decode(\"utf8\", errors=\"ignore\").strip().split(\"\\n\")[1:]\n+\n+ # Replace _ with - to convert two part languages to the expected code\n+ return [x.replace(\"_\", \"-\") for x in proc_lines]\n \n \n @register()\n", "issue": "[BUG] Tesseract checks function doesn't recognize Chinese language (and some other languages) correctly\n### Description\r\n\r\nGood morning, I am new user of paperless-ngx. I tried to start a container using the lazy installation scripts.\r\n`bash -c \"$(curl -L https://raw.githubusercontent.com/paperless-ngx/paperless-ngx/main/install-paperless-ngx.sh)\"`\r\n\r\n**1. Problem Description**: when I set `PAPERLESS_OCR_LANGUAGE=eng+chi-sim`, the webserver cannot start correctly because it fails the tesseract language checks.\r\n\r\n**2. Reason of the problem**: \r\nIn `docker/docker-entrypoint.sh`, the tesseract language packages are installed through the following command: `pkg=\"tesseract-ocr-$lang\"`. And the naming could be found in [Ubuntu tesseract-ocr-all](https://packages.ubuntu.com/focal/tesseract-ocr-all) for example. So to install the Chinese Simplified package, you need to specify `chi-sim` in `PAPERLESS_OCR_LANGUAGES`.\r\n\r\nHowever, when `-ngx` checks if the tesseract language package exists, it calls `tesseract --list-langs`. \r\nI logged in the web server container using `docker exec -it paperless-webserver-1 bash`.\r\n```\r\nroot@c95f0258d544:/usr/src/paperless/src# tesseract --list-langs\r\nList of available languages (7):\r\nchi_sim\r\ndeu\r\neng\r\nfra\r\nita\r\nosd\r\nspa\r\n```\r\nHere the `chi-sim` appears as `chi_sim`\r\n\r\n**3. Solution**: for users using some language, like Chinese, Korean or Arabic, etc.\r\n`PAPERLESS_OCR_LANGUAGES`: this env parameter tells which tesseract-ocr packages to install\r\n`PAPERLESS_OCR_LANGUAGE`: this env parameter tells which language in `tesseract --list-langs` will be used for OCR.\r\n\r\nIn this Chinese Simplified language situation, after setting:\r\n```\r\nPAPERLESS_OCR_LANGUAGE=eng+chi_sim\r\nPAPERLESS_OCR_LANGUAGES=eng chi-sim\r\n```\r\nthe webserver container could start correctly.\r\n\r\n**4. Potential Update**\r\nIt's better to make `get_tesseract_langs()` function's output (in `paperless-ngx/src/paperless_tesseract/checks.py`) consistent with the `apt-get install tesseract-ocr-$lang`.\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. Use the lazy installer `bash -c \"$(curl -L https://raw.githubusercontent.com/paperless-ngx/paperless-ngx/main/install-paperless-ngx.sh)\"` or set `PAPERLESS_OCR_LANGUAGE=eng+chi-sim` and `PAPERLESS_OCR_LANGUAGES=eng chi-sim`\r\n2. 
The server cannot pass the language checks\r\n\r\n\r\n### Webserver logs\r\n\r\n```bash\r\npaperless-webserver-1 | Paperless-ngx docker container starting...\r\npaperless-webserver-1 | Installing languages...\r\npaperless-webserver-1 | Hit:1 http://deb.debian.org/debian bullseye InRelease\r\npaperless-webserver-1 | Hit:2 http://deb.debian.org/debian-security bullseye-security InRelease\r\npaperless-webserver-1 | Hit:3 http://deb.debian.org/debian bullseye-updates InRelease\r\npaperless-webserver-1 | Reading package lists...\r\npaperless-webserver-1 | Package tesseract-ocr-eng already installed!\r\npaperless-webserver-1 | Package tesseract-ocr-chi-sim already installed!\r\npaperless-webserver-1 | Creating directory /tmp/paperless\r\npaperless-webserver-1 | Adjusting permissions of paperless files. This may take a while.\r\npaperless-webserver-1 | Waiting for PostgreSQL to start...\r\npaperless-webserver-1 | Waiting for Redis...\r\npaperless-webserver-1 | Connected to Redis broker.\r\npaperless-webserver-1 | Apply database migrations...\r\npaperless-webserver-1 | SystemCheckError: System check identified some issues:\r\npaperless-webserver-1 | \r\npaperless-webserver-1 | ERRORS:\r\npaperless-\r\n```\r\n\r\n\r\n### Browser logs\r\n\r\n_No response_\r\n\r\n### Paperless-ngx version\r\n\r\n1.9.2\r\n\r\n### Host OS\r\n\r\nUbuntu 22.04\r\n\r\n### Installation method\r\n\r\nDocker - official image\r\n\r\n### Browser\r\n\r\n_No response_\r\n\r\n### Configuration changes\r\n\r\n_No response_\r\n\r\n### Other\r\n\r\n_No response_\n", "before_files": [{"content": "import subprocess\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\n\ndef get_tesseract_langs():\n with subprocess.Popen([\"tesseract\", \"--list-langs\"], stdout=subprocess.PIPE) as p:\n stdout, stderr = p.communicate()\n\n return stdout.decode().strip().split(\"\\n\")[1:]\n\n\n@register()\ndef check_default_language_available(app_configs, **kwargs):\n installed_langs = get_tesseract_langs()\n\n if not settings.OCR_LANGUAGE:\n return [\n Warning(\n \"No OCR language has been specified with PAPERLESS_OCR_LANGUAGE. \"\n \"This means that tesseract will fallback to english.\",\n ),\n ]\n\n specified_langs = settings.OCR_LANGUAGE.split(\"+\")\n\n for lang in specified_langs:\n if lang not in installed_langs:\n return [\n Error(\n f\"The selected ocr language {lang} is \"\n f\"not installed. Paperless cannot OCR your documents \"\n f\"without it. Please fix PAPERLESS_OCR_LANGUAGE.\",\n ),\n ]\n\n return []\n", "path": "src/paperless_tesseract/checks.py"}], "after_files": [{"content": "import shutil\nimport subprocess\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\n\ndef get_tesseract_langs():\n proc = subprocess.run(\n [shutil.which(\"tesseract\"), \"--list-langs\"],\n capture_output=True,\n )\n\n # Decode bytes to string, split on newlines, trim out the header\n proc_lines = proc.stdout.decode(\"utf8\", errors=\"ignore\").strip().split(\"\\n\")[1:]\n\n # Replace _ with - to convert two part languages to the expected code\n return [x.replace(\"_\", \"-\") for x in proc_lines]\n\n\n@register()\ndef check_default_language_available(app_configs, **kwargs):\n installed_langs = get_tesseract_langs()\n\n if not settings.OCR_LANGUAGE:\n return [\n Warning(\n \"No OCR language has been specified with PAPERLESS_OCR_LANGUAGE. 
\"\n \"This means that tesseract will fallback to english.\",\n ),\n ]\n\n specified_langs = settings.OCR_LANGUAGE.split(\"+\")\n\n for lang in specified_langs:\n if lang not in installed_langs:\n return [\n Error(\n f\"The selected ocr language {lang} is \"\n f\"not installed. Paperless cannot OCR your documents \"\n f\"without it. Please fix PAPERLESS_OCR_LANGUAGE.\",\n ),\n ]\n\n return []\n", "path": "src/paperless_tesseract/checks.py"}]} | 1,538 | 248 |
gh_patches_debug_37785 | rasdani/github-patches | git_diff | DataDog__dd-agent-1241 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Marathon >= 0.7 failing
The Marathon plugin checks the TaskRateLimit attribute from the Marathon 0.6 API, which was removed in 0.7: https://github.com/mesosphere/marathon/blob/master/docs/docs/upgrade/06xto070.md#task-rate-limit
You can see the dependency here:
https://github.com/DataDog/dd-agent/blob/73dd75035f1982d31818bb8a4a2a6cc856cf8bb7/checks.d/marathon.py#L30
using marathon v0.7.6, the `dd-agent info` looks like this:
```
marathon
--------
- instance #0 [ERROR]: KeyError('taskRateLimit',)
- Collected 1 metric, 0 events & 1 service check
```
--- END ISSUE ---
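A minimal sketch of the version-tolerant emission the fix amounts to: only report attributes the API actually returned, so removed fields are skipped instead of raising `KeyError`. The `gauge()` call follows the `AgentCheck` usage shown below:

```python
ATTRS = ['taskRateLimit', 'instances', 'cpus', 'mem', 'tasksStaged', 'tasksRunning']

def emit_app_metrics(check, app, tags):
    for attr in ATTRS:
        if attr in app:  # e.g. taskRateLimit is gone in Marathon >= 0.7
            check.gauge('marathon.' + attr, app[attr], tags=tags)
```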
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checks.d/marathon.py`
Content:
```
1 # stdlib
2 import time
3 from hashlib import md5
4 import urllib2
5
6 # project
7 from checks import AgentCheck
8 from util import headers
9
10 # 3rd party
11 import simplejson as json
12 import requests
13
14 class Marathon(AgentCheck):
15 def check(self, instance):
16 if 'url' not in instance:
17 raise Exception('Marathon instance missing "url" value.')
18
19 # Load values from the instance config
20 url = instance['url']
21 instance_tags = instance.get('tags', [])
22 default_timeout = self.init_config.get('default_timeout', 5)
23 timeout = float(instance.get('timeout', default_timeout))
24
25 response = self.get_v2_apps(url, timeout)
26 if response is not None:
27 self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)
28 for app in response['apps']:
29 tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags
30 for attr in ['taskRateLimit','instances','cpus','mem','tasksStaged','tasksRunning']:
31 self.gauge('marathon.' + attr, app[attr], tags=tags)
32 versions_reply = self.get_v2_app_versions(url, app['id'], timeout)
33 if versions_reply is not None:
34 self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)
35
36 def get_v2_apps(self, url, timeout):
37 # Use a hash of the URL as an aggregation key
38 aggregation_key = md5(url).hexdigest()
39 try:
40 r = requests.get(url + "/v2/apps", timeout=timeout)
41 except requests.exceptions.Timeout as e:
42 # If there's a timeout
43 self.timeout_event(url, timeout, aggregation_key)
44 raise Exception("Timeout when hitting %s" % url)
45
46 if r.status_code != 200:
47 self.status_code_event(url, r, aggregation_key)
48 raise Exception("Got %s when hitting %s" % (r.status_code, url))
49
50 # Condition for request v1.x backward compatibility
51 if hasattr(r.json, '__call__'):
52 return r.json()
53 else:
54 return r.json
55
56 def get_v2_app_versions(self, url, app_id, timeout):
57 # Use a hash of the URL as an aggregation key
58 aggregation_key = md5(url).hexdigest()
59
60 try:
61 r = requests.get(url + "/v2/apps/" + app_id + "/versions", timeout=timeout)
62 except requests.exceptions.Timeout as e:
63 # If there's a timeout
64 self.timeout_event(url, timeout, aggregation_key)
65 self.warning("Timeout when hitting %s" % url)
66 return None
67
68 if r.status_code != 200:
69 self.status_code_event(url, r, aggregation_key)
70 self.warning("Got %s when hitting %s" % (r.status_code, url))
71 return None
72
73 return r.json()
74
75 def timeout_event(self, url, timeout, aggregation_key):
76 self.event({
77 'timestamp': int(time.time()),
78 'event_type': 'http_check',
79 'msg_title': 'URL timeout',
80 'msg_text': '%s timed out after %s seconds.' % (url, timeout),
81 'aggregation_key': aggregation_key
82 })
83
84 def status_code_event(self, url, r, aggregation_key):
85 self.event({
86 'timestamp': int(time.time()),
87 'event_type': 'http_check',
88 'msg_title': 'Invalid reponse code for %s' % url,
89 'msg_text': '%s returned a status of %s' % (url, r.status_code),
90 'aggregation_key': aggregation_key
91 })
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checks.d/marathon.py b/checks.d/marathon.py
--- a/checks.d/marathon.py
+++ b/checks.d/marathon.py
@@ -1,11 +1,9 @@
# stdlib
import time
from hashlib import md5
-import urllib2
# project
from checks import AgentCheck
-from util import headers
# 3rd party
import simplejson as json
@@ -27,8 +25,9 @@
self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)
for app in response['apps']:
tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags
- for attr in ['taskRateLimit','instances','cpus','mem','tasksStaged','tasksRunning']:
- self.gauge('marathon.' + attr, app[attr], tags=tags)
+ for attr in ['taskRateLimit', 'instances', 'cpus', 'mem', 'tasksStaged', 'tasksRunning', 'backoffSeconds', 'backoffFactor']:
+ if attr in app:
+ self.gauge('marathon.' + attr, app[attr], tags=tags)
versions_reply = self.get_v2_app_versions(url, app['id'], timeout)
if versions_reply is not None:
self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)
@@ -38,7 +37,7 @@
aggregation_key = md5(url).hexdigest()
try:
r = requests.get(url + "/v2/apps", timeout=timeout)
- except requests.exceptions.Timeout as e:
+ except requests.exceptions.Timeout:
# If there's a timeout
self.timeout_event(url, timeout, aggregation_key)
raise Exception("Timeout when hitting %s" % url)
@@ -56,10 +55,10 @@
def get_v2_app_versions(self, url, app_id, timeout):
# Use a hash of the URL as an aggregation key
aggregation_key = md5(url).hexdigest()
-
+
try:
r = requests.get(url + "/v2/apps/" + app_id + "/versions", timeout=timeout)
- except requests.exceptions.Timeout as e:
+ except requests.exceptions.Timeout:
# If there's a timeout
self.timeout_event(url, timeout, aggregation_key)
self.warning("Timeout when hitting %s" % url)
| {"golden_diff": "diff --git a/checks.d/marathon.py b/checks.d/marathon.py\n--- a/checks.d/marathon.py\n+++ b/checks.d/marathon.py\n@@ -1,11 +1,9 @@\n # stdlib\n import time\n from hashlib import md5\n-import urllib2\n \n # project\n from checks import AgentCheck\n-from util import headers\n \n # 3rd party\n import simplejson as json\n@@ -27,8 +25,9 @@\n self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)\n for app in response['apps']:\n tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags\n- for attr in ['taskRateLimit','instances','cpus','mem','tasksStaged','tasksRunning']:\n- self.gauge('marathon.' + attr, app[attr], tags=tags)\n+ for attr in ['taskRateLimit', 'instances', 'cpus', 'mem', 'tasksStaged', 'tasksRunning', 'backoffSeconds', 'backoffFactor']:\n+ if attr in app:\n+ self.gauge('marathon.' + attr, app[attr], tags=tags)\n versions_reply = self.get_v2_app_versions(url, app['id'], timeout)\n if versions_reply is not None:\n self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n@@ -38,7 +37,7 @@\n aggregation_key = md5(url).hexdigest()\n try:\n r = requests.get(url + \"/v2/apps\", timeout=timeout)\n- except requests.exceptions.Timeout as e:\n+ except requests.exceptions.Timeout:\n # If there's a timeout\n self.timeout_event(url, timeout, aggregation_key)\n raise Exception(\"Timeout when hitting %s\" % url)\n@@ -56,10 +55,10 @@\n def get_v2_app_versions(self, url, app_id, timeout):\n # Use a hash of the URL as an aggregation key\n aggregation_key = md5(url).hexdigest()\n- \n+\n try:\n r = requests.get(url + \"/v2/apps/\" + app_id + \"/versions\", timeout=timeout)\n- except requests.exceptions.Timeout as e:\n+ except requests.exceptions.Timeout:\n # If there's a timeout\n self.timeout_event(url, timeout, aggregation_key)\n self.warning(\"Timeout when hitting %s\" % url)\n", "issue": "Marathon >= 0.7 failing\nThe Marathon plugin checks the TaskRateLimit attribute from the Marathon 0.6 API version, which has been removed https://github.com/mesosphere/marathon/blob/master/docs/docs/upgrade/06xto070.md#task-rate-limit\n\nyou can see the dependency here:\nhttps://github.com/DataDog/dd-agent/blob/73dd75035f1982d31818bb8a4a2a6cc856cf8bb7/checks.d/marathon.py#L30\n\nusing marathon v0.7.6, the `dd-agent info` looks like this:\n\n```\n marathon\n --------\n - instance #0 [ERROR]: KeyError('taskRateLimit',)\n - Collected 1 metric, 0 events & 1 service check\n```\n\n", "before_files": [{"content": "# stdlib\nimport time\nfrom hashlib import md5\nimport urllib2\n\n# project\nfrom checks import AgentCheck\nfrom util import headers\n\n# 3rd party\nimport simplejson as json\nimport requests\n\nclass Marathon(AgentCheck):\n def check(self, instance):\n if 'url' not in instance:\n raise Exception('Marathon instance missing \"url\" value.')\n\n # Load values from the instance config\n url = instance['url']\n instance_tags = instance.get('tags', [])\n default_timeout = self.init_config.get('default_timeout', 5)\n timeout = float(instance.get('timeout', default_timeout))\n\n response = self.get_v2_apps(url, timeout)\n if response is not None:\n self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)\n for app in response['apps']:\n tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags\n for attr in ['taskRateLimit','instances','cpus','mem','tasksStaged','tasksRunning']:\n self.gauge('marathon.' 
+ attr, app[attr], tags=tags)\n versions_reply = self.get_v2_app_versions(url, app['id'], timeout)\n if versions_reply is not None:\n self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n\n def get_v2_apps(self, url, timeout):\n # Use a hash of the URL as an aggregation key\n aggregation_key = md5(url).hexdigest()\n try:\n r = requests.get(url + \"/v2/apps\", timeout=timeout)\n except requests.exceptions.Timeout as e:\n # If there's a timeout\n self.timeout_event(url, timeout, aggregation_key)\n raise Exception(\"Timeout when hitting %s\" % url)\n\n if r.status_code != 200:\n self.status_code_event(url, r, aggregation_key)\n raise Exception(\"Got %s when hitting %s\" % (r.status_code, url))\n\n # Condition for request v1.x backward compatibility\n if hasattr(r.json, '__call__'):\n return r.json()\n else:\n return r.json\n\n def get_v2_app_versions(self, url, app_id, timeout):\n # Use a hash of the URL as an aggregation key\n aggregation_key = md5(url).hexdigest()\n \n try:\n r = requests.get(url + \"/v2/apps/\" + app_id + \"/versions\", timeout=timeout)\n except requests.exceptions.Timeout as e:\n # If there's a timeout\n self.timeout_event(url, timeout, aggregation_key)\n self.warning(\"Timeout when hitting %s\" % url)\n return None\n\n if r.status_code != 200:\n self.status_code_event(url, r, aggregation_key)\n self.warning(\"Got %s when hitting %s\" % (r.status_code, url))\n return None\n\n return r.json()\n\n def timeout_event(self, url, timeout, aggregation_key):\n self.event({\n 'timestamp': int(time.time()),\n 'event_type': 'http_check',\n 'msg_title': 'URL timeout',\n 'msg_text': '%s timed out after %s seconds.' % (url, timeout),\n 'aggregation_key': aggregation_key\n })\n\n def status_code_event(self, url, r, aggregation_key):\n self.event({\n 'timestamp': int(time.time()),\n 'event_type': 'http_check',\n 'msg_title': 'Invalid reponse code for %s' % url,\n 'msg_text': '%s returned a status of %s' % (url, r.status_code),\n 'aggregation_key': aggregation_key\n })\n", "path": "checks.d/marathon.py"}], "after_files": [{"content": "# stdlib\nimport time\nfrom hashlib import md5\n\n# project\nfrom checks import AgentCheck\n\n# 3rd party\nimport simplejson as json\nimport requests\n\nclass Marathon(AgentCheck):\n def check(self, instance):\n if 'url' not in instance:\n raise Exception('Marathon instance missing \"url\" value.')\n\n # Load values from the instance config\n url = instance['url']\n instance_tags = instance.get('tags', [])\n default_timeout = self.init_config.get('default_timeout', 5)\n timeout = float(instance.get('timeout', default_timeout))\n\n response = self.get_v2_apps(url, timeout)\n if response is not None:\n self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)\n for app in response['apps']:\n tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags\n for attr in ['taskRateLimit', 'instances', 'cpus', 'mem', 'tasksStaged', 'tasksRunning', 'backoffSeconds', 'backoffFactor']:\n if attr in app:\n self.gauge('marathon.' 
+ attr, app[attr], tags=tags)\n versions_reply = self.get_v2_app_versions(url, app['id'], timeout)\n if versions_reply is not None:\n self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n\n def get_v2_apps(self, url, timeout):\n # Use a hash of the URL as an aggregation key\n aggregation_key = md5(url).hexdigest()\n try:\n r = requests.get(url + \"/v2/apps\", timeout=timeout)\n except requests.exceptions.Timeout:\n # If there's a timeout\n self.timeout_event(url, timeout, aggregation_key)\n raise Exception(\"Timeout when hitting %s\" % url)\n\n if r.status_code != 200:\n self.status_code_event(url, r, aggregation_key)\n raise Exception(\"Got %s when hitting %s\" % (r.status_code, url))\n\n # Condition for request v1.x backward compatibility\n if hasattr(r.json, '__call__'):\n return r.json()\n else:\n return r.json\n\n def get_v2_app_versions(self, url, app_id, timeout):\n # Use a hash of the URL as an aggregation key\n aggregation_key = md5(url).hexdigest()\n\n try:\n r = requests.get(url + \"/v2/apps/\" + app_id + \"/versions\", timeout=timeout)\n except requests.exceptions.Timeout:\n # If there's a timeout\n self.timeout_event(url, timeout, aggregation_key)\n self.warning(\"Timeout when hitting %s\" % url)\n return None\n\n if r.status_code != 200:\n self.status_code_event(url, r, aggregation_key)\n self.warning(\"Got %s when hitting %s\" % (r.status_code, url))\n return None\n\n return r.json()\n\n def timeout_event(self, url, timeout, aggregation_key):\n self.event({\n 'timestamp': int(time.time()),\n 'event_type': 'http_check',\n 'msg_title': 'URL timeout',\n 'msg_text': '%s timed out after %s seconds.' % (url, timeout),\n 'aggregation_key': aggregation_key\n })\n\n def status_code_event(self, url, r, aggregation_key):\n self.event({\n 'timestamp': int(time.time()),\n 'event_type': 'http_check',\n 'msg_title': 'Invalid reponse code for %s' % url,\n 'msg_text': '%s returned a status of %s' % (url, r.status_code),\n 'aggregation_key': aggregation_key\n })\n", "path": "checks.d/marathon.py"}]} | 1,417 | 531 |
gh_patches_debug_27684 | rasdani/github-patches | git_diff | TheAlgorithms__Python-7406 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update gaussian_naive_bayes.py
Using the seaborn.heatmap library to plot the confusion matrix instead of the plot_confusion_matrix method from the sklearn.metrics module.
### Describe your change:
Adding to the previous update: to avoid a warning that the plot_confusion_matrix method from sklearn.metrics throws, I have used seaborn's heatmap function, which takes as an argument the confusion matrix obtained from the confusion_matrix method in sklearn.metrics.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
--- END ISSUE ---
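Note that the committed diff further below keeps `plot_confusion_matrix`; the seaborn approach the description refers to would look roughly like this sketch (raw counts rather than normalized values, for simplicity):

```python
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.metrics import confusion_matrix

def plot_cm_heatmap(y_test, y_pred, labels):
    # confusion_matrix() gives the raw count matrix; seaborn renders it
    # without the deprecation warning that plot_confusion_matrix emits.
    cm = confusion_matrix(y_test, y_pred)
    sns.heatmap(cm, annot=True, fmt="d", cmap="Blues",
                xticklabels=labels, yticklabels=labels)
    plt.xlabel("Predicted label")
    plt.ylabel("True label")
    plt.title("Confusion Matrix - IRIS Dataset")
    plt.show()
```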
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `machine_learning/gaussian_naive_bayes.py`
Content:
```
1 # Gaussian Naive Bayes Example
2 from matplotlib import pyplot as plt
3 from sklearn.datasets import load_iris
4 from sklearn.metrics import plot_confusion_matrix
5 from sklearn.model_selection import train_test_split
6 from sklearn.naive_bayes import GaussianNB
7
8
9 def main():
10
11 """
12 Gaussian Naive Bayes Example using sklearn function.
13 Iris type dataset is used to demonstrate algorithm.
14 """
15
16 # Load Iris dataset
17 iris = load_iris()
18
19 # Split dataset into train and test data
20 x = iris["data"] # features
21 y = iris["target"]
22 x_train, x_test, y_train, y_test = train_test_split(
23 x, y, test_size=0.3, random_state=1
24 )
25
26 # Gaussian Naive Bayes
27 nb_model = GaussianNB()
28 nb_model.fit(x_train, y_train)
29
30 # Display Confusion Matrix
31 plot_confusion_matrix(
32 nb_model,
33 x_test,
34 y_test,
35 display_labels=iris["target_names"],
36 cmap="Blues",
37 normalize="true",
38 )
39 plt.title("Normalized Confusion Matrix - IRIS Dataset")
40 plt.show()
41
42
43 if __name__ == "__main__":
44 main()
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/machine_learning/gaussian_naive_bayes.py b/machine_learning/gaussian_naive_bayes.py
--- a/machine_learning/gaussian_naive_bayes.py
+++ b/machine_learning/gaussian_naive_bayes.py
@@ -1,7 +1,9 @@
# Gaussian Naive Bayes Example
+import time
+
from matplotlib import pyplot as plt
from sklearn.datasets import load_iris
-from sklearn.metrics import plot_confusion_matrix
+from sklearn.metrics import accuracy_score, plot_confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
@@ -25,7 +27,9 @@
# Gaussian Naive Bayes
nb_model = GaussianNB()
- nb_model.fit(x_train, y_train)
+ time.sleep(2.9)
+ model_fit = nb_model.fit(x_train, y_train)
+ y_pred = model_fit.predict(x_test) # Predictions on the test set
# Display Confusion Matrix
plot_confusion_matrix(
@@ -33,12 +37,16 @@
x_test,
y_test,
display_labels=iris["target_names"],
- cmap="Blues",
+ cmap="Blues", # although, Greys_r has a better contrast...
normalize="true",
)
plt.title("Normalized Confusion Matrix - IRIS Dataset")
plt.show()
+ time.sleep(1.8)
+ final_accuracy = 100 * accuracy_score(y_true=y_test, y_pred=y_pred)
+ print(f"The overall accuracy of the model is: {round(final_accuracy, 2)}%")
+
if __name__ == "__main__":
main()
| {"golden_diff": "diff --git a/machine_learning/gaussian_naive_bayes.py b/machine_learning/gaussian_naive_bayes.py\n--- a/machine_learning/gaussian_naive_bayes.py\n+++ b/machine_learning/gaussian_naive_bayes.py\n@@ -1,7 +1,9 @@\n # Gaussian Naive Bayes Example\n+import time\n+\n from matplotlib import pyplot as plt\n from sklearn.datasets import load_iris\n-from sklearn.metrics import plot_confusion_matrix\n+from sklearn.metrics import accuracy_score, plot_confusion_matrix\n from sklearn.model_selection import train_test_split\n from sklearn.naive_bayes import GaussianNB\n \n@@ -25,7 +27,9 @@\n \n # Gaussian Naive Bayes\n nb_model = GaussianNB()\n- nb_model.fit(x_train, y_train)\n+ time.sleep(2.9)\n+ model_fit = nb_model.fit(x_train, y_train)\n+ y_pred = model_fit.predict(x_test) # Predictions on the test set\n \n # Display Confusion Matrix\n plot_confusion_matrix(\n@@ -33,12 +37,16 @@\n x_test,\n y_test,\n display_labels=iris[\"target_names\"],\n- cmap=\"Blues\",\n+ cmap=\"Blues\", # although, Greys_r has a better contrast...\n normalize=\"true\",\n )\n plt.title(\"Normalized Confusion Matrix - IRIS Dataset\")\n plt.show()\n \n+ time.sleep(1.8)\n+ final_accuracy = 100 * accuracy_score(y_true=y_test, y_pred=y_pred)\n+ print(f\"The overall accuracy of the model is: {round(final_accuracy, 2)}%\")\n+\n \n if __name__ == \"__main__\":\n main()\n", "issue": "Update gaussian_naive_bayes.py\nUsing the seaborn.heatmap library to plot the confusion matrix instead of the plot_confusion_matrix method from the sklearn.metrics module.\r\n\r\n### Describe your change:\r\nAdding to the previous update, to avoid a warning that the plot_confusion_matrix method from the sklearn.metrics throws, I have used the heatmap module of seaborn which takes an argument - the confusion matrix obtained from the confusion_matrix method in the sklearn.metrics library.\r\n\r\n\r\n* [ ] Add an algorithm?\r\n* [x] Fix a bug or typo in an existing algorithm?\r\n* [ ] Documentation change?\r\n\r\n### Checklist:\r\n* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).\r\n* [x] This pull request is all my own work -- I have not plagiarized.\r\n* [x] I know that pull requests will not be merged if they fail the automated tests.\r\n* [x] This PR only changes one algorithm file. 
To ease review, please open separate PRs for separate algorithms.\r\n* [x] All new Python files are placed inside an existing directory.\r\n* [x] All filenames are in all lowercase characters with no spaces or dashes.\r\n* [x] All functions and variable names follow Python naming conventions.\r\n* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).\r\n* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.\r\n* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.\r\n* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.\r\n\n", "before_files": [{"content": "# Gaussian Naive Bayes Example\nfrom matplotlib import pyplot as plt\nfrom sklearn.datasets import load_iris\nfrom sklearn.metrics import plot_confusion_matrix\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import GaussianNB\n\n\ndef main():\n\n \"\"\"\n Gaussian Naive Bayes Example using sklearn function.\n Iris type dataset is used to demonstrate algorithm.\n \"\"\"\n\n # Load Iris dataset\n iris = load_iris()\n\n # Split dataset into train and test data\n x = iris[\"data\"] # features\n y = iris[\"target\"]\n x_train, x_test, y_train, y_test = train_test_split(\n x, y, test_size=0.3, random_state=1\n )\n\n # Gaussian Naive Bayes\n nb_model = GaussianNB()\n nb_model.fit(x_train, y_train)\n\n # Display Confusion Matrix\n plot_confusion_matrix(\n nb_model,\n x_test,\n y_test,\n display_labels=iris[\"target_names\"],\n cmap=\"Blues\",\n normalize=\"true\",\n )\n plt.title(\"Normalized Confusion Matrix - IRIS Dataset\")\n plt.show()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "machine_learning/gaussian_naive_bayes.py"}], "after_files": [{"content": "# Gaussian Naive Bayes Example\nimport time\n\nfrom matplotlib import pyplot as plt\nfrom sklearn.datasets import load_iris\nfrom sklearn.metrics import accuracy_score, plot_confusion_matrix\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import GaussianNB\n\n\ndef main():\n\n \"\"\"\n Gaussian Naive Bayes Example using sklearn function.\n Iris type dataset is used to demonstrate algorithm.\n \"\"\"\n\n # Load Iris dataset\n iris = load_iris()\n\n # Split dataset into train and test data\n x = iris[\"data\"] # features\n y = iris[\"target\"]\n x_train, x_test, y_train, y_test = train_test_split(\n x, y, test_size=0.3, random_state=1\n )\n\n # Gaussian Naive Bayes\n nb_model = GaussianNB()\n time.sleep(2.9)\n model_fit = nb_model.fit(x_train, y_train)\n y_pred = model_fit.predict(x_test) # Predictions on the test set\n\n # Display Confusion Matrix\n plot_confusion_matrix(\n nb_model,\n x_test,\n y_test,\n display_labels=iris[\"target_names\"],\n cmap=\"Blues\", # although, Greys_r has a better contrast...\n normalize=\"true\",\n )\n plt.title(\"Normalized Confusion Matrix - IRIS Dataset\")\n plt.show()\n\n time.sleep(1.8)\n final_accuracy = 100 * accuracy_score(y_true=y_test, y_pred=y_pred)\n print(f\"The overall accuracy of the model is: {round(final_accuracy, 2)}%\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "machine_learning/gaussian_naive_bayes.py"}]} | 991 | 380 |
gh_patches_debug_4865 | rasdani/github-patches | git_diff | svthalia__concrexit-2218 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Don't revoke staff status for superusers
### Describe the bug
After #312, staff status will get revoked for superusers too. That shouldn't happen.
### How to reproduce
Have a superuser who is not in a committee.
### Expected behaviour
Staff status is not revoked for superusers.
### Screenshots
### Additional context
--- END ISSUE ---
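
For orientation before the files: the whole fix is a guard that keeps superusers out of the revocation loop. A minimal sketch, assuming Django's standard `is_staff`/`is_superuser` flags and keeping the repository's existing (misspelled) function name — here the guard is pushed into the queryset, whereas the actual patch (see the diff further down) tests `member.is_superuser` inside the loop:

```python
# Sketch only: exclude superusers up front so their staff flag is never touched.
from members.models.member import Member  # import as used in services.py below

def revoke_staff_permission_for_users_in_no_commitee():
    members = Member.objects.filter(is_staff=True, is_superuser=False)
    revoked = []
    for member in members:
        if member.get_member_groups().count() == 0:
            revoked.append(member.id)
            member.is_staff = False
            member.save()
    return Member.objects.filter(pk__in=revoked)
```
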
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/activemembers/services.py`
Content:
```
1 from django.db.models import Count, Q
2 from django.utils import timezone
3
4 from activemembers.models import Committee
5 from members.models.member import Member
6
7
8 def generate_statistics() -> dict:
9 """Generate statistics about number of members in each committee."""
10 committees = Committee.active_objects.annotate(
11 member_count=(
12 Count(
13 "members",
14 filter=(
15 Q(membergroupmembership__until=None)
16 | Q(membergroupmembership__until__gte=timezone.now())
17 ),
18 )
19 )
20 )
21
22 data = {
23 "labels": [],
24 "datasets": [
25 {"data": []},
26 ],
27 }
28 for committee in committees:
29 data["labels"].append(committee.name)
30 data["datasets"][0]["data"].append(committee.member_count)
31
32 return data
33
34
35 def revoke_staff_permission_for_users_in_no_commitee():
36 members = Member.objects.filter(is_staff=True)
37 revoked = []
38 for member in members:
39 if member.get_member_groups().count() == 0:
40 revoked.append(member.id)
41 member.is_staff = False
42 member.save()
43 return Member.objects.filter(pk__in=revoked)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/activemembers/services.py b/website/activemembers/services.py
--- a/website/activemembers/services.py
+++ b/website/activemembers/services.py
@@ -36,7 +36,7 @@
members = Member.objects.filter(is_staff=True)
revoked = []
for member in members:
- if member.get_member_groups().count() == 0:
+ if member.get_member_groups().count() == 0 and not member.is_superuser:
revoked.append(member.id)
member.is_staff = False
member.save()
| {"golden_diff": "diff --git a/website/activemembers/services.py b/website/activemembers/services.py\n--- a/website/activemembers/services.py\n+++ b/website/activemembers/services.py\n@@ -36,7 +36,7 @@\n members = Member.objects.filter(is_staff=True)\n revoked = []\n for member in members:\n- if member.get_member_groups().count() == 0:\n+ if member.get_member_groups().count() == 0 and not member.is_superuser:\n revoked.append(member.id)\n member.is_staff = False\n member.save()\n", "issue": "Don't revoke staff status for superusers\n### Describe the bug\r\nAfter #312, staff status will get revoked for superusers too. That shouldn't happen\r\n\r\n### How to reproduce\r\nHave a super user that is not in a committee.\r\n\r\n### Expected behaviour\r\nNo staff status revoking\r\n\r\n### Screenshots\r\n\r\n### Additional context\r\n\n", "before_files": [{"content": "from django.db.models import Count, Q\nfrom django.utils import timezone\n\nfrom activemembers.models import Committee\nfrom members.models.member import Member\n\n\ndef generate_statistics() -> dict:\n \"\"\"Generate statistics about number of members in each committee.\"\"\"\n committees = Committee.active_objects.annotate(\n member_count=(\n Count(\n \"members\",\n filter=(\n Q(membergroupmembership__until=None)\n | Q(membergroupmembership__until__gte=timezone.now())\n ),\n )\n )\n )\n\n data = {\n \"labels\": [],\n \"datasets\": [\n {\"data\": []},\n ],\n }\n for committee in committees:\n data[\"labels\"].append(committee.name)\n data[\"datasets\"][0][\"data\"].append(committee.member_count)\n\n return data\n\n\ndef revoke_staff_permission_for_users_in_no_commitee():\n members = Member.objects.filter(is_staff=True)\n revoked = []\n for member in members:\n if member.get_member_groups().count() == 0:\n revoked.append(member.id)\n member.is_staff = False\n member.save()\n return Member.objects.filter(pk__in=revoked)\n", "path": "website/activemembers/services.py"}], "after_files": [{"content": "from django.db.models import Count, Q\nfrom django.utils import timezone\n\nfrom activemembers.models import Committee\nfrom members.models.member import Member\n\n\ndef generate_statistics() -> dict:\n \"\"\"Generate statistics about number of members in each committee.\"\"\"\n committees = Committee.active_objects.annotate(\n member_count=(\n Count(\n \"members\",\n filter=(\n Q(membergroupmembership__until=None)\n | Q(membergroupmembership__until__gte=timezone.now())\n ),\n )\n )\n )\n\n data = {\n \"labels\": [],\n \"datasets\": [\n {\"data\": []},\n ],\n }\n for committee in committees:\n data[\"labels\"].append(committee.name)\n data[\"datasets\"][0][\"data\"].append(committee.member_count)\n\n return data\n\n\ndef revoke_staff_permission_for_users_in_no_commitee():\n members = Member.objects.filter(is_staff=True)\n revoked = []\n for member in members:\n if member.get_member_groups().count() == 0 and not member.is_superuser:\n revoked.append(member.id)\n member.is_staff = False\n member.save()\n return Member.objects.filter(pk__in=revoked)\n", "path": "website/activemembers/services.py"}]} | 652 | 126 |
gh_patches_debug_28541 | rasdani/github-patches | git_diff | pyca__cryptography-5517 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Revisit _extra_compile_args in build_openssl.py
Now that we're dropping 1.0.2 in #5511, it's time to review that.
--- END ISSUE ---
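
For context, `_extra_compile_args` only decides whether to add `-Wconversion -Wno-error=sign-conversion`, and it does so by sniffing the compiler through distutils internals. A hedged, standalone alternative for probing flag support (the helper below is illustrative, not part of the repository; `-Werror` is included because clang merely warns on unknown `-W` options):

```python
# Sketch: compile an empty translation unit with the candidate flag and
# treat a zero exit status as "flag supported" (gcc/clang-style drivers).
import os
import subprocess
import tempfile

def compiler_accepts(cc: str, flag: str) -> bool:
    with tempfile.NamedTemporaryFile(suffix=".c", delete=False) as f:
        f.write(b"int main(void) { return 0; }\n")
        src = f.name
    try:
        result = subprocess.run(
            [cc, "-Werror", flag, "-c", src, "-o", os.devnull],
            capture_output=True,
        )
        return result.returncode == 0
    finally:
        os.unlink(src)
```
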
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/_cffi_src/build_openssl.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import sys
9 from distutils import dist
10 from distutils.ccompiler import get_default_compiler
11 from distutils.command.config import config
12
13 from _cffi_src.utils import (
14 build_ffi_for_binding,
15 compiler_type,
16 extra_link_args,
17 )
18
19
20 def _get_openssl_libraries(platform):
21 if os.environ.get("CRYPTOGRAPHY_SUPPRESS_LINK_FLAGS", None):
22 return []
23 # OpenSSL goes by a different library name on different operating systems.
24 if platform == "win32" and compiler_type() == "msvc":
25 return [
26 "libssl",
27 "libcrypto",
28 "advapi32",
29 "crypt32",
30 "gdi32",
31 "user32",
32 "ws2_32",
33 ]
34 else:
35 # darwin, linux, mingw all use this path
36 # In some circumstances, the order in which these libs are
37 # specified on the linker command-line is significant;
38 # libssl must come before libcrypto
39 # (https://marc.info/?l=openssl-users&m=135361825921871)
40 # -lpthread required due to usage of pthread an potential
41 # existance of a static part containing e.g. pthread_atfork
42 # (https://github.com/pyca/cryptography/issues/5084)
43 if sys.platform == "zos":
44 return ["ssl", "crypto"]
45 else:
46 return ["ssl", "crypto", "pthread"]
47
48
49 def _extra_compile_args(platform):
50 """
51 We set -Wconversion args here so that we only do Wconversion checks on the
52 code we're compiling and not on cffi itself (as passing -Wconversion in
53 CFLAGS would do). We set no error on sign conversion because some
54 function signatures in OpenSSL have changed from long -> unsigned long
55 in the past. Since that isn't a precision issue we don't care.
56 When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can
57 revisit this.
58 """
59 # make sure the compiler used supports the flags to be added
60 is_gcc = False
61 if get_default_compiler() == "unix":
62 d = dist.Distribution()
63 cmd = config(d)
64 cmd._check_compiler()
65 is_gcc = (
66 "gcc" in cmd.compiler.compiler[0]
67 or "clang" in cmd.compiler.compiler[0]
68 )
69 if is_gcc or not (
70 platform in ["win32", "hp-ux11", "sunos5"]
71 or platform.startswith("aix")
72 ):
73 return ["-Wconversion", "-Wno-error=sign-conversion"]
74 else:
75 return []
76
77
78 ffi = build_ffi_for_binding(
79 module_name="_openssl",
80 module_prefix="_cffi_src.openssl.",
81 modules=[
82 # This goes first so we can define some cryptography-wide symbols.
83 "cryptography",
84 "aes",
85 "asn1",
86 "bignum",
87 "bio",
88 "cmac",
89 "conf",
90 "crypto",
91 "ct",
92 "dh",
93 "dsa",
94 "ec",
95 "ecdh",
96 "ecdsa",
97 "engine",
98 "err",
99 "evp",
100 "fips",
101 "hmac",
102 "nid",
103 "objects",
104 "ocsp",
105 "opensslv",
106 "osrandom_engine",
107 "pem",
108 "pkcs12",
109 "rand",
110 "rsa",
111 "ssl",
112 "x509",
113 "x509name",
114 "x509v3",
115 "x509_vfy",
116 "pkcs7",
117 "callbacks",
118 ],
119 libraries=_get_openssl_libraries(sys.platform),
120 # These args are passed here so that we only do Wconversion checks on the
121 # code we're compiling and not on cffi itself (as passing -Wconversion in
122 # CFLAGS would do). We set no error on sign convesrion because some
123 # function signatures in OpenSSL have changed from long -> unsigned long
124 # in the past. Since that isn't a precision issue we don't care.
125 # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can
126 # revisit this.
127 extra_compile_args=_extra_compile_args(sys.platform),
128 extra_link_args=extra_link_args(compiler_type()),
129 )
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/_cffi_src/build_openssl.py b/src/_cffi_src/build_openssl.py
--- a/src/_cffi_src/build_openssl.py
+++ b/src/_cffi_src/build_openssl.py
@@ -51,10 +51,9 @@
We set -Wconversion args here so that we only do Wconversion checks on the
code we're compiling and not on cffi itself (as passing -Wconversion in
CFLAGS would do). We set no error on sign conversion because some
- function signatures in OpenSSL have changed from long -> unsigned long
- in the past. Since that isn't a precision issue we don't care.
- When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can
- revisit this.
+ function signatures in LibreSSL differ from OpenSSL have changed on long
+ vs. unsigned long in the past. Since that isn't a precision issue we don't
+ care.
"""
# make sure the compiler used supports the flags to be added
is_gcc = False
@@ -117,13 +116,6 @@
"callbacks",
],
libraries=_get_openssl_libraries(sys.platform),
- # These args are passed here so that we only do Wconversion checks on the
- # code we're compiling and not on cffi itself (as passing -Wconversion in
- # CFLAGS would do). We set no error on sign convesrion because some
- # function signatures in OpenSSL have changed from long -> unsigned long
- # in the past. Since that isn't a precision issue we don't care.
- # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can
- # revisit this.
extra_compile_args=_extra_compile_args(sys.platform),
extra_link_args=extra_link_args(compiler_type()),
)
| {"golden_diff": "diff --git a/src/_cffi_src/build_openssl.py b/src/_cffi_src/build_openssl.py\n--- a/src/_cffi_src/build_openssl.py\n+++ b/src/_cffi_src/build_openssl.py\n@@ -51,10 +51,9 @@\n We set -Wconversion args here so that we only do Wconversion checks on the\n code we're compiling and not on cffi itself (as passing -Wconversion in\n CFLAGS would do). We set no error on sign conversion because some\n- function signatures in OpenSSL have changed from long -> unsigned long\n- in the past. Since that isn't a precision issue we don't care.\n- When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can\n- revisit this.\n+ function signatures in LibreSSL differ from OpenSSL have changed on long\n+ vs. unsigned long in the past. Since that isn't a precision issue we don't\n+ care.\n \"\"\"\n # make sure the compiler used supports the flags to be added\n is_gcc = False\n@@ -117,13 +116,6 @@\n \"callbacks\",\n ],\n libraries=_get_openssl_libraries(sys.platform),\n- # These args are passed here so that we only do Wconversion checks on the\n- # code we're compiling and not on cffi itself (as passing -Wconversion in\n- # CFLAGS would do). We set no error on sign convesrion because some\n- # function signatures in OpenSSL have changed from long -> unsigned long\n- # in the past. Since that isn't a precision issue we don't care.\n- # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can\n- # revisit this.\n extra_compile_args=_extra_compile_args(sys.platform),\n extra_link_args=extra_link_args(compiler_type()),\n )\n", "issue": "Revisit _extra_compile_args in build_openssl.py\nNow that we're dropping 1.0.2 in #5511, it's time to review that.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\nfrom distutils import dist\nfrom distutils.ccompiler import get_default_compiler\nfrom distutils.command.config import config\n\nfrom _cffi_src.utils import (\n build_ffi_for_binding,\n compiler_type,\n extra_link_args,\n)\n\n\ndef _get_openssl_libraries(platform):\n if os.environ.get(\"CRYPTOGRAPHY_SUPPRESS_LINK_FLAGS\", None):\n return []\n # OpenSSL goes by a different library name on different operating systems.\n if platform == \"win32\" and compiler_type() == \"msvc\":\n return [\n \"libssl\",\n \"libcrypto\",\n \"advapi32\",\n \"crypt32\",\n \"gdi32\",\n \"user32\",\n \"ws2_32\",\n ]\n else:\n # darwin, linux, mingw all use this path\n # In some circumstances, the order in which these libs are\n # specified on the linker command-line is significant;\n # libssl must come before libcrypto\n # (https://marc.info/?l=openssl-users&m=135361825921871)\n # -lpthread required due to usage of pthread an potential\n # existance of a static part containing e.g. pthread_atfork\n # (https://github.com/pyca/cryptography/issues/5084)\n if sys.platform == \"zos\":\n return [\"ssl\", \"crypto\"]\n else:\n return [\"ssl\", \"crypto\", \"pthread\"]\n\n\ndef _extra_compile_args(platform):\n \"\"\"\n We set -Wconversion args here so that we only do Wconversion checks on the\n code we're compiling and not on cffi itself (as passing -Wconversion in\n CFLAGS would do). We set no error on sign conversion because some\n function signatures in OpenSSL have changed from long -> unsigned long\n in the past. 
Since that isn't a precision issue we don't care.\n When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can\n revisit this.\n \"\"\"\n # make sure the compiler used supports the flags to be added\n is_gcc = False\n if get_default_compiler() == \"unix\":\n d = dist.Distribution()\n cmd = config(d)\n cmd._check_compiler()\n is_gcc = (\n \"gcc\" in cmd.compiler.compiler[0]\n or \"clang\" in cmd.compiler.compiler[0]\n )\n if is_gcc or not (\n platform in [\"win32\", \"hp-ux11\", \"sunos5\"]\n or platform.startswith(\"aix\")\n ):\n return [\"-Wconversion\", \"-Wno-error=sign-conversion\"]\n else:\n return []\n\n\nffi = build_ffi_for_binding(\n module_name=\"_openssl\",\n module_prefix=\"_cffi_src.openssl.\",\n modules=[\n # This goes first so we can define some cryptography-wide symbols.\n \"cryptography\",\n \"aes\",\n \"asn1\",\n \"bignum\",\n \"bio\",\n \"cmac\",\n \"conf\",\n \"crypto\",\n \"ct\",\n \"dh\",\n \"dsa\",\n \"ec\",\n \"ecdh\",\n \"ecdsa\",\n \"engine\",\n \"err\",\n \"evp\",\n \"fips\",\n \"hmac\",\n \"nid\",\n \"objects\",\n \"ocsp\",\n \"opensslv\",\n \"osrandom_engine\",\n \"pem\",\n \"pkcs12\",\n \"rand\",\n \"rsa\",\n \"ssl\",\n \"x509\",\n \"x509name\",\n \"x509v3\",\n \"x509_vfy\",\n \"pkcs7\",\n \"callbacks\",\n ],\n libraries=_get_openssl_libraries(sys.platform),\n # These args are passed here so that we only do Wconversion checks on the\n # code we're compiling and not on cffi itself (as passing -Wconversion in\n # CFLAGS would do). We set no error on sign convesrion because some\n # function signatures in OpenSSL have changed from long -> unsigned long\n # in the past. Since that isn't a precision issue we don't care.\n # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can\n # revisit this.\n extra_compile_args=_extra_compile_args(sys.platform),\n extra_link_args=extra_link_args(compiler_type()),\n)\n", "path": "src/_cffi_src/build_openssl.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\nfrom distutils import dist\nfrom distutils.ccompiler import get_default_compiler\nfrom distutils.command.config import config\n\nfrom _cffi_src.utils import (\n build_ffi_for_binding,\n compiler_type,\n extra_link_args,\n)\n\n\ndef _get_openssl_libraries(platform):\n if os.environ.get(\"CRYPTOGRAPHY_SUPPRESS_LINK_FLAGS\", None):\n return []\n # OpenSSL goes by a different library name on different operating systems.\n if platform == \"win32\" and compiler_type() == \"msvc\":\n return [\n \"libssl\",\n \"libcrypto\",\n \"advapi32\",\n \"crypt32\",\n \"gdi32\",\n \"user32\",\n \"ws2_32\",\n ]\n else:\n # darwin, linux, mingw all use this path\n # In some circumstances, the order in which these libs are\n # specified on the linker command-line is significant;\n # libssl must come before libcrypto\n # (https://marc.info/?l=openssl-users&m=135361825921871)\n # -lpthread required due to usage of pthread an potential\n # existance of a static part containing e.g. 
pthread_atfork\n # (https://github.com/pyca/cryptography/issues/5084)\n if sys.platform == \"zos\":\n return [\"ssl\", \"crypto\"]\n else:\n return [\"ssl\", \"crypto\", \"pthread\"]\n\n\ndef _extra_compile_args(platform):\n \"\"\"\n We set -Wconversion args here so that we only do Wconversion checks on the\n code we're compiling and not on cffi itself (as passing -Wconversion in\n CFLAGS would do). We set no error on sign conversion because some\n function signatures in LibreSSL differ from OpenSSL have changed on long\n vs. unsigned long in the past. Since that isn't a precision issue we don't\n care.\n \"\"\"\n # make sure the compiler used supports the flags to be added\n is_gcc = False\n if get_default_compiler() == \"unix\":\n d = dist.Distribution()\n cmd = config(d)\n cmd._check_compiler()\n is_gcc = (\n \"gcc\" in cmd.compiler.compiler[0]\n or \"clang\" in cmd.compiler.compiler[0]\n )\n if is_gcc or not (\n platform in [\"win32\", \"hp-ux11\", \"sunos5\"]\n or platform.startswith(\"aix\")\n ):\n return [\"-Wconversion\", \"-Wno-error=sign-conversion\"]\n else:\n return []\n\n\nffi = build_ffi_for_binding(\n module_name=\"_openssl\",\n module_prefix=\"_cffi_src.openssl.\",\n modules=[\n # This goes first so we can define some cryptography-wide symbols.\n \"cryptography\",\n \"aes\",\n \"asn1\",\n \"bignum\",\n \"bio\",\n \"cmac\",\n \"conf\",\n \"crypto\",\n \"ct\",\n \"dh\",\n \"dsa\",\n \"ec\",\n \"ecdh\",\n \"ecdsa\",\n \"engine\",\n \"err\",\n \"evp\",\n \"fips\",\n \"hmac\",\n \"nid\",\n \"objects\",\n \"ocsp\",\n \"opensslv\",\n \"osrandom_engine\",\n \"pem\",\n \"pkcs12\",\n \"rand\",\n \"rsa\",\n \"ssl\",\n \"x509\",\n \"x509name\",\n \"x509v3\",\n \"x509_vfy\",\n \"pkcs7\",\n \"callbacks\",\n ],\n libraries=_get_openssl_libraries(sys.platform),\n extra_compile_args=_extra_compile_args(sys.platform),\n extra_link_args=extra_link_args(compiler_type()),\n)\n", "path": "src/_cffi_src/build_openssl.py"}]} | 1,608 | 422 |
gh_patches_debug_2408 | rasdani/github-patches | git_diff | tornadoweb__tornado-3167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tornado 6.2 release readiness
I'm creating this issue to collect feedback on the 6.2 betas. For the folks who have tried them, do you think the release is ready to go or are there still more changes to be made?
Tagging @minrk and @graingert as authors of relevant PRs, although I'd welcome feedback from anyone interested in this release.
--- END ISSUE ---
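
For reference, `tornado/__init__.py` (quoted below) documents the scheme: the fourth element of `version_info` is zero for an official release and negative for a beta or release candidate. So the beta-to-final transition is just:

```python
# current beta, as in the file below
version = "6.2b2"
version_info = (6, 2, 0, -98)

# what an official 6.2 release looks like under the same convention
version = "6.2"
version_info = (6, 2, 0, 0)
```
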
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tornado/__init__.py`
Content:
```
1 #
2 # Copyright 2009 Facebook
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License"); you may
5 # not use this file except in compliance with the License. You may obtain
6 # a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13 # License for the specific language governing permissions and limitations
14 # under the License.
15
16 """The Tornado web server and tools."""
17
18 # version is a human-readable version number.
19
20 # version_info is a four-tuple for programmatic comparison. The first
21 # three numbers are the components of the version number. The fourth
22 # is zero for an official release, positive for a development branch,
23 # or negative for a release candidate or beta (after the base version
24 # number has been incremented)
25 version = "6.2b2"
26 version_info = (6, 2, 0, -98)
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tornado/__init__.py b/tornado/__init__.py
--- a/tornado/__init__.py
+++ b/tornado/__init__.py
@@ -22,5 +22,5 @@
# is zero for an official release, positive for a development branch,
# or negative for a release candidate or beta (after the base version
# number has been incremented)
-version = "6.2b2"
-version_info = (6, 2, 0, -98)
+version = "6.2"
+version_info = (6, 2, 0, 0)
| {"golden_diff": "diff --git a/tornado/__init__.py b/tornado/__init__.py\n--- a/tornado/__init__.py\n+++ b/tornado/__init__.py\n@@ -22,5 +22,5 @@\n # is zero for an official release, positive for a development branch,\n # or negative for a release candidate or beta (after the base version\n # number has been incremented)\n-version = \"6.2b2\"\n-version_info = (6, 2, 0, -98)\n+version = \"6.2\"\n+version_info = (6, 2, 0, 0)\n", "issue": "Tornado 6.2 release readiness\nI'm creating this issue to collect feedback on the 6.2 betas. For the folks who have tried them, do you think the release is ready to go or are there still more changes to be made?\r\n\r\nTagging @minrk and @graingert as authors of relevant PRs, although I'd welcome feedback from anyone interested in this release. \n", "before_files": [{"content": "#\n# Copyright 2009 Facebook\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\n\n\"\"\"The Tornado web server and tools.\"\"\"\n\n# version is a human-readable version number.\n\n# version_info is a four-tuple for programmatic comparison. The first\n# three numbers are the components of the version number. The fourth\n# is zero for an official release, positive for a development branch,\n# or negative for a release candidate or beta (after the base version\n# number has been incremented)\nversion = \"6.2b2\"\nversion_info = (6, 2, 0, -98)\n", "path": "tornado/__init__.py"}], "after_files": [{"content": "#\n# Copyright 2009 Facebook\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\n\n\"\"\"The Tornado web server and tools.\"\"\"\n\n# version is a human-readable version number.\n\n# version_info is a four-tuple for programmatic comparison. The first\n# three numbers are the components of the version number. The fourth\n# is zero for an official release, positive for a development branch,\n# or negative for a release candidate or beta (after the base version\n# number has been incremented)\nversion = \"6.2\"\nversion_info = (6, 2, 0, 0)\n", "path": "tornado/__init__.py"}]} | 635 | 132 |
gh_patches_debug_37518 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-475 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
data.umweltprofis.at: iCal service retired
I had to change the interval in my data.umweltprofis.at waste schedule when I noticed that the server throws an error when requesting a new iCal link at https://data.umweltprofis.at/opendata/AppointmentService/index.aspx
I contacted their support and they told me that this service is not going to be fixed because they are currently working on (another?) app.
Should this integration be marked as broken for the time being? Existing iCal URLs continue to work, at least for now.
--- END ISSUE ---
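
The eventual fix (visible in the diff at the end of this entry) keeps the old iCal path and adds a second fetch path against an XML endpoint. A condensed, runnable sketch of just that parsing step — the element names `AppointmentEntry`, `Datum`, and `WasteType` are taken from the patch itself:

```python
from datetime import datetime
from xml.dom.minidom import parseString

def _text(node, tag):
    # Concatenate the text children of the first <tag> element under node.
    element = node.getElementsByTagName(tag)[0]
    return "".join(c.nodeValue for c in element.childNodes
                   if c.nodeType == c.TEXT_NODE)

def parse_appointments(xml_text):
    """Collect (date, waste_type) pairs from <AppointmentEntry> elements."""
    entries = []
    for node in parseString(xml_text).getElementsByTagName("AppointmentEntry"):
        date = datetime.fromisoformat(_text(node, "Datum")).date()
        entries.append((date, _text(node, "WasteType")))
    return entries

# parse_appointments("<r><AppointmentEntry><Datum>2023-01-05</Datum>"
#                    "<WasteType>Restmuell</WasteType></AppointmentEntry></r>")
# -> [(datetime.date(2023, 1, 5), 'Restmuell')]
```
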
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py`
Content:
```
1 import logging
2 import requests
3 from waste_collection_schedule import Collection # type: ignore[attr-defined]
4 from waste_collection_schedule.service.ICS import ICS
5
6 TITLE = "UMWELTPROFIS"
7 DESCRIPTION = "Source for Umweltprofis"
8 URL = "https://www.umweltprofis.at"
9 TEST_CASES = {
10 "Ebensee": {"url": "https://data.umweltprofis.at/OpenData/AppointmentService/AppointmentService.asmx/GetIcalWastePickupCalendar?key=KXX_K0bIXDdk0NrTkk3xWqLM9-bsNgIVBE6FMXDObTqxmp9S39nIqwhf9LTIAX9shrlpfCYU7TG_8pS9NjkAJnM_ruQ1SYm3V9YXVRfLRws1"},
11 }
12
13 _LOGGER = logging.getLogger(__name__)
14
15
16 class Source:
17 def __init__(self, url):
18 self._url = url
19 self._ics = ICS()
20
21 def fetch(self):
22 r = requests.get(self._url)
23 if r.status_code != 200:
24 _LOGGER.error("Error querying calendar data")
25 return []
26
27 fixed_text = r.text.replace("REFRESH - INTERVAL; VALUE = ", "REFRESH-INTERVAL;VALUE=")
28
29 dates = self._ics.convert(fixed_text)
30
31 entries = []
32 for d in dates:
33 entries.append(Collection(d[0], d[1]))
34 return entries
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py
@@ -1,5 +1,7 @@
import logging
import requests
+from datetime import datetime
+from xml.dom.minidom import parseString
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from waste_collection_schedule.service.ICS import ICS
@@ -8,17 +10,33 @@
URL = "https://www.umweltprofis.at"
TEST_CASES = {
"Ebensee": {"url": "https://data.umweltprofis.at/OpenData/AppointmentService/AppointmentService.asmx/GetIcalWastePickupCalendar?key=KXX_K0bIXDdk0NrTkk3xWqLM9-bsNgIVBE6FMXDObTqxmp9S39nIqwhf9LTIAX9shrlpfCYU7TG_8pS9NjkAJnM_ruQ1SYm3V9YXVRfLRws1"},
+ "Rohrbach": {"xmlurl": "https://data.umweltprofis.at/opendata/AppointmentService/AppointmentService.asmx/GetTermineForLocationSecured?Key=TEMPKeyabvvMKVCic0cMcmsTEMPKey&StreetNr=118213&HouseNr=Alle&intervall=Alle"},
}
_LOGGER = logging.getLogger(__name__)
+def getText(element):
+ s = ""
+ for e in element.childNodes:
+ if e.nodeType == e.TEXT_NODE:
+ s += e.nodeValue
+ return s
class Source:
- def __init__(self, url):
+ def __init__(self, url=None, xmlurl=None):
self._url = url
+ self._xmlurl = xmlurl
self._ics = ICS()
+ if url is None and xmlurl is None:
+ raise Exception("either url or xmlurl needs to be specified")
def fetch(self):
+ if self._url is not None:
+ return self.fetch_ics()
+ elif self._xmlurl is not None:
+ return self.fetch_xml()
+
+ def fetch_ics(self):
r = requests.get(self._url)
if r.status_code != 200:
_LOGGER.error("Error querying calendar data")
@@ -32,3 +50,18 @@
for d in dates:
entries.append(Collection(d[0], d[1]))
return entries
+
+ def fetch_xml(self):
+ r = requests.get(self._xmlurl)
+ r.raise_for_status()
+
+ doc = parseString(r.text)
+ appointments = doc.getElementsByTagName("AppointmentEntry")
+
+ entries = []
+ for a in appointments:
+ date_string = getText(a.getElementsByTagName("Datum")[0])
+ date = datetime.fromisoformat(date_string).date()
+ waste_type = getText(a.getElementsByTagName("WasteType")[0])
+ entries.append(Collection(date, waste_type))
+ return entries
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py\n@@ -1,5 +1,7 @@\n import logging\n import requests\n+from datetime import datetime\n+from xml.dom.minidom import parseString\n from waste_collection_schedule import Collection # type: ignore[attr-defined]\n from waste_collection_schedule.service.ICS import ICS\n \n@@ -8,17 +10,33 @@\n URL = \"https://www.umweltprofis.at\"\n TEST_CASES = {\n \"Ebensee\": {\"url\": \"https://data.umweltprofis.at/OpenData/AppointmentService/AppointmentService.asmx/GetIcalWastePickupCalendar?key=KXX_K0bIXDdk0NrTkk3xWqLM9-bsNgIVBE6FMXDObTqxmp9S39nIqwhf9LTIAX9shrlpfCYU7TG_8pS9NjkAJnM_ruQ1SYm3V9YXVRfLRws1\"},\n+ \"Rohrbach\": {\"xmlurl\": \"https://data.umweltprofis.at/opendata/AppointmentService/AppointmentService.asmx/GetTermineForLocationSecured?Key=TEMPKeyabvvMKVCic0cMcmsTEMPKey&StreetNr=118213&HouseNr=Alle&intervall=Alle\"},\n }\n \n _LOGGER = logging.getLogger(__name__)\n \n+def getText(element):\n+ s = \"\"\n+ for e in element.childNodes:\n+ if e.nodeType == e.TEXT_NODE:\n+ s += e.nodeValue\n+ return s\n \n class Source:\n- def __init__(self, url):\n+ def __init__(self, url=None, xmlurl=None):\n self._url = url\n+ self._xmlurl = xmlurl\n self._ics = ICS()\n+ if url is None and xmlurl is None:\n+ raise Exception(\"either url or xmlurl needs to be specified\")\n \n def fetch(self):\n+ if self._url is not None:\n+ return self.fetch_ics()\n+ elif self._xmlurl is not None:\n+ return self.fetch_xml()\n+\n+ def fetch_ics(self):\n r = requests.get(self._url)\n if r.status_code != 200:\n _LOGGER.error(\"Error querying calendar data\")\n@@ -32,3 +50,18 @@\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n+\n+ def fetch_xml(self):\n+ r = requests.get(self._xmlurl)\n+ r.raise_for_status()\n+\n+ doc = parseString(r.text)\n+ appointments = doc.getElementsByTagName(\"AppointmentEntry\")\n+\n+ entries = []\n+ for a in appointments:\n+ date_string = getText(a.getElementsByTagName(\"Datum\")[0])\n+ date = datetime.fromisoformat(date_string).date()\n+ waste_type = getText(a.getElementsByTagName(\"WasteType\")[0])\n+ entries.append(Collection(date, waste_type))\n+ return entries\n", "issue": "data.umweltprofis.at: iCal service retired\nI had to change the interval in my data.umweltprofis.at waste schedule when I noticed that the server throws an error when requesting a new iCal link at https://data.umweltprofis.at/opendata/AppointmentService/index.aspx\r\n\r\nI contacted their support and they told me that this service is not going to be fixed because they are currently working on (another?) app.\r\n\r\nShould this integration be marked as broken for the time being? 
Existing iCal URLs continue to work, at least for the time being.\n", "before_files": [{"content": "import logging\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"UMWELTPROFIS\"\nDESCRIPTION = \"Source for Umweltprofis\"\nURL = \"https://www.umweltprofis.at\"\nTEST_CASES = {\n \"Ebensee\": {\"url\": \"https://data.umweltprofis.at/OpenData/AppointmentService/AppointmentService.asmx/GetIcalWastePickupCalendar?key=KXX_K0bIXDdk0NrTkk3xWqLM9-bsNgIVBE6FMXDObTqxmp9S39nIqwhf9LTIAX9shrlpfCYU7TG_8pS9NjkAJnM_ruQ1SYm3V9YXVRfLRws1\"},\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, url):\n self._url = url\n self._ics = ICS()\n\n def fetch(self):\n r = requests.get(self._url)\n if r.status_code != 200:\n _LOGGER.error(\"Error querying calendar data\")\n return []\n\n fixed_text = r.text.replace(\"REFRESH - INTERVAL; VALUE = \", \"REFRESH-INTERVAL;VALUE=\")\n\n dates = self._ics.convert(fixed_text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py"}], "after_files": [{"content": "import logging\nimport requests\nfrom datetime import datetime\nfrom xml.dom.minidom import parseString\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"UMWELTPROFIS\"\nDESCRIPTION = \"Source for Umweltprofis\"\nURL = \"https://www.umweltprofis.at\"\nTEST_CASES = {\n \"Ebensee\": {\"url\": \"https://data.umweltprofis.at/OpenData/AppointmentService/AppointmentService.asmx/GetIcalWastePickupCalendar?key=KXX_K0bIXDdk0NrTkk3xWqLM9-bsNgIVBE6FMXDObTqxmp9S39nIqwhf9LTIAX9shrlpfCYU7TG_8pS9NjkAJnM_ruQ1SYm3V9YXVRfLRws1\"},\n \"Rohrbach\": {\"xmlurl\": \"https://data.umweltprofis.at/opendata/AppointmentService/AppointmentService.asmx/GetTermineForLocationSecured?Key=TEMPKeyabvvMKVCic0cMcmsTEMPKey&StreetNr=118213&HouseNr=Alle&intervall=Alle\"},\n}\n\n_LOGGER = logging.getLogger(__name__)\n\ndef getText(element):\n s = \"\"\n for e in element.childNodes:\n if e.nodeType == e.TEXT_NODE:\n s += e.nodeValue\n return s\n\nclass Source:\n def __init__(self, url=None, xmlurl=None):\n self._url = url\n self._xmlurl = xmlurl\n self._ics = ICS()\n if url is None and xmlurl is None:\n raise Exception(\"either url or xmlurl needs to be specified\")\n\n def fetch(self):\n if self._url is not None:\n return self.fetch_ics()\n elif self._xmlurl is not None:\n return self.fetch_xml()\n\n def fetch_ics(self):\n r = requests.get(self._url)\n if r.status_code != 200:\n _LOGGER.error(\"Error querying calendar data\")\n return []\n\n fixed_text = r.text.replace(\"REFRESH - INTERVAL; VALUE = \", \"REFRESH-INTERVAL;VALUE=\")\n\n dates = self._ics.convert(fixed_text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n\n def fetch_xml(self):\n r = requests.get(self._xmlurl)\n r.raise_for_status()\n\n doc = parseString(r.text)\n appointments = doc.getElementsByTagName(\"AppointmentEntry\")\n\n entries = []\n for a in appointments:\n date_string = getText(a.getElementsByTagName(\"Datum\")[0])\n date = datetime.fromisoformat(date_string).date()\n waste_type = getText(a.getElementsByTagName(\"WasteType\")[0])\n entries.append(Collection(date, waste_type))\n return entries\n", "path": 
"custom_components/waste_collection_schedule/waste_collection_schedule/source/data_umweltprofis_at.py"}]} | 786 | 735 |
gh_patches_debug_31544 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1982 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IAM ManagedPolicy PolicyDocument Character limit of 6144 should not include whitespaces
*cfn-lint version: (`cfn-lint --version`)* 0.48.2
*Description of issue.*
There is a 6,144-character limit on IAM managed policies, but this does not (and should not) include whitespace ([source](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html#reference_iam-limits-entity-length)). cfn-lint flagged a PolicyDocument of mine as too long, while CloudFormation deploys it successfully. The character-limit check would be far more helpful if it measured against the same 6,144-character limit that AWS does.
The PolicyDocument is defined in CFN in JSON. I can copy-and-paste this JSON into the AWS Policy Editor, and it tells me there are only 4,189 characters in it. I can successfully deploy the CFN stack with this same policy. Counting whitespace, there are 9,849 characters, so it seems highly likely that cfn-lint is counting whitespace when it should not.
cfn-lint complaint:
```E3502 Role trust policy JSON text cannot be longer than 6144 characters```
Please provide as much information as possible:
* Template linting issues:
* Please provide a CloudFormation sample that generated the issue.
* If present, please add links to the (official) documentation for clarification.
* Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files
* Feature request:
* Please provide argumentation about the missing feature. Context is key!
Cfn-lint uses the [CloudFormation Resource Specifications](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-resource-specification.html) as the base to do validation. These files are included as part of the application version. Please update to the latest version of `cfn-lint` or update the spec files manually (`cfn-lint -u`)
--- END ISSUE ---
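
The crux of a fix is to measure the document the way IAM does: parse it if it arrives as a string, then count the characters of a minified dump so whitespace never counts. A self-contained sketch (the 6,144 figure is the managed-policy quota from the IAM page linked above):

```python
import json

IAM_MANAGED_POLICY_MAX = 6144  # quota from the IAM limits page cited above

def effective_policy_length(document) -> int:
    """Character count IAM actually charges: minified JSON, no whitespace."""
    if isinstance(document, str):   # PolicyDocument may be embedded as JSON text
        document = json.loads(document)
    return len(json.dumps(document, separators=(",", ":")))

# A policy that is 9,849 characters pretty-printed can minify to ~4,189,
# matching the numbers reported in this issue.
```
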
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/properties/JsonSize.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import datetime
6 import json
7 import re
8 import six
9 import cfnlint.helpers
10 from cfnlint.rules import CloudFormationLintRule
11 from cfnlint.rules import RuleMatch
12 from cfnlint.helpers import RESOURCE_SPECS
13
14
15 class JsonSize(CloudFormationLintRule):
16 """Check if JSON Object Size is within the specified length"""
17 id = 'E3502'
18 shortdesc = 'Check if a JSON Object is within size limits'
19 description = 'Validate properties that are JSON values so that their length is within the limits'
20 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html'
21 tags = ['resources', 'limits', 'json']
22
23 def initialize(self, cfn):
24 """Initialize the rule"""
25 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
26 self.resource_property_types.append(resource_type_spec)
27 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
28 self.resource_sub_property_types.append(property_type_spec)
29
30 def _serialize_date(self, obj):
31 if isinstance(obj, datetime.date):
32 return obj.isoformat()
33 raise TypeError('Object of type {} is not JSON serializable'.format(obj.__class__.__name__))
34
35 def check_value(self, value, path, prop, cfn, specs):
36 """Check Role.AssumeRolePolicyDocument is within limits"""
37 matches = []
38
39 def remove_functions(obj):
40 """ Replaces intrinsic functions with string """
41 if isinstance(obj, dict):
42 new_obj = {}
43 if len(obj) == 1:
44 for k, v in obj.items():
45 if k in cfnlint.helpers.FUNCTIONS:
46 if k == 'Fn::Sub':
47 if isinstance(v, six.string_types):
48 return re.sub(r'\${.*}', '', v)
49 if isinstance(v, list):
50 return re.sub(r'\${.*}', '', v[0])
51 else:
52 new_obj[k] = remove_functions(v)
53 return new_obj
54 else:
55 for k, v in obj.items():
56 new_obj[k] = remove_functions(v)
57 return new_obj
58 elif isinstance(obj, list):
59 new_list = []
60 for v in obj:
61 new_list.append(remove_functions(v))
62 return new_list
63
64 return obj
65
66 scenarios = cfn.get_object_without_nested_conditions(value, path)
67 json_max_size = specs.get('JsonMax')
68 for scenario in scenarios:
69 if len(json.dumps(remove_functions(scenario['Object'][prop]), separators=(',', ':'), default=self._serialize_date)) > json_max_size:
70 if scenario['Scenario']:
71 message = 'Role trust policy JSON text cannot be longer than {0} characters when {1}'
72 scenario_text = ' and '.join(['when condition "%s" is %s' % (
73 k, v) for (k, v) in scenario['Scenario'].items()])
74 matches.append(
75 RuleMatch(path + [prop], message.format(json_max_size, scenario_text)))
76 else:
77 message = 'Role trust policy JSON text cannot be longer than {0} characters'
78 matches.append(
79 RuleMatch(
80 path + [prop],
81 message.format(json_max_size),
82 )
83 )
84
85 return matches
86
87 def check(self, cfn, properties, specs, path):
88 """Check itself"""
89 matches = []
90 for p_value, p_path in properties.items_safe(path[:]):
91 for prop in p_value:
92 if prop in specs:
93 value = specs.get(prop).get('Value', {})
94 if value:
95 value_type = value.get('ValueType', '')
96 primitive_type = specs.get(prop).get('PrimitiveType')
97 if primitive_type == 'Json':
98 matches.extend(
99 self.check_value(
100 p_value, p_path, prop, cfn,
101 RESOURCE_SPECS.get(cfn.regions[0]).get(
102 'ValueTypes').get(value_type, {})
103 )
104 )
105 return matches
106
107 def match_resource_sub_properties(self, properties, property_type, path, cfn):
108 """Match for sub properties"""
109 matches = list()
110
111 specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
112 'PropertyTypes').get(property_type, {}).get('Properties', {})
113 matches.extend(self.check(cfn, properties, specs, path))
114
115 return matches
116
117 def match_resource_properties(self, properties, resource_type, path, cfn):
118 """Check CloudFormation Properties"""
119 matches = list()
120
121 specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
122 'ResourceTypes').get(resource_type, {}).get('Properties', {})
123 matches.extend(self.check(cfn, properties, specs, path))
124
125 return matches
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/properties/JsonSize.py b/src/cfnlint/rules/resources/properties/JsonSize.py
--- a/src/cfnlint/rules/resources/properties/JsonSize.py
+++ b/src/cfnlint/rules/resources/properties/JsonSize.py
@@ -66,19 +66,25 @@
scenarios = cfn.get_object_without_nested_conditions(value, path)
json_max_size = specs.get('JsonMax')
for scenario in scenarios:
- if len(json.dumps(remove_functions(scenario['Object'][prop]), separators=(',', ':'), default=self._serialize_date)) > json_max_size:
+ j = remove_functions(scenario['Object'][prop])
+ if isinstance(j, six.string_types):
+ try:
+ j = json.loads(j)
+ except: #pylint: disable=bare-except
+ continue
+ if len(json.dumps(j, separators=(',', ':'), default=self._serialize_date)) > json_max_size:
if scenario['Scenario']:
- message = 'Role trust policy JSON text cannot be longer than {0} characters when {1}'
+ message = '{0} JSON text cannot be longer than {1} characters when {2}'
scenario_text = ' and '.join(['when condition "%s" is %s' % (
k, v) for (k, v) in scenario['Scenario'].items()])
matches.append(
- RuleMatch(path + [prop], message.format(json_max_size, scenario_text)))
+ RuleMatch(path + [prop], message.format(prop, json_max_size, scenario_text)))
else:
- message = 'Role trust policy JSON text cannot be longer than {0} characters'
+ message = '{0} JSON text cannot be longer than {1} characters'
matches.append(
RuleMatch(
path + [prop],
- message.format(json_max_size),
+ message.format(prop, json_max_size),
)
)
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/JsonSize.py b/src/cfnlint/rules/resources/properties/JsonSize.py\n--- a/src/cfnlint/rules/resources/properties/JsonSize.py\n+++ b/src/cfnlint/rules/resources/properties/JsonSize.py\n@@ -66,19 +66,25 @@\n scenarios = cfn.get_object_without_nested_conditions(value, path)\n json_max_size = specs.get('JsonMax')\n for scenario in scenarios:\n- if len(json.dumps(remove_functions(scenario['Object'][prop]), separators=(',', ':'), default=self._serialize_date)) > json_max_size:\n+ j = remove_functions(scenario['Object'][prop])\n+ if isinstance(j, six.string_types):\n+ try:\n+ j = json.loads(j)\n+ except: #pylint: disable=bare-except\n+ continue\n+ if len(json.dumps(j, separators=(',', ':'), default=self._serialize_date)) > json_max_size:\n if scenario['Scenario']:\n- message = 'Role trust policy JSON text cannot be longer than {0} characters when {1}'\n+ message = '{0} JSON text cannot be longer than {1} characters when {2}'\n scenario_text = ' and '.join(['when condition \"%s\" is %s' % (\n k, v) for (k, v) in scenario['Scenario'].items()])\n matches.append(\n- RuleMatch(path + [prop], message.format(json_max_size, scenario_text)))\n+ RuleMatch(path + [prop], message.format(prop, json_max_size, scenario_text)))\n else:\n- message = 'Role trust policy JSON text cannot be longer than {0} characters'\n+ message = '{0} JSON text cannot be longer than {1} characters'\n matches.append(\n RuleMatch(\n path + [prop],\n- message.format(json_max_size),\n+ message.format(prop, json_max_size),\n )\n )\n", "issue": "IAM ManagedPolicy PolicyDocument Character limit of 6144 should not include whitespaces\n*cfn-lint version: (`cfn-lint --version`)* 0.48.2\r\n\r\n*Description of issue.*\r\nThere is a 6,144 character limit on IAM managed policies but this does/should not include whitespaces ([source](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html#reference_iam-limits-entity-length)). cfn-lint flagged a PolicyDocument of mine as too long, while Cloudformation successfully deploys it. This character limit check would be very helpful if it checked against the same 6,144 character limit that AWS does.\r\n\r\nThe PolicyDocument is defined in CFN in JSON. I can copy-and-paste this JSON into the AWS Policy Editor and it tells me there are only 4,189 characters in it. I can successfully deploy the CFN stack with this same policy. Counting whitespace, there are 9,849 characters so it seems highly likely that cfn-lint is counting whitespace when it should not be. \r\n\r\ncfn-lint complaint:\r\n```E3502 Role trust policy JSON text cannot be longer than 6144 characters```\r\n\r\nPlease provide as much information as possible:\r\n* Template linting issues:\r\n * Please provide a CloudFormation sample that generated the issue.\r\n * If present, please add links to the (official) documentation for clarification.\r\n * Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files\r\n* Feature request:\r\n * Please provide argumentation about the missing feature. Context is key!\r\n\r\n\r\nCfn-lint uses the [CloudFormation Resource Specifications](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-resource-specification.html) as the base to do validation. These files are included as part of the application version. 
Please update to the latest version of `cfn-lint` or update the spec files manually (`cfn-lint -u`)\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport datetime\nimport json\nimport re\nimport six\nimport cfnlint.helpers\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass JsonSize(CloudFormationLintRule):\n \"\"\"Check if JSON Object Size is within the specified length\"\"\"\n id = 'E3502'\n shortdesc = 'Check if a JSON Object is within size limits'\n description = 'Validate properties that are JSON values so that their length is within the limits'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html'\n tags = ['resources', 'limits', 'json']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def _serialize_date(self, obj):\n if isinstance(obj, datetime.date):\n return obj.isoformat()\n raise TypeError('Object of type {} is not JSON serializable'.format(obj.__class__.__name__))\n\n def check_value(self, value, path, prop, cfn, specs):\n \"\"\"Check Role.AssumeRolePolicyDocument is within limits\"\"\"\n matches = []\n\n def remove_functions(obj):\n \"\"\" Replaces intrinsic functions with string \"\"\"\n if isinstance(obj, dict):\n new_obj = {}\n if len(obj) == 1:\n for k, v in obj.items():\n if k in cfnlint.helpers.FUNCTIONS:\n if k == 'Fn::Sub':\n if isinstance(v, six.string_types):\n return re.sub(r'\\${.*}', '', v)\n if isinstance(v, list):\n return re.sub(r'\\${.*}', '', v[0])\n else:\n new_obj[k] = remove_functions(v)\n return new_obj\n else:\n for k, v in obj.items():\n new_obj[k] = remove_functions(v)\n return new_obj\n elif isinstance(obj, list):\n new_list = []\n for v in obj:\n new_list.append(remove_functions(v))\n return new_list\n\n return obj\n\n scenarios = cfn.get_object_without_nested_conditions(value, path)\n json_max_size = specs.get('JsonMax')\n for scenario in scenarios:\n if len(json.dumps(remove_functions(scenario['Object'][prop]), separators=(',', ':'), default=self._serialize_date)) > json_max_size:\n if scenario['Scenario']:\n message = 'Role trust policy JSON text cannot be longer than {0} characters when {1}'\n scenario_text = ' and '.join(['when condition \"%s\" is %s' % (\n k, v) for (k, v) in scenario['Scenario'].items()])\n matches.append(\n RuleMatch(path + [prop], message.format(json_max_size, scenario_text)))\n else:\n message = 'Role trust policy JSON text cannot be longer than {0} characters'\n matches.append(\n RuleMatch(\n path + [prop],\n message.format(json_max_size),\n )\n )\n\n return matches\n\n def check(self, cfn, properties, specs, path):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in specs:\n value = specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n primitive_type = specs.get(prop).get('PrimitiveType')\n if primitive_type == 'Json':\n matches.extend(\n self.check_value(\n p_value, p_path, prop, cfn,\n RESOURCE_SPECS.get(cfn.regions[0]).get(\n 
'ValueTypes').get(value_type, {})\n )\n )\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/JsonSize.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport datetime\nimport json\nimport re\nimport six\nimport cfnlint.helpers\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass JsonSize(CloudFormationLintRule):\n \"\"\"Check if JSON Object Size is within the specified length\"\"\"\n id = 'E3502'\n shortdesc = 'Check if a JSON Object is within size limits'\n description = 'Validate properties that are JSON values so that their length is within the limits'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html'\n tags = ['resources', 'limits', 'json']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def _serialize_date(self, obj):\n if isinstance(obj, datetime.date):\n return obj.isoformat()\n raise TypeError('Object of type {} is not JSON serializable'.format(obj.__class__.__name__))\n\n def check_value(self, value, path, prop, cfn, specs):\n \"\"\"Check Role.AssumeRolePolicyDocument is within limits\"\"\"\n matches = []\n\n def remove_functions(obj):\n \"\"\" Replaces intrinsic functions with string \"\"\"\n if isinstance(obj, dict):\n new_obj = {}\n if len(obj) == 1:\n for k, v in obj.items():\n if k in cfnlint.helpers.FUNCTIONS:\n if k == 'Fn::Sub':\n if isinstance(v, six.string_types):\n return re.sub(r'\\${.*}', '', v)\n if isinstance(v, list):\n return re.sub(r'\\${.*}', '', v[0])\n else:\n new_obj[k] = remove_functions(v)\n return new_obj\n else:\n for k, v in obj.items():\n new_obj[k] = remove_functions(v)\n return new_obj\n elif isinstance(obj, list):\n new_list = []\n for v in obj:\n new_list.append(remove_functions(v))\n return new_list\n\n return obj\n\n scenarios = cfn.get_object_without_nested_conditions(value, path)\n json_max_size = specs.get('JsonMax')\n for scenario in scenarios:\n j = remove_functions(scenario['Object'][prop])\n if isinstance(j, six.string_types):\n try:\n j = json.loads(j)\n except: #pylint: disable=bare-except\n continue\n if len(json.dumps(j, separators=(',', ':'), default=self._serialize_date)) > json_max_size:\n if scenario['Scenario']:\n message = '{0} JSON text cannot be longer than {1} characters when {2}'\n scenario_text = ' and '.join(['when condition \"%s\" is %s' % (\n k, v) for (k, v) in scenario['Scenario'].items()])\n 
matches.append(\n RuleMatch(path + [prop], message.format(prop, json_max_size, scenario_text)))\n else:\n message = '{0} JSON text cannot be longer than {1} characters'\n matches.append(\n RuleMatch(\n path + [prop],\n message.format(prop, json_max_size),\n )\n )\n\n return matches\n\n def check(self, cfn, properties, specs, path):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in specs:\n value = specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n primitive_type = specs.get(prop).get('PrimitiveType')\n if primitive_type == 'Json':\n matches.extend(\n self.check_value(\n p_value, p_path, prop, cfn,\n RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ValueTypes').get(value_type, {})\n )\n )\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/JsonSize.py"}]} | 2,010 | 419 |
gh_patches_debug_13843 | rasdani/github-patches | git_diff | rucio__rucio-1799 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
REST call for requests is broken by '/' in DIDs
Motivation
----------
As discussed in #1786, the REST call in https://github.com/rucio/rucio/blob/master/lib/rucio/web/rest/webpy/v1/request.py gets broken for DIDs containing `/`.
Modification
------------
I think that, rather than doing this (*), we can pass all the needed information (scope, name, rse) as params instead of including them in the URL path.
(*) https://github.com/rucio/rucio/blob/master/lib/rucio/web/rest/webpy/v1/request.py#L38
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/web/rest/webpy/v1/request.py`
Content:
```
1 #!/usr/bin/env python
2 # Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 # Authors:
17 # - Mario Lassnig <[email protected]>, 2014-2018
18 # - Vincent Garonne <[email protected]>, 2017
19 #
20 # PY3K COMPATIBLE
21
22 import json
23
24 from logging import getLogger, StreamHandler, DEBUG
25
26 from web import application, ctx, loadhook, header
27
28 from rucio.api import request
29 from rucio.common.utils import generate_http_error, APIEncoder
30 from rucio.web.rest.common import rucio_loadhook, RucioController, exception_wrapper
31
32
33 LOGGER = getLogger("rucio.request")
34 SH = StreamHandler()
35 SH.setLevel(DEBUG)
36 LOGGER.addHandler(SH)
37
38 URLS = ('/(.+)/(.+)/(.+)', 'RequestGet',)
39
40
41 class RequestGet(RucioController):
42 """ REST API to get requests. """
43
44 @exception_wrapper
45 def GET(self, scope, name, rse):
46 """
47 List request for given DID to a destination RSE.
48
49 HTTP Success:
50 200 OK
51
52 HTTP Error:
53 401 Unauthorized
54 404 Request Not Found
55 """
56
57 header('Content-Type', 'application/json')
58
59 try:
60 return json.dumps(request.get_request_by_did(scope=scope,
61 name=name,
62 rse=rse,
63 issuer=ctx.env.get('issuer')),
64 cls=APIEncoder)
65 except:
66 raise generate_http_error(404, 'RequestNotFound', 'No request found for DID %s:%s at RSE %s' % (scope,
67 name,
68 rse))
69
70
71 """----------------------
72 Web service startup
73 ----------------------"""
74
75 APP = application(URLS, globals())
76 APP.add_processor(loadhook(rucio_loadhook))
77 application = APP.wsgifunc()
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/rucio/web/rest/webpy/v1/request.py b/lib/rucio/web/rest/webpy/v1/request.py
--- a/lib/rucio/web/rest/webpy/v1/request.py
+++ b/lib/rucio/web/rest/webpy/v1/request.py
@@ -26,6 +26,7 @@
from web import application, ctx, loadhook, header
from rucio.api import request
+from rucio.common.schema import SCOPE_NAME_REGEXP
from rucio.common.utils import generate_http_error, APIEncoder
from rucio.web.rest.common import rucio_loadhook, RucioController, exception_wrapper
@@ -35,7 +36,7 @@
SH.setLevel(DEBUG)
LOGGER.addHandler(SH)
-URLS = ('/(.+)/(.+)/(.+)', 'RequestGet',)
+URLS = ('%s/(.+)' % SCOPE_NAME_REGEXP, 'RequestGet',)
class RequestGet(RucioController):
| {"golden_diff": "diff --git a/lib/rucio/web/rest/webpy/v1/request.py b/lib/rucio/web/rest/webpy/v1/request.py\n--- a/lib/rucio/web/rest/webpy/v1/request.py\n+++ b/lib/rucio/web/rest/webpy/v1/request.py\n@@ -26,6 +26,7 @@\n from web import application, ctx, loadhook, header\n \n from rucio.api import request\n+from rucio.common.schema import SCOPE_NAME_REGEXP\n from rucio.common.utils import generate_http_error, APIEncoder\n from rucio.web.rest.common import rucio_loadhook, RucioController, exception_wrapper\n \n@@ -35,7 +36,7 @@\n SH.setLevel(DEBUG)\n LOGGER.addHandler(SH)\n \n-URLS = ('/(.+)/(.+)/(.+)', 'RequestGet',)\n+URLS = ('%s/(.+)' % SCOPE_NAME_REGEXP, 'RequestGet',)\n \n \n class RequestGet(RucioController):\n", "issue": "REST call for requests is broken by '/' in DIDs\nMotivation\r\n----------\r\nAs discussed in #1786 , the REST call of https://github.com/rucio/rucio/blob/master/lib/rucio/web/rest/webpy/v1/request.py get broken for DIDs containing `/`\r\n\r\nModification\r\n------------\r\nI think that rather than doing this (*) we can pass all the needed information (scope, name, rse) as params instead of including them in the url path. \r\n\r\n(*) https://github.com/rucio/rucio/blob/master/lib/rucio/web/rest/webpy/v1/request.py#L38\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Mario Lassnig <[email protected]>, 2014-2018\n# - Vincent Garonne <[email protected]>, 2017\n#\n# PY3K COMPATIBLE\n\nimport json\n\nfrom logging import getLogger, StreamHandler, DEBUG\n\nfrom web import application, ctx, loadhook, header\n\nfrom rucio.api import request\nfrom rucio.common.utils import generate_http_error, APIEncoder\nfrom rucio.web.rest.common import rucio_loadhook, RucioController, exception_wrapper\n\n\nLOGGER = getLogger(\"rucio.request\")\nSH = StreamHandler()\nSH.setLevel(DEBUG)\nLOGGER.addHandler(SH)\n\nURLS = ('/(.+)/(.+)/(.+)', 'RequestGet',)\n\n\nclass RequestGet(RucioController):\n \"\"\" REST API to get requests. 
\"\"\"\n\n @exception_wrapper\n def GET(self, scope, name, rse):\n \"\"\"\n List request for given DID to a destination RSE.\n\n HTTP Success:\n 200 OK\n\n HTTP Error:\n 401 Unauthorized\n 404 Request Not Found\n \"\"\"\n\n header('Content-Type', 'application/json')\n\n try:\n return json.dumps(request.get_request_by_did(scope=scope,\n name=name,\n rse=rse,\n issuer=ctx.env.get('issuer')),\n cls=APIEncoder)\n except:\n raise generate_http_error(404, 'RequestNotFound', 'No request found for DID %s:%s at RSE %s' % (scope,\n name,\n rse))\n\n\n\"\"\"----------------------\n Web service startup\n----------------------\"\"\"\n\nAPP = application(URLS, globals())\nAPP.add_processor(loadhook(rucio_loadhook))\napplication = APP.wsgifunc()\n", "path": "lib/rucio/web/rest/webpy/v1/request.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Mario Lassnig <[email protected]>, 2014-2018\n# - Vincent Garonne <[email protected]>, 2017\n#\n# PY3K COMPATIBLE\n\nimport json\n\nfrom logging import getLogger, StreamHandler, DEBUG\n\nfrom web import application, ctx, loadhook, header\n\nfrom rucio.api import request\nfrom rucio.common.schema import SCOPE_NAME_REGEXP\nfrom rucio.common.utils import generate_http_error, APIEncoder\nfrom rucio.web.rest.common import rucio_loadhook, RucioController, exception_wrapper\n\n\nLOGGER = getLogger(\"rucio.request\")\nSH = StreamHandler()\nSH.setLevel(DEBUG)\nLOGGER.addHandler(SH)\n\nURLS = ('%s/(.+)' % SCOPE_NAME_REGEXP, 'RequestGet',)\n\n\nclass RequestGet(RucioController):\n \"\"\" REST API to get requests. \"\"\"\n\n @exception_wrapper\n def GET(self, scope, name, rse):\n \"\"\"\n List request for given DID to a destination RSE.\n\n HTTP Success:\n 200 OK\n\n HTTP Error:\n 401 Unauthorized\n 404 Request Not Found\n \"\"\"\n\n header('Content-Type', 'application/json')\n\n try:\n return json.dumps(request.get_request_by_did(scope=scope,\n name=name,\n rse=rse,\n issuer=ctx.env.get('issuer')),\n cls=APIEncoder)\n except:\n raise generate_http_error(404, 'RequestNotFound', 'No request found for DID %s:%s at RSE %s' % (scope,\n name,\n rse))\n\n\n\"\"\"----------------------\n Web service startup\n----------------------\"\"\"\n\nAPP = application(URLS, globals())\nAPP.add_processor(loadhook(rucio_loadhook))\napplication = APP.wsgifunc()\n", "path": "lib/rucio/web/rest/webpy/v1/request.py"}]} | 1,098 | 211 |
gh_patches_debug_23537 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-6652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GCP- dataflow deprecated view and now doesn't return complete information
Looks like GCP deprecated the Dataflow list call's ability to return complete job information. It now only returns `JOB_VIEW_SUMMARY`.
https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs/list
In order to get the required information, a get call on every resource will be required. Should this be done at the resource level as an augmentation, or in an additional filter, since the jobs could be filtered on state in a value filter before the extra API calls?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/c7n_gcp/c7n_gcp/resources/dataflow.py`
Content:
```
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 import jmespath
4
5 from c7n_gcp.provider import resources
6 from c7n_gcp.query import QueryResourceManager, TypeInfo
7
8
9 @resources.register('dataflow-job')
10 class DataflowJob(QueryResourceManager):
11 """GCP resource: https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs
12 """
13
14 class resource_type(TypeInfo):
15 service = 'dataflow'
16 version = 'v1b3'
17 component = 'projects.jobs'
18 enum_spec = ('aggregated', 'jobs[]', None)
19 scope_key = 'projectId'
20 name = id = 'name'
21 get_requires_event = True
22 default_report_fields = [
23 'name', 'currentState', 'createTime', 'location']
24 permissions = ('dataflow.jobs.list',)
25
26 @staticmethod
27 def get(client, event):
28 return client.execute_command(
29 'get', {
30 'projectId': jmespath.search('resource.labels.project_id', event),
31 'jobId': jmespath.search('protoPayload.request.job_id', event)
32 }
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/c7n_gcp/c7n_gcp/resources/dataflow.py b/tools/c7n_gcp/c7n_gcp/resources/dataflow.py
--- a/tools/c7n_gcp/c7n_gcp/resources/dataflow.py
+++ b/tools/c7n_gcp/c7n_gcp/resources/dataflow.py
@@ -1,6 +1,7 @@
# Copyright The Cloud Custodian Authors.
# SPDX-License-Identifier: Apache-2.0
import jmespath
+from googleapiclient.errors import HttpError
from c7n_gcp.provider import resources
from c7n_gcp.query import QueryResourceManager, TypeInfo
@@ -31,3 +32,27 @@
'jobId': jmespath.search('protoPayload.request.job_id', event)
}
)
+
+ def resources(self, query=None):
+ query_filter = 'ACTIVE'
+ if self.data.get('query'):
+ query_filter = self.data['query'][0].get('filter', 'ACTIVE')
+
+ return super(DataflowJob, self).resources(query={'filter': query_filter})
+
+ def augment(self, resources):
+ client = self.get_client()
+ results = []
+ for r in resources:
+ ref = {
+ 'jobId': r['id'],
+ 'projectId': r['projectId'],
+ 'view': 'JOB_VIEW_ALL'
+ }
+ try:
+ results.append(
+ client.execute_query(
+ 'get', verb_arguments=ref))
+ except HttpError:
+ results.append(r)
+ return results
| {"golden_diff": "diff --git a/tools/c7n_gcp/c7n_gcp/resources/dataflow.py b/tools/c7n_gcp/c7n_gcp/resources/dataflow.py\n--- a/tools/c7n_gcp/c7n_gcp/resources/dataflow.py\n+++ b/tools/c7n_gcp/c7n_gcp/resources/dataflow.py\n@@ -1,6 +1,7 @@\n # Copyright The Cloud Custodian Authors.\n # SPDX-License-Identifier: Apache-2.0\n import jmespath\n+from googleapiclient.errors import HttpError\n \n from c7n_gcp.provider import resources\n from c7n_gcp.query import QueryResourceManager, TypeInfo\n@@ -31,3 +32,27 @@\n 'jobId': jmespath.search('protoPayload.request.job_id', event)\n }\n )\n+\n+ def resources(self, query=None):\n+ query_filter = 'ACTIVE'\n+ if self.data.get('query'):\n+ query_filter = self.data['query'][0].get('filter', 'ACTIVE')\n+\n+ return super(DataflowJob, self).resources(query={'filter': query_filter})\n+\n+ def augment(self, resources):\n+ client = self.get_client()\n+ results = []\n+ for r in resources:\n+ ref = {\n+ 'jobId': r['id'],\n+ 'projectId': r['projectId'],\n+ 'view': 'JOB_VIEW_ALL'\n+ }\n+ try:\n+ results.append(\n+ client.execute_query(\n+ 'get', verb_arguments=ref))\n+ except HttpError:\n+ results.append(r)\n+ return results\n", "issue": "GCP- dataflow deprecated view and now doesn't return complete information\nLooks like gcp deprecated the dataflow list ability to get all dataflow job information. It only returns `JOB_VIEW_SUMMARY `\r\n\r\nhttps://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs/list\r\n\r\nIn order to get the required information a get call on every resource will be required. Should this be done on the resource level as an augmentation or in an additional filter since the jobs could be filtered on state in a value filter before the extra api calls. \n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport jmespath\n\nfrom c7n_gcp.provider import resources\nfrom c7n_gcp.query import QueryResourceManager, TypeInfo\n\n\[email protected]('dataflow-job')\nclass DataflowJob(QueryResourceManager):\n \"\"\"GCP resource: https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs\n \"\"\"\n\n class resource_type(TypeInfo):\n service = 'dataflow'\n version = 'v1b3'\n component = 'projects.jobs'\n enum_spec = ('aggregated', 'jobs[]', None)\n scope_key = 'projectId'\n name = id = 'name'\n get_requires_event = True\n default_report_fields = [\n 'name', 'currentState', 'createTime', 'location']\n permissions = ('dataflow.jobs.list',)\n\n @staticmethod\n def get(client, event):\n return client.execute_command(\n 'get', {\n 'projectId': jmespath.search('resource.labels.project_id', event),\n 'jobId': jmespath.search('protoPayload.request.job_id', event)\n }\n )\n", "path": "tools/c7n_gcp/c7n_gcp/resources/dataflow.py"}], "after_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport jmespath\nfrom googleapiclient.errors import HttpError\n\nfrom c7n_gcp.provider import resources\nfrom c7n_gcp.query import QueryResourceManager, TypeInfo\n\n\[email protected]('dataflow-job')\nclass DataflowJob(QueryResourceManager):\n \"\"\"GCP resource: https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs\n \"\"\"\n\n class resource_type(TypeInfo):\n service = 'dataflow'\n version = 'v1b3'\n component = 'projects.jobs'\n enum_spec = ('aggregated', 'jobs[]', None)\n scope_key = 'projectId'\n name = id = 'name'\n get_requires_event = True\n default_report_fields = [\n 'name', 'currentState', 'createTime', 
'location']\n permissions = ('dataflow.jobs.list',)\n\n @staticmethod\n def get(client, event):\n return client.execute_command(\n 'get', {\n 'projectId': jmespath.search('resource.labels.project_id', event),\n 'jobId': jmespath.search('protoPayload.request.job_id', event)\n }\n )\n\n def resources(self, query=None):\n query_filter = 'ACTIVE'\n if self.data.get('query'):\n query_filter = self.data['query'][0].get('filter', 'ACTIVE')\n\n return super(DataflowJob, self).resources(query={'filter': query_filter})\n\n def augment(self, resources):\n client = self.get_client()\n results = []\n for r in resources:\n ref = {\n 'jobId': r['id'],\n 'projectId': r['projectId'],\n 'view': 'JOB_VIEW_ALL'\n }\n try:\n results.append(\n client.execute_query(\n 'get', verb_arguments=ref))\n except HttpError:\n results.append(r)\n return results\n", "path": "tools/c7n_gcp/c7n_gcp/resources/dataflow.py"}]} | 687 | 348 |
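
The patch above works around the summary-only list response by calling `projects.jobs.get` with `view='JOB_VIEW_ALL'` once per job and keeping the summary record when the call fails. The same enrichment pattern with the plain Google discovery client might look like this sketch — it assumes Application Default Credentials and the public `dataflow` v1b3 REST surface:

```python
# Enrich JOB_VIEW_SUMMARY results with one JOB_VIEW_ALL get() per job,
# falling back to the summary dict if the extra call errors out.
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

def enrich_jobs(project_id, summary_jobs):
    service = build('dataflow', 'v1b3')
    detailed = []
    for job in summary_jobs:
        try:
            detailed.append(service.projects().jobs().get(
                projectId=project_id,
                jobId=job['id'],
                view='JOB_VIEW_ALL').execute())
        except HttpError:
            detailed.append(job)
    return detailed
```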
gh_patches_debug_1601 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1940 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jinja2 2.10 security vulnerability reported by GitHub
Hi
I just got an alert on my GitHub repo (where we use mkdocs to build our docs) for Jinja2 2.10:
- https://github.com/eclipse/openj9-docs/network/alert/buildenv/requirements.txt/Jinja2/open
Recommendation is to move to 2.10.1.
Are there any known issues with moving to this level? Are there any plans to update the MkDocs dependency list to this level?
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from setuptools import setup
4 import re
5 import os
6 import sys
7
8
9 long_description = (
10 "MkDocs is a fast, simple and downright gorgeous static site generator "
11 "that's geared towards building project documentation. Documentation "
12 "source files are written in Markdown, and configured with a single YAML "
13 "configuration file."
14 )
15
16
17 def get_version(package):
18 """Return package version as listed in `__version__` in `init.py`."""
19 init_py = open(os.path.join(package, '__init__.py')).read()
20 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
21
22
23 def get_packages(package):
24 """Return root package and all sub-packages."""
25 return [dirpath
26 for dirpath, dirnames, filenames in os.walk(package)
27 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
28
29
30 if sys.argv[-1] == 'publish':
31 if os.system("pip freeze | grep wheel"):
32 print("wheel not installed.\nUse `pip install wheel`.\nExiting.")
33 sys.exit()
34 if os.system("pip freeze | grep twine"):
35 print("twine not installed.\nUse `pip install twine`.\nExiting.")
36 sys.exit()
37 os.system("python setup.py sdist bdist_wheel")
38 os.system("twine upload dist/*")
39 print("You probably want to also tag the version now:")
40 print(" git tag -a {0} -m 'version {0}'".format(get_version("mkdocs")))
41 print(" git push --tags")
42 sys.exit()
43
44
45 setup(
46 name="mkdocs",
47 version=get_version("mkdocs"),
48 url='https://www.mkdocs.org',
49 license='BSD',
50 description='Project documentation with Markdown.',
51 long_description=long_description,
52 author='Tom Christie',
53 author_email='[email protected]', # SEE NOTE BELOW (*)
54 packages=get_packages("mkdocs"),
55 include_package_data=True,
56 install_requires=[
57 'click>=3.3',
58 'Jinja2>=2.7.1',
59 'livereload>=2.5.1',
60 'lunr[languages]>=0.5.2',
61 'Markdown>=2.3.1',
62 'PyYAML>=3.10',
63 'tornado>=5.0'
64 ],
65 python_requires='>=3.5',
66 entry_points={
67 'console_scripts': [
68 'mkdocs = mkdocs.__main__:cli',
69 ],
70 'mkdocs.themes': [
71 'mkdocs = mkdocs.themes.mkdocs',
72 'readthedocs = mkdocs.themes.readthedocs',
73 ],
74 'mkdocs.plugins': [
75 'search = mkdocs.contrib.search:SearchPlugin',
76 ],
77 },
78 classifiers=[
79 'Development Status :: 5 - Production/Stable',
80 'Environment :: Console',
81 'Environment :: Web Environment',
82 'Intended Audience :: Developers',
83 'License :: OSI Approved :: BSD License',
84 'Operating System :: OS Independent',
85 'Programming Language :: Python',
86 'Programming Language :: Python :: 3',
87 'Programming Language :: Python :: 3.5',
88 'Programming Language :: Python :: 3.6',
89 'Programming Language :: Python :: 3.7',
90 'Programming Language :: Python :: 3 :: Only',
91 "Programming Language :: Python :: Implementation :: CPython",
92 "Programming Language :: Python :: Implementation :: PyPy",
93 'Topic :: Documentation',
94 'Topic :: Text Processing',
95 ],
96 zip_safe=False,
97 )
98
99 # (*) Please direct queries to the discussion group:
100 # https://groups.google.com/forum/#!forum/mkdocs
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -55,7 +55,7 @@
include_package_data=True,
install_requires=[
'click>=3.3',
- 'Jinja2>=2.7.1',
+ 'Jinja2>=2.10.1',
'livereload>=2.5.1',
'lunr[languages]>=0.5.2',
'Markdown>=2.3.1',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -55,7 +55,7 @@\n include_package_data=True,\n install_requires=[\n 'click>=3.3',\n- 'Jinja2>=2.7.1',\n+ 'Jinja2>=2.10.1',\n 'livereload>=2.5.1',\n 'lunr[languages]>=0.5.2',\n 'Markdown>=2.3.1',\n", "issue": "Jinja2 2.10 security vulnerability reported by GitHub\nHi\r\n\r\nI just got an alert on my github repo (where we use mkdocs to build our doc) for Jinja2 2.10:\r\n\r\n- https://github.com/eclipse/openj9-docs/network/alert/buildenv/requirements.txt/Jinja2/open\r\n\r\nRecommendation is to move to 2.10.1.\r\n\r\nAre there any known issues with moving to this level? Are there any plans to update the Mkdocs dependency list to this level? \r\n\r\nThanks!\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='https://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n include_package_data=True,\n install_requires=[\n 'click>=3.3',\n 'Jinja2>=2.7.1',\n 'livereload>=2.5.1',\n 'lunr[languages]>=0.5.2',\n 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=5.0'\n ],\n python_requires='>=3.5',\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n ],\n 'mkdocs.themes': [\n 'mkdocs = mkdocs.themes.mkdocs',\n 'readthedocs = mkdocs.themes.readthedocs',\n ],\n 'mkdocs.plugins': [\n 'search = mkdocs.contrib.search:SearchPlugin',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n \"Programming Language :: Python :: 
Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False,\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='https://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n include_package_data=True,\n install_requires=[\n 'click>=3.3',\n 'Jinja2>=2.10.1',\n 'livereload>=2.5.1',\n 'lunr[languages]>=0.5.2',\n 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=5.0'\n ],\n python_requires='>=3.5',\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n ],\n 'mkdocs.themes': [\n 'mkdocs = mkdocs.themes.mkdocs',\n 'readthedocs = mkdocs.themes.readthedocs',\n ],\n 'mkdocs.plugins': [\n 'search = mkdocs.contrib.search:SearchPlugin',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False,\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}]} | 1,371 | 113 |
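
The fix for the record above is a one-line version floor in `setup.py`. A quick, hypothetical sanity check that an existing environment already satisfies the patched constraint, using the standard-library metadata API (Python 3.8+) and the third-party `packaging` library:

```python
# Assert the installed Jinja2 meets the new ">=2.10.1" floor.
from importlib.metadata import version
from packaging.specifiers import SpecifierSet

installed = version("Jinja2")
assert installed in SpecifierSet(">=2.10.1"), f"vulnerable Jinja2 {installed}"
```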
gh_patches_debug_3028 | rasdani/github-patches | git_diff | modal-labs__modal-examples-556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
apply #556 manually
I manually applied the patch from #556. Not sure what's up with that PR.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `01_getting_started/hello_world.py`
Content:
```
1 # # Hello, world!
2 #
3 # This is a trivial example of a Modal function, but it illustrates a few features:
4 #
5 # * You can print things to stdout and stderr.
6 # * You can return data.
7 # * You can map over a function.
8 #
9 # ## Import Modal and define the app
10 #
11 # Let's start with the top level imports.
12 # You need to import Modal and define the app.
13 # A stub is an object that defines everything that will be run.
14
15 import sys
16
17 import modal
18
19 stub = modal.Stub("example-hello-world")
20
21 # ## Defining a function
22 #
23 # Here we define a Modal function using the `modal.function` decorator.
24 # The body of the function will automatically be run remotely.
25 # This particular function is pretty silly: it just prints "hello"
26 # and "world" alternatingly to standard out and standard error.
27
28
29 @stub.function()
30 def f(i):
31 if i % 2 == 0:
32 print("hello", i)
33 else:
34 print("world", i, file=sys.stderr)
35
36 return i * i
37
38
39 # ## Running it
40 #
41 # Finally, let's actually invoke it.
42 # We put this invocation code inside a `@stub.local_entrypoint()`.
43 # This is because this module will be imported in the cloud, and we don't want
44 # this code to be executed a second time in the cloud.
45 #
46 # Run `modal run hello_world.py` and the `@stub.local_entrypoint()` decorator will handle
47 # starting the Modal app and then executing the wrapped function body.
48 #
49 # Inside the `main()` function body, we are calling the function `f` in three ways:
50 #
51 # 1 As a simple local call, `f(1000)`
52 # 2. As a simple *remote* call `f.remote(1000)`
53 # 3. By mapping over the integers `0..19`
54
55
56 @stub.local_entrypoint()
57 def main():
58 # Call the function locally.
59 print(f.local(1000))
60
61 # Call the function remotely.
62 print(f.remote(1000))
63
64 # Parallel map.
65 total = 0
66 for ret in f.map(range(20)):
67 total += ret
68
69 print(total)
70
71
72 # ## What happens?
73 #
74 # When you do `.remote` on function `f`, Modal will execute `f` **in the cloud,**
75 # not locally on your computer. It will take the code, put it inside a
76 # container, run it, and stream all the output back to your local
77 # computer.
78 #
79 # Try doing one of these things next.
80 #
81 # ### Change the code and run again
82 #
83 # For instance, change the `print` statement in the function `f`.
84 # You can see that the latest code is always run.
85 #
86 # Modal's goal is to make running code in the cloud feel like you're
87 # running code locally. You don't need to run any commands to rebuild,
88 # push containers, or go to a web UI to download logs.
89 #
90 # ### Map over a larger dataset
91 #
92 # Change the map range from 20 to some large number. You can see that
93 # Modal will create and run more containers in parallel.
94 #
95 # The function `f` is obviously silly and doesn't do much, but you could
96 # imagine something more significant, like:
97 #
98 # * Training a machine learning model
99 # * Transcoding media
100 # * Backtesting a trading algorithm.
101 #
102 # Modal lets you parallelize that operation trivially by running hundreds or
103 # thousands of containers in the cloud.
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/01_getting_started/hello_world.py b/01_getting_started/hello_world.py
--- a/01_getting_started/hello_world.py
+++ b/01_getting_started/hello_world.py
@@ -48,7 +48,7 @@
#
# Inside the `main()` function body, we are calling the function `f` in three ways:
#
-# 1 As a simple local call, `f(1000)`
+# 1 As a simple local call, `f.local(1000)`
# 2. As a simple *remote* call `f.remote(1000)`
# 3. By mapping over the integers `0..19`
| {"golden_diff": "diff --git a/01_getting_started/hello_world.py b/01_getting_started/hello_world.py\n--- a/01_getting_started/hello_world.py\n+++ b/01_getting_started/hello_world.py\n@@ -48,7 +48,7 @@\n #\n # Inside the `main()` function body, we are calling the function `f` in three ways:\n #\n-# 1 As a simple local call, `f(1000)`\n+# 1 As a simple local call, `f.local(1000)`\n # 2. As a simple *remote* call `f.remote(1000)`\n # 3. By mapping over the integers `0..19`\n", "issue": "apply #556 manually\nI manually applied the patch from #556. Not sure what's up with that PR\n", "before_files": [{"content": "# # Hello, world!\n#\n# This is a trivial example of a Modal function, but it illustrates a few features:\n#\n# * You can print things to stdout and stderr.\n# * You can return data.\n# * You can map over a function.\n#\n# ## Import Modal and define the app\n#\n# Let's start with the top level imports.\n# You need to import Modal and define the app.\n# A stub is an object that defines everything that will be run.\n\nimport sys\n\nimport modal\n\nstub = modal.Stub(\"example-hello-world\")\n\n# ## Defining a function\n#\n# Here we define a Modal function using the `modal.function` decorator.\n# The body of the function will automatically be run remotely.\n# This particular function is pretty silly: it just prints \"hello\"\n# and \"world\" alternatingly to standard out and standard error.\n\n\[email protected]()\ndef f(i):\n if i % 2 == 0:\n print(\"hello\", i)\n else:\n print(\"world\", i, file=sys.stderr)\n\n return i * i\n\n\n# ## Running it\n#\n# Finally, let's actually invoke it.\n# We put this invocation code inside a `@stub.local_entrypoint()`.\n# This is because this module will be imported in the cloud, and we don't want\n# this code to be executed a second time in the cloud.\n#\n# Run `modal run hello_world.py` and the `@stub.local_entrypoint()` decorator will handle\n# starting the Modal app and then executing the wrapped function body.\n#\n# Inside the `main()` function body, we are calling the function `f` in three ways:\n#\n# 1 As a simple local call, `f(1000)`\n# 2. As a simple *remote* call `f.remote(1000)`\n# 3. By mapping over the integers `0..19`\n\n\[email protected]_entrypoint()\ndef main():\n # Call the function locally.\n print(f.local(1000))\n\n # Call the function remotely.\n print(f.remote(1000))\n\n # Parallel map.\n total = 0\n for ret in f.map(range(20)):\n total += ret\n\n print(total)\n\n\n# ## What happens?\n#\n# When you do `.remote` on function `f`, Modal will execute `f` **in the cloud,**\n# not locally on your computer. It will take the code, put it inside a\n# container, run it, and stream all the output back to your local\n# computer.\n#\n# Try doing one of these things next.\n#\n# ### Change the code and run again\n#\n# For instance, change the `print` statement in the function `f`.\n# You can see that the latest code is always run.\n#\n# Modal's goal is to make running code in the cloud feel like you're\n# running code locally. You don't need to run any commands to rebuild,\n# push containers, or go to a web UI to download logs.\n#\n# ### Map over a larger dataset\n#\n# Change the map range from 20 to some large number. 
You can see that\n# Modal will create and run more containers in parallel.\n#\n# The function `f` is obviously silly and doesn't do much, but you could\n# imagine something more significant, like:\n#\n# * Training a machine learning model\n# * Transcoding media\n# * Backtesting a trading algorithm.\n#\n# Modal lets you parallelize that operation trivially by running hundreds or\n# thousands of containers in the cloud.\n", "path": "01_getting_started/hello_world.py"}], "after_files": [{"content": "# # Hello, world!\n#\n# This is a trivial example of a Modal function, but it illustrates a few features:\n#\n# * You can print things to stdout and stderr.\n# * You can return data.\n# * You can map over a function.\n#\n# ## Import Modal and define the app\n#\n# Let's start with the top level imports.\n# You need to import Modal and define the app.\n# A stub is an object that defines everything that will be run.\n\nimport sys\n\nimport modal\n\nstub = modal.Stub(\"example-hello-world\")\n\n# ## Defining a function\n#\n# Here we define a Modal function using the `modal.function` decorator.\n# The body of the function will automatically be run remotely.\n# This particular function is pretty silly: it just prints \"hello\"\n# and \"world\" alternatingly to standard out and standard error.\n\n\[email protected]()\ndef f(i):\n if i % 2 == 0:\n print(\"hello\", i)\n else:\n print(\"world\", i, file=sys.stderr)\n\n return i * i\n\n\n# ## Running it\n#\n# Finally, let's actually invoke it.\n# We put this invocation code inside a `@stub.local_entrypoint()`.\n# This is because this module will be imported in the cloud, and we don't want\n# this code to be executed a second time in the cloud.\n#\n# Run `modal run hello_world.py` and the `@stub.local_entrypoint()` decorator will handle\n# starting the Modal app and then executing the wrapped function body.\n#\n# Inside the `main()` function body, we are calling the function `f` in three ways:\n#\n# 1 As a simple local call, `f.local(1000)`\n# 2. As a simple *remote* call `f.remote(1000)`\n# 3. By mapping over the integers `0..19`\n\n\[email protected]_entrypoint()\ndef main():\n # Call the function locally.\n print(f.local(1000))\n\n # Call the function remotely.\n print(f.remote(1000))\n\n # Parallel map.\n total = 0\n for ret in f.map(range(20)):\n total += ret\n\n print(total)\n\n\n# ## What happens?\n#\n# When you do `.remote` on function `f`, Modal will execute `f` **in the cloud,**\n# not locally on your computer. It will take the code, put it inside a\n# container, run it, and stream all the output back to your local\n# computer.\n#\n# Try doing one of these things next.\n#\n# ### Change the code and run again\n#\n# For instance, change the `print` statement in the function `f`.\n# You can see that the latest code is always run.\n#\n# Modal's goal is to make running code in the cloud feel like you're\n# running code locally. You don't need to run any commands to rebuild,\n# push containers, or go to a web UI to download logs.\n#\n# ### Map over a larger dataset\n#\n# Change the map range from 20 to some large number. 
You can see that\n# Modal will create and run more containers in parallel.\n#\n# The function `f` is obviously silly and doesn't do much, but you could\n# imagine something more significant, like:\n#\n# * Training a machine learning model\n# * Transcoding media\n# * Backtesting a trading algorithm.\n#\n# Modal lets you parallelize that operation trivially by running hundreds or\n# thousands of containers in the cloud.\n", "path": "01_getting_started/hello_world.py"}]} | 1,262 | 160 |
gh_patches_debug_33431 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-273 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[E2503] fails incorrectly when intrinsic function used in Protocol value
*cfn-lint version: 0.4.2*
*Description of issue.*
This is valid, and conforms to the spec, but the rule throws an error:
```yaml
Parameters:
TestParam:
Type: String
Default: TCP
Conditions:
TestCond: !Equals ['a', 'a']
Resources:
OpenShiftMasterELB:
Type: AWS::ElasticLoadBalancing::LoadBalancer
Properties:
Subnets:
- subnet-1234abcd
SecurityGroups:
- sg-1234abcd
Listeners:
# Fails on Protocol
- InstancePort: '1'
InstanceProtocol: !Ref TestParam
LoadBalancerPort: '1'
Protocol: !Ref TestParam
# Also fails on Protocol
- InstancePort: '2'
InstanceProtocol: !If [TestCond, TCP, SSL]
LoadBalancerPort: '2'
Protocol: !If [TestCond, TCP, SSL]
# Works
- InstancePort: '3'
InstanceProtocol: !If [TestCond, TCP, SSL]
LoadBalancerPort: '3'
Protocol: TCP
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/elb/Elb.py`
Content:
```
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 from cfnlint import CloudFormationLintRule
18 from cfnlint import RuleMatch
19
20
21 class Elb(CloudFormationLintRule):
22 """Check if Elb Resource Properties"""
23 id = 'E2503'
24 shortdesc = 'Resource ELB Properties'
25 description = 'See if Elb Resource Properties are set correctly \
26 HTTPS has certificate HTTP has no certificate'
27 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-listener.html'
28 tags = ['properties', 'elb']
29
30 def match(self, cfn):
31 """Check ELB Resource Parameters"""
32
33 matches = list()
34
35 results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])
36 for result in results:
37 protocol = result['Value'].get('Protocol')
38 if protocol:
39 if protocol not in ['HTTP', 'HTTPS', 'TCP']:
40 message = 'Protocol is invalid for {0}'
41 path = result['Path'] + ['Protocol']
42 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
43 elif protocol in ['HTTPS']:
44 certificate = result['Value'].get('Certificates')
45 if not certificate:
46 message = 'Certificates should be specified when using HTTPS for {0}'
47 path = result['Path'] + ['Protocol']
48 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
49
50 results = cfn.get_resource_properties(['AWS::ElasticLoadBalancing::LoadBalancer', 'Listeners'])
51 for result in results:
52 if isinstance(result['Value'], list):
53 for index, listener in enumerate(result['Value']):
54 protocol = listener.get('Protocol')
55 if protocol:
56 if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:
57 message = 'Protocol is invalid for {0}'
58 path = result['Path'] + [index, 'Protocol']
59 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
60 elif protocol in ['HTTPS', 'SSL']:
61 certificate = listener.get('SSLCertificateId')
62 if not certificate:
63 message = 'Certificates should be specified when using HTTPS for {0}'
64 path = result['Path'] + [index, 'Protocol']
65 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
66
67 return matches
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/elb/Elb.py b/src/cfnlint/rules/resources/elb/Elb.py
--- a/src/cfnlint/rules/resources/elb/Elb.py
+++ b/src/cfnlint/rules/resources/elb/Elb.py
@@ -30,13 +30,21 @@
def match(self, cfn):
"""Check ELB Resource Parameters"""
+ def is_intrinsic(input_obj):
+ """Checks if a given input looks like an intrinsic function"""
+
+ if isinstance(input_obj, dict) and len(input_obj) == 1:
+ if list(input_obj.keys())[0] == 'Ref' or list(input_obj.keys())[0].startswith('Fn::'):
+ return True
+ return False
+
matches = list()
results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])
for result in results:
protocol = result['Value'].get('Protocol')
if protocol:
- if protocol not in ['HTTP', 'HTTPS', 'TCP']:
+ if protocol not in ['HTTP', 'HTTPS', 'TCP'] and not is_intrinsic(protocol):
message = 'Protocol is invalid for {0}'
path = result['Path'] + ['Protocol']
matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
@@ -53,7 +61,7 @@
for index, listener in enumerate(result['Value']):
protocol = listener.get('Protocol')
if protocol:
- if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:
+ if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL'] and not is_intrinsic(protocol):
message = 'Protocol is invalid for {0}'
path = result['Path'] + [index, 'Protocol']
matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/elb/Elb.py b/src/cfnlint/rules/resources/elb/Elb.py\n--- a/src/cfnlint/rules/resources/elb/Elb.py\n+++ b/src/cfnlint/rules/resources/elb/Elb.py\n@@ -30,13 +30,21 @@\n def match(self, cfn):\n \"\"\"Check ELB Resource Parameters\"\"\"\n \n+ def is_intrinsic(input_obj):\n+ \"\"\"Checks if a given input looks like an intrinsic function\"\"\"\n+\n+ if isinstance(input_obj, dict) and len(input_obj) == 1:\n+ if list(input_obj.keys())[0] == 'Ref' or list(input_obj.keys())[0].startswith('Fn::'):\n+ return True\n+ return False\n+\n matches = list()\n \n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])\n for result in results:\n protocol = result['Value'].get('Protocol')\n if protocol:\n- if protocol not in ['HTTP', 'HTTPS', 'TCP']:\n+ if protocol not in ['HTTP', 'HTTPS', 'TCP'] and not is_intrinsic(protocol):\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n@@ -53,7 +61,7 @@\n for index, listener in enumerate(result['Value']):\n protocol = listener.get('Protocol')\n if protocol:\n- if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:\n+ if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL'] and not is_intrinsic(protocol):\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n", "issue": "[E2503] fails incorrectly when intrinsic function used in Protocol value \n*cfn-lint version: 0.4.2*\r\n\r\n*Description of issue.*\r\n\r\nThis is valid, and conforms to the spec, but rule throws an error:\r\n\r\n```yaml\r\nParameters:\r\n TestParam:\r\n Type: String\r\n Default: TCP\r\nConditions:\r\n TestCond: !Equals ['a', 'a']\r\nResources:\r\n OpenShiftMasterELB:\r\n Type: AWS::ElasticLoadBalancing::LoadBalancer\r\n Properties:\r\n Subnets:\r\n - subnet-1234abcd\r\n SecurityGroups:\r\n - sg-1234abcd\r\n Listeners:\r\n # Fails on Protocol\r\n - InstancePort: '1'\r\n InstanceProtocol: !Ref TestParam\r\n LoadBalancerPort: '1'\r\n Protocol: !Ref TestParam\r\n # Also fails on Protocol\r\n - InstancePort: '2'\r\n InstanceProtocol: !If [TestCond, TCP, SSL]\r\n LoadBalancerPort: '2'\r\n Protocol: !If [TestCond, TCP, SSL]\r\n # Works\r\n - InstancePort: '3'\r\n InstanceProtocol: !If [TestCond, TCP, SSL]\r\n LoadBalancerPort: '3'\r\n Protocol: TCP\r\n```\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass Elb(CloudFormationLintRule):\n \"\"\"Check if Elb Resource Properties\"\"\"\n id = 'E2503'\n shortdesc = 'Resource ELB Properties'\n description = 'See if Elb Resource Properties are set correctly \\\nHTTPS has certificate HTTP has no certificate'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-listener.html'\n tags = ['properties', 'elb']\n\n def match(self, cfn):\n \"\"\"Check ELB Resource Parameters\"\"\"\n\n matches = list()\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])\n for result in results:\n protocol = result['Value'].get('Protocol')\n if protocol:\n if protocol not in ['HTTP', 'HTTPS', 'TCP']:\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n elif protocol in ['HTTPS']:\n certificate = result['Value'].get('Certificates')\n if not certificate:\n message = 'Certificates should be specified when using HTTPS for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancing::LoadBalancer', 'Listeners'])\n for result in results:\n if isinstance(result['Value'], list):\n for index, listener in enumerate(result['Value']):\n protocol = listener.get('Protocol')\n if protocol:\n if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n elif protocol in ['HTTPS', 'SSL']:\n certificate = listener.get('SSLCertificateId')\n if not certificate:\n message = 'Certificates should be specified when using HTTPS for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n\n return matches\n", "path": "src/cfnlint/rules/resources/elb/Elb.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass Elb(CloudFormationLintRule):\n \"\"\"Check if Elb Resource Properties\"\"\"\n id = 'E2503'\n shortdesc = 'Resource ELB Properties'\n description = 'See if Elb Resource Properties are set correctly \\\nHTTPS has certificate HTTP has no certificate'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-listener.html'\n tags = ['properties', 'elb']\n\n def match(self, cfn):\n \"\"\"Check ELB Resource Parameters\"\"\"\n\n def is_intrinsic(input_obj):\n \"\"\"Checks if a given input looks like an intrinsic function\"\"\"\n\n if isinstance(input_obj, dict) and len(input_obj) == 1:\n if list(input_obj.keys())[0] == 'Ref' or list(input_obj.keys())[0].startswith('Fn::'):\n return True\n return False\n\n matches = list()\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])\n for result in results:\n protocol = result['Value'].get('Protocol')\n if protocol:\n if protocol not in ['HTTP', 'HTTPS', 'TCP'] and not is_intrinsic(protocol):\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n elif protocol in ['HTTPS']:\n certificate = result['Value'].get('Certificates')\n if not certificate:\n message = 'Certificates should be specified when using HTTPS for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancing::LoadBalancer', 'Listeners'])\n for result in results:\n if isinstance(result['Value'], list):\n for index, listener in enumerate(result['Value']):\n protocol = listener.get('Protocol')\n if protocol:\n if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL'] and not is_intrinsic(protocol):\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n elif protocol in ['HTTPS', 'SSL']:\n certificate = listener.get('SSLCertificateId')\n if not certificate:\n message = 'Certificates should be specified when using HTTPS for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n\n return matches\n", "path": "src/cfnlint/rules/resources/elb/Elb.py"}]} | 1,384 | 419 |
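
The heart of the E2503 fix above is the `is_intrinsic()` heuristic: a single-key dict whose key is `Ref` or starts with `Fn::` is treated as a CloudFormation intrinsic and exempted from the protocol whitelist. A standalone restatement with a few spot checks:

```python
def is_intrinsic(obj):
    """True for CloudFormation intrinsics like {'Ref': ...} or {'Fn::If': [...]}."""
    if isinstance(obj, dict) and len(obj) == 1:
        key = next(iter(obj))
        return key == 'Ref' or key.startswith('Fn::')
    return False

assert is_intrinsic({'Ref': 'TestParam'})
assert is_intrinsic({'Fn::If': ['TestCond', 'TCP', 'SSL']})
assert not is_intrinsic('TCP')
assert not is_intrinsic({'Ref': 'A', 'Extra': 'B'})  # two keys -> plain mapping
```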
gh_patches_debug_4866 | rasdani/github-patches | git_diff | locustio__locust-528 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Python 3.6 to build pipeline
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # encoding: utf-8
2
3 from setuptools import setup, find_packages, Command
4 import sys, os, re, ast
5
6
7 # parse version from locust/__init__.py
8 _version_re = re.compile(r'__version__\s+=\s+(.*)')
9 _init_file = os.path.join(os.path.abspath(os.path.dirname(__file__)), "locust", "__init__.py")
10 with open(_init_file, 'rb') as f:
11 version = str(ast.literal_eval(_version_re.search(
12 f.read().decode('utf-8')).group(1)))
13
14 setup(
15 name='locustio',
16 version=version,
17 description="Website load testing framework",
18 long_description="""Locust is a python utility for doing easy, distributed load testing of a web site""",
19 classifiers=[
20 "Topic :: Software Development :: Testing :: Traffic Generation",
21 "Development Status :: 4 - Beta",
22 "License :: OSI Approved :: MIT License",
23 "Operating System :: OS Independent",
24 "Programming Language :: Python",
25 "Programming Language :: Python :: 2",
26 "Programming Language :: Python :: 2.7",
27 "Programming Language :: Python :: 3",
28 "Programming Language :: Python :: 3.3",
29 "Programming Language :: Python :: 3.4",
30 "Programming Language :: Python :: 3.5",
31 "Intended Audience :: Developers",
32 "Intended Audience :: System Administrators",
33 ],
34 keywords='',
35 author='Jonatan Heyman, Carl Bystrom, Joakim Hamrén, Hugo Heyman',
36 author_email='',
37 url='http://locust.io',
38 license='MIT',
39 packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
40 include_package_data=True,
41 zip_safe=False,
42 install_requires=["gevent>=1.1.2", "flask>=0.10.1", "requests>=2.9.1", "msgpack-python>=0.4.2", "six>=1.10.0", "pyzmq==15.2.0"],
43 tests_require=['unittest2', 'mock'],
44 entry_points={
45 'console_scripts': [
46 'locust = locust.main:main',
47 ]
48 },
49 )
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,6 +28,7 @@
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
+ "Programming Language :: Python :: 3.6",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,6 +28,7 @@\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n+ \"Programming Language :: Python :: 3.6\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n ],\n", "issue": "Add Python 3.6 to build pipeline\n\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom setuptools import setup, find_packages, Command\nimport sys, os, re, ast\n\n\n# parse version from locust/__init__.py\n_version_re = re.compile(r'__version__\\s+=\\s+(.*)')\n_init_file = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"locust\", \"__init__.py\")\nwith open(_init_file, 'rb') as f:\n version = str(ast.literal_eval(_version_re.search(\n f.read().decode('utf-8')).group(1)))\n\nsetup(\n name='locustio',\n version=version,\n description=\"Website load testing framework\",\n long_description=\"\"\"Locust is a python utility for doing easy, distributed load testing of a web site\"\"\",\n classifiers=[\n \"Topic :: Software Development :: Testing :: Traffic Generation\",\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n ],\n keywords='',\n author='Jonatan Heyman, Carl Bystrom, Joakim Hamr\u00e9n, Hugo Heyman',\n author_email='',\n url='http://locust.io',\n license='MIT',\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\"gevent>=1.1.2\", \"flask>=0.10.1\", \"requests>=2.9.1\", \"msgpack-python>=0.4.2\", \"six>=1.10.0\", \"pyzmq==15.2.0\"],\n tests_require=['unittest2', 'mock'],\n entry_points={\n 'console_scripts': [\n 'locust = locust.main:main',\n ]\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# encoding: utf-8\n\nfrom setuptools import setup, find_packages, Command\nimport sys, os, re, ast\n\n\n# parse version from locust/__init__.py\n_version_re = re.compile(r'__version__\\s+=\\s+(.*)')\n_init_file = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"locust\", \"__init__.py\")\nwith open(_init_file, 'rb') as f:\n version = str(ast.literal_eval(_version_re.search(\n f.read().decode('utf-8')).group(1)))\n\nsetup(\n name='locustio',\n version=version,\n description=\"Website load testing framework\",\n long_description=\"\"\"Locust is a python utility for doing easy, distributed load testing of a web site\"\"\",\n classifiers=[\n \"Topic :: Software Development :: Testing :: Traffic Generation\",\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Intended Audience :: Developers\",\n \"Intended 
Audience :: System Administrators\",\n ],\n keywords='',\n author='Jonatan Heyman, Carl Bystrom, Joakim Hamr\u00e9n, Hugo Heyman',\n author_email='',\n url='http://locust.io',\n license='MIT',\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\"gevent>=1.1.2\", \"flask>=0.10.1\", \"requests>=2.9.1\", \"msgpack-python>=0.4.2\", \"six>=1.10.0\", \"pyzmq==15.2.0\"],\n tests_require=['unittest2', 'mock'],\n entry_points={\n 'console_scripts': [\n 'locust = locust.main:main',\n ]\n },\n)\n", "path": "setup.py"}]} | 837 | 101 |
gh_patches_debug_5987 | rasdani/github-patches | git_diff | arviz-devs__arviz-343 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs are broken
Looks like one of the examples still uses `n_eff`. From travis:
```
Exception occurred:
File "/home/travis/build/arviz-devs/arviz/examples/plot_forest_ridge.py", line 20, in <module>
n_eff=False)
TypeError: plot_forest() got an unexpected keyword argument 'n_eff'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plot_forest_ridge.py`
Content:
```
1 """
2 Ridgeplot
3 =========
4
5 _thumb: .8, .5
6 """
7 import arviz as az
8
9 az.style.use('arviz-darkgrid')
10
11 non_centered_data = az.load_arviz_data('non_centered_eight')
12 fig, axes = az.plot_forest(non_centered_data,
13 kind='ridgeplot',
14 var_names=['theta'],
15 combined=True,
16 textsize=11,
17 ridgeplot_overlap=3,
18 colors='white',
19 r_hat=False,
20 n_eff=False)
21 axes[0].set_title('Estimated theta for eight schools model', fontsize=11)
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/plot_forest_ridge.py b/examples/plot_forest_ridge.py
--- a/examples/plot_forest_ridge.py
+++ b/examples/plot_forest_ridge.py
@@ -15,7 +15,5 @@
combined=True,
textsize=11,
ridgeplot_overlap=3,
- colors='white',
- r_hat=False,
- n_eff=False)
+ colors='white')
axes[0].set_title('Estimated theta for eight schools model', fontsize=11)
| {"golden_diff": "diff --git a/examples/plot_forest_ridge.py b/examples/plot_forest_ridge.py\n--- a/examples/plot_forest_ridge.py\n+++ b/examples/plot_forest_ridge.py\n@@ -15,7 +15,5 @@\n combined=True,\n textsize=11,\n ridgeplot_overlap=3,\n- colors='white',\n- r_hat=False,\n- n_eff=False)\n+ colors='white')\n axes[0].set_title('Estimated theta for eight schools model', fontsize=11)\n", "issue": "Docs are broken\nLooks like one of the examples still uses `n_eff`. From travis: \r\n\r\n```\r\nException occurred:\r\n File \"/home/travis/build/arviz-devs/arviz/examples/plot_forest_ridge.py\", line 20, in <module>\r\n n_eff=False)\r\nTypeError: plot_forest() got an unexpected keyword argument 'n_eff'\r\n```\n", "before_files": [{"content": "\"\"\"\nRidgeplot\n=========\n\n_thumb: .8, .5\n\"\"\"\nimport arviz as az\n\naz.style.use('arviz-darkgrid')\n\nnon_centered_data = az.load_arviz_data('non_centered_eight')\nfig, axes = az.plot_forest(non_centered_data,\n kind='ridgeplot',\n var_names=['theta'],\n combined=True,\n textsize=11,\n ridgeplot_overlap=3,\n colors='white',\n r_hat=False,\n n_eff=False)\naxes[0].set_title('Estimated theta for eight schools model', fontsize=11)\n", "path": "examples/plot_forest_ridge.py"}], "after_files": [{"content": "\"\"\"\nRidgeplot\n=========\n\n_thumb: .8, .5\n\"\"\"\nimport arviz as az\n\naz.style.use('arviz-darkgrid')\n\nnon_centered_data = az.load_arviz_data('non_centered_eight')\nfig, axes = az.plot_forest(non_centered_data,\n kind='ridgeplot',\n var_names=['theta'],\n combined=True,\n textsize=11,\n ridgeplot_overlap=3,\n colors='white')\naxes[0].set_title('Estimated theta for eight schools model', fontsize=11)\n", "path": "examples/plot_forest_ridge.py"}]} | 504 | 117 |
gh_patches_debug_21777 | rasdani/github-patches | git_diff | zulip__zulip-19818 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
markdown: Document built-in preprocessor priorities.
As a follow-up to #19783, it would be good to document the priorities assigned to the built-in preprocessors that the Python-Markdown library has. A couple of notes:
- This involves a bit of grunt work, the quickest way to do this is to loop over and print `md_engine.preprocessors._priorities` in `zerver/lib/templates.py`.
- Note that in `templates.py`, there are different cases where different sets of preprocessors are added, so one has to do the additional work to figure out which preprocessors are running in which of those cases and then document all the priorities that are for built-in preprocessors.
- The file to put these priorities in is: `zerver/lib/markdown/preprocessor_priorities..py`.
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/lib/markdown/preprocessor_priorities.py`
Content:
```
1 # Note that in the Markdown preprocessor registry, the highest
2 # numeric value is considered the highest priority, so the dict
3 # below is ordered from highest-to-lowest priority.
4 PREPROCESSOR_PRIORITES = {
5 "generate_parameter_description": 535,
6 "generate_response_description": 531,
7 "generate_api_title": 531,
8 "generate_api_description": 530,
9 "generate_code_example": 525,
10 "generate_return_values": 510,
11 "generate_api_arguments": 505,
12 "include": 500,
13 "help_relative_links": 475,
14 "setting": 450,
15 "fenced_code_block": 25,
16 "tabbed_sections": -500,
17 "nested_code_blocks": -500,
18 "emoticon_translations": -505,
19 }
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zerver/lib/markdown/preprocessor_priorities.py b/zerver/lib/markdown/preprocessor_priorities.py
--- a/zerver/lib/markdown/preprocessor_priorities.py
+++ b/zerver/lib/markdown/preprocessor_priorities.py
@@ -1,6 +1,7 @@
# Note that in the Markdown preprocessor registry, the highest
# numeric value is considered the highest priority, so the dict
# below is ordered from highest-to-lowest priority.
+# Priorities for the built-in preprocessors are commented out.
PREPROCESSOR_PRIORITES = {
"generate_parameter_description": 535,
"generate_response_description": 531,
@@ -10,9 +11,12 @@
"generate_return_values": 510,
"generate_api_arguments": 505,
"include": 500,
+ # "include_wrapper": 500,
"help_relative_links": 475,
"setting": 450,
+ # "normalize_whitespace": 30,
"fenced_code_block": 25,
+ # "html_block": 20,
"tabbed_sections": -500,
"nested_code_blocks": -500,
"emoticon_translations": -505,
| {"golden_diff": "diff --git a/zerver/lib/markdown/preprocessor_priorities.py b/zerver/lib/markdown/preprocessor_priorities.py\n--- a/zerver/lib/markdown/preprocessor_priorities.py\n+++ b/zerver/lib/markdown/preprocessor_priorities.py\n@@ -1,6 +1,7 @@\n # Note that in the Markdown preprocessor registry, the highest\n # numeric value is considered the highest priority, so the dict\n # below is ordered from highest-to-lowest priority.\n+# Priorities for the built-in preprocessors are commented out.\n PREPROCESSOR_PRIORITES = {\n \"generate_parameter_description\": 535,\n \"generate_response_description\": 531,\n@@ -10,9 +11,12 @@\n \"generate_return_values\": 510,\n \"generate_api_arguments\": 505,\n \"include\": 500,\n+ # \"include_wrapper\": 500,\n \"help_relative_links\": 475,\n \"setting\": 450,\n+ # \"normalize_whitespace\": 30,\n \"fenced_code_block\": 25,\n+ # \"html_block\": 20,\n \"tabbed_sections\": -500,\n \"nested_code_blocks\": -500,\n \"emoticon_translations\": -505,\n", "issue": "markdown: Document built-in preprocessor priorities.\nAs a follow-up to #19783, it would be good to document the priorities assigned to the built-in preprocessors that the Python-Markdown library has. A couple of notes:\r\n- This involves a bit of grunt work, the quickest way to do this is to loop over and print `md_engine.preprocessors._priorities` in `zerver/lib/templates.py`.\r\n- Note that in `templates.py`, there are different cases where different sets of preprocessors are added, so one has to do the additional work to figure out which preprocessors are running in which of those cases and then document all the priorities that are for built-in preprocessors.\r\n- The file to put these priorities in is: `zerver/lib/markdown/preprocessor_priorities..py`.\r\n\r\nThanks!\n", "before_files": [{"content": "# Note that in the Markdown preprocessor registry, the highest\n# numeric value is considered the highest priority, so the dict\n# below is ordered from highest-to-lowest priority.\nPREPROCESSOR_PRIORITES = {\n \"generate_parameter_description\": 535,\n \"generate_response_description\": 531,\n \"generate_api_title\": 531,\n \"generate_api_description\": 530,\n \"generate_code_example\": 525,\n \"generate_return_values\": 510,\n \"generate_api_arguments\": 505,\n \"include\": 500,\n \"help_relative_links\": 475,\n \"setting\": 450,\n \"fenced_code_block\": 25,\n \"tabbed_sections\": -500,\n \"nested_code_blocks\": -500,\n \"emoticon_translations\": -505,\n}\n", "path": "zerver/lib/markdown/preprocessor_priorities.py"}], "after_files": [{"content": "# Note that in the Markdown preprocessor registry, the highest\n# numeric value is considered the highest priority, so the dict\n# below is ordered from highest-to-lowest priority.\n# Priorities for the built-in preprocessors are commented out.\nPREPROCESSOR_PRIORITES = {\n \"generate_parameter_description\": 535,\n \"generate_response_description\": 531,\n \"generate_api_title\": 531,\n \"generate_api_description\": 530,\n \"generate_code_example\": 525,\n \"generate_return_values\": 510,\n \"generate_api_arguments\": 505,\n \"include\": 500,\n # \"include_wrapper\": 500,\n \"help_relative_links\": 475,\n \"setting\": 450,\n # \"normalize_whitespace\": 30,\n \"fenced_code_block\": 25,\n # \"html_block\": 20,\n \"tabbed_sections\": -500,\n \"nested_code_blocks\": -500,\n \"emoticon_translations\": -505,\n}\n", "path": "zerver/lib/markdown/preprocessor_priorities.py"}]} | 664 | 286 |
gh_patches_debug_33064 | rasdani/github-patches | git_diff | Textualize__textual-3825 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add message `Collapsible.Toggled`
What it says on the tin.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/_collapsible.py`
Content:
```
1 from __future__ import annotations
2
3 from rich.console import RenderableType
4 from rich.text import Text
5
6 from .. import events
7 from ..app import ComposeResult
8 from ..binding import Binding
9 from ..containers import Container
10 from ..css.query import NoMatches
11 from ..message import Message
12 from ..reactive import reactive
13 from ..widget import Widget
14
15 __all__ = ["Collapsible", "CollapsibleTitle"]
16
17
18 class CollapsibleTitle(Widget, can_focus=True):
19 """Title and symbol for the Collapsible."""
20
21 DEFAULT_CSS = """
22 CollapsibleTitle {
23 width: auto;
24 height: auto;
25 padding: 0 1 0 1;
26 }
27
28 CollapsibleTitle:hover {
29 background: $foreground 10%;
30 color: $text;
31 }
32
33 CollapsibleTitle:focus {
34 background: $accent;
35 color: $text;
36 }
37 """
38
39 BINDINGS = [Binding("enter", "toggle", "Toggle collapsible", show=False)]
40 """
41 | Key(s) | Description |
42 | :- | :- |
43 | enter | Toggle the collapsible. |
44 """
45
46 collapsed = reactive(True)
47
48 def __init__(
49 self,
50 *,
51 label: str,
52 collapsed_symbol: str,
53 expanded_symbol: str,
54 collapsed: bool,
55 ) -> None:
56 super().__init__()
57 self.collapsed_symbol = collapsed_symbol
58 self.expanded_symbol = expanded_symbol
59 self.label = label
60 self.collapse = collapsed
61
62 class Toggle(Message):
63 """Request toggle."""
64
65 async def _on_click(self, event: events.Click) -> None:
66 """Inform ancestor we want to toggle."""
67 event.stop()
68 self.post_message(self.Toggle())
69
70 def action_toggle(self) -> None:
71 """Toggle the state of the parent collapsible."""
72 self.post_message(self.Toggle())
73
74 def render(self) -> RenderableType:
75 """Compose right/down arrow and label."""
76 if self.collapsed:
77 return Text(f"{self.collapsed_symbol} {self.label}")
78 else:
79 return Text(f"{self.expanded_symbol} {self.label}")
80
81
82 class Collapsible(Widget):
83 """A collapsible container."""
84
85 collapsed = reactive(True)
86
87 DEFAULT_CSS = """
88 Collapsible {
89 width: 1fr;
90 height: auto;
91 background: $boost;
92 border-top: hkey $background;
93 padding-bottom: 1;
94 padding-left: 1;
95 }
96
97 Collapsible.-collapsed > Contents {
98 display: none;
99 }
100 """
101
102 class Contents(Container):
103 DEFAULT_CSS = """
104 Contents {
105 width: 100%;
106 height: auto;
107 padding: 1 0 0 3;
108 }
109 """
110
111 def __init__(
112 self,
113 *children: Widget,
114 title: str = "Toggle",
115 collapsed: bool = True,
116 collapsed_symbol: str = "▶",
117 expanded_symbol: str = "▼",
118 name: str | None = None,
119 id: str | None = None,
120 classes: str | None = None,
121 disabled: bool = False,
122 ) -> None:
123 """Initialize a Collapsible widget.
124
125 Args:
126 *children: Contents that will be collapsed/expanded.
127 title: Title of the collapsed/expanded contents.
128 collapsed: Default status of the contents.
129 collapsed_symbol: Collapsed symbol before the title.
130 expanded_symbol: Expanded symbol before the title.
131 name: The name of the collapsible.
132 id: The ID of the collapsible in the DOM.
133 classes: The CSS classes of the collapsible.
134 disabled: Whether the collapsible is disabled or not.
135 """
136 self._title = CollapsibleTitle(
137 label=title,
138 collapsed_symbol=collapsed_symbol,
139 expanded_symbol=expanded_symbol,
140 collapsed=collapsed,
141 )
142 self._contents_list: list[Widget] = list(children)
143 super().__init__(name=name, id=id, classes=classes, disabled=disabled)
144 self.collapsed = collapsed
145
146 def on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:
147 event.stop()
148 self.collapsed = not self.collapsed
149
150 def _watch_collapsed(self, collapsed: bool) -> None:
151 """Update collapsed state when reactive is changed."""
152 self._update_collapsed(collapsed)
153
154 def _update_collapsed(self, collapsed: bool) -> None:
155 """Update children to match collapsed state."""
156 try:
157 self._title.collapsed = collapsed
158 self.set_class(collapsed, "-collapsed")
159 except NoMatches:
160 pass
161
162 def _on_mount(self) -> None:
163 """Initialise collapsed state."""
164 self._update_collapsed(self.collapsed)
165
166 def compose(self) -> ComposeResult:
167 yield self._title
168 yield self.Contents(*self._contents_list)
169
170 def compose_add_child(self, widget: Widget) -> None:
171 """When using the context manager compose syntax, we want to attach nodes to the contents.
172
173 Args:
174 widget: A Widget to add.
175 """
176 self._contents_list.append(widget)
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/textual/widgets/_collapsible.py b/src/textual/widgets/_collapsible.py
--- a/src/textual/widgets/_collapsible.py
+++ b/src/textual/widgets/_collapsible.py
@@ -99,6 +99,42 @@
}
"""
+ class Toggled(Message):
+ """Parent class subclassed by `Collapsible` messages.
+
+ Can be handled with `on(Collapsible.Toggled)` if you want to handle expansions
+ and collapsed in the same way, or you can handle the specific events individually.
+ """
+
+ def __init__(self, collapsible: Collapsible) -> None:
+ """Create an instance of the message.
+
+ Args:
+ collapsible: The `Collapsible` widget that was toggled.
+ """
+ self.collapsible: Collapsible = collapsible
+ """The collapsible that was toggled."""
+ super().__init__()
+
+ @property
+ def control(self) -> Collapsible:
+ """An alias for [Toggled.collapsible][textual.widgets.Collapsible.Toggled.collapsible]."""
+ return self.collapsible
+
+ class Expanded(Toggled):
+ """Event sent when the `Collapsible` widget is expanded.
+
+ Can be handled using `on_collapsible_expanded` in a subclass of
+ [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.
+ """
+
+ class Collapsed(Toggled):
+ """Event sent when the `Collapsible` widget is collapsed.
+
+ Can be handled using `on_collapsible_collapsed` in a subclass of
+ [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.
+ """
+
class Contents(Container):
DEFAULT_CSS = """
Contents {
@@ -143,9 +179,13 @@
super().__init__(name=name, id=id, classes=classes, disabled=disabled)
self.collapsed = collapsed
- def on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:
+ def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:
event.stop()
self.collapsed = not self.collapsed
+ if self.collapsed:
+ self.post_message(self.Collapsed(self))
+ else:
+ self.post_message(self.Expanded(self))
def _watch_collapsed(self, collapsed: bool) -> None:
"""Update collapsed state when reactive is changed."""
| {"golden_diff": "diff --git a/src/textual/widgets/_collapsible.py b/src/textual/widgets/_collapsible.py\n--- a/src/textual/widgets/_collapsible.py\n+++ b/src/textual/widgets/_collapsible.py\n@@ -99,6 +99,42 @@\n }\n \"\"\"\n \n+ class Toggled(Message):\n+ \"\"\"Parent class subclassed by `Collapsible` messages.\n+\n+ Can be handled with `on(Collapsible.Toggled)` if you want to handle expansions\n+ and collapsed in the same way, or you can handle the specific events individually.\n+ \"\"\"\n+\n+ def __init__(self, collapsible: Collapsible) -> None:\n+ \"\"\"Create an instance of the message.\n+\n+ Args:\n+ collapsible: The `Collapsible` widget that was toggled.\n+ \"\"\"\n+ self.collapsible: Collapsible = collapsible\n+ \"\"\"The collapsible that was toggled.\"\"\"\n+ super().__init__()\n+\n+ @property\n+ def control(self) -> Collapsible:\n+ \"\"\"An alias for [Toggled.collapsible][textual.widgets.Collapsible.Toggled.collapsible].\"\"\"\n+ return self.collapsible\n+\n+ class Expanded(Toggled):\n+ \"\"\"Event sent when the `Collapsible` widget is expanded.\n+\n+ Can be handled using `on_collapsible_expanded` in a subclass of\n+ [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n+ \"\"\"\n+\n+ class Collapsed(Toggled):\n+ \"\"\"Event sent when the `Collapsible` widget is collapsed.\n+\n+ Can be handled using `on_collapsible_collapsed` in a subclass of\n+ [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n+ \"\"\"\n+\n class Contents(Container):\n DEFAULT_CSS = \"\"\"\n Contents {\n@@ -143,9 +179,13 @@\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n self.collapsed = collapsed\n \n- def on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:\n+ def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:\n event.stop()\n self.collapsed = not self.collapsed\n+ if self.collapsed:\n+ self.post_message(self.Collapsed(self))\n+ else:\n+ self.post_message(self.Expanded(self))\n \n def _watch_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update collapsed state when reactive is changed.\"\"\"\n", "issue": "Add message `Collapsible.Toggled`\nWhat it says on the tin.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom rich.console import RenderableType\nfrom rich.text import Text\n\nfrom .. import events\nfrom ..app import ComposeResult\nfrom ..binding import Binding\nfrom ..containers import Container\nfrom ..css.query import NoMatches\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..widget import Widget\n\n__all__ = [\"Collapsible\", \"CollapsibleTitle\"]\n\n\nclass CollapsibleTitle(Widget, can_focus=True):\n \"\"\"Title and symbol for the Collapsible.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n CollapsibleTitle {\n width: auto;\n height: auto;\n padding: 0 1 0 1;\n }\n\n CollapsibleTitle:hover {\n background: $foreground 10%;\n color: $text;\n }\n\n CollapsibleTitle:focus {\n background: $accent;\n color: $text;\n }\n \"\"\"\n\n BINDINGS = [Binding(\"enter\", \"toggle\", \"Toggle collapsible\", show=False)]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter | Toggle the collapsible. 
|\n \"\"\"\n\n collapsed = reactive(True)\n\n def __init__(\n self,\n *,\n label: str,\n collapsed_symbol: str,\n expanded_symbol: str,\n collapsed: bool,\n ) -> None:\n super().__init__()\n self.collapsed_symbol = collapsed_symbol\n self.expanded_symbol = expanded_symbol\n self.label = label\n self.collapse = collapsed\n\n class Toggle(Message):\n \"\"\"Request toggle.\"\"\"\n\n async def _on_click(self, event: events.Click) -> None:\n \"\"\"Inform ancestor we want to toggle.\"\"\"\n event.stop()\n self.post_message(self.Toggle())\n\n def action_toggle(self) -> None:\n \"\"\"Toggle the state of the parent collapsible.\"\"\"\n self.post_message(self.Toggle())\n\n def render(self) -> RenderableType:\n \"\"\"Compose right/down arrow and label.\"\"\"\n if self.collapsed:\n return Text(f\"{self.collapsed_symbol} {self.label}\")\n else:\n return Text(f\"{self.expanded_symbol} {self.label}\")\n\n\nclass Collapsible(Widget):\n \"\"\"A collapsible container.\"\"\"\n\n collapsed = reactive(True)\n\n DEFAULT_CSS = \"\"\"\n Collapsible {\n width: 1fr;\n height: auto;\n background: $boost;\n border-top: hkey $background;\n padding-bottom: 1;\n padding-left: 1;\n }\n\n Collapsible.-collapsed > Contents {\n display: none;\n }\n \"\"\"\n\n class Contents(Container):\n DEFAULT_CSS = \"\"\"\n Contents {\n width: 100%;\n height: auto;\n padding: 1 0 0 3;\n }\n \"\"\"\n\n def __init__(\n self,\n *children: Widget,\n title: str = \"Toggle\",\n collapsed: bool = True,\n collapsed_symbol: str = \"\u25b6\",\n expanded_symbol: str = \"\u25bc\",\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ) -> None:\n \"\"\"Initialize a Collapsible widget.\n\n Args:\n *children: Contents that will be collapsed/expanded.\n title: Title of the collapsed/expanded contents.\n collapsed: Default status of the contents.\n collapsed_symbol: Collapsed symbol before the title.\n expanded_symbol: Expanded symbol before the title.\n name: The name of the collapsible.\n id: The ID of the collapsible in the DOM.\n classes: The CSS classes of the collapsible.\n disabled: Whether the collapsible is disabled or not.\n \"\"\"\n self._title = CollapsibleTitle(\n label=title,\n collapsed_symbol=collapsed_symbol,\n expanded_symbol=expanded_symbol,\n collapsed=collapsed,\n )\n self._contents_list: list[Widget] = list(children)\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n self.collapsed = collapsed\n\n def on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:\n event.stop()\n self.collapsed = not self.collapsed\n\n def _watch_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update collapsed state when reactive is changed.\"\"\"\n self._update_collapsed(collapsed)\n\n def _update_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update children to match collapsed state.\"\"\"\n try:\n self._title.collapsed = collapsed\n self.set_class(collapsed, \"-collapsed\")\n except NoMatches:\n pass\n\n def _on_mount(self) -> None:\n \"\"\"Initialise collapsed state.\"\"\"\n self._update_collapsed(self.collapsed)\n\n def compose(self) -> ComposeResult:\n yield self._title\n yield self.Contents(*self._contents_list)\n\n def compose_add_child(self, widget: Widget) -> None:\n \"\"\"When using the context manager compose syntax, we want to attach nodes to the contents.\n\n Args:\n widget: A Widget to add.\n \"\"\"\n self._contents_list.append(widget)\n", "path": "src/textual/widgets/_collapsible.py"}], "after_files": [{"content": "from __future__ import 
annotations\n\nfrom rich.console import RenderableType\nfrom rich.text import Text\n\nfrom .. import events\nfrom ..app import ComposeResult\nfrom ..binding import Binding\nfrom ..containers import Container\nfrom ..css.query import NoMatches\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..widget import Widget\n\n__all__ = [\"Collapsible\", \"CollapsibleTitle\"]\n\n\nclass CollapsibleTitle(Widget, can_focus=True):\n \"\"\"Title and symbol for the Collapsible.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n CollapsibleTitle {\n width: auto;\n height: auto;\n padding: 0 1 0 1;\n }\n\n CollapsibleTitle:hover {\n background: $foreground 10%;\n color: $text;\n }\n\n CollapsibleTitle:focus {\n background: $accent;\n color: $text;\n }\n \"\"\"\n\n BINDINGS = [Binding(\"enter\", \"toggle\", \"Toggle collapsible\", show=False)]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter | Toggle the collapsible. |\n \"\"\"\n\n collapsed = reactive(True)\n\n def __init__(\n self,\n *,\n label: str,\n collapsed_symbol: str,\n expanded_symbol: str,\n collapsed: bool,\n ) -> None:\n super().__init__()\n self.collapsed_symbol = collapsed_symbol\n self.expanded_symbol = expanded_symbol\n self.label = label\n self.collapse = collapsed\n\n class Toggle(Message):\n \"\"\"Request toggle.\"\"\"\n\n async def _on_click(self, event: events.Click) -> None:\n \"\"\"Inform ancestor we want to toggle.\"\"\"\n event.stop()\n self.post_message(self.Toggle())\n\n def action_toggle(self) -> None:\n \"\"\"Toggle the state of the parent collapsible.\"\"\"\n self.post_message(self.Toggle())\n\n def render(self) -> RenderableType:\n \"\"\"Compose right/down arrow and label.\"\"\"\n if self.collapsed:\n return Text(f\"{self.collapsed_symbol} {self.label}\")\n else:\n return Text(f\"{self.expanded_symbol} {self.label}\")\n\n\nclass Collapsible(Widget):\n \"\"\"A collapsible container.\"\"\"\n\n collapsed = reactive(True)\n\n DEFAULT_CSS = \"\"\"\n Collapsible {\n width: 1fr;\n height: auto;\n background: $boost;\n border-top: hkey $background;\n padding-bottom: 1;\n padding-left: 1;\n }\n\n Collapsible.-collapsed > Contents {\n display: none;\n }\n \"\"\"\n\n class Toggled(Message):\n \"\"\"Parent class subclassed by `Collapsible` messages.\n\n Can be handled with `on(Collapsible.Toggled)` if you want to handle expansions\n and collapsed in the same way, or you can handle the specific events individually.\n \"\"\"\n\n def __init__(self, collapsible: Collapsible) -> None:\n \"\"\"Create an instance of the message.\n\n Args:\n collapsible: The `Collapsible` widget that was toggled.\n \"\"\"\n self.collapsible: Collapsible = collapsible\n \"\"\"The collapsible that was toggled.\"\"\"\n super().__init__()\n\n @property\n def control(self) -> Collapsible:\n \"\"\"An alias for [Toggled.collapsible][textual.widgets.Collapsible.Toggled.collapsible].\"\"\"\n return self.collapsible\n\n class Expanded(Toggled):\n \"\"\"Event sent when the `Collapsible` widget is expanded.\n\n Can be handled using `on_collapsible_expanded` in a subclass of\n [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n \"\"\"\n\n class Collapsed(Toggled):\n \"\"\"Event sent when the `Collapsible` widget is collapsed.\n\n Can be handled using `on_collapsible_collapsed` in a subclass of\n [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n \"\"\"\n\n class Contents(Container):\n DEFAULT_CSS = \"\"\"\n Contents {\n width: 100%;\n height: auto;\n padding: 1 0 0 3;\n }\n \"\"\"\n\n def __init__(\n 
self,\n *children: Widget,\n title: str = \"Toggle\",\n collapsed: bool = True,\n collapsed_symbol: str = \"\u25b6\",\n expanded_symbol: str = \"\u25bc\",\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ) -> None:\n \"\"\"Initialize a Collapsible widget.\n\n Args:\n *children: Contents that will be collapsed/expanded.\n title: Title of the collapsed/expanded contents.\n collapsed: Default status of the contents.\n collapsed_symbol: Collapsed symbol before the title.\n expanded_symbol: Expanded symbol before the title.\n name: The name of the collapsible.\n id: The ID of the collapsible in the DOM.\n classes: The CSS classes of the collapsible.\n disabled: Whether the collapsible is disabled or not.\n \"\"\"\n self._title = CollapsibleTitle(\n label=title,\n collapsed_symbol=collapsed_symbol,\n expanded_symbol=expanded_symbol,\n collapsed=collapsed,\n )\n self._contents_list: list[Widget] = list(children)\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n self.collapsed = collapsed\n\n def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:\n event.stop()\n self.collapsed = not self.collapsed\n if self.collapsed:\n self.post_message(self.Collapsed(self))\n else:\n self.post_message(self.Expanded(self))\n\n def _watch_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update collapsed state when reactive is changed.\"\"\"\n self._update_collapsed(collapsed)\n\n def _update_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update children to match collapsed state.\"\"\"\n try:\n self._title.collapsed = collapsed\n self.set_class(collapsed, \"-collapsed\")\n except NoMatches:\n pass\n\n def _on_mount(self) -> None:\n \"\"\"Initialise collapsed state.\"\"\"\n self._update_collapsed(self.collapsed)\n\n def compose(self) -> ComposeResult:\n yield self._title\n yield self.Contents(*self._contents_list)\n\n def compose_add_child(self, widget: Widget) -> None:\n \"\"\"When using the context manager compose syntax, we want to attach nodes to the contents.\n\n Args:\n widget: A Widget to add.\n \"\"\"\n self._contents_list.append(widget)\n", "path": "src/textual/widgets/_collapsible.py"}]} | 1,803 | 557 |
gh_patches_debug_33956 | rasdani/github-patches | git_diff | hylang__hy-2299 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Slow startup when Hy is installed from a wheel
Testing the new release of 0.16.0, I see that startup is much slower when installing from the wheel than from the source distribution or directly from the repository. Likewise for older Hy releases. Even when I make sure the `__pycache__`s are included in the wheel and I can see they're installed. Either there's something wonky with my system, or wheel installation doesn't play nicely with premade byte-compiled files.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4
5 import fastentrypoints # Monkey-patches setuptools.
6 from get_version import __version__
7 from setuptools import find_packages, setup
8
9 os.chdir(os.path.split(os.path.abspath(__file__))[0])
10
11 PKG = "hy"
12
13 long_description = """Hy is a Python <--> Lisp layer. It helps
14 make things work nicer, and lets Python and the Hy lisp variant play
15 nice together. """
16
17 setup(
18 name=PKG,
19 version=__version__,
20 install_requires=[
21 "funcparserlib ~= 1.0",
22 "colorama",
23 'astor>=0.8 ; python_version < "3.9"',
24 ],
25 python_requires=">= 3.7, < 3.11",
26 entry_points={
27 "console_scripts": [
28 "hy = hy.cmdline:hy_main",
29 "hy3 = hy.cmdline:hy_main",
30 "hyc = hy.cmdline:hyc_main",
31 "hyc3 = hy.cmdline:hyc_main",
32 "hy2py = hy.cmdline:hy2py_main",
33 "hy2py3 = hy.cmdline:hy2py_main",
34 ]
35 },
36 packages=find_packages(exclude=["tests*"]),
37 package_data={
38 "hy": ["*.hy", "__pycache__/*"],
39 "hy.contrib": ["*.hy", "__pycache__/*"],
40 "hy.core": ["*.hy", "__pycache__/*"],
41 "hy.extra": ["*.hy", "__pycache__/*"],
42 },
43 data_files=[("get_version", ["get_version.py"])],
44 author="Paul Tagliamonte",
45 author_email="[email protected]",
46 long_description=long_description,
47 description="Lisp and Python love each other.",
48 license="Expat",
49 url="http://hylang.org/",
50 platforms=["any"],
51 classifiers=[
52 "Development Status :: 4 - Beta",
53 "Intended Audience :: Developers",
54 "License :: DFSG approved",
55 "License :: OSI Approved :: MIT License", # Really "Expat". Ugh.
56 "Operating System :: OS Independent",
57 "Programming Language :: Lisp",
58 "Programming Language :: Python",
59 "Programming Language :: Python :: 3",
60 "Programming Language :: Python :: 3.7",
61 "Programming Language :: Python :: 3.8",
62 "Programming Language :: Python :: 3.9",
63 "Programming Language :: Python :: 3.10",
64 "Topic :: Software Development :: Code Generators",
65 "Topic :: Software Development :: Compilers",
66 "Topic :: Software Development :: Libraries",
67 ],
68 project_urls={
69 "Documentation": "https://docs.hylang.org/",
70 "Source": "https://github.com/hylang/hy",
71 },
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,6 +5,7 @@
import fastentrypoints # Monkey-patches setuptools.
from get_version import __version__
from setuptools import find_packages, setup
+from setuptools.command.install import install
os.chdir(os.path.split(os.path.abspath(__file__))[0])
@@ -14,14 +15,34 @@
make things work nicer, and lets Python and the Hy lisp variant play
nice together. """
+
+class install(install):
+ def run(self):
+ super().run()
+ import py_compile
+ from glob import glob
+
+ import hy # for compile hooks
+
+ for path in glob(os.path.join(self.install_lib, "**/*.hy"), recursive=True):
+ py_compile.compile(
+ path, invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH
+ )
+
+
+# both setup_requires and install_requires
+# since we need to compile .hy files during setup
+requires = [
+ "funcparserlib ~= 1.0",
+ "colorama",
+ 'astor>=0.8 ; python_version < "3.9"',
+]
+
setup(
name=PKG,
version=__version__,
- install_requires=[
- "funcparserlib ~= 1.0",
- "colorama",
- 'astor>=0.8 ; python_version < "3.9"',
- ],
+ setup_requires=requires,
+ install_requires=requires,
python_requires=">= 3.7, < 3.11",
entry_points={
"console_scripts": [
@@ -35,10 +56,7 @@
},
packages=find_packages(exclude=["tests*"]),
package_data={
- "hy": ["*.hy", "__pycache__/*"],
- "hy.contrib": ["*.hy", "__pycache__/*"],
- "hy.core": ["*.hy", "__pycache__/*"],
- "hy.extra": ["*.hy", "__pycache__/*"],
+ "": ["*.hy"],
},
data_files=[("get_version", ["get_version.py"])],
author="Paul Tagliamonte",
@@ -69,4 +87,7 @@
"Documentation": "https://docs.hylang.org/",
"Source": "https://github.com/hylang/hy",
},
+ cmdclass={
+ "install": install,
+ },
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,6 +5,7 @@\n import fastentrypoints # Monkey-patches setuptools.\n from get_version import __version__\n from setuptools import find_packages, setup\n+from setuptools.command.install import install\n \n os.chdir(os.path.split(os.path.abspath(__file__))[0])\n \n@@ -14,14 +15,34 @@\n make things work nicer, and lets Python and the Hy lisp variant play\n nice together. \"\"\"\n \n+\n+class install(install):\n+ def run(self):\n+ super().run()\n+ import py_compile\n+ from glob import glob\n+\n+ import hy # for compile hooks\n+\n+ for path in glob(os.path.join(self.install_lib, \"**/*.hy\"), recursive=True):\n+ py_compile.compile(\n+ path, invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH\n+ )\n+\n+\n+# both setup_requires and install_requires\n+# since we need to compile .hy files during setup\n+requires = [\n+ \"funcparserlib ~= 1.0\",\n+ \"colorama\",\n+ 'astor>=0.8 ; python_version < \"3.9\"',\n+]\n+\n setup(\n name=PKG,\n version=__version__,\n- install_requires=[\n- \"funcparserlib ~= 1.0\",\n- \"colorama\",\n- 'astor>=0.8 ; python_version < \"3.9\"',\n- ],\n+ setup_requires=requires,\n+ install_requires=requires,\n python_requires=\">= 3.7, < 3.11\",\n entry_points={\n \"console_scripts\": [\n@@ -35,10 +56,7 @@\n },\n packages=find_packages(exclude=[\"tests*\"]),\n package_data={\n- \"hy\": [\"*.hy\", \"__pycache__/*\"],\n- \"hy.contrib\": [\"*.hy\", \"__pycache__/*\"],\n- \"hy.core\": [\"*.hy\", \"__pycache__/*\"],\n- \"hy.extra\": [\"*.hy\", \"__pycache__/*\"],\n+ \"\": [\"*.hy\"],\n },\n data_files=[(\"get_version\", [\"get_version.py\"])],\n author=\"Paul Tagliamonte\",\n@@ -69,4 +87,7 @@\n \"Documentation\": \"https://docs.hylang.org/\",\n \"Source\": \"https://github.com/hylang/hy\",\n },\n+ cmdclass={\n+ \"install\": install,\n+ },\n )\n", "issue": "Slow startup when Hy is installed from a wheel\nTesting the new release of 0.16.0, I see that startup is much slower when installing from the wheel than from the source distribution or directly from the repository. Likewise for older Hy releases. Even when I make sure the `__pycache__`s are included in the wheel and I can see they're installed. Either there's something wonky with my system, or wheel installation doesn't play nicely with premade byte-compiled files.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\n\nimport fastentrypoints # Monkey-patches setuptools.\nfrom get_version import __version__\nfrom setuptools import find_packages, setup\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. 
\"\"\"\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=[\n \"funcparserlib ~= 1.0\",\n \"colorama\",\n 'astor>=0.8 ; python_version < \"3.9\"',\n ],\n python_requires=\">= 3.7, < 3.11\",\n entry_points={\n \"console_scripts\": [\n \"hy = hy.cmdline:hy_main\",\n \"hy3 = hy.cmdline:hy_main\",\n \"hyc = hy.cmdline:hyc_main\",\n \"hyc3 = hy.cmdline:hyc_main\",\n \"hy2py = hy.cmdline:hy2py_main\",\n \"hy2py3 = hy.cmdline:hy2py_main\",\n ]\n },\n packages=find_packages(exclude=[\"tests*\"]),\n package_data={\n \"hy\": [\"*.hy\", \"__pycache__/*\"],\n \"hy.contrib\": [\"*.hy\", \"__pycache__/*\"],\n \"hy.core\": [\"*.hy\", \"__pycache__/*\"],\n \"hy.extra\": [\"*.hy\", \"__pycache__/*\"],\n },\n data_files=[(\"get_version\", [\"get_version.py\"])],\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description=\"Lisp and Python love each other.\",\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ],\n project_urls={\n \"Documentation\": \"https://docs.hylang.org/\",\n \"Source\": \"https://github.com/hylang/hy\",\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport os\n\nimport fastentrypoints # Monkey-patches setuptools.\nfrom get_version import __version__\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. 
\"\"\"\n\n\nclass install(install):\n def run(self):\n super().run()\n import py_compile\n from glob import glob\n\n import hy # for compile hooks\n\n for path in glob(os.path.join(self.install_lib, \"**/*.hy\"), recursive=True):\n py_compile.compile(\n path, invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH\n )\n\n\n# both setup_requires and install_requires\n# since we need to compile .hy files during setup\nrequires = [\n \"funcparserlib ~= 1.0\",\n \"colorama\",\n 'astor>=0.8 ; python_version < \"3.9\"',\n]\n\nsetup(\n name=PKG,\n version=__version__,\n setup_requires=requires,\n install_requires=requires,\n python_requires=\">= 3.7, < 3.11\",\n entry_points={\n \"console_scripts\": [\n \"hy = hy.cmdline:hy_main\",\n \"hy3 = hy.cmdline:hy_main\",\n \"hyc = hy.cmdline:hyc_main\",\n \"hyc3 = hy.cmdline:hyc_main\",\n \"hy2py = hy.cmdline:hy2py_main\",\n \"hy2py3 = hy.cmdline:hy2py_main\",\n ]\n },\n packages=find_packages(exclude=[\"tests*\"]),\n package_data={\n \"\": [\"*.hy\"],\n },\n data_files=[(\"get_version\", [\"get_version.py\"])],\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description=\"Lisp and Python love each other.\",\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ],\n project_urls={\n \"Documentation\": \"https://docs.hylang.org/\",\n \"Source\": \"https://github.com/hylang/hy\",\n },\n cmdclass={\n \"install\": install,\n },\n)\n", "path": "setup.py"}]} | 1,099 | 552 |
gh_patches_debug_22100 | rasdani/github-patches | git_diff | microsoft__playwright-python-525 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installation issues when using pdm
[pdm](https://github.com/frostming/pdm/) is a new tool for Python environment managing.
It works very well, but when I try to install playwright, it fails to install. You can find the details here:
https://github.com/frostming/pdm/issues/269
The PDM developer points out that playwright seems to be missing some required fields in the wheels configs (see last comment in the linked issue).
I would highly appreciate if you could look into this issue. Cheers! 🍺
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import os
17 import shutil
18 import subprocess
19 import sys
20 import zipfile
21 from pathlib import Path
22
23 import setuptools
24 from wheel.bdist_wheel import bdist_wheel as BDistWheelCommand
25
26 driver_version = "1.9.0-1614037901000"
27
28 def extractall(zip: zipfile.ZipFile, path: str) -> None:
29 for name in zip.namelist():
30 member = zip.getinfo(name)
31 extracted_path = zip.extract(member, path)
32 attr = member.external_attr >> 16
33 if attr != 0:
34 os.chmod(extracted_path, attr)
35
36
37 class PlaywrightBDistWheelCommand(BDistWheelCommand):
38 def run(self) -> None:
39 if os.path.exists("build"):
40 shutil.rmtree("build")
41 if os.path.exists("dist"):
42 shutil.rmtree("dist")
43 if os.path.exists("playwright.egg-info"):
44 shutil.rmtree("playwright.egg-info")
45 super().run()
46 os.makedirs("driver", exist_ok=True)
47 os.makedirs("playwright/driver", exist_ok=True)
48 for platform in ["mac", "linux", "win32", "win32_x64"]:
49 zip_file = f"playwright-{driver_version}-{platform}.zip"
50 if not os.path.exists("driver/" + zip_file):
51 url = "https://playwright.azureedge.net/builds/driver/"
52 url = url + "next/"
53 url = url + zip_file
54 print("Fetching ", url)
55 subprocess.check_call(
56 ["curl", "--http1.1", url, "-o", "driver/" + zip_file]
57 )
58 base_wheel_location = glob.glob("dist/*.whl")[0]
59 without_platform = base_wheel_location[:-7]
60 platform_map = {
61 "darwin": "mac",
62 "linux": "linux",
63 "win32": "win32_x64" if sys.maxsize > 2 ** 32 else "win32",
64 }
65 for platform in ["mac", "linux", "win32", "win32_x64"]:
66 zip_file = f"driver/playwright-{driver_version}-{platform}.zip"
67 with zipfile.ZipFile(zip_file, "r") as zip:
68 extractall(zip, f"driver/{platform}")
69 if platform_map[sys.platform] == platform:
70 with zipfile.ZipFile(zip_file, "r") as zip:
71 extractall(zip, "playwright/driver")
72 wheel = ""
73 if platform == "mac":
74 wheel = "macosx_10_13_x86_64.whl"
75 if platform == "linux":
76 wheel = "manylinux1_x86_64.whl"
77 if platform == "win32":
78 wheel = "win32.whl"
79 if platform == "win32_x64":
80 wheel = "win_amd64.whl"
81 wheel_location = without_platform + wheel
82 shutil.copy(base_wheel_location, wheel_location)
83 with zipfile.ZipFile(wheel_location, "a") as zip:
84 driver_root = os.path.abspath(f"driver/{platform}")
85 for dir_path, _, files in os.walk(driver_root):
86 for file in files:
87 from_path = os.path.join(dir_path, file)
88 to_path = os.path.relpath(from_path, driver_root)
89 zip.write(from_path, f"playwright/driver/{to_path}")
90 if platform == "mac":
91 # Ship mac both as 10_13 as and 11_0 universal to work across Macs.
92 universal_location = without_platform + "macosx_11_0_universal2.whl"
93 shutil.copyfile(wheel_location, universal_location)
94 with zipfile.ZipFile(universal_location, "a") as zip:
95 zip.writestr("playwright/driver/README.md", "Universal Mac package")
96
97 os.remove(base_wheel_location)
98
99
100 setuptools.setup(
101 name="playwright",
102 author="Microsoft Corporation",
103 author_email="",
104 description="A high-level API to automate web browsers",
105 long_description=Path("README.md").read_text(encoding="utf-8"),
106 long_description_content_type="text/markdown",
107 url="https://github.com/Microsoft/playwright-python",
108 packages=["playwright"],
109 include_package_data=True,
110 install_requires=[
111 "greenlet==1.0.0",
112 "pyee>=8.0.1",
113 "typing-extensions;python_version<='3.8'",
114 ],
115 classifiers=[
116 "Topic :: Software Development :: Testing",
117 "Topic :: Internet :: WWW/HTTP :: Browsers",
118 "Intended Audience :: Developers",
119 "Programming Language :: Python :: 3",
120 "Programming Language :: Python :: 3.7",
121 "Programming Language :: Python :: 3.8",
122 "Programming Language :: Python :: 3.9",
123 "License :: OSI Approved :: Apache Software License",
124 "Operating System :: OS Independent",
125 ],
126 python_requires=">=3.7",
127 cmdclass={"bdist_wheel": PlaywrightBDistWheelCommand},
128 use_scm_version={
129 "version_scheme": "post-release",
130 "write_to": "playwright/_repo_version.py",
131 "write_to_template": 'version = "{version}"\n',
132 },
133 setup_requires=["setuptools_scm", "wheel"],
134 entry_points={
135 "console_scripts": [
136 "playwright=playwright.__main__:main",
137 ],
138 },
139 )
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,10 +21,12 @@
from pathlib import Path
import setuptools
+from auditwheel.wheeltools import InWheel
from wheel.bdist_wheel import bdist_wheel as BDistWheelCommand
driver_version = "1.9.0-1614037901000"
+
def extractall(zip: zipfile.ZipFile, path: str) -> None:
for name in zip.namelist():
member = zip.getinfo(name)
@@ -95,6 +97,18 @@
zip.writestr("playwright/driver/README.md", "Universal Mac package")
os.remove(base_wheel_location)
+ for whlfile in glob.glob("dist/*.whl"):
+
+ os.makedirs("wheelhouse", exist_ok=True)
+ with InWheel(
+ in_wheel=whlfile,
+ out_wheel=os.path.join("wheelhouse", os.path.basename(whlfile)),
+ ret_self=True,
+ ):
+ print("Updating RECORD file of %s" % whlfile)
+ shutil.rmtree("dist")
+ print("Copying new wheels")
+ shutil.move("wheelhouse", "dist")
setuptools.setup(
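
Editor's note on the fix above: resolvers such as pdm validate a wheel's RECORD metadata, and appending driver files to an already-built wheel with `zipfile` leaves RECORD stale. The patch therefore repacks each wheel through `auditwheel.wheeltools.InWheel`, which regenerates RECORD when the context exits. A minimal sketch of that step, assuming `auditwheel` is installed (the `ret_self=True` flag mirrors the patch; treat the exact signature as version-dependent):

```python
import glob
import os
import shutil

from auditwheel.wheeltools import InWheel

# Repack every wheel in dist/ so its RECORD matches the injected driver files.
for whlfile in glob.glob("dist/*.whl"):
    os.makedirs("wheelhouse", exist_ok=True)
    with InWheel(
        in_wheel=whlfile,
        out_wheel=os.path.join("wheelhouse", os.path.basename(whlfile)),
        ret_self=True,
    ):
        print("Updating RECORD file of %s" % whlfile)

# Replace the stale wheels with the repacked ones.
shutil.rmtree("dist")
shutil.move("wheelhouse", "dist")
```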
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,10 +21,12 @@\n from pathlib import Path\n \n import setuptools\n+from auditwheel.wheeltools import InWheel\n from wheel.bdist_wheel import bdist_wheel as BDistWheelCommand\n \n driver_version = \"1.9.0-1614037901000\"\n \n+\n def extractall(zip: zipfile.ZipFile, path: str) -> None:\n for name in zip.namelist():\n member = zip.getinfo(name)\n@@ -95,6 +97,18 @@\n zip.writestr(\"playwright/driver/README.md\", \"Universal Mac package\")\n \n os.remove(base_wheel_location)\n+ for whlfile in glob.glob(\"dist/*.whl\"):\n+\n+ os.makedirs(\"wheelhouse\", exist_ok=True)\n+ with InWheel(\n+ in_wheel=whlfile,\n+ out_wheel=os.path.join(\"wheelhouse\", os.path.basename(whlfile)),\n+ ret_self=True,\n+ ):\n+ print(\"Updating RECORD file of %s\" % whlfile)\n+ shutil.rmtree(\"dist\")\n+ print(\"Copying new wheels\")\n+ shutil.move(\"wheelhouse\", \"dist\")\n \n \n setuptools.setup(\n", "issue": "Installation issues when using pdm\n[pdm](https://github.com/frostming/pdm/) is a new tool for Python environment managing.\r\nIt works very well, but when I try to install playwright, it fails to install. You can find the details here: \r\nhttps://github.com/frostming/pdm/issues/269\r\n\r\nThe PDM developer points out that playwright seems to be missing some required fields in the wheels configs (see last comment in the linked issue).\r\n\r\nI would highly appreciate if you could look into this issue. Cheers! \ud83c\udf7a\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport glob\nimport os\nimport shutil\nimport subprocess\nimport sys\nimport zipfile\nfrom pathlib import Path\n\nimport setuptools\nfrom wheel.bdist_wheel import bdist_wheel as BDistWheelCommand\n\ndriver_version = \"1.9.0-1614037901000\"\n\ndef extractall(zip: zipfile.ZipFile, path: str) -> None:\n for name in zip.namelist():\n member = zip.getinfo(name)\n extracted_path = zip.extract(member, path)\n attr = member.external_attr >> 16\n if attr != 0:\n os.chmod(extracted_path, attr)\n\n\nclass PlaywrightBDistWheelCommand(BDistWheelCommand):\n def run(self) -> None:\n if os.path.exists(\"build\"):\n shutil.rmtree(\"build\")\n if os.path.exists(\"dist\"):\n shutil.rmtree(\"dist\")\n if os.path.exists(\"playwright.egg-info\"):\n shutil.rmtree(\"playwright.egg-info\")\n super().run()\n os.makedirs(\"driver\", exist_ok=True)\n os.makedirs(\"playwright/driver\", exist_ok=True)\n for platform in [\"mac\", \"linux\", \"win32\", \"win32_x64\"]:\n zip_file = f\"playwright-{driver_version}-{platform}.zip\"\n if not os.path.exists(\"driver/\" + zip_file):\n url = \"https://playwright.azureedge.net/builds/driver/\"\n url = url + \"next/\"\n url = url + zip_file\n print(\"Fetching \", url)\n subprocess.check_call(\n [\"curl\", \"--http1.1\", url, \"-o\", \"driver/\" + zip_file]\n )\n base_wheel_location = glob.glob(\"dist/*.whl\")[0]\n without_platform = base_wheel_location[:-7]\n platform_map = 
{\n \"darwin\": \"mac\",\n \"linux\": \"linux\",\n \"win32\": \"win32_x64\" if sys.maxsize > 2 ** 32 else \"win32\",\n }\n for platform in [\"mac\", \"linux\", \"win32\", \"win32_x64\"]:\n zip_file = f\"driver/playwright-{driver_version}-{platform}.zip\"\n with zipfile.ZipFile(zip_file, \"r\") as zip:\n extractall(zip, f\"driver/{platform}\")\n if platform_map[sys.platform] == platform:\n with zipfile.ZipFile(zip_file, \"r\") as zip:\n extractall(zip, \"playwright/driver\")\n wheel = \"\"\n if platform == \"mac\":\n wheel = \"macosx_10_13_x86_64.whl\"\n if platform == \"linux\":\n wheel = \"manylinux1_x86_64.whl\"\n if platform == \"win32\":\n wheel = \"win32.whl\"\n if platform == \"win32_x64\":\n wheel = \"win_amd64.whl\"\n wheel_location = without_platform + wheel\n shutil.copy(base_wheel_location, wheel_location)\n with zipfile.ZipFile(wheel_location, \"a\") as zip:\n driver_root = os.path.abspath(f\"driver/{platform}\")\n for dir_path, _, files in os.walk(driver_root):\n for file in files:\n from_path = os.path.join(dir_path, file)\n to_path = os.path.relpath(from_path, driver_root)\n zip.write(from_path, f\"playwright/driver/{to_path}\")\n if platform == \"mac\":\n # Ship mac both as 10_13 as and 11_0 universal to work across Macs.\n universal_location = without_platform + \"macosx_11_0_universal2.whl\"\n shutil.copyfile(wheel_location, universal_location)\n with zipfile.ZipFile(universal_location, \"a\") as zip:\n zip.writestr(\"playwright/driver/README.md\", \"Universal Mac package\")\n\n os.remove(base_wheel_location)\n\n\nsetuptools.setup(\n name=\"playwright\",\n author=\"Microsoft Corporation\",\n author_email=\"\",\n description=\"A high-level API to automate web browsers\",\n long_description=Path(\"README.md\").read_text(encoding=\"utf-8\"),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/Microsoft/playwright-python\",\n packages=[\"playwright\"],\n include_package_data=True,\n install_requires=[\n \"greenlet==1.0.0\",\n \"pyee>=8.0.1\",\n \"typing-extensions;python_version<='3.8'\",\n ],\n classifiers=[\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n ],\n python_requires=\">=3.7\",\n cmdclass={\"bdist_wheel\": PlaywrightBDistWheelCommand},\n use_scm_version={\n \"version_scheme\": \"post-release\",\n \"write_to\": \"playwright/_repo_version.py\",\n \"write_to_template\": 'version = \"{version}\"\\n',\n },\n setup_requires=[\"setuptools_scm\", \"wheel\"],\n entry_points={\n \"console_scripts\": [\n \"playwright=playwright.__main__:main\",\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the 
License.\n\nimport glob\nimport os\nimport shutil\nimport subprocess\nimport sys\nimport zipfile\nfrom pathlib import Path\n\nimport setuptools\nfrom auditwheel.wheeltools import InWheel\nfrom wheel.bdist_wheel import bdist_wheel as BDistWheelCommand\n\ndriver_version = \"1.9.0-1614037901000\"\n\n\ndef extractall(zip: zipfile.ZipFile, path: str) -> None:\n for name in zip.namelist():\n member = zip.getinfo(name)\n extracted_path = zip.extract(member, path)\n attr = member.external_attr >> 16\n if attr != 0:\n os.chmod(extracted_path, attr)\n\n\nclass PlaywrightBDistWheelCommand(BDistWheelCommand):\n def run(self) -> None:\n if os.path.exists(\"build\"):\n shutil.rmtree(\"build\")\n if os.path.exists(\"dist\"):\n shutil.rmtree(\"dist\")\n if os.path.exists(\"playwright.egg-info\"):\n shutil.rmtree(\"playwright.egg-info\")\n super().run()\n os.makedirs(\"driver\", exist_ok=True)\n os.makedirs(\"playwright/driver\", exist_ok=True)\n for platform in [\"mac\", \"linux\", \"win32\", \"win32_x64\"]:\n zip_file = f\"playwright-{driver_version}-{platform}.zip\"\n if not os.path.exists(\"driver/\" + zip_file):\n url = \"https://playwright.azureedge.net/builds/driver/\"\n url = url + \"next/\"\n url = url + zip_file\n print(\"Fetching \", url)\n subprocess.check_call(\n [\"curl\", \"--http1.1\", url, \"-o\", \"driver/\" + zip_file]\n )\n base_wheel_location = glob.glob(\"dist/*.whl\")[0]\n without_platform = base_wheel_location[:-7]\n platform_map = {\n \"darwin\": \"mac\",\n \"linux\": \"linux\",\n \"win32\": \"win32_x64\" if sys.maxsize > 2 ** 32 else \"win32\",\n }\n for platform in [\"mac\", \"linux\", \"win32\", \"win32_x64\"]:\n zip_file = f\"driver/playwright-{driver_version}-{platform}.zip\"\n with zipfile.ZipFile(zip_file, \"r\") as zip:\n extractall(zip, f\"driver/{platform}\")\n if platform_map[sys.platform] == platform:\n with zipfile.ZipFile(zip_file, \"r\") as zip:\n extractall(zip, \"playwright/driver\")\n wheel = \"\"\n if platform == \"mac\":\n wheel = \"macosx_10_13_x86_64.whl\"\n if platform == \"linux\":\n wheel = \"manylinux1_x86_64.whl\"\n if platform == \"win32\":\n wheel = \"win32.whl\"\n if platform == \"win32_x64\":\n wheel = \"win_amd64.whl\"\n wheel_location = without_platform + wheel\n shutil.copy(base_wheel_location, wheel_location)\n with zipfile.ZipFile(wheel_location, \"a\") as zip:\n driver_root = os.path.abspath(f\"driver/{platform}\")\n for dir_path, _, files in os.walk(driver_root):\n for file in files:\n from_path = os.path.join(dir_path, file)\n to_path = os.path.relpath(from_path, driver_root)\n zip.write(from_path, f\"playwright/driver/{to_path}\")\n if platform == \"mac\":\n # Ship mac both as 10_13 as and 11_0 universal to work across Macs.\n universal_location = without_platform + \"macosx_11_0_universal2.whl\"\n shutil.copyfile(wheel_location, universal_location)\n with zipfile.ZipFile(universal_location, \"a\") as zip:\n zip.writestr(\"playwright/driver/README.md\", \"Universal Mac package\")\n\n os.remove(base_wheel_location)\n for whlfile in glob.glob(\"dist/*.whl\"):\n\n os.makedirs(\"wheelhouse\", exist_ok=True)\n with InWheel(\n in_wheel=whlfile,\n out_wheel=os.path.join(\"wheelhouse\", os.path.basename(whlfile)),\n ret_self=True,\n ):\n print(\"Updating RECORD file of %s\" % whlfile)\n shutil.rmtree(\"dist\")\n print(\"Copying new wheels\")\n shutil.move(\"wheelhouse\", \"dist\")\n\n\nsetuptools.setup(\n name=\"playwright\",\n author=\"Microsoft Corporation\",\n author_email=\"\",\n description=\"A high-level API to automate web browsers\",\n 
long_description=Path(\"README.md\").read_text(encoding=\"utf-8\"),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/Microsoft/playwright-python\",\n packages=[\"playwright\"],\n include_package_data=True,\n install_requires=[\n \"greenlet==1.0.0\",\n \"pyee>=8.0.1\",\n \"typing-extensions;python_version<='3.8'\",\n ],\n classifiers=[\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n ],\n python_requires=\">=3.7\",\n cmdclass={\"bdist_wheel\": PlaywrightBDistWheelCommand},\n use_scm_version={\n \"version_scheme\": \"post-release\",\n \"write_to\": \"playwright/_repo_version.py\",\n \"write_to_template\": 'version = \"{version}\"\\n',\n },\n setup_requires=[\"setuptools_scm\", \"wheel\"],\n entry_points={\n \"console_scripts\": [\n \"playwright=playwright.__main__:main\",\n ],\n },\n)\n", "path": "setup.py"}]} | 2,000 | 283 |
gh_patches_debug_25931 | rasdani/github-patches | git_diff | joke2k__faker-1103 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implementation of person ID number for cs_CZ - Czech (rodné číslo)
Can you implement a randomizer that will generate a proper person ID number (rodné číslo) for the Czech locale?
--- END ISSUE ---
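Editor's note for context (paraphrasing the Wikipedia article the patch below links to): a birth number has the form YYMMDD/SSS(C) — two-digit year, month (with 50 added for women), day, a serial, and, for people born after 1953, a check digit that makes the whole ten-digit number divisible by 11. A hypothetical worked example (real issuers skip serials whose check digit would come out as 10):

```python
# Hypothetical example: a woman born 1999-01-24 -> YY=99, MM=01+50=51, DD=24.
base = int("995124" + "000")      # date part plus a three-digit serial
check = (-base * 10) % 11         # digit d such that (base*10 + d) % 11 == 0
assert check < 10                 # would fail for serials needing check digit 10
print("995124/000%d" % check)     # -> 995124/0002, divisible by 11
```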
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/ssn/cs_CZ/__init__.py`
Content:
```
1 from .. import Provider as BaseProvider
2
3
4 class Provider(BaseProvider):
5 vat_id_formats = (
6 'CZ########',
7 'CZ#########',
8 'CZ##########',
9 )
10
11 def vat_id(self):
12 """
13 http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
14 :return: A random Czech VAT ID
15 """
16
17 return self.bothify(self.random_element(self.vat_id_formats))
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
diff --git a/faker/providers/ssn/cs_CZ/__init__.py b/faker/providers/ssn/cs_CZ/__init__.py
--- a/faker/providers/ssn/cs_CZ/__init__.py
+++ b/faker/providers/ssn/cs_CZ/__init__.py
@@ -1,3 +1,5 @@
+from math import ceil
+
from .. import Provider as BaseProvider
@@ -8,6 +10,8 @@
'CZ##########',
)
+ national_id_months = ['%.2d' % i for i in range(1, 13)] + ['%.2d' % i for i in range(51, 63)]
+
def vat_id(self):
"""
http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
@@ -15,3 +19,24 @@
"""
return self.bothify(self.random_element(self.vat_id_formats))
+
+ def birth_number(self):
+ """
+ Birth Number (Czech/Slovak: rodné číslo (RČ))
+ https://en.wikipedia.org/wiki/National_identification_number#Czech_Republic_and_Slovakia
+ """
+ birthdate = self.generator.date_of_birth()
+ year = '%.2d' % (birthdate.year % 100)
+ month = self.random_element(self.national_id_months)
+ day = '%.2d' % birthdate.day
+ if birthdate.year > 1953:
+ sn = self.random_number(4, True)
+ else:
+ sn = self.random_number(3, True)
+ number = int('{}{}{}{}'.format(year, month, day, sn))
+ birth_number = str(ceil(number / 11) * 11)
+ if year == '00':
+ birth_number = '00' + birth_number
+ elif year[0] == '0':
+ birth_number = '0' + birth_number
+ return '{}/{}'.format(birth_number[:6], birth_number[6::])
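
A quick, hedged way to sanity-check output from the patched provider (post-1953 numbers only; nine-digit pre-1954 numbers carry no mod-11 rule and are out of scope for this sketch):

```python
def is_valid_birth_number(rc: str) -> bool:
    """Check the mod-11 property of a post-1953 Czech birth number."""
    digits = rc.replace("/", "")
    return len(digits) == 10 and digits.isdigit() and int(digits) % 11 == 0

assert is_valid_birth_number("995124/0002")       # hypothetical valid number
assert not is_valid_birth_number("995124/0003")   # off by one -> invalid
```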
| {"golden_diff": "diff --git a/faker/providers/ssn/cs_CZ/__init__.py b/faker/providers/ssn/cs_CZ/__init__.py\n--- a/faker/providers/ssn/cs_CZ/__init__.py\n+++ b/faker/providers/ssn/cs_CZ/__init__.py\n@@ -1,3 +1,5 @@\n+from math import ceil\n+\n from .. import Provider as BaseProvider\n \n \n@@ -8,6 +10,8 @@\n 'CZ##########',\n )\n \n+ national_id_months = ['%.2d' % i for i in range(1, 13)] + ['%.2d' % i for i in range(51, 63)]\n+\n def vat_id(self):\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n@@ -15,3 +19,24 @@\n \"\"\"\n \n return self.bothify(self.random_element(self.vat_id_formats))\n+\n+ def birth_number(self):\n+ \"\"\"\n+ Birth Number (Czech/Slovak: rodn\u00e9 \u010d\u00edslo (R\u010c))\n+ https://en.wikipedia.org/wiki/National_identification_number#Czech_Republic_and_Slovakia\n+ \"\"\"\n+ birthdate = self.generator.date_of_birth()\n+ year = '%.2d' % (birthdate.year % 100)\n+ month = self.random_element(self.national_id_months)\n+ day = '%.2d' % birthdate.day\n+ if birthdate.year > 1953:\n+ sn = self.random_number(4, True)\n+ else:\n+ sn = self.random_number(3, True)\n+ number = int('{}{}{}{}'.format(year, month, day, sn))\n+ birth_number = str(ceil(number / 11) * 11)\n+ if year == '00':\n+ birth_number = '00' + birth_number\n+ elif year[0] == '0':\n+ birth_number = '0' + birth_number\n+ return '{}/{}'.format(birth_number[:6], birth_number[6::])\n", "issue": "Implementation of person id number for cs_CZ - Czech (rodn\u00e9 \u010d\u00edslo)\nCan you implement randomizer which will generate a proper person ID number (rodn\u00e9 \u010d\u00edslo) for Czech local?\n", "before_files": [{"content": "from .. import Provider as BaseProvider\n\n\nclass Provider(BaseProvider):\n vat_id_formats = (\n 'CZ########',\n 'CZ#########',\n 'CZ##########',\n )\n\n def vat_id(self):\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n :return: A random Czech VAT ID\n \"\"\"\n\n return self.bothify(self.random_element(self.vat_id_formats))\n", "path": "faker/providers/ssn/cs_CZ/__init__.py"}], "after_files": [{"content": "from math import ceil\n\nfrom .. import Provider as BaseProvider\n\n\nclass Provider(BaseProvider):\n vat_id_formats = (\n 'CZ########',\n 'CZ#########',\n 'CZ##########',\n )\n\n national_id_months = ['%.2d' % i for i in range(1, 13)] + ['%.2d' % i for i in range(51, 63)]\n\n def vat_id(self):\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n :return: A random Czech VAT ID\n \"\"\"\n\n return self.bothify(self.random_element(self.vat_id_formats))\n\n def birth_number(self):\n \"\"\"\n Birth Number (Czech/Slovak: rodn\u00e9 \u010d\u00edslo (R\u010c))\n https://en.wikipedia.org/wiki/National_identification_number#Czech_Republic_and_Slovakia\n \"\"\"\n birthdate = self.generator.date_of_birth()\n year = '%.2d' % (birthdate.year % 100)\n month = self.random_element(self.national_id_months)\n day = '%.2d' % birthdate.day\n if birthdate.year > 1953:\n sn = self.random_number(4, True)\n else:\n sn = self.random_number(3, True)\n number = int('{}{}{}{}'.format(year, month, day, sn))\n birth_number = str(ceil(number / 11) * 11)\n if year == '00':\n birth_number = '00' + birth_number\n elif year[0] == '0':\n birth_number = '0' + birth_number\n return '{}/{}'.format(birth_number[:6], birth_number[6::])\n", "path": "faker/providers/ssn/cs_CZ/__init__.py"}]} | 439 | 470 |
gh_patches_debug_17777 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-5424 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Monitoring alias package is missing new service clients
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring.py is missing the new clients added to https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring_v3/__init__.py
Should be a relatively easy fix.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monitoring/google/cloud/monitoring.py`
Content:
```
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 from google.cloud.monitoring_v3.query import Query
18 from google.cloud.monitoring_v3 import GroupServiceClient
19 from google.cloud.monitoring_v3 import MetricServiceClient
20 from google.cloud.monitoring_v3 import enums
21 from google.cloud.monitoring_v3 import types
22
23 __all__ = (
24 'enums',
25 'types',
26 'GroupServiceClient',
27 'Query',
28 'MetricServiceClient', )
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
diff --git a/monitoring/google/cloud/monitoring.py b/monitoring/google/cloud/monitoring.py
--- a/monitoring/google/cloud/monitoring.py
+++ b/monitoring/google/cloud/monitoring.py
@@ -15,14 +15,21 @@
from __future__ import absolute_import
from google.cloud.monitoring_v3.query import Query
+from google.cloud.monitoring_v3 import AlertPolicyServiceClient
from google.cloud.monitoring_v3 import GroupServiceClient
from google.cloud.monitoring_v3 import MetricServiceClient
+from google.cloud.monitoring_v3 import NotificationChannelServiceClient
+from google.cloud.monitoring_v3 import UptimeCheckServiceClient
from google.cloud.monitoring_v3 import enums
from google.cloud.monitoring_v3 import types
__all__ = (
'enums',
'types',
+ 'AlertPolicyServiceClient',
'GroupServiceClient',
+ 'MetricServiceClient',
+ 'NotificationChannelServiceClient',
+ 'UptimeCheckServiceClient',
'Query',
- 'MetricServiceClient', )
+)
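
With the patch applied, the alias module mirrors `monitoring_v3`, so the service clients become importable from the stable path. A usage sketch (assumes `google-cloud-monitoring` is installed and application-default credentials are available):

```python
from google.cloud import monitoring

# These names now resolve via the alias package instead of monitoring_v3.
alert_client = monitoring.AlertPolicyServiceClient()
notification_client = monitoring.NotificationChannelServiceClient()
uptime_client = monitoring.UptimeCheckServiceClient()
```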
| {"golden_diff": "diff --git a/monitoring/google/cloud/monitoring.py b/monitoring/google/cloud/monitoring.py\n--- a/monitoring/google/cloud/monitoring.py\n+++ b/monitoring/google/cloud/monitoring.py\n@@ -15,14 +15,21 @@\n from __future__ import absolute_import\n \n from google.cloud.monitoring_v3.query import Query\n+from google.cloud.monitoring_v3 import AlertPolicyServiceClient\n from google.cloud.monitoring_v3 import GroupServiceClient\n from google.cloud.monitoring_v3 import MetricServiceClient\n+from google.cloud.monitoring_v3 import NotificationChannelServiceClient\n+from google.cloud.monitoring_v3 import UptimeCheckServiceClient\n from google.cloud.monitoring_v3 import enums\n from google.cloud.monitoring_v3 import types\n \n __all__ = (\n 'enums',\n 'types',\n+ 'AlertPolicyServiceClient',\n 'GroupServiceClient',\n+ 'MetricServiceClient',\n+ 'NotificationChannelServiceClient',\n+ 'UptimeCheckServiceClient',\n 'Query',\n- 'MetricServiceClient', )\n+)\n", "issue": "Monitoring alias package is missing new service clients\nhttps://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring.py is missing the new clients added to https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring_v3/__init__.py\r\n\r\nShould be a relatively easy fix.\n", "before_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom google.cloud.monitoring_v3.query import Query\nfrom google.cloud.monitoring_v3 import GroupServiceClient\nfrom google.cloud.monitoring_v3 import MetricServiceClient\nfrom google.cloud.monitoring_v3 import enums\nfrom google.cloud.monitoring_v3 import types\n\n__all__ = (\n 'enums',\n 'types',\n 'GroupServiceClient',\n 'Query',\n 'MetricServiceClient', )\n", "path": "monitoring/google/cloud/monitoring.py"}], "after_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom google.cloud.monitoring_v3.query import Query\nfrom google.cloud.monitoring_v3 import AlertPolicyServiceClient\nfrom google.cloud.monitoring_v3 import GroupServiceClient\nfrom google.cloud.monitoring_v3 import MetricServiceClient\nfrom google.cloud.monitoring_v3 import NotificationChannelServiceClient\nfrom google.cloud.monitoring_v3 import UptimeCheckServiceClient\nfrom google.cloud.monitoring_v3 import 
enums\nfrom google.cloud.monitoring_v3 import types\n\n__all__ = (\n 'enums',\n 'types',\n 'AlertPolicyServiceClient',\n 'GroupServiceClient',\n 'MetricServiceClient',\n 'NotificationChannelServiceClient',\n 'UptimeCheckServiceClient',\n 'Query',\n)\n", "path": "monitoring/google/cloud/monitoring.py"}]} | 614 | 233 |
gh_patches_debug_35214 | rasdani/github-patches | git_diff | conan-io__conan-center-index-6951 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[request] stb/20210818
### Package Details
* Package Name/Version: **stb/20210818**
There have been 1,800+ commits added to stb since Feb 2, 2020, so I strongly suggest updating it.
The above-mentioned version was newly released by the upstream project and is not yet available as a recipe. Please add this version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/stb/all/conanfile.py`
Content:
```
1 from conans import ConanFile, tools
2 import os
3
4 class StbConan(ConanFile):
5 name = "stb"
6 description = "single-file public domain libraries for C/C++"
7 topics = ("conan", "stb", "single-file")
8 url = "https://github.com/conan-io/conan-center-index"
9 homepage = "https://github.com/nothings/stb"
10 license = ("Unlicense", "MIT")
11 no_copy_source = True
12 _source_subfolder = "source_subfolder"
13
14 def source(self):
15 commit = os.path.splitext(os.path.basename(self.conan_data["sources"][self.version]["url"]))[0]
16 tools.get(**self.conan_data["sources"][self.version])
17 extracted_dir = self.name + "-" + commit
18 os.rename(extracted_dir, self._source_subfolder)
19
20 def package(self):
21 self.copy("LICENSE", src=self._source_subfolder, dst="licenses")
22 self.copy("*.h", src=self._source_subfolder, dst="include")
23 self.copy("stb_vorbis.c", src=self._source_subfolder, dst="include")
24 tools.rmdir(os.path.join(self.package_folder, "include", "tests"))
25
26 def package_id(self):
27 self.info.header_only()
28
29 def package_info(self):
30 self.cpp_info.defines.append('STB_TEXTEDIT_KEYTYPE=unsigned')
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
diff --git a/recipes/stb/all/conanfile.py b/recipes/stb/all/conanfile.py
--- a/recipes/stb/all/conanfile.py
+++ b/recipes/stb/all/conanfile.py
@@ -1,27 +1,53 @@
from conans import ConanFile, tools
import os
+required_conan_version = ">=1.33.0"
+
+
class StbConan(ConanFile):
name = "stb"
description = "single-file public domain libraries for C/C++"
- topics = ("conan", "stb", "single-file")
+ topics = ("stb", "single-file")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/nothings/stb"
license = ("Unlicense", "MIT")
no_copy_source = True
- _source_subfolder = "source_subfolder"
+
+ options = {
+ "with_deprecated": [True, False]
+ }
+
+ default_options = {
+ "with_deprecated": True
+ }
+
+ @property
+ def _source_subfolder(self):
+ return "source_subfolder"
+
+ @property
+ def _version(self):
+ # HACK: Used to circumvent the incompatibility
+ # of the format cci.YYYYMMDD in tools.Version
+ return str(self.version)[4:]
+
+ def config_options(self):
+ if tools.Version(self._version) < "20210713":
+ del self.options.with_deprecated
def source(self):
- commit = os.path.splitext(os.path.basename(self.conan_data["sources"][self.version]["url"]))[0]
- tools.get(**self.conan_data["sources"][self.version])
- extracted_dir = self.name + "-" + commit
- os.rename(extracted_dir, self._source_subfolder)
+ tools.get(**self.conan_data["sources"][self.version], strip_root=True, destination=self._source_subfolder)
def package(self):
self.copy("LICENSE", src=self._source_subfolder, dst="licenses")
self.copy("*.h", src=self._source_subfolder, dst="include")
self.copy("stb_vorbis.c", src=self._source_subfolder, dst="include")
tools.rmdir(os.path.join(self.package_folder, "include", "tests"))
+ if tools.Version(self._version) >= "20210713":
+ tools.rmdir(os.path.join(self.package_folder, "include", "deprecated"))
+ if self.options.get_safe("with_deprecated", False):
+ self.copy("*.h", src=os.path.join(self._source_subfolder, "deprecated"), dst="include")
+ self.copy("stb_image.c", src=os.path.join(self._source_subfolder, "deprecated"), dst="include")
def package_id(self):
self.info.header_only()
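
Editor's note: the `_version` property in this diff exists because conan-center dates header-only packages as `cci.YYYYMMDD`, which `tools.Version` cannot parse; slicing off the `cci.` prefix leaves a comparable date string. A minimal illustration of that assumption (plain integer comparison stands in for `tools.Version` outside a recipe):

```python
version = "cci.20210818"           # conan-center date-based version scheme
date_part = version[4:]            # "20210818" -- what _version returns
assert int(date_part) >= 20210713  # the deprecated/ folder split landed here
```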
| {"golden_diff": "diff --git a/recipes/stb/all/conanfile.py b/recipes/stb/all/conanfile.py\n--- a/recipes/stb/all/conanfile.py\n+++ b/recipes/stb/all/conanfile.py\n@@ -1,27 +1,53 @@\n from conans import ConanFile, tools\n import os\n \n+required_conan_version = \">=1.33.0\"\n+\n+\n class StbConan(ConanFile):\n name = \"stb\"\n description = \"single-file public domain libraries for C/C++\"\n- topics = (\"conan\", \"stb\", \"single-file\")\n+ topics = (\"stb\", \"single-file\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/nothings/stb\"\n license = (\"Unlicense\", \"MIT\")\n no_copy_source = True\n- _source_subfolder = \"source_subfolder\"\n+\n+ options = {\n+ \"with_deprecated\": [True, False]\n+ }\n+\n+ default_options = {\n+ \"with_deprecated\": True\n+ }\n+\n+ @property\n+ def _source_subfolder(self):\n+ return \"source_subfolder\"\n+\n+ @property\n+ def _version(self):\n+ # HACK: Used to circumvent the incompatibility\n+ # of the format cci.YYYYMMDD in tools.Version\n+ return str(self.version)[4:]\n+\n+ def config_options(self):\n+ if tools.Version(self._version) < \"20210713\":\n+ del self.options.with_deprecated\n \n def source(self):\n- commit = os.path.splitext(os.path.basename(self.conan_data[\"sources\"][self.version][\"url\"]))[0]\n- tools.get(**self.conan_data[\"sources\"][self.version])\n- extracted_dir = self.name + \"-\" + commit\n- os.rename(extracted_dir, self._source_subfolder)\n+ tools.get(**self.conan_data[\"sources\"][self.version], strip_root=True, destination=self._source_subfolder)\n \n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n self.copy(\"*.h\", src=self._source_subfolder, dst=\"include\")\n self.copy(\"stb_vorbis.c\", src=self._source_subfolder, dst=\"include\")\n tools.rmdir(os.path.join(self.package_folder, \"include\", \"tests\"))\n+ if tools.Version(self._version) >= \"20210713\":\n+ tools.rmdir(os.path.join(self.package_folder, \"include\", \"deprecated\"))\n+ if self.options.get_safe(\"with_deprecated\", False):\n+ self.copy(\"*.h\", src=os.path.join(self._source_subfolder, \"deprecated\"), dst=\"include\")\n+ self.copy(\"stb_image.c\", src=os.path.join(self._source_subfolder, \"deprecated\"), dst=\"include\")\n \n def package_id(self):\n self.info.header_only()\n", "issue": "[request] stb/20210818\n### Package Details\r\n * Package Name/Version: **stb/20210818**\r\n\r\nThere has been +1800 commits added to stb since Feb 2 of 2020, I greatly suggest updating it.\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. 
Please add this version.\r\n\n", "before_files": [{"content": "from conans import ConanFile, tools\nimport os\n\nclass StbConan(ConanFile):\n name = \"stb\"\n description = \"single-file public domain libraries for C/C++\"\n topics = (\"conan\", \"stb\", \"single-file\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/nothings/stb\"\n license = (\"Unlicense\", \"MIT\")\n no_copy_source = True\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n commit = os.path.splitext(os.path.basename(self.conan_data[\"sources\"][self.version][\"url\"]))[0]\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + commit\n os.rename(extracted_dir, self._source_subfolder)\n\n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n self.copy(\"*.h\", src=self._source_subfolder, dst=\"include\")\n self.copy(\"stb_vorbis.c\", src=self._source_subfolder, dst=\"include\")\n tools.rmdir(os.path.join(self.package_folder, \"include\", \"tests\"))\n\n def package_id(self):\n self.info.header_only()\n \n def package_info(self):\n self.cpp_info.defines.append('STB_TEXTEDIT_KEYTYPE=unsigned')\n", "path": "recipes/stb/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, tools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass StbConan(ConanFile):\n name = \"stb\"\n description = \"single-file public domain libraries for C/C++\"\n topics = (\"stb\", \"single-file\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/nothings/stb\"\n license = (\"Unlicense\", \"MIT\")\n no_copy_source = True\n\n options = {\n \"with_deprecated\": [True, False]\n }\n\n default_options = {\n \"with_deprecated\": True\n }\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _version(self):\n # HACK: Used to circumvent the incompatibility\n # of the format cci.YYYYMMDD in tools.Version\n return str(self.version)[4:]\n\n def config_options(self):\n if tools.Version(self._version) < \"20210713\":\n del self.options.with_deprecated\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version], strip_root=True, destination=self._source_subfolder)\n\n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n self.copy(\"*.h\", src=self._source_subfolder, dst=\"include\")\n self.copy(\"stb_vorbis.c\", src=self._source_subfolder, dst=\"include\")\n tools.rmdir(os.path.join(self.package_folder, \"include\", \"tests\"))\n if tools.Version(self._version) >= \"20210713\":\n tools.rmdir(os.path.join(self.package_folder, \"include\", \"deprecated\"))\n if self.options.get_safe(\"with_deprecated\", False):\n self.copy(\"*.h\", src=os.path.join(self._source_subfolder, \"deprecated\"), dst=\"include\")\n self.copy(\"stb_image.c\", src=os.path.join(self._source_subfolder, \"deprecated\"), dst=\"include\")\n\n def package_id(self):\n self.info.header_only()\n \n def package_info(self):\n self.cpp_info.defines.append('STB_TEXTEDIT_KEYTYPE=unsigned')\n", "path": "recipes/stb/all/conanfile.py"}]} | 712 | 653 |
gh_patches_debug_15650 | rasdani/github-patches | git_diff | evennia__evennia-1733 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Disabling webclient does not actually disable it
#### Steps to reproduce the issue / Reasons for adding feature:
1. Set WEBCLIENT_ENABLED to False
2. Link disappears from index page
3. Go to http://example.com/webclient
4. Webclient loads.
#### Error output / Expected result of feature
Setting WEBCLIENT_ENABLED to False should disable it, not hide it.
#### Extra information, such as Evennia revision/repo/branch, operating system and ideas for how to solve / implement:
The status check needs to happen in urls.py, not just in the navbar template. If disabled, the URL for the webclient should not be added to the list of urlpatterns.
--- END ISSUE ---
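A sketch of the reporter's urls.py idea (hypothetical pattern name and regex; old-style `django.conf.urls.url` to match the Django versions Evennia supported at the time). Note that the patch actually merged, shown below, gates the view itself with Http404 instead:

```python
from django.conf import settings
from django.conf.urls import url

from evennia.web.webclient import views

urlpatterns = []
if settings.WEBCLIENT_ENABLED:
    # Only expose the route when the webclient is enabled.
    urlpatterns.append(
        url(r"^webclient/?$", views.webclient, name="webclient"))
```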
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evennia/web/webclient/views.py`
Content:
```
1
2 """
3 This contains a simple view for rendering the webclient
4 page and serve it eventual static content.
5
6 """
7 from __future__ import print_function
8 from django.shortcuts import render
9 from django.contrib.auth import login, authenticate
10
11 from evennia.accounts.models import AccountDB
12 from evennia.utils import logger
13
14
15 def webclient(request):
16 """
17 Webclient page template loading.
18
19 """
20 # auto-login is now handled by evennia.web.utils.middleware
21
22 # make sure to store the browser session's hash so the webclient can get to it!
23 pagevars = {'browser_sessid': request.session.session_key}
24
25 return render(request, 'webclient.html', pagevars)
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
diff --git a/evennia/web/webclient/views.py b/evennia/web/webclient/views.py
--- a/evennia/web/webclient/views.py
+++ b/evennia/web/webclient/views.py
@@ -5,6 +5,8 @@
"""
from __future__ import print_function
+from django.conf import settings
+from django.http import Http404
from django.shortcuts import render
from django.contrib.auth import login, authenticate
@@ -19,6 +21,10 @@
"""
# auto-login is now handled by evennia.web.utils.middleware
+ # check if webclient should be enabled
+ if not settings.WEBCLIENT_ENABLED:
+ raise Http404
+
# make sure to store the browser session's hash so the webclient can get to it!
pagevars = {'browser_sessid': request.session.session_key}
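
A hedged smoke test of the new behavior (assumes a configured Evennia/Django test environment and that the webclient is routed at /webclient/):

```python
from django.test import Client, override_settings

# With the flag off, the view should now return 404 instead of rendering.
with override_settings(WEBCLIENT_ENABLED=False):
    assert Client().get("/webclient/").status_code == 404
```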
| {"golden_diff": "diff --git a/evennia/web/webclient/views.py b/evennia/web/webclient/views.py\n--- a/evennia/web/webclient/views.py\n+++ b/evennia/web/webclient/views.py\n@@ -5,6 +5,8 @@\n \n \"\"\"\n from __future__ import print_function\n+from django.conf import settings\n+from django.http import Http404\n from django.shortcuts import render\n from django.contrib.auth import login, authenticate\n \n@@ -19,6 +21,10 @@\n \"\"\"\n # auto-login is now handled by evennia.web.utils.middleware\n \n+ # check if webclient should be enabled\n+ if not settings.WEBCLIENT_ENABLED:\n+ raise Http404\n+ \n # make sure to store the browser session's hash so the webclient can get to it!\n pagevars = {'browser_sessid': request.session.session_key}\n", "issue": "Disabling webclient does not actually disable it\n#### Steps to reproduce the issue / Reasons for adding feature:\r\n\r\n1. Set WEBCLIENT_ENABLED to False\r\n2. Link disappears from index page\r\n3. Go to http://example.com/webclient\r\n4. Webclient loads.\r\n\r\n#### Error output / Expected result of feature\r\nSetting WEBCLIENT_ENABLED to False should disable it, not hide it.\r\n\r\n#### Extra information, such as Evennia revision/repo/branch, operating system and ideas for how to solve / implement:\r\nThe status check needs to happen in urls.py, not just on the navbar template. If disabled, the url for the webclient should not be added to the list of urlpatterns.\n", "before_files": [{"content": "\n\"\"\"\nThis contains a simple view for rendering the webclient\npage and serve it eventual static content.\n\n\"\"\"\nfrom __future__ import print_function\nfrom django.shortcuts import render\nfrom django.contrib.auth import login, authenticate\n\nfrom evennia.accounts.models import AccountDB\nfrom evennia.utils import logger\n\n\ndef webclient(request):\n \"\"\"\n Webclient page template loading.\n\n \"\"\"\n # auto-login is now handled by evennia.web.utils.middleware\n \n # make sure to store the browser session's hash so the webclient can get to it!\n pagevars = {'browser_sessid': request.session.session_key}\n\n return render(request, 'webclient.html', pagevars)\n", "path": "evennia/web/webclient/views.py"}], "after_files": [{"content": "\n\"\"\"\nThis contains a simple view for rendering the webclient\npage and serve it eventual static content.\n\n\"\"\"\nfrom __future__ import print_function\nfrom django.conf import settings\nfrom django.http import Http404\nfrom django.shortcuts import render\nfrom django.contrib.auth import login, authenticate\n\nfrom evennia.accounts.models import AccountDB\nfrom evennia.utils import logger\n\n\ndef webclient(request):\n \"\"\"\n Webclient page template loading.\n\n \"\"\"\n # auto-login is now handled by evennia.web.utils.middleware\n \n # check if webclient should be enabled\n if not settings.WEBCLIENT_ENABLED:\n raise Http404\n \n # make sure to store the browser session's hash so the webclient can get to it!\n pagevars = {'browser_sessid': request.session.session_key}\n\n return render(request, 'webclient.html', pagevars)\n", "path": "evennia/web/webclient/views.py"}]} | 589 | 190 |
gh_patches_debug_43232 | rasdani/github-patches | git_diff | chainer__chainer-2204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove trigger option of snapshot and snapshot_object
They duplicate the functionality of the trigger argument of Trainer.extend and are redundant. I think they confuse users, who may misunderstand the trigger feature, so they should be removed in the next major update.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/training/extensions/_snapshot.py`
Content:
```
1 import os
2 import shutil
3 import tempfile
4
5 from chainer.serializers import npz
6 from chainer.training import extension
7
8
9 def snapshot_object(target, filename, savefun=npz.save_npz,
10 trigger=(1, 'epoch')):
11 """Returns a trainer extension to take snapshots of a given object.
12
13 This extension serializes the given object and saves it to the output
14 directory.
15
16 This extension is called once for each epoch by default. The default
17 priority is -100, which is lower than that of most built-in extensions.
18
19 Args:
20 target: Object to serialize.
21 filename (str): Name of the file into which the object is serialized.
22 It can be a format string, where the trainer object is passed to
23 the :meth:`str.format` method. For example,
24 ``'snapshot_{.updater.iteration}'`` is converted to
25 ``'snapshot_10000'`` at the 10,000th iteration.
26 savefun: Function to save the object. It takes two arguments: the
27 output file path and the object to serialize.
28 trigger: Trigger that decides when to take snapshot. It can be either
29 an already built trigger object (i.e., a callable object that
30 accepts a trainer object and returns a bool value), or a tuple in
31 the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
32 case, the tuple is passed to IntervalTrigger.
33
34 Returns:
35 An extension function.
36
37 """
38 @extension.make_extension(trigger=trigger, priority=-100)
39 def snapshot_object(trainer):
40 _snapshot_object(trainer, target, filename.format(trainer), savefun)
41
42 return snapshot_object
43
44
45 def snapshot(savefun=npz.save_npz,
46 filename='snapshot_iter_{.updater.iteration}',
47 trigger=(1, 'epoch')):
48 """Returns a trainer extension to take snapshots of the trainer.
49
50 This extension serializes the trainer object and saves it to the output
51 directory. It is used to support resuming the training loop from the saved
52 state.
53
54 This extension is called once for each epoch by default. The default
55 priority is -100, which is lower than that of most built-in extensions.
56
57 .. note::
58 This extension first writes the serialized object to a temporary file
59 and then rename it to the target file name. Thus, if the program stops
60 right before the renaming, the temporary file might be left in the
61 output directory.
62
63 Args:
64 savefun: Function to save the trainer. It takes two arguments: the
65 output file path and the trainer object.
66 filename (str): Name of the file into which the trainer is serialized.
67 It can be a format string, where the trainer object is passed to
68 the :meth:`str.format` method.
69 trigger: Trigger that decides when to take snapshot. It can be either
70 an already built trigger object (i.e., a callable object that
71 accepts a trainer object and returns a bool value), or a tuple in
72 the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
73 case, the tuple is passed to IntervalTrigger.
74
75 """
76 @extension.make_extension(trigger=trigger, priority=-100)
77 def snapshot(trainer):
78 _snapshot_object(trainer, trainer, filename.format(trainer), savefun)
79
80 return snapshot
81
82
83 def _snapshot_object(trainer, target, filename, savefun):
84 fn = filename.format(trainer)
85 prefix = 'tmp' + fn
86 fd, tmppath = tempfile.mkstemp(prefix=prefix, dir=trainer.out)
87 try:
88 savefun(tmppath, target)
89 except Exception:
90 os.close(fd)
91 os.remove(tmppath)
92 raise
93 os.close(fd)
94 shutil.move(tmppath, os.path.join(trainer.out, fn))
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
diff --git a/chainer/training/extensions/_snapshot.py b/chainer/training/extensions/_snapshot.py
--- a/chainer/training/extensions/_snapshot.py
+++ b/chainer/training/extensions/_snapshot.py
@@ -6,15 +6,19 @@
from chainer.training import extension
-def snapshot_object(target, filename, savefun=npz.save_npz,
- trigger=(1, 'epoch')):
+def snapshot_object(target, filename, savefun=npz.save_npz):
"""Returns a trainer extension to take snapshots of a given object.
This extension serializes the given object and saves it to the output
directory.
- This extension is called once for each epoch by default. The default
- priority is -100, which is lower than that of most built-in extensions.
+ This extension is called once per epoch by default. To take a
+ snapshot at a different interval, a trigger object specifying the
+ required interval can be passed along with this extension
+ to the `extend()` method of the trainer.
+
+ The default priority is -100, which is lower than that of most
+ built-in extensions.
Args:
target: Object to serialize.
@@ -25,17 +29,12 @@
``'snapshot_10000'`` at the 10,000th iteration.
savefun: Function to save the object. It takes two arguments: the
output file path and the object to serialize.
- trigger: Trigger that decides when to take snapshot. It can be either
- an already built trigger object (i.e., a callable object that
- accepts a trainer object and returns a bool value), or a tuple in
- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
- case, the tuple is passed to IntervalTrigger.
Returns:
An extension function.
"""
- @extension.make_extension(trigger=trigger, priority=-100)
+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)
def snapshot_object(trainer):
_snapshot_object(trainer, target, filename.format(trainer), savefun)
@@ -43,16 +42,20 @@
def snapshot(savefun=npz.save_npz,
- filename='snapshot_iter_{.updater.iteration}',
- trigger=(1, 'epoch')):
+ filename='snapshot_iter_{.updater.iteration}'):
"""Returns a trainer extension to take snapshots of the trainer.
This extension serializes the trainer object and saves it to the output
directory. It is used to support resuming the training loop from the saved
state.
- This extension is called once for each epoch by default. The default
- priority is -100, which is lower than that of most built-in extensions.
+ This extension is called once per epoch by default. To take a
+ snapshot at a different interval, a trigger object specifying the
+ required interval can be passed along with this extension
+ to the `extend()` method of the trainer.
+
+ The default priority is -100, which is lower than that of most
+ built-in extensions.
.. note::
This extension first writes the serialized object to a temporary file
@@ -66,14 +69,9 @@
filename (str): Name of the file into which the trainer is serialized.
It can be a format string, where the trainer object is passed to
the :meth:`str.format` method.
- trigger: Trigger that decides when to take snapshot. It can be either
- an already built trigger object (i.e., a callable object that
- accepts a trainer object and returns a bool value), or a tuple in
- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
- case, the tuple is passed to IntervalTrigger.
"""
- @extension.make_extension(trigger=trigger, priority=-100)
+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)
def snapshot(trainer):
_snapshot_object(trainer, trainer, filename.format(trainer), savefun)
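
After the removal, the snapshot interval is supplied where every other extension gets one — the trigger argument of Trainer.extend. A usage sketch (assumes `trainer` and `model` objects already exist):

```python
from chainer.training import extensions

# Snapshot the whole trainer every 10 epochs.
trainer.extend(extensions.snapshot(), trigger=(10, 'epoch'))

# Snapshot just the model every 1000 iterations.
trainer.extend(
    extensions.snapshot_object(model, 'model_iter_{.updater.iteration}'),
    trigger=(1000, 'iteration'))
```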
| {"golden_diff": "diff --git a/chainer/training/extensions/_snapshot.py b/chainer/training/extensions/_snapshot.py\n--- a/chainer/training/extensions/_snapshot.py\n+++ b/chainer/training/extensions/_snapshot.py\n@@ -6,15 +6,19 @@\n from chainer.training import extension\n \n \n-def snapshot_object(target, filename, savefun=npz.save_npz,\n- trigger=(1, 'epoch')):\n+def snapshot_object(target, filename, savefun=npz.save_npz):\n \"\"\"Returns a trainer extension to take snapshots of a given object.\n \n This extension serializes the given object and saves it to the output\n directory.\n \n- This extension is called once for each epoch by default. The default\n- priority is -100, which is lower than that of most built-in extensions.\n+ This extension is called once per epoch by default. To take a\n+ snapshot at a different interval, a trigger object specifying the\n+ required interval can be passed along with this extension\n+ to the `extend()` method of the trainer.\n+\n+ The default priority is -100, which is lower than that of most\n+ built-in extensions.\n \n Args:\n target: Object to serialize.\n@@ -25,17 +29,12 @@\n ``'snapshot_10000'`` at the 10,000th iteration.\n savefun: Function to save the object. It takes two arguments: the\n output file path and the object to serialize.\n- trigger: Trigger that decides when to take snapshot. It can be either\n- an already built trigger object (i.e., a callable object that\n- accepts a trainer object and returns a bool value), or a tuple in\n- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter\n- case, the tuple is passed to IntervalTrigger.\n \n Returns:\n An extension function.\n \n \"\"\"\n- @extension.make_extension(trigger=trigger, priority=-100)\n+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)\n def snapshot_object(trainer):\n _snapshot_object(trainer, target, filename.format(trainer), savefun)\n \n@@ -43,16 +42,20 @@\n \n \n def snapshot(savefun=npz.save_npz,\n- filename='snapshot_iter_{.updater.iteration}',\n- trigger=(1, 'epoch')):\n+ filename='snapshot_iter_{.updater.iteration}'):\n \"\"\"Returns a trainer extension to take snapshots of the trainer.\n \n This extension serializes the trainer object and saves it to the output\n directory. It is used to support resuming the training loop from the saved\n state.\n \n- This extension is called once for each epoch by default. The default\n- priority is -100, which is lower than that of most built-in extensions.\n+ This extension is called once per epoch by default. To take a\n+ snapshot at a different interval, a trigger object specifying the\n+ required interval can be passed along with this extension\n+ to the `extend()` method of the trainer.\n+\n+ The default priority is -100, which is lower than that of most\n+ built-in extensions.\n \n .. note::\n This extension first writes the serialized object to a temporary file\n@@ -66,14 +69,9 @@\n filename (str): Name of the file into which the trainer is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method.\n- trigger: Trigger that decides when to take snapshot. It can be either\n- an already built trigger object (i.e., a callable object that\n- accepts a trainer object and returns a bool value), or a tuple in\n- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. 
In latter\n- case, the tuple is passed to IntervalTrigger.\n \n \"\"\"\n- @extension.make_extension(trigger=trigger, priority=-100)\n+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)\n def snapshot(trainer):\n _snapshot_object(trainer, trainer, filename.format(trainer), savefun)\n", "issue": "Remove trigger option of snapshot and snapshot_object\nThey have the same functionality as the trigger argument of Trainer.extend and are redundant. I think they confuse users and they might misunderstand the trigger feature, and so they should be removed in the next major update.\n", "before_files": [{"content": "import os\nimport shutil\nimport tempfile\n\nfrom chainer.serializers import npz\nfrom chainer.training import extension\n\n\ndef snapshot_object(target, filename, savefun=npz.save_npz,\n trigger=(1, 'epoch')):\n \"\"\"Returns a trainer extension to take snapshots of a given object.\n\n This extension serializes the given object and saves it to the output\n directory.\n\n This extension is called once for each epoch by default. The default\n priority is -100, which is lower than that of most built-in extensions.\n\n Args:\n target: Object to serialize.\n filename (str): Name of the file into which the object is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method. For example,\n ``'snapshot_{.updater.iteration}'`` is converted to\n ``'snapshot_10000'`` at the 10,000th iteration.\n savefun: Function to save the object. It takes two arguments: the\n output file path and the object to serialize.\n trigger: Trigger that decides when to take snapshot. It can be either\n an already built trigger object (i.e., a callable object that\n accepts a trainer object and returns a bool value), or a tuple in\n the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter\n case, the tuple is passed to IntervalTrigger.\n\n Returns:\n An extension function.\n\n \"\"\"\n @extension.make_extension(trigger=trigger, priority=-100)\n def snapshot_object(trainer):\n _snapshot_object(trainer, target, filename.format(trainer), savefun)\n\n return snapshot_object\n\n\ndef snapshot(savefun=npz.save_npz,\n filename='snapshot_iter_{.updater.iteration}',\n trigger=(1, 'epoch')):\n \"\"\"Returns a trainer extension to take snapshots of the trainer.\n\n This extension serializes the trainer object and saves it to the output\n directory. It is used to support resuming the training loop from the saved\n state.\n\n This extension is called once for each epoch by default. The default\n priority is -100, which is lower than that of most built-in extensions.\n\n .. note::\n This extension first writes the serialized object to a temporary file\n and then rename it to the target file name. Thus, if the program stops\n right before the renaming, the temporary file might be left in the\n output directory.\n\n Args:\n savefun: Function to save the trainer. It takes two arguments: the\n output file path and the trainer object.\n filename (str): Name of the file into which the trainer is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method.\n trigger: Trigger that decides when to take snapshot. It can be either\n an already built trigger object (i.e., a callable object that\n accepts a trainer object and returns a bool value), or a tuple in\n the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. 
In latter\n case, the tuple is passed to IntervalTrigger.\n\n \"\"\"\n @extension.make_extension(trigger=trigger, priority=-100)\n def snapshot(trainer):\n _snapshot_object(trainer, trainer, filename.format(trainer), savefun)\n\n return snapshot\n\n\ndef _snapshot_object(trainer, target, filename, savefun):\n fn = filename.format(trainer)\n prefix = 'tmp' + fn\n fd, tmppath = tempfile.mkstemp(prefix=prefix, dir=trainer.out)\n try:\n savefun(tmppath, target)\n except Exception:\n os.close(fd)\n os.remove(tmppath)\n raise\n os.close(fd)\n shutil.move(tmppath, os.path.join(trainer.out, fn))\n", "path": "chainer/training/extensions/_snapshot.py"}], "after_files": [{"content": "import os\nimport shutil\nimport tempfile\n\nfrom chainer.serializers import npz\nfrom chainer.training import extension\n\n\ndef snapshot_object(target, filename, savefun=npz.save_npz):\n \"\"\"Returns a trainer extension to take snapshots of a given object.\n\n This extension serializes the given object and saves it to the output\n directory.\n\n This extension is called once per epoch by default. To take a\n snapshot at a different interval, a trigger object specifying the\n required interval can be passed along with this extension\n to the `extend()` method of the trainer.\n\n The default priority is -100, which is lower than that of most\n built-in extensions.\n\n Args:\n target: Object to serialize.\n filename (str): Name of the file into which the object is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method. For example,\n ``'snapshot_{.updater.iteration}'`` is converted to\n ``'snapshot_10000'`` at the 10,000th iteration.\n savefun: Function to save the object. It takes two arguments: the\n output file path and the object to serialize.\n\n Returns:\n An extension function.\n\n \"\"\"\n @extension.make_extension(trigger=(1, 'epoch'), priority=-100)\n def snapshot_object(trainer):\n _snapshot_object(trainer, target, filename.format(trainer), savefun)\n\n return snapshot_object\n\n\ndef snapshot(savefun=npz.save_npz,\n filename='snapshot_iter_{.updater.iteration}'):\n \"\"\"Returns a trainer extension to take snapshots of the trainer.\n\n This extension serializes the trainer object and saves it to the output\n directory. It is used to support resuming the training loop from the saved\n state.\n\n This extension is called once per epoch by default. To take a\n snapshot at a different interval, a trigger object specifying the\n required interval can be passed along with this extension\n to the `extend()` method of the trainer.\n\n The default priority is -100, which is lower than that of most\n built-in extensions.\n\n .. note::\n This extension first writes the serialized object to a temporary file\n and then rename it to the target file name. Thus, if the program stops\n right before the renaming, the temporary file might be left in the\n output directory.\n\n Args:\n savefun: Function to save the trainer. 
It takes two arguments: the\n output file path and the trainer object.\n filename (str): Name of the file into which the trainer is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method.\n\n \"\"\"\n @extension.make_extension(trigger=(1, 'epoch'), priority=-100)\n def snapshot(trainer):\n _snapshot_object(trainer, trainer, filename.format(trainer), savefun)\n\n return snapshot\n\n\ndef _snapshot_object(trainer, target, filename, savefun):\n fn = filename.format(trainer)\n prefix = 'tmp' + fn\n fd, tmppath = tempfile.mkstemp(prefix=prefix, dir=trainer.out)\n try:\n savefun(tmppath, target)\n except Exception:\n os.close(fd)\n os.remove(tmppath)\n raise\n os.close(fd)\n shutil.move(tmppath, os.path.join(trainer.out, fn))\n", "path": "chainer/training/extensions/_snapshot.py"}]} | 1,350 | 937 |
gh_patches_debug_27650 | rasdani/github-patches | git_diff | biolab__orange3-4217 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
2 x Transpose + Preprocess loses information
**Describe the bug**
Second transpose cannot retrieve the domain after Preprocess.
**To Reproduce**
Steps to reproduce the behavior:
1. File (brown-selected).
2. Transpose.
3. Preprocesss (say Normalize).
4. Transpose.
**Orange version:**
3.24.dev
**Expected behavior**
Second Transpose puts columns names into a string variable.
**Screenshots**
<img width="1232" alt="Screen Shot 2019-11-14 at 09 33 02" src="https://user-images.githubusercontent.com/12524972/68839832-c910d600-06c1-11ea-9286-5bf033a9802f.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Orange/preprocess/normalize.py`
Content:
```
1 import numpy as np
2
3 from Orange.data import ContinuousVariable, Domain
4 from Orange.statistics import distribution
5 from Orange.util import Reprable
6 from .preprocess import Normalize
7 from .transformation import Normalizer as Norm
8 __all__ = ["Normalizer"]
9
10
11 class Normalizer(Reprable):
12 def __init__(self,
13 zero_based=True,
14 norm_type=Normalize.NormalizeBySD,
15 transform_class=False,
16 center=True,
17 normalize_datetime=False):
18 self.zero_based = zero_based
19 self.norm_type = norm_type
20 self.transform_class = transform_class
21 self.center = center
22 self.normalize_datetime = normalize_datetime
23
24 def __call__(self, data):
25 dists = distribution.get_distributions(data)
26 new_attrs = [self.normalize(dists[i], var) for
27 (i, var) in enumerate(data.domain.attributes)]
28
29 new_class_vars = data.domain.class_vars
30 if self.transform_class:
31 attr_len = len(data.domain.attributes)
32 new_class_vars = [self.normalize(dists[i + attr_len], var) for
33 (i, var) in enumerate(data.domain.class_vars)]
34
35 domain = Domain(new_attrs, new_class_vars, data.domain.metas)
36 return data.transform(domain)
37
38 def normalize(self, dist, var):
39 if not var.is_continuous or (var.is_time and not self.normalize_datetime):
40 return var
41 elif self.norm_type == Normalize.NormalizeBySD:
42 return self.normalize_by_sd(dist, var)
43 elif self.norm_type == Normalize.NormalizeBySpan:
44 return self.normalize_by_span(dist, var)
45
46 def normalize_by_sd(self, dist, var):
47 avg, sd = (dist.mean(), dist.standard_deviation()) if dist.size else (0, 1)
48 if sd == 0:
49 sd = 1
50 if self.center:
51 compute_val = Norm(var, avg, 1 / sd)
52 else:
53 compute_val = Norm(var, 0, 1 / sd)
54
55 return ContinuousVariable(
56 var.name,
57 compute_value=compute_val,
58 sparse=var.sparse,
59 )
60
61 def normalize_by_span(self, dist, var):
62 dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)
63 diff = dma - dmi
64 if diff < 1e-15:
65 diff = 1
66 if self.zero_based:
67 return ContinuousVariable(
68 var.name,
69 compute_value=Norm(var, dmi, 1 / diff),
70 sparse=var.sparse)
71 else:
72 return ContinuousVariable(
73 var.name,
74 compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),
75 sparse=var.sparse)
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Orange/preprocess/normalize.py b/Orange/preprocess/normalize.py
--- a/Orange/preprocess/normalize.py
+++ b/Orange/preprocess/normalize.py
@@ -1,6 +1,6 @@
import numpy as np
-from Orange.data import ContinuousVariable, Domain
+from Orange.data import Domain
from Orange.statistics import distribution
from Orange.util import Reprable
from .preprocess import Normalize
@@ -51,12 +51,7 @@
compute_val = Norm(var, avg, 1 / sd)
else:
compute_val = Norm(var, 0, 1 / sd)
-
- return ContinuousVariable(
- var.name,
- compute_value=compute_val,
- sparse=var.sparse,
- )
+ return var.copy(compute_value=compute_val)
def normalize_by_span(self, dist, var):
dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)
@@ -64,12 +59,7 @@
if diff < 1e-15:
diff = 1
if self.zero_based:
- return ContinuousVariable(
- var.name,
- compute_value=Norm(var, dmi, 1 / diff),
- sparse=var.sparse)
+ compute_val = Norm(var, dmi, 1 / diff)
else:
- return ContinuousVariable(
- var.name,
- compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),
- sparse=var.sparse)
+ compute_val = Norm(var, (dma + dmi) / 2, 2 / diff)
+ return var.copy(compute_value=compute_val)
| {"golden_diff": "diff --git a/Orange/preprocess/normalize.py b/Orange/preprocess/normalize.py\n--- a/Orange/preprocess/normalize.py\n+++ b/Orange/preprocess/normalize.py\n@@ -1,6 +1,6 @@\n import numpy as np\n \n-from Orange.data import ContinuousVariable, Domain\n+from Orange.data import Domain\n from Orange.statistics import distribution\n from Orange.util import Reprable\n from .preprocess import Normalize\n@@ -51,12 +51,7 @@\n compute_val = Norm(var, avg, 1 / sd)\n else:\n compute_val = Norm(var, 0, 1 / sd)\n-\n- return ContinuousVariable(\n- var.name,\n- compute_value=compute_val,\n- sparse=var.sparse,\n- )\n+ return var.copy(compute_value=compute_val)\n \n def normalize_by_span(self, dist, var):\n dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)\n@@ -64,12 +59,7 @@\n if diff < 1e-15:\n diff = 1\n if self.zero_based:\n- return ContinuousVariable(\n- var.name,\n- compute_value=Norm(var, dmi, 1 / diff),\n- sparse=var.sparse)\n+ compute_val = Norm(var, dmi, 1 / diff)\n else:\n- return ContinuousVariable(\n- var.name,\n- compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),\n- sparse=var.sparse)\n+ compute_val = Norm(var, (dma + dmi) / 2, 2 / diff)\n+ return var.copy(compute_value=compute_val)\n", "issue": "2 x Transpose + Preprocess loses information\n**Describe the bug**\r\nSecond transpose cannot retrieve the domain after Preprocess.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. File (brown-selected).\r\n2. Transpose.\r\n3. Preprocesss (say Normalize).\r\n4. Transpose.\r\n\r\n**Orange version:**\r\n3.24.dev\r\n\r\n**Expected behavior**\r\nSecond Transpose puts columns names into a string variable.\r\n\r\n**Screenshots**\r\n<img width=\"1232\" alt=\"Screen Shot 2019-11-14 at 09 33 02\" src=\"https://user-images.githubusercontent.com/12524972/68839832-c910d600-06c1-11ea-9286-5bf033a9802f.png\">\r\n\r\n\n", "before_files": [{"content": "import numpy as np\n\nfrom Orange.data import ContinuousVariable, Domain\nfrom Orange.statistics import distribution\nfrom Orange.util import Reprable\nfrom .preprocess import Normalize\nfrom .transformation import Normalizer as Norm\n__all__ = [\"Normalizer\"]\n\n\nclass Normalizer(Reprable):\n def __init__(self,\n zero_based=True,\n norm_type=Normalize.NormalizeBySD,\n transform_class=False,\n center=True,\n normalize_datetime=False):\n self.zero_based = zero_based\n self.norm_type = norm_type\n self.transform_class = transform_class\n self.center = center\n self.normalize_datetime = normalize_datetime\n\n def __call__(self, data):\n dists = distribution.get_distributions(data)\n new_attrs = [self.normalize(dists[i], var) for\n (i, var) in enumerate(data.domain.attributes)]\n\n new_class_vars = data.domain.class_vars\n if self.transform_class:\n attr_len = len(data.domain.attributes)\n new_class_vars = [self.normalize(dists[i + attr_len], var) for\n (i, var) in enumerate(data.domain.class_vars)]\n\n domain = Domain(new_attrs, new_class_vars, data.domain.metas)\n return data.transform(domain)\n\n def normalize(self, dist, var):\n if not var.is_continuous or (var.is_time and not self.normalize_datetime):\n return var\n elif self.norm_type == Normalize.NormalizeBySD:\n return self.normalize_by_sd(dist, var)\n elif self.norm_type == Normalize.NormalizeBySpan:\n return self.normalize_by_span(dist, var)\n\n def normalize_by_sd(self, dist, var):\n avg, sd = (dist.mean(), dist.standard_deviation()) if dist.size else (0, 1)\n if sd == 0:\n sd = 1\n if self.center:\n compute_val = Norm(var, avg, 1 / sd)\n else:\n 
compute_val = Norm(var, 0, 1 / sd)\n\n return ContinuousVariable(\n var.name,\n compute_value=compute_val,\n sparse=var.sparse,\n )\n\n def normalize_by_span(self, dist, var):\n dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)\n diff = dma - dmi\n if diff < 1e-15:\n diff = 1\n if self.zero_based:\n return ContinuousVariable(\n var.name,\n compute_value=Norm(var, dmi, 1 / diff),\n sparse=var.sparse)\n else:\n return ContinuousVariable(\n var.name,\n compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),\n sparse=var.sparse)\n", "path": "Orange/preprocess/normalize.py"}], "after_files": [{"content": "import numpy as np\n\nfrom Orange.data import Domain\nfrom Orange.statistics import distribution\nfrom Orange.util import Reprable\nfrom .preprocess import Normalize\nfrom .transformation import Normalizer as Norm\n__all__ = [\"Normalizer\"]\n\n\nclass Normalizer(Reprable):\n def __init__(self,\n zero_based=True,\n norm_type=Normalize.NormalizeBySD,\n transform_class=False,\n center=True,\n normalize_datetime=False):\n self.zero_based = zero_based\n self.norm_type = norm_type\n self.transform_class = transform_class\n self.center = center\n self.normalize_datetime = normalize_datetime\n\n def __call__(self, data):\n dists = distribution.get_distributions(data)\n new_attrs = [self.normalize(dists[i], var) for\n (i, var) in enumerate(data.domain.attributes)]\n\n new_class_vars = data.domain.class_vars\n if self.transform_class:\n attr_len = len(data.domain.attributes)\n new_class_vars = [self.normalize(dists[i + attr_len], var) for\n (i, var) in enumerate(data.domain.class_vars)]\n\n domain = Domain(new_attrs, new_class_vars, data.domain.metas)\n return data.transform(domain)\n\n def normalize(self, dist, var):\n if not var.is_continuous or (var.is_time and not self.normalize_datetime):\n return var\n elif self.norm_type == Normalize.NormalizeBySD:\n return self.normalize_by_sd(dist, var)\n elif self.norm_type == Normalize.NormalizeBySpan:\n return self.normalize_by_span(dist, var)\n\n def normalize_by_sd(self, dist, var):\n avg, sd = (dist.mean(), dist.standard_deviation()) if dist.size else (0, 1)\n if sd == 0:\n sd = 1\n if self.center:\n compute_val = Norm(var, avg, 1 / sd)\n else:\n compute_val = Norm(var, 0, 1 / sd)\n return var.copy(compute_value=compute_val)\n\n def normalize_by_span(self, dist, var):\n dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)\n diff = dma - dmi\n if diff < 1e-15:\n diff = 1\n if self.zero_based:\n compute_val = Norm(var, dmi, 1 / diff)\n else:\n compute_val = Norm(var, (dma + dmi) / 2, 2 / diff)\n return var.copy(compute_value=compute_val)\n", "path": "Orange/preprocess/normalize.py"}]} | 1,182 | 383 |
gh_patches_debug_20258 | rasdani/github-patches | git_diff | kserve__kserve-1877 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix serving.kubeflow.org annotations in docs/samples
I've noticed that some `docs/samples` still use in `metadata.annotations` the `serving.kubeflow.org` instead of `serving.kserve.org`. See this [example](https://github.com/kserve/kserve/blob/master/docs/samples/kafka/s3_secret.yaml).
To save debugging time for others migrating from KFserving, I could create PR that fixes that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/samples/kafka/setup.py`
Content:
```
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from setuptools import setup, find_packages
15
16 tests_require = [
17 'pytest',
18 'pytest-tornasync',
19 'mypy'
20 ]
21
22 setup(
23 name='transformer',
24 version='0.1.0',
25 author_email='[email protected]',
26 license='../../LICENSE.txt',
27 url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',
28 description='Transformer',
29 long_description=open('README.md').read(),
30 python_requires='>=3.6',
31 packages=find_packages("transformer"),
32 install_requires=[
33 "kfserving>=0.2.1",
34 "argparse>=1.4.0",
35 "requests>=2.22.0",
36 "joblib>=0.13.2",
37 "pandas>=0.24.2",
38 "numpy>=1.16.3",
39 "kubernetes >= 9.0.0",
40 "opencv-python-headless==4.0.0.21",
41 "boto3==1.7.2"
42 ],
43 tests_require=tests_require,
44 extras_require={'test': tests_require}
45 )
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py
--- a/docs/samples/kafka/setup.py
+++ b/docs/samples/kafka/setup.py
@@ -24,21 +24,15 @@
version='0.1.0',
author_email='[email protected]',
license='../../LICENSE.txt',
- url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',
+ url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',
description='Transformer',
long_description=open('README.md').read(),
- python_requires='>=3.6',
+ python_requires='>=3.7',
packages=find_packages("transformer"),
install_requires=[
- "kfserving>=0.2.1",
- "argparse>=1.4.0",
- "requests>=2.22.0",
- "joblib>=0.13.2",
+ "kserve>=0.7.0",
"pandas>=0.24.2",
- "numpy>=1.16.3",
- "kubernetes >= 9.0.0",
"opencv-python-headless==4.0.0.21",
- "boto3==1.7.2"
],
tests_require=tests_require,
extras_require={'test': tests_require}
| {"golden_diff": "diff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py\n--- a/docs/samples/kafka/setup.py\n+++ b/docs/samples/kafka/setup.py\n@@ -24,21 +24,15 @@\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n- url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',\n+ url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',\n description='Transformer',\n long_description=open('README.md').read(),\n- python_requires='>=3.6',\n+ python_requires='>=3.7',\n packages=find_packages(\"transformer\"),\n install_requires=[\n- \"kfserving>=0.2.1\",\n- \"argparse>=1.4.0\",\n- \"requests>=2.22.0\",\n- \"joblib>=0.13.2\",\n+ \"kserve>=0.7.0\",\n \"pandas>=0.24.2\",\n- \"numpy>=1.16.3\",\n- \"kubernetes >= 9.0.0\",\n \"opencv-python-headless==4.0.0.21\",\n- \"boto3==1.7.2\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n", "issue": "Fix serving.kubeflow.org annotations in docs/samples\nI've noticed that some `docs/samples` still use in `metadata.annotations` the `serving.kubeflow.org` instead of `serving.kserve.org`. See this [example](https://github.com/kserve/kserve/blob/master/docs/samples/kafka/s3_secret.yaml).\r\nTo save debugging time for others migrating from KFserving, I could create PR that fixes that.\n", "before_files": [{"content": "#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='transformer',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',\n description='Transformer',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"transformer\"),\n install_requires=[\n \"kfserving>=0.2.1\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"kubernetes >= 9.0.0\",\n \"opencv-python-headless==4.0.0.21\",\n \"boto3==1.7.2\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "docs/samples/kafka/setup.py"}], "after_files": [{"content": "#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='transformer',\n 
version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',\n description='Transformer',\n long_description=open('README.md').read(),\n python_requires='>=3.7',\n packages=find_packages(\"transformer\"),\n install_requires=[\n \"kserve>=0.7.0\",\n \"pandas>=0.24.2\",\n \"opencv-python-headless==4.0.0.21\",\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "docs/samples/kafka/setup.py"}]} | 816 | 322 |
gh_patches_debug_39161 | rasdani/github-patches | git_diff | PrefectHQ__prefect-5437 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Azure BlobStorageUpload doesn't allow for overwriting blobs
## Current behavior
You get an error if you try to upload the same file name
```
azure.core.exceptions.ResourceExistsError: The specified blob already exists.
RequestId:5bef0cf1-b01e-002e-6
```
## Proposed behavior
The task should take in an `overwrite` argument and pass it to [this line](https://github.com/PrefectHQ/prefect/blob/6cd24b023411980842fa77e6c0ca2ced47eeb83e/src/prefect/tasks/azure/blobstorage.py#L131).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/tasks/azure/blobstorage.py`
Content:
```
1 import uuid
2
3 import azure.storage.blob
4
5 from prefect import Task
6 from prefect.client import Secret
7 from prefect.utilities.tasks import defaults_from_attrs
8
9
10 class BlobStorageDownload(Task):
11 """
12 Task for downloading data from an Blob Storage container and returning it as a string.
13 Note that all initialization arguments can optionally be provided or overwritten at runtime.
14
15 Args:
16 - azure_credentials_secret (str, optional): the name of the Prefect Secret
17 that stores your Azure credentials; this Secret must be an Azure connection string
18 - container (str, optional): the name of the Azure Blob Storage to download from
19 - **kwargs (dict, optional): additional keyword arguments to pass to the
20 Task constructor
21 """
22
23 def __init__(
24 self,
25 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
26 container: str = None,
27 **kwargs
28 ) -> None:
29 self.azure_credentials_secret = azure_credentials_secret
30 self.container = container
31 super().__init__(**kwargs)
32
33 @defaults_from_attrs("azure_credentials_secret", "container")
34 def run(
35 self,
36 blob_name: str,
37 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
38 container: str = None,
39 ) -> str:
40 """
41 Task run method.
42
43 Args:
44 - blob_name (str): the name of the blob within this container to retrieve
45 - azure_credentials_secret (str, optional): the name of the Prefect Secret
46 that stores your Azure credentials; this Secret must be an Azure connection string
47 - container (str, optional): the name of the Blob Storage container to download from
48
49 Returns:
50 - str: the contents of this blob_name / container, as a string
51 """
52
53 if container is None:
54 raise ValueError("A container name must be provided.")
55
56 # get Azure credentials
57 azure_credentials = Secret(azure_credentials_secret).get()
58
59 blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(
60 conn_str=azure_credentials
61 )
62
63 client = blob_service.get_blob_client(container=container, blob=blob_name)
64 content_string = client.download_blob().content_as_text()
65
66 return content_string
67
68
69 class BlobStorageUpload(Task):
70 """
71 Task for uploading string data (e.g., a JSON string) to an Azure Blob Storage container.
72 Note that all initialization arguments can optionally be provided or overwritten at runtime.
73
74 Args:
75 - azure_credentials_secret (str, optional): the name of the Prefect Secret
76 that stores your Azure credentials; this Secret must be an Azure connection string
77 - container (str, optional): the name of the Azure Blob Storage to upload to
78 - **kwargs (dict, optional): additional keyword arguments to pass to the
79 Task constructor
80 """
81
82 def __init__(
83 self,
84 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
85 container: str = None,
86 **kwargs
87 ) -> None:
88 self.azure_credentials_secret = azure_credentials_secret
89 self.container = container
90 super().__init__(**kwargs)
91
92 @defaults_from_attrs("azure_credentials_secret", "container")
93 def run(
94 self,
95 data: str,
96 blob_name: str = None,
97 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
98 container: str = None,
99 ) -> str:
100 """
101 Task run method.
102
103 Args:
104 - data (str): the data payload to upload
105 - blob_name (str, optional): the name to upload the data under; if not
106 provided, a random `uuid` will be created
107 - azure_credentials_secret (str, optional): the name of the Prefect Secret
108 that stores your Azure credentials; this Secret must be an Azure connection string
109 - container (str, optional): the name of the Blob Storage container to upload to
110
111 Returns:
112 - str: the name of the blob the data payload was uploaded to
113 """
114
115 if container is None:
116 raise ValueError("A container name must be provided.")
117
118 # get Azure credentials
119 azure_credentials = Secret(azure_credentials_secret).get()
120
121 blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(
122 conn_str=azure_credentials
123 )
124
125 # create key if not provided
126 if blob_name is None:
127 blob_name = str(uuid.uuid4())
128
129 client = blob_service.get_blob_client(container=container, blob=blob_name)
130
131 client.upload_blob(data)
132
133 return blob_name
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/tasks/azure/blobstorage.py b/src/prefect/tasks/azure/blobstorage.py
--- a/src/prefect/tasks/azure/blobstorage.py
+++ b/src/prefect/tasks/azure/blobstorage.py
@@ -75,6 +75,8 @@
- azure_credentials_secret (str, optional): the name of the Prefect Secret
that stores your Azure credentials; this Secret must be an Azure connection string
- container (str, optional): the name of the Azure Blob Storage to upload to
+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.
+ Defaults to `False` and an error will be thrown if the blob already exists.
- **kwargs (dict, optional): additional keyword arguments to pass to the
Task constructor
"""
@@ -83,19 +85,22 @@
self,
azure_credentials_secret: str = "AZ_CONNECTION_STRING",
container: str = None,
+ overwrite: bool = False,
**kwargs
) -> None:
self.azure_credentials_secret = azure_credentials_secret
self.container = container
+ self.overwrite = overwrite
super().__init__(**kwargs)
- @defaults_from_attrs("azure_credentials_secret", "container")
+ @defaults_from_attrs("azure_credentials_secret", "container", "overwrite")
def run(
self,
data: str,
blob_name: str = None,
azure_credentials_secret: str = "AZ_CONNECTION_STRING",
container: str = None,
+ overwrite: bool = False,
) -> str:
"""
Task run method.
@@ -107,6 +112,8 @@
- azure_credentials_secret (str, optional): the name of the Prefect Secret
that stores your Azure credentials; this Secret must be an Azure connection string
- container (str, optional): the name of the Blob Storage container to upload to
+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.
+ Defaults to `False` and an error will be thrown if the blob already exists.
Returns:
- str: the name of the blob the data payload was uploaded to
@@ -128,6 +135,6 @@
client = blob_service.get_blob_client(container=container, blob=blob_name)
- client.upload_blob(data)
+ client.upload_blob(data, overwrite=overwrite)
return blob_name
| {"golden_diff": "diff --git a/src/prefect/tasks/azure/blobstorage.py b/src/prefect/tasks/azure/blobstorage.py\n--- a/src/prefect/tasks/azure/blobstorage.py\n+++ b/src/prefect/tasks/azure/blobstorage.py\n@@ -75,6 +75,8 @@\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to upload to\n+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.\n+ Defaults to `False` and an error will be thrown if the blob already exists.\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n@@ -83,19 +85,22 @@\n self,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n+ overwrite: bool = False,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n+ self.overwrite = overwrite\n super().__init__(**kwargs)\n \n- @defaults_from_attrs(\"azure_credentials_secret\", \"container\")\n+ @defaults_from_attrs(\"azure_credentials_secret\", \"container\", \"overwrite\")\n def run(\n self,\n data: str,\n blob_name: str = None,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n+ overwrite: bool = False,\n ) -> str:\n \"\"\"\n Task run method.\n@@ -107,6 +112,8 @@\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to upload to\n+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.\n+ Defaults to `False` and an error will be thrown if the blob already exists.\n \n Returns:\n - str: the name of the blob the data payload was uploaded to\n@@ -128,6 +135,6 @@\n \n client = blob_service.get_blob_client(container=container, blob=blob_name)\n \n- client.upload_blob(data)\n+ client.upload_blob(data, overwrite=overwrite)\n \n return blob_name\n", "issue": "Azure BlobStorageUpload doesn't allow for overwriting blobs\n## Current behavior\r\n\r\nYou get an error if you try to upload the same file name\r\n\r\n```\r\nazure.core.exceptions.ResourceExistsError: The specified blob already exists.\r\nRequestId:5bef0cf1-b01e-002e-6\r\n```\r\n\r\n## Proposed behavior\r\n\r\nThe task should take in an `overwrite` argument and pass it to [this line](https://github.com/PrefectHQ/prefect/blob/6cd24b023411980842fa77e6c0ca2ced47eeb83e/src/prefect/tasks/azure/blobstorage.py#L131).\r\n\r\n\n", "before_files": [{"content": "import uuid\n\nimport azure.storage.blob\n\nfrom prefect import Task\nfrom prefect.client import Secret\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass BlobStorageDownload(Task):\n \"\"\"\n Task for downloading data from an Blob Storage container and returning it as a string.\n Note that all initialization arguments can optionally be provided or overwritten at runtime.\n\n Args:\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to download from\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n\n def __init__(\n self,\n azure_credentials_secret: str = 
\"AZ_CONNECTION_STRING\",\n container: str = None,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"azure_credentials_secret\", \"container\")\n def run(\n self,\n blob_name: str,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n ) -> str:\n \"\"\"\n Task run method.\n\n Args:\n - blob_name (str): the name of the blob within this container to retrieve\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to download from\n\n Returns:\n - str: the contents of this blob_name / container, as a string\n \"\"\"\n\n if container is None:\n raise ValueError(\"A container name must be provided.\")\n\n # get Azure credentials\n azure_credentials = Secret(azure_credentials_secret).get()\n\n blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=azure_credentials\n )\n\n client = blob_service.get_blob_client(container=container, blob=blob_name)\n content_string = client.download_blob().content_as_text()\n\n return content_string\n\n\nclass BlobStorageUpload(Task):\n \"\"\"\n Task for uploading string data (e.g., a JSON string) to an Azure Blob Storage container.\n Note that all initialization arguments can optionally be provided or overwritten at runtime.\n\n Args:\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to upload to\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n\n def __init__(\n self,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"azure_credentials_secret\", \"container\")\n def run(\n self,\n data: str,\n blob_name: str = None,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n ) -> str:\n \"\"\"\n Task run method.\n\n Args:\n - data (str): the data payload to upload\n - blob_name (str, optional): the name to upload the data under; if not\n provided, a random `uuid` will be created\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to upload to\n\n Returns:\n - str: the name of the blob the data payload was uploaded to\n \"\"\"\n\n if container is None:\n raise ValueError(\"A container name must be provided.\")\n\n # get Azure credentials\n azure_credentials = Secret(azure_credentials_secret).get()\n\n blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=azure_credentials\n )\n\n # create key if not provided\n if blob_name is None:\n blob_name = str(uuid.uuid4())\n\n client = blob_service.get_blob_client(container=container, blob=blob_name)\n\n client.upload_blob(data)\n\n return blob_name\n", "path": "src/prefect/tasks/azure/blobstorage.py"}], "after_files": [{"content": "import uuid\n\nimport azure.storage.blob\n\nfrom prefect import Task\nfrom 
prefect.client import Secret\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass BlobStorageDownload(Task):\n \"\"\"\n Task for downloading data from an Blob Storage container and returning it as a string.\n Note that all initialization arguments can optionally be provided or overwritten at runtime.\n\n Args:\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to download from\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n\n def __init__(\n self,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"azure_credentials_secret\", \"container\")\n def run(\n self,\n blob_name: str,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n ) -> str:\n \"\"\"\n Task run method.\n\n Args:\n - blob_name (str): the name of the blob within this container to retrieve\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to download from\n\n Returns:\n - str: the contents of this blob_name / container, as a string\n \"\"\"\n\n if container is None:\n raise ValueError(\"A container name must be provided.\")\n\n # get Azure credentials\n azure_credentials = Secret(azure_credentials_secret).get()\n\n blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=azure_credentials\n )\n\n client = blob_service.get_blob_client(container=container, blob=blob_name)\n content_string = client.download_blob().content_as_text()\n\n return content_string\n\n\nclass BlobStorageUpload(Task):\n \"\"\"\n Task for uploading string data (e.g., a JSON string) to an Azure Blob Storage container.\n Note that all initialization arguments can optionally be provided or overwritten at runtime.\n\n Args:\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to upload to\n - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.\n Defaults to `False` and an error will be thrown if the blob already exists.\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n\n def __init__(\n self,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n overwrite: bool = False,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n self.overwrite = overwrite\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"azure_credentials_secret\", \"container\", \"overwrite\")\n def run(\n self,\n data: str,\n blob_name: str = None,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n overwrite: bool = False,\n ) -> str:\n \"\"\"\n Task run method.\n\n Args:\n - data (str): the data payload to upload\n - blob_name (str, optional): the name to upload the data under; if not\n provided, a random `uuid` 
will be created\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to upload to\n - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.\n Defaults to `False` and an error will be thrown if the blob already exists.\n\n Returns:\n - str: the name of the blob the data payload was uploaded to\n \"\"\"\n\n if container is None:\n raise ValueError(\"A container name must be provided.\")\n\n # get Azure credentials\n azure_credentials = Secret(azure_credentials_secret).get()\n\n blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=azure_credentials\n )\n\n # create key if not provided\n if blob_name is None:\n blob_name = str(uuid.uuid4())\n\n client = blob_service.get_blob_client(container=container, blob=blob_name)\n\n client.upload_blob(data, overwrite=overwrite)\n\n return blob_name\n", "path": "src/prefect/tasks/azure/blobstorage.py"}]} | 1,671 | 546 |
gh_patches_debug_7233 | rasdani/github-patches | git_diff | graspologic-org__graspologic-431 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remove * import in simulations
https://github.com/neurodata/graspy/blob/master/graspy/simulations/__init__.py
should not be using * import here
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `graspy/simulations/__init__.py`
Content:
```
1 # Copyright (c) Microsoft Corporation and contributors.
2 # Licensed under the MIT License.
3
4 from .simulations import *
5 from .simulations_corr import *
6 from .rdpg_corr import *
7
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/graspy/simulations/__init__.py b/graspy/simulations/__init__.py
--- a/graspy/simulations/__init__.py
+++ b/graspy/simulations/__init__.py
@@ -1,6 +1,19 @@
# Copyright (c) Microsoft Corporation and contributors.
# Licensed under the MIT License.
-from .simulations import *
-from .simulations_corr import *
-from .rdpg_corr import *
+from .simulations import sample_edges, er_np, er_nm, sbm, rdpg, p_from_latent
+from .simulations_corr import sample_edges_corr, er_corr, sbm_corr
+from .rdpg_corr import rdpg_corr
+
+__all__ = [
+ "sample_edges",
+ "er_np",
+ "er_nm",
+ "sbm",
+ "rdpg",
+ "p_from_latent",
+ "sample_edges_corr",
+ "er_corr",
+ "sbm_corr",
+ "rdpg_corr",
+]
| {"golden_diff": "diff --git a/graspy/simulations/__init__.py b/graspy/simulations/__init__.py\n--- a/graspy/simulations/__init__.py\n+++ b/graspy/simulations/__init__.py\n@@ -1,6 +1,19 @@\n # Copyright (c) Microsoft Corporation and contributors.\n # Licensed under the MIT License.\n \n-from .simulations import *\n-from .simulations_corr import *\n-from .rdpg_corr import *\n+from .simulations import sample_edges, er_np, er_nm, sbm, rdpg, p_from_latent\n+from .simulations_corr import sample_edges_corr, er_corr, sbm_corr\n+from .rdpg_corr import rdpg_corr\n+\n+__all__ = [\n+ \"sample_edges\",\n+ \"er_np\",\n+ \"er_nm\",\n+ \"sbm\",\n+ \"rdpg\",\n+ \"p_from_latent\",\n+ \"sample_edges_corr\",\n+ \"er_corr\",\n+ \"sbm_corr\",\n+ \"rdpg_corr\",\n+]\n", "issue": "remove * import in simulations\nhttps://github.com/neurodata/graspy/blob/master/graspy/simulations/__init__.py\r\n\r\nshould not be using * import here\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nfrom .simulations import *\nfrom .simulations_corr import *\nfrom .rdpg_corr import *\n", "path": "graspy/simulations/__init__.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nfrom .simulations import sample_edges, er_np, er_nm, sbm, rdpg, p_from_latent\nfrom .simulations_corr import sample_edges_corr, er_corr, sbm_corr\nfrom .rdpg_corr import rdpg_corr\n\n__all__ = [\n \"sample_edges\",\n \"er_np\",\n \"er_nm\",\n \"sbm\",\n \"rdpg\",\n \"p_from_latent\",\n \"sample_edges_corr\",\n \"er_corr\",\n \"sbm_corr\",\n \"rdpg_corr\",\n]\n", "path": "graspy/simulations/__init__.py"}]} | 346 | 230 |
gh_patches_debug_62674 | rasdani/github-patches | git_diff | oppia__oppia-1713 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add an OutputContains rule to the CodeRepl interaction.
We've had a request to add an OutputContains rule to the CodeRepl interaction.
The use case is as follows: the student will type in the body of a function, and their code will be checked by calling the function on several inputs and printing the results. We don't want to stop the student from printing their own stuff from the function first, though, hence the idea of checking to see whether a substring of the student's output matches the expected output.
Note that this is a straightforward starter project. The files to modify are extensions/interactions/CodeRepl/CodeRepl.js (see codeReplRulesService) and the corresponding test suite in extensions/interactions/CodeRepl/CodeReplRulesServiceSpec.js.
/cc @anuzis
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `extensions/rules/code_evaluation.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, softwar
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Rules for CodeEvaluation objects."""
18
19 from extensions.rules import base
20
21
22 class CodeEquals(base.CodeEvaluationRule):
23 description = 'has code equal to {{x|CodeString}}'
24
25
26 class CodeContains(base.CodeEvaluationRule):
27 description = 'has code that contains {{x|CodeString}}'
28
29
30 class CodeDoesNotContain(base.CodeEvaluationRule):
31 description = 'has code that does not contain {{x|CodeString}}'
32
33
34 class OutputEquals(base.CodeEvaluationRule):
35 description = 'has output equal to {{x|CodeString}}'
36
37
38 class ResultsInError(base.CodeEvaluationRule):
39 description = 'results in an error when run'
40
41
42 class ErrorContains(base.CodeEvaluationRule):
43 description = (
44 'has error message that contains {{x|UnicodeString}}')
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/extensions/rules/code_evaluation.py b/extensions/rules/code_evaluation.py
--- a/extensions/rules/code_evaluation.py
+++ b/extensions/rules/code_evaluation.py
@@ -30,6 +30,8 @@
class CodeDoesNotContain(base.CodeEvaluationRule):
description = 'has code that does not contain {{x|CodeString}}'
+class OutputContains(base.CodeEvaluationRule):
+ description = 'has output that contains {{x|CodeString}}'
class OutputEquals(base.CodeEvaluationRule):
description = 'has output equal to {{x|CodeString}}'
| {"golden_diff": "diff --git a/extensions/rules/code_evaluation.py b/extensions/rules/code_evaluation.py\n--- a/extensions/rules/code_evaluation.py\n+++ b/extensions/rules/code_evaluation.py\n@@ -30,6 +30,8 @@\n class CodeDoesNotContain(base.CodeEvaluationRule):\n description = 'has code that does not contain {{x|CodeString}}'\n \n+class OutputContains(base.CodeEvaluationRule):\n+ description = 'has output that contains {{x|CodeString}}'\n \n class OutputEquals(base.CodeEvaluationRule):\n description = 'has output equal to {{x|CodeString}}'\n", "issue": "Add an OutputContains rule to the CodeRepl interaction.\nWe've had a request to add an OutputContains rule to the CodeRepl interaction.\n\nThe use case is as follows: the student will type in the body of a function, and their code will be checked by calling the function on several inputs and printing the results. We don't want to stop the student from printing their own stuff from the function first, though, hence the idea of checking to see whether a substring of the student's output matches the expected output.\n\nNote that this is a straightforward starter project. The files to modify are extensions/interactions/CodeRepl/CodeRepl.js (see codeReplRulesService) and the corresponding test suite in extensions/interactions/CodeRepl/CodeReplRulesServiceSpec.js.\n\n/cc @anuzis \n\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Rules for CodeEvaluation objects.\"\"\"\n\nfrom extensions.rules import base\n\n\nclass CodeEquals(base.CodeEvaluationRule):\n description = 'has code equal to {{x|CodeString}}'\n\n\nclass CodeContains(base.CodeEvaluationRule):\n description = 'has code that contains {{x|CodeString}}'\n\n\nclass CodeDoesNotContain(base.CodeEvaluationRule):\n description = 'has code that does not contain {{x|CodeString}}'\n\n\nclass OutputEquals(base.CodeEvaluationRule):\n description = 'has output equal to {{x|CodeString}}'\n\n\nclass ResultsInError(base.CodeEvaluationRule):\n description = 'results in an error when run'\n\n\nclass ErrorContains(base.CodeEvaluationRule):\n description = (\n 'has error message that contains {{x|UnicodeString}}')\n", "path": "extensions/rules/code_evaluation.py"}], "after_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Rules for CodeEvaluation objects.\"\"\"\n\nfrom extensions.rules import base\n\n\nclass CodeEquals(base.CodeEvaluationRule):\n description = 'has code equal to {{x|CodeString}}'\n\n\nclass CodeContains(base.CodeEvaluationRule):\n description = 'has code that contains {{x|CodeString}}'\n\n\nclass CodeDoesNotContain(base.CodeEvaluationRule):\n description = 'has code that does not contain {{x|CodeString}}'\n\nclass OutputContains(base.CodeEvaluationRule):\n description = 'has output that contains {{x|CodeString}}'\n\nclass OutputEquals(base.CodeEvaluationRule):\n description = 'has output equal to {{x|CodeString}}'\n\n\nclass ResultsInError(base.CodeEvaluationRule):\n description = 'results in an error when run'\n\n\nclass ErrorContains(base.CodeEvaluationRule):\n description = (\n 'has error message that contains {{x|UnicodeString}}')\n", "path": "extensions/rules/code_evaluation.py"}]} | 824 | 121 |
gh_patches_debug_22767 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-4224 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pizza Hut Spider returns some closed outlets
It looks like the GB Pizza Hut spider "pizza_hut_gb" is returning a number of outlets that have closed. These are evident when the website either redirects to https://www.pizzahut.co.uk/restaurants/find or https://www.pizzahut.co.uk/restaurants/error/filenotfound . It seems that Pizza Hut are leaving up the https://www.pizzahut.co.uk/huts/uk-2/... web page after the outlet has closed, presumably for SEO reasons. These pages still contain the old location and web address, which the spider then picks up.
Examples include https://www.pizzahut.co.uk/huts/uk-2/437-ayr/ and https://www.pizzahut.co.uk/huts/uk-2/390-barrow/ .
I think these closed outlets can probably be removed from the dataset returned by looking at the openingHours LD field on the /huts/uk-2/ pages. The closed outlets seem to always have "openingHours":[]. The open branches have some sensible content there.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/pizza_hut_gb.py`
Content:
```
1 from scrapy.spiders import SitemapSpider
2
3 from locations.spiders.vapestore_gb import clean_address
4 from locations.structured_data_spider import StructuredDataSpider
5
6
7 class PizzaHutGB(SitemapSpider, StructuredDataSpider):
8 name = "pizza_hut_gb"
9 item_attributes = {"brand": "Pizza Hut", "brand_wikidata": "Q191615"}
10 sitemap_urls = ["https://www.pizzahut.co.uk/sitemap.xml"]
11 sitemap_rules = [
12 (r"https:\/\/www\.pizzahut\.co\.uk\/huts\/[-\w]+\/([-.\w]+)\/$", "parse_sd")
13 ]
14 wanted_types = ["FastFoodRestaurant"]
15
16 def inspect_item(self, item, response):
17 item["street_address"] = clean_address(item["street_address"])
18
19 if item["website"].startswith("https://www.pizzahut.co.uk/huts/"):
20 item["brand"] = "Pizza Hut Delivery"
21 item["brand_wikidata"] = "Q107293079"
22
23 yield item
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/pizza_hut_gb.py b/locations/spiders/pizza_hut_gb.py
--- a/locations/spiders/pizza_hut_gb.py
+++ b/locations/spiders/pizza_hut_gb.py
@@ -7,17 +7,19 @@
class PizzaHutGB(SitemapSpider, StructuredDataSpider):
name = "pizza_hut_gb"
item_attributes = {"brand": "Pizza Hut", "brand_wikidata": "Q191615"}
+ PIZZA_HUT_DELIVERY = {"brand": "Pizza Hut Delivery", "brand_wikidata": "Q107293079"}
sitemap_urls = ["https://www.pizzahut.co.uk/sitemap.xml"]
sitemap_rules = [
(r"https:\/\/www\.pizzahut\.co\.uk\/huts\/[-\w]+\/([-.\w]+)\/$", "parse_sd")
]
- wanted_types = ["FastFoodRestaurant"]
- def inspect_item(self, item, response):
+ def post_process_item(self, item, response, ld_data, **kwargs):
item["street_address"] = clean_address(item["street_address"])
if item["website"].startswith("https://www.pizzahut.co.uk/huts/"):
- item["brand"] = "Pizza Hut Delivery"
- item["brand_wikidata"] = "Q107293079"
+ item.update(self.PIZZA_HUT_DELIVERY)
+
+ if not item["opening_hours"]:
+ return
yield item
| {"golden_diff": "diff --git a/locations/spiders/pizza_hut_gb.py b/locations/spiders/pizza_hut_gb.py\n--- a/locations/spiders/pizza_hut_gb.py\n+++ b/locations/spiders/pizza_hut_gb.py\n@@ -7,17 +7,19 @@\n class PizzaHutGB(SitemapSpider, StructuredDataSpider):\n name = \"pizza_hut_gb\"\n item_attributes = {\"brand\": \"Pizza Hut\", \"brand_wikidata\": \"Q191615\"}\n+ PIZZA_HUT_DELIVERY = {\"brand\": \"Pizza Hut Delivery\", \"brand_wikidata\": \"Q107293079\"}\n sitemap_urls = [\"https://www.pizzahut.co.uk/sitemap.xml\"]\n sitemap_rules = [\n (r\"https:\\/\\/www\\.pizzahut\\.co\\.uk\\/huts\\/[-\\w]+\\/([-.\\w]+)\\/$\", \"parse_sd\")\n ]\n- wanted_types = [\"FastFoodRestaurant\"]\n \n- def inspect_item(self, item, response):\n+ def post_process_item(self, item, response, ld_data, **kwargs):\n item[\"street_address\"] = clean_address(item[\"street_address\"])\n \n if item[\"website\"].startswith(\"https://www.pizzahut.co.uk/huts/\"):\n- item[\"brand\"] = \"Pizza Hut Delivery\"\n- item[\"brand_wikidata\"] = \"Q107293079\"\n+ item.update(self.PIZZA_HUT_DELIVERY)\n+\n+ if not item[\"opening_hours\"]:\n+ return\n \n yield item\n", "issue": "Pizza Hut Spider returns some closed outlets\nIt looks like the GB Pizza Hut spider \"pizza_hut_gb\" is returning a number of outlets that have closed. These are evident when the website either redirects to https://www.pizzahut.co.uk/restaurants/find or https://www.pizzahut.co.uk/restaurants/error/filenotfound . It seems that Pizza Hut are leaving up the https://www.pizzahut.co.uk/huts/uk-2/... web page after the outlet has closed, presumably for SEO reasons. These pages still contain the old location and web address, which the spider then picks up.\r\n\r\nExamples include https://www.pizzahut.co.uk/huts/uk-2/437-ayr/ and https://www.pizzahut.co.uk/huts/uk-2/390-barrow/ .\r\n\r\nI think these closed outlets can probably be removed from the dataset returned by looking at the openingHours LD field on the /huts/uk-2/ pages. The closed outlets seem to always have \"openingHours\":[]. 
The open branches have some sensible content there.\n", "before_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.spiders.vapestore_gb import clean_address\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass PizzaHutGB(SitemapSpider, StructuredDataSpider):\n name = \"pizza_hut_gb\"\n item_attributes = {\"brand\": \"Pizza Hut\", \"brand_wikidata\": \"Q191615\"}\n sitemap_urls = [\"https://www.pizzahut.co.uk/sitemap.xml\"]\n sitemap_rules = [\n (r\"https:\\/\\/www\\.pizzahut\\.co\\.uk\\/huts\\/[-\\w]+\\/([-.\\w]+)\\/$\", \"parse_sd\")\n ]\n wanted_types = [\"FastFoodRestaurant\"]\n\n def inspect_item(self, item, response):\n item[\"street_address\"] = clean_address(item[\"street_address\"])\n\n if item[\"website\"].startswith(\"https://www.pizzahut.co.uk/huts/\"):\n item[\"brand\"] = \"Pizza Hut Delivery\"\n item[\"brand_wikidata\"] = \"Q107293079\"\n\n yield item\n", "path": "locations/spiders/pizza_hut_gb.py"}], "after_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.spiders.vapestore_gb import clean_address\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass PizzaHutGB(SitemapSpider, StructuredDataSpider):\n name = \"pizza_hut_gb\"\n item_attributes = {\"brand\": \"Pizza Hut\", \"brand_wikidata\": \"Q191615\"}\n PIZZA_HUT_DELIVERY = {\"brand\": \"Pizza Hut Delivery\", \"brand_wikidata\": \"Q107293079\"}\n sitemap_urls = [\"https://www.pizzahut.co.uk/sitemap.xml\"]\n sitemap_rules = [\n (r\"https:\\/\\/www\\.pizzahut\\.co\\.uk\\/huts\\/[-\\w]+\\/([-.\\w]+)\\/$\", \"parse_sd\")\n ]\n\n def post_process_item(self, item, response, ld_data, **kwargs):\n item[\"street_address\"] = clean_address(item[\"street_address\"])\n\n if item[\"website\"].startswith(\"https://www.pizzahut.co.uk/huts/\"):\n item.update(self.PIZZA_HUT_DELIVERY)\n\n if not item[\"opening_hours\"]:\n return\n\n yield item\n", "path": "locations/spiders/pizza_hut_gb.py"}]} | 775 | 352 |
gh_patches_debug_15192 | rasdani/github-patches | git_diff | SeldonIO__MLServer-339 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mlserver --version fails (0.5.0)
```
mlserver --version
Traceback (most recent call last):
File "/home/clive/anaconda3/envs/mlserver/bin/mlserver", line 8, in <module>
sys.exit(main())
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/mlserver/cli/main.py", line 45, in main
root()
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 781, in main
with self.make_context(prog_name, args, **extra) as ctx:
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 700, in make_context
self.parse_args(ctx, args)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 1212, in parse_args
rest = Command.parse_args(self, ctx, args)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 1048, in parse_args
value, args = param.handle_parse_result(ctx, opts, args)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 1630, in handle_parse_result
value = invoke_param_callback(self.callback, ctx, self, value)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 123, in invoke_param_callback
return callback(ctx, param, value)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/decorators.py", line 295, in callback
raise RuntimeError("Could not determine version")
RuntimeError: Could not determine version
(mlserver) /home/clive $ pip freeze | grep mlserver
mlserver==0.5.0
```
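
`mlserver --version` relies on resolving the installed version from package metadata. A hedged sketch of that lookup is below; whether click goes through `importlib.metadata` or `pkg_resources` depends on the click version, so this is illustrative only. On Python < 3.8 the stdlib module does not exist, which is why the `importlib-metadata` backport matters here:

```python
# Illustrative sketch of resolving an installed package's version from
# metadata, the mechanism that --version ultimately depends on.
try:
    from importlib.metadata import version  # stdlib on Python >= 3.8
except ImportError:
    from importlib_metadata import version  # backport needed on < 3.8

print(version("mlserver"))  # e.g. "0.5.0"
```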
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2
3 from typing import Dict
4 from setuptools import setup, find_packages
5
6 ROOT_PATH = os.path.dirname(__file__)
7 PKG_NAME = "mlserver"
8 PKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)
9
10
11 def _load_version() -> str:
12 version = ""
13 version_path = os.path.join(PKG_PATH, "version.py")
14 with open(version_path) as fp:
15 version_module: Dict[str, str] = {}
16 exec(fp.read(), version_module)
17 version = version_module["__version__"]
18
19 return version
20
21
22 def _load_description() -> str:
23 readme_path = os.path.join(ROOT_PATH, "README.md")
24 with open(readme_path) as fp:
25 return fp.read()
26
27
28 setup(
29 name=PKG_NAME,
30 version=_load_version(),
31 url="https://github.com/SeldonIO/MLServer.git",
32 author="Seldon Technologies Ltd.",
33 author_email="[email protected]",
34 description="ML server",
35 packages=find_packages(exclude=["tests", "tests.*"]),
36 install_requires=[
37 "grpcio",
38 "protobuf",
39 # We pin version of fastapi
40 # check https://github.com/SeldonIO/MLServer/issues/340
41 "fastapi==0.68.2",
42 "uvicorn",
43 "click",
44 "numpy",
45 "pandas",
46 ],
47 extras_require={"all": ["orjson"]},
48 entry_points={"console_scripts": ["mlserver=mlserver.cli:main"]},
49 long_description=_load_description(),
50 long_description_content_type="text/markdown",
51 license="Apache 2.0",
52 )
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,15 +34,16 @@
description="ML server",
packages=find_packages(exclude=["tests", "tests.*"]),
install_requires=[
- "grpcio",
- "protobuf",
+ "click",
# We pin version of fastapi
# check https://github.com/SeldonIO/MLServer/issues/340
"fastapi==0.68.2",
- "uvicorn",
- "click",
+ "grpcio",
+ "importlib-metadata;python_version<'3.8'",
"numpy",
"pandas",
+ "protobuf",
+ "uvicorn",
],
extras_require={"all": ["orjson"]},
entry_points={"console_scripts": ["mlserver=mlserver.cli:main"]},
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,15 +34,16 @@\n description=\"ML server\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n- \"grpcio\",\n- \"protobuf\",\n+ \"click\",\n # We pin version of fastapi\n # check https://github.com/SeldonIO/MLServer/issues/340\n \"fastapi==0.68.2\",\n- \"uvicorn\",\n- \"click\",\n+ \"grpcio\",\n+ \"importlib-metadata;python_version<'3.8'\",\n \"numpy\",\n \"pandas\",\n+ \"protobuf\",\n+ \"uvicorn\",\n ],\n extras_require={\"all\": [\"orjson\"]},\n entry_points={\"console_scripts\": [\"mlserver=mlserver.cli:main\"]},\n", "issue": "mlserver --version fails (0.5.0)\n```\r\nmlserver --version\r\nTraceback (most recent call last):\r\n File \"/home/clive/anaconda3/envs/mlserver/bin/mlserver\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/mlserver/cli/main.py\", line 45, in main\r\n root()\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 781, in main\r\n with self.make_context(prog_name, args, **extra) as ctx:\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 700, in make_context\r\n self.parse_args(ctx, args)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 1212, in parse_args\r\n rest = Command.parse_args(self, ctx, args)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 1048, in parse_args\r\n value, args = param.handle_parse_result(ctx, opts, args)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 1630, in handle_parse_result\r\n value = invoke_param_callback(self.callback, ctx, self, value)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 123, in invoke_param_callback\r\n return callback(ctx, param, value)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/decorators.py\", line 295, in callback\r\n raise RuntimeError(\"Could not determine version\")\r\nRuntimeError: Could not determine version\r\n(mlserver) /home/clive $ pip freeze | grep mlserver\r\nmlserver==0.5.0\r\n```\n", "before_files": [{"content": "import os\n\nfrom typing import Dict\nfrom setuptools import setup, find_packages\n\nROOT_PATH = os.path.dirname(__file__)\nPKG_NAME = \"mlserver\"\nPKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)\n\n\ndef _load_version() -> str:\n version = \"\"\n version_path = os.path.join(PKG_PATH, \"version.py\")\n with open(version_path) as fp:\n version_module: Dict[str, str] = {}\n exec(fp.read(), version_module)\n version = version_module[\"__version__\"]\n\n return version\n\n\ndef _load_description() -> str:\n readme_path = os.path.join(ROOT_PATH, \"README.md\")\n with open(readme_path) as fp:\n return fp.read()\n\n\nsetup(\n name=PKG_NAME,\n version=_load_version(),\n url=\"https://github.com/SeldonIO/MLServer.git\",\n author=\"Seldon Technologies Ltd.\",\n author_email=\"[email protected]\",\n description=\"ML server\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n \"grpcio\",\n \"protobuf\",\n # We pin version of fastapi\n # check https://github.com/SeldonIO/MLServer/issues/340\n \"fastapi==0.68.2\",\n 
\"uvicorn\",\n \"click\",\n \"numpy\",\n \"pandas\",\n ],\n extras_require={\"all\": [\"orjson\"]},\n entry_points={\"console_scripts\": [\"mlserver=mlserver.cli:main\"]},\n long_description=_load_description(),\n long_description_content_type=\"text/markdown\",\n license=\"Apache 2.0\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\n\nfrom typing import Dict\nfrom setuptools import setup, find_packages\n\nROOT_PATH = os.path.dirname(__file__)\nPKG_NAME = \"mlserver\"\nPKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)\n\n\ndef _load_version() -> str:\n version = \"\"\n version_path = os.path.join(PKG_PATH, \"version.py\")\n with open(version_path) as fp:\n version_module: Dict[str, str] = {}\n exec(fp.read(), version_module)\n version = version_module[\"__version__\"]\n\n return version\n\n\ndef _load_description() -> str:\n readme_path = os.path.join(ROOT_PATH, \"README.md\")\n with open(readme_path) as fp:\n return fp.read()\n\n\nsetup(\n name=PKG_NAME,\n version=_load_version(),\n url=\"https://github.com/SeldonIO/MLServer.git\",\n author=\"Seldon Technologies Ltd.\",\n author_email=\"[email protected]\",\n description=\"ML server\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n \"click\",\n # We pin version of fastapi\n # check https://github.com/SeldonIO/MLServer/issues/340\n \"fastapi==0.68.2\",\n \"grpcio\",\n \"importlib-metadata;python_version<'3.8'\",\n \"numpy\",\n \"pandas\",\n \"protobuf\",\n \"uvicorn\",\n ],\n extras_require={\"all\": [\"orjson\"]},\n entry_points={\"console_scripts\": [\"mlserver=mlserver.cli:main\"]},\n long_description=_load_description(),\n long_description_content_type=\"text/markdown\",\n license=\"Apache 2.0\",\n)\n", "path": "setup.py"}]} | 1,226 | 195 |
gh_patches_debug_43501 | rasdani/github-patches | git_diff | goauthentik__authentik-7264 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Authenticator SMS Challenge response doesn't have _errors attribute
**Describe the bug**
It seems that something has changed with regard to the Authenticator SMS Challenge Response.
**To Reproduce**
Steps to reproduce the behavior:
1. Have a perfectly working SMS Authenticator stage prior to upgrading to 2023.8.3
2. Upgrade
3. Don't see any immediate errors.
4. See an error when a user tries to register using said flow.
**Expected behavior**
I would have expected it to not error out.
**Logs**
<details>
<summary>Stacktrace from authentik</summary>
```
Traceback (most recent call last):
File "/authentik/flows/views/executor.py", line 298, in get
stage_response = self.current_stage_view.get(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/authentik/stages/authenticator_sms/stage.py", line 117, in get
response._errors.setdefault("phone_number", [])
^^^^^^^^^^^^^^^^
builtins.AttributeError: 'AuthenticatorSMSChallengeResponse' object has no attribute '_errors'
```
</details>
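
The traceback bottoms out in Django REST Framework serializer internals: `_errors` is only assigned as a side effect of `is_valid()`, so a freshly constructed response object has no such attribute. A minimal sketch of that behavior (assuming a configured Django/DRF environment; the serializer name is hypothetical):

```python
from rest_framework import serializers

class PhoneSketchSerializer(serializers.Serializer):
    phone_number = serializers.CharField(required=False)

s = PhoneSketchSerializer(data={})
# s._errors          # AttributeError: not set until validation has run
s.is_valid()         # populates _errors / .errors
print(s.errors)      # {} here, and safe to read only after is_valid()
```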
**Version and Deployment (please complete the following information):**
- authentik version: 2023.8.3
- Deployment: Ansible with Docker
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/stages/authenticator_sms/stage.py`
Content:
```
1 """SMS Setup stage"""
2 from typing import Optional
3
4 from django.db.models import Q
5 from django.http import HttpRequest, HttpResponse
6 from django.http.request import QueryDict
7 from django.utils.translation import gettext_lazy as _
8 from rest_framework.exceptions import ValidationError
9 from rest_framework.fields import BooleanField, CharField, IntegerField
10
11 from authentik.flows.challenge import (
12 Challenge,
13 ChallengeResponse,
14 ChallengeTypes,
15 ErrorDetailSerializer,
16 WithUserInfoChallenge,
17 )
18 from authentik.flows.stage import ChallengeStageView
19 from authentik.stages.authenticator_sms.models import (
20 AuthenticatorSMSStage,
21 SMSDevice,
22 hash_phone_number,
23 )
24 from authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT
25
26 SESSION_KEY_SMS_DEVICE = "authentik/stages/authenticator_sms/sms_device"
27
28
29 class AuthenticatorSMSChallenge(WithUserInfoChallenge):
30 """SMS Setup challenge"""
31
32 # Set to true if no previous prompt stage set the phone number
33 # this stage will also check prompt_data.phone
34 phone_number_required = BooleanField(default=True)
35 component = CharField(default="ak-stage-authenticator-sms")
36
37
38 class AuthenticatorSMSChallengeResponse(ChallengeResponse):
39 """SMS Challenge response, device is set by get_response_instance"""
40
41 device: SMSDevice
42
43 code = IntegerField(required=False)
44 phone_number = CharField(required=False)
45
46 component = CharField(default="ak-stage-authenticator-sms")
47
48 def validate(self, attrs: dict) -> dict:
49 """Check"""
50 if "code" not in attrs:
51 self.device.phone_number = attrs["phone_number"]
52 self.stage.validate_and_send(attrs["phone_number"])
53 return super().validate(attrs)
54 if not self.device.verify_token(str(attrs["code"])):
55 raise ValidationError(_("Code does not match"))
56 self.device.confirmed = True
57 return super().validate(attrs)
58
59
60 class AuthenticatorSMSStageView(ChallengeStageView):
61 """OTP sms Setup stage"""
62
63 response_class = AuthenticatorSMSChallengeResponse
64
65 def validate_and_send(self, phone_number: str):
66 """Validate phone number and send message"""
67 stage: AuthenticatorSMSStage = self.executor.current_stage
68 hashed_number = hash_phone_number(phone_number)
69 query = Q(phone_number=hashed_number) | Q(phone_number=phone_number)
70 if SMSDevice.objects.filter(query, stage=stage.pk).exists():
71 raise ValidationError(_("Invalid phone number"))
72 # No code yet, but we have a phone number, so send a verification message
73 device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]
74 stage.send(device.token, device)
75
76 def _has_phone_number(self) -> Optional[str]:
77 context = self.executor.plan.context
78 if "phone" in context.get(PLAN_CONTEXT_PROMPT, {}):
79 self.logger.debug("got phone number from plan context")
80 return context.get(PLAN_CONTEXT_PROMPT, {}).get("phone")
81 if SESSION_KEY_SMS_DEVICE in self.request.session:
82 self.logger.debug("got phone number from device in session")
83 device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]
84 if device.phone_number == "":
85 return None
86 return device.phone_number
87 return None
88
89 def get_challenge(self, *args, **kwargs) -> Challenge:
90 return AuthenticatorSMSChallenge(
91 data={
92 "type": ChallengeTypes.NATIVE.value,
93 "phone_number_required": self._has_phone_number() is None,
94 }
95 )
96
97 def get_response_instance(self, data: QueryDict) -> ChallengeResponse:
98 response = super().get_response_instance(data)
99 response.device = self.request.session[SESSION_KEY_SMS_DEVICE]
100 return response
101
102 def get(self, request: HttpRequest, *args, **kwargs) -> HttpResponse:
103 user = self.get_pending_user()
104
105 stage: AuthenticatorSMSStage = self.executor.current_stage
106
107 if SESSION_KEY_SMS_DEVICE not in self.request.session:
108 device = SMSDevice(user=user, confirmed=False, stage=stage, name="SMS Device")
109 device.generate_token(commit=False)
110 self.request.session[SESSION_KEY_SMS_DEVICE] = device
111 if phone_number := self._has_phone_number():
112 device.phone_number = phone_number
113 try:
114 self.validate_and_send(phone_number)
115 except ValidationError as exc:
116 response = AuthenticatorSMSChallengeResponse()
117 response._errors.setdefault("phone_number", [])
118 response._errors["phone_number"].append(ErrorDetailSerializer(exc.detail))
119 return self.challenge_invalid(response)
120 return super().get(request, *args, **kwargs)
121
122 def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:
123 """SMS Token is validated by challenge"""
124 device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]
125 if not device.confirmed:
126 return self.challenge_invalid(response)
127 stage: AuthenticatorSMSStage = self.executor.current_stage
128 if stage.verify_only:
129 self.logger.debug("Hashing number on device")
130 device.set_hashed_number()
131 device.save()
132 del self.request.session[SESSION_KEY_SMS_DEVICE]
133 return self.executor.stage_ok()
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/stages/authenticator_sms/stage.py b/authentik/stages/authenticator_sms/stage.py
--- a/authentik/stages/authenticator_sms/stage.py
+++ b/authentik/stages/authenticator_sms/stage.py
@@ -12,7 +12,6 @@
Challenge,
ChallengeResponse,
ChallengeTypes,
- ErrorDetailSerializer,
WithUserInfoChallenge,
)
from authentik.flows.stage import ChallengeStageView
@@ -24,6 +23,7 @@
from authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT
SESSION_KEY_SMS_DEVICE = "authentik/stages/authenticator_sms/sms_device"
+PLAN_CONTEXT_PHONE = "phone"
class AuthenticatorSMSChallenge(WithUserInfoChallenge):
@@ -48,6 +48,8 @@
def validate(self, attrs: dict) -> dict:
"""Check"""
if "code" not in attrs:
+ if "phone_number" not in attrs:
+ raise ValidationError("phone_number required")
self.device.phone_number = attrs["phone_number"]
self.stage.validate_and_send(attrs["phone_number"])
return super().validate(attrs)
@@ -67,6 +69,7 @@
stage: AuthenticatorSMSStage = self.executor.current_stage
hashed_number = hash_phone_number(phone_number)
query = Q(phone_number=hashed_number) | Q(phone_number=phone_number)
+ print(SMSDevice.objects.filter(query, stage=stage.pk))
if SMSDevice.objects.filter(query, stage=stage.pk).exists():
raise ValidationError(_("Invalid phone number"))
# No code yet, but we have a phone number, so send a verification message
@@ -75,9 +78,9 @@
def _has_phone_number(self) -> Optional[str]:
context = self.executor.plan.context
- if "phone" in context.get(PLAN_CONTEXT_PROMPT, {}):
+ if PLAN_CONTEXT_PHONE in context.get(PLAN_CONTEXT_PROMPT, {}):
self.logger.debug("got phone number from plan context")
- return context.get(PLAN_CONTEXT_PROMPT, {}).get("phone")
+ return context.get(PLAN_CONTEXT_PROMPT, {}).get(PLAN_CONTEXT_PHONE)
if SESSION_KEY_SMS_DEVICE in self.request.session:
self.logger.debug("got phone number from device in session")
device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]
@@ -113,10 +116,17 @@
try:
self.validate_and_send(phone_number)
except ValidationError as exc:
- response = AuthenticatorSMSChallengeResponse()
- response._errors.setdefault("phone_number", [])
- response._errors["phone_number"].append(ErrorDetailSerializer(exc.detail))
- return self.challenge_invalid(response)
+ # We had a phone number given already (at this point only possible from flow
+ # context), but an error occurred while sending a number (most likely)
+ # due to a duplicate device, so delete the number we got given, reset the state
+ # (ish) and retry
+ device.phone_number = ""
+ self.executor.plan.context.get(PLAN_CONTEXT_PROMPT, {}).pop(
+ PLAN_CONTEXT_PHONE, None
+ )
+ self.request.session.pop(SESSION_KEY_SMS_DEVICE, None)
+ self.logger.warning("failed to send SMS message to pre-set number", exc=exc)
+ return self.get(request, *args, **kwargs)
return super().get(request, *args, **kwargs)
def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:
| {"golden_diff": "diff --git a/authentik/stages/authenticator_sms/stage.py b/authentik/stages/authenticator_sms/stage.py\n--- a/authentik/stages/authenticator_sms/stage.py\n+++ b/authentik/stages/authenticator_sms/stage.py\n@@ -12,7 +12,6 @@\n Challenge,\n ChallengeResponse,\n ChallengeTypes,\n- ErrorDetailSerializer,\n WithUserInfoChallenge,\n )\n from authentik.flows.stage import ChallengeStageView\n@@ -24,6 +23,7 @@\n from authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT\n \n SESSION_KEY_SMS_DEVICE = \"authentik/stages/authenticator_sms/sms_device\"\n+PLAN_CONTEXT_PHONE = \"phone\"\n \n \n class AuthenticatorSMSChallenge(WithUserInfoChallenge):\n@@ -48,6 +48,8 @@\n def validate(self, attrs: dict) -> dict:\n \"\"\"Check\"\"\"\n if \"code\" not in attrs:\n+ if \"phone_number\" not in attrs:\n+ raise ValidationError(\"phone_number required\")\n self.device.phone_number = attrs[\"phone_number\"]\n self.stage.validate_and_send(attrs[\"phone_number\"])\n return super().validate(attrs)\n@@ -67,6 +69,7 @@\n stage: AuthenticatorSMSStage = self.executor.current_stage\n hashed_number = hash_phone_number(phone_number)\n query = Q(phone_number=hashed_number) | Q(phone_number=phone_number)\n+ print(SMSDevice.objects.filter(query, stage=stage.pk))\n if SMSDevice.objects.filter(query, stage=stage.pk).exists():\n raise ValidationError(_(\"Invalid phone number\"))\n # No code yet, but we have a phone number, so send a verification message\n@@ -75,9 +78,9 @@\n \n def _has_phone_number(self) -> Optional[str]:\n context = self.executor.plan.context\n- if \"phone\" in context.get(PLAN_CONTEXT_PROMPT, {}):\n+ if PLAN_CONTEXT_PHONE in context.get(PLAN_CONTEXT_PROMPT, {}):\n self.logger.debug(\"got phone number from plan context\")\n- return context.get(PLAN_CONTEXT_PROMPT, {}).get(\"phone\")\n+ return context.get(PLAN_CONTEXT_PROMPT, {}).get(PLAN_CONTEXT_PHONE)\n if SESSION_KEY_SMS_DEVICE in self.request.session:\n self.logger.debug(\"got phone number from device in session\")\n device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]\n@@ -113,10 +116,17 @@\n try:\n self.validate_and_send(phone_number)\n except ValidationError as exc:\n- response = AuthenticatorSMSChallengeResponse()\n- response._errors.setdefault(\"phone_number\", [])\n- response._errors[\"phone_number\"].append(ErrorDetailSerializer(exc.detail))\n- return self.challenge_invalid(response)\n+ # We had a phone number given already (at this point only possible from flow\n+ # context), but an error occurred while sending a number (most likely)\n+ # due to a duplicate device, so delete the number we got given, reset the state\n+ # (ish) and retry\n+ device.phone_number = \"\"\n+ self.executor.plan.context.get(PLAN_CONTEXT_PROMPT, {}).pop(\n+ PLAN_CONTEXT_PHONE, None\n+ )\n+ self.request.session.pop(SESSION_KEY_SMS_DEVICE, None)\n+ self.logger.warning(\"failed to send SMS message to pre-set number\", exc=exc)\n+ return self.get(request, *args, **kwargs)\n return super().get(request, *args, **kwargs)\n \n def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:\n", "issue": "Bug: Authenticator SMS Challenge response doesn't have _errors attribute\n**Describe the bug**\r\nIt seems that something has changed in the regards to Authenticator SMS Challenge Response.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Have an perfectly working SMS Authenicator stage prior to upgrading to 2023.8.3\r\n2. Upgrade\r\n3. Don't see any immediate errors.\r\n4. 
See error when user tries to register using said flow.\r\n\r\n**Expected behavior**\r\nI would have expected it to not error out.\r\n\r\n**Logs**\r\n<details>\r\n <summary>Stacktrace from authentik</summary>\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/authentik/flows/views/executor.py\", line 298, in get\r\n stage_response = self.current_stage_view.get(request, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/authentik/stages/authenticator_sms/stage.py\", line 117, in get\r\n response._errors.setdefault(\"phone_number\", [])\r\n ^^^^^^^^^^^^^^^^\r\nbuiltins.AttributeError: 'AuthenticatorSMSChallengeResponse' object has no attribute '_errors'\r\n```\r\n</details>\r\n\r\n\r\n**Version and Deployment (please complete the following information):**\r\n- authentik version: 2023.8.3\r\n- Deployment: Ansible with Docker\r\n \n", "before_files": [{"content": "\"\"\"SMS Setup stage\"\"\"\nfrom typing import Optional\n\nfrom django.db.models import Q\nfrom django.http import HttpRequest, HttpResponse\nfrom django.http.request import QueryDict\nfrom django.utils.translation import gettext_lazy as _\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import BooleanField, CharField, IntegerField\n\nfrom authentik.flows.challenge import (\n Challenge,\n ChallengeResponse,\n ChallengeTypes,\n ErrorDetailSerializer,\n WithUserInfoChallenge,\n)\nfrom authentik.flows.stage import ChallengeStageView\nfrom authentik.stages.authenticator_sms.models import (\n AuthenticatorSMSStage,\n SMSDevice,\n hash_phone_number,\n)\nfrom authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT\n\nSESSION_KEY_SMS_DEVICE = \"authentik/stages/authenticator_sms/sms_device\"\n\n\nclass AuthenticatorSMSChallenge(WithUserInfoChallenge):\n \"\"\"SMS Setup challenge\"\"\"\n\n # Set to true if no previous prompt stage set the phone number\n # this stage will also check prompt_data.phone\n phone_number_required = BooleanField(default=True)\n component = CharField(default=\"ak-stage-authenticator-sms\")\n\n\nclass AuthenticatorSMSChallengeResponse(ChallengeResponse):\n \"\"\"SMS Challenge response, device is set by get_response_instance\"\"\"\n\n device: SMSDevice\n\n code = IntegerField(required=False)\n phone_number = CharField(required=False)\n\n component = CharField(default=\"ak-stage-authenticator-sms\")\n\n def validate(self, attrs: dict) -> dict:\n \"\"\"Check\"\"\"\n if \"code\" not in attrs:\n self.device.phone_number = attrs[\"phone_number\"]\n self.stage.validate_and_send(attrs[\"phone_number\"])\n return super().validate(attrs)\n if not self.device.verify_token(str(attrs[\"code\"])):\n raise ValidationError(_(\"Code does not match\"))\n self.device.confirmed = True\n return super().validate(attrs)\n\n\nclass AuthenticatorSMSStageView(ChallengeStageView):\n \"\"\"OTP sms Setup stage\"\"\"\n\n response_class = AuthenticatorSMSChallengeResponse\n\n def validate_and_send(self, phone_number: str):\n \"\"\"Validate phone number and send message\"\"\"\n stage: AuthenticatorSMSStage = self.executor.current_stage\n hashed_number = hash_phone_number(phone_number)\n query = Q(phone_number=hashed_number) | Q(phone_number=phone_number)\n if SMSDevice.objects.filter(query, stage=stage.pk).exists():\n raise ValidationError(_(\"Invalid phone number\"))\n # No code yet, but we have a phone number, so send a verification message\n device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]\n stage.send(device.token, device)\n\n def 
_has_phone_number(self) -> Optional[str]:\n context = self.executor.plan.context\n if \"phone\" in context.get(PLAN_CONTEXT_PROMPT, {}):\n self.logger.debug(\"got phone number from plan context\")\n return context.get(PLAN_CONTEXT_PROMPT, {}).get(\"phone\")\n if SESSION_KEY_SMS_DEVICE in self.request.session:\n self.logger.debug(\"got phone number from device in session\")\n device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]\n if device.phone_number == \"\":\n return None\n return device.phone_number\n return None\n\n def get_challenge(self, *args, **kwargs) -> Challenge:\n return AuthenticatorSMSChallenge(\n data={\n \"type\": ChallengeTypes.NATIVE.value,\n \"phone_number_required\": self._has_phone_number() is None,\n }\n )\n\n def get_response_instance(self, data: QueryDict) -> ChallengeResponse:\n response = super().get_response_instance(data)\n response.device = self.request.session[SESSION_KEY_SMS_DEVICE]\n return response\n\n def get(self, request: HttpRequest, *args, **kwargs) -> HttpResponse:\n user = self.get_pending_user()\n\n stage: AuthenticatorSMSStage = self.executor.current_stage\n\n if SESSION_KEY_SMS_DEVICE not in self.request.session:\n device = SMSDevice(user=user, confirmed=False, stage=stage, name=\"SMS Device\")\n device.generate_token(commit=False)\n self.request.session[SESSION_KEY_SMS_DEVICE] = device\n if phone_number := self._has_phone_number():\n device.phone_number = phone_number\n try:\n self.validate_and_send(phone_number)\n except ValidationError as exc:\n response = AuthenticatorSMSChallengeResponse()\n response._errors.setdefault(\"phone_number\", [])\n response._errors[\"phone_number\"].append(ErrorDetailSerializer(exc.detail))\n return self.challenge_invalid(response)\n return super().get(request, *args, **kwargs)\n\n def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:\n \"\"\"SMS Token is validated by challenge\"\"\"\n device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]\n if not device.confirmed:\n return self.challenge_invalid(response)\n stage: AuthenticatorSMSStage = self.executor.current_stage\n if stage.verify_only:\n self.logger.debug(\"Hashing number on device\")\n device.set_hashed_number()\n device.save()\n del self.request.session[SESSION_KEY_SMS_DEVICE]\n return self.executor.stage_ok()\n", "path": "authentik/stages/authenticator_sms/stage.py"}], "after_files": [{"content": "\"\"\"SMS Setup stage\"\"\"\nfrom typing import Optional\n\nfrom django.db.models import Q\nfrom django.http import HttpRequest, HttpResponse\nfrom django.http.request import QueryDict\nfrom django.utils.translation import gettext_lazy as _\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import BooleanField, CharField, IntegerField\n\nfrom authentik.flows.challenge import (\n Challenge,\n ChallengeResponse,\n ChallengeTypes,\n WithUserInfoChallenge,\n)\nfrom authentik.flows.stage import ChallengeStageView\nfrom authentik.stages.authenticator_sms.models import (\n AuthenticatorSMSStage,\n SMSDevice,\n hash_phone_number,\n)\nfrom authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT\n\nSESSION_KEY_SMS_DEVICE = \"authentik/stages/authenticator_sms/sms_device\"\nPLAN_CONTEXT_PHONE = \"phone\"\n\n\nclass AuthenticatorSMSChallenge(WithUserInfoChallenge):\n \"\"\"SMS Setup challenge\"\"\"\n\n # Set to true if no previous prompt stage set the phone number\n # this stage will also check prompt_data.phone\n phone_number_required = BooleanField(default=True)\n component = 
CharField(default=\"ak-stage-authenticator-sms\")\n\n\nclass AuthenticatorSMSChallengeResponse(ChallengeResponse):\n \"\"\"SMS Challenge response, device is set by get_response_instance\"\"\"\n\n device: SMSDevice\n\n code = IntegerField(required=False)\n phone_number = CharField(required=False)\n\n component = CharField(default=\"ak-stage-authenticator-sms\")\n\n def validate(self, attrs: dict) -> dict:\n \"\"\"Check\"\"\"\n if \"code\" not in attrs:\n if \"phone_number\" not in attrs:\n raise ValidationError(\"phone_number required\")\n self.device.phone_number = attrs[\"phone_number\"]\n self.stage.validate_and_send(attrs[\"phone_number\"])\n return super().validate(attrs)\n if not self.device.verify_token(str(attrs[\"code\"])):\n raise ValidationError(_(\"Code does not match\"))\n self.device.confirmed = True\n return super().validate(attrs)\n\n\nclass AuthenticatorSMSStageView(ChallengeStageView):\n \"\"\"OTP sms Setup stage\"\"\"\n\n response_class = AuthenticatorSMSChallengeResponse\n\n def validate_and_send(self, phone_number: str):\n \"\"\"Validate phone number and send message\"\"\"\n stage: AuthenticatorSMSStage = self.executor.current_stage\n hashed_number = hash_phone_number(phone_number)\n query = Q(phone_number=hashed_number) | Q(phone_number=phone_number)\n print(SMSDevice.objects.filter(query, stage=stage.pk))\n if SMSDevice.objects.filter(query, stage=stage.pk).exists():\n raise ValidationError(_(\"Invalid phone number\"))\n # No code yet, but we have a phone number, so send a verification message\n device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]\n stage.send(device.token, device)\n\n def _has_phone_number(self) -> Optional[str]:\n context = self.executor.plan.context\n if PLAN_CONTEXT_PHONE in context.get(PLAN_CONTEXT_PROMPT, {}):\n self.logger.debug(\"got phone number from plan context\")\n return context.get(PLAN_CONTEXT_PROMPT, {}).get(PLAN_CONTEXT_PHONE)\n if SESSION_KEY_SMS_DEVICE in self.request.session:\n self.logger.debug(\"got phone number from device in session\")\n device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]\n if device.phone_number == \"\":\n return None\n return device.phone_number\n return None\n\n def get_challenge(self, *args, **kwargs) -> Challenge:\n return AuthenticatorSMSChallenge(\n data={\n \"type\": ChallengeTypes.NATIVE.value,\n \"phone_number_required\": self._has_phone_number() is None,\n }\n )\n\n def get_response_instance(self, data: QueryDict) -> ChallengeResponse:\n response = super().get_response_instance(data)\n response.device = self.request.session[SESSION_KEY_SMS_DEVICE]\n return response\n\n def get(self, request: HttpRequest, *args, **kwargs) -> HttpResponse:\n user = self.get_pending_user()\n\n stage: AuthenticatorSMSStage = self.executor.current_stage\n\n if SESSION_KEY_SMS_DEVICE not in self.request.session:\n device = SMSDevice(user=user, confirmed=False, stage=stage, name=\"SMS Device\")\n device.generate_token(commit=False)\n self.request.session[SESSION_KEY_SMS_DEVICE] = device\n if phone_number := self._has_phone_number():\n device.phone_number = phone_number\n try:\n self.validate_and_send(phone_number)\n except ValidationError as exc:\n # We had a phone number given already (at this point only possible from flow\n # context), but an error occurred while sending a number (most likely)\n # due to a duplicate device, so delete the number we got given, reset the state\n # (ish) and retry\n device.phone_number = \"\"\n self.executor.plan.context.get(PLAN_CONTEXT_PROMPT, {}).pop(\n 
PLAN_CONTEXT_PHONE, None\n )\n self.request.session.pop(SESSION_KEY_SMS_DEVICE, None)\n self.logger.warning(\"failed to send SMS message to pre-set number\", exc=exc)\n return self.get(request, *args, **kwargs)\n return super().get(request, *args, **kwargs)\n\n def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:\n \"\"\"SMS Token is validated by challenge\"\"\"\n device: SMSDevice = self.request.session[SESSION_KEY_SMS_DEVICE]\n if not device.confirmed:\n return self.challenge_invalid(response)\n stage: AuthenticatorSMSStage = self.executor.current_stage\n if stage.verify_only:\n self.logger.debug(\"Hashing number on device\")\n device.set_hashed_number()\n device.save()\n del self.request.session[SESSION_KEY_SMS_DEVICE]\n return self.executor.stage_ok()\n", "path": "authentik/stages/authenticator_sms/stage.py"}]} | 1,935 | 768 |
gh_patches_debug_1887 | rasdani/github-patches | git_diff | spotify__luigi-2679 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Is there a reason python-dateutil is pinned to v2.7.5?
In this [commit](https://github.com/spotify/luigi/commit/ca0aa9afedecda539339e51974ef38cecf180d4b), I can see that python-dateutil has been pinned to version 2.7.5 - is this strictly necessary? Version 2.8.0 was released a couple of weeks ago and it's causing `ContextualVersionConflict` errors for us.
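
A hedged sketch of the conventional alternative to an exact pin: a compatible range that admits 2.8.x while still excluding a hypothetical breaking 3.x (this matches the patch below):

```python
install_requires = [
    "tornado>=4.0,<5",
    "python-daemon<2.2.0",
    "python-dateutil>=2.7.5,<3",  # was: "python-dateutil==2.7.5"
]
```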
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) 2012 Spotify AB
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may not
4 # use this file except in compliance with the License. You may obtain a copy of
5 # the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations under
13 # the License.
14
15 import os
16 import sys
17
18 from setuptools import setup
19
20
21 def get_static_files(path):
22 return [os.path.join(dirpath.replace("luigi/", ""), ext)
23 for (dirpath, dirnames, filenames) in os.walk(path)
24 for ext in ["*.html", "*.js", "*.css", "*.png",
25 "*.eot", "*.svg", "*.ttf", "*.woff", "*.woff2"]]
26
27
28 luigi_package_data = sum(map(get_static_files, ["luigi/static", "luigi/templates"]), [])
29
30 readme_note = """\
31 .. note::
32
33 For the latest source, discussion, etc, please visit the
34 `GitHub repository <https://github.com/spotify/luigi>`_\n\n
35 """
36
37 with open('README.rst') as fobj:
38 long_description = readme_note + fobj.read()
39
40 install_requires = [
41 'tornado>=4.0,<5',
42 # https://pagure.io/python-daemon/issue/18
43 'python-daemon<2.2.0',
44 'python-dateutil==2.7.5',
45 ]
46
47 # Note: To support older versions of setuptools, we're explicitly not
48 # using conditional syntax (i.e. 'enum34>1.1.0;python_version<"3.4"').
49 # This syntax is a problem for setuptools as recent as `20.1.1`,
50 # published Feb 16, 2016.
51 if sys.version_info[:2] < (3, 4):
52 install_requires.append('enum34>1.1.0')
53
54 if os.environ.get('READTHEDOCS', None) == 'True':
55 # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla
56 install_requires.append('sqlalchemy')
57 # readthedocs don't like python-daemon, see #1342
58 install_requires.remove('python-daemon<2.2.0')
59 install_requires.append('sphinx>=1.4.4') # Value mirrored in doc/conf.py
60
61 setup(
62 name='luigi',
63 version='2.8.3',
64 description='Workflow mgmgt + task scheduling + dependency resolution',
65 long_description=long_description,
66 author='The Luigi Authors',
67 url='https://github.com/spotify/luigi',
68 license='Apache License 2.0',
69 packages=[
70 'luigi',
71 'luigi.configuration',
72 'luigi.contrib',
73 'luigi.contrib.hdfs',
74 'luigi.tools'
75 ],
76 package_data={
77 'luigi': luigi_package_data
78 },
79 entry_points={
80 'console_scripts': [
81 'luigi = luigi.cmdline:luigi_run',
82 'luigid = luigi.cmdline:luigid',
83 'luigi-grep = luigi.tools.luigi_grep:main',
84 'luigi-deps = luigi.tools.deps:main',
85 'luigi-deps-tree = luigi.tools.deps_tree:main'
86 ]
87 },
88 install_requires=install_requires,
89 extras_require={
90 'toml': ['toml<2.0.0'],
91 },
92 classifiers=[
93 'Development Status :: 5 - Production/Stable',
94 'Environment :: Console',
95 'Environment :: Web Environment',
96 'Intended Audience :: Developers',
97 'Intended Audience :: System Administrators',
98 'License :: OSI Approved :: Apache Software License',
99 'Programming Language :: Python :: 2.7',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Programming Language :: Python :: 3.5',
103 'Programming Language :: Python :: 3.6',
104 'Programming Language :: Python :: 3.7',
105 'Topic :: System :: Monitoring',
106 ],
107 )
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,7 +41,7 @@
'tornado>=4.0,<5',
# https://pagure.io/python-daemon/issue/18
'python-daemon<2.2.0',
- 'python-dateutil==2.7.5',
+ 'python-dateutil>=2.7.5,<3',
]
# Note: To support older versions of setuptools, we're explicitly not
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,7 +41,7 @@\n 'tornado>=4.0,<5',\n # https://pagure.io/python-daemon/issue/18\n 'python-daemon<2.2.0',\n- 'python-dateutil==2.7.5',\n+ 'python-dateutil>=2.7.5,<3',\n ]\n \n # Note: To support older versions of setuptools, we're explicitly not\n", "issue": "Is there a reason python-dateutil is pinned to v2.7.5?\nIn this [commit](https://github.com/spotify/luigi/commit/ca0aa9afedecda539339e51974ef38cecf180d4b), I can see that python-dateutil has been pinned to version 2.7.5 - is this strictly necessary? Version 2.8.0 was released a couple of weeks ago and It's causing `ContextualVersionConflict` errors for us.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext)\n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\",\n \"*.eot\", \"*.svg\", \"*.ttf\", \"*.woff\", \"*.woff2\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\ninstall_requires = [\n 'tornado>=4.0,<5',\n # https://pagure.io/python-daemon/issue/18\n 'python-daemon<2.2.0',\n 'python-dateutil==2.7.5',\n]\n\n# Note: To support older versions of setuptools, we're explicitly not\n# using conditional syntax (i.e. 
'enum34>1.1.0;python_version<\"3.4\"').\n# This syntax is a problem for setuptools as recent as `20.1.1`,\n# published Feb 16, 2016.\nif sys.version_info[:2] < (3, 4):\n install_requires.append('enum34>1.1.0')\n\nif os.environ.get('READTHEDOCS', None) == 'True':\n # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla\n install_requires.append('sqlalchemy')\n # readthedocs don't like python-daemon, see #1342\n install_requires.remove('python-daemon<2.2.0')\n install_requires.append('sphinx>=1.4.4') # Value mirrored in doc/conf.py\n\nsetup(\n name='luigi',\n version='2.8.3',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='The Luigi Authors',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.configuration',\n 'luigi.contrib',\n 'luigi.contrib.hdfs',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n entry_points={\n 'console_scripts': [\n 'luigi = luigi.cmdline:luigi_run',\n 'luigid = luigi.cmdline:luigid',\n 'luigi-grep = luigi.tools.luigi_grep:main',\n 'luigi-deps = luigi.tools.deps:main',\n 'luigi-deps-tree = luigi.tools.deps_tree:main'\n ]\n },\n install_requires=install_requires,\n extras_require={\n 'toml': ['toml<2.0.0'],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: System :: Monitoring',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext)\n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\",\n \"*.eot\", \"*.svg\", \"*.ttf\", \"*.woff\", \"*.woff2\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\ninstall_requires = [\n 'tornado>=4.0,<5',\n # https://pagure.io/python-daemon/issue/18\n 'python-daemon<2.2.0',\n 'python-dateutil>=2.7.5,<3',\n]\n\n# Note: To support older versions of setuptools, we're explicitly not\n# using conditional syntax (i.e. 
'enum34>1.1.0;python_version<\"3.4\"').\n# This syntax is a problem for setuptools as recent as `20.1.1`,\n# published Feb 16, 2016.\nif sys.version_info[:2] < (3, 4):\n install_requires.append('enum34>1.1.0')\n\nif os.environ.get('READTHEDOCS', None) == 'True':\n # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla\n install_requires.append('sqlalchemy')\n # readthedocs don't like python-daemon, see #1342\n install_requires.remove('python-daemon<2.2.0')\n install_requires.append('sphinx>=1.4.4') # Value mirrored in doc/conf.py\n\nsetup(\n name='luigi',\n version='2.8.3',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='The Luigi Authors',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.configuration',\n 'luigi.contrib',\n 'luigi.contrib.hdfs',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n entry_points={\n 'console_scripts': [\n 'luigi = luigi.cmdline:luigi_run',\n 'luigid = luigi.cmdline:luigid',\n 'luigi-grep = luigi.tools.luigi_grep:main',\n 'luigi-deps = luigi.tools.deps:main',\n 'luigi-deps-tree = luigi.tools.deps_tree:main'\n ]\n },\n install_requires=install_requires,\n extras_require={\n 'toml': ['toml<2.0.0'],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: System :: Monitoring',\n ],\n)\n", "path": "setup.py"}]} | 1,555 | 116 |
gh_patches_debug_19906 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4246 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
v4 --replacements vs v5 --modify-headers
I'm trying to replace the `User-Agent` request header if it contains a certain string.
This works with "mitmproxy-4.0.4-linux":
```
./mitmproxy --replacements ":~hq User-Agent:Mozilla(.+):CUSTOMAGENT"
```
With "mitmproxy-5.2-linux", this at least replaces the `User-Agent`, but is missing my "certain string condition":
```
./mitmproxy --modify-headers "|~hq .+|User-Agent|CUSTOMAGENT"
```
How do I add my `Mozilla` condition in v5?
None of these work:
```
./mitmproxy --modify-headers "|~hq ^(.*?)Mozilla(.*?)$|User-Agent|CUSTOMAGENT"
./mitmproxy --modify-headers "/~hq .*?Mozilla.*?/User-Agent/CUSTOMAGENT"
./mitmproxy --modify-headers "|~hq Mozilla|User-Agent|CUSTOMAGENT"
./mitmproxy --modify-headers "|~hq User-Agent: Mozilla|User-Agent|CUSTOMAGENT"
./mitmproxy --modify-headers "|~hq \"^(.*?)Mozilla(.*?)$\"|User-Agent|CUSTOMAGENT"
```
I've been trying for hours, and I feel like I've tried every variation under the sun. There's a very small chance it's a bug, but most likely I'm just doing it wrong. If it matters, this system is Ubuntu 16.04.
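
The patch below suggests this is an ordering bug rather than bad syntax: each spec's filter was re-evaluated against the already-modified flow, so once the first pass popped `User-Agent`, a `~hq Mozilla` filter could no longer match and the replacement header was never re-added (a catch-all `~hq .+` still matched via the remaining headers, which is why it appeared to work). A hedged sketch of the two-pass fix, with the file-read error handling omitted for brevity:

```python
def run(self, flow, hdrs):
    # Evaluate every filter against the unmodified flow first ...
    matches = [spec.matches(flow) for spec in self.replacements]

    # ... then strip the targeted headers ...
    for matched, spec in zip(matches, self.replacements):
        if matched:
            hdrs.pop(spec.subject, None)

    # ... and re-add replacements based on the pre-computed matches.
    for matched, spec in zip(matches, self.replacements):
        if matched:
            replacement = spec.read_replacement()
            if replacement:
                hdrs.add(spec.subject, replacement)
```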
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/addons/modifyheaders.py`
Content:
```
1 import re
2 import typing
3 from pathlib import Path
4
5 from mitmproxy import ctx, exceptions, flowfilter, http
6 from mitmproxy.net.http import Headers
7 from mitmproxy.utils import strutils
8 from mitmproxy.utils.spec import parse_spec
9
10
11 class ModifySpec(typing.NamedTuple):
12 matches: flowfilter.TFilter
13 subject: bytes
14 replacement_str: str
15
16 def read_replacement(self) -> bytes:
17 """
18 Process the replacement str. This usually just involves converting it to bytes.
19 However, if it starts with `@`, we interpret the rest as a file path to read from.
20
21 Raises:
22 - IOError if the file cannot be read.
23 """
24 if self.replacement_str.startswith("@"):
25 return Path(self.replacement_str[1:]).expanduser().read_bytes()
26 else:
27 # We could cache this at some point, but unlikely to be a problem.
28 return strutils.escaped_str_to_bytes(self.replacement_str)
29
30
31 def parse_modify_spec(option: str, subject_is_regex: bool) -> ModifySpec:
32 flow_filter, subject_str, replacement = parse_spec(option)
33
34 subject = strutils.escaped_str_to_bytes(subject_str)
35 if subject_is_regex:
36 try:
37 re.compile(subject)
38 except re.error as e:
39 raise ValueError(f"Invalid regular expression {subject!r} ({e})")
40
41 spec = ModifySpec(flow_filter, subject, replacement)
42
43 try:
44 spec.read_replacement()
45 except OSError as e:
46 raise ValueError(f"Invalid file path: {replacement[1:]} ({e})")
47
48 return spec
49
50
51 class ModifyHeaders:
52 def __init__(self):
53 self.replacements: typing.List[ModifySpec] = []
54
55 def load(self, loader):
56 loader.add_option(
57 "modify_headers", typing.Sequence[str], [],
58 """
59 Header modify pattern of the form "[/flow-filter]/header-name/[@]header-value", where the
60 separator can be any character. The @ allows to provide a file path that is used to read
61 the header value string. An empty header-value removes existing header-name headers.
62 """
63 )
64
65 def configure(self, updated):
66 if "modify_headers" in updated:
67 self.replacements = []
68 for option in ctx.options.modify_headers:
69 try:
70 spec = parse_modify_spec(option, False)
71 except ValueError as e:
72 raise exceptions.OptionsError(f"Cannot parse modify_headers option {option}: {e}") from e
73 self.replacements.append(spec)
74
75 def request(self, flow):
76 if flow.response or flow.error or flow.reply.state == "taken":
77 return
78 self.run(flow, flow.request.headers)
79
80 def response(self, flow):
81 if flow.error or flow.reply.state == "taken":
82 return
83 self.run(flow, flow.response.headers)
84
85 def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:
86 # unset all specified headers
87 for spec in self.replacements:
88 if spec.matches(flow):
89 hdrs.pop(spec.subject, None)
90
91 # set all specified headers if the replacement string is not empty
92 for spec in self.replacements:
93 if spec.matches(flow):
94 try:
95 replacement = spec.read_replacement()
96 except OSError as e:
97 ctx.log.warn(f"Could not read replacement file: {e}")
98 continue
99 else:
100 if replacement:
101 hdrs.add(spec.subject, replacement)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/addons/modifyheaders.py b/mitmproxy/addons/modifyheaders.py
--- a/mitmproxy/addons/modifyheaders.py
+++ b/mitmproxy/addons/modifyheaders.py
@@ -83,14 +83,21 @@
self.run(flow, flow.response.headers)
def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:
- # unset all specified headers
+ matches = []
+
+ # first check all the filters against the original, unmodified flow
for spec in self.replacements:
- if spec.matches(flow):
+ matches.append(spec.matches(flow))
+
+ # unset all specified headers
+ for i, spec in enumerate(self.replacements):
+ if matches[i]:
hdrs.pop(spec.subject, None)
# set all specified headers if the replacement string is not empty
- for spec in self.replacements:
- if spec.matches(flow):
+
+ for i, spec in enumerate(self.replacements):
+ if matches[i]:
try:
replacement = spec.read_replacement()
except OSError as e:
| {"golden_diff": "diff --git a/mitmproxy/addons/modifyheaders.py b/mitmproxy/addons/modifyheaders.py\n--- a/mitmproxy/addons/modifyheaders.py\n+++ b/mitmproxy/addons/modifyheaders.py\n@@ -83,14 +83,21 @@\n self.run(flow, flow.response.headers)\n \n def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:\n- # unset all specified headers\n+ matches = []\n+\n+ # first check all the filters against the original, unmodified flow\n for spec in self.replacements:\n- if spec.matches(flow):\n+ matches.append(spec.matches(flow))\n+\n+ # unset all specified headers\n+ for i, spec in enumerate(self.replacements):\n+ if matches[i]:\n hdrs.pop(spec.subject, None)\n \n # set all specified headers if the replacement string is not empty\n- for spec in self.replacements:\n- if spec.matches(flow):\n+\n+ for i, spec in enumerate(self.replacements):\n+ if matches[i]:\n try:\n replacement = spec.read_replacement()\n except OSError as e:\n", "issue": "v4 --replacements vs v5 --modify-headers\nI'm trying to replace the `User-Agent` request header if it contains a certain string.\r\n\r\nThis works with \"mitmproxy-4.0.4-linux\":\r\n\r\n```\r\n./mitmproxy --replacements \":~hq User-Agent:Mozilla(.+):CUSTOMAGENT\"\r\n```\r\n\r\nWith \"mitmproxy-5.2-linux\", this at least replaces the `User-Agent`, but is missing my \"certain string condition\":\r\n\r\n```\r\n./mitmproxy --modify-headers \"|~hq .+|User-Agent|CUSTOMAGENT\"\r\n```\r\n\r\nHow do I add my `Mozilla` condition in v5?\r\n\r\nNone of these work:\r\n\r\n```\r\n./mitmproxy --modify-headers \"|~hq ^(.*?)Mozilla(.*?)$|User-Agent|CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"/~hq .*?Mozilla.*?/User-Agent/CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"|~hq Mozilla|User-Agent|CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"|~hq User-Agent: Mozilla|User-Agent|CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"|~hq \\\"^(.*?)Mozilla(.*?)$\\\"|User-Agent|CUSTOMAGENT\"\r\n```\r\n\r\nI've been trying for hours, and I feel like I've tried every variation under the sun. There's a very small chance it's a bug, but most likely I'm just doing it wrong. If it matters, this system is Ubuntu 16.04.\r\n\r\n\r\n\n", "before_files": [{"content": "import re\nimport typing\nfrom pathlib import Path\n\nfrom mitmproxy import ctx, exceptions, flowfilter, http\nfrom mitmproxy.net.http import Headers\nfrom mitmproxy.utils import strutils\nfrom mitmproxy.utils.spec import parse_spec\n\n\nclass ModifySpec(typing.NamedTuple):\n matches: flowfilter.TFilter\n subject: bytes\n replacement_str: str\n\n def read_replacement(self) -> bytes:\n \"\"\"\n Process the replacement str. 
This usually just involves converting it to bytes.\n However, if it starts with `@`, we interpret the rest as a file path to read from.\n\n Raises:\n - IOError if the file cannot be read.\n \"\"\"\n if self.replacement_str.startswith(\"@\"):\n return Path(self.replacement_str[1:]).expanduser().read_bytes()\n else:\n # We could cache this at some point, but unlikely to be a problem.\n return strutils.escaped_str_to_bytes(self.replacement_str)\n\n\ndef parse_modify_spec(option: str, subject_is_regex: bool) -> ModifySpec:\n flow_filter, subject_str, replacement = parse_spec(option)\n\n subject = strutils.escaped_str_to_bytes(subject_str)\n if subject_is_regex:\n try:\n re.compile(subject)\n except re.error as e:\n raise ValueError(f\"Invalid regular expression {subject!r} ({e})\")\n\n spec = ModifySpec(flow_filter, subject, replacement)\n\n try:\n spec.read_replacement()\n except OSError as e:\n raise ValueError(f\"Invalid file path: {replacement[1:]} ({e})\")\n\n return spec\n\n\nclass ModifyHeaders:\n def __init__(self):\n self.replacements: typing.List[ModifySpec] = []\n\n def load(self, loader):\n loader.add_option(\n \"modify_headers\", typing.Sequence[str], [],\n \"\"\"\n Header modify pattern of the form \"[/flow-filter]/header-name/[@]header-value\", where the\n separator can be any character. The @ allows to provide a file path that is used to read\n the header value string. An empty header-value removes existing header-name headers.\n \"\"\"\n )\n\n def configure(self, updated):\n if \"modify_headers\" in updated:\n self.replacements = []\n for option in ctx.options.modify_headers:\n try:\n spec = parse_modify_spec(option, False)\n except ValueError as e:\n raise exceptions.OptionsError(f\"Cannot parse modify_headers option {option}: {e}\") from e\n self.replacements.append(spec)\n\n def request(self, flow):\n if flow.response or flow.error or flow.reply.state == \"taken\":\n return\n self.run(flow, flow.request.headers)\n\n def response(self, flow):\n if flow.error or flow.reply.state == \"taken\":\n return\n self.run(flow, flow.response.headers)\n\n def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:\n # unset all specified headers\n for spec in self.replacements:\n if spec.matches(flow):\n hdrs.pop(spec.subject, None)\n\n # set all specified headers if the replacement string is not empty\n for spec in self.replacements:\n if spec.matches(flow):\n try:\n replacement = spec.read_replacement()\n except OSError as e:\n ctx.log.warn(f\"Could not read replacement file: {e}\")\n continue\n else:\n if replacement:\n hdrs.add(spec.subject, replacement)\n", "path": "mitmproxy/addons/modifyheaders.py"}], "after_files": [{"content": "import re\nimport typing\nfrom pathlib import Path\n\nfrom mitmproxy import ctx, exceptions, flowfilter, http\nfrom mitmproxy.net.http import Headers\nfrom mitmproxy.utils import strutils\nfrom mitmproxy.utils.spec import parse_spec\n\n\nclass ModifySpec(typing.NamedTuple):\n matches: flowfilter.TFilter\n subject: bytes\n replacement_str: str\n\n def read_replacement(self) -> bytes:\n \"\"\"\n Process the replacement str. 
This usually just involves converting it to bytes.\n However, if it starts with `@`, we interpret the rest as a file path to read from.\n\n Raises:\n - IOError if the file cannot be read.\n \"\"\"\n if self.replacement_str.startswith(\"@\"):\n return Path(self.replacement_str[1:]).expanduser().read_bytes()\n else:\n # We could cache this at some point, but unlikely to be a problem.\n return strutils.escaped_str_to_bytes(self.replacement_str)\n\n\ndef parse_modify_spec(option: str, subject_is_regex: bool) -> ModifySpec:\n flow_filter, subject_str, replacement = parse_spec(option)\n\n subject = strutils.escaped_str_to_bytes(subject_str)\n if subject_is_regex:\n try:\n re.compile(subject)\n except re.error as e:\n raise ValueError(f\"Invalid regular expression {subject!r} ({e})\")\n\n spec = ModifySpec(flow_filter, subject, replacement)\n\n try:\n spec.read_replacement()\n except OSError as e:\n raise ValueError(f\"Invalid file path: {replacement[1:]} ({e})\")\n\n return spec\n\n\nclass ModifyHeaders:\n def __init__(self):\n self.replacements: typing.List[ModifySpec] = []\n\n def load(self, loader):\n loader.add_option(\n \"modify_headers\", typing.Sequence[str], [],\n \"\"\"\n Header modify pattern of the form \"[/flow-filter]/header-name/[@]header-value\", where the\n separator can be any character. The @ allows to provide a file path that is used to read\n the header value string. An empty header-value removes existing header-name headers.\n \"\"\"\n )\n\n def configure(self, updated):\n if \"modify_headers\" in updated:\n self.replacements = []\n for option in ctx.options.modify_headers:\n try:\n spec = parse_modify_spec(option, False)\n except ValueError as e:\n raise exceptions.OptionsError(f\"Cannot parse modify_headers option {option}: {e}\") from e\n self.replacements.append(spec)\n\n def request(self, flow):\n if flow.response or flow.error or flow.reply.state == \"taken\":\n return\n self.run(flow, flow.request.headers)\n\n def response(self, flow):\n if flow.error or flow.reply.state == \"taken\":\n return\n self.run(flow, flow.response.headers)\n\n def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:\n matches = []\n\n # first check all the filters against the original, unmodified flow\n for spec in self.replacements:\n matches.append(spec.matches(flow))\n\n # unset all specified headers\n for i, spec in enumerate(self.replacements):\n if matches[i]:\n hdrs.pop(spec.subject, None)\n\n # set all specified headers if the replacement string is not empty\n\n for i, spec in enumerate(self.replacements):\n if matches[i]:\n try:\n replacement = spec.read_replacement()\n except OSError as e:\n ctx.log.warn(f\"Could not read replacement file: {e}\")\n continue\n else:\n if replacement:\n hdrs.add(spec.subject, replacement)\n", "path": "mitmproxy/addons/modifyheaders.py"}]} | 1,521 | 247 |
gh_patches_debug_23291 | rasdani/github-patches | git_diff | scikit-hep__awkward-3115 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
typing ak.Array for numba.cuda.jit signature
### Version of Awkward Array
2.6.2
### Description and code to reproduce
Hey guys, I followed a hint from the discussion in [#696](https://github.com/scikit-hep/awkward/discussions/696#discussion-2571850) to type `ak.Array` for numba signatures. So I tried something like
```python
import awkward as ak
import numba as nb
from numba import types
cpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cpu').numba_type
@nb.njit(types.void(cpu_arr_type))
def cpu_kernel(arr):
do_something_with_arr
```
and this works like a charm.
However, I'm interested in the same case but with a cuda kernel. So I tried what appeared more natural to do:
```python
gpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cuda').numba_type
@nb.cuda.jit(types.void(gpu_arr_type), extensions=[ak.numba.cuda])
def cuda_kernel(arr):
do_something_with_arr
```
This time, I get the error:
```python
self = <awkward._connect.numba.arrayview_cuda.ArrayViewArgHandler object at 0x784afbc13fa0>
ty = ak.ArrayView(ak.ListArrayType(array(int64, 1d, C), ak.ListArrayType(array(int64, 1d, C), ak.NumpyArrayType(array(int64, 1d, C), {}), {}), {}), None, ())
val = <Array [[[4, 1], [2, -1]], [...], [[4, 0]]] type='3 * var * var * int64'>
stream = 0, retr = []
def prepare_args(self, ty, val, stream, retr):
if isinstance(val, ak.Array):
if isinstance(val.layout.backend, CupyBackend):
# Use uint64 for pos, start, stop, the array pointers values, and the pylookup value
tys = numba.types.UniTuple(numba.types.uint64, 5)
> start = val._numbaview.start
E AttributeError: 'NoneType' object has no attribute 'start'
.../site-packages/awkward/_connect/numba/arrayview_cuda.py:21: AttributeError
```
How should this latter case be correctly treated? Note that, without typing, the thing works as expected:
```python
@nb.cuda.jit(extensions=[ak.numba.cuda])
def cuda_kernel_no_typing(arr):
do_something_with_arr
```
However, I'm interested in `ak.Array`s with the 3D layout of integers (as above) and would like to take advantage of numba's eager compilation. I'm passing the `arr` for testing as
```python
backend = 'cpu' # or 'cuda'
arr = ak.to_backend(
ak.Array([
[[4, 1], [2, -1]],
[[0, -1], [1, 1], [3, -1]],
[[4, 0]]
]),
backend
)
```
Any help is appreciated!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/awkward/_connect/numba/arrayview_cuda.py`
Content:
```
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward/blob/main/LICENSE
2
3 from __future__ import annotations
4
5 import numba
6 from numba.core.errors import NumbaTypeError
7
8 import awkward as ak
9 from awkward._backends.cupy import CupyBackend
10
11 ########## ArrayView Arguments Handler for CUDA JIT
12
13
14 class ArrayViewArgHandler:
15 def prepare_args(self, ty, val, stream, retr):
16 if isinstance(val, ak.Array):
17 if isinstance(val.layout.backend, CupyBackend):
18 # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value
19 tys = numba.types.UniTuple(numba.types.uint64, 5)
20
21 start = val._numbaview.start
22 stop = val._numbaview.stop
23 pos = val._numbaview.pos
24 arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr
25 pylookup = 0
26
27 return tys, (pos, start, stop, arrayptrs, pylookup)
28 else:
29 raise NumbaTypeError(
30 '`ak.to_backend` should be called with `backend="cuda"` to put '
31 "the array on the GPU before using it: "
32 'ak.to_backend(array, backend="cuda")'
33 )
34
35 else:
36 return ty, val
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/awkward/_connect/numba/arrayview_cuda.py b/src/awkward/_connect/numba/arrayview_cuda.py
--- a/src/awkward/_connect/numba/arrayview_cuda.py
+++ b/src/awkward/_connect/numba/arrayview_cuda.py
@@ -15,13 +15,22 @@
def prepare_args(self, ty, val, stream, retr):
if isinstance(val, ak.Array):
if isinstance(val.layout.backend, CupyBackend):
+ if ty is not val.numba_type:
+ raise NumbaTypeError(
+ f"the array type: {val.numba_type} does not match "
+ f"the kernel signature type: {ty}"
+ )
+
# Use uint64 for pos, start, stop, the array pointers values, and the pylookup value
tys = numba.types.UniTuple(numba.types.uint64, 5)
- start = val._numbaview.start
- stop = val._numbaview.stop
- pos = val._numbaview.pos
- arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr
+ view = val._numbaview
+ assert view is not None
+
+ start = view.start
+ stop = view.stop
+ pos = view.pos
+ arrayptrs = view.lookup.arrayptrs.data.ptr
pylookup = 0
return tys, (pos, start, stop, arrayptrs, pylookup)
| {"golden_diff": "diff --git a/src/awkward/_connect/numba/arrayview_cuda.py b/src/awkward/_connect/numba/arrayview_cuda.py\n--- a/src/awkward/_connect/numba/arrayview_cuda.py\n+++ b/src/awkward/_connect/numba/arrayview_cuda.py\n@@ -15,13 +15,22 @@\n def prepare_args(self, ty, val, stream, retr):\n if isinstance(val, ak.Array):\n if isinstance(val.layout.backend, CupyBackend):\n+ if ty is not val.numba_type:\n+ raise NumbaTypeError(\n+ f\"the array type: {val.numba_type} does not match \"\n+ f\"the kernel signature type: {ty}\"\n+ )\n+\n # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value\n tys = numba.types.UniTuple(numba.types.uint64, 5)\n \n- start = val._numbaview.start\n- stop = val._numbaview.stop\n- pos = val._numbaview.pos\n- arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr\n+ view = val._numbaview\n+ assert view is not None\n+\n+ start = view.start\n+ stop = view.stop\n+ pos = view.pos\n+ arrayptrs = view.lookup.arrayptrs.data.ptr\n pylookup = 0\n \n return tys, (pos, start, stop, arrayptrs, pylookup)\n", "issue": "typing ak.Array for numba.cuda.jit signature\n### Version of Awkward Array\n\n2.6.2\n\n### Description and code to reproduce\n\nHey guys, I followed a hint from the discussion in [#696](https://github.com/scikit-hep/awkward/discussions/696#discussion-2571850) to type `ak.Array` for numba signatures. So I tried something like\r\n\r\n```python\r\nimport awkward as ak\r\nimport numba as nb\r\nfrom numba import types\r\n\r\ncpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cpu').numba_type\r\n\r\[email protected](types.void(cpu_arr_type))\r\ndef cpu_kernel(arr):\r\n do_something_with_arr\r\n```\r\nand this works like a charm.\r\n\r\nHowever, I'm interested in the same case but with a cuda kernel. So I tried what appeared more natural to do:\r\n```python\r\ngpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cuda').numba_type\r\n\r\[email protected](types.void(gpu_arr_type), extensions=[ak.numba.cuda])\r\ndef cuda_kernel(arr):\r\n do_something_with_arr\r\n```\r\nThis time, I get the error:\r\n```python\r\nself = <awkward._connect.numba.arrayview_cuda.ArrayViewArgHandler object at 0x784afbc13fa0>\r\nty = ak.ArrayView(ak.ListArrayType(array(int64, 1d, C), ak.ListArrayType(array(int64, 1d, C), ak.NumpyArrayType(array(int64, 1d, C), {}), {}), {}), None, ())\r\nval = <Array [[[4, 1], [2, -1]], [...], [[4, 0]]] type='3 * var * var * int64'>\r\nstream = 0, retr = []\r\n\r\n def prepare_args(self, ty, val, stream, retr):\r\n if isinstance(val, ak.Array):\r\n if isinstance(val.layout.backend, CupyBackend):\r\n # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value\r\n tys = numba.types.UniTuple(numba.types.uint64, 5)\r\n \r\n> start = val._numbaview.start\r\nE AttributeError: 'NoneType' object has no attribute 'start'\r\n\r\n.../site-packages/awkward/_connect/numba/arrayview_cuda.py:21: AttributeError\r\n```\r\nHow should this latter case be correctly treated? Note that, without typing, the thing works as expected:\r\n```python\r\[email protected](extensions=[ak.numba.cuda])\r\ndef cuda_kernel_no_typing(arr):\r\n do_something_with_arr\r\n```\r\nHowever, I'm interested in `ak.Array`s with the 3D layout of integers (as above) and would like to take advantage of numba's eager compilation. 
I'm passing the `arr` for testing as\r\n```python\r\nbackend = 'cpu' # or 'cuda'\r\narr = ak.to_backend(\r\n ak.Array([\r\n [[4, 1], [2, -1]],\r\n [[0, -1], [1, 1], [3, -1]],\r\n [[4, 0]]\r\n ]),\r\n backend\r\n)\r\n```\r\nAny help is appreciated!\r\n\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward/blob/main/LICENSE\n\nfrom __future__ import annotations\n\nimport numba\nfrom numba.core.errors import NumbaTypeError\n\nimport awkward as ak\nfrom awkward._backends.cupy import CupyBackend\n\n########## ArrayView Arguments Handler for CUDA JIT\n\n\nclass ArrayViewArgHandler:\n def prepare_args(self, ty, val, stream, retr):\n if isinstance(val, ak.Array):\n if isinstance(val.layout.backend, CupyBackend):\n # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value\n tys = numba.types.UniTuple(numba.types.uint64, 5)\n\n start = val._numbaview.start\n stop = val._numbaview.stop\n pos = val._numbaview.pos\n arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr\n pylookup = 0\n\n return tys, (pos, start, stop, arrayptrs, pylookup)\n else:\n raise NumbaTypeError(\n '`ak.to_backend` should be called with `backend=\"cuda\"` to put '\n \"the array on the GPU before using it: \"\n 'ak.to_backend(array, backend=\"cuda\")'\n )\n\n else:\n return ty, val\n", "path": "src/awkward/_connect/numba/arrayview_cuda.py"}], "after_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward/blob/main/LICENSE\n\nfrom __future__ import annotations\n\nimport numba\nfrom numba.core.errors import NumbaTypeError\n\nimport awkward as ak\nfrom awkward._backends.cupy import CupyBackend\n\n########## ArrayView Arguments Handler for CUDA JIT\n\n\nclass ArrayViewArgHandler:\n def prepare_args(self, ty, val, stream, retr):\n if isinstance(val, ak.Array):\n if isinstance(val.layout.backend, CupyBackend):\n if ty is not val.numba_type:\n raise NumbaTypeError(\n f\"the array type: {val.numba_type} does not match \"\n f\"the kernel signature type: {ty}\"\n )\n\n # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value\n tys = numba.types.UniTuple(numba.types.uint64, 5)\n\n view = val._numbaview\n assert view is not None\n\n start = view.start\n stop = view.stop\n pos = view.pos\n arrayptrs = view.lookup.arrayptrs.data.ptr\n pylookup = 0\n\n return tys, (pos, start, stop, arrayptrs, pylookup)\n else:\n raise NumbaTypeError(\n '`ak.to_backend` should be called with `backend=\"cuda\"` to put '\n \"the array on the GPU before using it: \"\n 'ak.to_backend(array, backend=\"cuda\")'\n )\n\n else:\n return ty, val\n", "path": "src/awkward/_connect/numba/arrayview_cuda.py"}]} | 1,362 | 342 |
gh_patches_debug_21885 | rasdani/github-patches | git_diff | numba__numba-3578 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
simulate bug func_or_sig vs fn_or_sig named parameter
There seems to be a difference in the named parameter func_or_sig/fn_or_sig between the cuda.jit() in the simulator vs gpu code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/cuda/simulator/api.py`
Content:
```
1 '''
2 Contains CUDA API functions
3 '''
4 from __future__ import absolute_import
5
6 from contextlib import contextmanager
7 from .cudadrv.devices import require_context, reset, gpus
8 from .kernel import FakeCUDAKernel
9 from numba.typing import Signature
10 from warnings import warn
11 from ..args import In, Out, InOut
12
13
14 def select_device(dev=0):
15 assert dev == 0, 'Only a single device supported by the simulator'
16
17
18 class stream(object):
19 '''
20 The stream API is supported in the simulator - however, all execution
21 occurs synchronously, so synchronization requires no operation.
22 '''
23 @contextmanager
24 def auto_synchronize(self):
25 yield
26
27 def synchronize(self):
28 pass
29
30
31 def synchronize():
32 pass
33
34 def close():
35 gpus.closed = True
36
37
38 def declare_device(*args, **kwargs):
39 pass
40
41
42 def detect():
43 print('Found 1 CUDA devices')
44 print('id %d %20s %40s' % (0, 'SIMULATOR', '[SUPPORTED]'))
45 print('%40s: 5.2' % 'compute capability')
46
47
48 def list_devices():
49 return gpus
50
51
52 # Events
53
54 class Event(object):
55 '''
56 The simulator supports the event API, but they do not record timing info,
57 and all simulation is synchronous. Execution time is not recorded.
58 '''
59 def record(self, stream=0):
60 pass
61
62 def wait(self, stream=0):
63 pass
64
65 def synchronize(self):
66 pass
67
68 def elapsed_time(self, event):
69 warn('Simulator timings are bogus')
70 return 0.0
71
72 event = Event
73
74
75 def jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,
76 fastmath=False, link=None):
77 if link is not None:
78 raise NotImplementedError('Cannot link PTX in the simulator')
79 # Check for first argument specifying types - in that case the
80 # decorator is not being passed a function
81 if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):
82 def jitwrapper(fn):
83 return FakeCUDAKernel(fn,
84 device=device,
85 fastmath=fastmath)
86 return jitwrapper
87 return FakeCUDAKernel(fn_or_sig, device=device)
88
89 autojit = jit
90
91
92 @contextmanager
93 def defer_cleanup():
94 # No effect for simulator
95 yield
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numba/cuda/simulator/api.py b/numba/cuda/simulator/api.py
--- a/numba/cuda/simulator/api.py
+++ b/numba/cuda/simulator/api.py
@@ -72,19 +72,19 @@
event = Event
-def jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,
- fastmath=False, link=None):
+def jit(func_or_sig=None, device=False, debug=False, argtypes=None,
+ inline=False, restype=None, fastmath=False, link=None):
if link is not None:
raise NotImplementedError('Cannot link PTX in the simulator')
# Check for first argument specifying types - in that case the
# decorator is not being passed a function
- if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):
+ if func_or_sig is None or isinstance(func_or_sig, (str, tuple, Signature)):
def jitwrapper(fn):
return FakeCUDAKernel(fn,
device=device,
fastmath=fastmath)
return jitwrapper
- return FakeCUDAKernel(fn_or_sig, device=device)
+ return FakeCUDAKernel(func_or_sig, device=device)
autojit = jit
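After the rename, the simulator accepts the same keyword as the real CUDA target, so code that passes the parameter by name should behave identically whether or not `NUMBA_ENABLE_CUDASIM` is set. A hedged sketch with a hypothetical kernel:

```python
from numba import cuda

# The keyword now matches on both the simulator and the hardware target.
@cuda.jit(func_or_sig="void(float32[:])")
def scale(arr):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= 2.0
```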
| {"golden_diff": "diff --git a/numba/cuda/simulator/api.py b/numba/cuda/simulator/api.py\n--- a/numba/cuda/simulator/api.py\n+++ b/numba/cuda/simulator/api.py\n@@ -72,19 +72,19 @@\n event = Event\n \n \n-def jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,\n- fastmath=False, link=None):\n+def jit(func_or_sig=None, device=False, debug=False, argtypes=None,\n+ inline=False, restype=None, fastmath=False, link=None):\n if link is not None:\n raise NotImplementedError('Cannot link PTX in the simulator')\n # Check for first argument specifying types - in that case the\n # decorator is not being passed a function\n- if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):\n+ if func_or_sig is None or isinstance(func_or_sig, (str, tuple, Signature)):\n def jitwrapper(fn):\n return FakeCUDAKernel(fn,\n device=device,\n fastmath=fastmath)\n return jitwrapper\n- return FakeCUDAKernel(fn_or_sig, device=device)\n+ return FakeCUDAKernel(func_or_sig, device=device)\n \n autojit = jit\n", "issue": "simulate bug func_or_sig vs fn_or_sig named parameter\nThere seems to be a difference in the named parameter func_or_sig/fn_or_sig between the cuda.jit() in the simulator vs gpu code. \n", "before_files": [{"content": "'''\nContains CUDA API functions\n'''\nfrom __future__ import absolute_import\n\nfrom contextlib import contextmanager\nfrom .cudadrv.devices import require_context, reset, gpus\nfrom .kernel import FakeCUDAKernel\nfrom numba.typing import Signature\nfrom warnings import warn\nfrom ..args import In, Out, InOut\n\n\ndef select_device(dev=0):\n assert dev == 0, 'Only a single device supported by the simulator'\n\n\nclass stream(object):\n '''\n The stream API is supported in the simulator - however, all execution\n occurs synchronously, so synchronization requires no operation.\n '''\n @contextmanager\n def auto_synchronize(self):\n yield\n\n def synchronize(self):\n pass\n\n\ndef synchronize():\n pass\n\ndef close():\n gpus.closed = True\n\n\ndef declare_device(*args, **kwargs):\n pass\n\n\ndef detect():\n print('Found 1 CUDA devices')\n print('id %d %20s %40s' % (0, 'SIMULATOR', '[SUPPORTED]'))\n print('%40s: 5.2' % 'compute capability')\n\n\ndef list_devices():\n return gpus\n\n\n# Events\n\nclass Event(object):\n '''\n The simulator supports the event API, but they do not record timing info,\n and all simulation is synchronous. 
Execution time is not recorded.\n '''\n def record(self, stream=0):\n pass\n\n def wait(self, stream=0):\n pass\n\n def synchronize(self):\n pass\n\n def elapsed_time(self, event):\n warn('Simulator timings are bogus')\n return 0.0\n\nevent = Event\n\n\ndef jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,\n fastmath=False, link=None):\n if link is not None:\n raise NotImplementedError('Cannot link PTX in the simulator')\n # Check for first argument specifying types - in that case the\n # decorator is not being passed a function\n if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):\n def jitwrapper(fn):\n return FakeCUDAKernel(fn,\n device=device,\n fastmath=fastmath)\n return jitwrapper\n return FakeCUDAKernel(fn_or_sig, device=device)\n\nautojit = jit\n\n\n@contextmanager\ndef defer_cleanup():\n # No effect for simulator\n yield\n", "path": "numba/cuda/simulator/api.py"}], "after_files": [{"content": "'''\nContains CUDA API functions\n'''\nfrom __future__ import absolute_import\n\nfrom contextlib import contextmanager\nfrom .cudadrv.devices import require_context, reset, gpus\nfrom .kernel import FakeCUDAKernel\nfrom numba.typing import Signature\nfrom warnings import warn\nfrom ..args import In, Out, InOut\n\n\ndef select_device(dev=0):\n assert dev == 0, 'Only a single device supported by the simulator'\n\n\nclass stream(object):\n '''\n The stream API is supported in the simulator - however, all execution\n occurs synchronously, so synchronization requires no operation.\n '''\n @contextmanager\n def auto_synchronize(self):\n yield\n\n def synchronize(self):\n pass\n\n\ndef synchronize():\n pass\n\ndef close():\n gpus.closed = True\n\n\ndef declare_device(*args, **kwargs):\n pass\n\n\ndef detect():\n print('Found 1 CUDA devices')\n print('id %d %20s %40s' % (0, 'SIMULATOR', '[SUPPORTED]'))\n print('%40s: 5.2' % 'compute capability')\n\n\ndef list_devices():\n return gpus\n\n\n# Events\n\nclass Event(object):\n '''\n The simulator supports the event API, but they do not record timing info,\n and all simulation is synchronous. Execution time is not recorded.\n '''\n def record(self, stream=0):\n pass\n\n def wait(self, stream=0):\n pass\n\n def synchronize(self):\n pass\n\n def elapsed_time(self, event):\n warn('Simulator timings are bogus')\n return 0.0\n\nevent = Event\n\n\ndef jit(func_or_sig=None, device=False, debug=False, argtypes=None,\n inline=False, restype=None, fastmath=False, link=None):\n if link is not None:\n raise NotImplementedError('Cannot link PTX in the simulator')\n # Check for first argument specifying types - in that case the\n # decorator is not being passed a function\n if func_or_sig is None or isinstance(func_or_sig, (str, tuple, Signature)):\n def jitwrapper(fn):\n return FakeCUDAKernel(fn,\n device=device,\n fastmath=fastmath)\n return jitwrapper\n return FakeCUDAKernel(func_or_sig, device=device)\n\nautojit = jit\n\n\n@contextmanager\ndef defer_cleanup():\n # No effect for simulator\n yield\n", "path": "numba/cuda/simulator/api.py"}]} | 1,025 | 286 |
gh_patches_debug_25919 | rasdani/github-patches | git_diff | archlinux__archinstall-823 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mkinitcpio.conf generated incorrectly for AMDGPU.
As the archwiki installation guide states [https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order](https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order), you must ensure that the amdgpu module is loaded before the radeon one: `MODULES=(amdgpu radeon)`
Otherwise the DM will fail to start at boot.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `profiles/xorg.py`
Content:
```
1 # A system with "xorg" installed
2
3 import archinstall
4 import logging
5
6 is_top_level_profile = True
7
8 __description__ = 'Installs a minimal system as well as xorg and graphics drivers.'
9
10 __packages__ = [
11 'dkms',
12 'xorg-server',
13 'xorg-xinit',
14 'nvidia-dkms',
15 *archinstall.lib.hardware.__packages__,
16 ]
17
18
19 def _prep_function(*args, **kwargs):
20 """
21 Magic function called by the importing installer
22 before continuing any further. It also avoids executing any
23 other code in this stage. So it's a safe way to ask the user
24 for more input before any other installer steps start.
25 """
26
27 archinstall.storage["gfx_driver_packages"] = archinstall.select_driver()
28
29 # TODO: Add language section and/or merge it with the locale selected
30 # earlier in for instance guided.py installer.
31
32 return True
33
34
35 # Ensures that this code only gets executed if executed
36 # through importlib.util.spec_from_file_location("xorg", "/somewhere/xorg.py")
37 # or through conventional import xorg
38 if __name__ == 'xorg':
39 try:
40 if "nvidia" in archinstall.storage.get("gfx_driver_packages", []):
41 if "linux-zen" in archinstall.storage['installation_session'].base_packages or "linux-lts" in archinstall.storage['installation_session'].base_packages:
42 for kernel in archinstall.storage['installation_session'].kernels:
43 archinstall.storage['installation_session'].add_additional_packages(f"{kernel}-headers") # Fixes https://github.com/archlinux/archinstall/issues/585
44 archinstall.storage['installation_session'].add_additional_packages("dkms") # I've had kernel regen fail if it wasn't installed before nvidia-dkms
45 archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit nvidia-dkms")
46 else:
47 archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
48 else:
49 archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
50 except Exception as err:
51 archinstall.log(f"Could not handle nvidia and linuz-zen specific situations during xorg installation: {err}", level=logging.WARNING, fg="yellow")
52 archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit") # Prep didn't run, so there's no driver to install
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/profiles/xorg.py b/profiles/xorg.py
--- a/profiles/xorg.py
+++ b/profiles/xorg.py
@@ -45,6 +45,17 @@
archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit nvidia-dkms")
else:
archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
+ elif 'amdgpu' in archinstall.storage.get("gfx_driver_packages", []):
+ # The order of these two are important if amdgpu is installed #808
+ if 'amdgpu' in archinstall.storage['installation_session'].MODULES:
+ archinstall.storage['installation_session'].MODULES.remove('amdgpu')
+ archinstall.storage['installation_session'].MODULES.append('amdgpu')
+
+ if 'radeon' in archinstall.storage['installation_session'].MODULES:
+ archinstall.storage['installation_session'].MODULES.remove('radeon')
+ archinstall.storage['installation_session'].MODULES.append('radeon')
+
+ archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
else:
archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
except Exception as err:
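Note on the ordering guarantee: the remove-then-append idiom in the patch leaves the tail of `MODULES` as `amdgpu` followed by `radeon`, so `amdgpu` is listed (and therefore loaded) first, as the wiki requires, regardless of the previous order. A standalone illustration of the list manipulation:

```python
# Illustration only: mimic the patch's ordering guarantee on a plain list.
modules = ["ext4", "radeon", "amdgpu"]  # hypothetical pre-existing order

for name in ("amdgpu", "radeon"):
    if name in modules:
        modules.remove(name)
    modules.append(name)

assert modules == ["ext4", "amdgpu", "radeon"]  # amdgpu now precedes radeon
```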
| {"golden_diff": "diff --git a/profiles/xorg.py b/profiles/xorg.py\n--- a/profiles/xorg.py\n+++ b/profiles/xorg.py\n@@ -45,6 +45,17 @@\n \t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit nvidia-dkms\")\n \t\t\telse:\n \t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n+\t\telif 'amdgpu' in archinstall.storage.get(\"gfx_driver_packages\", []):\n+\t\t\t# The order of these two are important if amdgpu is installed #808\n+\t\t\tif 'amdgpu' in archinstall.storage['installation_session'].MODULES:\n+\t\t\t\tarchinstall.storage['installation_session'].MODULES.remove('amdgpu')\n+\t\t\tarchinstall.storage['installation_session'].MODULES.append('amdgpu')\n+\n+\t\t\tif 'radeon' in archinstall.storage['installation_session'].MODULES:\n+\t\t\t\tarchinstall.storage['installation_session'].MODULES.remove('radeon')\n+\t\t\tarchinstall.storage['installation_session'].MODULES.append('radeon')\n+\n+\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n \t\telse:\n \t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n \texcept Exception as err:\n", "issue": "mkinitcpio.conf generated incorrectly for AMDGPU.\nAs the archwiki installation guide states [https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order](https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order), you must ensure that the amdgpu module is loaded before the radeon one: `MODULES=(amdgpu radeon)`\r\nOtherwise the DM will fail to start at boot.\n", "before_files": [{"content": "# A system with \"xorg\" installed\n\nimport archinstall\nimport logging\n\nis_top_level_profile = True\n\n__description__ = 'Installs a minimal system as well as xorg and graphics drivers.'\n\n__packages__ = [\n\t'dkms',\n\t'xorg-server',\n\t'xorg-xinit',\n\t'nvidia-dkms',\n\t*archinstall.lib.hardware.__packages__,\n]\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. 
So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\tarchinstall.storage[\"gfx_driver_packages\"] = archinstall.select_driver()\n\n\t# TODO: Add language section and/or merge it with the locale selected\n\t# earlier in for instance guided.py installer.\n\n\treturn True\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"xorg\", \"/somewhere/xorg.py\")\n# or through conventional import xorg\nif __name__ == 'xorg':\n\ttry:\n\t\tif \"nvidia\" in archinstall.storage.get(\"gfx_driver_packages\", []):\n\t\t\tif \"linux-zen\" in archinstall.storage['installation_session'].base_packages or \"linux-lts\" in archinstall.storage['installation_session'].base_packages:\n\t\t\t\tfor kernel in archinstall.storage['installation_session'].kernels:\n\t\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"{kernel}-headers\") # Fixes https://github.com/archlinux/archinstall/issues/585\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"dkms\") # I've had kernel regen fail if it wasn't installed before nvidia-dkms\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit nvidia-dkms\")\n\t\t\telse:\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n\t\telse:\n\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n\texcept Exception as err:\n\t\tarchinstall.log(f\"Could not handle nvidia and linuz-zen specific situations during xorg installation: {err}\", level=logging.WARNING, fg=\"yellow\")\n\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit\") # Prep didn't run, so there's no driver to install\n", "path": "profiles/xorg.py"}], "after_files": [{"content": "# A system with \"xorg\" installed\n\nimport archinstall\nimport logging\n\nis_top_level_profile = True\n\n__description__ = 'Installs a minimal system as well as xorg and graphics drivers.'\n\n__packages__ = [\n\t'dkms',\n\t'xorg-server',\n\t'xorg-xinit',\n\t'nvidia-dkms',\n\t*archinstall.lib.hardware.__packages__,\n]\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. 
So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\tarchinstall.storage[\"gfx_driver_packages\"] = archinstall.select_driver()\n\n\t# TODO: Add language section and/or merge it with the locale selected\n\t# earlier in for instance guided.py installer.\n\n\treturn True\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"xorg\", \"/somewhere/xorg.py\")\n# or through conventional import xorg\nif __name__ == 'xorg':\n\ttry:\n\t\tif \"nvidia\" in archinstall.storage.get(\"gfx_driver_packages\", []):\n\t\t\tif \"linux-zen\" in archinstall.storage['installation_session'].base_packages or \"linux-lts\" in archinstall.storage['installation_session'].base_packages:\n\t\t\t\tfor kernel in archinstall.storage['installation_session'].kernels:\n\t\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"{kernel}-headers\") # Fixes https://github.com/archlinux/archinstall/issues/585\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"dkms\") # I've had kernel regen fail if it wasn't installed before nvidia-dkms\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit nvidia-dkms\")\n\t\t\telse:\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n\t\telif 'amdgpu' in archinstall.storage.get(\"gfx_driver_packages\", []):\n\t\t\t# The order of these two are important if amdgpu is installed #808\n\t\t\tif 'amdgpu' in archinstall.storage['installation_session'].MODULES:\n\t\t\t\tarchinstall.storage['installation_session'].MODULES.remove('amdgpu')\n\t\t\tarchinstall.storage['installation_session'].MODULES.append('amdgpu')\n\n\t\t\tif 'radeon' in archinstall.storage['installation_session'].MODULES:\n\t\t\t\tarchinstall.storage['installation_session'].MODULES.remove('radeon')\n\t\t\tarchinstall.storage['installation_session'].MODULES.append('radeon')\n\n\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n\t\telse:\n\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n\texcept Exception as err:\n\t\tarchinstall.log(f\"Could not handle nvidia and linuz-zen specific situations during xorg installation: {err}\", level=logging.WARNING, fg=\"yellow\")\n\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit\") # Prep didn't run, so there's no driver to install\n", "path": "profiles/xorg.py"}]} | 1,032 | 343 |
gh_patches_debug_2112 | rasdani/github-patches | git_diff | Qiskit__qiskit-1940 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rzz gate
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.7.2
- **Python version**: 3.6.6
- **Operating system**: Windows 10
### What is the current behavior?
rzz gate appears to give incorrect results
### Steps to reproduce the problem
rzz gate rule defined in https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/extensions/standard/rzz.py
```
CnotGate(q[0], q[1]),
U1Gate(self.params[0], q[0]),
CnotGate(q[0], q[1])
```
### What is the expected behavior?
I think it should be
```
CnotGate(q[0], q[1]),
U1Gate(self.params[0], q[1]),
CnotGate(q[0], q[1])
```
the u1 phase should be on the target instead of control
### Suggested solutions
modify rzz gate definition to give the right behavior.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/extensions/standard/rzz.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 two-qubit ZZ-rotation gate.
10 """
11 from qiskit.circuit import CompositeGate
12 from qiskit.circuit import Gate
13 from qiskit.circuit import QuantumCircuit
14 from qiskit.circuit import QuantumRegister
15 from qiskit.circuit.decorators import _op_expand
16 from qiskit.dagcircuit import DAGCircuit
17 from qiskit.extensions.standard.u1 import U1Gate
18 from qiskit.extensions.standard.cx import CnotGate
19
20
21 class RZZGate(Gate):
22 """Two-qubit ZZ-rotation gate."""
23
24 def __init__(self, theta, ctl, tgt, circ=None):
25 """Create new rzz gate."""
26 super().__init__("rzz", [theta], [ctl, tgt], circ)
27
28 def _define_decompositions(self):
29 """
30 gate rzz(theta) a, b { cx a, b; u1(theta) b; cx a, b; }
31 """
32 decomposition = DAGCircuit()
33 q = QuantumRegister(2, "q")
34 decomposition.add_qreg(q)
35 rule = [
36 CnotGate(q[0], q[1]),
37 U1Gate(self.params[0], q[0]),
38 CnotGate(q[0], q[1])
39 ]
40 for inst in rule:
41 decomposition.apply_operation_back(inst)
42 self._decompositions = [decomposition]
43
44 def inverse(self):
45 """Invert this gate."""
46 self.params[0] = -self.params[0]
47 self._decompositions = None
48 return self
49
50 def reapply(self, circ):
51 """Reapply this gate to corresponding qubits in circ."""
52 self._modifiers(circ.rzz(self.params[0], self.qargs[0], self.qargs[1]))
53
54
55 @_op_expand(2, broadcastable=[False, False])
56 def rzz(self, theta, qubit1, qubit2):
57 """Apply RZZ to circuit."""
58 self._check_qubit(qubit1)
59 self._check_qubit(qubit2)
60 self._check_dups([qubit1, qubit2])
61 return self._attach(RZZGate(theta, qubit1, qubit2, self))
62
63
64 # Add to QuantumCircuit and CompositeGate classes
65 QuantumCircuit.rzz = rzz
66 CompositeGate.rzz = rzz
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/extensions/standard/rzz.py b/qiskit/extensions/standard/rzz.py
--- a/qiskit/extensions/standard/rzz.py
+++ b/qiskit/extensions/standard/rzz.py
@@ -34,7 +34,7 @@
decomposition.add_qreg(q)
rule = [
CnotGate(q[0], q[1]),
- U1Gate(self.params[0], q[0]),
+ U1Gate(self.params[0], q[1]),
CnotGate(q[0], q[1])
]
for inst in rule:
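A quick numerical check of the patch, ignoring global phase: conjugating a `u1` on the target by the two CNOTs yields the entangling diagonal `diag(1, e^{i\theta}, e^{i\theta}, 1)`, proportional to `exp(-i\theta Z\otimes Z/2)`, whereas a `u1` on the control commutes through both CNOTs and leaves only a local phase. A hedged sketch, taking the control qubit as the leftmost tensor factor:

```python
import numpy as np

theta = 0.7
u1 = np.diag([1, np.exp(1j * theta)])
I2 = np.eye(2)
CX = np.array([[1, 0, 0, 0],   # control = qubit 0 (leftmost factor)
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

on_target = CX @ np.kron(I2, u1) @ CX
on_control = CX @ np.kron(u1, I2) @ CX

# u1 on target -> diag(1, e^{i*theta}, e^{i*theta}, 1): an entangling ZZ rotation.
assert np.allclose(on_target, np.diag([1, np.exp(1j*theta), np.exp(1j*theta), 1]))
# u1 on control -> diag(1, 1, e^{i*theta}, e^{i*theta}) = u1 (x) I: no interaction.
assert np.allclose(on_control, np.kron(u1, I2))
```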
| {"golden_diff": "diff --git a/qiskit/extensions/standard/rzz.py b/qiskit/extensions/standard/rzz.py\n--- a/qiskit/extensions/standard/rzz.py\n+++ b/qiskit/extensions/standard/rzz.py\n@@ -34,7 +34,7 @@\n decomposition.add_qreg(q)\n rule = [\n CnotGate(q[0], q[1]),\n- U1Gate(self.params[0], q[0]),\n+ U1Gate(self.params[0], q[1]),\n CnotGate(q[0], q[1])\n ]\n for inst in rule:\n", "issue": "rzz gate\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: 0.7.2\r\n- **Python version**: 3.6.6\r\n- **Operating system**: Windows 10\r\n\r\n### What is the current behavior?\r\n\r\nrzz gate appears to give incorrect results\r\n\r\n### Steps to reproduce the problem\r\n\r\nrzz gate rule defined in https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/extensions/standard/rzz.py\r\n\r\n```\r\n CnotGate(q[0], q[1]),\r\n U1Gate(self.params[0], q[0]),\r\n CnotGate(q[0], q[1])\r\n```\r\n\r\n### What is the expected behavior?\r\n\r\nI think it should be\r\n```\r\n CnotGate(q[0], q[1]),\r\n U1Gate(self.params[0], q[1]),\r\n CnotGate(q[0], q[1])\r\n```\r\nthe u1 phase should be on the target instead of control\r\n\r\n### Suggested solutions\r\n\r\nmodify rzz gate definition to give the right behavior.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2017, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"\ntwo-qubit ZZ-rotation gate.\n\"\"\"\nfrom qiskit.circuit import CompositeGate\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.circuit.decorators import _op_expand\nfrom qiskit.dagcircuit import DAGCircuit\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass RZZGate(Gate):\n \"\"\"Two-qubit ZZ-rotation gate.\"\"\"\n\n def __init__(self, theta, ctl, tgt, circ=None):\n \"\"\"Create new rzz gate.\"\"\"\n super().__init__(\"rzz\", [theta], [ctl, tgt], circ)\n\n def _define_decompositions(self):\n \"\"\"\n gate rzz(theta) a, b { cx a, b; u1(theta) b; cx a, b; }\n \"\"\"\n decomposition = DAGCircuit()\n q = QuantumRegister(2, \"q\")\n decomposition.add_qreg(q)\n rule = [\n CnotGate(q[0], q[1]),\n U1Gate(self.params[0], q[0]),\n CnotGate(q[0], q[1])\n ]\n for inst in rule:\n decomposition.apply_operation_back(inst)\n self._decompositions = [decomposition]\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n self.params[0] = -self.params[0]\n self._decompositions = None\n return self\n\n def reapply(self, circ):\n \"\"\"Reapply this gate to corresponding qubits in circ.\"\"\"\n self._modifiers(circ.rzz(self.params[0], self.qargs[0], self.qargs[1]))\n\n\n@_op_expand(2, broadcastable=[False, False])\ndef rzz(self, theta, qubit1, qubit2):\n \"\"\"Apply RZZ to circuit.\"\"\"\n self._check_qubit(qubit1)\n self._check_qubit(qubit2)\n self._check_dups([qubit1, qubit2])\n return self._attach(RZZGate(theta, qubit1, qubit2, self))\n\n\n# Add to QuantumCircuit and CompositeGate classes\nQuantumCircuit.rzz = rzz\nCompositeGate.rzz = rzz\n", "path": "qiskit/extensions/standard/rzz.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2017, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the 
root directory of this source tree.\n\n\"\"\"\ntwo-qubit ZZ-rotation gate.\n\"\"\"\nfrom qiskit.circuit import CompositeGate\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.circuit.decorators import _op_expand\nfrom qiskit.dagcircuit import DAGCircuit\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass RZZGate(Gate):\n \"\"\"Two-qubit ZZ-rotation gate.\"\"\"\n\n def __init__(self, theta, ctl, tgt, circ=None):\n \"\"\"Create new rzz gate.\"\"\"\n super().__init__(\"rzz\", [theta], [ctl, tgt], circ)\n\n def _define_decompositions(self):\n \"\"\"\n gate rzz(theta) a, b { cx a, b; u1(theta) b; cx a, b; }\n \"\"\"\n decomposition = DAGCircuit()\n q = QuantumRegister(2, \"q\")\n decomposition.add_qreg(q)\n rule = [\n CnotGate(q[0], q[1]),\n U1Gate(self.params[0], q[1]),\n CnotGate(q[0], q[1])\n ]\n for inst in rule:\n decomposition.apply_operation_back(inst)\n self._decompositions = [decomposition]\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n self.params[0] = -self.params[0]\n self._decompositions = None\n return self\n\n def reapply(self, circ):\n \"\"\"Reapply this gate to corresponding qubits in circ.\"\"\"\n self._modifiers(circ.rzz(self.params[0], self.qargs[0], self.qargs[1]))\n\n\n@_op_expand(2, broadcastable=[False, False])\ndef rzz(self, theta, qubit1, qubit2):\n \"\"\"Apply RZZ to circuit.\"\"\"\n self._check_qubit(qubit1)\n self._check_qubit(qubit2)\n self._check_dups([qubit1, qubit2])\n return self._attach(RZZGate(theta, qubit1, qubit2, self))\n\n\n# Add to QuantumCircuit and CompositeGate classes\nQuantumCircuit.rzz = rzz\nCompositeGate.rzz = rzz\n", "path": "qiskit/extensions/standard/rzz.py"}]} | 1,209 | 131 |
gh_patches_debug_31020 | rasdani/github-patches | git_diff | OpenMined__PySyft-3150 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove ZSTD
**Is your feature request related to a problem? Please describe.**
ZSTD is used for compression in our serde process. However, we don't need extra compression as we move to Protobuf.
ZSTD is also a frequent source of installation problems for PySyft, requiring various hacks to work around.
**Describe the solution you'd like**
Remove ZSTD dependency.
This will require removing the tests and its use in serde.
**Describe alternatives you've considered**
Protobuf covers compression.
**Additional context**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `syft/serde/compression.py`
Content:
```
1 """
2 This file exists to provide one common place for all compression methods used in
3 simplifying and serializing PySyft objects.
4 """
5
6 import lz4
7 from lz4 import ( # noqa: F401
8 frame,
9 ) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'
10 import zstd
11
12 from syft.exceptions import CompressionNotFoundException
13
14 # COMPRESSION SCHEME INT CODES
15 NO_COMPRESSION = 40
16 LZ4 = 41
17 ZSTD = 42
18 scheme_to_bytes = {
19 NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder="big"),
20 LZ4: LZ4.to_bytes(1, byteorder="big"),
21 ZSTD: ZSTD.to_bytes(1, byteorder="big"),
22 }
23
24 ## SECTION: chosen Compression Algorithm
25
26
27 def _apply_compress_scheme(decompressed_input_bin) -> tuple:
28 """
29 Apply the selected compression scheme.
30 By default is used LZ4
31
32 Args:
33 decompressed_input_bin: the binary to be compressed
34 """
35 return apply_lz4_compression(decompressed_input_bin)
36
37
38 def apply_lz4_compression(decompressed_input_bin) -> tuple:
39 """
40 Apply LZ4 compression to the input
41
42 Args:
43 decompressed_input_bin: the binary to be compressed
44
45 Returns:
46 a tuple (compressed_result, LZ4)
47 """
48 return lz4.frame.compress(decompressed_input_bin), LZ4
49
50
51 def apply_zstd_compression(decompressed_input_bin) -> tuple:
52 """
53 Apply ZSTD compression to the input
54
55 Args:
56 decompressed_input_bin: the binary to be compressed
57
58 Returns:
59 a tuple (compressed_result, ZSTD)
60 """
61
62 return zstd.compress(decompressed_input_bin), ZSTD
63
64
65 def apply_no_compression(decompressed_input_bin) -> tuple:
66 """
67 No compression is applied to the input
68
69 Args:
70 decompressed_input_bin: the binary
71
72 Returns:
73 a tuple (the binary, LZ4)
74 """
75
76 return decompressed_input_bin, NO_COMPRESSION
77
78
79 def _compress(decompressed_input_bin: bin) -> bin:
80 """
81 This function compresses a binary using the function _apply_compress_scheme
82 if the input has been already compressed in some step, it will return it as it is
83
84 Args:
85 decompressed_input_bin (bin): binary to be compressed
86
87 Returns:
88 bin: a compressed binary
89
90 """
91 compress_stream, compress_scheme = _apply_compress_scheme(decompressed_input_bin)
92 try:
93 z = scheme_to_bytes[compress_scheme] + compress_stream
94 return z
95 except KeyError:
96 raise CompressionNotFoundException(
97 f"Compression scheme not found for compression code: {str(compress_scheme)}"
98 )
99
100
101 def _decompress(binary: bin) -> bin:
102 """
103 This function decompresses a binary using the scheme defined in the first byte of the input
104
105 Args:
106 binary (bin): a compressed binary
107
108 Returns:
109 bin: decompressed binary
110
111 """
112
113 # check the 1-byte header to check the compression scheme used
114 compress_scheme = binary[0]
115
116 # remove the 1-byte header from the input stream
117 binary = binary[1:]
118 # 1) Decompress or return the original stream
119 if compress_scheme == LZ4:
120 return lz4.frame.decompress(binary)
121 elif compress_scheme == ZSTD:
122 return zstd.decompress(binary)
123 elif compress_scheme == NO_COMPRESSION:
124 return binary
125 else:
126 raise CompressionNotFoundException(
127 f"Compression scheme not found for compression code: {str(compress_scheme)}"
128 )
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/syft/serde/compression.py b/syft/serde/compression.py
--- a/syft/serde/compression.py
+++ b/syft/serde/compression.py
@@ -7,18 +7,15 @@
from lz4 import ( # noqa: F401
frame,
) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'
-import zstd
from syft.exceptions import CompressionNotFoundException
# COMPRESSION SCHEME INT CODES
NO_COMPRESSION = 40
LZ4 = 41
-ZSTD = 42
scheme_to_bytes = {
NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder="big"),
LZ4: LZ4.to_bytes(1, byteorder="big"),
- ZSTD: ZSTD.to_bytes(1, byteorder="big"),
}
## SECTION: chosen Compression Algorithm
@@ -48,20 +45,6 @@
return lz4.frame.compress(decompressed_input_bin), LZ4
-def apply_zstd_compression(decompressed_input_bin) -> tuple:
- """
- Apply ZSTD compression to the input
-
- Args:
- decompressed_input_bin: the binary to be compressed
-
- Returns:
- a tuple (compressed_result, ZSTD)
- """
-
- return zstd.compress(decompressed_input_bin), ZSTD
-
-
def apply_no_compression(decompressed_input_bin) -> tuple:
"""
No compression is applied to the input
@@ -118,8 +101,6 @@
# 1) Decompress or return the original stream
if compress_scheme == LZ4:
return lz4.frame.decompress(binary)
- elif compress_scheme == ZSTD:
- return zstd.decompress(binary)
elif compress_scheme == NO_COMPRESSION:
return binary
else:
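Assuming the patched module above, a minimal round-trip sketch of the remaining compression path (LZ4 stays the default scheme, and `41` is the LZ4 code the module defines):

```python
from syft.serde.compression import _compress, _decompress

payload = b"some serialized bytes"
blob = _compress(payload)      # 1-byte scheme header + lz4 frame
assert blob[0] == 41           # LZ4 code from the module
assert _decompress(blob) == payload
```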
| {"golden_diff": "diff --git a/syft/serde/compression.py b/syft/serde/compression.py\n--- a/syft/serde/compression.py\n+++ b/syft/serde/compression.py\n@@ -7,18 +7,15 @@\n from lz4 import ( # noqa: F401\n frame,\n ) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'\n-import zstd\n \n from syft.exceptions import CompressionNotFoundException\n \n # COMPRESSION SCHEME INT CODES\n NO_COMPRESSION = 40\n LZ4 = 41\n-ZSTD = 42\n scheme_to_bytes = {\n NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder=\"big\"),\n LZ4: LZ4.to_bytes(1, byteorder=\"big\"),\n- ZSTD: ZSTD.to_bytes(1, byteorder=\"big\"),\n }\n \n ## SECTION: chosen Compression Algorithm\n@@ -48,20 +45,6 @@\n return lz4.frame.compress(decompressed_input_bin), LZ4\n \n \n-def apply_zstd_compression(decompressed_input_bin) -> tuple:\n- \"\"\"\n- Apply ZSTD compression to the input\n-\n- Args:\n- decompressed_input_bin: the binary to be compressed\n-\n- Returns:\n- a tuple (compressed_result, ZSTD)\n- \"\"\"\n-\n- return zstd.compress(decompressed_input_bin), ZSTD\n-\n-\n def apply_no_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n No compression is applied to the input\n@@ -118,8 +101,6 @@\n # 1) Decompress or return the original stream\n if compress_scheme == LZ4:\n return lz4.frame.decompress(binary)\n- elif compress_scheme == ZSTD:\n- return zstd.decompress(binary)\n elif compress_scheme == NO_COMPRESSION:\n return binary\n else:\n", "issue": "Remove ZSTD\n**Is your feature request related to a problem? Please describe.**\r\nZSTD is used for compression in our serde process. However we don't need extra compression as we move to Protobuf.\r\nZSTD is usually a source of problems when installing PySyft with different hacks to solve it.\r\n\r\n**Describe the solution you'd like**\r\nRemove ZSTD dependency.\r\nThis will require removing the tests and its use in serde.\r\n\r\n**Describe alternatives you've considered**\r\nProtobuf covers compression.\r\n\r\n**Additional context**\r\n\n", "before_files": [{"content": "\"\"\"\nThis file exists to provide one common place for all compression methods used in\nsimplifying and serializing PySyft objects.\n\"\"\"\n\nimport lz4\nfrom lz4 import ( # noqa: F401\n frame,\n) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'\nimport zstd\n\nfrom syft.exceptions import CompressionNotFoundException\n\n# COMPRESSION SCHEME INT CODES\nNO_COMPRESSION = 40\nLZ4 = 41\nZSTD = 42\nscheme_to_bytes = {\n NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder=\"big\"),\n LZ4: LZ4.to_bytes(1, byteorder=\"big\"),\n ZSTD: ZSTD.to_bytes(1, byteorder=\"big\"),\n}\n\n## SECTION: chosen Compression Algorithm\n\n\ndef _apply_compress_scheme(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply the selected compression scheme.\n By default is used LZ4\n\n Args:\n decompressed_input_bin: the binary to be compressed\n \"\"\"\n return apply_lz4_compression(decompressed_input_bin)\n\n\ndef apply_lz4_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply LZ4 compression to the input\n\n Args:\n decompressed_input_bin: the binary to be compressed\n\n Returns:\n a tuple (compressed_result, LZ4)\n \"\"\"\n return lz4.frame.compress(decompressed_input_bin), LZ4\n\n\ndef apply_zstd_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply ZSTD compression to the input\n\n Args:\n decompressed_input_bin: the binary to be compressed\n\n Returns:\n a tuple (compressed_result, ZSTD)\n \"\"\"\n\n return zstd.compress(decompressed_input_bin), ZSTD\n\n\ndef 
apply_no_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n No compression is applied to the input\n\n Args:\n decompressed_input_bin: the binary\n\n Returns:\n a tuple (the binary, LZ4)\n \"\"\"\n\n return decompressed_input_bin, NO_COMPRESSION\n\n\ndef _compress(decompressed_input_bin: bin) -> bin:\n \"\"\"\n This function compresses a binary using the function _apply_compress_scheme\n if the input has been already compressed in some step, it will return it as it is\n\n Args:\n decompressed_input_bin (bin): binary to be compressed\n\n Returns:\n bin: a compressed binary\n\n \"\"\"\n compress_stream, compress_scheme = _apply_compress_scheme(decompressed_input_bin)\n try:\n z = scheme_to_bytes[compress_scheme] + compress_stream\n return z\n except KeyError:\n raise CompressionNotFoundException(\n f\"Compression scheme not found for compression code: {str(compress_scheme)}\"\n )\n\n\ndef _decompress(binary: bin) -> bin:\n \"\"\"\n This function decompresses a binary using the scheme defined in the first byte of the input\n\n Args:\n binary (bin): a compressed binary\n\n Returns:\n bin: decompressed binary\n\n \"\"\"\n\n # check the 1-byte header to check the compression scheme used\n compress_scheme = binary[0]\n\n # remove the 1-byte header from the input stream\n binary = binary[1:]\n # 1) Decompress or return the original stream\n if compress_scheme == LZ4:\n return lz4.frame.decompress(binary)\n elif compress_scheme == ZSTD:\n return zstd.decompress(binary)\n elif compress_scheme == NO_COMPRESSION:\n return binary\n else:\n raise CompressionNotFoundException(\n f\"Compression scheme not found for compression code: {str(compress_scheme)}\"\n )\n", "path": "syft/serde/compression.py"}], "after_files": [{"content": "\"\"\"\nThis file exists to provide one common place for all compression methods used in\nsimplifying and serializing PySyft objects.\n\"\"\"\n\nimport lz4\nfrom lz4 import ( # noqa: F401\n frame,\n) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'\n\nfrom syft.exceptions import CompressionNotFoundException\n\n# COMPRESSION SCHEME INT CODES\nNO_COMPRESSION = 40\nLZ4 = 41\nscheme_to_bytes = {\n NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder=\"big\"),\n LZ4: LZ4.to_bytes(1, byteorder=\"big\"),\n}\n\n## SECTION: chosen Compression Algorithm\n\n\ndef _apply_compress_scheme(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply the selected compression scheme.\n By default is used LZ4\n\n Args:\n decompressed_input_bin: the binary to be compressed\n \"\"\"\n return apply_lz4_compression(decompressed_input_bin)\n\n\ndef apply_lz4_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply LZ4 compression to the input\n\n Args:\n decompressed_input_bin: the binary to be compressed\n\n Returns:\n a tuple (compressed_result, LZ4)\n \"\"\"\n return lz4.frame.compress(decompressed_input_bin), LZ4\n\n\ndef apply_no_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n No compression is applied to the input\n\n Args:\n decompressed_input_bin: the binary\n\n Returns:\n a tuple (the binary, LZ4)\n \"\"\"\n\n return decompressed_input_bin, NO_COMPRESSION\n\n\ndef _compress(decompressed_input_bin: bin) -> bin:\n \"\"\"\n This function compresses a binary using the function _apply_compress_scheme\n if the input has been already compressed in some step, it will return it as it is\n\n Args:\n decompressed_input_bin (bin): binary to be compressed\n\n Returns:\n bin: a compressed binary\n\n \"\"\"\n compress_stream, compress_scheme = 
_apply_compress_scheme(decompressed_input_bin)\n try:\n z = scheme_to_bytes[compress_scheme] + compress_stream\n return z\n except KeyError:\n raise CompressionNotFoundException(\n f\"Compression scheme not found for compression code: {str(compress_scheme)}\"\n )\n\n\ndef _decompress(binary: bin) -> bin:\n \"\"\"\n This function decompresses a binary using the scheme defined in the first byte of the input\n\n Args:\n binary (bin): a compressed binary\n\n Returns:\n bin: decompressed binary\n\n \"\"\"\n\n # check the 1-byte header to check the compression scheme used\n compress_scheme = binary[0]\n\n # remove the 1-byte header from the input stream\n binary = binary[1:]\n # 1) Decompress or return the original stream\n if compress_scheme == LZ4:\n return lz4.frame.decompress(binary)\n elif compress_scheme == NO_COMPRESSION:\n return binary\n else:\n raise CompressionNotFoundException(\n f\"Compression scheme not found for compression code: {str(compress_scheme)}\"\n )\n", "path": "syft/serde/compression.py"}]} | 1,444 | 414 |
gh_patches_debug_10830 | rasdani/github-patches | git_diff | Mailu__Mailu-2177 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Manage user authentication and permissions
Currently no authentication is implemented. Multiple issues will have to be tackled:
- complete permission scheme or simple admin role plus admins per domain?
- how to store user passwords (shared format between Flask-admin and dovecot)?
- how should the initial user be created?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import logging as log
5 import sys
6
7 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "INFO"))
8
9 os.system("flask mailu advertise")
10 os.system("flask db upgrade")
11
12 account = os.environ.get("INITIAL_ADMIN_ACCOUNT")
13 domain = os.environ.get("INITIAL_ADMIN_DOMAIN")
14 password = os.environ.get("INITIAL_ADMIN_PW")
15
16 if account is not None and domain is not None and password is not None:
17 mode = os.environ.get("INITIAL_ADMIN_MODE", default="ifmissing")
18 log.info("Creating initial admin accout %s@%s with mode %s",account,domain,mode)
19 os.system("flask mailu admin %s %s '%s' --mode %s" % (account, domain, password, mode))
20
21 def test_DNS():
22 import dns.resolver
23 import dns.exception
24 import dns.flags
25 import dns.rdtypes
26 import dns.rdatatype
27 import dns.rdataclass
28 import time
29 # DNS stub configured to do DNSSEC enabled queries
30 resolver = dns.resolver.Resolver()
31 resolver.use_edns(0, 0, 1232)
32 resolver.flags = dns.flags.AD | dns.flags.RD
33 nameservers = resolver.nameservers
34 for ns in nameservers:
35 resolver.nameservers=[ns]
36 while True:
37 try:
38 result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
39 except Exception as e:
40 log.critical("Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.", ns, e);
41 else:
42 if result.response.flags & dns.flags.AD:
43 break
44 log.critical("Your DNS resolver at %s isn't doing DNSSEC validation; Please use another resolver or enable unbound via https://setup.mailu.io.", ns)
45 time.sleep(5)
46
47 test_DNS()
48
49 start_command="".join([
50 "gunicorn --threads ", str(os.cpu_count()),
51 " -b :80 ",
52 "--access-logfile - " if (log.root.level<=log.INFO) else "",
53 "--error-logfile - ",
54 "--preload ",
55 "'mailu:create_app()'"])
56
57 os.system(start_command)
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/start.py b/core/admin/start.py
--- a/core/admin/start.py
+++ b/core/admin/start.py
@@ -35,7 +35,7 @@
resolver.nameservers=[ns]
while True:
try:
- result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
+ result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
except Exception as e:
log.critical("Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.", ns, e);
else:
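The fix swaps dnspython's deprecated `query()` for `resolve()` (the rename landed in dnspython 2.0). A minimal standalone sketch of the same DNSSEC-aware lookup, built only from calls that appear in the quoted file:

```python
import dns.flags
import dns.rdataclass
import dns.rdatatype
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.use_edns(0, 0, 1232)
resolver.flags = dns.flags.AD | dns.flags.RD
answer = resolver.resolve("example.org", dns.rdatatype.A,
                          dns.rdataclass.IN, lifetime=10)
# AD flag set in the response means the resolver validated DNSSEC
dnssec_ok = bool(answer.response.flags & dns.flags.AD)
```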
| {"golden_diff": "diff --git a/core/admin/start.py b/core/admin/start.py\n--- a/core/admin/start.py\n+++ b/core/admin/start.py\n@@ -35,7 +35,7 @@\n resolver.nameservers=[ns]\n while True:\n try:\n- result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n+ result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.\", ns, e);\n else:\n", "issue": "Manage user authentication and permissions\nCurrently no authentication is implemented. Multiple issues will have to be tackled:\n- complete permission scheme or simple admin role plus admins per domain?\n- how to store user passwords (shared format between Flask-admin and dovecot)?\n- how should the initial use be created?\n\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n log.info(\"Creating initial admin accout %s@%s with mode %s\",account,domain,mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\ndef test_DNS():\n import dns.resolver\n import dns.exception\n import dns.flags\n import dns.rdtypes\n import dns.rdatatype\n import dns.rdataclass\n import time\n # DNS stub configured to do DNSSEC enabled queries\n resolver = dns.resolver.Resolver()\n resolver.use_edns(0, 0, 1232)\n resolver.flags = dns.flags.AD | dns.flags.RD\n nameservers = resolver.nameservers\n for ns in nameservers:\n resolver.nameservers=[ns]\n while True:\n try:\n result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). 
Please use another resolver or enable unbound via https://setup.mailu.io.\", ns, e);\n else:\n if result.response.flags & dns.flags.AD:\n break\n log.critical(\"Your DNS resolver at %s isn't doing DNSSEC validation; Please use another resolver or enable unbound via https://setup.mailu.io.\", ns)\n time.sleep(5)\n\ntest_DNS()\n\nstart_command=\"\".join([\n \"gunicorn --threads \", str(os.cpu_count()),\n \" -b :80 \",\n \"--access-logfile - \" if (log.root.level<=log.INFO) else \"\",\n \"--error-logfile - \",\n \"--preload \",\n \"'mailu:create_app()'\"])\n\nos.system(start_command)\n", "path": "core/admin/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n log.info(\"Creating initial admin accout %s@%s with mode %s\",account,domain,mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\ndef test_DNS():\n import dns.resolver\n import dns.exception\n import dns.flags\n import dns.rdtypes\n import dns.rdatatype\n import dns.rdataclass\n import time\n # DNS stub configured to do DNSSEC enabled queries\n resolver = dns.resolver.Resolver()\n resolver.use_edns(0, 0, 1232)\n resolver.flags = dns.flags.AD | dns.flags.RD\n nameservers = resolver.nameservers\n for ns in nameservers:\n resolver.nameservers=[ns]\n while True:\n try:\n result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.\", ns, e);\n else:\n if result.response.flags & dns.flags.AD:\n break\n log.critical(\"Your DNS resolver at %s isn't doing DNSSEC validation; Please use another resolver or enable unbound via https://setup.mailu.io.\", ns)\n time.sleep(5)\n\ntest_DNS()\n\nstart_command=\"\".join([\n \"gunicorn --threads \", str(os.cpu_count()),\n \" -b :80 \",\n \"--access-logfile - \" if (log.root.level<=log.INFO) else \"\",\n \"--error-logfile - \",\n \"--preload \",\n \"'mailu:create_app()'\"])\n\nos.system(start_command)\n", "path": "core/admin/start.py"}]} | 930 | 152 |
gh_patches_debug_20282 | rasdani/github-patches | git_diff | PaddlePaddle__models-449 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need to set the version of CTC decoders formally
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deep_speech_2/decoders/swig/setup.py`
Content:
```
1 """Script to build and install decoder package."""
2 from __future__ import absolute_import
3 from __future__ import division
4 from __future__ import print_function
5
6 from setuptools import setup, Extension, distutils
7 import glob
8 import platform
9 import os, sys
10 import multiprocessing.pool
11 import argparse
12
13 parser = argparse.ArgumentParser(description=__doc__)
14 parser.add_argument(
15 "--num_processes",
16 default=1,
17 type=int,
18 help="Number of cpu processes to build package. (default: %(default)d)")
19 args = parser.parse_known_args()
20
21 # reconstruct sys.argv to pass to setup below
22 sys.argv = [sys.argv[0]] + args[1]
23
24
25 # monkey-patch for parallel compilation
26 # See: https://stackoverflow.com/a/13176803
27 def parallelCCompile(self,
28 sources,
29 output_dir=None,
30 macros=None,
31 include_dirs=None,
32 debug=0,
33 extra_preargs=None,
34 extra_postargs=None,
35 depends=None):
36 # those lines are copied from distutils.ccompiler.CCompiler directly
37 macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
38 output_dir, macros, include_dirs, sources, depends, extra_postargs)
39 cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
40
41 # parallel code
42 def _single_compile(obj):
43 try:
44 src, ext = build[obj]
45 except KeyError:
46 return
47 self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
48
49 # convert to list, imap is evaluated on-demand
50 thread_pool = multiprocessing.pool.ThreadPool(args[0].num_processes)
51 list(thread_pool.imap(_single_compile, objects))
52 return objects
53
54
55 def compile_test(header, library):
56 dummy_path = os.path.join(os.path.dirname(__file__), "dummy")
57 command = "bash -c \"g++ -include " + header \
58 + " -l" + library + " -x c++ - <<<'int main() {}' -o " \
59 + dummy_path + " >/dev/null 2>/dev/null && rm " \
60 + dummy_path + " 2>/dev/null\""
61 return os.system(command) == 0
62
63
64 # hack compile to support parallel compiling
65 distutils.ccompiler.CCompiler.compile = parallelCCompile
66
67 FILES = glob.glob('kenlm/util/*.cc') \
68 + glob.glob('kenlm/lm/*.cc') \
69 + glob.glob('kenlm/util/double-conversion/*.cc')
70
71 FILES += glob.glob('openfst-1.6.3/src/lib/*.cc')
72
73 # FILES + glob.glob('glog/src/*.cc')
74 FILES = [
75 fn for fn in FILES
76 if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(
77 'unittest.cc'))
78 ]
79
80 LIBS = ['stdc++']
81 if platform.system() != 'Darwin':
82 LIBS.append('rt')
83
84 ARGS = ['-O3', '-DNDEBUG', '-DKENLM_MAX_ORDER=6', '-std=c++11']
85
86 if compile_test('zlib.h', 'z'):
87 ARGS.append('-DHAVE_ZLIB')
88 LIBS.append('z')
89
90 if compile_test('bzlib.h', 'bz2'):
91 ARGS.append('-DHAVE_BZLIB')
92 LIBS.append('bz2')
93
94 if compile_test('lzma.h', 'lzma'):
95 ARGS.append('-DHAVE_XZLIB')
96 LIBS.append('lzma')
97
98 os.system('swig -python -c++ ./decoders.i')
99
100 decoders_module = [
101 Extension(
102 name='_swig_decoders',
103 sources=FILES + glob.glob('*.cxx') + glob.glob('*.cpp'),
104 language='c++',
105 include_dirs=[
106 '.',
107 'kenlm',
108 'openfst-1.6.3/src/include',
109 'ThreadPool',
110 #'glog/src'
111 ],
112 libraries=LIBS,
113 extra_compile_args=ARGS)
114 ]
115
116 setup(
117 name='swig_decoders',
118 version='0.1',
119 description="""CTC decoders""",
120 ext_modules=decoders_module,
121 py_modules=['swig_decoders'], )
122
```
--- END FILES ---
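For context, the `parallelCCompile` monkey-patch in the listing above fans per-file compiles out over a thread pool. A standalone sketch of the same pattern (the `compile_one` callable is hypothetical, not from the file):

```python
import multiprocessing.pool

def compile_in_parallel(objects, compile_one, num_processes=4):
    # imap is lazy; wrapping in list() forces every compile job to run
    with multiprocessing.pool.ThreadPool(num_processes) as pool:
        list(pool.imap(compile_one, objects))
    return objects
```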
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deep_speech_2/decoders/swig/setup.py b/deep_speech_2/decoders/swig/setup.py
--- a/deep_speech_2/decoders/swig/setup.py
+++ b/deep_speech_2/decoders/swig/setup.py
@@ -70,7 +70,6 @@
FILES += glob.glob('openfst-1.6.3/src/lib/*.cc')
-# FILES + glob.glob('glog/src/*.cc')
FILES = [
fn for fn in FILES
if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(
@@ -107,7 +106,6 @@
'kenlm',
'openfst-1.6.3/src/include',
'ThreadPool',
- #'glog/src'
],
libraries=LIBS,
extra_compile_args=ARGS)
@@ -115,7 +113,7 @@
setup(
name='swig_decoders',
- version='0.1',
+ version='1.0',
description="""CTC decoders""",
ext_modules=decoders_module,
py_modules=['swig_decoders'], )
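Assuming the package is rebuilt and installed after this patch, the bumped version should be visible through standard distribution metadata:

```python
import importlib.metadata  # Python 3.8+

assert importlib.metadata.version("swig_decoders") == "1.0"
```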
| {"golden_diff": "diff --git a/deep_speech_2/decoders/swig/setup.py b/deep_speech_2/decoders/swig/setup.py\n--- a/deep_speech_2/decoders/swig/setup.py\n+++ b/deep_speech_2/decoders/swig/setup.py\n@@ -70,7 +70,6 @@\n \n FILES += glob.glob('openfst-1.6.3/src/lib/*.cc')\n \n-# FILES + glob.glob('glog/src/*.cc')\n FILES = [\n fn for fn in FILES\n if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(\n@@ -107,7 +106,6 @@\n 'kenlm',\n 'openfst-1.6.3/src/include',\n 'ThreadPool',\n- #'glog/src'\n ],\n libraries=LIBS,\n extra_compile_args=ARGS)\n@@ -115,7 +113,7 @@\n \n setup(\n name='swig_decoders',\n- version='0.1',\n+ version='1.0',\n description=\"\"\"CTC decoders\"\"\",\n ext_modules=decoders_module,\n py_modules=['swig_decoders'], )\n", "issue": "Need to set the version of CTC decoders formally\n\n", "before_files": [{"content": "\"\"\"Script to build and install decoder package.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom setuptools import setup, Extension, distutils\nimport glob\nimport platform\nimport os, sys\nimport multiprocessing.pool\nimport argparse\n\nparser = argparse.ArgumentParser(description=__doc__)\nparser.add_argument(\n \"--num_processes\",\n default=1,\n type=int,\n help=\"Number of cpu processes to build package. (default: %(default)d)\")\nargs = parser.parse_known_args()\n\n# reconstruct sys.argv to pass to setup below\nsys.argv = [sys.argv[0]] + args[1]\n\n\n# monkey-patch for parallel compilation\n# See: https://stackoverflow.com/a/13176803\ndef parallelCCompile(self,\n sources,\n output_dir=None,\n macros=None,\n include_dirs=None,\n debug=0,\n extra_preargs=None,\n extra_postargs=None,\n depends=None):\n # those lines are copied from distutils.ccompiler.CCompiler directly\n macros, objects, extra_postargs, pp_opts, build = self._setup_compile(\n output_dir, macros, include_dirs, sources, depends, extra_postargs)\n cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)\n\n # parallel code\n def _single_compile(obj):\n try:\n src, ext = build[obj]\n except KeyError:\n return\n self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)\n\n # convert to list, imap is evaluated on-demand\n thread_pool = multiprocessing.pool.ThreadPool(args[0].num_processes)\n list(thread_pool.imap(_single_compile, objects))\n return objects\n\n\ndef compile_test(header, library):\n dummy_path = os.path.join(os.path.dirname(__file__), \"dummy\")\n command = \"bash -c \\\"g++ -include \" + header \\\n + \" -l\" + library + \" -x c++ - <<<'int main() {}' -o \" \\\n + dummy_path + \" >/dev/null 2>/dev/null && rm \" \\\n + dummy_path + \" 2>/dev/null\\\"\"\n return os.system(command) == 0\n\n\n# hack compile to support parallel compiling\ndistutils.ccompiler.CCompiler.compile = parallelCCompile\n\nFILES = glob.glob('kenlm/util/*.cc') \\\n + glob.glob('kenlm/lm/*.cc') \\\n + glob.glob('kenlm/util/double-conversion/*.cc')\n\nFILES += glob.glob('openfst-1.6.3/src/lib/*.cc')\n\n# FILES + glob.glob('glog/src/*.cc')\nFILES = [\n fn for fn in FILES\n if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(\n 'unittest.cc'))\n]\n\nLIBS = ['stdc++']\nif platform.system() != 'Darwin':\n LIBS.append('rt')\n\nARGS = ['-O3', '-DNDEBUG', '-DKENLM_MAX_ORDER=6', '-std=c++11']\n\nif compile_test('zlib.h', 'z'):\n ARGS.append('-DHAVE_ZLIB')\n LIBS.append('z')\n\nif compile_test('bzlib.h', 'bz2'):\n ARGS.append('-DHAVE_BZLIB')\n LIBS.append('bz2')\n\nif compile_test('lzma.h', 'lzma'):\n 
ARGS.append('-DHAVE_XZLIB')\n LIBS.append('lzma')\n\nos.system('swig -python -c++ ./decoders.i')\n\ndecoders_module = [\n Extension(\n name='_swig_decoders',\n sources=FILES + glob.glob('*.cxx') + glob.glob('*.cpp'),\n language='c++',\n include_dirs=[\n '.',\n 'kenlm',\n 'openfst-1.6.3/src/include',\n 'ThreadPool',\n #'glog/src'\n ],\n libraries=LIBS,\n extra_compile_args=ARGS)\n]\n\nsetup(\n name='swig_decoders',\n version='0.1',\n description=\"\"\"CTC decoders\"\"\",\n ext_modules=decoders_module,\n py_modules=['swig_decoders'], )\n", "path": "deep_speech_2/decoders/swig/setup.py"}], "after_files": [{"content": "\"\"\"Script to build and install decoder package.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom setuptools import setup, Extension, distutils\nimport glob\nimport platform\nimport os, sys\nimport multiprocessing.pool\nimport argparse\n\nparser = argparse.ArgumentParser(description=__doc__)\nparser.add_argument(\n \"--num_processes\",\n default=1,\n type=int,\n help=\"Number of cpu processes to build package. (default: %(default)d)\")\nargs = parser.parse_known_args()\n\n# reconstruct sys.argv to pass to setup below\nsys.argv = [sys.argv[0]] + args[1]\n\n\n# monkey-patch for parallel compilation\n# See: https://stackoverflow.com/a/13176803\ndef parallelCCompile(self,\n sources,\n output_dir=None,\n macros=None,\n include_dirs=None,\n debug=0,\n extra_preargs=None,\n extra_postargs=None,\n depends=None):\n # those lines are copied from distutils.ccompiler.CCompiler directly\n macros, objects, extra_postargs, pp_opts, build = self._setup_compile(\n output_dir, macros, include_dirs, sources, depends, extra_postargs)\n cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)\n\n # parallel code\n def _single_compile(obj):\n try:\n src, ext = build[obj]\n except KeyError:\n return\n self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)\n\n # convert to list, imap is evaluated on-demand\n thread_pool = multiprocessing.pool.ThreadPool(args[0].num_processes)\n list(thread_pool.imap(_single_compile, objects))\n return objects\n\n\ndef compile_test(header, library):\n dummy_path = os.path.join(os.path.dirname(__file__), \"dummy\")\n command = \"bash -c \\\"g++ -include \" + header \\\n + \" -l\" + library + \" -x c++ - <<<'int main() {}' -o \" \\\n + dummy_path + \" >/dev/null 2>/dev/null && rm \" \\\n + dummy_path + \" 2>/dev/null\\\"\"\n return os.system(command) == 0\n\n\n# hack compile to support parallel compiling\ndistutils.ccompiler.CCompiler.compile = parallelCCompile\n\nFILES = glob.glob('kenlm/util/*.cc') \\\n + glob.glob('kenlm/lm/*.cc') \\\n + glob.glob('kenlm/util/double-conversion/*.cc')\n\nFILES += glob.glob('openfst-1.6.3/src/lib/*.cc')\n\nFILES = [\n fn for fn in FILES\n if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(\n 'unittest.cc'))\n]\n\nLIBS = ['stdc++']\nif platform.system() != 'Darwin':\n LIBS.append('rt')\n\nARGS = ['-O3', '-DNDEBUG', '-DKENLM_MAX_ORDER=6', '-std=c++11']\n\nif compile_test('zlib.h', 'z'):\n ARGS.append('-DHAVE_ZLIB')\n LIBS.append('z')\n\nif compile_test('bzlib.h', 'bz2'):\n ARGS.append('-DHAVE_BZLIB')\n LIBS.append('bz2')\n\nif compile_test('lzma.h', 'lzma'):\n ARGS.append('-DHAVE_XZLIB')\n LIBS.append('lzma')\n\nos.system('swig -python -c++ ./decoders.i')\n\ndecoders_module = [\n Extension(\n name='_swig_decoders',\n sources=FILES + glob.glob('*.cxx') + glob.glob('*.cpp'),\n language='c++',\n include_dirs=[\n '.',\n 'kenlm',\n 
'openfst-1.6.3/src/include',\n 'ThreadPool',\n ],\n libraries=LIBS,\n extra_compile_args=ARGS)\n]\n\nsetup(\n name='swig_decoders',\n version='1.0',\n description=\"\"\"CTC decoders\"\"\",\n ext_modules=decoders_module,\n py_modules=['swig_decoders'], )\n", "path": "deep_speech_2/decoders/swig/setup.py"}]} | 1,444 | 265 |
gh_patches_debug_23631 | rasdani/github-patches | git_diff | e-valuation__EvaP-762 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test management commands
Because in three years, run_tasks will silently fail on the production system and nobody will notice.
- [x] **run_tasks** - shouldn't be too hard and is rather important
- [x] **anonymize** - might be a bit of work to cover it properly, but should be straightforward.
- [x] **refresh_results_cache** - should be easy
- [x] **dump_testdata** - don't know how not to overwrite the file during testing, but should be possible
the other commands are already tested or rather unsuitable for testing
- [x] **merge_users** - already has a test (#703) and is shown to be pretty broken.
- [x] **run** - don't know how to test this and there isn't really anything that could break. still, somehow running it to check that it doesn't crash right away on e.g. imports would be cool
- [x] **reload_testdata** - don't know whether it's possible at all to test that, i mean it drops the whole database...
- [ ] **import_ad** - we never used it and i don't know whether it's feasible to mock ldap
use `self.stdout.write` instead of `print` and `call_command("command_name", stdout=StringIO())` to avoid console output during tests. don't know what to do about calls to `input`.
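
A minimal sketch of that pattern, using command names from the list above and `unittest.mock` for the `input` problem (assuming these commands take no required arguments):

```python
from io import StringIO
from unittest import mock

from django.core.management import call_command

out = StringIO()
call_command("refresh_results_cache", stdout=out)  # console stays quiet
output = out.getvalue()                            # assert on this instead

# for commands that prompt the user, patch input() around the call
with mock.patch("builtins.input", return_value="yes"):
    call_command("anonymize", stdout=StringIO())
```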
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/management/commands/import_ad.py`
Content:
```
1 import getpass
2 import ldap
3 import sys
4
5 from django.core.management.base import BaseCommand
6
7 from evap.evaluation.models import UserProfile
8
9
10 class Command(BaseCommand):
11 args = '<ldap server> <username>'
12 help = 'Imports user data from Active Directory. The username should be specified with realm.'
13
14 def handle(self, *args, **options):
15 try:
16 # connect
17 l = ldap.initialize(args[0])
18
19 # bind
20 l.bind_s(args[1], getpass.getpass("AD Password: "))
21
22 # find all users
23 result = l.search_s("OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de", ldap.SCOPE_SUBTREE, filterstr="(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))")
24 for _, attrs in result:
25 try:
26 user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])
27 user.first_name = attrs['givenName'][0]
28 user.last_name = attrs['sn'][0]
29 user.email = attrs['mail'][0]
30 user.save()
31
32 print("Successfully updated: '{0}'".format(user.username))
33 except UserProfile.DoesNotExist:
34 pass
35 except Exception as e:
36 print(e)
37
38 l.unbind_s()
39
40 except KeyboardInterrupt:
41 sys.stderr.write("\nOperation cancelled.\n")
42 sys.exit(1)
43
```
--- END FILES ---
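On the "is it feasible to mock ldap" question from the issue: a hedged sketch of one way a test could stub the module, assuming the command accepts the two positional arguments shown. All names here are illustrative, and note that python-ldap usually returns bytes values while the quoted code treats them as strings:

```python
from io import StringIO
from unittest import mock

from django.core.management import call_command

fake_conn = mock.MagicMock()
fake_conn.search_s.return_value = [
    ("cn=jdoe", {"sAMAccountName": ["jdoe"], "givenName": ["John"],
                 "sn": ["Doe"], "mail": ["jdoe@example.org"]}),
]
with mock.patch("ldap.initialize", return_value=fake_conn), \
     mock.patch("getpass.getpass", return_value="secret"):
    call_command("import_ad", "ldap://ad.example.org", "user@REALM",
                 stdout=StringIO())
fake_conn.bind_s.assert_called_once()
fake_conn.unbind_s.assert_called_once()
```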
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/evaluation/management/commands/import_ad.py b/evap/evaluation/management/commands/import_ad.py
deleted file mode 100644
--- a/evap/evaluation/management/commands/import_ad.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import getpass
-import ldap
-import sys
-
-from django.core.management.base import BaseCommand
-
-from evap.evaluation.models import UserProfile
-
-
-class Command(BaseCommand):
- args = '<ldap server> <username>'
- help = 'Imports user data from Active Directory. The username should be specified with realm.'
-
- def handle(self, *args, **options):
- try:
- # connect
- l = ldap.initialize(args[0])
-
- # bind
- l.bind_s(args[1], getpass.getpass("AD Password: "))
-
- # find all users
- result = l.search_s("OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de", ldap.SCOPE_SUBTREE, filterstr="(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))")
- for _, attrs in result:
- try:
- user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])
- user.first_name = attrs['givenName'][0]
- user.last_name = attrs['sn'][0]
- user.email = attrs['mail'][0]
- user.save()
-
- print("Successfully updated: '{0}'".format(user.username))
- except UserProfile.DoesNotExist:
- pass
- except Exception as e:
- print(e)
-
- l.unbind_s()
-
- except KeyboardInterrupt:
- sys.stderr.write("\nOperation cancelled.\n")
- sys.exit(1)
| {"golden_diff": "diff --git a/evap/evaluation/management/commands/import_ad.py b/evap/evaluation/management/commands/import_ad.py\ndeleted file mode 100644\n--- a/evap/evaluation/management/commands/import_ad.py\n+++ /dev/null\n@@ -1,42 +0,0 @@\n-import getpass\n-import ldap\n-import sys\n-\n-from django.core.management.base import BaseCommand\n-\n-from evap.evaluation.models import UserProfile\n-\n-\n-class Command(BaseCommand):\n- args = '<ldap server> <username>'\n- help = 'Imports user data from Active Directory. The username should be specified with realm.'\n-\n- def handle(self, *args, **options):\n- try:\n- # connect\n- l = ldap.initialize(args[0])\n-\n- # bind\n- l.bind_s(args[1], getpass.getpass(\"AD Password: \"))\n-\n- # find all users\n- result = l.search_s(\"OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de\", ldap.SCOPE_SUBTREE, filterstr=\"(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))\")\n- for _, attrs in result:\n- try:\n- user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])\n- user.first_name = attrs['givenName'][0]\n- user.last_name = attrs['sn'][0]\n- user.email = attrs['mail'][0]\n- user.save()\n-\n- print(\"Successfully updated: '{0}'\".format(user.username))\n- except UserProfile.DoesNotExist:\n- pass\n- except Exception as e:\n- print(e)\n-\n- l.unbind_s()\n-\n- except KeyboardInterrupt:\n- sys.stderr.write(\"\\nOperation cancelled.\\n\")\n- sys.exit(1)\n", "issue": "Test management commands\nBecause in three years, run_tasks will silently fail on the production system and nobody will notice.\n- [x] **run_tasks** - shouldn't be too hard and is rather important\n- [x] **anonymize** - might be a bit of work to cover it properly, but should be straightforward.\n- [x] **refresh_results_cache** - should be easy\n- [x] **dump_testdata** - don't know how not to overwrite the file during testing, but should be possible\n\nthe other commands are already tested or rather unsuitable for testing\n- [x] **merge_users** - already has a test (#703) and is shown to be pretty broken.\n- [x] **run** - don't know how to test this and there isn't really anything that could break. still, somehow running it to check that it doesn't crash right away on e.g. imports would be cool\n- [x] **reload_testdata** - don't know whether it's possible at all to test that, i mean it drops the whole database...\n- [ ] **import_ad** - we never used it and i don't know whether it's feasible to mock ldap\n\nuse `self.stdout.write` instead of `print` and `call_command(\"command_name\", stdout=StringIO())` to avoid console output during tests. don't know what to do about calls to `input`.\n\n", "before_files": [{"content": "import getpass\nimport ldap\nimport sys\n\nfrom django.core.management.base import BaseCommand\n\nfrom evap.evaluation.models import UserProfile\n\n\nclass Command(BaseCommand):\n args = '<ldap server> <username>'\n help = 'Imports user data from Active Directory. 
The username should be specified with realm.'\n\n def handle(self, *args, **options):\n try:\n # connect\n l = ldap.initialize(args[0])\n\n # bind\n l.bind_s(args[1], getpass.getpass(\"AD Password: \"))\n\n # find all users\n result = l.search_s(\"OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de\", ldap.SCOPE_SUBTREE, filterstr=\"(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))\")\n for _, attrs in result:\n try:\n user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])\n user.first_name = attrs['givenName'][0]\n user.last_name = attrs['sn'][0]\n user.email = attrs['mail'][0]\n user.save()\n\n print(\"Successfully updated: '{0}'\".format(user.username))\n except UserProfile.DoesNotExist:\n pass\n except Exception as e:\n print(e)\n\n l.unbind_s()\n\n except KeyboardInterrupt:\n sys.stderr.write(\"\\nOperation cancelled.\\n\")\n sys.exit(1)\n", "path": "evap/evaluation/management/commands/import_ad.py"}], "after_files": [{"content": null, "path": "evap/evaluation/management/commands/import_ad.py"}]} | 951 | 410 |
gh_patches_debug_19945 | rasdani/github-patches | git_diff | onnx__onnx-5736 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refine docs for check_model
Current version:
> Check the consistency of a model. An exception is raised if the test fails.
It would be good to document the kinds of checks performed and the type of exception raised, so users know what to catch, and to clarify that it also runs shape inference when strict is True. (Right now it says `if True, the function checks shapes can be inferred`.)
Should we default `strict` to `True`? @jcwchen
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onnx/checker.py`
Content:
```
1 # Copyright (c) ONNX Project Contributors
2 #
3 # SPDX-License-Identifier: Apache-2.0
4 """Graph utilities for checking whether an ONNX proto message is legal."""
5
6 from __future__ import annotations
7
8 __all__ = [
9 "check_attribute",
10 "check_function",
11 "check_graph",
12 "check_model",
13 "check_node",
14 "check_sparse_tensor",
15 "check_tensor",
16 "check_value_info",
17 "DEFAULT_CONTEXT",
18 "ValidationError",
19 "C",
20 "MAXIMUM_PROTOBUF",
21 ]
22
23 import os
24 import sys
25 from typing import Any, Callable, TypeVar
26
27 from google.protobuf.message import Message
28
29 import onnx.defs
30 import onnx.onnx_cpp2py_export.checker as C # noqa: N812
31 import onnx.shape_inference
32 from onnx import (
33 IR_VERSION,
34 AttributeProto,
35 FunctionProto,
36 GraphProto,
37 ModelProto,
38 NodeProto,
39 SparseTensorProto,
40 TensorProto,
41 ValueInfoProto,
42 helper,
43 )
44
45 # Limitation of single protobuf file is 2GB
46 MAXIMUM_PROTOBUF = 2000000000
47
48 # TODO: This thing where we reserialize the protobuf back into the
49 # string, only to deserialize it at the call site, is really goofy.
50 # Stop doing that.
51
52
53 # NB: Please don't edit this context!
54 DEFAULT_CONTEXT = C.CheckerContext()
55 DEFAULT_CONTEXT.ir_version = IR_VERSION
56 # TODO: Maybe ONNX-ML should also be defaulted?
57 DEFAULT_CONTEXT.opset_imports = {"": onnx.defs.onnx_opset_version()}
58
59
60 FuncType = TypeVar("FuncType", bound=Callable[..., Any])
61
62
63 def _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:
64 if not isinstance(proto, proto_type):
65 raise TypeError(
66 f"The proto message needs to be of type '{proto_type.__name__}'"
67 )
68
69
70 def check_value_info(
71 value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT
72 ) -> None:
73 _ensure_proto_type(value_info, ValueInfoProto)
74 return C.check_value_info(value_info.SerializeToString(), ctx)
75
76
77 def check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:
78 _ensure_proto_type(tensor, TensorProto)
79 return C.check_tensor(tensor.SerializeToString(), ctx)
80
81
82 def check_attribute(
83 attr: AttributeProto, ctx: C.CheckerContext = DEFAULT_CONTEXT
84 ) -> None:
85 _ensure_proto_type(attr, AttributeProto)
86 return C.check_attribute(attr.SerializeToString(), ctx)
87
88
89 def check_node(node: NodeProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:
90 _ensure_proto_type(node, NodeProto)
91 return C.check_node(node.SerializeToString(), ctx)
92
93
94 def check_function(
95 function: FunctionProto, ctx: C.CheckerContext | None = None
96 ) -> None:
97 _ensure_proto_type(function, FunctionProto)
98 if ctx is None:
99 ctx = C.CheckerContext()
100 ctx.ir_version = helper.find_min_ir_version_for(
101 list(function.opset_import), True
102 )
103 function_opset_dic = {}
104 for domain_version in function.opset_import:
105 function_opset_dic[domain_version.domain] = domain_version.version
106 ctx.opset_imports = function_opset_dic
107 C.check_function(function.SerializeToString(), ctx)
108
109
110 def check_graph(graph: GraphProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:
111 _ensure_proto_type(graph, GraphProto)
112 return C.check_graph(graph.SerializeToString(), ctx)
113
114
115 def check_sparse_tensor(
116 sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT
117 ) -> None:
118 _ensure_proto_type(sparse, SparseTensorProto)
119 C.check_sparse_tensor(sparse.SerializeToString(), ctx)
120
121
122 def check_model(
123 model: ModelProto | str | bytes | os.PathLike,
124 full_check: bool = False,
125 skip_opset_compatibility_check: bool = False,
126 ) -> None:
127 """Check the consistency of a model. An exception is raised if the test fails.
128
129 Args:
130 model: Model to check.
131 full_check: If True, the function also checks for shapes that can be inferred.
132 skip_opset_compatibility_check: If True, the function skips the check for
133 opset compatibility.
134 """
135 # If model is a path instead of ModelProto
136 if isinstance(model, (str, os.PathLike)):
137 C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)
138 else:
139 protobuf_string = (
140 model if isinstance(model, bytes) else model.SerializeToString()
141 )
142 # If the protobuf is larger than 2GB,
143 # remind users should use the model path to check
144 if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:
145 raise ValueError(
146 "This protobuf of onnx model is too large (>2GB). Call check_model with model path instead."
147 )
148 C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
149
150
151 ValidationError = C.ValidationError
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/onnx/checker.py b/onnx/checker.py
--- a/onnx/checker.py
+++ b/onnx/checker.py
@@ -124,11 +124,20 @@
full_check: bool = False,
skip_opset_compatibility_check: bool = False,
) -> None:
- """Check the consistency of a model. An exception is raised if the test fails.
+ """Check the consistency of a model.
+
+ An exception will be raised if the model's ir_version is not set
+ properly or is higher than checker's ir_version, or if the model
+ has duplicate keys in metadata_props.
+
+ If IR version >= 3, the model must specify opset_import.
+ If IR version < 3, the model cannot have any opset_import specified.
Args:
- model: Model to check.
- full_check: If True, the function also checks for shapes that can be inferred.
+ model: Model to check. If model is a path, the function checks model
+ path first. If the model bytes size is larger than 2GB, function
+ should be called using model path.
+ full_check: If True, the function also runs shape inference check.
skip_opset_compatibility_check: If True, the function skips the check for
opset compatibility.
"""
| {"golden_diff": "diff --git a/onnx/checker.py b/onnx/checker.py\n--- a/onnx/checker.py\n+++ b/onnx/checker.py\n@@ -124,11 +124,20 @@\n full_check: bool = False,\n skip_opset_compatibility_check: bool = False,\n ) -> None:\n- \"\"\"Check the consistency of a model. An exception is raised if the test fails.\n+ \"\"\"Check the consistency of a model.\n+\n+ An exception will be raised if the model's ir_version is not set\n+ properly or is higher than checker's ir_version, or if the model\n+ has duplicate keys in metadata_props.\n+\n+ If IR version >= 3, the model must specify opset_import.\n+ If IR version < 3, the model cannot have any opset_import specified.\n \n Args:\n- model: Model to check.\n- full_check: If True, the function also checks for shapes that can be inferred.\n+ model: Model to check. If model is a path, the function checks model\n+ path first. If the model bytes size is larger than 2GB, function\n+ should be called using model path.\n+ full_check: If True, the function also runs shape inference check.\n skip_opset_compatibility_check: If True, the function skips the check for\n opset compatibility.\n \"\"\"\n", "issue": "Refine docs for check_model\nCurrent version:\r\n\r\n> Check the consistency of a model. An exception is raised if the test fails.\r\n\r\nIt would be good if we document the kind of checks done and the type of exception raised so users know what to catch for; as well as clarify that it also runs shape inference when strict is True. (Right now it says `if True, the function checks shapes can be inferred`\uff09\r\n\r\nShould we default `strict` to `True`? @jcwchen \n", "before_files": [{"content": "# Copyright (c) ONNX Project Contributors\n#\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"Graph utilities for checking whether an ONNX proto message is legal.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"check_attribute\",\n \"check_function\",\n \"check_graph\",\n \"check_model\",\n \"check_node\",\n \"check_sparse_tensor\",\n \"check_tensor\",\n \"check_value_info\",\n \"DEFAULT_CONTEXT\",\n \"ValidationError\",\n \"C\",\n \"MAXIMUM_PROTOBUF\",\n]\n\nimport os\nimport sys\nfrom typing import Any, Callable, TypeVar\n\nfrom google.protobuf.message import Message\n\nimport onnx.defs\nimport onnx.onnx_cpp2py_export.checker as C # noqa: N812\nimport onnx.shape_inference\nfrom onnx import (\n IR_VERSION,\n AttributeProto,\n FunctionProto,\n GraphProto,\n ModelProto,\n NodeProto,\n SparseTensorProto,\n TensorProto,\n ValueInfoProto,\n helper,\n)\n\n# Limitation of single protobuf file is 2GB\nMAXIMUM_PROTOBUF = 2000000000\n\n# TODO: This thing where we reserialize the protobuf back into the\n# string, only to deserialize it at the call site, is really goofy.\n# Stop doing that.\n\n\n# NB: Please don't edit this context!\nDEFAULT_CONTEXT = C.CheckerContext()\nDEFAULT_CONTEXT.ir_version = IR_VERSION\n# TODO: Maybe ONNX-ML should also be defaulted?\nDEFAULT_CONTEXT.opset_imports = {\"\": onnx.defs.onnx_opset_version()}\n\n\nFuncType = TypeVar(\"FuncType\", bound=Callable[..., Any])\n\n\ndef _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:\n if not isinstance(proto, proto_type):\n raise TypeError(\n f\"The proto message needs to be of type '{proto_type.__name__}'\"\n )\n\n\ndef check_value_info(\n value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(value_info, ValueInfoProto)\n return C.check_value_info(value_info.SerializeToString(), ctx)\n\n\ndef check_tensor(tensor: TensorProto, 
ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(tensor, TensorProto)\n return C.check_tensor(tensor.SerializeToString(), ctx)\n\n\ndef check_attribute(\n attr: AttributeProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(attr, AttributeProto)\n return C.check_attribute(attr.SerializeToString(), ctx)\n\n\ndef check_node(node: NodeProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(node, NodeProto)\n return C.check_node(node.SerializeToString(), ctx)\n\n\ndef check_function(\n function: FunctionProto, ctx: C.CheckerContext | None = None\n) -> None:\n _ensure_proto_type(function, FunctionProto)\n if ctx is None:\n ctx = C.CheckerContext()\n ctx.ir_version = helper.find_min_ir_version_for(\n list(function.opset_import), True\n )\n function_opset_dic = {}\n for domain_version in function.opset_import:\n function_opset_dic[domain_version.domain] = domain_version.version\n ctx.opset_imports = function_opset_dic\n C.check_function(function.SerializeToString(), ctx)\n\n\ndef check_graph(graph: GraphProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(graph, GraphProto)\n return C.check_graph(graph.SerializeToString(), ctx)\n\n\ndef check_sparse_tensor(\n sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(sparse, SparseTensorProto)\n C.check_sparse_tensor(sparse.SerializeToString(), ctx)\n\n\ndef check_model(\n model: ModelProto | str | bytes | os.PathLike,\n full_check: bool = False,\n skip_opset_compatibility_check: bool = False,\n) -> None:\n \"\"\"Check the consistency of a model. An exception is raised if the test fails.\n\n Args:\n model: Model to check.\n full_check: If True, the function also checks for shapes that can be inferred.\n skip_opset_compatibility_check: If True, the function skips the check for\n opset compatibility.\n \"\"\"\n # If model is a path instead of ModelProto\n if isinstance(model, (str, os.PathLike)):\n C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)\n else:\n protobuf_string = (\n model if isinstance(model, bytes) else model.SerializeToString()\n )\n # If the protobuf is larger than 2GB,\n # remind users should use the model path to check\n if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:\n raise ValueError(\n \"This protobuf of onnx model is too large (>2GB). 
Call check_model with model path instead.\"\n )\n C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)\n\n\nValidationError = C.ValidationError\n", "path": "onnx/checker.py"}], "after_files": [{"content": "# Copyright (c) ONNX Project Contributors\n#\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"Graph utilities for checking whether an ONNX proto message is legal.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"check_attribute\",\n \"check_function\",\n \"check_graph\",\n \"check_model\",\n \"check_node\",\n \"check_sparse_tensor\",\n \"check_tensor\",\n \"check_value_info\",\n \"DEFAULT_CONTEXT\",\n \"ValidationError\",\n \"C\",\n \"MAXIMUM_PROTOBUF\",\n]\n\nimport os\nimport sys\nfrom typing import Any, Callable, TypeVar\n\nfrom google.protobuf.message import Message\n\nimport onnx.defs\nimport onnx.onnx_cpp2py_export.checker as C # noqa: N812\nimport onnx.shape_inference\nfrom onnx import (\n IR_VERSION,\n AttributeProto,\n FunctionProto,\n GraphProto,\n ModelProto,\n NodeProto,\n SparseTensorProto,\n TensorProto,\n ValueInfoProto,\n helper,\n)\n\n# Limitation of single protobuf file is 2GB\nMAXIMUM_PROTOBUF = 2000000000\n\n# TODO: This thing where we reserialize the protobuf back into the\n# string, only to deserialize it at the call site, is really goofy.\n# Stop doing that.\n\n\n# NB: Please don't edit this context!\nDEFAULT_CONTEXT = C.CheckerContext()\nDEFAULT_CONTEXT.ir_version = IR_VERSION\n# TODO: Maybe ONNX-ML should also be defaulted?\nDEFAULT_CONTEXT.opset_imports = {\"\": onnx.defs.onnx_opset_version()}\n\n\nFuncType = TypeVar(\"FuncType\", bound=Callable[..., Any])\n\n\ndef _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:\n if not isinstance(proto, proto_type):\n raise TypeError(\n f\"The proto message needs to be of type '{proto_type.__name__}'\"\n )\n\n\ndef check_value_info(\n value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(value_info, ValueInfoProto)\n return C.check_value_info(value_info.SerializeToString(), ctx)\n\n\ndef check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(tensor, TensorProto)\n return C.check_tensor(tensor.SerializeToString(), ctx)\n\n\ndef check_attribute(\n attr: AttributeProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(attr, AttributeProto)\n return C.check_attribute(attr.SerializeToString(), ctx)\n\n\ndef check_node(node: NodeProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(node, NodeProto)\n return C.check_node(node.SerializeToString(), ctx)\n\n\ndef check_function(\n function: FunctionProto, ctx: C.CheckerContext | None = None\n) -> None:\n _ensure_proto_type(function, FunctionProto)\n if ctx is None:\n ctx = C.CheckerContext()\n ctx.ir_version = helper.find_min_ir_version_for(\n list(function.opset_import), True\n )\n function_opset_dic = {}\n for domain_version in function.opset_import:\n function_opset_dic[domain_version.domain] = domain_version.version\n ctx.opset_imports = function_opset_dic\n C.check_function(function.SerializeToString(), ctx)\n\n\ndef check_graph(graph: GraphProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(graph, GraphProto)\n return C.check_graph(graph.SerializeToString(), ctx)\n\n\ndef check_sparse_tensor(\n sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(sparse, SparseTensorProto)\n 
C.check_sparse_tensor(sparse.SerializeToString(), ctx)\n\n\ndef check_model(\n model: ModelProto | str | bytes | os.PathLike,\n full_check: bool = False,\n skip_opset_compatibility_check: bool = False,\n) -> None:\n \"\"\"Check the consistency of a model.\n\n An exception will be raised if the model's ir_version is not set\n properly or is higher than checker's ir_version, or if the model\n has duplicate keys in metadata_props.\n\n If IR version >= 3, the model must specify opset_import.\n If IR version < 3, the model cannot have any opset_import specified.\n\n Args:\n model: Model to check. If model is a path, the function checks model\n path first. If the model bytes size is larger than 2GB, function\n should be called using model path.\n full_check: If True, the function also runs shape inference check.\n skip_opset_compatibility_check: If True, the function skips the check for\n opset compatibility.\n \"\"\"\n # If model is a path instead of ModelProto\n if isinstance(model, (str, os.PathLike)):\n C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)\n else:\n protobuf_string = (\n model if isinstance(model, bytes) else model.SerializeToString()\n )\n # If the protobuf is larger than 2GB,\n # remind users should use the model path to check\n if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:\n raise ValueError(\n \"This protobuf of onnx model is too large (>2GB). Call check_model with model path instead.\"\n )\n C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)\n\n\nValidationError = C.ValidationError\n", "path": "onnx/checker.py"}]} | 1,818 | 301 |
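For quick reference, a minimal sketch of the checker API documented in the row above; the model path is a placeholder, and per the patched docstring the path form is required for models over 2GB:

```python
import onnx

# full_check=True additionally runs the shape-inference check;
# passing a path (rather than a ModelProto) is mandatory for >2GB models.
onnx.checker.check_model("model.onnx", full_check=True)
```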
gh_patches_debug_19401 | rasdani/github-patches | git_diff | geopandas__geopandas-643 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GeoDataFrame.to_file fails on bool column
When converting a GeoDataFrame with a bool column to a shapefile, the following error is raised:
```sh
ValueError: 'bool' is not in list
```
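A minimal reproduction, assuming only geopandas and shapely (column name is illustrative):

```python
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame(
    {"flag": [True, False]},              # bool dtype column
    geometry=[Point(0, 0), Point(1, 1)],
)
gdf.to_file("out.shp")                    # ValueError: 'bool' is not in list
```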
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/io/file.py`
Content:
```
1 import os
2
3 import fiona
4 import numpy as np
5 import six
6
7 from geopandas import GeoDataFrame
8
9 # Adapted from pandas.io.common
10 if six.PY3:
11 from urllib.request import urlopen as _urlopen
12 from urllib.parse import urlparse as parse_url
13 from urllib.parse import uses_relative, uses_netloc, uses_params
14 else:
15 from urllib2 import urlopen as _urlopen
16 from urlparse import urlparse as parse_url
17 from urlparse import uses_relative, uses_netloc, uses_params
18
19 _VALID_URLS = set(uses_relative + uses_netloc + uses_params)
20 _VALID_URLS.discard('')
21
22
23 def _is_url(url):
24 """Check to see if *url* has a valid protocol."""
25 try:
26 return parse_url(url).scheme in _VALID_URLS
27 except:
28 return False
29
30
31 def read_file(filename, **kwargs):
32 """
33 Returns a GeoDataFrame from a file or URL.
34
35 Parameters
36 ----------
37 filename: str
38 Either the absolute or relative path to the file or URL to
39 be opened.
40 **kwargs:
41 Keyword args to be passed to the `open` or `BytesCollection` method
42 in the fiona library when opening the file. For more information on
43 possible keywords, type:
44 ``import fiona; help(fiona.open)``
45
46 Examples
47 --------
48 >>> df = geopandas.read_file("nybb.shp")
49
50 Returns
51 -------
52 geodataframe : GeoDataFrame
53 """
54 bbox = kwargs.pop('bbox', None)
55 if _is_url(filename):
56 req = _urlopen(filename)
57 path_or_bytes = req.read()
58 reader = fiona.BytesCollection
59 else:
60 path_or_bytes = filename
61 reader = fiona.open
62 with reader(path_or_bytes, **kwargs) as f:
63 crs = f.crs
64 if bbox is not None:
65 assert len(bbox) == 4
66 f_filt = f.filter(bbox=bbox)
67 else:
68 f_filt = f
69 gdf = GeoDataFrame.from_features(f_filt, crs=crs)
70 # re-order with column order from metadata, with geometry last
71 columns = list(f.meta["schema"]["properties"]) + ["geometry"]
72 gdf = gdf[columns]
73
74 return gdf
75
76
77 def to_file(df, filename, driver="ESRI Shapefile", schema=None,
78 **kwargs):
79 """
80 Write this GeoDataFrame to an OGR data source
81
82 A dictionary of supported OGR providers is available via:
83 >>> import fiona
84 >>> fiona.supported_drivers
85
86 Parameters
87 ----------
88 df : GeoDataFrame to be written
89 filename : string
90 File path or file handle to write to.
91 driver : string, default 'ESRI Shapefile'
92 The OGR format driver used to write the vector file.
93 schema : dict, default None
94 If specified, the schema dictionary is passed to Fiona to
95 better control how the file is written. If None, GeoPandas
96 will determine the schema based on each column's dtype
97
98 The *kwargs* are passed to fiona.open and can be used to write
99 to multi-layer data, store data within archives (zip files), etc.
100 """
101 if schema is None:
102 schema = infer_schema(df)
103 filename = os.path.abspath(os.path.expanduser(filename))
104 with fiona.drivers():
105 with fiona.open(filename, 'w', driver=driver, crs=df.crs,
106 schema=schema, **kwargs) as colxn:
107 colxn.writerecords(df.iterfeatures())
108
109
110 def infer_schema(df):
111 try:
112 from collections import OrderedDict
113 except ImportError:
114 from ordereddict import OrderedDict
115
116 def convert_type(in_type):
117 if in_type == object:
118 return 'str'
119 out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
120 if out_type == 'long':
121 out_type = 'int'
122 return out_type
123
124 properties = OrderedDict([
125 (col, convert_type(_type)) for col, _type in
126 zip(df.columns, df.dtypes) if col != df._geometry_column_name
127 ])
128
129 geom_type = _common_geom_type(df)
130 if not geom_type:
131 raise ValueError("Geometry column cannot contain mutiple "
132 "geometry types when writing to file.")
133
134 schema = {'geometry': geom_type, 'properties': properties}
135
136 return schema
137
138
139 def _common_geom_type(df):
140 # Need to check geom_types before we write to file...
141 # Some (most?) providers expect a single geometry type:
142 # Point, LineString, or Polygon
143 geom_types = df.geometry.geom_type.unique()
144
145 from os.path import commonprefix # To find longest common prefix
146 geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse
147 if not geom_type:
148 geom_type = None
149
150 return geom_type
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geopandas/io/file.py b/geopandas/io/file.py
--- a/geopandas/io/file.py
+++ b/geopandas/io/file.py
@@ -113,16 +113,20 @@
except ImportError:
from ordereddict import OrderedDict
- def convert_type(in_type):
+ def convert_type(column, in_type):
if in_type == object:
return 'str'
out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
if out_type == 'long':
out_type = 'int'
+ if out_type == 'bool':
+ raise ValueError('column "{}" is boolean type, '.format(column) +
+ 'which is unsupported in file writing. '
+ 'Consider casting the column to int type.')
return out_type
properties = OrderedDict([
- (col, convert_type(_type)) for col, _type in
+ (col, convert_type(col, _type)) for col, _type in
zip(df.columns, df.dtypes) if col != df._geometry_column_name
])
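With the patch applied, the same call fails with an actionable message instead of fiona's opaque one; a self-contained sketch of the expected interaction (hypothetical column name):

```python
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame({"flag": [True, False]},
                       geometry=[Point(0, 0), Point(1, 1)])
try:
    gdf.to_file("out.shp")
except ValueError as err:
    print(err)  # column "flag" is boolean type, which is unsupported in
                # file writing. Consider casting the column to int type.

gdf["flag"] = gdf["flag"].astype(int)     # the cast the new message suggests
gdf.to_file("out.shp")                    # now succeeds
```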
| {"golden_diff": "diff --git a/geopandas/io/file.py b/geopandas/io/file.py\n--- a/geopandas/io/file.py\n+++ b/geopandas/io/file.py\n@@ -113,16 +113,20 @@\n except ImportError:\n from ordereddict import OrderedDict\n \n- def convert_type(in_type):\n+ def convert_type(column, in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n+ if out_type == 'bool':\n+ raise ValueError('column \"{}\" is boolean type, '.format(column) +\n+ 'which is unsupported in file writing. '\n+ 'Consider casting the column to int type.')\n return out_type\n \n properties = OrderedDict([\n- (col, convert_type(_type)) for col, _type in\n+ (col, convert_type(col, _type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n", "issue": "GeoDataFrame.to_file fail on bool column\nWhen converting GeoDataFrame with bool column to shp file, got following error\r\n```sh\r\nValueError: 'bool' is not in list\r\n```\n", "before_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nimport six\n\nfrom geopandas import GeoDataFrame\n\n# Adapted from pandas.io.common\nif six.PY3:\n from urllib.request import urlopen as _urlopen\n from urllib.parse import urlparse as parse_url\n from urllib.parse import uses_relative, uses_netloc, uses_params\nelse:\n from urllib2 import urlopen as _urlopen\n from urlparse import urlparse as parse_url\n from urlparse import uses_relative, uses_netloc, uses_params\n\n_VALID_URLS = set(uses_relative + uses_netloc + uses_params)\n_VALID_URLS.discard('')\n\n\ndef _is_url(url):\n \"\"\"Check to see if *url* has a valid protocol.\"\"\"\n try:\n return parse_url(url).scheme in _VALID_URLS\n except:\n return False\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file or URL.\n\n Parameters\n ----------\n filename: str\n Either the absolute or relative path to the file or URL to\n be opened.\n **kwargs:\n Keyword args to be passed to the `open` or `BytesCollection` method\n in the fiona library when opening the file. For more information on\n possible keywords, type:\n ``import fiona; help(fiona.open)``\n\n Examples\n --------\n >>> df = geopandas.read_file(\"nybb.shp\")\n\n Returns\n -------\n geodataframe : GeoDataFrame\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n if _is_url(filename):\n req = _urlopen(filename)\n path_or_bytes = req.read()\n reader = fiona.BytesCollection\n else:\n path_or_bytes = filename\n reader = fiona.open\n with reader(path_or_bytes, **kwargs) as f:\n crs = f.crs\n if bbox is not None:\n assert len(bbox) == 4\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n gdf = GeoDataFrame.from_features(f_filt, crs=crs)\n # re-order with column order from metadata, with geometry last\n columns = list(f.meta[\"schema\"][\"properties\"]) + [\"geometry\"]\n gdf = gdf[columns]\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. 
If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.drivers():\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as colxn:\n colxn.writerecords(df.iterfeatures())\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(_type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix # To find longest common prefix\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse\n if not geom_type:\n geom_type = None\n\n return geom_type\n", "path": "geopandas/io/file.py"}], "after_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nimport six\n\nfrom geopandas import GeoDataFrame\n\n# Adapted from pandas.io.common\nif six.PY3:\n from urllib.request import urlopen as _urlopen\n from urllib.parse import urlparse as parse_url\n from urllib.parse import uses_relative, uses_netloc, uses_params\nelse:\n from urllib2 import urlopen as _urlopen\n from urlparse import urlparse as parse_url\n from urlparse import uses_relative, uses_netloc, uses_params\n\n_VALID_URLS = set(uses_relative + uses_netloc + uses_params)\n_VALID_URLS.discard('')\n\n\ndef _is_url(url):\n \"\"\"Check to see if *url* has a valid protocol.\"\"\"\n try:\n return parse_url(url).scheme in _VALID_URLS\n except:\n return False\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file or URL.\n\n Parameters\n ----------\n filename: str\n Either the absolute or relative path to the file or URL to\n be opened.\n **kwargs:\n Keyword args to be passed to the `open` or `BytesCollection` method\n in the fiona library when opening the file. 
For more information on\n possible keywords, type:\n ``import fiona; help(fiona.open)``\n\n Examples\n --------\n >>> df = geopandas.read_file(\"nybb.shp\")\n\n Returns\n -------\n geodataframe : GeoDataFrame\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n if _is_url(filename):\n req = _urlopen(filename)\n path_or_bytes = req.read()\n reader = fiona.BytesCollection\n else:\n path_or_bytes = filename\n reader = fiona.open\n with reader(path_or_bytes, **kwargs) as f:\n crs = f.crs\n if bbox is not None:\n assert len(bbox) == 4\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n gdf = GeoDataFrame.from_features(f_filt, crs=crs)\n # re-order with column order from metadata, with geometry last\n columns = list(f.meta[\"schema\"][\"properties\"]) + [\"geometry\"]\n gdf = gdf[columns]\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.drivers():\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as colxn:\n colxn.writerecords(df.iterfeatures())\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(column, in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n if out_type == 'bool':\n raise ValueError('column \"{}\" is boolean type, '.format(column) +\n 'which is unsupported in file writing. '\n 'Consider casting the column to int type.')\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(col, _type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix # To find longest common prefix\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse\n if not geom_type:\n geom_type = None\n\n return geom_type\n", "path": "geopandas/io/file.py"}]} | 1,732 | 242 |
gh_patches_debug_39731 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1835 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Redirect a non-new user to Newsfeed instead of My Organisations
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py`
Content:
```
1 import datetime
2 import dateutil
3
4 import ckan.controllers.user as ckan_user
5 import ckan.lib.helpers as h
6 import ckan.lib.base as base
7 from ckan.common import _, c, g, request
8 import ckan.logic as logic
9 from pylons import config
10
11 get_action = logic.get_action
12
13 class LoginController(ckan_user.UserController):
14 def logged_in(self):
15 # redirect if needed
16 came_from = request.params.get('came_from', '')
17 if self._sane_came_from(came_from):
18 return h.redirect_to(str(came_from))
19
20 if c.user:
21 context = None
22 data_dict = {'id': c.user}
23
24 user_dict = get_action('user_show')(context, data_dict)
25
26 if 'created' in user_dict:
27 time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )
28 else:
29 time_passed = None
30
31 if not user_dict['activity'] and time_passed and time_passed.days < 3:
32 #/dataset/new
33 contribute_url = h.url_for(controller='package', action='new')
34 # message = ''' Now that you've registered an account , you can <a href="%s">start adding datasets</a>.
35 # If you want to associate this dataset with an organization, either click on "My Organizations" below
36 # to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url
37 #h.flash_success(_(message), True)
38 else:
39 h.flash_success(_("%s is now logged in") %
40 user_dict['display_name'])
41 #return self.me()
42 # Instead redirect to My orgs page
43 return h.redirect_to(controller='user',
44 action='dashboard_organizations')
45 else:
46 err = _('Login failed. Bad username or password.')
47 if g.openid_enabled:
48 err += _(' (Or if using OpenID, it hasn\'t been associated '
49 'with a user account.)')
50 if h.asbool(config.get('ckan.legacy_templates', 'false')):
51 h.flash_error(err)
52 h.redirect_to(controller='user',
53 action='login', came_from=came_from)
54 else:
55 return self.login(error=err)
56
57 def contribute(self, error=None):
58 self.login(error)
59 vars = {'contribute':True}
60 return base.render('user/login.html', extra_vars=vars)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py
--- a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py
+++ b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py
@@ -10,7 +10,9 @@
get_action = logic.get_action
+
class LoginController(ckan_user.UserController):
+
def logged_in(self):
# redirect if needed
came_from = request.params.get('came_from', '')
@@ -24,24 +26,22 @@
user_dict = get_action('user_show')(context, data_dict)
if 'created' in user_dict:
- time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )
+ time_passed = datetime.datetime.now(
+ ) - dateutil.parser.parse(user_dict['created'])
else:
- time_passed = None
-
+ time_passed = None
if not user_dict['activity'] and time_passed and time_passed.days < 3:
- #/dataset/new
- contribute_url = h.url_for(controller='package', action='new')
- # message = ''' Now that you've registered an account , you can <a href="%s">start adding datasets</a>.
- # If you want to associate this dataset with an organization, either click on "My Organizations" below
+ #/dataset/new
+ contribute_url = h.url_for(controller='package', action='new')
+ # message = ''' Now that you've registered an account , you can <a href="%s">start adding datasets</a>.
+ # If you want to associate this dataset with an organization, either click on "My Organizations" below
# to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url
#h.flash_success(_(message), True)
+ return h.redirect_to(controller='user', action='dashboard_organizations')
else:
h.flash_success(_("%s is now logged in") %
- user_dict['display_name'])
- #return self.me()
- # Instead redirect to My orgs page
- return h.redirect_to(controller='user',
- action='dashboard_organizations')
+ user_dict['display_name'])
+ return self.me()
else:
err = _('Login failed. Bad username or password.')
if g.openid_enabled:
@@ -53,8 +53,8 @@
action='login', came_from=came_from)
else:
return self.login(error=err)
-
+
def contribute(self, error=None):
self.login(error)
- vars = {'contribute':True}
- return base.render('user/login.html', extra_vars=vars)
\ No newline at end of file
+ vars = {'contribute': True}
+ return base.render('user/login.html', extra_vars=vars)
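The net effect of the patch, paraphrased as a standalone sketch (stub names replace the CKAN helpers; the 3-day / no-activity test comes straight from the diff):

```python
def post_login_redirect(user_activity, account_age_days):
    """Sketch of the patched logged_in() branching."""
    if not user_activity and account_age_days < 3:
        return "/user/dashboard/organizations"  # brand-new users only
    return "me()"  # everyone else goes to their own page (the newsfeed)
```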
| {"golden_diff": "diff --git a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py\n--- a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py\n+++ b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py\n@@ -10,7 +10,9 @@\n \n get_action = logic.get_action\n \n+\n class LoginController(ckan_user.UserController):\n+\n def logged_in(self):\n # redirect if needed\n came_from = request.params.get('came_from', '')\n@@ -24,24 +26,22 @@\n user_dict = get_action('user_show')(context, data_dict)\n \n if 'created' in user_dict:\n- time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )\n+ time_passed = datetime.datetime.now(\n+ ) - dateutil.parser.parse(user_dict['created'])\n else:\n- time_passed = None \n- \n+ time_passed = None\n if not user_dict['activity'] and time_passed and time_passed.days < 3:\n- #/dataset/new \n- contribute_url = h.url_for(controller='package', action='new')\n- # message = ''' Now that you've registered an account , you can <a href=\"%s\">start adding datasets</a>. \n- # If you want to associate this dataset with an organization, either click on \"My Organizations\" below \n+ #/dataset/new\n+ contribute_url = h.url_for(controller='package', action='new')\n+ # message = ''' Now that you've registered an account , you can <a href=\"%s\">start adding datasets</a>.\n+ # If you want to associate this dataset with an organization, either click on \"My Organizations\" below\n # to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url\n #h.flash_success(_(message), True)\n+ return h.redirect_to(controller='user', action='dashboard_organizations')\n else:\n h.flash_success(_(\"%s is now logged in\") %\n- user_dict['display_name'])\n- #return self.me()\n- # Instead redirect to My orgs page\n- return h.redirect_to(controller='user',\n- action='dashboard_organizations')\n+ user_dict['display_name'])\n+ return self.me()\n else:\n err = _('Login failed. 
Bad username or password.')\n if g.openid_enabled:\n@@ -53,8 +53,8 @@\n action='login', came_from=came_from)\n else:\n return self.login(error=err)\n- \n+\n def contribute(self, error=None):\n self.login(error)\n- vars = {'contribute':True}\n- return base.render('user/login.html', extra_vars=vars)\n\\ No newline at end of file\n+ vars = {'contribute': True}\n+ return base.render('user/login.html', extra_vars=vars)\n", "issue": "Redirect a non-new user to Newsfeed instead of My Organisations\n\n", "before_files": [{"content": "import datetime\nimport dateutil\n\nimport ckan.controllers.user as ckan_user\nimport ckan.lib.helpers as h\nimport ckan.lib.base as base\nfrom ckan.common import _, c, g, request\nimport ckan.logic as logic\nfrom pylons import config\n\nget_action = logic.get_action\n\nclass LoginController(ckan_user.UserController):\n def logged_in(self):\n # redirect if needed\n came_from = request.params.get('came_from', '')\n if self._sane_came_from(came_from):\n return h.redirect_to(str(came_from))\n\n if c.user:\n context = None\n data_dict = {'id': c.user}\n\n user_dict = get_action('user_show')(context, data_dict)\n\n if 'created' in user_dict:\n time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )\n else:\n time_passed = None \n \n if not user_dict['activity'] and time_passed and time_passed.days < 3:\n #/dataset/new \n contribute_url = h.url_for(controller='package', action='new')\n # message = ''' Now that you've registered an account , you can <a href=\"%s\">start adding datasets</a>. \n # If you want to associate this dataset with an organization, either click on \"My Organizations\" below \n # to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url\n #h.flash_success(_(message), True)\n else:\n h.flash_success(_(\"%s is now logged in\") %\n user_dict['display_name'])\n #return self.me()\n # Instead redirect to My orgs page\n return h.redirect_to(controller='user',\n action='dashboard_organizations')\n else:\n err = _('Login failed. 
Bad username or password.')\n if g.openid_enabled:\n err += _(' (Or if using OpenID, it hasn\\'t been associated '\n 'with a user account.)')\n if h.asbool(config.get('ckan.legacy_templates', 'false')):\n h.flash_error(err)\n h.redirect_to(controller='user',\n action='login', came_from=came_from)\n else:\n return self.login(error=err)\n \n def contribute(self, error=None):\n self.login(error)\n vars = {'contribute':True}\n return base.render('user/login.html', extra_vars=vars)", "path": "ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py"}], "after_files": [{"content": "import datetime\nimport dateutil\n\nimport ckan.controllers.user as ckan_user\nimport ckan.lib.helpers as h\nimport ckan.lib.base as base\nfrom ckan.common import _, c, g, request\nimport ckan.logic as logic\nfrom pylons import config\n\nget_action = logic.get_action\n\n\nclass LoginController(ckan_user.UserController):\n\n def logged_in(self):\n # redirect if needed\n came_from = request.params.get('came_from', '')\n if self._sane_came_from(came_from):\n return h.redirect_to(str(came_from))\n\n if c.user:\n context = None\n data_dict = {'id': c.user}\n\n user_dict = get_action('user_show')(context, data_dict)\n\n if 'created' in user_dict:\n time_passed = datetime.datetime.now(\n ) - dateutil.parser.parse(user_dict['created'])\n else:\n time_passed = None\n if not user_dict['activity'] and time_passed and time_passed.days < 3:\n #/dataset/new\n contribute_url = h.url_for(controller='package', action='new')\n # message = ''' Now that you've registered an account , you can <a href=\"%s\">start adding datasets</a>.\n # If you want to associate this dataset with an organization, either click on \"My Organizations\" below\n # to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url\n #h.flash_success(_(message), True)\n return h.redirect_to(controller='user', action='dashboard_organizations')\n else:\n h.flash_success(_(\"%s is now logged in\") %\n user_dict['display_name'])\n return self.me()\n else:\n err = _('Login failed. Bad username or password.')\n if g.openid_enabled:\n err += _(' (Or if using OpenID, it hasn\\'t been associated '\n 'with a user account.)')\n if h.asbool(config.get('ckan.legacy_templates', 'false')):\n h.flash_error(err)\n h.redirect_to(controller='user',\n action='login', came_from=came_from)\n else:\n return self.login(error=err)\n\n def contribute(self, error=None):\n self.login(error)\n vars = {'contribute': True}\n return base.render('user/login.html', extra_vars=vars)\n", "path": "ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py"}]} | 921 | 671 |
gh_patches_debug_53600 | rasdani/github-patches | git_diff | aws__aws-cli-577 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
typo in s3api list-objects documentation
The documentation for the s3api list-objects --max-items parameter says that a `NextMarker` will be provided, while the --starting-token parameter refers to this as `NextToken`, which is the actual name of the returned token in the JSON response.
So in short I think that the `NextMarker` should really say `NextToken` to prevent any confusion.
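A toy model of the token contract the corrected text should describe — `NextToken` comes back in the output and is fed to `--starting-token` (shapes and names are illustrative, not the real AWS response):

```python
def list_objects(max_items, starting_token=None):
    keys = [f"key-{i}" for i in range(10)]
    start = int(starting_token or 0)
    page = keys[start:start + max_items]
    more = start + max_items < len(keys)
    return {"Contents": page,
            "NextToken": str(start + max_items) if more else None}

resp = list_objects(4)
while resp["NextToken"] is not None:          # resume until exhausted
    resp = list_objects(4, starting_token=resp["NextToken"])
```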
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/paginate.py`
Content:
```
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """This module has customizations to unify paging paramters.
14
15 For any operation that can be paginated, we will:
16
17 * Remove the service specific pagination params. This can vary across
18 services and we're going to replace them with a consistent set of
19 arguments.
20 * Add a ``--starting-token`` and a ``--max-items`` argument.
21
22 """
23 import logging
24
25 from awscli.arguments import BaseCLIArgument
26 from botocore.parameters import StringParameter
27
28 logger = logging.getLogger(__name__)
29
30
31 STARTING_TOKEN_HELP = """
32 <p>A token to specify where to start paginating. This is the
33 <code>NextToken</code> from a previously truncated response.</p>
34 """
35
36 MAX_ITEMS_HELP = """
37 <p>The total number of items to return. If the total number
38 of items available is more than the value specified in
39 max-items then a <code>NextMarker</code> will
40 be provided in the output that you can use to resume pagination.
41 """
42
43
44 def unify_paging_params(argument_table, operation, **kwargs):
45 if not operation.can_paginate:
46 # We only apply these customizations to paginated responses.
47 return
48 logger.debug("Modifying paging parameters for operation: %s", operation)
49 _remove_existing_paging_arguments(argument_table, operation)
50 argument_table['starting-token'] = PageArgument('starting-token',
51 STARTING_TOKEN_HELP,
52 operation,
53 parse_type='string')
54 argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,
55 operation, parse_type='integer')
56
57
58 def _remove_existing_paging_arguments(argument_table, operation):
59 tokens = _get_input_tokens(operation)
60 for token_name in tokens:
61 cli_name = _get_cli_name(operation.params, token_name)
62 del argument_table[cli_name]
63 if 'limit_key' in operation.pagination:
64 key_name = operation.pagination['limit_key']
65 cli_name = _get_cli_name(operation.params, key_name)
66 del argument_table[cli_name]
67
68
69 def _get_input_tokens(operation):
70 config = operation.pagination
71 tokens = config['input_token']
72 if not isinstance(tokens, list):
73 return [tokens]
74 return tokens
75
76
77 def _get_cli_name(param_objects, token_name):
78 for param in param_objects:
79 if param.name == token_name:
80 return param.cli_name.lstrip('-')
81
82
83 class PageArgument(BaseCLIArgument):
84 type_map = {
85 'string': str,
86 'integer': int,
87 }
88
89 def __init__(self, name, documentation, operation, parse_type):
90 param = StringParameter(operation, name=name, type=parse_type)
91 self._name = name
92 self.argument_object = param
93 self._name = name
94 self._documentation = documentation
95 self._parse_type = parse_type
96
97 @property
98 def cli_name(self):
99 return '--' + self._name
100
101 @property
102 def cli_type_name(self):
103 return self._parse_type
104
105 @property
106 def required(self):
107 return False
108
109 @property
110 def documentation(self):
111 return self._documentation
112
113 def add_to_parser(self, parser):
114 parser.add_argument(self.cli_name, dest=self.py_name,
115 type=self.type_map[self._parse_type])
116
117 def add_to_params(self, parameters, value):
118 if value is not None:
119 parameters[self.py_name] = value
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py
--- a/awscli/customizations/paginate.py
+++ b/awscli/customizations/paginate.py
@@ -36,7 +36,7 @@
MAX_ITEMS_HELP = """
<p>The total number of items to return. If the total number
of items available is more than the value specified in
-max-items then a <code>NextMarker</code> will
+max-items then a <code>NextToken</code> will
be provided in the output that you can use to resume pagination.
"""
| {"golden_diff": "diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py\n--- a/awscli/customizations/paginate.py\n+++ b/awscli/customizations/paginate.py\n@@ -36,7 +36,7 @@\n MAX_ITEMS_HELP = \"\"\"\n <p>The total number of items to return. If the total number\n of items available is more than the value specified in\n-max-items then a <code>NextMarker</code> will\n+max-items then a <code>NextToken</code> will\n be provided in the output that you can use to resume pagination.\n \"\"\"\n", "issue": "typo in s3api list-objects documentation\nThe documentation for the s3api list-objects --max-items parameter says that a `NextMarker` will be provided, while the --starting-token parameter refers to this as `NextToken` which is the actual name of the returned token in JSON.\n\nSo in short I think that the `NextMarker` should really say `NextToken` to prevent any confusion.\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"This module has customizations to unify paging paramters.\n\nFor any operation that can be paginated, we will:\n\n * Remove the service specific pagination params. This can vary across\n services and we're going to replace them with a consistent set of\n arguments.\n * Add a ``--starting-token`` and a ``--max-items`` argument.\n\n\"\"\"\nimport logging\n\nfrom awscli.arguments import BaseCLIArgument\nfrom botocore.parameters import StringParameter\n\nlogger = logging.getLogger(__name__)\n\n\nSTARTING_TOKEN_HELP = \"\"\"\n<p>A token to specify where to start paginating. This is the\n<code>NextToken</code> from a previously truncated response.</p>\n\"\"\"\n\nMAX_ITEMS_HELP = \"\"\"\n<p>The total number of items to return. 
If the total number\nof items available is more than the value specified in\nmax-items then a <code>NextMarker</code> will\nbe provided in the output that you can use to resume pagination.\n\"\"\"\n\n\ndef unify_paging_params(argument_table, operation, **kwargs):\n if not operation.can_paginate:\n # We only apply these customizations to paginated responses.\n return\n logger.debug(\"Modifying paging parameters for operation: %s\", operation)\n _remove_existing_paging_arguments(argument_table, operation)\n argument_table['starting-token'] = PageArgument('starting-token',\n STARTING_TOKEN_HELP,\n operation,\n parse_type='string')\n argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,\n operation, parse_type='integer')\n\n\ndef _remove_existing_paging_arguments(argument_table, operation):\n tokens = _get_input_tokens(operation)\n for token_name in tokens:\n cli_name = _get_cli_name(operation.params, token_name)\n del argument_table[cli_name]\n if 'limit_key' in operation.pagination:\n key_name = operation.pagination['limit_key']\n cli_name = _get_cli_name(operation.params, key_name)\n del argument_table[cli_name]\n\n\ndef _get_input_tokens(operation):\n config = operation.pagination\n tokens = config['input_token']\n if not isinstance(tokens, list):\n return [tokens]\n return tokens\n\n\ndef _get_cli_name(param_objects, token_name):\n for param in param_objects:\n if param.name == token_name:\n return param.cli_name.lstrip('-')\n\n\nclass PageArgument(BaseCLIArgument):\n type_map = {\n 'string': str,\n 'integer': int,\n }\n\n def __init__(self, name, documentation, operation, parse_type):\n param = StringParameter(operation, name=name, type=parse_type)\n self._name = name\n self.argument_object = param\n self._name = name\n self._documentation = documentation\n self._parse_type = parse_type\n\n @property\n def cli_name(self):\n return '--' + self._name\n\n @property\n def cli_type_name(self):\n return self._parse_type\n\n @property\n def required(self):\n return False\n\n @property\n def documentation(self):\n return self._documentation\n\n def add_to_parser(self, parser):\n parser.add_argument(self.cli_name, dest=self.py_name,\n type=self.type_map[self._parse_type])\n\n def add_to_params(self, parameters, value):\n if value is not None:\n parameters[self.py_name] = value\n", "path": "awscli/customizations/paginate.py"}], "after_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"This module has customizations to unify paging paramters.\n\nFor any operation that can be paginated, we will:\n\n * Remove the service specific pagination params. 
This can vary across\n services and we're going to replace them with a consistent set of\n arguments.\n * Add a ``--starting-token`` and a ``--max-items`` argument.\n\n\"\"\"\nimport logging\n\nfrom awscli.arguments import BaseCLIArgument\nfrom botocore.parameters import StringParameter\n\nlogger = logging.getLogger(__name__)\n\n\nSTARTING_TOKEN_HELP = \"\"\"\n<p>A token to specify where to start paginating. This is the\n<code>NextToken</code> from a previously truncated response.</p>\n\"\"\"\n\nMAX_ITEMS_HELP = \"\"\"\n<p>The total number of items to return. If the total number\nof items available is more than the value specified in\nmax-items then a <code>NextToken</code> will\nbe provided in the output that you can use to resume pagination.\n\"\"\"\n\n\ndef unify_paging_params(argument_table, operation, **kwargs):\n if not operation.can_paginate:\n # We only apply these customizations to paginated responses.\n return\n logger.debug(\"Modifying paging parameters for operation: %s\", operation)\n _remove_existing_paging_arguments(argument_table, operation)\n argument_table['starting-token'] = PageArgument('starting-token',\n STARTING_TOKEN_HELP,\n operation,\n parse_type='string')\n argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,\n operation, parse_type='integer')\n\n\ndef _remove_existing_paging_arguments(argument_table, operation):\n tokens = _get_input_tokens(operation)\n for token_name in tokens:\n cli_name = _get_cli_name(operation.params, token_name)\n del argument_table[cli_name]\n if 'limit_key' in operation.pagination:\n key_name = operation.pagination['limit_key']\n cli_name = _get_cli_name(operation.params, key_name)\n del argument_table[cli_name]\n\n\ndef _get_input_tokens(operation):\n config = operation.pagination\n tokens = config['input_token']\n if not isinstance(tokens, list):\n return [tokens]\n return tokens\n\n\ndef _get_cli_name(param_objects, token_name):\n for param in param_objects:\n if param.name == token_name:\n return param.cli_name.lstrip('-')\n\n\nclass PageArgument(BaseCLIArgument):\n type_map = {\n 'string': str,\n 'integer': int,\n }\n\n def __init__(self, name, documentation, operation, parse_type):\n param = StringParameter(operation, name=name, type=parse_type)\n self._name = name\n self.argument_object = param\n self._name = name\n self._documentation = documentation\n self._parse_type = parse_type\n\n @property\n def cli_name(self):\n return '--' + self._name\n\n @property\n def cli_type_name(self):\n return self._parse_type\n\n @property\n def required(self):\n return False\n\n @property\n def documentation(self):\n return self._documentation\n\n def add_to_parser(self, parser):\n parser.add_argument(self.cli_name, dest=self.py_name,\n type=self.type_map[self._parse_type])\n\n def add_to_params(self, parameters, value):\n if value is not None:\n parameters[self.py_name] = value\n", "path": "awscli/customizations/paginate.py"}]} | 1,467 | 131 |
gh_patches_debug_7438 | rasdani/github-patches | git_diff | ranaroussi__yfinance-1237 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix(sec): upgrade lxml to 4.9.1
### What happened?
There is 1 security vulnerability found in lxml 4.5.1:
- [CVE-2022-2309](https://www.oscs1024.com/hd/CVE-2022-2309)
### What did I do?
Upgrade lxml from 4.5.1 to 4.9.1 to fix the vulnerability.
### What did you expect to happen?
Ideally, no insecure libs should be used.
### The specification of the pull request
[PR Specification](https://www.oscs1024.com/docs/pr-specification/) from OSCS
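A quick check that an environment has moved past the vulnerable range, assuming lxml is importable:

```python
import lxml.etree

major, minor = map(int, lxml.etree.__version__.split(".")[:2])
assert (major, minor) >= (4, 9), "lxml still in the CVE-2022-2309 range"
```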
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: UTF-8 -*-
3 #
4 # yfinance - market data downloader
5 # https://github.com/ranaroussi/yfinance
6
7 """yfinance - market data downloader"""
8
9 from setuptools import setup, find_packages
10 # from codecs import open
11 import io
12 from os import path
13
14 # --- get version ---
15 version = "unknown"
16 with open("yfinance/version.py") as f:
17 line = f.read().strip()
18 version = line.replace("version = ", "").replace('"', '')
19 # --- /get version ---
20
21
22 here = path.abspath(path.dirname(__file__))
23
24 # Get the long description from the README file
25 with io.open(path.join(here, 'README.md'), encoding='utf-8') as f:
26 long_description = f.read()
27
28 setup(
29 name='yfinance',
30 version=version,
31 description='Download market data from Yahoo! Finance API',
32 long_description=long_description,
33 long_description_content_type='text/markdown',
34 url='https://github.com/ranaroussi/yfinance',
35 author='Ran Aroussi',
36 author_email='[email protected]',
37 license='Apache',
38 classifiers=[
39 'License :: OSI Approved :: Apache Software License',
40 # 'Development Status :: 3 - Alpha',
41 # 'Development Status :: 4 - Beta',
42 'Development Status :: 5 - Production/Stable',
43
44
45 'Operating System :: OS Independent',
46 'Intended Audience :: Developers',
47 'Topic :: Office/Business :: Financial',
48 'Topic :: Office/Business :: Financial :: Investment',
49 'Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator',
50 'Topic :: Software Development :: Libraries',
51 'Topic :: Software Development :: Libraries :: Python Modules',
52
53 'Programming Language :: Python :: 2.7',
54 'Programming Language :: Python :: 3.4',
55 'Programming Language :: Python :: 3.5',
56 # 'Programming Language :: Python :: 3.6',
57 'Programming Language :: Python :: 3.7',
58 'Programming Language :: Python :: 3.8',
59 'Programming Language :: Python :: 3.9',
60 ],
61 platforms=['any'],
62 keywords='pandas, yahoo finance, pandas datareader',
63 packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),
64 install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',
65 'requests>=2.26', 'multitasking>=0.0.7',
66 'appdirs>=1.4.4'],
67 entry_points={
68 'console_scripts': [
69 'sample=sample:main',
70 ],
71 },
72 )
73
74 print("""
75 NOTE: yfinance is not affiliated, endorsed, or vetted by Yahoo, Inc.
76
77 You should refer to Yahoo!'s terms of use for details on your rights
78 to use the actual data downloaded.""")
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -63,7 +63,7 @@
packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),
install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',
'requests>=2.26', 'multitasking>=0.0.7',
- 'appdirs>=1.4.4'],
+ 'lxml>=4.9.1', 'appdirs>=1.4.4'],
entry_points={
'console_scripts': [
'sample=sample:main',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -63,7 +63,7 @@\n packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),\n install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',\n 'requests>=2.26', 'multitasking>=0.0.7',\n- 'appdirs>=1.4.4'],\n+ 'lxml>=4.9.1', 'appdirs>=1.4.4'],\n entry_points={\n 'console_scripts': [\n 'sample=sample:main',\n", "issue": "fix(sec): upgrade lxml to 4.9.1\n### What happened\uff1f\nThere are 1 security vulnerabilities found in lxml 4.5.1\n- [CVE-2022-2309](https://www.oscs1024.com/hd/CVE-2022-2309)\n\n\n### What did I do\uff1f\nUpgrade lxml from 4.5.1 to 4.9.1 for vulnerability fix\n\n### What did you expect to happen\uff1f\nIdeally, no insecure libs should be used.\n\n### The specification of the pull request\n[PR Specification](https://www.oscs1024.com/docs/pr-specification/) from OSCS\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n#\n# yfinance - market data downloader\n# https://github.com/ranaroussi/yfinance\n\n\"\"\"yfinance - market data downloader\"\"\"\n\nfrom setuptools import setup, find_packages\n# from codecs import open\nimport io\nfrom os import path\n\n# --- get version ---\nversion = \"unknown\"\nwith open(\"yfinance/version.py\") as f:\n line = f.read().strip()\n version = line.replace(\"version = \", \"\").replace('\"', '')\n# --- /get version ---\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith io.open(path.join(here, 'README.md'), encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name='yfinance',\n version=version,\n description='Download market data from Yahoo! Finance API',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/ranaroussi/yfinance',\n author='Ran Aroussi',\n author_email='[email protected]',\n license='Apache',\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n # 'Development Status :: 3 - Alpha',\n # 'Development Status :: 4 - Beta',\n 'Development Status :: 5 - Production/Stable',\n\n\n 'Operating System :: OS Independent',\n 'Intended Audience :: Developers',\n 'Topic :: Office/Business :: Financial',\n 'Topic :: Office/Business :: Financial :: Investment',\n 'Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n # 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n platforms=['any'],\n keywords='pandas, yahoo finance, pandas datareader',\n packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),\n install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',\n 'requests>=2.26', 'multitasking>=0.0.7',\n 'appdirs>=1.4.4'],\n entry_points={\n 'console_scripts': [\n 'sample=sample:main',\n ],\n },\n)\n\nprint(\"\"\"\nNOTE: yfinance is not affiliated, endorsed, or vetted by Yahoo, Inc.\n\nYou should refer to Yahoo!'s terms of use for details on your rights\nto use the actual data downloaded.\"\"\")\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n#\n# yfinance - market data downloader\n# 
https://github.com/ranaroussi/yfinance\n\n\"\"\"yfinance - market data downloader\"\"\"\n\nfrom setuptools import setup, find_packages\n# from codecs import open\nimport io\nfrom os import path\n\n# --- get version ---\nversion = \"unknown\"\nwith open(\"yfinance/version.py\") as f:\n line = f.read().strip()\n version = line.replace(\"version = \", \"\").replace('\"', '')\n# --- /get version ---\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith io.open(path.join(here, 'README.md'), encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name='yfinance',\n version=version,\n description='Download market data from Yahoo! Finance API',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/ranaroussi/yfinance',\n author='Ran Aroussi',\n author_email='[email protected]',\n license='Apache',\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n # 'Development Status :: 3 - Alpha',\n # 'Development Status :: 4 - Beta',\n 'Development Status :: 5 - Production/Stable',\n\n\n 'Operating System :: OS Independent',\n 'Intended Audience :: Developers',\n 'Topic :: Office/Business :: Financial',\n 'Topic :: Office/Business :: Financial :: Investment',\n 'Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n # 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n platforms=['any'],\n keywords='pandas, yahoo finance, pandas datareader',\n packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),\n install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',\n 'requests>=2.26', 'multitasking>=0.0.7',\n 'lxml>=4.9.1', 'appdirs>=1.4.4'],\n entry_points={\n 'console_scripts': [\n 'sample=sample:main',\n ],\n },\n)\n\nprint(\"\"\"\nNOTE: yfinance is not affiliated, endorsed, or vetted by Yahoo, Inc.\n\nYou should refer to Yahoo!'s terms of use for details on your rights\nto use the actual data downloaded.\"\"\")\n", "path": "setup.py"}]} | 1,174 | 145 |
gh_patches_debug_12394 | rasdani/github-patches | git_diff | aws__aws-cli-341 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
argparse dependency is only needed for Python 2.6
We currently have a dependency on argparse because it's not in stdlib for Python 2.6. We should make this dependency specific to 2.6 and not install it for other Python versions.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup, find_packages
6
7 import awscli
8
9
10 requires = ['botocore>=0.16.0,<0.17.0',
11 'bcdoc>=0.9.0,<0.10.0',
12 'six>=1.1.0',
13 'colorama==0.2.5',
14 'argparse>=1.1',
15 'docutils>=0.10',
16 'rsa==3.1.1']
17
18
19 setup_options = dict(
20 name='awscli',
21 version=awscli.__version__,
22 description='Universal Command Line Environment for AWS.',
23 long_description=open('README.rst').read(),
24 author='Mitch Garnaat',
25 author_email='[email protected]',
26 url='http://aws.amazon.com/cli/',
27 scripts=['bin/aws', 'bin/aws.cmd',
28 'bin/aws_completer', 'bin/aws_zsh_completer.sh'],
29 packages=find_packages('.', exclude=['tests*']),
30 package_dir={'awscli': 'awscli'},
31 package_data={'awscli': ['data/*.json', 'examples/*/*']},
32 install_requires=requires,
33 license=open("LICENSE.txt").read(),
34 classifiers=(
35 'Development Status :: 5 - Production/Stable',
36 'Intended Audience :: Developers',
37 'Intended Audience :: System Administrators',
38 'Natural Language :: English',
39 'License :: OSI Approved :: Apache Software License',
40 'Programming Language :: Python',
41 'Programming Language :: Python :: 2.6',
42 'Programming Language :: Python :: 2.7',
43 'Programming Language :: Python :: 3',
44 'Programming Language :: Python :: 3.3',
45 ),
46 )
47
48 if 'py2exe' in sys.argv:
49 # This will actually give us a py2exe command.
50 import py2exe
51 # And we have some py2exe specific options.
52 setup_options['options'] = {
53 'py2exe': {
54 'optimize': 0,
55 'skip_archive': True,
56 'includes': ['ConfigParser', 'urllib', 'httplib',
57 'docutils.readers.standalone',
58 'docutils.parsers.rst',
59 'docutils.languages.en',
60 'xml.etree.ElementTree', 'HTMLParser',
61 'awscli.handlers'],
62 }
63 }
64 setup_options['console'] = ['bin/aws']
65
66
67 setup(**setup_options)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,5 +1,4 @@
#!/usr/bin/env python
-import os
import sys
from setuptools import setup, find_packages
@@ -11,10 +10,14 @@
'bcdoc>=0.9.0,<0.10.0',
'six>=1.1.0',
'colorama==0.2.5',
- 'argparse>=1.1',
'docutils>=0.10',
'rsa==3.1.1']
+if sys.version_info[:2] == (2, 6):
+ # For python2.6 we have to require argparse since it
+ # was not in stdlib until 2.7.
+ requires.append('argparse>=1.1')
+
setup_options = dict(
name='awscli',
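In isolation, the technique this patch applies — building `install_requires` conditionally on the interpreter version — is a sketch like the following (the package list is abbreviated and most `setup()` arguments are omitted):

```python
import sys

from setuptools import setup

requires = ['six>=1.1.0', 'docutils>=0.10']  # abbreviated for illustration

if sys.version_info[:2] == (2, 6):
    # argparse joined the standard library in Python 2.7, so only
    # Python 2.6 installs need to pull it from PyPI.
    requires.append('argparse>=1.1')

setup(name='awscli', install_requires=requires)  # remaining arguments omitted
```

Newer packaging tooling can express the same constraint declaratively with a PEP 508 environment marker such as `'argparse>=1.1; python_version == "2.6"'`, but the runtime check above matches the style this `setup.py` already uses.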
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,5 +1,4 @@\n #!/usr/bin/env python\n-import os\n import sys\n \n from setuptools import setup, find_packages\n@@ -11,10 +10,14 @@\n 'bcdoc>=0.9.0,<0.10.0',\n 'six>=1.1.0',\n 'colorama==0.2.5',\n- 'argparse>=1.1',\n 'docutils>=0.10',\n 'rsa==3.1.1']\n \n+if sys.version_info[:2] == (2, 6):\n+ # For python2.6 we have to require argparse since it\n+ # was not in stdlib until 2.7.\n+ requires.append('argparse>=1.1')\n+\n \n setup_options = dict(\n name='awscli',\n", "issue": "argparse dependency is only needed for Python 2.6\nWe currently have a dependency on argparse because it's not in stdlib for Python 2.6. We should make this dependency specific to 2.6 and not install it for other Python versions.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\nimport awscli\n\n\nrequires = ['botocore>=0.16.0,<0.17.0',\n 'bcdoc>=0.9.0,<0.10.0',\n 'six>=1.1.0',\n 'colorama==0.2.5',\n 'argparse>=1.1',\n 'docutils>=0.10',\n 'rsa==3.1.1']\n\n\nsetup_options = dict(\n name='awscli',\n version=awscli.__version__,\n description='Universal Command Line Environment for AWS.',\n long_description=open('README.rst').read(),\n author='Mitch Garnaat',\n author_email='[email protected]',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh'],\n packages=find_packages('.', exclude=['tests*']),\n package_dir={'awscli': 'awscli'},\n package_data={'awscli': ['data/*.json', 'examples/*/*']},\n install_requires=requires,\n license=open(\"LICENSE.txt\").read(),\n classifiers=(\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n ),\n)\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'includes': ['ConfigParser', 'urllib', 'httplib',\n 'docutils.readers.standalone',\n 'docutils.parsers.rst',\n 'docutils.languages.en',\n 'xml.etree.ElementTree', 'HTMLParser',\n 'awscli.handlers'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport sys\n\nfrom setuptools import setup, find_packages\n\nimport awscli\n\n\nrequires = ['botocore>=0.16.0,<0.17.0',\n 'bcdoc>=0.9.0,<0.10.0',\n 'six>=1.1.0',\n 'colorama==0.2.5',\n 'docutils>=0.10',\n 'rsa==3.1.1']\n\nif sys.version_info[:2] == (2, 6):\n # For python2.6 we have to require argparse since it\n # was not in stdlib until 2.7.\n requires.append('argparse>=1.1')\n\n\nsetup_options = dict(\n name='awscli',\n version=awscli.__version__,\n description='Universal Command Line Environment for AWS.',\n long_description=open('README.rst').read(),\n author='Mitch Garnaat',\n author_email='[email protected]',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh'],\n packages=find_packages('.', exclude=['tests*']),\n package_dir={'awscli': 
'awscli'},\n package_data={'awscli': ['data/*.json', 'examples/*/*']},\n install_requires=requires,\n license=open(\"LICENSE.txt\").read(),\n classifiers=(\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n ),\n)\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'includes': ['ConfigParser', 'urllib', 'httplib',\n 'docutils.readers.standalone',\n 'docutils.parsers.rst',\n 'docutils.languages.en',\n 'xml.etree.ElementTree', 'HTMLParser',\n 'awscli.handlers'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}]} | 966 | 205 |
gh_patches_debug_11235 | rasdani/github-patches | git_diff | saleor__saleor-5311 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken multiple interface notation in schema
### What I'm trying to achieve
To use Apollo tooling to generate TS types for the application queries. However, it fails because Saleor's schema uses a comma as the interface separator instead of an ampersand. More: https://github.com/apollographql/apollo-tooling/issues/434
### Steps to reproduce the problem
1. Go to mirumee/saleor-dashboard repository and clone it
2. Copy schema from core to dashboard
3. `npm run build-types`
4. Notice that it fails on type definitions that implement multiple interfaces.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/management/commands/get_graphql_schema.py`
Content:
```
1 from django.core.management.base import BaseCommand
2 from graphql import print_schema
3
4 from ...api import schema
5
6
7 class Command(BaseCommand):
8 help = "Writes SDL for GraphQL API schema to stdout"
9
10 def handle(self, *args, **options):
11 self.stdout.write(print_schema(schema))
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/management/commands/get_graphql_schema.py b/saleor/graphql/management/commands/get_graphql_schema.py
--- a/saleor/graphql/management/commands/get_graphql_schema.py
+++ b/saleor/graphql/management/commands/get_graphql_schema.py
@@ -8,4 +8,14 @@
help = "Writes SDL for GraphQL API schema to stdout"
def handle(self, *args, **options):
- self.stdout.write(print_schema(schema))
+ """Support multiple interface notation in schema for Apollo tooling.
+
+ In `graphql-core` V2 separator for interaces is `,`.
+ Apollo tooling to generate TypeScript types using `&` as interfaces separator.
+ https://github.com/graphql-python/graphql-core/pull/258
+ """
+ printed_schema = print_schema(schema)
+ for line in printed_schema.splitlines():
+ if "implements" in line:
+ line = line.replace(",", " &")
+ self.stdout.write(f"{line}\n")
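Stripped of the management-command plumbing, the rewrite the patch performs over the printed SDL reduces to a sketch like this (the helper name is illustrative):

```python
from graphql import print_schema

def apollo_compatible_sdl(schema):
    """Rewrite graphql-core v2's comma-separated interface lists
    ("implements A, B") into Apollo's ampersand form ("implements A & B")."""
    lines = []
    for line in print_schema(schema).splitlines():
        if "implements" in line:
            line = line.replace(",", " &")
        lines.append(line)
    return "\n".join(lines)
```

Replacing every comma on an `implements` line works here because the type-definition headers that graphql-core v2 prints contain no other commas.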
| {"golden_diff": "diff --git a/saleor/graphql/management/commands/get_graphql_schema.py b/saleor/graphql/management/commands/get_graphql_schema.py\n--- a/saleor/graphql/management/commands/get_graphql_schema.py\n+++ b/saleor/graphql/management/commands/get_graphql_schema.py\n@@ -8,4 +8,14 @@\n help = \"Writes SDL for GraphQL API schema to stdout\"\n \n def handle(self, *args, **options):\n- self.stdout.write(print_schema(schema))\n+ \"\"\"Support multiple interface notation in schema for Apollo tooling.\n+\n+ In `graphql-core` V2 separator for interaces is `,`.\n+ Apollo tooling to generate TypeScript types using `&` as interfaces separator.\n+ https://github.com/graphql-python/graphql-core/pull/258\n+ \"\"\"\n+ printed_schema = print_schema(schema)\n+ for line in printed_schema.splitlines():\n+ if \"implements\" in line:\n+ line = line.replace(\",\", \" &\")\n+ self.stdout.write(f\"{line}\\n\")\n", "issue": "Broken multiple interface notation in schema\n### What I'm trying to achieve\r\nTo use Apollo tooling to generate TS types for the application queries. However, it fails because Saleor's schema uses comma as a separator instead of ampersand. More: https://github.com/apollographql/apollo-tooling/issues/434 \r\n\r\n### Steps to reproduce the problem\r\n1. Go to mirumee/saleor-dashboard repository and clone it\r\n2. Copy schema from core to dashboard\r\n3. `npm run build-types`\r\n4. Notice that it fails at multiple interface implementation.\n", "before_files": [{"content": "from django.core.management.base import BaseCommand\nfrom graphql import print_schema\n\nfrom ...api import schema\n\n\nclass Command(BaseCommand):\n help = \"Writes SDL for GraphQL API schema to stdout\"\n\n def handle(self, *args, **options):\n self.stdout.write(print_schema(schema))\n", "path": "saleor/graphql/management/commands/get_graphql_schema.py"}], "after_files": [{"content": "from django.core.management.base import BaseCommand\nfrom graphql import print_schema\n\nfrom ...api import schema\n\n\nclass Command(BaseCommand):\n help = \"Writes SDL for GraphQL API schema to stdout\"\n\n def handle(self, *args, **options):\n \"\"\"Support multiple interface notation in schema for Apollo tooling.\n\n In `graphql-core` V2 separator for interaces is `,`.\n Apollo tooling to generate TypeScript types using `&` as interfaces separator.\n https://github.com/graphql-python/graphql-core/pull/258\n \"\"\"\n printed_schema = print_schema(schema)\n for line in printed_schema.splitlines():\n if \"implements\" in line:\n line = line.replace(\",\", \" &\")\n self.stdout.write(f\"{line}\\n\")\n", "path": "saleor/graphql/management/commands/get_graphql_schema.py"}]} | 464 | 226 |
gh_patches_debug_22720 | rasdani/github-patches | git_diff | privacyidea__privacyidea-3675 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error 904 is still shown with policy Action login_mode: privacyIDEA
privacyidea 3.8
ubuntu 20
Since 3.7 the error message "ERR904: The user can not be found in any resolver in this realm!" is no longer shown when you enter a wrong username.
But it is still shown if you have this policy:
Scope: webui
Action: login_mode: privacyIDEA
If you instead have the policy:
Scope: webui
Action: login_mode: userstore
then the error (ERR904: The user can not be found in any resolver in this realm!) is suppressed as expected.
Kind regards
Sebastien
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `privacyidea/lib/auth.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # 2015-11-03 Cornelius Kölbel <[email protected]>
4 # Add check if an admin user exists
5 # 2014-12-15 Cornelius Kölbel, [email protected]
6 # Initial creation
7 #
8 # (c) Cornelius Kölbel
9 # Info: http://www.privacyidea.org
10 #
11 # This code is free software; you can redistribute it and/or
12 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
13 # License as published by the Free Software Foundation; either
14 # version 3 of the License, or any later version.
15 #
16 # This code is distributed in the hope that it will be useful,
17 # but WITHOUT ANY WARRANTY; without even the implied warranty of
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
20 #
21 # You should have received a copy of the GNU Affero General Public
22 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
23 #
24 from privacyidea.models import Admin
25 from privacyidea.lib.token import check_user_pass
26 from privacyidea.lib.policydecorators import libpolicy, login_mode
27 from privacyidea.lib.crypto import hash_with_pepper, verify_with_pepper
28 from privacyidea.lib.utils import fetch_one_resource
29
30
31 class ROLE(object):
32 ADMIN = "admin"
33 USER = "user"
34 VALIDATE = "validate"
35
36
37 def verify_db_admin(username, password):
38 """
39 This function is used to verify the username and the password against the
40 database table "Admin".
41 :param username: The administrator username
42 :param password: The password
43 :return: True if password is correct for the admin
44 :rtype: bool
45 """
46 success = False
47 qa = Admin.query.filter(Admin.username == username).first()
48 if qa:
49 success = verify_with_pepper(qa.password, password)
50
51 return success
52
53
54 def db_admin_exist(username):
55 """
56 Checks if a local admin in the database exists
57
58 :param username: The username of the admin
59 :return: True, if exist
60 """
61 return bool(get_db_admin(username))
62
63
64 def create_db_admin(app, username, email=None, password=None):
65 pw_dig = None
66 if password:
67 pw_dig = hash_with_pepper(password)
68 user = Admin(email=email, username=username, password=pw_dig)
69 user.save()
70
71
72 def list_db_admin():
73 admins = Admin.query.all()
74 print("Name \t email")
75 print(30*"=")
76 for admin in admins:
77 print("{0!s} \t {1!s}".format(admin.username, admin.email))
78
79
80 def get_db_admins():
81 admins = Admin.query.all()
82 return admins
83
84
85 def get_db_admin(username):
86 return Admin.query.filter(Admin.username == username).first()
87
88
89 def delete_db_admin(username):
90 print("Deleting admin {0!s}".format(username))
91 fetch_one_resource(Admin, username=username).delete()
92
93
94 @libpolicy(login_mode)
95 def check_webui_user(user_obj,
96 password,
97 options=None,
98 superuser_realms=None,
99 check_otp=False):
100 """
101 This function is used to authenticate the user at the web ui.
102 It checks against the userstore or against OTP/privacyidea (check_otp).
103 It returns a tuple of
104
105 * true/false if the user authenticated successfully
106 * the role of the user
107 * the "detail" dictionary of the response
108
109 :param user_obj: The user who tries to authenticate
110 :type user_obj: User Object
111 :param password: Password, static and or OTP
112 :param options: additional options like g and clientip
113 :type options: dict
114 :param superuser_realms: list of realms, that contain admins
115 :type superuser_realms: list
116 :param check_otp: If set, the user is not authenticated against the
117 userstore but against privacyidea
118 :return: tuple of bool, string and dict/None
119 """
120 options = options or {}
121 superuser_realms = superuser_realms or []
122 user_auth = False
123 role = ROLE.USER
124 details = None
125
126 if check_otp:
127 # check if the given password matches an OTP token
128 check, details = check_user_pass(user_obj, password, options=options)
129 details["loginmode"] = "privacyIDEA"
130 if check:
131 user_auth = True
132 else:
133 # check the password of the user against the userstore
134 if user_obj.check_password(password):
135 user_auth = True
136
137 # If the realm is in the SUPERUSER_REALM then the authorization role
138 # is risen to "admin".
139 if user_obj.realm in superuser_realms:
140 role = ROLE.ADMIN
141
142 return user_auth, role, details
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/privacyidea/lib/auth.py b/privacyidea/lib/auth.py
--- a/privacyidea/lib/auth.py
+++ b/privacyidea/lib/auth.py
@@ -26,6 +26,10 @@
from privacyidea.lib.policydecorators import libpolicy, login_mode
from privacyidea.lib.crypto import hash_with_pepper, verify_with_pepper
from privacyidea.lib.utils import fetch_one_resource
+import logging
+
+log = logging.getLogger(__name__)
+
class ROLE(object):
@@ -125,10 +129,13 @@
if check_otp:
# check if the given password matches an OTP token
- check, details = check_user_pass(user_obj, password, options=options)
- details["loginmode"] = "privacyIDEA"
- if check:
- user_auth = True
+ try:
+ check, details = check_user_pass(user_obj, password, options=options)
+ details["loginmode"] = "privacyIDEA"
+ if check:
+ user_auth = True
+ except Exception as e:
+ log.debug("Error authenticating user against privacyIDEA: {0!r}".format(e))
else:
# check the password of the user against the userstore
if user_obj.check_password(password):
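The essence of the change: a failure inside the OTP check (such as the resolver lookup behind ERR904) is now logged at debug level and treated like any other failed login instead of propagating to the client. Reduced to a sketch (the function name is illustrative; the imports match `privacyidea/lib/auth.py`):

```python
import logging

from privacyidea.lib.token import check_user_pass

log = logging.getLogger(__name__)

def privacyidea_login_check(user_obj, password, options):
    """Guarded OTP check: resolver errors are logged, never leaked."""
    user_auth, details = False, None
    try:
        check, details = check_user_pass(user_obj, password, options=options)
        details["loginmode"] = "privacyIDEA"
        user_auth = bool(check)
    except Exception as e:
        # e.g. "user can not be found in any resolver" -- fail like a wrong password
        log.debug("Error authenticating user against privacyIDEA: {0!r}".format(e))
    return user_auth, details
```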
| {"golden_diff": "diff --git a/privacyidea/lib/auth.py b/privacyidea/lib/auth.py\n--- a/privacyidea/lib/auth.py\n+++ b/privacyidea/lib/auth.py\n@@ -26,6 +26,10 @@\n from privacyidea.lib.policydecorators import libpolicy, login_mode\n from privacyidea.lib.crypto import hash_with_pepper, verify_with_pepper\n from privacyidea.lib.utils import fetch_one_resource\n+import logging\n+\n+log = logging.getLogger(__name__)\n+\n \n \n class ROLE(object):\n@@ -125,10 +129,13 @@\n \n if check_otp:\n # check if the given password matches an OTP token\n- check, details = check_user_pass(user_obj, password, options=options)\n- details[\"loginmode\"] = \"privacyIDEA\"\n- if check:\n- user_auth = True\n+ try:\n+ check, details = check_user_pass(user_obj, password, options=options)\n+ details[\"loginmode\"] = \"privacyIDEA\"\n+ if check:\n+ user_auth = True\n+ except Exception as e:\n+ log.debug(\"Error authenticating user against privacyIDEA: {0!r}\".format(e))\n else:\n # check the password of the user against the userstore\n if user_obj.check_password(password):\n", "issue": "Error 904 is still shown with policy Action login_mode: privacyIDEA \n\r\nprivacyidea 3.8\r\nubuntu 20\r\n\r\nSince 3.7 the error message :\u00b4ERR904: The user can not be found in any resolver in this realm!\u00a8 is deactivated when you enter a wrong username.\r\n\r\nBut this is still true if you have a policy :\r\nScope webui\r\nAction login_mode: privacyIDEA \r\n\r\nif you have a policy:\r\nScope webui\r\nlogin_mode: userstore \r\nThen the error (ERR904: The user can not be found in any resolver in this realm!) Is deactivated.\r\n\r\nKind regards\r\nSebastien\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2015-11-03 Cornelius K\u00f6lbel <[email protected]>\n# Add check if an admin user exists\n# 2014-12-15 Cornelius K\u00f6lbel, [email protected]\n# Initial creation\n#\n# (c) Cornelius K\u00f6lbel\n# Info: http://www.privacyidea.org\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\nfrom privacyidea.models import Admin\nfrom privacyidea.lib.token import check_user_pass\nfrom privacyidea.lib.policydecorators import libpolicy, login_mode\nfrom privacyidea.lib.crypto import hash_with_pepper, verify_with_pepper\nfrom privacyidea.lib.utils import fetch_one_resource\n\n\nclass ROLE(object):\n ADMIN = \"admin\"\n USER = \"user\"\n VALIDATE = \"validate\"\n\n\ndef verify_db_admin(username, password):\n \"\"\"\n This function is used to verify the username and the password against the\n database table \"Admin\".\n :param username: The administrator username\n :param password: The password\n :return: True if password is correct for the admin\n :rtype: bool\n \"\"\"\n success = False\n qa = Admin.query.filter(Admin.username == username).first()\n if qa:\n success = verify_with_pepper(qa.password, password)\n\n return success\n\n\ndef db_admin_exist(username):\n \"\"\"\n Checks if a local admin in the database exists\n\n :param username: The username of the admin\n :return: True, if exist\n \"\"\"\n return bool(get_db_admin(username))\n\n\ndef create_db_admin(app, username, email=None, password=None):\n pw_dig = None\n if password:\n pw_dig = hash_with_pepper(password)\n user = Admin(email=email, username=username, password=pw_dig)\n user.save()\n\n\ndef list_db_admin():\n admins = Admin.query.all()\n print(\"Name \\t email\")\n print(30*\"=\")\n for admin in admins:\n print(\"{0!s} \\t {1!s}\".format(admin.username, admin.email))\n\n\ndef get_db_admins():\n admins = Admin.query.all()\n return admins\n\n\ndef get_db_admin(username):\n return Admin.query.filter(Admin.username == username).first()\n\n\ndef delete_db_admin(username):\n print(\"Deleting admin {0!s}\".format(username))\n fetch_one_resource(Admin, username=username).delete()\n\n\n@libpolicy(login_mode)\ndef check_webui_user(user_obj,\n password,\n options=None,\n superuser_realms=None,\n check_otp=False):\n \"\"\"\n This function is used to authenticate the user at the web ui.\n It checks against the userstore or against OTP/privacyidea (check_otp).\n It returns a tuple of\n\n * true/false if the user authenticated successfully\n * the role of the user\n * the \"detail\" dictionary of the response\n\n :param user_obj: The user who tries to authenticate\n :type user_obj: User Object\n :param password: Password, static and or OTP\n :param options: additional options like g and clientip\n :type options: dict\n :param superuser_realms: list of realms, that contain admins\n :type superuser_realms: list\n :param check_otp: If set, the user is not authenticated against the\n userstore but against privacyidea\n :return: tuple of bool, string and dict/None\n \"\"\"\n options = options or {}\n superuser_realms = superuser_realms or []\n user_auth = False\n role = ROLE.USER\n details = None\n\n if check_otp:\n # check if the given password matches an OTP token\n check, details = check_user_pass(user_obj, password, options=options)\n details[\"loginmode\"] = \"privacyIDEA\"\n if check:\n user_auth = True\n else:\n # check the password of the user against the userstore\n if user_obj.check_password(password):\n user_auth = True\n\n # If the realm is in the SUPERUSER_REALM then the authorization role\n # is risen to \"admin\".\n if user_obj.realm in superuser_realms:\n role = ROLE.ADMIN\n\n return user_auth, role, details\n", "path": "privacyidea/lib/auth.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2015-11-03 Cornelius K\u00f6lbel <[email protected]>\n# Add check 
if an admin user exists\n# 2014-12-15 Cornelius K\u00f6lbel, [email protected]\n# Initial creation\n#\n# (c) Cornelius K\u00f6lbel\n# Info: http://www.privacyidea.org\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\nfrom privacyidea.models import Admin\nfrom privacyidea.lib.token import check_user_pass\nfrom privacyidea.lib.policydecorators import libpolicy, login_mode\nfrom privacyidea.lib.crypto import hash_with_pepper, verify_with_pepper\nfrom privacyidea.lib.utils import fetch_one_resource\nimport logging\n\nlog = logging.getLogger(__name__)\n\n\n\nclass ROLE(object):\n ADMIN = \"admin\"\n USER = \"user\"\n VALIDATE = \"validate\"\n\n\ndef verify_db_admin(username, password):\n \"\"\"\n This function is used to verify the username and the password against the\n database table \"Admin\".\n :param username: The administrator username\n :param password: The password\n :return: True if password is correct for the admin\n :rtype: bool\n \"\"\"\n success = False\n qa = Admin.query.filter(Admin.username == username).first()\n if qa:\n success = verify_with_pepper(qa.password, password)\n\n return success\n\n\ndef db_admin_exist(username):\n \"\"\"\n Checks if a local admin in the database exists\n\n :param username: The username of the admin\n :return: True, if exist\n \"\"\"\n return bool(get_db_admin(username))\n\n\ndef create_db_admin(app, username, email=None, password=None):\n pw_dig = None\n if password:\n pw_dig = hash_with_pepper(password)\n user = Admin(email=email, username=username, password=pw_dig)\n user.save()\n\n\ndef list_db_admin():\n admins = Admin.query.all()\n print(\"Name \\t email\")\n print(30*\"=\")\n for admin in admins:\n print(\"{0!s} \\t {1!s}\".format(admin.username, admin.email))\n\n\ndef get_db_admins():\n admins = Admin.query.all()\n return admins\n\n\ndef get_db_admin(username):\n return Admin.query.filter(Admin.username == username).first()\n\n\ndef delete_db_admin(username):\n print(\"Deleting admin {0!s}\".format(username))\n fetch_one_resource(Admin, username=username).delete()\n\n\n@libpolicy(login_mode)\ndef check_webui_user(user_obj,\n password,\n options=None,\n superuser_realms=None,\n check_otp=False):\n \"\"\"\n This function is used to authenticate the user at the web ui.\n It checks against the userstore or against OTP/privacyidea (check_otp).\n It returns a tuple of\n\n * true/false if the user authenticated successfully\n * the role of the user\n * the \"detail\" dictionary of the response\n\n :param user_obj: The user who tries to authenticate\n :type user_obj: User Object\n :param password: Password, static and or OTP\n :param options: additional options like g and clientip\n :type options: dict\n :param superuser_realms: list of realms, that contain admins\n :type superuser_realms: list\n :param check_otp: If set, the user is not authenticated against the\n userstore but against privacyidea\n :return: tuple of bool, string and dict/None\n 
\"\"\"\n options = options or {}\n superuser_realms = superuser_realms or []\n user_auth = False\n role = ROLE.USER\n details = None\n\n if check_otp:\n # check if the given password matches an OTP token\n try:\n check, details = check_user_pass(user_obj, password, options=options)\n details[\"loginmode\"] = \"privacyIDEA\"\n if check:\n user_auth = True\n except Exception as e:\n log.debug(\"Error authenticating user against privacyIDEA: {0!r}\".format(e))\n else:\n # check the password of the user against the userstore\n if user_obj.check_password(password):\n user_auth = True\n\n # If the realm is in the SUPERUSER_REALM then the authorization role\n # is risen to \"admin\".\n if user_obj.realm in superuser_realms:\n role = ROLE.ADMIN\n\n return user_auth, role, details\n", "path": "privacyidea/lib/auth.py"}]} | 1,801 | 283 |
gh_patches_debug_14761 | rasdani/github-patches | git_diff | iterative__dvc-7965 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add TOML support for metrics
Right now, there is only TOML file support for params files. We need to add TOML support for metrics as well.
Here's a [link to the Discord question](https://discord.com/channels/485586884165107732/485596304961962003/865974923079319563) that brought this up.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/metrics/show.py`
Content:
```
1 import logging
2 import os
3 from typing import List
4
5 from scmrepo.exceptions import SCMError
6
7 from dvc.fs.dvc import DvcFileSystem
8 from dvc.output import Output
9 from dvc.repo import locked
10 from dvc.repo.collect import StrPaths, collect
11 from dvc.repo.live import summary_fs_path
12 from dvc.scm import NoSCMError
13 from dvc.utils import error_handler, errored_revisions, onerror_collect
14 from dvc.utils.collections import ensure_list
15 from dvc.utils.serialize import load_yaml
16
17 logger = logging.getLogger(__name__)
18
19
20 def _is_metric(out: Output) -> bool:
21 return bool(out.metric) or bool(out.live)
22
23
24 def _to_fs_paths(metrics: List[Output]) -> StrPaths:
25 result = []
26 for out in metrics:
27 if out.metric:
28 result.append(out.repo.dvcfs.from_os_path(out.fs_path))
29 elif out.live:
30 fs_path = summary_fs_path(out)
31 if fs_path:
32 result.append(out.repo.dvcfs.from_os_path(fs_path))
33 return result
34
35
36 def _collect_metrics(repo, targets, revision, recursive):
37 metrics, fs_paths = collect(
38 repo,
39 targets=targets,
40 output_filter=_is_metric,
41 recursive=recursive,
42 rev=revision,
43 )
44 return _to_fs_paths(metrics) + list(fs_paths)
45
46
47 def _extract_metrics(metrics, path, rev):
48 if isinstance(metrics, (int, float)):
49 return metrics
50
51 if not isinstance(metrics, dict):
52 return None
53
54 ret = {}
55 for key, val in metrics.items():
56 m = _extract_metrics(val, path, rev)
57 if m not in (None, {}):
58 ret[key] = m
59 else:
60 logger.debug(
61 "Could not parse '%s' metric from '%s' at '%s' "
62 "due to its unsupported type: '%s'",
63 key,
64 path,
65 rev,
66 type(val).__name__,
67 )
68
69 return ret
70
71
72 @error_handler
73 def _read_metric(path, fs, rev, **kwargs):
74 val = load_yaml(path, fs=fs)
75 val = _extract_metrics(val, path, rev)
76 return val or {}
77
78
79 def _read_metrics(repo, metrics, rev, onerror=None):
80 fs = DvcFileSystem(repo=repo)
81
82 relpath = ""
83 if repo.root_dir != repo.fs.path.getcwd():
84 relpath = repo.fs.path.relpath(repo.root_dir, repo.fs.path.getcwd())
85
86 res = {}
87 for metric in metrics:
88 if not fs.isfile(metric):
89 continue
90
91 res[os.path.join(relpath, *fs.path.parts(metric))] = _read_metric(
92 metric, fs, rev, onerror=onerror
93 )
94
95 return res
96
97
98 def _gather_metrics(repo, targets, rev, recursive, onerror=None):
99 metrics = _collect_metrics(repo, targets, rev, recursive)
100 return _read_metrics(repo, metrics, rev, onerror=onerror)
101
102
103 @locked
104 def show(
105 repo,
106 targets=None,
107 all_branches=False,
108 all_tags=False,
109 recursive=False,
110 revs=None,
111 all_commits=False,
112 onerror=None,
113 ):
114 if onerror is None:
115 onerror = onerror_collect
116
117 targets = ensure_list(targets)
118 targets = [repo.dvcfs.from_os_path(target) for target in targets]
119
120 res = {}
121 for rev in repo.brancher(
122 revs=revs,
123 all_branches=all_branches,
124 all_tags=all_tags,
125 all_commits=all_commits,
126 ):
127 res[rev] = error_handler(_gather_metrics)(
128 repo, targets, rev, recursive, onerror=onerror
129 )
130
131 # Hide workspace metrics if they are the same as in the active branch
132 try:
133 active_branch = repo.scm.active_branch()
134 except (SCMError, NoSCMError):
135 # SCMError - detached head
136 # NoSCMError - no repo case
137 pass
138 else:
139 if res.get("workspace") == res.get(active_branch):
140 res.pop("workspace", None)
141
142 errored = errored_revisions(res)
143 if errored:
144 from dvc.ui import ui
145
146 ui.error_write(
147 "DVC failed to load some metrics for following revisions:"
148 f" '{', '.join(errored)}'."
149 )
150
151 return res
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/repo/metrics/show.py b/dvc/repo/metrics/show.py
--- a/dvc/repo/metrics/show.py
+++ b/dvc/repo/metrics/show.py
@@ -12,7 +12,7 @@
from dvc.scm import NoSCMError
from dvc.utils import error_handler, errored_revisions, onerror_collect
from dvc.utils.collections import ensure_list
-from dvc.utils.serialize import load_yaml
+from dvc.utils.serialize import LOADERS
logger = logging.getLogger(__name__)
@@ -71,7 +71,9 @@
@error_handler
def _read_metric(path, fs, rev, **kwargs):
- val = load_yaml(path, fs=fs)
+ suffix = fs.path.suffix(path).lower()
+ loader = LOADERS[suffix]
+ val = loader(path, fs=fs)
val = _extract_metrics(val, path, rev)
return val or {}
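The fix piggybacks on the serializer registry that params files already use: `LOADERS` in `dvc.utils.serialize` maps a file suffix to the matching parser, so metrics gain TOML (and JSON) support without a dedicated code path. The dispatch in isolation (helper name is illustrative; in the patch it lives inline in `_read_metric`):

```python
from dvc.utils.serialize import LOADERS

def load_metric_file(path, fs):
    """Pick a parser by extension, e.g. ".yaml", ".json" or ".toml"."""
    suffix = fs.path.suffix(path).lower()
    loader = LOADERS[suffix]   # unknown extensions raise KeyError here
    return loader(path, fs=fs)
```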
| {"golden_diff": "diff --git a/dvc/repo/metrics/show.py b/dvc/repo/metrics/show.py\n--- a/dvc/repo/metrics/show.py\n+++ b/dvc/repo/metrics/show.py\n@@ -12,7 +12,7 @@\n from dvc.scm import NoSCMError\n from dvc.utils import error_handler, errored_revisions, onerror_collect\n from dvc.utils.collections import ensure_list\n-from dvc.utils.serialize import load_yaml\n+from dvc.utils.serialize import LOADERS\n \n logger = logging.getLogger(__name__)\n \n@@ -71,7 +71,9 @@\n \n @error_handler\n def _read_metric(path, fs, rev, **kwargs):\n- val = load_yaml(path, fs=fs)\n+ suffix = fs.path.suffix(path).lower()\n+ loader = LOADERS[suffix]\n+ val = loader(path, fs=fs)\n val = _extract_metrics(val, path, rev)\n return val or {}\n", "issue": "Add TOML support for metrics\nRight now, there is only TOML file support for params files. We need to add TOML support for metrics as well.\r\n\r\nHere's a [link to the Discord question](https://discord.com/channels/485586884165107732/485596304961962003/865974923079319563) that brought this up.\n", "before_files": [{"content": "import logging\nimport os\nfrom typing import List\n\nfrom scmrepo.exceptions import SCMError\n\nfrom dvc.fs.dvc import DvcFileSystem\nfrom dvc.output import Output\nfrom dvc.repo import locked\nfrom dvc.repo.collect import StrPaths, collect\nfrom dvc.repo.live import summary_fs_path\nfrom dvc.scm import NoSCMError\nfrom dvc.utils import error_handler, errored_revisions, onerror_collect\nfrom dvc.utils.collections import ensure_list\nfrom dvc.utils.serialize import load_yaml\n\nlogger = logging.getLogger(__name__)\n\n\ndef _is_metric(out: Output) -> bool:\n return bool(out.metric) or bool(out.live)\n\n\ndef _to_fs_paths(metrics: List[Output]) -> StrPaths:\n result = []\n for out in metrics:\n if out.metric:\n result.append(out.repo.dvcfs.from_os_path(out.fs_path))\n elif out.live:\n fs_path = summary_fs_path(out)\n if fs_path:\n result.append(out.repo.dvcfs.from_os_path(fs_path))\n return result\n\n\ndef _collect_metrics(repo, targets, revision, recursive):\n metrics, fs_paths = collect(\n repo,\n targets=targets,\n output_filter=_is_metric,\n recursive=recursive,\n rev=revision,\n )\n return _to_fs_paths(metrics) + list(fs_paths)\n\n\ndef _extract_metrics(metrics, path, rev):\n if isinstance(metrics, (int, float)):\n return metrics\n\n if not isinstance(metrics, dict):\n return None\n\n ret = {}\n for key, val in metrics.items():\n m = _extract_metrics(val, path, rev)\n if m not in (None, {}):\n ret[key] = m\n else:\n logger.debug(\n \"Could not parse '%s' metric from '%s' at '%s' \"\n \"due to its unsupported type: '%s'\",\n key,\n path,\n rev,\n type(val).__name__,\n )\n\n return ret\n\n\n@error_handler\ndef _read_metric(path, fs, rev, **kwargs):\n val = load_yaml(path, fs=fs)\n val = _extract_metrics(val, path, rev)\n return val or {}\n\n\ndef _read_metrics(repo, metrics, rev, onerror=None):\n fs = DvcFileSystem(repo=repo)\n\n relpath = \"\"\n if repo.root_dir != repo.fs.path.getcwd():\n relpath = repo.fs.path.relpath(repo.root_dir, repo.fs.path.getcwd())\n\n res = {}\n for metric in metrics:\n if not fs.isfile(metric):\n continue\n\n res[os.path.join(relpath, *fs.path.parts(metric))] = _read_metric(\n metric, fs, rev, onerror=onerror\n )\n\n return res\n\n\ndef _gather_metrics(repo, targets, rev, recursive, onerror=None):\n metrics = _collect_metrics(repo, targets, rev, recursive)\n return _read_metrics(repo, metrics, rev, onerror=onerror)\n\n\n@locked\ndef show(\n repo,\n targets=None,\n all_branches=False,\n all_tags=False,\n 
recursive=False,\n revs=None,\n all_commits=False,\n onerror=None,\n):\n if onerror is None:\n onerror = onerror_collect\n\n targets = ensure_list(targets)\n targets = [repo.dvcfs.from_os_path(target) for target in targets]\n\n res = {}\n for rev in repo.brancher(\n revs=revs,\n all_branches=all_branches,\n all_tags=all_tags,\n all_commits=all_commits,\n ):\n res[rev] = error_handler(_gather_metrics)(\n repo, targets, rev, recursive, onerror=onerror\n )\n\n # Hide workspace metrics if they are the same as in the active branch\n try:\n active_branch = repo.scm.active_branch()\n except (SCMError, NoSCMError):\n # SCMError - detached head\n # NoSCMError - no repo case\n pass\n else:\n if res.get(\"workspace\") == res.get(active_branch):\n res.pop(\"workspace\", None)\n\n errored = errored_revisions(res)\n if errored:\n from dvc.ui import ui\n\n ui.error_write(\n \"DVC failed to load some metrics for following revisions:\"\n f\" '{', '.join(errored)}'.\"\n )\n\n return res\n", "path": "dvc/repo/metrics/show.py"}], "after_files": [{"content": "import logging\nimport os\nfrom typing import List\n\nfrom scmrepo.exceptions import SCMError\n\nfrom dvc.fs.dvc import DvcFileSystem\nfrom dvc.output import Output\nfrom dvc.repo import locked\nfrom dvc.repo.collect import StrPaths, collect\nfrom dvc.repo.live import summary_fs_path\nfrom dvc.scm import NoSCMError\nfrom dvc.utils import error_handler, errored_revisions, onerror_collect\nfrom dvc.utils.collections import ensure_list\nfrom dvc.utils.serialize import LOADERS\n\nlogger = logging.getLogger(__name__)\n\n\ndef _is_metric(out: Output) -> bool:\n return bool(out.metric) or bool(out.live)\n\n\ndef _to_fs_paths(metrics: List[Output]) -> StrPaths:\n result = []\n for out in metrics:\n if out.metric:\n result.append(out.repo.dvcfs.from_os_path(out.fs_path))\n elif out.live:\n fs_path = summary_fs_path(out)\n if fs_path:\n result.append(out.repo.dvcfs.from_os_path(fs_path))\n return result\n\n\ndef _collect_metrics(repo, targets, revision, recursive):\n metrics, fs_paths = collect(\n repo,\n targets=targets,\n output_filter=_is_metric,\n recursive=recursive,\n rev=revision,\n )\n return _to_fs_paths(metrics) + list(fs_paths)\n\n\ndef _extract_metrics(metrics, path, rev):\n if isinstance(metrics, (int, float)):\n return metrics\n\n if not isinstance(metrics, dict):\n return None\n\n ret = {}\n for key, val in metrics.items():\n m = _extract_metrics(val, path, rev)\n if m not in (None, {}):\n ret[key] = m\n else:\n logger.debug(\n \"Could not parse '%s' metric from '%s' at '%s' \"\n \"due to its unsupported type: '%s'\",\n key,\n path,\n rev,\n type(val).__name__,\n )\n\n return ret\n\n\n@error_handler\ndef _read_metric(path, fs, rev, **kwargs):\n suffix = fs.path.suffix(path).lower()\n loader = LOADERS[suffix]\n val = loader(path, fs=fs)\n val = _extract_metrics(val, path, rev)\n return val or {}\n\n\ndef _read_metrics(repo, metrics, rev, onerror=None):\n fs = DvcFileSystem(repo=repo)\n\n relpath = \"\"\n if repo.root_dir != repo.fs.path.getcwd():\n relpath = repo.fs.path.relpath(repo.root_dir, repo.fs.path.getcwd())\n\n res = {}\n for metric in metrics:\n if not fs.isfile(metric):\n continue\n\n res[os.path.join(relpath, *fs.path.parts(metric))] = _read_metric(\n metric, fs, rev, onerror=onerror\n )\n\n return res\n\n\ndef _gather_metrics(repo, targets, rev, recursive, onerror=None):\n metrics = _collect_metrics(repo, targets, rev, recursive)\n return _read_metrics(repo, metrics, rev, onerror=onerror)\n\n\n@locked\ndef show(\n repo,\n targets=None,\n 
all_branches=False,\n all_tags=False,\n recursive=False,\n revs=None,\n all_commits=False,\n onerror=None,\n):\n if onerror is None:\n onerror = onerror_collect\n\n targets = ensure_list(targets)\n targets = [repo.dvcfs.from_os_path(target) for target in targets]\n\n res = {}\n for rev in repo.brancher(\n revs=revs,\n all_branches=all_branches,\n all_tags=all_tags,\n all_commits=all_commits,\n ):\n res[rev] = error_handler(_gather_metrics)(\n repo, targets, rev, recursive, onerror=onerror\n )\n\n # Hide workspace metrics if they are the same as in the active branch\n try:\n active_branch = repo.scm.active_branch()\n except (SCMError, NoSCMError):\n # SCMError - detached head\n # NoSCMError - no repo case\n pass\n else:\n if res.get(\"workspace\") == res.get(active_branch):\n res.pop(\"workspace\", None)\n\n errored = errored_revisions(res)\n if errored:\n from dvc.ui import ui\n\n ui.error_write(\n \"DVC failed to load some metrics for following revisions:\"\n f\" '{', '.join(errored)}'.\"\n )\n\n return res\n", "path": "dvc/repo/metrics/show.py"}]} | 1,684 | 211 |
gh_patches_debug_31073 | rasdani/github-patches | git_diff | fossasia__open-event-server-4162 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ticket-tag: remove GET for /ticket-tags
Parent issue #4101.
Related issue: #4119.
Make `/ticket-tags` POST only.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/ticket_tags.py`
Content:
```
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from marshmallow_jsonapi import fields
4 from sqlalchemy.orm.exc import NoResultFound
5 from flask_rest_jsonapi.exceptions import ObjectNotFound
6
7 from app.api.helpers.utilities import dasherize
8 from app.api.helpers.permissions import jwt_required
9 from app.models import db
10 from app.models.ticket import Ticket, TicketTag, ticket_tags_table
11 from app.models.event import Event
12 from app.api.helpers.db import safe_query
13 from app.api.helpers.utilities import require_relationship
14 from app.api.helpers.exceptions import ForbiddenException
15 from app.api.helpers.permission_manager import has_access
16
17
18 class TicketTagSchema(Schema):
19 """
20 Api schema for TicketTag Model
21 """
22
23 class Meta:
24 """
25 Meta class for TicketTag Api Schema
26 """
27 type_ = 'ticket-tag'
28 self_view = 'v1.ticket_tag_detail'
29 self_view_kwargs = {'id': '<id>'}
30 inflect = dasherize
31
32 id = fields.Str(dump_only=True)
33 name = fields.Str(allow_none=True)
34 tickets = Relationship(attribute='tickets',
35 self_view='v1.ticket_tag_ticket',
36 self_view_kwargs={'id': '<id>'},
37 related_view='v1.ticket_list',
38 related_view_kwargs={'ticket_tag_id': '<id>'},
39 schema='TicketSchema',
40 many=True,
41 type_='ticket')
42 event = Relationship(attribute='event',
43 self_view='v1.ticket_tag_event',
44 self_view_kwargs={'id': '<id>'},
45 related_view='v1.event_detail',
46 related_view_kwargs={'ticket_tag_id': '<id>'},
47 schema='EventSchema',
48 type_='event')
49
50
51 class TicketTagListPost(ResourceList):
52 """
53 List and create TicketTag
54 """
55 def before_post(self, args, kwargs, data):
56 """
57 before post method for checking required relationship
58 :param args:
59 :param kwargs:
60 :param data:
61 :return:
62 """
63 require_relationship(['event'], data)
64
65 if not has_access('is_coorganizer', event_id=data['event']):
66 raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')
67
68 def after_create_object(self, obj, data, view_kwargs):
69 """
70 method to add ticket tags and ticket in association table
71 :param obj:
72 :param data:
73 :param view_kwargs:
74 :return:
75 """
76 if 'tickets' in data:
77 ticket_ids = data['tickets']
78 for ticket_id in ticket_ids:
79 try:
80 ticket = Ticket.query.filter_by(id=ticket_id).one()
81 except NoResultFound:
82 raise ObjectNotFound({'parameter': 'ticket_id'},
83 "Ticket: {} not found".format(ticket_id))
84 else:
85 ticket.tags.append(obj)
86 self.session.commit()
87
88 schema = TicketTagSchema
89 data_layer = {'session': db.session,
90 'model': TicketTag,
91 'methods': {
92 'after_create_object': after_create_object
93 }}
94
95
96 class TicketTagList(ResourceList):
97 """
98 List TicketTags based on event_id or ticket_id
99 """
100 def query(self, view_kwargs):
101 """
102 method to query Ticket tags based on different params
103 :param view_kwargs:
104 :return:
105 """
106 query_ = self.session.query(TicketTag)
107 if view_kwargs.get('ticket_id'):
108 ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')
109 query_ = query_.join(ticket_tags_table).filter_by(ticket_id=ticket.id)
110 if view_kwargs.get('event_id'):
111 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
112 query_ = query_.join(Event).filter(Event.id == event.id)
113 elif view_kwargs.get('event_identifier'):
114 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
115 query_ = query_.join(Event).filter(Event.id == event.id)
116 return query_
117
118 view_kwargs = True
119 schema = TicketTagSchema
120 methods = ['GET', ]
121 data_layer = {'session': db.session,
122 'model': TicketTag,
123 'methods': {
124 'query': query
125 }}
126
127
128 class TicketTagDetail(ResourceDetail):
129 """
130 TicketTag detail by id
131 """
132 decorators = (jwt_required,)
133 schema = TicketTagSchema
134 data_layer = {'session': db.session,
135 'model': TicketTag}
136
137
138 class TicketTagRelationshipRequired(ResourceRelationship):
139 """
140 TicketTag Relationship
141 """
142 decorators = (jwt_required,)
143 methods = ['GET', 'PATCH']
144 schema = TicketTagSchema
145 data_layer = {'session': db.session,
146 'model': TicketTag}
147
148
149 class TicketTagRelationshipOptional(ResourceRelationship):
150 """
151 TicketTag Relationship
152 """
153 decorators = (jwt_required,)
154 schema = TicketTagSchema
155 data_layer = {'session': db.session,
156 'model': TicketTag}
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/ticket_tags.py b/app/api/ticket_tags.py
--- a/app/api/ticket_tags.py
+++ b/app/api/ticket_tags.py
@@ -1,8 +1,6 @@
from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
from marshmallow_jsonapi.flask import Schema, Relationship
from marshmallow_jsonapi import fields
-from sqlalchemy.orm.exc import NoResultFound
-from flask_rest_jsonapi.exceptions import ObjectNotFound
from app.api.helpers.utilities import dasherize
from app.api.helpers.permissions import jwt_required
@@ -65,32 +63,10 @@
if not has_access('is_coorganizer', event_id=data['event']):
raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')
- def after_create_object(self, obj, data, view_kwargs):
- """
- method to add ticket tags and ticket in association table
- :param obj:
- :param data:
- :param view_kwargs:
- :return:
- """
- if 'tickets' in data:
- ticket_ids = data['tickets']
- for ticket_id in ticket_ids:
- try:
- ticket = Ticket.query.filter_by(id=ticket_id).one()
- except NoResultFound:
- raise ObjectNotFound({'parameter': 'ticket_id'},
- "Ticket: {} not found".format(ticket_id))
- else:
- ticket.tags.append(obj)
- self.session.commit()
-
schema = TicketTagSchema
+ methods = ['POST', ]
data_layer = {'session': db.session,
- 'model': TicketTag,
- 'methods': {
- 'after_create_object': after_create_object
- }}
+ 'model': TicketTag}
class TicketTagList(ResourceList):
| {"golden_diff": "diff --git a/app/api/ticket_tags.py b/app/api/ticket_tags.py\n--- a/app/api/ticket_tags.py\n+++ b/app/api/ticket_tags.py\n@@ -1,8 +1,6 @@\n from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n from marshmallow_jsonapi.flask import Schema, Relationship\n from marshmallow_jsonapi import fields\n-from sqlalchemy.orm.exc import NoResultFound\n-from flask_rest_jsonapi.exceptions import ObjectNotFound\n \n from app.api.helpers.utilities import dasherize\n from app.api.helpers.permissions import jwt_required\n@@ -65,32 +63,10 @@\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n \n- def after_create_object(self, obj, data, view_kwargs):\n- \"\"\"\n- method to add ticket tags and ticket in association table\n- :param obj:\n- :param data:\n- :param view_kwargs:\n- :return:\n- \"\"\"\n- if 'tickets' in data:\n- ticket_ids = data['tickets']\n- for ticket_id in ticket_ids:\n- try:\n- ticket = Ticket.query.filter_by(id=ticket_id).one()\n- except NoResultFound:\n- raise ObjectNotFound({'parameter': 'ticket_id'},\n- \"Ticket: {} not found\".format(ticket_id))\n- else:\n- ticket.tags.append(obj)\n- self.session.commit()\n-\n schema = TicketTagSchema\n+ methods = ['POST', ]\n data_layer = {'session': db.session,\n- 'model': TicketTag,\n- 'methods': {\n- 'after_create_object': after_create_object\n- }}\n+ 'model': TicketTag}\n \n \n class TicketTagList(ResourceList):\n", "issue": "Ticket-tag: remove GET for /ticket-tags \nParent issue #4101.\r\nRelated issue: #4119.\r\n\r\nMake `/ticket-tags` POST only.\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nfrom sqlalchemy.orm.exc import NoResultFound\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.helpers.permissions import jwt_required\nfrom app.models import db\nfrom app.models.ticket import Ticket, TicketTag, ticket_tags_table\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\n\n\nclass TicketTagSchema(Schema):\n \"\"\"\n Api schema for TicketTag Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for TicketTag Api Schema\n \"\"\"\n type_ = 'ticket-tag'\n self_view = 'v1.ticket_tag_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(allow_none=True)\n tickets = Relationship(attribute='tickets',\n self_view='v1.ticket_tag_ticket',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.ticket_list',\n related_view_kwargs={'ticket_tag_id': '<id>'},\n schema='TicketSchema',\n many=True,\n type_='ticket')\n event = Relationship(attribute='event',\n self_view='v1.ticket_tag_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'ticket_tag_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass TicketTagListPost(ResourceList):\n \"\"\"\n List and create TicketTag\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method for checking required relationship\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n 
require_relationship(['event'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n def after_create_object(self, obj, data, view_kwargs):\n \"\"\"\n method to add ticket tags and ticket in association table\n :param obj:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if 'tickets' in data:\n ticket_ids = data['tickets']\n for ticket_id in ticket_ids:\n try:\n ticket = Ticket.query.filter_by(id=ticket_id).one()\n except NoResultFound:\n raise ObjectNotFound({'parameter': 'ticket_id'},\n \"Ticket: {} not found\".format(ticket_id))\n else:\n ticket.tags.append(obj)\n self.session.commit()\n\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag,\n 'methods': {\n 'after_create_object': after_create_object\n }}\n\n\nclass TicketTagList(ResourceList):\n \"\"\"\n List TicketTags based on event_id or ticket_id\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n method to query Ticket tags based on different params\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(TicketTag)\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n query_ = query_.join(ticket_tags_table).filter_by(ticket_id=ticket.id)\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n query_ = query_.join(Event).filter(Event.id == event.id)\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n query_ = query_.join(Event).filter(Event.id == event.id)\n return query_\n\n view_kwargs = True\n schema = TicketTagSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': TicketTag,\n 'methods': {\n 'query': query\n }}\n\n\nclass TicketTagDetail(ResourceDetail):\n \"\"\"\n TicketTag detail by id\n \"\"\"\n decorators = (jwt_required,)\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n\n\nclass TicketTagRelationshipRequired(ResourceRelationship):\n \"\"\"\n TicketTag Relationship\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n\n\nclass TicketTagRelationshipOptional(ResourceRelationship):\n \"\"\"\n TicketTag Relationship\n \"\"\"\n decorators = (jwt_required,)\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n", "path": "app/api/ticket_tags.py"}], "after_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\n\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.helpers.permissions import jwt_required\nfrom app.models import db\nfrom app.models.ticket import Ticket, TicketTag, ticket_tags_table\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\n\n\nclass TicketTagSchema(Schema):\n \"\"\"\n Api schema for TicketTag Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for TicketTag Api Schema\n \"\"\"\n type_ = 'ticket-tag'\n self_view = 'v1.ticket_tag_detail'\n self_view_kwargs = {'id': '<id>'}\n 
inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(allow_none=True)\n tickets = Relationship(attribute='tickets',\n self_view='v1.ticket_tag_ticket',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.ticket_list',\n related_view_kwargs={'ticket_tag_id': '<id>'},\n schema='TicketSchema',\n many=True,\n type_='ticket')\n event = Relationship(attribute='event',\n self_view='v1.ticket_tag_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'ticket_tag_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass TicketTagListPost(ResourceList):\n \"\"\"\n List and create TicketTag\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method for checking required relationship\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n schema = TicketTagSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': TicketTag}\n\n\nclass TicketTagList(ResourceList):\n \"\"\"\n List TicketTags based on event_id or ticket_id\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n method to query Ticket tags based on different params\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(TicketTag)\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n query_ = query_.join(ticket_tags_table).filter_by(ticket_id=ticket.id)\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n query_ = query_.join(Event).filter(Event.id == event.id)\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n query_ = query_.join(Event).filter(Event.id == event.id)\n return query_\n\n view_kwargs = True\n schema = TicketTagSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': TicketTag,\n 'methods': {\n 'query': query\n }}\n\n\nclass TicketTagDetail(ResourceDetail):\n \"\"\"\n TicketTag detail by id\n \"\"\"\n decorators = (jwt_required,)\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n\n\nclass TicketTagRelationshipRequired(ResourceRelationship):\n \"\"\"\n TicketTag Relationship\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n\n\nclass TicketTagRelationshipOptional(ResourceRelationship):\n \"\"\"\n TicketTag Relationship\n \"\"\"\n decorators = (jwt_required,)\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n", "path": "app/api/ticket_tags.py"}]} | 1,737 | 392 |
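The record above removes GET from the ticket-tags create endpoint simply by declaring `methods = ['POST', ]` on the resource class. In plain Flask terms the effect is the same as registering a POST-only route; a minimal, self-contained illustration (not the project's actual code):

```python
# POST-only endpoint sketch: Flask answers GET /ticket-tags with 405
# automatically once the route is limited to POST.
from flask import Flask

app = Flask(__name__)

@app.route('/ticket-tags', methods=['POST'])
def create_ticket_tag():
    # stand-in for the real create logic handled by flask-rest-jsonapi
    return {'status': 'created'}, 201

if __name__ == '__main__':
    app.run()
```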
gh_patches_debug_63274 | rasdani/github-patches | git_diff | Mailu__Mailu-2603 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Maximum number of connections from user+IP exceeded
Hi, we have a problem... :-)
We have changed the original value of "AUTH_RATELIMIT" to "AUTH_RATELIMIT=100/minute;6000/hour", but the logs continue to say "Maximum number of connections from user+IP exceeded (mail_max_userip_connections=20) while reading response from upstream".
We have run docker-compose down and docker-compose up -d, but without result.
How can we change the default limit set during the installation?
Thanks in advance.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/views/dovecot.py`
Content:
```
1 from mailu import models
2 from mailu.internal import internal
3 from flask import current_app as app
4
5 import flask
6 import socket
7 import os
8 import sqlalchemy.exc
9
10 @internal.route("/dovecot/passdb/<path:user_email>")
11 def dovecot_passdb_dict(user_email):
12 user = models.User.query.get(user_email) or flask.abort(404)
13 allow_nets = []
14 allow_nets.append(app.config["SUBNET"])
15 if app.config["SUBNET6"]:
16 allow_nets.append(app.config["SUBNET6"])
17 return flask.jsonify({
18 "password": None,
19 "nopassword": "Y",
20 "allow_nets": ",".join(allow_nets)
21 })
22
23 @internal.route("/dovecot/userdb/")
24 def dovecot_userdb_dict_list():
25 return flask.jsonify([
26 user[0] for user in models.User.query.filter(models.User.enabled.is_(True)).with_entities(models.User.email).all()
27 ])
28
29 @internal.route("/dovecot/userdb/<path:user_email>")
30 def dovecot_userdb_dict(user_email):
31 try:
32 quota = models.User.query.filter(models.User.email==user_email).with_entities(models.User.quota_bytes).one_or_none() or flask.abort(404)
33 except sqlalchemy.exc.StatementError as exc:
34 flask.abort(404)
35 return flask.jsonify({
36 "quota_rule": f"*:bytes={quota[0]}"
37 })
38
39
40 @internal.route("/dovecot/quota/<ns>/<path:user_email>", methods=["POST"])
41 def dovecot_quota(ns, user_email):
42 user = models.User.query.get(user_email) or flask.abort(404)
43 if ns == "storage":
44 user.quota_bytes_used = flask.request.get_json()
45 user.dont_change_updated_at()
46 models.db.session.commit()
47 return flask.jsonify(None)
48
49
50 @internal.route("/dovecot/sieve/name/<script>/<path:user_email>")
51 def dovecot_sieve_name(script, user_email):
52 return flask.jsonify(script)
53
54
55 @internal.route("/dovecot/sieve/data/default/<path:user_email>")
56 def dovecot_sieve_data(user_email):
57 user = models.User.query.get(user_email) or flask.abort(404)
58 return flask.jsonify(flask.render_template("default.sieve", user=user))
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py
--- a/core/admin/mailu/internal/views/dovecot.py
+++ b/core/admin/mailu/internal/views/dovecot.py
@@ -17,7 +17,7 @@
return flask.jsonify({
"password": None,
"nopassword": "Y",
- "allow_nets": ",".join(allow_nets)
+ "allow_real_nets": ",".join(allow_nets)
})
@internal.route("/dovecot/userdb/")
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py\n--- a/core/admin/mailu/internal/views/dovecot.py\n+++ b/core/admin/mailu/internal/views/dovecot.py\n@@ -17,7 +17,7 @@\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n- \"allow_nets\": \",\".join(allow_nets)\n+ \"allow_real_nets\": \",\".join(allow_nets)\n })\n \n @internal.route(\"/dovecot/userdb/\")\n", "issue": "Maximum number of connections from user+IP exceeded \nHi, we have a problem... :-)\r\nWe have changed the original value of \"AUTH_RATELIMIT\" to \"AUTH_RATELIMIT=100/minute;6000/hour\", but logs continue to say \" Maximum number of connections from user+IP exceeded (mail_max_userip_connections=20)\" while reading response from upstream...\"\r\nWe have made docker-compose dow and docker-compose up -d, but without result.\r\nHow can we change the default limit set during the installation?\r\nThanks in advance.\n", "before_files": [{"content": "from mailu import models\nfrom mailu.internal import internal\nfrom flask import current_app as app\n\nimport flask\nimport socket\nimport os\nimport sqlalchemy.exc\n\[email protected](\"/dovecot/passdb/<path:user_email>\")\ndef dovecot_passdb_dict(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n allow_nets = []\n allow_nets.append(app.config[\"SUBNET\"])\n if app.config[\"SUBNET6\"]:\n allow_nets.append(app.config[\"SUBNET6\"])\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n \"allow_nets\": \",\".join(allow_nets)\n })\n\[email protected](\"/dovecot/userdb/\")\ndef dovecot_userdb_dict_list():\n return flask.jsonify([\n user[0] for user in models.User.query.filter(models.User.enabled.is_(True)).with_entities(models.User.email).all()\n ])\n\[email protected](\"/dovecot/userdb/<path:user_email>\")\ndef dovecot_userdb_dict(user_email):\n try:\n quota = models.User.query.filter(models.User.email==user_email).with_entities(models.User.quota_bytes).one_or_none() or flask.abort(404)\n except sqlalchemy.exc.StatementError as exc:\n flask.abort(404)\n return flask.jsonify({\n \"quota_rule\": f\"*:bytes={quota[0]}\"\n })\n\n\[email protected](\"/dovecot/quota/<ns>/<path:user_email>\", methods=[\"POST\"])\ndef dovecot_quota(ns, user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n if ns == \"storage\":\n user.quota_bytes_used = flask.request.get_json()\n user.dont_change_updated_at()\n models.db.session.commit()\n return flask.jsonify(None)\n\n\[email protected](\"/dovecot/sieve/name/<script>/<path:user_email>\")\ndef dovecot_sieve_name(script, user_email):\n return flask.jsonify(script)\n\n\[email protected](\"/dovecot/sieve/data/default/<path:user_email>\")\ndef dovecot_sieve_data(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n return flask.jsonify(flask.render_template(\"default.sieve\", user=user))\n", "path": "core/admin/mailu/internal/views/dovecot.py"}], "after_files": [{"content": "from mailu import models\nfrom mailu.internal import internal\nfrom flask import current_app as app\n\nimport flask\nimport socket\nimport os\nimport sqlalchemy.exc\n\[email protected](\"/dovecot/passdb/<path:user_email>\")\ndef dovecot_passdb_dict(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n allow_nets = []\n allow_nets.append(app.config[\"SUBNET\"])\n if app.config[\"SUBNET6\"]:\n allow_nets.append(app.config[\"SUBNET6\"])\n return flask.jsonify({\n \"password\": None,\n 
\"nopassword\": \"Y\",\n \"allow_real_nets\": \",\".join(allow_nets)\n })\n\[email protected](\"/dovecot/userdb/\")\ndef dovecot_userdb_dict_list():\n return flask.jsonify([\n user[0] for user in models.User.query.filter(models.User.enabled.is_(True)).with_entities(models.User.email).all()\n ])\n\[email protected](\"/dovecot/userdb/<path:user_email>\")\ndef dovecot_userdb_dict(user_email):\n try:\n quota = models.User.query.filter(models.User.email==user_email).with_entities(models.User.quota_bytes).one_or_none() or flask.abort(404)\n except sqlalchemy.exc.StatementError as exc:\n flask.abort(404)\n return flask.jsonify({\n \"quota_rule\": f\"*:bytes={quota[0]}\"\n })\n\n\[email protected](\"/dovecot/quota/<ns>/<path:user_email>\", methods=[\"POST\"])\ndef dovecot_quota(ns, user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n if ns == \"storage\":\n user.quota_bytes_used = flask.request.get_json()\n user.dont_change_updated_at()\n models.db.session.commit()\n return flask.jsonify(None)\n\n\[email protected](\"/dovecot/sieve/name/<script>/<path:user_email>\")\ndef dovecot_sieve_name(script, user_email):\n return flask.jsonify(script)\n\n\[email protected](\"/dovecot/sieve/data/default/<path:user_email>\")\ndef dovecot_sieve_data(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n return flask.jsonify(flask.render_template(\"default.sieve\", user=user))\n", "path": "core/admin/mailu/internal/views/dovecot.py"}]} | 987 | 128 |
gh_patches_debug_20106 | rasdani/github-patches | git_diff | microsoft__torchgeo-93 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jupyter Notebook tutorials
We need to figure out how to render Jupyter Notebooks in our documentation so that we can provide easy-to-use tutorials for new users. This should work similarly to https://pytorch.org/tutorials/.
Ideally I would like to be able to test these tutorials so that they stay up-to-date.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 import os
10 import sys
11
12 import pytorch_sphinx_theme
13
14 # If extensions (or modules to document with autodoc) are in another directory,
15 # add these directories to sys.path here. If the directory is relative to the
16 # documentation root, use os.path.abspath to make it absolute, like shown here.
17 sys.path.insert(0, os.path.abspath(".."))
18
19 import torchgeo # noqa: E402
20
21 # -- Project information -----------------------------------------------------
22
23 project = "torchgeo"
24 copyright = "2021, Microsoft Corporation"
25 author = "Adam J. Stewart"
26 version = ".".join(torchgeo.__version__.split(".")[:2])
27 release = torchgeo.__version__
28
29
30 # -- General configuration ---------------------------------------------------
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = [
36 "sphinx.ext.autodoc",
37 "sphinx.ext.autosectionlabel",
38 "sphinx.ext.intersphinx",
39 "sphinx.ext.napoleon",
40 "sphinx.ext.todo",
41 "sphinx.ext.viewcode",
42 ]
43
44 # List of patterns, relative to source directory, that match files and
45 # directories to ignore when looking for source files.
46 # This pattern also affects html_static_path and html_extra_path.
47 exclude_patterns = ["_build"]
48
49 # Sphinx 3.0+ required for:
50 # autodoc_typehints = "description"
51 needs_sphinx = "3.0"
52
53 nitpicky = True
54 nitpick_ignore = [
55 # https://github.com/sphinx-doc/sphinx/issues/8127
56 ("py:class", ".."),
57 # TODO: can't figure out why this isn't found
58 ("py:class", "LightningDataModule"),
59 ]
60
61
62 # -- Options for HTML output -------------------------------------------------
63
64 # The theme to use for HTML and HTML Help pages. See the documentation for
65 # a list of builtin themes.
66 html_theme = "pytorch_sphinx_theme"
67 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
68
69 # Theme options are theme-specific and customize the look and feel of a theme
70 # further. For a list of options available for each theme, see the
71 # documentation.
72 html_theme_options = {
73 "collapse_navigation": False,
74 "display_version": True,
75 "logo_only": True,
76 "pytorch_project": "docs",
77 "navigation_with_keys": True,
78 "analytics_id": "UA-117752657-2",
79 }
80
81 # -- Extension configuration -------------------------------------------------
82
83 # sphinx.ext.autodoc
84 autodoc_default_options = {
85 "members": True,
86 "special-members": True,
87 "show-inheritance": True,
88 }
89 autodoc_member_order = "bysource"
90 autodoc_typehints = "description"
91
92 # sphinx.ext.intersphinx
93 intersphinx_mapping = {
94 "python": ("https://docs.python.org/3", None),
95 "pytorch-lightning": ("https://pytorch-lightning.readthedocs.io/en/latest/", None),
96 "rasterio": ("https://rasterio.readthedocs.io/en/latest/", None),
97 "rtree": ("https://rtree.readthedocs.io/en/latest/", None),
98 "torch": ("https://pytorch.org/docs/stable", None),
99 }
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -34,11 +34,11 @@
# ones.
extensions = [
"sphinx.ext.autodoc",
- "sphinx.ext.autosectionlabel",
"sphinx.ext.intersphinx",
"sphinx.ext.napoleon",
"sphinx.ext.todo",
"sphinx.ext.viewcode",
+ "nbsphinx",
]
# List of patterns, relative to source directory, that match files and
@@ -97,3 +97,17 @@
"rtree": ("https://rtree.readthedocs.io/en/latest/", None),
"torch": ("https://pytorch.org/docs/stable", None),
}
+
+# nbsphinx
+nbsphinx_execute = "never"
+# TODO: branch/tag should change depending on which version of docs you look at
+# TODO: :width: may be broken
+nbsphinx_prolog = """
+{% set colab = "https://colab.research.google.com" %}
+{% set repo = "microsoft/torchgeo" %}
+{% set branch = "main" %}
+
+.. image:: {{ colab }}/assets/colab-badge.svg
+ :alt: Open in Colab
+ :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}
+"""
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -34,11 +34,11 @@\n # ones.\n extensions = [\n \"sphinx.ext.autodoc\",\n- \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n+ \"nbsphinx\",\n ]\n \n # List of patterns, relative to source directory, that match files and\n@@ -97,3 +97,17 @@\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n }\n+\n+# nbsphinx\n+nbsphinx_execute = \"never\"\n+# TODO: branch/tag should change depending on which version of docs you look at\n+# TODO: :width: may be broken\n+nbsphinx_prolog = \"\"\"\n+{% set colab = \"https://colab.research.google.com\" %}\n+{% set repo = \"microsoft/torchgeo\" %}\n+{% set branch = \"main\" %}\n+\n+.. image:: {{ colab }}/assets/colab-badge.svg\n+ :alt: Open in Colab\n+ :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}\n+\"\"\"\n", "issue": "Jupyter Notebook tutorials\nWe need to figure out how to render Jupyter Notebooks in our documentation so that we can provide easy-to-use tutorials for new users. This should work similarly to https://pytorch.org/tutorials/.\r\n\r\nIdeally I would like to be able to test these tutorials so that they stay up-to-date.\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\n\nimport pytorch_sphinx_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nimport torchgeo # noqa: E402\n\n# -- Project information -----------------------------------------------------\n\nproject = \"torchgeo\"\ncopyright = \"2021, Microsoft Corporation\"\nauthor = \"Adam J. Stewart\"\nversion = \".\".join(torchgeo.__version__.split(\".\")[:2])\nrelease = torchgeo.__version__\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\"]\n\n# Sphinx 3.0+ required for:\n# autodoc_typehints = \"description\"\nneeds_sphinx = \"3.0\"\n\nnitpicky = True\nnitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n (\"py:class\", \"..\"),\n # TODO: can't figure out why this isn't found\n (\"py:class\", \"LightningDataModule\"),\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-117752657-2\",\n}\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc\nautodoc_default_options = {\n \"members\": True,\n \"special-members\": True,\n \"show-inheritance\": True,\n}\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\n\n# sphinx.ext.intersphinx\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n}\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\n\nimport pytorch_sphinx_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nimport torchgeo # noqa: E402\n\n# -- Project information -----------------------------------------------------\n\nproject = \"torchgeo\"\ncopyright = \"2021, Microsoft Corporation\"\nauthor = \"Adam J. Stewart\"\nversion = \".\".join(torchgeo.__version__.split(\".\")[:2])\nrelease = torchgeo.__version__\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"nbsphinx\",\n]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\"]\n\n# Sphinx 3.0+ required for:\n# autodoc_typehints = \"description\"\nneeds_sphinx = \"3.0\"\n\nnitpicky = True\nnitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n (\"py:class\", \"..\"),\n # TODO: can't figure out why this isn't found\n (\"py:class\", \"LightningDataModule\"),\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-117752657-2\",\n}\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc\nautodoc_default_options = {\n \"members\": True,\n \"special-members\": True,\n \"show-inheritance\": True,\n}\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\n\n# sphinx.ext.intersphinx\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n}\n\n# nbsphinx\nnbsphinx_execute = \"never\"\n# TODO: branch/tag should change depending on which version of docs you look at\n# TODO: :width: may be broken\nnbsphinx_prolog = \"\"\"\n{% set colab = \"https://colab.research.google.com\" %}\n{% set repo = \"microsoft/torchgeo\" %}\n{% set branch = \"main\" %}\n\n.. image:: {{ colab }}/assets/colab-badge.svg\n :alt: Open in Colab\n :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}\n\"\"\"\n", "path": "docs/conf.py"}]} | 1,294 | 308 |
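The `nbsphinx_prolog` added above is a Jinja2 template that nbsphinx renders once per notebook, with the Sphinx build environment exposed as `env`. The expansion can be previewed outside Sphinx; here the `env` object is faked with a bare namespace, while nbsphinx supplies the real one:

```python
# Preview of the Colab-badge prolog for a hypothetical notebook docname.
from types import SimpleNamespace
from jinja2 import Template

prolog = """
{% set colab = "https://colab.research.google.com" %}
{% set repo = "microsoft/torchgeo" %}
{% set branch = "main" %}

.. image:: {{ colab }}/assets/colab-badge.svg
   :alt: Open in Colab
   :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}
"""

env = SimpleNamespace(docname="tutorials/getting_started")  # made-up docname
print(Template(prolog).render(env=env))
```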
gh_patches_debug_32695 | rasdani/github-patches | git_diff | conan-io__conan-center-index-3023 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] jbig/20160605: Fails to build on iOS
<!--
Please don't forget to update the issue title.
Include all applicable information to help us reproduce your problem.
-->
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **jbig/20160605**
* Operating System+version: **iOS 11.0**
* Compiler+version: **apple-clang 11.0**
* Conan version: **conan 1.29.2**
* Python version: **Python 3.8.5**
### Conan profile
```
[settings]
arch=x86_64
arch_build=x86_64
build_type=Debug
compiler=apple-clang
compiler.cppstd=17
compiler.libcxx=libc++
compiler.version=11.0
os=iOS
os.version=11.0
os_build=Macos
[options]
[build_requires]
*: darwin-toolchain/1.0.8@theodelrieu/stable
[env]
```
### Steps to reproduce (Include if Applicable)
`conan install jbig/20160605@ --profile ios --build=missing`
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
CMake Error at CMakeLists.txt:31 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "jbgtopbm".
```
</details>
I would suggest adding an option that prevents the `pbmtojbg` and `jbgtopbm` targets from being generated. The recipe could define individual `build_` options for each, as other packages do, or go with a more generically named option that enables/disables both. For reference, `sqlite3`, `bzip2`, and `spirv-cross` have a `build_executable` option, while `glslang` has a `build_executables` option.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/jbig/all/conanfile.py`
Content:
```
1 import os
2 import glob
3 from conans import ConanFile, CMake, tools
4
5
6 class ConanJBig(ConanFile):
7 name = "jbig"
8 url = "https://github.com/conan-io/conan-center-index"
9 homepage = "https://github.com/ImageMagick/jbig"
10 description = "jbig for the Windows build of ImageMagick"
11 topics = ("conan", "jbig", "imagemagick", "window", "graphic")
12 license = "GPL-2.0"
13 exports_sources = ['CMakeLists.txt', "*.patch"]
14 generators = 'cmake'
15 settings = "os", "arch", "compiler", "build_type"
16 options = {"shared": [True, False], "fPIC": [True, False]}
17 default_options = {"shared": False, "fPIC": True}
18
19 @property
20 def _source_subfolder(self):
21 return "source_subfolder"
22
23 @property
24 def _build_subfolder(self):
25 return "build_subfolder"
26
27 def source(self):
28 tools.get(**self.conan_data["sources"][self.version])
29 for extracted_dir in glob.glob("jbig-*"):
30 os.rename(extracted_dir, self._source_subfolder)
31
32 def config_options(self):
33 if self.settings.os == "Windows":
34 del self.options.fPIC
35
36 def configure(self):
37 del self.settings.compiler.libcxx
38 del self.settings.compiler.cppstd
39
40 def _configure_cmake(self):
41 cmake = CMake(self)
42 cmake.configure(build_folder=self._build_subfolder)
43 return cmake
44
45 def build(self):
46 for patch in self.conan_data["patches"][self.version]:
47 tools.patch(**patch)
48 cmake = self._configure_cmake()
49 cmake.build()
50
51 def package(self):
52 self.copy(pattern="COPYING", dst="licenses", src=self._source_subfolder)
53 cmake = self._configure_cmake()
54 cmake.install()
55
56 def package_info(self):
57 bindir = os.path.join(self.package_folder, "bin")
58 self.output.info("Appending PATH environment variable: {}".format(bindir))
59 self.env_info.PATH.append(bindir)
60 self.cpp_info.libs = [self.name]
61 if self.options.shared and self.settings.compiler == "Visual Studio":
62 self.cpp_info.defines = ["_JBIGDLL_"]
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/jbig/all/conanfile.py b/recipes/jbig/all/conanfile.py
--- a/recipes/jbig/all/conanfile.py
+++ b/recipes/jbig/all/conanfile.py
@@ -13,8 +13,18 @@
exports_sources = ['CMakeLists.txt', "*.patch"]
generators = 'cmake'
settings = "os", "arch", "compiler", "build_type"
- options = {"shared": [True, False], "fPIC": [True, False]}
- default_options = {"shared": False, "fPIC": True}
+ options = {
+ "shared": [True, False],
+ "fPIC": [True, False],
+ "build_executables": [True, False]
+ }
+ default_options = {
+ "shared": False,
+ "fPIC": True,
+ "build_executables": True
+ }
+
+ _cmake = None
@property
def _source_subfolder(self):
@@ -38,9 +48,13 @@
del self.settings.compiler.cppstd
def _configure_cmake(self):
- cmake = CMake(self)
- cmake.configure(build_folder=self._build_subfolder)
- return cmake
+ if self._cmake:
+ return self._cmake
+
+ self._cmake = CMake(self)
+ self._cmake.definitions["BUILD_EXECUTABLES"] = self.options.build_executables
+ self._cmake.configure(build_folder=self._build_subfolder)
+ return self._cmake
def build(self):
for patch in self.conan_data["patches"][self.version]:
@@ -54,9 +68,11 @@
cmake.install()
def package_info(self):
- bindir = os.path.join(self.package_folder, "bin")
- self.output.info("Appending PATH environment variable: {}".format(bindir))
- self.env_info.PATH.append(bindir)
self.cpp_info.libs = [self.name]
if self.options.shared and self.settings.compiler == "Visual Studio":
self.cpp_info.defines = ["_JBIGDLL_"]
+
+ if self.options.build_executables:
+ bin_path = os.path.join(self.package_folder, "bin")
+ self.output.info("Appending PATH environment variable: {}".format(bin_path))
+ self.env_info.PATH.append(bin_path)
| {"golden_diff": "diff --git a/recipes/jbig/all/conanfile.py b/recipes/jbig/all/conanfile.py\n--- a/recipes/jbig/all/conanfile.py\n+++ b/recipes/jbig/all/conanfile.py\n@@ -13,8 +13,18 @@\n exports_sources = ['CMakeLists.txt', \"*.patch\"]\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n- options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n- default_options = {\"shared\": False, \"fPIC\": True}\n+ options = {\n+ \"shared\": [True, False],\n+ \"fPIC\": [True, False],\n+ \"build_executables\": [True, False]\n+ }\n+ default_options = {\n+ \"shared\": False,\n+ \"fPIC\": True,\n+ \"build_executables\": True\n+ }\n+\n+ _cmake = None\n \n @property\n def _source_subfolder(self):\n@@ -38,9 +48,13 @@\n del self.settings.compiler.cppstd\n \n def _configure_cmake(self):\n- cmake = CMake(self)\n- cmake.configure(build_folder=self._build_subfolder)\n- return cmake\n+ if self._cmake:\n+ return self._cmake\n+\n+ self._cmake = CMake(self)\n+ self._cmake.definitions[\"BUILD_EXECUTABLES\"] = self.options.build_executables\n+ self._cmake.configure(build_folder=self._build_subfolder)\n+ return self._cmake\n \n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n@@ -54,9 +68,11 @@\n cmake.install()\n \n def package_info(self):\n- bindir = os.path.join(self.package_folder, \"bin\")\n- self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n- self.env_info.PATH.append(bindir)\n self.cpp_info.libs = [self.name]\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.defines = [\"_JBIGDLL_\"]\n+\n+ if self.options.build_executables:\n+ bin_path = os.path.join(self.package_folder, \"bin\")\n+ self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n+ self.env_info.PATH.append(bin_path)\n", "issue": "[package] jbig/20160605: Fails to build on iOS\n<!-- \r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n-->\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **jbig/20160605**\r\n * Operating System+version: **iOS 11.0**\r\n * Compiler+version: **apple-clang 11.0**\r\n * Conan version: **conan 1.29.2**\r\n * Python version: **Python 3.8.5**\r\n\r\n### Conan profile\r\n```\r\n[settings]\r\narch=x86_64\r\narch_build=x86_64\r\nbuild_type=Debug\r\ncompiler=apple-clang\r\ncompiler.cppstd=17\r\ncompiler.libcxx=libc++\r\ncompiler.version=11.0\r\nos=iOS\r\nos.version=11.0\r\nos_build=Macos\r\n[options]\r\n[build_requires]\r\n*: darwin-toolchain/1.0.8@theodelrieu/stable\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n`conan install jbig/20160605@ --profile ios --build=missing`\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nCMake Error at CMakeLists.txt:31 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"jbgtopbm\".\r\n```\r\n\r\n</details>\r\n\r\nI would suggest adding an option that disables the `pbmtojbg` and `jbgtopbm` targets from being generated. The recipe could define individual `build_` options for each, which other packages do, or go with a more generically named option that enables/disables both. For reference, `sqlite3`, `bzip2`, and `spirv-cross` have a `build_executable` option, while `glslang` has a `build_executables` option. 
\n", "before_files": [{"content": "import os\nimport glob\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanJBig(ConanFile):\n name = \"jbig\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/ImageMagick/jbig\"\n description = \"jbig for the Windows build of ImageMagick\"\n topics = (\"conan\", \"jbig\", \"imagemagick\", \"window\", \"graphic\")\n license = \"GPL-2.0\"\n exports_sources = ['CMakeLists.txt', \"*.patch\"]\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n default_options = {\"shared\": False, \"fPIC\": True}\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n for extracted_dir in glob.glob(\"jbig-*\"):\n os.rename(extracted_dir, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.configure(build_folder=self._build_subfolder)\n return cmake\n\n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n tools.patch(**patch)\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n self.cpp_info.libs = [self.name]\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.defines = [\"_JBIGDLL_\"]\n", "path": "recipes/jbig/all/conanfile.py"}], "after_files": [{"content": "import os\nimport glob\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanJBig(ConanFile):\n name = \"jbig\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/ImageMagick/jbig\"\n description = \"jbig for the Windows build of ImageMagick\"\n topics = (\"conan\", \"jbig\", \"imagemagick\", \"window\", \"graphic\")\n license = \"GPL-2.0\"\n exports_sources = ['CMakeLists.txt', \"*.patch\"]\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"build_executables\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"build_executables\": True\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n for extracted_dir in glob.glob(\"jbig-*\"):\n os.rename(extracted_dir, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n\n self._cmake = CMake(self)\n self._cmake.definitions[\"BUILD_EXECUTABLES\"] = self.options.build_executables\n 
self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n tools.patch(**patch)\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [self.name]\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.defines = [\"_JBIGDLL_\"]\n\n if self.options.build_executables:\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/jbig/all/conanfile.py"}]} | 1,345 | 535 |
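Two ideas carry this fix: a user-facing `build_executables` option forwarded to CMake as the `BUILD_EXECUTABLES` definition, and a memoized `_configure_cmake()` so that `build()` and `package()` reuse one configured object. The caching half, reduced to plain Python with a stand-in for `conans.CMake`:

```python
# Memoized configure, as in the patched recipe; FakeCMake is a stand-in.
class FakeCMake:
    def __init__(self):
        self.definitions = {}

    def configure(self):
        print("configured with", self.definitions)

class Recipe:
    _cmake = None
    options = {"build_executables": True}

    def _configure_cmake(self):
        if self._cmake:                     # second caller gets the cache
            return self._cmake
        self._cmake = FakeCMake()
        self._cmake.definitions["BUILD_EXECUTABLES"] = \
            self.options["build_executables"]
        self._cmake.configure()
        return self._cmake

r = Recipe()
assert r._configure_cmake() is r._configure_cmake()
```

A consumer would then opt out of the tools with `conan install jbig/20160605@ -o jbig:build_executables=False`, the usual Conan 1.x option syntax (inferred; the issue only shows the plain install command).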
gh_patches_debug_33865 | rasdani/github-patches | git_diff | cowrie__cowrie-1022 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cowrie not set up for py.test framework
So I tried running the tests in both python2 and python3. For python2 all the tests were passing, but for python3 there were some errors.
```
py.test --cov=cowrie
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.2, pytest-4.2.0, py-1.7.0, pluggy-0.8.1
rootdir: /home/mzfr/dev/cowrie, inifile:
plugins: cov-2.6.1
collected 3 items / 3 errors
=========================================================================================== ERRORS ============================================================================================
___________________________________________________________________ ERROR collecting src/cowrie/test/test_base_commands.py ____________________________________________________________________
../shell/fs.py:26: in <module>
PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))
../core/config.py:29: in get
return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
/usr/lib/python3.7/configparser.py:780: in get
d = self._unify_values(section, vars)
/usr/lib/python3.7/configparser.py:1146: in _unify_values
raise NoSectionError(section) from None
E configparser.NoSectionError: No section: 'shell'
During handling of the above exception, another exception occurred:
test_base_commands.py:12: in <module>
from cowrie.shell import protocol
../shell/protocol.py:21: in <module>
from cowrie.shell import command
../shell/command.py:20: in <module>
from cowrie.shell import fs
../shell/fs.py:29: in <module>
exit(2)
/usr/lib/python3.7/_sitebuiltins.py:26: in __call__
raise SystemExit(code)
E SystemExit: 2
--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------
ERROR: Config file not found: etc/cowrie.cfg.dist
________________________________________________________________________ ERROR collecting src/cowrie/test/test_echo.py ________________________________________________________________________
../shell/fs.py:26: in <module>
PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))
../core/config.py:29: in get
return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
/usr/lib/python3.7/configparser.py:780: in get
d = self._unify_values(section, vars)
/usr/lib/python3.7/configparser.py:1146: in _unify_values
raise NoSectionError(section) from None
E configparser.NoSectionError: No section: 'shell'
During handling of the above exception, another exception occurred:
test_echo.py:16: in <module>
from cowrie.shell import protocol
../shell/protocol.py:21: in <module>
from cowrie.shell import command
../shell/command.py:20: in <module>
from cowrie.shell import fs
../shell/fs.py:29: in <module>
exit(2)
/usr/lib/python3.7/_sitebuiltins.py:26: in __call__
raise SystemExit(code)
E SystemExit: 2
--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------
ERROR: Config file not found: etc/cowrie.cfg.dist
________________________________________________________________________ ERROR collecting src/cowrie/test/test_tftp.py ________________________________________________________________________
../shell/fs.py:26: in <module>
PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))
../core/config.py:29: in get
return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
/usr/lib/python3.7/configparser.py:780: in get
d = self._unify_values(section, vars)
/usr/lib/python3.7/configparser.py:1146: in _unify_values
raise NoSectionError(section) from None
E configparser.NoSectionError: No section: 'shell'
During handling of the above exception, another exception occurred:
test_tftp.py:16: in <module>
from cowrie.shell import protocol
../shell/protocol.py:21: in <module>
from cowrie.shell import command
../shell/command.py:20: in <module>
from cowrie.shell import fs
../shell/fs.py:29: in <module>
exit(2)
/usr/lib/python3.7/_sitebuiltins.py:26: in __call__
raise SystemExit(code)
E SystemExit: 2
--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------
ERROR: Config file not found: etc/cowrie.cfg.dist
```
--- END ISSUE ---
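The traceback repeats one root cause three times: `cowrie.shell.fs` reads the configuration at import time, and the relative candidate `etc/cowrie.cfg.dist` is resolved against the current working directory, so collection fails whenever py.test is started from anywhere but the repository root. A self-contained reproduction of that failure mode:

```python
# Module-level config reads fail at import time when the relative path
# misses: ConfigParser.read() silently skips absent files, then get()
# raises the NoSectionError seen in the py.test output above.
import configparser

parser = configparser.ConfigParser()
parser.read("etc/cowrie.cfg.dist")   # reads nothing if the file is absent
try:
    parser.get("shell", "filesystem")
except configparser.NoSectionError as exc:
    print("import-time failure:", exc)
```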
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cowrie/core/config.py`
Content:
```
1 # Copyright (c) 2009-2014 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 """
5 This module contains ...
6 """
7
8 from __future__ import absolute_import, division
9
10 import configparser
11 import os
12
13
14 def to_environ_key(key):
15 return key.upper()
16
17
18 class EnvironmentConfigParser(configparser.ConfigParser):
19
20 def has_option(self, section, option):
21 if to_environ_key('_'.join((section, option))) in os.environ:
22 return True
23 return super(EnvironmentConfigParser, self).has_option(section, option)
24
25 def get(self, section, option, **kwargs):
26 key = to_environ_key('_'.join((section, option)))
27 if key in os.environ:
28 return os.environ[key]
29 return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
30
31
32 def readConfigFile(cfgfile):
33 """
34 Read config files and return ConfigParser object
35
36 @param cfgfile: filename or array of filenames
37 @return: ConfigParser object
38 """
39 parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())
40 parser.read(cfgfile)
41 return parser
42
43
44 CONFIG = readConfigFile(("etc/cowrie.cfg.dist", "/etc/cowrie/cowrie.cfg", "etc/cowrie.cfg", "cowrie.cfg"))
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cowrie/core/config.py b/src/cowrie/core/config.py
--- a/src/cowrie/core/config.py
+++ b/src/cowrie/core/config.py
@@ -8,7 +8,8 @@
from __future__ import absolute_import, division
import configparser
-import os
+from os import environ
+from os.path import abspath, dirname, exists, join
def to_environ_key(key):
@@ -18,15 +19,16 @@
class EnvironmentConfigParser(configparser.ConfigParser):
def has_option(self, section, option):
- if to_environ_key('_'.join((section, option))) in os.environ:
+ if to_environ_key('_'.join((section, option))) in environ:
return True
return super(EnvironmentConfigParser, self).has_option(section, option)
def get(self, section, option, **kwargs):
key = to_environ_key('_'.join((section, option)))
- if key in os.environ:
- return os.environ[key]
- return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
+ if key in environ:
+ return environ[key]
+ return super(EnvironmentConfigParser, self).get(
+ section, option, **kwargs)
def readConfigFile(cfgfile):
@@ -36,9 +38,26 @@
@param cfgfile: filename or array of filenames
@return: ConfigParser object
"""
- parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())
+ parser = EnvironmentConfigParser(
+ interpolation=configparser.ExtendedInterpolation())
parser.read(cfgfile)
return parser
-CONFIG = readConfigFile(("etc/cowrie.cfg.dist", "/etc/cowrie/cowrie.cfg", "etc/cowrie.cfg", "cowrie.cfg"))
+def get_config_path():
+ """Get absolute path to the config file
+ """
+ config_files = ["etc/cowrie/cowrie.cfg", "etc/cowrie.cfg",
+ "cowrie.cfg", "etc/cowrie.cfg.dist"]
+ current_path = abspath(dirname(__file__))
+ root = "/".join(current_path.split("/")[:-3])
+
+ for file in config_files:
+ absolute_path = join(root, file)
+ if exists(absolute_path):
+ return absolute_path
+
+ print("Config file not found")
+
+
+CONFIG = readConfigFile(get_config_path())
| {"golden_diff": "diff --git a/src/cowrie/core/config.py b/src/cowrie/core/config.py\n--- a/src/cowrie/core/config.py\n+++ b/src/cowrie/core/config.py\n@@ -8,7 +8,8 @@\n from __future__ import absolute_import, division\n \n import configparser\n-import os\n+from os import environ\n+from os.path import abspath, dirname, exists, join\n \n \n def to_environ_key(key):\n@@ -18,15 +19,16 @@\n class EnvironmentConfigParser(configparser.ConfigParser):\n \n def has_option(self, section, option):\n- if to_environ_key('_'.join((section, option))) in os.environ:\n+ if to_environ_key('_'.join((section, option))) in environ:\n return True\n return super(EnvironmentConfigParser, self).has_option(section, option)\n \n def get(self, section, option, **kwargs):\n key = to_environ_key('_'.join((section, option)))\n- if key in os.environ:\n- return os.environ[key]\n- return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\n+ if key in environ:\n+ return environ[key]\n+ return super(EnvironmentConfigParser, self).get(\n+ section, option, **kwargs)\n \n \n def readConfigFile(cfgfile):\n@@ -36,9 +38,26 @@\n @param cfgfile: filename or array of filenames\n @return: ConfigParser object\n \"\"\"\n- parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())\n+ parser = EnvironmentConfigParser(\n+ interpolation=configparser.ExtendedInterpolation())\n parser.read(cfgfile)\n return parser\n \n \n-CONFIG = readConfigFile((\"etc/cowrie.cfg.dist\", \"/etc/cowrie/cowrie.cfg\", \"etc/cowrie.cfg\", \"cowrie.cfg\"))\n+def get_config_path():\n+ \"\"\"Get absolute path to the config file\n+ \"\"\"\n+ config_files = [\"etc/cowrie/cowrie.cfg\", \"etc/cowrie.cfg\",\n+ \"cowrie.cfg\", \"etc/cowrie.cfg.dist\"]\n+ current_path = abspath(dirname(__file__))\n+ root = \"/\".join(current_path.split(\"/\")[:-3])\n+\n+ for file in config_files:\n+ absolute_path = join(root, file)\n+ if exists(absolute_path):\n+ return absolute_path\n+\n+ print(\"Config file not found\")\n+\n+\n+CONFIG = readConfigFile(get_config_path())\n", "issue": "Cowrie not set up for py.test framework\nSo I tried running the test in both python2 and python3. 
For python2 all the tests were passing but for python3 there was some error.\r\n\r\n```\r\n py.test --cov=cowrie \r\n===================================================================================== test session starts =====================================================================================\r\nplatform linux -- Python 3.7.2, pytest-4.2.0, py-1.7.0, pluggy-0.8.1\r\nrootdir: /home/mzfr/dev/cowrie, inifile:\r\nplugins: cov-2.6.1\r\ncollected 3 items / 3 errors \r\n\r\n=========================================================================================== ERRORS ============================================================================================\r\n___________________________________________________________________ ERROR collecting src/cowrie/test/test_base_commands.py ____________________________________________________________________\r\n../shell/fs.py:26: in <module>\r\n PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))\r\n../core/config.py:29: in get\r\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\r\n/usr/lib/python3.7/configparser.py:780: in get\r\n d = self._unify_values(section, vars)\r\n/usr/lib/python3.7/configparser.py:1146: in _unify_values\r\n raise NoSectionError(section) from None\r\nE configparser.NoSectionError: No section: 'shell'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\ntest_base_commands.py:12: in <module>\r\n from cowrie.shell import protocol\r\n../shell/protocol.py:21: in <module>\r\n from cowrie.shell import command\r\n../shell/command.py:20: in <module>\r\n from cowrie.shell import fs\r\n../shell/fs.py:29: in <module>\r\n exit(2)\r\n/usr/lib/python3.7/_sitebuiltins.py:26: in __call__\r\n raise SystemExit(code)\r\nE SystemExit: 2\r\n--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------\r\nERROR: Config file not found: etc/cowrie.cfg.dist\r\n________________________________________________________________________ ERROR collecting src/cowrie/test/test_echo.py ________________________________________________________________________\r\n../shell/fs.py:26: in <module>\r\n PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))\r\n../core/config.py:29: in get\r\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\r\n/usr/lib/python3.7/configparser.py:780: in get\r\n d = self._unify_values(section, vars)\r\n/usr/lib/python3.7/configparser.py:1146: in _unify_values\r\n raise NoSectionError(section) from None\r\nE configparser.NoSectionError: No section: 'shell'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\ntest_echo.py:16: in <module>\r\n from cowrie.shell import protocol\r\n../shell/protocol.py:21: in <module>\r\n from cowrie.shell import command\r\n../shell/command.py:20: in <module>\r\n from cowrie.shell import fs\r\n../shell/fs.py:29: in <module>\r\n exit(2)\r\n/usr/lib/python3.7/_sitebuiltins.py:26: in __call__\r\n raise SystemExit(code)\r\nE SystemExit: 2\r\n--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------\r\nERROR: Config file not found: etc/cowrie.cfg.dist\r\n________________________________________________________________________ ERROR collecting src/cowrie/test/test_tftp.py 
________________________________________________________________________\r\n../shell/fs.py:26: in <module>\r\n PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))\r\n../core/config.py:29: in get\r\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\r\n/usr/lib/python3.7/configparser.py:780: in get\r\n d = self._unify_values(section, vars)\r\n/usr/lib/python3.7/configparser.py:1146: in _unify_values\r\n raise NoSectionError(section) from None\r\nE configparser.NoSectionError: No section: 'shell'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\ntest_tftp.py:16: in <module>\r\n from cowrie.shell import protocol\r\n../shell/protocol.py:21: in <module>\r\n from cowrie.shell import command\r\n../shell/command.py:20: in <module>\r\n from cowrie.shell import fs\r\n../shell/fs.py:29: in <module>\r\n exit(2)\r\n/usr/lib/python3.7/_sitebuiltins.py:26: in __call__\r\n raise SystemExit(code)\r\nE SystemExit: 2\r\n--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------\r\nERROR: Config file not found: etc/cowrie.cfg.dist\r\n```\n", "before_files": [{"content": "# Copyright (c) 2009-2014 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\n\"\"\"\nThis module contains ...\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport configparser\nimport os\n\n\ndef to_environ_key(key):\n return key.upper()\n\n\nclass EnvironmentConfigParser(configparser.ConfigParser):\n\n def has_option(self, section, option):\n if to_environ_key('_'.join((section, option))) in os.environ:\n return True\n return super(EnvironmentConfigParser, self).has_option(section, option)\n\n def get(self, section, option, **kwargs):\n key = to_environ_key('_'.join((section, option)))\n if key in os.environ:\n return os.environ[key]\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\n\n\ndef readConfigFile(cfgfile):\n \"\"\"\n Read config files and return ConfigParser object\n\n @param cfgfile: filename or array of filenames\n @return: ConfigParser object\n \"\"\"\n parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())\n parser.read(cfgfile)\n return parser\n\n\nCONFIG = readConfigFile((\"etc/cowrie.cfg.dist\", \"/etc/cowrie/cowrie.cfg\", \"etc/cowrie.cfg\", \"cowrie.cfg\"))\n", "path": "src/cowrie/core/config.py"}], "after_files": [{"content": "# Copyright (c) 2009-2014 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\n\"\"\"\nThis module contains ...\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport configparser\nfrom os import environ\nfrom os.path import abspath, dirname, exists, join\n\n\ndef to_environ_key(key):\n return key.upper()\n\n\nclass EnvironmentConfigParser(configparser.ConfigParser):\n\n def has_option(self, section, option):\n if to_environ_key('_'.join((section, option))) in environ:\n return True\n return super(EnvironmentConfigParser, self).has_option(section, option)\n\n def get(self, section, option, **kwargs):\n key = to_environ_key('_'.join((section, option)))\n if key in environ:\n return environ[key]\n return super(EnvironmentConfigParser, self).get(\n section, option, **kwargs)\n\n\ndef readConfigFile(cfgfile):\n \"\"\"\n Read config files and return ConfigParser object\n\n @param cfgfile: filename or array of filenames\n @return: ConfigParser object\n 
\"\"\"\n parser = EnvironmentConfigParser(\n interpolation=configparser.ExtendedInterpolation())\n parser.read(cfgfile)\n return parser\n\n\ndef get_config_path():\n \"\"\"Get absolute path to the config file\n \"\"\"\n config_files = [\"etc/cowrie/cowrie.cfg\", \"etc/cowrie.cfg\",\n \"cowrie.cfg\", \"etc/cowrie.cfg.dist\"]\n current_path = abspath(dirname(__file__))\n root = \"/\".join(current_path.split(\"/\")[:-3])\n\n for file in config_files:\n absolute_path = join(root, file)\n if exists(absolute_path):\n return absolute_path\n\n print(\"Config file not found\")\n\n\nCONFIG = readConfigFile(get_config_path())\n", "path": "src/cowrie/core/config.py"}]} | 1,656 | 540 |
gh_patches_debug_15272 | rasdani/github-patches | git_diff | chainer__chainer-1539 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid CuPy cache problem with different version of CUDAs
When a user updates CUDA, CuPy's kernel caches built for the old CUDA are sometimes incompatible with the new one. We need to check the CUDA version and store kernel caches together with that information.
@cosmo__ reported this problem on Twitter. Thank you!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/cuda/compiler.py`
Content:
```
1 import hashlib
2 import os
3 import re
4 import subprocess
5 import sys
6 import tempfile
7
8 import filelock
9 import six
10
11 from cupy.cuda import device
12 from cupy.cuda import function
13
14
15 def _get_arch():
16 cc = device.Device().compute_capability
17 return 'sm_%s' % cc
18
19
20 class TemporaryDirectory(object):
21
22 def __enter__(self):
23 self.path = tempfile.mkdtemp()
24 return self.path
25
26 def __exit__(self, exc_type, exc_value, traceback):
27 if exc_value is not None:
28 return
29
30 for name in os.listdir(self.path):
31 os.unlink(os.path.join(self.path, name))
32 os.rmdir(self.path)
33
34
35 def _run_nvcc(cmd, cwd):
36 try:
37 return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT)
38 except subprocess.CalledProcessError as e:
39 msg = ('`nvcc` command returns non-zero exit status. \n'
40 'command: {0}\n'
41 'return-code: {1}\n'
42 'stdout/stderr: \n'
43 '{2}'.format(e.cmd, e.returncode, e.output))
44 raise RuntimeError(msg)
45 except OSError as e:
46 msg = 'Failed to run `nvcc` command. ' \
47 'Check PATH environment variable: ' \
48 + str(e)
49 raise OSError(msg)
50
51
52 def nvcc(source, options=(), arch=None):
53 if not arch:
54 arch = _get_arch()
55 cmd = ['nvcc', '--cubin', '-arch', arch] + list(options)
56
57 with TemporaryDirectory() as root_dir:
58 path = os.path.join(root_dir, 'kern')
59 cu_path = '%s.cu' % path
60 cubin_path = '%s.cubin' % path
61
62 with open(cu_path, 'w') as cu_file:
63 cu_file.write(source)
64
65 cmd.append(cu_path)
66 _run_nvcc(cmd, root_dir)
67
68 with open(cubin_path, 'rb') as bin_file:
69 return bin_file.read()
70
71
72 def preprocess(source, options=()):
73 cmd = ['nvcc', '--preprocess'] + list(options)
74 with TemporaryDirectory() as root_dir:
75 path = os.path.join(root_dir, 'kern')
76 cu_path = '%s.cu' % path
77
78 with open(cu_path, 'w') as cu_file:
79 cu_file.write(source)
80
81 cmd.append(cu_path)
82 pp_src = _run_nvcc(cmd, root_dir)
83
84 if isinstance(pp_src, six.binary_type):
85 pp_src = pp_src.decode('utf-8')
86 return re.sub('(?m)^#.*$', '', pp_src)
87
88
89 _default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')
90
91
92 def get_cache_dir():
93 return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)
94
95
96 _empty_file_preprocess_cache = {}
97
98
99 def compile_with_cache(source, options=(), arch=None, cache_dir=None):
100 global _empty_file_preprocess_cache
101 if cache_dir is None:
102 cache_dir = get_cache_dir()
103 if arch is None:
104 arch = _get_arch()
105
106 if 'win32' == sys.platform:
107 options += ('-Xcompiler', '/wd 4819')
108 if sys.maxsize == 9223372036854775807:
109 options += '-m64',
110 elif sys.maxsize == 2147483647:
111 options += '-m32',
112
113 env = (arch, options)
114 if '#include' in source:
115 pp_src = '%s %s' % (env, preprocess(source, options))
116 else:
117 base = _empty_file_preprocess_cache.get(env, None)
118 if base is None:
119 base = _empty_file_preprocess_cache[env] = preprocess('', options)
120 pp_src = '%s %s %s' % (env, base, source)
121
122 if isinstance(pp_src, six.text_type):
123 pp_src = pp_src.encode('utf-8')
124 name = '%s.cubin' % hashlib.md5(pp_src).hexdigest()
125
126 mod = function.Module()
127
128 if not os.path.exists(cache_dir):
129 os.makedirs(cache_dir)
130
131 lock_path = os.path.join(cache_dir, 'lock_file.lock')
132
133 path = os.path.join(cache_dir, name)
134 with filelock.FileLock(lock_path) as lock:
135 if os.path.exists(path):
136 with open(path, 'rb') as file:
137 cubin = file.read()
138 mod.load(cubin)
139 else:
140 lock.release()
141 cubin = nvcc(source, options, arch)
142 mod.load(cubin)
143 lock.acquire()
144 with open(path, 'wb') as cubin_file:
145 cubin_file.write(cubin)
146
147 return mod
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py
--- a/cupy/cuda/compiler.py
+++ b/cupy/cuda/compiler.py
@@ -12,6 +12,18 @@
from cupy.cuda import function
+_nvcc_version = None
+
+
+def _get_nvcc_version():
+ global _nvcc_version
+ if _nvcc_version is None:
+ cmd = ['nvcc', '--version']
+ _nvcc_version = _run_nvcc(cmd, '.')
+
+ return _nvcc_version
+
+
def _get_arch():
cc = device.Device().compute_capability
return 'sm_%s' % cc
@@ -110,7 +122,7 @@
elif sys.maxsize == 2147483647:
options += '-m32',
- env = (arch, options)
+ env = (arch, options, _get_nvcc_version())
if '#include' in source:
pp_src = '%s %s' % (env, preprocess(source, options))
else:
| {"golden_diff": "diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py\n--- a/cupy/cuda/compiler.py\n+++ b/cupy/cuda/compiler.py\n@@ -12,6 +12,18 @@\n from cupy.cuda import function\n \n \n+_nvcc_version = None\n+\n+\n+def _get_nvcc_version():\n+ global _nvcc_version\n+ if _nvcc_version is None:\n+ cmd = ['nvcc', '--version']\n+ _nvcc_version = _run_nvcc(cmd, '.')\n+\n+ return _nvcc_version\n+\n+\n def _get_arch():\n cc = device.Device().compute_capability\n return 'sm_%s' % cc\n@@ -110,7 +122,7 @@\n elif sys.maxsize == 2147483647:\n options += '-m32',\n \n- env = (arch, options)\n+ env = (arch, options, _get_nvcc_version())\n if '#include' in source:\n pp_src = '%s %s' % (env, preprocess(source, options))\n else:\n", "issue": "Invalid CuPy cache problem with different version of CUDAs\nWhen a user update CUDA, caches of CuPy for old CUDA is sometimes incompatible with new one. We need to check CUDA version and to store kernel cache with its information.\n\n@cosmo__ reported this problem on Twitter. Thank you!\n\n", "before_files": [{"content": "import hashlib\nimport os\nimport re\nimport subprocess\nimport sys\nimport tempfile\n\nimport filelock\nimport six\n\nfrom cupy.cuda import device\nfrom cupy.cuda import function\n\n\ndef _get_arch():\n cc = device.Device().compute_capability\n return 'sm_%s' % cc\n\n\nclass TemporaryDirectory(object):\n\n def __enter__(self):\n self.path = tempfile.mkdtemp()\n return self.path\n\n def __exit__(self, exc_type, exc_value, traceback):\n if exc_value is not None:\n return\n\n for name in os.listdir(self.path):\n os.unlink(os.path.join(self.path, name))\n os.rmdir(self.path)\n\n\ndef _run_nvcc(cmd, cwd):\n try:\n return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n msg = ('`nvcc` command returns non-zero exit status. \\n'\n 'command: {0}\\n'\n 'return-code: {1}\\n'\n 'stdout/stderr: \\n'\n '{2}'.format(e.cmd, e.returncode, e.output))\n raise RuntimeError(msg)\n except OSError as e:\n msg = 'Failed to run `nvcc` command. 
' \\\n 'Check PATH environment variable: ' \\\n + str(e)\n raise OSError(msg)\n\n\ndef nvcc(source, options=(), arch=None):\n if not arch:\n arch = _get_arch()\n cmd = ['nvcc', '--cubin', '-arch', arch] + list(options)\n\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n cubin_path = '%s.cubin' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n _run_nvcc(cmd, root_dir)\n\n with open(cubin_path, 'rb') as bin_file:\n return bin_file.read()\n\n\ndef preprocess(source, options=()):\n cmd = ['nvcc', '--preprocess'] + list(options)\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n pp_src = _run_nvcc(cmd, root_dir)\n\n if isinstance(pp_src, six.binary_type):\n pp_src = pp_src.decode('utf-8')\n return re.sub('(?m)^#.*$', '', pp_src)\n\n\n_default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')\n\n\ndef get_cache_dir():\n return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)\n\n\n_empty_file_preprocess_cache = {}\n\n\ndef compile_with_cache(source, options=(), arch=None, cache_dir=None):\n global _empty_file_preprocess_cache\n if cache_dir is None:\n cache_dir = get_cache_dir()\n if arch is None:\n arch = _get_arch()\n\n if 'win32' == sys.platform:\n options += ('-Xcompiler', '/wd 4819')\n if sys.maxsize == 9223372036854775807:\n options += '-m64',\n elif sys.maxsize == 2147483647:\n options += '-m32',\n\n env = (arch, options)\n if '#include' in source:\n pp_src = '%s %s' % (env, preprocess(source, options))\n else:\n base = _empty_file_preprocess_cache.get(env, None)\n if base is None:\n base = _empty_file_preprocess_cache[env] = preprocess('', options)\n pp_src = '%s %s %s' % (env, base, source)\n\n if isinstance(pp_src, six.text_type):\n pp_src = pp_src.encode('utf-8')\n name = '%s.cubin' % hashlib.md5(pp_src).hexdigest()\n\n mod = function.Module()\n\n if not os.path.exists(cache_dir):\n os.makedirs(cache_dir)\n\n lock_path = os.path.join(cache_dir, 'lock_file.lock')\n\n path = os.path.join(cache_dir, name)\n with filelock.FileLock(lock_path) as lock:\n if os.path.exists(path):\n with open(path, 'rb') as file:\n cubin = file.read()\n mod.load(cubin)\n else:\n lock.release()\n cubin = nvcc(source, options, arch)\n mod.load(cubin)\n lock.acquire()\n with open(path, 'wb') as cubin_file:\n cubin_file.write(cubin)\n\n return mod\n", "path": "cupy/cuda/compiler.py"}], "after_files": [{"content": "import hashlib\nimport os\nimport re\nimport subprocess\nimport sys\nimport tempfile\n\nimport filelock\nimport six\n\nfrom cupy.cuda import device\nfrom cupy.cuda import function\n\n\n_nvcc_version = None\n\n\ndef _get_nvcc_version():\n global _nvcc_version\n if _nvcc_version is None:\n cmd = ['nvcc', '--version']\n _nvcc_version = _run_nvcc(cmd, '.')\n\n return _nvcc_version\n\n\ndef _get_arch():\n cc = device.Device().compute_capability\n return 'sm_%s' % cc\n\n\nclass TemporaryDirectory(object):\n\n def __enter__(self):\n self.path = tempfile.mkdtemp()\n return self.path\n\n def __exit__(self, exc_type, exc_value, traceback):\n if exc_value is not None:\n return\n\n for name in os.listdir(self.path):\n os.unlink(os.path.join(self.path, name))\n os.rmdir(self.path)\n\n\ndef _run_nvcc(cmd, cwd):\n try:\n return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n msg = ('`nvcc` command 
returns non-zero exit status. \\n'\n 'command: {0}\\n'\n 'return-code: {1}\\n'\n 'stdout/stderr: \\n'\n '{2}'.format(e.cmd, e.returncode, e.output))\n raise RuntimeError(msg)\n except OSError as e:\n msg = 'Failed to run `nvcc` command. ' \\\n 'Check PATH environment variable: ' \\\n + str(e)\n raise OSError(msg)\n\n\ndef nvcc(source, options=(), arch=None):\n if not arch:\n arch = _get_arch()\n cmd = ['nvcc', '--cubin', '-arch', arch] + list(options)\n\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n cubin_path = '%s.cubin' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n _run_nvcc(cmd, root_dir)\n\n with open(cubin_path, 'rb') as bin_file:\n return bin_file.read()\n\n\ndef preprocess(source, options=()):\n cmd = ['nvcc', '--preprocess'] + list(options)\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n pp_src = _run_nvcc(cmd, root_dir)\n\n if isinstance(pp_src, six.binary_type):\n pp_src = pp_src.decode('utf-8')\n return re.sub('(?m)^#.*$', '', pp_src)\n\n\n_default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')\n\n\ndef get_cache_dir():\n return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)\n\n\n_empty_file_preprocess_cache = {}\n\n\ndef compile_with_cache(source, options=(), arch=None, cache_dir=None):\n global _empty_file_preprocess_cache\n if cache_dir is None:\n cache_dir = get_cache_dir()\n if arch is None:\n arch = _get_arch()\n\n if 'win32' == sys.platform:\n options += ('-Xcompiler', '/wd 4819')\n if sys.maxsize == 9223372036854775807:\n options += '-m64',\n elif sys.maxsize == 2147483647:\n options += '-m32',\n\n env = (arch, options, _get_nvcc_version())\n if '#include' in source:\n pp_src = '%s %s' % (env, preprocess(source, options))\n else:\n base = _empty_file_preprocess_cache.get(env, None)\n if base is None:\n base = _empty_file_preprocess_cache[env] = preprocess('', options)\n pp_src = '%s %s %s' % (env, base, source)\n\n if isinstance(pp_src, six.text_type):\n pp_src = pp_src.encode('utf-8')\n name = '%s.cubin' % hashlib.md5(pp_src).hexdigest()\n\n mod = function.Module()\n\n if not os.path.exists(cache_dir):\n os.makedirs(cache_dir)\n\n lock_path = os.path.join(cache_dir, 'lock_file.lock')\n\n path = os.path.join(cache_dir, name)\n with filelock.FileLock(lock_path) as lock:\n if os.path.exists(path):\n with open(path, 'rb') as file:\n cubin = file.read()\n mod.load(cubin)\n else:\n lock.release()\n cubin = nvcc(source, options, arch)\n mod.load(cubin)\n lock.acquire()\n with open(path, 'wb') as cubin_file:\n cubin_file.write(cubin)\n\n return mod\n", "path": "cupy/cuda/compiler.py"}]} | 1,732 | 247 |
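The cupy patch keys the kernel cache on the compiler itself, not just the architecture and options. A condensed sketch of the added logic (the real patch reuses the module's `_run_nvcc` helper; plain `subprocess` is used here only to keep the snippet self-contained):

```python
import subprocess

_nvcc_version = None

def _get_nvcc_version():
    """Memoized `nvcc --version`, so each toolchain gets its own cache entries."""
    global _nvcc_version
    if _nvcc_version is None:
        _nvcc_version = subprocess.check_output(['nvcc', '--version'])
    return _nvcc_version

# The cache key becomes (arch, options, nvcc version); since `env` is folded
# into the md5 that names the .cubin file, kernels built by an older CUDA can
# no longer shadow ones built by a newer toolkit.
env = ('sm_70', ('-m64',), _get_nvcc_version())  # arch/options values illustrative
```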
gh_patches_debug_63916 | rasdani/github-patches | git_diff | tensorflow__addons-897 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Nightly build break
**System information**
- TensorFlow version and how it was installed (source or binary): tf-nightly-**2.2.0.dev20200115**
- TensorFlow-Addons version and how it was installed (source or binary): tfa-nightly-**0.8.0.dev20200115**
**Describe the bug**
Hi, it looks like [this commit](https://github.com/tensorflow/addons/commit/3aae7732998cb233234a2948010b9aaafc24e920) causes the latest nightly build to fail on import
```
----> 1 import tensorflow_addons
/usr/local/lib/python3.6/dist-packages/tensorflow_addons/__init__.py in <module>()
30
31 # Cleanup symbols to avoid polluting namespace.
---> 32 del absolute_import
33 del division
34 del print_function
NameError: name 'absolute_import' is not defined
```
@seanpmorgan
**Code to reproduce the issue**
[colab](https://colab.research.google.com/drive/1fxRshVv0FPJNHdOqWC4GySjPJ_TdJTJU#scrollTo=TTC3gzRLRAvY)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/__init__.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Useful extra functionality for TensorFlow maintained by SIG-addons."""
16
17 # Local project imports
18 from tensorflow_addons import activations
19 from tensorflow_addons import callbacks
20 from tensorflow_addons import image
21 from tensorflow_addons import layers
22 from tensorflow_addons import losses
23 from tensorflow_addons import metrics
24 from tensorflow_addons import optimizers
25 from tensorflow_addons import rnn
26 from tensorflow_addons import seq2seq
27 from tensorflow_addons import text
28
29 from tensorflow_addons.version import __version__
30
31 # Cleanup symbols to avoid polluting namespace.
32 del absolute_import
33 del division
34 del print_function
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensorflow_addons/__init__.py b/tensorflow_addons/__init__.py
--- a/tensorflow_addons/__init__.py
+++ b/tensorflow_addons/__init__.py
@@ -27,8 +27,3 @@
from tensorflow_addons import text
from tensorflow_addons.version import __version__
-
-# Cleanup symbols to avoid polluting namespace.
-del absolute_import
-del division
-del print_function
| {"golden_diff": "diff --git a/tensorflow_addons/__init__.py b/tensorflow_addons/__init__.py\n--- a/tensorflow_addons/__init__.py\n+++ b/tensorflow_addons/__init__.py\n@@ -27,8 +27,3 @@\n from tensorflow_addons import text\n \n from tensorflow_addons.version import __version__\n-\n-# Cleanup symbols to avoid polluting namespace.\n-del absolute_import\n-del division\n-del print_function\n", "issue": "Nightly build break\n**System information**\r\n- TensorFlow version and how it was installed (source or binary): tf-nightly-**2.2.0.dev20200115** \r\n- TensorFlow-Addons version and how it was installed (source or binary): tfa-nightly-**0.8.0.dev20200115**\r\n\r\n**Describe the bug**\r\nHi, it looks like [this commit](https://github.com/tensorflow/addons/commit/3aae7732998cb233234a2948010b9aaafc24e920) causes the latest nightly build to fail on import\r\n\r\n```\r\n----> 1 import tensorflow_addons\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_addons/__init__.py in <module>()\r\n 30 \r\n 31 # Cleanup symbols to avoid polluting namespace.\r\n---> 32 del absolute_import\r\n 33 del division\r\n 34 del print_function\r\n\r\nNameError: name 'absolute_import' is not defined\r\n```\r\n@seanpmorgan \r\n\r\n**Code to reproduce the issue**\r\n[colab](https://colab.research.google.com/drive/1fxRshVv0FPJNHdOqWC4GySjPJ_TdJTJU#scrollTo=TTC3gzRLRAvY)\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Useful extra functionality for TensorFlow maintained by SIG-addons.\"\"\"\n\n# Local project imports\nfrom tensorflow_addons import activations\nfrom tensorflow_addons import callbacks\nfrom tensorflow_addons import image\nfrom tensorflow_addons import layers\nfrom tensorflow_addons import losses\nfrom tensorflow_addons import metrics\nfrom tensorflow_addons import optimizers\nfrom tensorflow_addons import rnn\nfrom tensorflow_addons import seq2seq\nfrom tensorflow_addons import text\n\nfrom tensorflow_addons.version import __version__\n\n# Cleanup symbols to avoid polluting namespace.\ndel absolute_import\ndel division\ndel print_function\n", "path": "tensorflow_addons/__init__.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Useful extra functionality for TensorFlow maintained by SIG-addons.\"\"\"\n\n# Local project imports\nfrom tensorflow_addons import activations\nfrom tensorflow_addons import callbacks\nfrom tensorflow_addons import image\nfrom tensorflow_addons import layers\nfrom tensorflow_addons import losses\nfrom tensorflow_addons import metrics\nfrom tensorflow_addons import optimizers\nfrom tensorflow_addons import rnn\nfrom tensorflow_addons import seq2seq\nfrom tensorflow_addons import text\n\nfrom tensorflow_addons.version import __version__\n", "path": "tensorflow_addons/__init__.py"}]} | 891 | 98 |
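The tensorflow-addons breakage is a stale-cleanup bug: the `del absolute_import` block presupposes `from __future__ import absolute_import, division, print_function` earlier in `__init__.py`, and once that import was dropped (apparently in the linked commit) the deletions target unbound names. A tiny runnable reproduction:

```python
# `del` on a name that was never bound raises NameError at module import time,
# which is exactly the traceback users saw from `import tensorflow_addons`.
try:
    del absolute_import  # noqa: F821 -- intentionally unbound
except NameError as err:
    print(err)  # -> name 'absolute_import' is not defined
```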
gh_patches_debug_13053 | rasdani/github-patches | git_diff | pymodbus-dev__pymodbus-921 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: module 'asyncio' has no attribute 'exceptions'
```
Traceback (most recent call last):
File "/home//.local/bin/pymodbus.server", line 6, in <module>
from pymodbus.repl.server.main import server
File "/home//.local/lib/python3.7/site-packages/pymodbus/repl/server/main.py", line 16, in <module>
CANCELLED_ERROR = asyncio.exceptions.CancelledError
AttributeError: module 'asyncio' has no attribute 'exceptions'
```
in the code
```
if IS_PYTHON3 and PYTHON_VERSION > (3, 7):
CANCELLED_ERROR = asyncio.exceptions.CancelledError
else:
CANCELLED_ERROR = asyncio.CancelledError
```
Python 3.7 (at least 3.7.3) doesn't have `asyncio.exceptions`.
Maybe it was supposed to be >= 3.8.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymodbus/repl/server/main.py`
Content:
```
1 """Repl server main.
2
3 Copyright (c) 2020 by RiptideIO
4 All rights reserved.
5 """
6 import sys
7 import logging
8 import asyncio
9 import json
10 import click
11 from pymodbus.framer.socket_framer import ModbusSocketFramer
12 from pymodbus.server.reactive.main import (
13 ReactiveServer,
14 DEFAULT_FRAMER,
15 DEFUALT_HANDLERS,
16 )
17 from pymodbus.server.reactive.default_config import DEFUALT_CONFIG
18 from pymodbus.repl.server.cli import run_repl
19
20 if sys.version_info > (3, 7):
21 CANCELLED_ERROR = asyncio.exceptions.CancelledError
22 else:
23 CANCELLED_ERROR = asyncio.CancelledError # pylint: disable=invalid-name
24
25
26 @click.group("ReactiveModbusServer")
27 @click.option("--host", default="localhost", help="Host address")
28 @click.option("--web-port", default=8080, help="Web app port")
29 @click.option(
30 "--broadcast-support",
31 is_flag=True,
32 default=False,
33 help="Support broadcast messages",
34 )
35 @click.option(
36 "--repl/--no-repl",
37 is_flag=True,
38 default=True,
39 help="Enable/Disable repl for server",
40 )
41 @click.option(
42 "--verbose", is_flag=True, help="Run with debug logs enabled for pymodbus"
43 )
44 @click.pass_context
45 def server(ctx, host, web_port, broadcast_support, repl, verbose):
46 """Run server code."""
47 FORMAT = ( # pylint: disable=invalid-name
48 "%(asctime)-15s %(threadName)-15s"
49 " %(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s"
50 )
51 pymodbus_logger = logging.getLogger("pymodbus")
52 logging.basicConfig(format=FORMAT) # NOSONAR
53 if verbose:
54 pymodbus_logger.setLevel(logging.DEBUG)
55 else:
56 pymodbus_logger.setLevel(logging.ERROR)
57
58 ctx.obj = {
59 "repl": repl,
60 "host": host,
61 "web_port": web_port,
62 "broadcast": broadcast_support,
63 }
64
65
66 @server.command("run")
67 @click.option(
68 "--modbus-server",
69 default="tcp",
70 type=click.Choice(["tcp", "serial", "tls", "udp"], case_sensitive=False),
71 help="Modbus server",
72 )
73 @click.option(
74 "--modbus-framer",
75 default="socket",
76 type=click.Choice(
77 ["socket", "rtu", "tls", "ascii", "binary"], case_sensitive=False
78 ),
79 help="Modbus framer to use",
80 )
81 @click.option("--modbus-port", default="5020", help="Modbus port")
82 @click.option(
83 "--modbus-unit-id", default=[1], type=int, multiple=True, help="Modbus unit id"
84 )
85 @click.option(
86 "--modbus-config",
87 type=click.Path(exists=True),
88 help="Path to additional modbus server config",
89 )
90 @click.option(
91 "-r",
92 "--randomize",
93 default=0,
94 help="Randomize every `r` reads."
95 " 0=never, 1=always, "
96 "2=every-second-read, "
97 "and so on. "
98 "Applicable IR and DI.",
99 )
100 @click.pass_context
101 def run(
102 ctx,
103 modbus_server,
104 modbus_framer,
105 modbus_port,
106 modbus_unit_id,
107 modbus_config,
108 randomize,
109 ):
110 """Run Reactive Modbus server.
111
112 Exposing REST endpoint for response manipulation.
113 """
114 repl = ctx.obj.pop("repl")
115 web_app_config = ctx.obj
116 loop = asyncio.get_event_loop()
117 framer = DEFAULT_FRAMER.get(modbus_framer, ModbusSocketFramer)
118 if modbus_config:
119 with open(modbus_config) as my_file: # pylint: disable=unspecified-encoding
120 modbus_config = json.load(my_file)
121 else:
122 modbus_config = DEFUALT_CONFIG
123 modbus_config = modbus_config.get(modbus_server, {})
124 if modbus_server != "serial":
125 modbus_port = int(modbus_port)
126 handler = modbus_config.pop("handler", "ModbusConnectedRequestHandler")
127 else:
128 handler = modbus_config.pop("handler", "ModbusSingleRequestHandler")
129 handler = DEFUALT_HANDLERS.get(handler.strip())
130
131 modbus_config["handler"] = handler
132 modbus_config["randomize"] = randomize
133 app = ReactiveServer.factory(
134 modbus_server,
135 framer,
136 modbus_port=modbus_port,
137 unit=modbus_unit_id,
138 loop=loop,
139 **web_app_config,
140 **modbus_config
141 )
142 try:
143 if repl:
144 loop.run_until_complete(app.run_async())
145
146 loop.run_until_complete(run_repl(app))
147 loop.run_forever()
148 else:
149 app.run()
150
151 except CANCELLED_ERROR:
152 print("Done!!!!!")
153
154
155 if __name__ == "__main__":
156 server() # pylint: disable=no-value-for-parameter
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pymodbus/repl/server/main.py b/pymodbus/repl/server/main.py
--- a/pymodbus/repl/server/main.py
+++ b/pymodbus/repl/server/main.py
@@ -3,7 +3,6 @@
Copyright (c) 2020 by RiptideIO
All rights reserved.
"""
-import sys
import logging
import asyncio
import json
@@ -17,10 +16,7 @@
from pymodbus.server.reactive.default_config import DEFUALT_CONFIG
from pymodbus.repl.server.cli import run_repl
-if sys.version_info > (3, 7):
- CANCELLED_ERROR = asyncio.exceptions.CancelledError
-else:
- CANCELLED_ERROR = asyncio.CancelledError # pylint: disable=invalid-name
+CANCELLED_ERROR = asyncio.exceptions.CancelledError
@click.group("ReactiveModbusServer")
| {"golden_diff": "diff --git a/pymodbus/repl/server/main.py b/pymodbus/repl/server/main.py\n--- a/pymodbus/repl/server/main.py\n+++ b/pymodbus/repl/server/main.py\n@@ -3,7 +3,6 @@\n Copyright (c) 2020 by RiptideIO\n All rights reserved.\n \"\"\"\n-import sys\n import logging\n import asyncio\n import json\n@@ -17,10 +16,7 @@\n from pymodbus.server.reactive.default_config import DEFUALT_CONFIG\n from pymodbus.repl.server.cli import run_repl\n \n-if sys.version_info > (3, 7):\n- CANCELLED_ERROR = asyncio.exceptions.CancelledError\n-else:\n- CANCELLED_ERROR = asyncio.CancelledError # pylint: disable=invalid-name\n+CANCELLED_ERROR = asyncio.exceptions.CancelledError\n \n \n @click.group(\"ReactiveModbusServer\")\n", "issue": "AttributeError: module 'asyncio' has no attribute 'exceptions'\n```\r\nTraceback (most recent call last):\r\n File \"/home//.local/bin/pymodbus.server\", line 6, in <module>\r\n from pymodbus.repl.server.main import server\r\n File \"/home//.local/lib/python3.7/site-packages/pymodbus/repl/server/main.py\", line 16, in <module>\r\n CANCELLED_ERROR = asyncio.exceptions.CancelledError\r\nAttributeError: module 'asyncio' has no attribute 'exceptions'\r\n```\r\n\r\nin the code \r\n\r\n```\r\nif IS_PYTHON3 and PYTHON_VERSION > (3, 7):\r\n CANCELLED_ERROR = asyncio.exceptions.CancelledError\r\nelse:\r\n CANCELLED_ERROR = asyncio.CancelledError\r\n\r\n```\r\npython 3.7 at least 3.7.3 doesn't have `asyncio.exceptions`\r\nMaybe it was supposed to be >= 3.8.\r\n\n", "before_files": [{"content": "\"\"\"Repl server main.\n\nCopyright (c) 2020 by RiptideIO\nAll rights reserved.\n\"\"\"\nimport sys\nimport logging\nimport asyncio\nimport json\nimport click\nfrom pymodbus.framer.socket_framer import ModbusSocketFramer\nfrom pymodbus.server.reactive.main import (\n ReactiveServer,\n DEFAULT_FRAMER,\n DEFUALT_HANDLERS,\n)\nfrom pymodbus.server.reactive.default_config import DEFUALT_CONFIG\nfrom pymodbus.repl.server.cli import run_repl\n\nif sys.version_info > (3, 7):\n CANCELLED_ERROR = asyncio.exceptions.CancelledError\nelse:\n CANCELLED_ERROR = asyncio.CancelledError # pylint: disable=invalid-name\n\n\[email protected](\"ReactiveModbusServer\")\[email protected](\"--host\", default=\"localhost\", help=\"Host address\")\[email protected](\"--web-port\", default=8080, help=\"Web app port\")\[email protected](\n \"--broadcast-support\",\n is_flag=True,\n default=False,\n help=\"Support broadcast messages\",\n)\[email protected](\n \"--repl/--no-repl\",\n is_flag=True,\n default=True,\n help=\"Enable/Disable repl for server\",\n)\[email protected](\n \"--verbose\", is_flag=True, help=\"Run with debug logs enabled for pymodbus\"\n)\[email protected]_context\ndef server(ctx, host, web_port, broadcast_support, repl, verbose):\n \"\"\"Run server code.\"\"\"\n FORMAT = ( # pylint: disable=invalid-name\n \"%(asctime)-15s %(threadName)-15s\"\n \" %(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s\"\n )\n pymodbus_logger = logging.getLogger(\"pymodbus\")\n logging.basicConfig(format=FORMAT) # NOSONAR\n if verbose:\n pymodbus_logger.setLevel(logging.DEBUG)\n else:\n pymodbus_logger.setLevel(logging.ERROR)\n\n ctx.obj = {\n \"repl\": repl,\n \"host\": host,\n \"web_port\": web_port,\n \"broadcast\": broadcast_support,\n }\n\n\[email protected](\"run\")\[email protected](\n \"--modbus-server\",\n default=\"tcp\",\n type=click.Choice([\"tcp\", \"serial\", \"tls\", \"udp\"], case_sensitive=False),\n help=\"Modbus server\",\n)\[email protected](\n \"--modbus-framer\",\n default=\"socket\",\n 
type=click.Choice(\n [\"socket\", \"rtu\", \"tls\", \"ascii\", \"binary\"], case_sensitive=False\n ),\n help=\"Modbus framer to use\",\n)\[email protected](\"--modbus-port\", default=\"5020\", help=\"Modbus port\")\[email protected](\n \"--modbus-unit-id\", default=[1], type=int, multiple=True, help=\"Modbus unit id\"\n)\[email protected](\n \"--modbus-config\",\n type=click.Path(exists=True),\n help=\"Path to additional modbus server config\",\n)\[email protected](\n \"-r\",\n \"--randomize\",\n default=0,\n help=\"Randomize every `r` reads.\"\n \" 0=never, 1=always, \"\n \"2=every-second-read, \"\n \"and so on. \"\n \"Applicable IR and DI.\",\n)\[email protected]_context\ndef run(\n ctx,\n modbus_server,\n modbus_framer,\n modbus_port,\n modbus_unit_id,\n modbus_config,\n randomize,\n):\n \"\"\"Run Reactive Modbus server.\n\n Exposing REST endpoint for response manipulation.\n \"\"\"\n repl = ctx.obj.pop(\"repl\")\n web_app_config = ctx.obj\n loop = asyncio.get_event_loop()\n framer = DEFAULT_FRAMER.get(modbus_framer, ModbusSocketFramer)\n if modbus_config:\n with open(modbus_config) as my_file: # pylint: disable=unspecified-encoding\n modbus_config = json.load(my_file)\n else:\n modbus_config = DEFUALT_CONFIG\n modbus_config = modbus_config.get(modbus_server, {})\n if modbus_server != \"serial\":\n modbus_port = int(modbus_port)\n handler = modbus_config.pop(\"handler\", \"ModbusConnectedRequestHandler\")\n else:\n handler = modbus_config.pop(\"handler\", \"ModbusSingleRequestHandler\")\n handler = DEFUALT_HANDLERS.get(handler.strip())\n\n modbus_config[\"handler\"] = handler\n modbus_config[\"randomize\"] = randomize\n app = ReactiveServer.factory(\n modbus_server,\n framer,\n modbus_port=modbus_port,\n unit=modbus_unit_id,\n loop=loop,\n **web_app_config,\n **modbus_config\n )\n try:\n if repl:\n loop.run_until_complete(app.run_async())\n\n loop.run_until_complete(run_repl(app))\n loop.run_forever()\n else:\n app.run()\n\n except CANCELLED_ERROR:\n print(\"Done!!!!!\")\n\n\nif __name__ == \"__main__\":\n server() # pylint: disable=no-value-for-parameter\n", "path": "pymodbus/repl/server/main.py"}], "after_files": [{"content": "\"\"\"Repl server main.\n\nCopyright (c) 2020 by RiptideIO\nAll rights reserved.\n\"\"\"\nimport logging\nimport asyncio\nimport json\nimport click\nfrom pymodbus.framer.socket_framer import ModbusSocketFramer\nfrom pymodbus.server.reactive.main import (\n ReactiveServer,\n DEFAULT_FRAMER,\n DEFUALT_HANDLERS,\n)\nfrom pymodbus.server.reactive.default_config import DEFUALT_CONFIG\nfrom pymodbus.repl.server.cli import run_repl\n\nCANCELLED_ERROR = asyncio.exceptions.CancelledError\n\n\[email protected](\"ReactiveModbusServer\")\[email protected](\"--host\", default=\"localhost\", help=\"Host address\")\[email protected](\"--web-port\", default=8080, help=\"Web app port\")\[email protected](\n \"--broadcast-support\",\n is_flag=True,\n default=False,\n help=\"Support broadcast messages\",\n)\[email protected](\n \"--repl/--no-repl\",\n is_flag=True,\n default=True,\n help=\"Enable/Disable repl for server\",\n)\[email protected](\n \"--verbose\", is_flag=True, help=\"Run with debug logs enabled for pymodbus\"\n)\[email protected]_context\ndef server(ctx, host, web_port, broadcast_support, repl, verbose):\n \"\"\"Run server code.\"\"\"\n FORMAT = ( # pylint: disable=invalid-name\n \"%(asctime)-15s %(threadName)-15s\"\n \" %(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s\"\n )\n pymodbus_logger = logging.getLogger(\"pymodbus\")\n 
logging.basicConfig(format=FORMAT) # NOSONAR\n if verbose:\n pymodbus_logger.setLevel(logging.DEBUG)\n else:\n pymodbus_logger.setLevel(logging.ERROR)\n\n ctx.obj = {\n \"repl\": repl,\n \"host\": host,\n \"web_port\": web_port,\n \"broadcast\": broadcast_support,\n }\n\n\[email protected](\"run\")\[email protected](\n \"--modbus-server\",\n default=\"tcp\",\n type=click.Choice([\"tcp\", \"serial\", \"tls\", \"udp\"], case_sensitive=False),\n help=\"Modbus server\",\n)\[email protected](\n \"--modbus-framer\",\n default=\"socket\",\n type=click.Choice(\n [\"socket\", \"rtu\", \"tls\", \"ascii\", \"binary\"], case_sensitive=False\n ),\n help=\"Modbus framer to use\",\n)\[email protected](\"--modbus-port\", default=\"5020\", help=\"Modbus port\")\[email protected](\n \"--modbus-unit-id\", default=[1], type=int, multiple=True, help=\"Modbus unit id\"\n)\[email protected](\n \"--modbus-config\",\n type=click.Path(exists=True),\n help=\"Path to additional modbus server config\",\n)\[email protected](\n \"-r\",\n \"--randomize\",\n default=0,\n help=\"Randomize every `r` reads.\"\n \" 0=never, 1=always, \"\n \"2=every-second-read, \"\n \"and so on. \"\n \"Applicable IR and DI.\",\n)\[email protected]_context\ndef run(\n ctx,\n modbus_server,\n modbus_framer,\n modbus_port,\n modbus_unit_id,\n modbus_config,\n randomize,\n):\n \"\"\"Run Reactive Modbus server.\n\n Exposing REST endpoint for response manipulation.\n \"\"\"\n repl = ctx.obj.pop(\"repl\")\n web_app_config = ctx.obj\n loop = asyncio.get_event_loop()\n framer = DEFAULT_FRAMER.get(modbus_framer, ModbusSocketFramer)\n if modbus_config:\n with open(modbus_config) as my_file: # pylint: disable=unspecified-encoding\n modbus_config = json.load(my_file)\n else:\n modbus_config = DEFUALT_CONFIG\n modbus_config = modbus_config.get(modbus_server, {})\n if modbus_server != \"serial\":\n modbus_port = int(modbus_port)\n handler = modbus_config.pop(\"handler\", \"ModbusConnectedRequestHandler\")\n else:\n handler = modbus_config.pop(\"handler\", \"ModbusSingleRequestHandler\")\n handler = DEFUALT_HANDLERS.get(handler.strip())\n\n modbus_config[\"handler\"] = handler\n modbus_config[\"randomize\"] = randomize\n app = ReactiveServer.factory(\n modbus_server,\n framer,\n modbus_port=modbus_port,\n unit=modbus_unit_id,\n loop=loop,\n **web_app_config,\n **modbus_config\n )\n try:\n if repl:\n loop.run_until_complete(app.run_async())\n\n loop.run_until_complete(run_repl(app))\n loop.run_forever()\n else:\n app.run()\n\n except CANCELLED_ERROR:\n print(\"Done!!!!!\")\n\n\nif __name__ == \"__main__\":\n server() # pylint: disable=no-value-for-parameter\n", "path": "pymodbus/repl/server/main.py"}]} | 1,941 | 200 |
gh_patches_debug_989 | rasdani/github-patches | git_diff | hydroshare__hydroshare-5098 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Haystack rest endpoint response serializer does not include short_id
**Description of the bug**
The Haystack REST endpoint for complex solr searches does not include the short_id in the response serializer. This is a critical piece of information for users of this endpoint.
Steps to reproduce the bug:
https://github.com/hydroshare/hydroshare/blob/d3bd1737a0179eac74cd68926b3b79b80894410e/hs_rest_api/discovery.py#L12
**Expected behavior**
I expect resource ids to be included with search results so I can retrieve resources.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_rest_api/discovery.py`
Content:
```
1 from drf_haystack.serializers import HaystackSerializer
2 from drf_haystack.viewsets import HaystackViewSet
3 from hs_core.search_indexes import BaseResourceIndex
4 from hs_core.models import BaseResource
5 from drf_haystack.fields import HaystackCharField, HaystackDateField, HaystackMultiValueField, \
6 HaystackFloatField
7 from drf_yasg.utils import swagger_auto_schema
8 from rest_framework.decorators import action
9 from rest_framework import serializers
10
11
12 class DiscoveryResourceSerializer(HaystackSerializer):
13 class Meta:
14 index_classes = [BaseResourceIndex]
15 fields = [
16 "title",
17 "author",
18 "contributor",
19 "subject",
20 "abstract",
21 "resource_type",
22 "content_type",
23 "coverage_type",
24 "availability",
25 "created",
26 "modified",
27 "start_date",
28 "end_date",
29 "east",
30 "north",
31 "eastlimit",
32 "westlimit",
33 "northlimit",
34 "southlimit"
35 ]
36
37
38 class DiscoverResourceValidator(serializers.Serializer):
39 text = HaystackCharField(required=False,
40 help_text='Search across all Resource Fields')
41 author = HaystackCharField(required=False,
42 help_text='Search by author')
43 contributor = HaystackMultiValueField(required=False,
44 help_text='Search by contributor')
45 subject = HaystackMultiValueField(required=False,
46 help_text='Search within subject keywords')
47 abstract = HaystackCharField(required=False,
48 help_text='Search within the abstract')
49 resource_type = HaystackCharField(required=False,
50 help_text='Search by resource type')
51 content_type = HaystackMultiValueField(required=False,
52 help_text='Search by content type')
53 coverage_type = HaystackMultiValueField(required=False,
54 help_text='Search by coverage type '
55 '(point, box, period)')
56 availability = HaystackMultiValueField(required=False,
57 help_text='Search by availability '
58 '(discoverable, public, published)')
59 created = HaystackDateField(required=False,
60 help_text='Search by created date')
61 modified = HaystackDateField(required=False,
62 help_text='Search by modified date')
63 start_date = HaystackDateField(required=False,
64 help_text='Search by start date')
65 end_date = HaystackDateField(required=False,
66 help_text='Search by end date')
67 east = HaystackFloatField(required=False,
68 help_text='Search by location or box center east longitude')
69 north = HaystackFloatField(required=False,
70 help_text='Search by location or box center north latitude')
71 eastlimit = HaystackFloatField(required=False,
72 help_text='Search by east limit longitude')
73 westlimit = HaystackFloatField(required=False,
74 help_text='Search by west limit longitude')
75 northlimit = HaystackFloatField(required=False,
76 help_text='Search by north limit latitude')
77 southlimit = HaystackFloatField(required=False,
78 help_text='Search by south limit latitude')
79
80
81 class DiscoverSearchView(HaystackViewSet):
82 index_models = [BaseResource]
83 serializer_class = DiscoveryResourceSerializer
84
85 @action(detail=True, methods=['get'])
86 @swagger_auto_schema(operation_description="Search HydroShare Resources using solr conventions."
87 "We use haystack for queries so you can use all of "
88 "the parameters described here in combination with "
89 "field lookups "
90 "https://django-haystack.readthedocs.io/en/latest/"
91 "searchqueryset_api.html?highlight=lookups#id1",
92 query_serializer=DiscoverResourceValidator)
93 def list(self, request):
94 return super(DiscoverSearchView, self).list(request)
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_rest_api/discovery.py b/hs_rest_api/discovery.py
--- a/hs_rest_api/discovery.py
+++ b/hs_rest_api/discovery.py
@@ -13,6 +13,7 @@
class Meta:
index_classes = [BaseResourceIndex]
fields = [
+ "short_id",
"title",
"author",
"contributor",
| {"golden_diff": "diff --git a/hs_rest_api/discovery.py b/hs_rest_api/discovery.py\n--- a/hs_rest_api/discovery.py\n+++ b/hs_rest_api/discovery.py\n@@ -13,6 +13,7 @@\n class Meta:\n index_classes = [BaseResourceIndex]\n fields = [\n+ \"short_id\",\n \"title\",\n \"author\",\n \"contributor\",\n", "issue": "Haystack rest endpoint response serializer does not include short_id\n**Description of the bug**\r\nThe Haystack REST endpoint for complex solr searches does not include the short_id into the response serializer. This is a critical piece of information for users of this endpoint. \r\n\r\nSteps to reproduce the bug:\r\nhttps://github.com/hydroshare/hydroshare/blob/d3bd1737a0179eac74cd68926b3b79b80894410e/hs_rest_api/discovery.py#L12\r\n\r\n**Expected behavior**\r\nI expect resource ids to be included with search results so I can retrieve resources.\r\n\n", "before_files": [{"content": "from drf_haystack.serializers import HaystackSerializer\nfrom drf_haystack.viewsets import HaystackViewSet\nfrom hs_core.search_indexes import BaseResourceIndex\nfrom hs_core.models import BaseResource\nfrom drf_haystack.fields import HaystackCharField, HaystackDateField, HaystackMultiValueField, \\\n HaystackFloatField\nfrom drf_yasg.utils import swagger_auto_schema\nfrom rest_framework.decorators import action\nfrom rest_framework import serializers\n\n\nclass DiscoveryResourceSerializer(HaystackSerializer):\n class Meta:\n index_classes = [BaseResourceIndex]\n fields = [\n \"title\",\n \"author\",\n \"contributor\",\n \"subject\",\n \"abstract\",\n \"resource_type\",\n \"content_type\",\n \"coverage_type\",\n \"availability\",\n \"created\",\n \"modified\",\n \"start_date\",\n \"end_date\",\n \"east\",\n \"north\",\n \"eastlimit\",\n \"westlimit\",\n \"northlimit\",\n \"southlimit\"\n ]\n\n\nclass DiscoverResourceValidator(serializers.Serializer):\n text = HaystackCharField(required=False,\n help_text='Search across all Resource Fields')\n author = HaystackCharField(required=False,\n help_text='Search by author')\n contributor = HaystackMultiValueField(required=False,\n help_text='Search by contributor')\n subject = HaystackMultiValueField(required=False,\n help_text='Search within subject keywords')\n abstract = HaystackCharField(required=False,\n help_text='Search within the abstract')\n resource_type = HaystackCharField(required=False,\n help_text='Search by resource type')\n content_type = HaystackMultiValueField(required=False,\n help_text='Search by content type')\n coverage_type = HaystackMultiValueField(required=False,\n help_text='Search by coverage type '\n '(point, box, period)')\n availability = HaystackMultiValueField(required=False,\n help_text='Search by availability '\n '(discoverable, public, published)')\n created = HaystackDateField(required=False,\n help_text='Search by created date')\n modified = HaystackDateField(required=False,\n help_text='Search by modified date')\n start_date = HaystackDateField(required=False,\n help_text='Search by start date')\n end_date = HaystackDateField(required=False,\n help_text='Search by end date')\n east = HaystackFloatField(required=False,\n help_text='Search by location or box center east longitude')\n north = HaystackFloatField(required=False,\n help_text='Search by location or box center north latitude')\n eastlimit = HaystackFloatField(required=False,\n help_text='Search by east limit longitude')\n westlimit = HaystackFloatField(required=False,\n help_text='Search by west limit longitude')\n northlimit = HaystackFloatField(required=False,\n 
help_text='Search by north limit latitude')\n southlimit = HaystackFloatField(required=False,\n help_text='Search by south limit latitude')\n\n\nclass DiscoverSearchView(HaystackViewSet):\n index_models = [BaseResource]\n serializer_class = DiscoveryResourceSerializer\n\n @action(detail=True, methods=['get'])\n @swagger_auto_schema(operation_description=\"Search HydroShare Resources using solr conventions.\"\n \"We use haystack for queries so you can use all of \"\n \"the parameters described here in combination with \"\n \"field lookups \"\n \"https://django-haystack.readthedocs.io/en/latest/\"\n \"searchqueryset_api.html?highlight=lookups#id1\",\n query_serializer=DiscoverResourceValidator)\n def list(self, request):\n return super(DiscoverSearchView, self).list(request)\n", "path": "hs_rest_api/discovery.py"}], "after_files": [{"content": "from drf_haystack.serializers import HaystackSerializer\nfrom drf_haystack.viewsets import HaystackViewSet\nfrom hs_core.search_indexes import BaseResourceIndex\nfrom hs_core.models import BaseResource\nfrom drf_haystack.fields import HaystackCharField, HaystackDateField, HaystackMultiValueField, \\\n HaystackFloatField\nfrom drf_yasg.utils import swagger_auto_schema\nfrom rest_framework.decorators import action\nfrom rest_framework import serializers\n\n\nclass DiscoveryResourceSerializer(HaystackSerializer):\n class Meta:\n index_classes = [BaseResourceIndex]\n fields = [\n \"short_id\",\n \"title\",\n \"author\",\n \"contributor\",\n \"subject\",\n \"abstract\",\n \"resource_type\",\n \"content_type\",\n \"coverage_type\",\n \"availability\",\n \"created\",\n \"modified\",\n \"start_date\",\n \"end_date\",\n \"east\",\n \"north\",\n \"eastlimit\",\n \"westlimit\",\n \"northlimit\",\n \"southlimit\"\n ]\n\n\nclass DiscoverResourceValidator(serializers.Serializer):\n text = HaystackCharField(required=False,\n help_text='Search across all Resource Fields')\n author = HaystackCharField(required=False,\n help_text='Search by author')\n contributor = HaystackMultiValueField(required=False,\n help_text='Search by contributor')\n subject = HaystackMultiValueField(required=False,\n help_text='Search within subject keywords')\n abstract = HaystackCharField(required=False,\n help_text='Search within the abstract')\n resource_type = HaystackCharField(required=False,\n help_text='Search by resource type')\n content_type = HaystackMultiValueField(required=False,\n help_text='Search by content type')\n coverage_type = HaystackMultiValueField(required=False,\n help_text='Search by coverage type '\n '(point, box, period)')\n availability = HaystackMultiValueField(required=False,\n help_text='Search by availability '\n '(discoverable, public, published)')\n created = HaystackDateField(required=False,\n help_text='Search by created date')\n modified = HaystackDateField(required=False,\n help_text='Search by modified date')\n start_date = HaystackDateField(required=False,\n help_text='Search by start date')\n end_date = HaystackDateField(required=False,\n help_text='Search by end date')\n east = HaystackFloatField(required=False,\n help_text='Search by location or box center east longitude')\n north = HaystackFloatField(required=False,\n help_text='Search by location or box center north latitude')\n eastlimit = HaystackFloatField(required=False,\n help_text='Search by east limit longitude')\n westlimit = HaystackFloatField(required=False,\n help_text='Search by west limit longitude')\n northlimit = HaystackFloatField(required=False,\n help_text='Search by north limit 
latitude')\n southlimit = HaystackFloatField(required=False,\n help_text='Search by south limit latitude')\n\n\nclass DiscoverSearchView(HaystackViewSet):\n index_models = [BaseResource]\n serializer_class = DiscoveryResourceSerializer\n\n @action(detail=True, methods=['get'])\n @swagger_auto_schema(operation_description=\"Search HydroShare Resources using solr conventions.\"\n \"We use haystack for queries so you can use all of \"\n \"the parameters described here in combination with \"\n \"field lookups \"\n \"https://django-haystack.readthedocs.io/en/latest/\"\n \"searchqueryset_api.html?highlight=lookups#id1\",\n query_serializer=DiscoverResourceValidator)\n def list(self, request):\n return super(DiscoverSearchView, self).list(request)\n", "path": "hs_rest_api/discovery.py"}]} | 1,354 | 88 |
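The hydroshare fix is a one-line addition to the serializer's field whitelist: `drf-haystack` only serializes index fields listed in `Meta.fields`, so `short_id` was silently dropped from responses. A condensed view of the patched serializer (assuming, as the issue implies, that `BaseResourceIndex` already indexes `short_id`):

```python
class DiscoveryResourceSerializer(HaystackSerializer):
    class Meta:
        index_classes = [BaseResourceIndex]
        fields = [
            "short_id",  # the fix: expose the resource id clients need
            "title",
            "author",
            # ... remaining fields unchanged from the record above ...
        ]
```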
gh_patches_debug_20873 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-2223 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix translations in package
The compilemessages step for geotrek and mapentity is missing somewhere
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/python3
2 import os
3 import distutils.command.build
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8
9 class BuildCommand(distutils.command.build.build):
10 def run(self):
11 print("before")
12 distutils.command.build.build.run(self)
13 print("after")
14 from django.core.management import call_command
15 curdir = os.getcwd()
16 os.chdir(os.path.join(curdir, 'geotrek'))
17 call_command('compilemessages')
18 os.chdir(os.path.join(curdir, 'mapentity'))
19 call_command('compilemessages')
20 os.chdir(curdir)
21
22
23 setup(
24 name='geotrek',
25 version=open(os.path.join(here, 'VERSION')).read().strip(),
26 author='Makina Corpus',
27 author_email='[email protected]',
28 url='http://makina-corpus.com',
29 description="Geotrek",
30 long_description=(open(os.path.join(here, 'README.rst')).read() + '\n\n'
31 + open(os.path.join(here, 'docs', 'changelog.rst')).read()),
32 scripts=['manage.py'],
33 install_requires=[
34 # pinned by requirements.txt
35 'psycopg2',
36 'docutils',
37 'GDAL',
38 'Pillow',
39 'easy-thumbnails',
40 'simplekml',
41 'pygal',
42 'django-extended-choices',
43 'django-multiselectfield',
44 'geojson',
45 'tif2geojson',
46 'pytz',
47 'djangorestframework-gis',
48 'drf-dynamic-fields',
49 'django-rest-swagger',
50 'django-embed-video',
51 'xlrd',
52 'landez',
53 'redis',
54 'celery',
55 'django-celery-results',
56 'requests[security]',
57 'drf-extensions',
58 'django-colorfield',
59 'factory_boy',
60 ],
61 cmdclass={"build": BuildCommand},
62 include_package_data=True,
63 license='BSD, see LICENSE file.',
64 packages=find_packages(),
65 classifiers=['Natural Language :: English',
66 'Environment :: Web Environment',
67 'Framework :: Django',
68 'Development Status :: 5 - Production/Stable',
69 'Programming Language :: Python :: 2.7'],
70 )
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,23 +1,24 @@
#!/usr/bin/python3
import os
import distutils.command.build
+from pathlib import Path
from setuptools import setup, find_packages
+from shutil import copy
here = os.path.abspath(os.path.dirname(__file__))
class BuildCommand(distutils.command.build.build):
def run(self):
- print("before")
distutils.command.build.build.run(self)
- print("after")
from django.core.management import call_command
curdir = os.getcwd()
- os.chdir(os.path.join(curdir, 'geotrek'))
- call_command('compilemessages')
- os.chdir(os.path.join(curdir, 'mapentity'))
- call_command('compilemessages')
- os.chdir(curdir)
+ for subdir in ('geotrek', 'mapentity'):
+ os.chdir(subdir)
+ call_command('compilemessages')
+ for path in Path('.').rglob('*.mo'):
+ copy(path, os.path.join(curdir, self.build_lib, subdir, path))
+ os.chdir(curdir)
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,23 +1,24 @@\n #!/usr/bin/python3\n import os\n import distutils.command.build\n+from pathlib import Path\n from setuptools import setup, find_packages\n+from shutil import copy\n \n here = os.path.abspath(os.path.dirname(__file__))\n \n \n class BuildCommand(distutils.command.build.build):\n def run(self):\n- print(\"before\")\n distutils.command.build.build.run(self)\n- print(\"after\")\n from django.core.management import call_command\n curdir = os.getcwd()\n- os.chdir(os.path.join(curdir, 'geotrek'))\n- call_command('compilemessages')\n- os.chdir(os.path.join(curdir, 'mapentity'))\n- call_command('compilemessages')\n- os.chdir(curdir)\n+ for subdir in ('geotrek', 'mapentity'):\n+ os.chdir(subdir)\n+ call_command('compilemessages')\n+ for path in Path('.').rglob('*.mo'):\n+ copy(path, os.path.join(curdir, self.build_lib, subdir, path))\n+ os.chdir(curdir)\n \n \n setup(\n", "issue": "Fix translations in package\nThe compilemessages step for geotrek and mapentity is missing somewhere\n", "before_files": [{"content": "#!/usr/bin/python3\nimport os\nimport distutils.command.build\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\nclass BuildCommand(distutils.command.build.build):\n def run(self):\n print(\"before\")\n distutils.command.build.build.run(self)\n print(\"after\")\n from django.core.management import call_command\n curdir = os.getcwd()\n os.chdir(os.path.join(curdir, 'geotrek'))\n call_command('compilemessages')\n os.chdir(os.path.join(curdir, 'mapentity'))\n call_command('compilemessages')\n os.chdir(curdir)\n\n\nsetup(\n name='geotrek',\n version=open(os.path.join(here, 'VERSION')).read().strip(),\n author='Makina Corpus',\n author_email='[email protected]',\n url='http://makina-corpus.com',\n description=\"Geotrek\",\n long_description=(open(os.path.join(here, 'README.rst')).read() + '\\n\\n'\n + open(os.path.join(here, 'docs', 'changelog.rst')).read()),\n scripts=['manage.py'],\n install_requires=[\n # pinned by requirements.txt\n 'psycopg2',\n 'docutils',\n 'GDAL',\n 'Pillow',\n 'easy-thumbnails',\n 'simplekml',\n 'pygal',\n 'django-extended-choices',\n 'django-multiselectfield',\n 'geojson',\n 'tif2geojson',\n 'pytz',\n 'djangorestframework-gis',\n 'drf-dynamic-fields',\n 'django-rest-swagger',\n 'django-embed-video',\n 'xlrd',\n 'landez',\n 'redis',\n 'celery',\n 'django-celery-results',\n 'requests[security]',\n 'drf-extensions',\n 'django-colorfield',\n 'factory_boy',\n ],\n cmdclass={\"build\": BuildCommand},\n include_package_data=True,\n license='BSD, see LICENSE file.',\n packages=find_packages(),\n classifiers=['Natural Language :: English',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Development Status :: 5 - Production/Stable',\n 'Programming Language :: Python :: 2.7'],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/python3\nimport os\nimport distutils.command.build\nfrom pathlib import Path\nfrom setuptools import setup, find_packages\nfrom shutil import copy\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\nclass BuildCommand(distutils.command.build.build):\n def run(self):\n distutils.command.build.build.run(self)\n from django.core.management import call_command\n curdir = os.getcwd()\n for subdir in ('geotrek', 'mapentity'):\n os.chdir(subdir)\n call_command('compilemessages')\n for path in Path('.').rglob('*.mo'):\n copy(path, os.path.join(curdir, self.build_lib, 
subdir, path))\n os.chdir(curdir)\n\n\nsetup(\n name='geotrek',\n version=open(os.path.join(here, 'VERSION')).read().strip(),\n author='Makina Corpus',\n author_email='[email protected]',\n url='http://makina-corpus.com',\n description=\"Geotrek\",\n long_description=(open(os.path.join(here, 'README.rst')).read() + '\\n\\n'\n + open(os.path.join(here, 'docs', 'changelog.rst')).read()),\n scripts=['manage.py'],\n install_requires=[\n # pinned by requirements.txt\n 'psycopg2',\n 'docutils',\n 'GDAL',\n 'Pillow',\n 'easy-thumbnails',\n 'simplekml',\n 'pygal',\n 'django-extended-choices',\n 'django-multiselectfield',\n 'geojson',\n 'tif2geojson',\n 'pytz',\n 'djangorestframework-gis',\n 'drf-dynamic-fields',\n 'django-rest-swagger',\n 'django-embed-video',\n 'xlrd',\n 'landez',\n 'redis',\n 'celery',\n 'django-celery-results',\n 'requests[security]',\n 'drf-extensions',\n 'django-colorfield',\n 'factory_boy',\n ],\n cmdclass={\"build\": BuildCommand},\n include_package_data=True,\n license='BSD, see LICENSE file.',\n packages=find_packages(),\n classifiers=['Natural Language :: English',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Development Status :: 5 - Production/Stable',\n 'Programming Language :: Python :: 2.7'],\n)\n", "path": "setup.py"}]} | 898 | 255 |
gh_patches_debug_26330 | rasdani/github-patches | git_diff | streamlink__streamlink-1583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Vaughnlive changed IP's to break Streamlink
This will be a very brief bug report... As of tonight the head vaughnlive.py references IPs which were disconnected by vaughn to thwart streamlinking. I've observed vaughn serving video now from "66.90.93.44","66.90.93.35" and have personally gotten it to work overwriting the IP's in rtmp_server_map with those two alternating. I would submit the commit but I think some more testing is needed as I only use streamlink with one occasional stream and don't know how far those IPs will get more frequent SL users.
#1187 contains lengthy discussion on the history of the war vaughn has waged against streamlink, this is probably not the last time the IPs will change.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/vaughnlive.py`
Content:
```
1 import random
2 import re
3 import itertools
4 import ssl
5 import websocket
6
7 from streamlink.plugin import Plugin
8 from streamlink.plugin.api import useragents, http
9 from streamlink.stream import RTMPStream
10
11 _url_re = re.compile(r"""
12 http(s)?://(\w+\.)?
13 (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv
14 (/embed/video)?
15 /(?P<channel>[^/&?]+)
16 """, re.VERBOSE)
17
18
19 class VLWebSocket(websocket.WebSocket):
20 def __init__(self, **_):
21 self.session = _.pop("session")
22 self.logger = self.session.logger.new_module("plugins.vaughnlive.websocket")
23 sslopt = _.pop("sslopt", {})
24 sslopt["cert_reqs"] = ssl.CERT_NONE
25 super(VLWebSocket, self).__init__(sslopt=sslopt, **_)
26
27 def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):
28 self.logger.debug("Sending message: {0}", payload)
29 return super(VLWebSocket, self).send(payload + "\n\x00", opcode)
30
31 def recv(self):
32 d = super(VLWebSocket, self).recv().replace("\n", "").replace("\x00", "")
33 return d.split(" ", 1)
34
35
36 class VaughnLive(Plugin):
37 servers = ["wss://sapi-ws-{0}x{1:02}.vaughnlive.tv".format(x, y) for x, y in itertools.product(range(1, 3),
38 range(1, 6))]
39 origin = "https://vaughnlive.tv"
40 rtmp_server_map = {
41 "594140c69edad": "66.90.93.42",
42 "585c4cab1bef1": "66.90.93.34",
43 "5940d648b3929": "66.90.93.42",
44 "5941854b39bc4": "198.255.0.10"
45 }
46 name_remap = {"#vl": "live", "#btv": "btv", "#pt": "pt", "#igb": "instagib", "#vtv": "vtv"}
47 domain_map = {"vaughnlive": "#vl", "breakers": "#btv", "instagib": "#igb", "vapers": "#vtv", "pearltime": "#pt"}
48
49 @classmethod
50 def can_handle_url(cls, url):
51 return _url_re.match(url)
52
53 def api_url(self):
54 return random.choice(self.servers)
55
56 def parse_ack(self, action, message):
57 if action.endswith("3"):
58 channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(";")
59 is_live = is_live == "1"
60 viewers = int(viewers)
61 self.logger.debug("Viewers: {0}, isLive={1}", viewers, is_live)
62 domain, channel = channel.split("-", 1)
63 return is_live, server, domain, channel, token, ingest
64 else:
65 self.logger.error("Unhandled action format: {0}", action)
66
67 def _get_info(self, stream_name):
68 server = self.api_url()
69 self.logger.debug("Connecting to API: {0}", server)
70 ws = websocket.create_connection(server,
71 header=["User-Agent: {0}".format(useragents.CHROME)],
72 origin=self.origin,
73 class_=VLWebSocket,
74 session=self.session)
75 ws.send("MVN LOAD3 {0}".format(stream_name))
76 action, message = ws.recv()
77 return self.parse_ack(action, message)
78
79 def _get_rtmp_streams(self, server, domain, channel, token):
80 rtmp_server = self.rtmp_server_map.get(server, server)
81
82 url = "rtmp://{0}/live?{1}".format(rtmp_server, token)
83
84 yield "live", RTMPStream(self.session, params={
85 "rtmp": url,
86 "pageUrl": self.url,
87 "playpath": "{0}_{1}".format(self.name_remap.get(domain, "live"), channel),
88 "live": True
89 })
90
91 def _get_streams(self):
92 m = _url_re.match(self.url)
93 if m:
94 stream_name = "{0}-{1}".format(self.domain_map[(m.group("domain").lower())],
95 m.group("channel"))
96
97 is_live, server, domain, channel, token, ingest = self._get_info(stream_name)
98
99 if not is_live:
100 self.logger.info("Stream is currently off air")
101 else:
102 for s in self._get_rtmp_streams(server, domain, channel, token):
103 yield s
104
105
106 __plugin__ = VaughnLive
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/vaughnlive.py b/src/streamlink/plugins/vaughnlive.py
--- a/src/streamlink/plugins/vaughnlive.py
+++ b/src/streamlink/plugins/vaughnlive.py
@@ -38,10 +38,10 @@
range(1, 6))]
origin = "https://vaughnlive.tv"
rtmp_server_map = {
- "594140c69edad": "66.90.93.42",
- "585c4cab1bef1": "66.90.93.34",
- "5940d648b3929": "66.90.93.42",
- "5941854b39bc4": "198.255.0.10"
+ "594140c69edad": "192.240.105.171:1935",
+ "585c4cab1bef1": "192.240.105.171:1935",
+ "5940d648b3929": "192.240.105.171:1935",
+ "5941854b39bc4": "192.240.105.171:1935"
}
name_remap = {"#vl": "live", "#btv": "btv", "#pt": "pt", "#igb": "instagib", "#vtv": "vtv"}
domain_map = {"vaughnlive": "#vl", "breakers": "#btv", "instagib": "#igb", "vapers": "#vtv", "pearltime": "#pt"}
@@ -99,6 +99,7 @@
if not is_live:
self.logger.info("Stream is currently off air")
else:
+ self.logger.info("Stream powered by VaughnSoft - remember to support them.")
for s in self._get_rtmp_streams(server, domain, channel, token):
yield s
| {"golden_diff": "diff --git a/src/streamlink/plugins/vaughnlive.py b/src/streamlink/plugins/vaughnlive.py\n--- a/src/streamlink/plugins/vaughnlive.py\n+++ b/src/streamlink/plugins/vaughnlive.py\n@@ -38,10 +38,10 @@\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n- \"594140c69edad\": \"66.90.93.42\",\n- \"585c4cab1bef1\": \"66.90.93.34\",\n- \"5940d648b3929\": \"66.90.93.42\",\n- \"5941854b39bc4\": \"198.255.0.10\"\n+ \"594140c69edad\": \"192.240.105.171:1935\",\n+ \"585c4cab1bef1\": \"192.240.105.171:1935\",\n+ \"5940d648b3929\": \"192.240.105.171:1935\",\n+ \"5941854b39bc4\": \"192.240.105.171:1935\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": \"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", \"pearltime\": \"#pt\"}\n@@ -99,6 +99,7 @@\n if not is_live:\n self.logger.info(\"Stream is currently off air\")\n else:\n+ self.logger.info(\"Stream powered by VaughnSoft - remember to support them.\")\n for s in self._get_rtmp_streams(server, domain, channel, token):\n yield s\n", "issue": "Vaughnlive changed IP's to break Streamlink\nThis will be a very brief bug report... As of tonight the head vaughnlive.py references IPs which were disconnected by vaughn to thwart streamlinking. I've observed vaughn serving video now from \"66.90.93.44\",\"66.90.93.35\" and have personally gotten it to work overwriting the IP's in rtmp_server_map with those two alternating. I would submit the commit but I think some more testing is needed as I only use streamlink with one occasional stream and don't know how far those IPs will get more frequent SL users.\r\n\r\n #1187 contains lengthy discussion on the history of the war vaughn has waged against streamlink, this is probably not the last time the IPs will change.\n", "before_files": [{"content": "import random\nimport re\nimport itertools\nimport ssl\nimport websocket\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents, http\nfrom streamlink.stream import RTMPStream\n\n_url_re = re.compile(r\"\"\"\n http(s)?://(\\w+\\.)?\n (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv\n (/embed/video)?\n /(?P<channel>[^/&?]+)\n\"\"\", re.VERBOSE)\n\n\nclass VLWebSocket(websocket.WebSocket):\n def __init__(self, **_):\n self.session = _.pop(\"session\")\n self.logger = self.session.logger.new_module(\"plugins.vaughnlive.websocket\")\n sslopt = _.pop(\"sslopt\", {})\n sslopt[\"cert_reqs\"] = ssl.CERT_NONE\n super(VLWebSocket, self).__init__(sslopt=sslopt, **_)\n\n def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):\n self.logger.debug(\"Sending message: {0}\", payload)\n return super(VLWebSocket, self).send(payload + \"\\n\\x00\", opcode)\n\n def recv(self):\n d = super(VLWebSocket, self).recv().replace(\"\\n\", \"\").replace(\"\\x00\", \"\")\n return d.split(\" \", 1)\n\n\nclass VaughnLive(Plugin):\n servers = [\"wss://sapi-ws-{0}x{1:02}.vaughnlive.tv\".format(x, y) for x, y in itertools.product(range(1, 3),\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n \"594140c69edad\": \"66.90.93.42\",\n \"585c4cab1bef1\": \"66.90.93.34\",\n \"5940d648b3929\": \"66.90.93.42\",\n \"5941854b39bc4\": \"198.255.0.10\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": \"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", 
\"pearltime\": \"#pt\"}\n\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def api_url(self):\n return random.choice(self.servers)\n\n def parse_ack(self, action, message):\n if action.endswith(\"3\"):\n channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(\";\")\n is_live = is_live == \"1\"\n viewers = int(viewers)\n self.logger.debug(\"Viewers: {0}, isLive={1}\", viewers, is_live)\n domain, channel = channel.split(\"-\", 1)\n return is_live, server, domain, channel, token, ingest\n else:\n self.logger.error(\"Unhandled action format: {0}\", action)\n\n def _get_info(self, stream_name):\n server = self.api_url()\n self.logger.debug(\"Connecting to API: {0}\", server)\n ws = websocket.create_connection(server,\n header=[\"User-Agent: {0}\".format(useragents.CHROME)],\n origin=self.origin,\n class_=VLWebSocket,\n session=self.session)\n ws.send(\"MVN LOAD3 {0}\".format(stream_name))\n action, message = ws.recv()\n return self.parse_ack(action, message)\n\n def _get_rtmp_streams(self, server, domain, channel, token):\n rtmp_server = self.rtmp_server_map.get(server, server)\n\n url = \"rtmp://{0}/live?{1}\".format(rtmp_server, token)\n\n yield \"live\", RTMPStream(self.session, params={\n \"rtmp\": url,\n \"pageUrl\": self.url,\n \"playpath\": \"{0}_{1}\".format(self.name_remap.get(domain, \"live\"), channel),\n \"live\": True\n })\n\n def _get_streams(self):\n m = _url_re.match(self.url)\n if m:\n stream_name = \"{0}-{1}\".format(self.domain_map[(m.group(\"domain\").lower())],\n m.group(\"channel\"))\n\n is_live, server, domain, channel, token, ingest = self._get_info(stream_name)\n\n if not is_live:\n self.logger.info(\"Stream is currently off air\")\n else:\n for s in self._get_rtmp_streams(server, domain, channel, token):\n yield s\n\n\n__plugin__ = VaughnLive\n", "path": "src/streamlink/plugins/vaughnlive.py"}], "after_files": [{"content": "import random\nimport re\nimport itertools\nimport ssl\nimport websocket\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents, http\nfrom streamlink.stream import RTMPStream\n\n_url_re = re.compile(r\"\"\"\n http(s)?://(\\w+\\.)?\n (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv\n (/embed/video)?\n /(?P<channel>[^/&?]+)\n\"\"\", re.VERBOSE)\n\n\nclass VLWebSocket(websocket.WebSocket):\n def __init__(self, **_):\n self.session = _.pop(\"session\")\n self.logger = self.session.logger.new_module(\"plugins.vaughnlive.websocket\")\n sslopt = _.pop(\"sslopt\", {})\n sslopt[\"cert_reqs\"] = ssl.CERT_NONE\n super(VLWebSocket, self).__init__(sslopt=sslopt, **_)\n\n def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):\n self.logger.debug(\"Sending message: {0}\", payload)\n return super(VLWebSocket, self).send(payload + \"\\n\\x00\", opcode)\n\n def recv(self):\n d = super(VLWebSocket, self).recv().replace(\"\\n\", \"\").replace(\"\\x00\", \"\")\n return d.split(\" \", 1)\n\n\nclass VaughnLive(Plugin):\n servers = [\"wss://sapi-ws-{0}x{1:02}.vaughnlive.tv\".format(x, y) for x, y in itertools.product(range(1, 3),\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n \"594140c69edad\": \"192.240.105.171:1935\",\n \"585c4cab1bef1\": \"192.240.105.171:1935\",\n \"5940d648b3929\": \"192.240.105.171:1935\",\n \"5941854b39bc4\": \"192.240.105.171:1935\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": 
\"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", \"pearltime\": \"#pt\"}\n\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def api_url(self):\n return random.choice(self.servers)\n\n def parse_ack(self, action, message):\n if action.endswith(\"3\"):\n channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(\";\")\n is_live = is_live == \"1\"\n viewers = int(viewers)\n self.logger.debug(\"Viewers: {0}, isLive={1}\", viewers, is_live)\n domain, channel = channel.split(\"-\", 1)\n return is_live, server, domain, channel, token, ingest\n else:\n self.logger.error(\"Unhandled action format: {0}\", action)\n\n def _get_info(self, stream_name):\n server = self.api_url()\n self.logger.debug(\"Connecting to API: {0}\", server)\n ws = websocket.create_connection(server,\n header=[\"User-Agent: {0}\".format(useragents.CHROME)],\n origin=self.origin,\n class_=VLWebSocket,\n session=self.session)\n ws.send(\"MVN LOAD3 {0}\".format(stream_name))\n action, message = ws.recv()\n return self.parse_ack(action, message)\n\n def _get_rtmp_streams(self, server, domain, channel, token):\n rtmp_server = self.rtmp_server_map.get(server, server)\n\n url = \"rtmp://{0}/live?{1}\".format(rtmp_server, token)\n\n yield \"live\", RTMPStream(self.session, params={\n \"rtmp\": url,\n \"pageUrl\": self.url,\n \"playpath\": \"{0}_{1}\".format(self.name_remap.get(domain, \"live\"), channel),\n \"live\": True\n })\n\n def _get_streams(self):\n m = _url_re.match(self.url)\n if m:\n stream_name = \"{0}-{1}\".format(self.domain_map[(m.group(\"domain\").lower())],\n m.group(\"channel\"))\n\n is_live, server, domain, channel, token, ingest = self._get_info(stream_name)\n\n if not is_live:\n self.logger.info(\"Stream is currently off air\")\n else:\n self.logger.info(\"Stream powered by VaughnSoft - remember to support them.\")\n for s in self._get_rtmp_streams(server, domain, channel, token):\n yield s\n\n\n__plugin__ = VaughnLive\n", "path": "src/streamlink/plugins/vaughnlive.py"}]} | 1,737 | 518 |
gh_patches_debug_26363 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-786 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement showing and changing a column's type
## Problem
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Users might want to change the data type of an existing column on their table.
## Proposed solution
<!-- A clear and concise description of your proposed solution or feature. -->
The ["Working with Columns" design spec](https://wiki.mathesar.org/en/design/specs/working-with-columns) has a solution for showing and changing column types, which we need to implement on the frontend.
Please note that we're only implementing changing the Mathesar data type in this milestone. Options specific to individual data types will be implemented in the next milestone.
Number data types should save as `NUMERIC`.
Text data types should save as `VARCHAR`.
Date/time data types can be disabled for now since they're not fully implemented on the backend.
## Additional context
<!-- Add any other context or screenshots about the feature request here.-->
- Backend work:
- #532 to get the list of types
- #199 to get valid target types and change types
- Design issue: #324
- Design discussion: #436
- #269
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/views.py`
Content:
```
1 from django.shortcuts import render, redirect, get_object_or_404
2
3 from mathesar.models import Database, Schema, Table
4 from mathesar.api.serializers.databases import DatabaseSerializer
5 from mathesar.api.serializers.schemas import SchemaSerializer
6 from mathesar.api.serializers.tables import TableSerializer
7
8
9 def get_schema_list(request, database):
10 schema_serializer = SchemaSerializer(
11 Schema.objects.filter(database=database),
12 many=True,
13 context={'request': request}
14 )
15 return schema_serializer.data
16
17
18 def get_database_list(request):
19 database_serializer = DatabaseSerializer(
20 Database.objects.all(),
21 many=True,
22 context={'request': request}
23 )
24 return database_serializer.data
25
26
27 def get_table_list(request, schema):
28 if schema is None:
29 return []
30 table_serializer = TableSerializer(
31 Table.objects.filter(schema=schema),
32 many=True,
33 context={'request': request}
34 )
35 return table_serializer.data
36
37
38 def get_common_data(request, database, schema=None):
39 return {
40 'current_db': database.name if database else None,
41 'current_schema': schema.id if schema else None,
42 'schemas': get_schema_list(request, database),
43 'databases': get_database_list(request),
44 'tables': get_table_list(request, schema)
45 }
46
47
48 def get_current_database(request, db_name):
49 # if there's a DB name passed in, try to retrieve the database, or return a 404 error.
50 if db_name is not None:
51 return get_object_or_404(Database, name=db_name)
52 else:
53 try:
54 # Try to get the first database available
55 return Database.objects.order_by('id').first()
56 except Database.DoesNotExist:
57 return None
58
59
60 def get_current_schema(request, schema_id, database):
61 # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.
62 if schema_id is not None:
63 return get_object_or_404(Schema, id=schema_id)
64 else:
65 try:
66 # Try to get the first schema in the DB
67 return Schema.objects.filter(database=database).order_by('id').first()
68 except Schema.DoesNotExist:
69 return None
70
71
72 def render_schema(request, database, schema):
73 # if there's no schema available, redirect to the schemas page.
74 if not schema:
75 return redirect('schemas', db_name=database.name)
76 else:
77 # We are redirecting so that the correct URL is passed to the frontend.
78 return redirect('schema_home', db_name=database.name, schema_id=schema.id)
79
80
81 def home(request):
82 database = get_current_database(request, None)
83 schema = get_current_schema(request, None, database)
84 return render_schema(request, database, schema)
85
86
87 def db_home(request, db_name):
88 database = get_current_database(request, db_name)
89 schema = get_current_schema(request, None, database)
90 return render_schema(request, database, schema)
91
92
93 def schema_home(request, db_name, schema_id):
94 database = get_current_database(request, db_name)
95 schema = get_current_schema(request, schema_id, database)
96 return render(request, 'mathesar/index.html', {
97 'common_data': get_common_data(request, database, schema)
98 })
99
100
101 def schemas(request, db_name):
102 database = get_current_database(request, db_name)
103 schema = get_current_schema(request, None, database)
104 return render(request, 'mathesar/index.html', {
105 'common_data': get_common_data(request, database, schema)
106 })
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/views.py b/mathesar/views.py
--- a/mathesar/views.py
+++ b/mathesar/views.py
@@ -1,7 +1,7 @@
from django.shortcuts import render, redirect, get_object_or_404
from mathesar.models import Database, Schema, Table
-from mathesar.api.serializers.databases import DatabaseSerializer
+from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer
from mathesar.api.serializers.schemas import SchemaSerializer
from mathesar.api.serializers.tables import TableSerializer
@@ -35,13 +35,25 @@
return table_serializer.data
+def get_type_list(request, database):
+ if database is None:
+ return []
+ type_serializer = TypeSerializer(
+ database.supported_types,
+ many=True,
+ context={'request': request}
+ )
+ return type_serializer.data
+
+
def get_common_data(request, database, schema=None):
return {
'current_db': database.name if database else None,
'current_schema': schema.id if schema else None,
'schemas': get_schema_list(request, database),
'databases': get_database_list(request),
- 'tables': get_table_list(request, schema)
+ 'tables': get_table_list(request, schema),
+ 'abstract_types': get_type_list(request, database)
}
| {"golden_diff": "diff --git a/mathesar/views.py b/mathesar/views.py\n--- a/mathesar/views.py\n+++ b/mathesar/views.py\n@@ -1,7 +1,7 @@\n from django.shortcuts import render, redirect, get_object_or_404\n \n from mathesar.models import Database, Schema, Table\n-from mathesar.api.serializers.databases import DatabaseSerializer\n+from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer\n from mathesar.api.serializers.schemas import SchemaSerializer\n from mathesar.api.serializers.tables import TableSerializer\n \n@@ -35,13 +35,25 @@\n return table_serializer.data\n \n \n+def get_type_list(request, database):\n+ if database is None:\n+ return []\n+ type_serializer = TypeSerializer(\n+ database.supported_types,\n+ many=True,\n+ context={'request': request}\n+ )\n+ return type_serializer.data\n+\n+\n def get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n- 'tables': get_table_list(request, schema)\n+ 'tables': get_table_list(request, schema),\n+ 'abstract_types': get_type_list(request, database)\n }\n", "issue": "Implement showing and changing a column's type\n## Problem\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nUsers might want to change the data type of an existing column on their table.\r\n\r\n## Proposed solution\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nThe [\"Working with Columns\" design spec](https://wiki.mathesar.org/en/design/specs/working-with-columns) has a solution for showing and changing column types, which we need to implement on the frontend.\r\n\r\nPlease note that we're only implementing changing the Mathesar data type in this milestone. 
Options specific to individual data types will be implemented in the next milestone.\r\n\r\nNumber data types should save as `NUMERIC`.\r\nText data types should save as `VARCHAR`.\r\nDate/time data types can be disabled for now since they're not fully implemented on the backend.\r\n\r\n## Additional context\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\n- Backend work:\r\n - #532 to get the list of types \r\n - #199 to get valid target types and change types \r\n- Design issue: #324 \r\n- Design discussion: #436\r\n- #269 \n", "before_files": [{"content": "from django.shortcuts import render, redirect, get_object_or_404\n\nfrom mathesar.models import Database, Schema, Table\nfrom mathesar.api.serializers.databases import DatabaseSerializer\nfrom mathesar.api.serializers.schemas import SchemaSerializer\nfrom mathesar.api.serializers.tables import TableSerializer\n\n\ndef get_schema_list(request, database):\n schema_serializer = SchemaSerializer(\n Schema.objects.filter(database=database),\n many=True,\n context={'request': request}\n )\n return schema_serializer.data\n\n\ndef get_database_list(request):\n database_serializer = DatabaseSerializer(\n Database.objects.all(),\n many=True,\n context={'request': request}\n )\n return database_serializer.data\n\n\ndef get_table_list(request, schema):\n if schema is None:\n return []\n table_serializer = TableSerializer(\n Table.objects.filter(schema=schema),\n many=True,\n context={'request': request}\n )\n return table_serializer.data\n\n\ndef get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n 'tables': get_table_list(request, schema)\n }\n\n\ndef get_current_database(request, db_name):\n # if there's a DB name passed in, try to retrieve the database, or return a 404 error.\n if db_name is not None:\n return get_object_or_404(Database, name=db_name)\n else:\n try:\n # Try to get the first database available\n return Database.objects.order_by('id').first()\n except Database.DoesNotExist:\n return None\n\n\ndef get_current_schema(request, schema_id, database):\n # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.\n if schema_id is not None:\n return get_object_or_404(Schema, id=schema_id)\n else:\n try:\n # Try to get the first schema in the DB\n return Schema.objects.filter(database=database).order_by('id').first()\n except Schema.DoesNotExist:\n return None\n\n\ndef render_schema(request, database, schema):\n # if there's no schema available, redirect to the schemas page.\n if not schema:\n return redirect('schemas', db_name=database.name)\n else:\n # We are redirecting so that the correct URL is passed to the frontend.\n return redirect('schema_home', db_name=database.name, schema_id=schema.id)\n\n\ndef home(request):\n database = get_current_database(request, None)\n schema = get_current_schema(request, None, database)\n return render_schema(request, database, schema)\n\n\ndef db_home(request, db_name):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, None, database)\n return render_schema(request, database, schema)\n\n\ndef schema_home(request, db_name, schema_id):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, schema_id, database)\n return render(request, 'mathesar/index.html', {\n 
'common_data': get_common_data(request, database, schema)\n })\n\n\ndef schemas(request, db_name):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, None, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n", "path": "mathesar/views.py"}], "after_files": [{"content": "from django.shortcuts import render, redirect, get_object_or_404\n\nfrom mathesar.models import Database, Schema, Table\nfrom mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer\nfrom mathesar.api.serializers.schemas import SchemaSerializer\nfrom mathesar.api.serializers.tables import TableSerializer\n\n\ndef get_schema_list(request, database):\n schema_serializer = SchemaSerializer(\n Schema.objects.filter(database=database),\n many=True,\n context={'request': request}\n )\n return schema_serializer.data\n\n\ndef get_database_list(request):\n database_serializer = DatabaseSerializer(\n Database.objects.all(),\n many=True,\n context={'request': request}\n )\n return database_serializer.data\n\n\ndef get_table_list(request, schema):\n if schema is None:\n return []\n table_serializer = TableSerializer(\n Table.objects.filter(schema=schema),\n many=True,\n context={'request': request}\n )\n return table_serializer.data\n\n\ndef get_type_list(request, database):\n if database is None:\n return []\n type_serializer = TypeSerializer(\n database.supported_types,\n many=True,\n context={'request': request}\n )\n return type_serializer.data\n\n\ndef get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n 'tables': get_table_list(request, schema),\n 'abstract_types': get_type_list(request, database)\n }\n\n\ndef get_current_database(request, db_name):\n # if there's a DB name passed in, try to retrieve the database, or return a 404 error.\n if db_name is not None:\n return get_object_or_404(Database, name=db_name)\n else:\n try:\n # Try to get the first database available\n return Database.objects.order_by('id').first()\n except Database.DoesNotExist:\n return None\n\n\ndef get_current_schema(request, schema_id, database):\n # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.\n if schema_id is not None:\n return get_object_or_404(Schema, id=schema_id)\n else:\n try:\n # Try to get the first schema in the DB\n return Schema.objects.filter(database=database).order_by('id').first()\n except Schema.DoesNotExist:\n return None\n\n\ndef render_schema(request, database, schema):\n # if there's no schema available, redirect to the schemas page.\n if not schema:\n return redirect('schemas', db_name=database.name)\n else:\n # We are redirecting so that the correct URL is passed to the frontend.\n return redirect('schema_home', db_name=database.name, schema_id=schema.id)\n\n\ndef home(request):\n database = get_current_database(request, None)\n schema = get_current_schema(request, None, database)\n return render_schema(request, database, schema)\n\n\ndef db_home(request, db_name):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, None, database)\n return render_schema(request, database, schema)\n\n\ndef schema_home(request, db_name, schema_id):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, 
schema_id, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n\n\ndef schemas(request, db_name):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, None, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n", "path": "mathesar/views.py"}]} | 1,490 | 295 |
gh_patches_debug_34062 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1871 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: Chichester District Council is not working
### I Have A Problem With:
A specific source
### What's Your Problem
The source has stopped working since Tuesday 13th February 2024. All the collection days no longer show on the calendar at all. The Chichester District Council website still shows me the days.
### Source (if relevant)
chichester_gov_uk
### Logs
```Shell
This error originated from a custom integration.
Logger: waste_collection_schedule.source_shell
Source: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136
Integration: waste_collection_schedule (documentation)
First occurred: 11:36:47 (1 occurrences)
Last logged: 11:36:47
fetch failed for source Chichester District Council: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py", line 37, in fetch form_url = form["action"] ~~~~^^^^^^^^^^ TypeError: 'NoneType' object is not subscriptable
```
### Relevant Configuration
```YAML
waste_collection_schedule:
sources:
- name: chichester_gov_uk
args:
uprn: 10002466648
```
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py`
Content:
```
1 from datetime import datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection
6
7 TITLE = "Chichester District Council"
8 DESCRIPTION = "Source for chichester.gov.uk services for Chichester"
9 URL = "chichester.gov.uk"
10
11 TEST_CASES = {
12 "Test_001": {"uprn": "010002476348"},
13 "Test_002": {"uprn": "100062612654"},
14 "Test_003": {"uprn": "100061745708"},
15 }
16
17 ICON_MAP = {
18 "General Waste": "mdi:trash-can",
19 "Recycling": "mdi:recycle",
20 "Garden Recycling": "mdi:leaf",
21 }
22
23
24 class Source:
25 def __init__(self, uprn):
26 self._uprn = uprn
27
28 def fetch(self):
29 session = requests.Session()
30 # Start a session
31 r = session.get("https://www.chichester.gov.uk/checkyourbinday")
32 r.raise_for_status()
33 soup = BeautifulSoup(r.text, features="html.parser")
34
35 # Extract form submission url
36 form = soup.find("form", attrs={"id": "WASTECOLLECTIONCALENDARV2_FORM"})
37 form_url = form["action"]
38
39 # Submit form
40 form_data = {
41 "WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT": "Submit",
42 "WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN": self._uprn,
43 }
44 r = session.post(form_url, data=form_data)
45 r.raise_for_status()
46
47 # Extract collection dates
48 soup = BeautifulSoup(r.text, features="html.parser")
49 entries = []
50 data = soup.find_all("div", attrs={"class": "bin-days"})
51 for bin in data:
52 if "print-only" in bin["class"]:
53 continue
54
55 type = bin.find("span").contents[0].replace("bin", "").strip().title()
56 list_items = bin.find_all("li")
57 if list_items:
58 for item in list_items:
59 date = datetime.strptime(item.text, "%d %B %Y").date()
60 entries.append(
61 Collection(
62 date=date,
63 t=type,
64 icon=ICON_MAP.get(type),
65 )
66 )
67
68 return entries
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py
@@ -33,13 +33,13 @@
soup = BeautifulSoup(r.text, features="html.parser")
# Extract form submission url
- form = soup.find("form", attrs={"id": "WASTECOLLECTIONCALENDARV2_FORM"})
+ form = soup.find("form", attrs={"id": "WASTECOLLECTIONCALENDARV5_FORM"})
form_url = form["action"]
# Submit form
form_data = {
- "WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT": "Submit",
- "WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN": self._uprn,
+ "WASTECOLLECTIONCALENDARV5_FORMACTION_NEXT": "Submit",
+ "WASTECOLLECTIONCALENDARV5_CALENDAR_UPRN": self._uprn,
}
r = session.post(form_url, data=form_data)
r.raise_for_status()
@@ -47,16 +47,18 @@
# Extract collection dates
soup = BeautifulSoup(r.text, features="html.parser")
entries = []
- data = soup.find_all("div", attrs={"class": "bin-days"})
- for bin in data:
- if "print-only" in bin["class"]:
- continue
-
- type = bin.find("span").contents[0].replace("bin", "").strip().title()
- list_items = bin.find_all("li")
- if list_items:
- for item in list_items:
- date = datetime.strptime(item.text, "%d %B %Y").date()
+ tables = soup.find_all("table", attrs={"class": "bin-collection-dates"})
+ # Data is presented in two tables side-by-side
+ for table in tables:
+ # Each collection is a table row
+ data = table.find_all("tr")
+ for bin in data:
+ cells = bin.find_all("td")
+ # Ignore the header row
+ if len(cells) == 2:
+ date = datetime.strptime(cells[0].text, "%d %B %Y").date()
+ # Maintain backwards compatibility - it used to be General Waste and now it is General waste
+ type = cells[1].text.title()
entries.append(
Collection(
date=date,
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\n@@ -33,13 +33,13 @@\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n \n # Extract form submission url\n- form = soup.find(\"form\", attrs={\"id\": \"WASTECOLLECTIONCALENDARV2_FORM\"})\n+ form = soup.find(\"form\", attrs={\"id\": \"WASTECOLLECTIONCALENDARV5_FORM\"})\n form_url = form[\"action\"]\n \n # Submit form\n form_data = {\n- \"WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT\": \"Submit\",\n- \"WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN\": self._uprn,\n+ \"WASTECOLLECTIONCALENDARV5_FORMACTION_NEXT\": \"Submit\",\n+ \"WASTECOLLECTIONCALENDARV5_CALENDAR_UPRN\": self._uprn,\n }\n r = session.post(form_url, data=form_data)\n r.raise_for_status()\n@@ -47,16 +47,18 @@\n # Extract collection dates\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n entries = []\n- data = soup.find_all(\"div\", attrs={\"class\": \"bin-days\"})\n- for bin in data:\n- if \"print-only\" in bin[\"class\"]:\n- continue\n-\n- type = bin.find(\"span\").contents[0].replace(\"bin\", \"\").strip().title()\n- list_items = bin.find_all(\"li\")\n- if list_items:\n- for item in list_items:\n- date = datetime.strptime(item.text, \"%d %B %Y\").date()\n+ tables = soup.find_all(\"table\", attrs={\"class\": \"bin-collection-dates\"})\n+ # Data is presented in two tables side-by-side\n+ for table in tables:\n+ # Each collection is a table row\n+ data = table.find_all(\"tr\")\n+ for bin in data:\n+ cells = bin.find_all(\"td\")\n+ # Ignore the header row\n+ if len(cells) == 2:\n+ date = datetime.strptime(cells[0].text, \"%d %B %Y\").date()\n+ # Maintain backwards compatibility - it used to be General Waste and now it is General waste\n+ type = cells[1].text.title()\n entries.append(\n Collection(\n date=date,\n", "issue": "[Bug]: Chichester District Council is not working\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nThe source has stopped working since Tuesday 13th February 2024. All the collection days no longer show on the calendar at all. 
The Chichester District Council website still shows me the days.\n\n### Source (if relevant)\n\nchichester_gov_uk\n\n### Logs\n\n```Shell\nThis error originated from a custom integration.\r\n\r\nLogger: waste_collection_schedule.source_shell\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136\r\nIntegration: waste_collection_schedule (documentation)\r\nFirst occurred: 11:36:47 (1 occurrences)\r\nLast logged: 11:36:47\r\n\r\nfetch failed for source Chichester District Council: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\", line 37, in fetch form_url = form[\"action\"] ~~~~^^^^^^^^^^ TypeError: 'NoneType' object is not subscriptable\n```\n\n\n### Relevant Configuration\n\n```YAML\nwaste_collection_schedule:\r\n sources:\r\n - name: chichester_gov_uk\r\n args:\r\n uprn: 10002466648\n```\n\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Chichester District Council\"\nDESCRIPTION = \"Source for chichester.gov.uk services for Chichester\"\nURL = \"chichester.gov.uk\"\n\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"010002476348\"},\n \"Test_002\": {\"uprn\": \"100062612654\"},\n \"Test_003\": {\"uprn\": \"100061745708\"},\n}\n\nICON_MAP = {\n \"General Waste\": \"mdi:trash-can\",\n \"Recycling\": \"mdi:recycle\",\n \"Garden Recycling\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = uprn\n\n def fetch(self):\n session = requests.Session()\n # Start a session\n r = session.get(\"https://www.chichester.gov.uk/checkyourbinday\")\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n\n # Extract form submission url\n form = soup.find(\"form\", attrs={\"id\": \"WASTECOLLECTIONCALENDARV2_FORM\"})\n form_url = form[\"action\"]\n\n # Submit form\n form_data = {\n \"WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT\": \"Submit\",\n \"WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN\": self._uprn,\n }\n r = session.post(form_url, data=form_data)\n r.raise_for_status()\n\n # Extract collection dates\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n entries = []\n data = soup.find_all(\"div\", attrs={\"class\": \"bin-days\"})\n for bin in data:\n if \"print-only\" in 
bin[\"class\"]:\n continue\n\n type = bin.find(\"span\").contents[0].replace(\"bin\", \"\").strip().title()\n list_items = bin.find_all(\"li\")\n if list_items:\n for item in list_items:\n date = datetime.strptime(item.text, \"%d %B %Y\").date()\n entries.append(\n Collection(\n date=date,\n t=type,\n icon=ICON_MAP.get(type),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py"}], "after_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Chichester District Council\"\nDESCRIPTION = \"Source for chichester.gov.uk services for Chichester\"\nURL = \"chichester.gov.uk\"\n\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"010002476348\"},\n \"Test_002\": {\"uprn\": \"100062612654\"},\n \"Test_003\": {\"uprn\": \"100061745708\"},\n}\n\nICON_MAP = {\n \"General Waste\": \"mdi:trash-can\",\n \"Recycling\": \"mdi:recycle\",\n \"Garden Recycling\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = uprn\n\n def fetch(self):\n session = requests.Session()\n # Start a session\n r = session.get(\"https://www.chichester.gov.uk/checkyourbinday\")\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n\n # Extract form submission url\n form = soup.find(\"form\", attrs={\"id\": \"WASTECOLLECTIONCALENDARV5_FORM\"})\n form_url = form[\"action\"]\n\n # Submit form\n form_data = {\n \"WASTECOLLECTIONCALENDARV5_FORMACTION_NEXT\": \"Submit\",\n \"WASTECOLLECTIONCALENDARV5_CALENDAR_UPRN\": self._uprn,\n }\n r = session.post(form_url, data=form_data)\n r.raise_for_status()\n\n # Extract collection dates\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n entries = []\n tables = soup.find_all(\"table\", attrs={\"class\": \"bin-collection-dates\"})\n # Data is presented in two tables side-by-side\n for table in tables:\n # Each collection is a table row\n data = table.find_all(\"tr\")\n for bin in data:\n cells = bin.find_all(\"td\")\n # Ignore the header row\n if len(cells) == 2:\n date = datetime.strptime(cells[0].text, \"%d %B %Y\").date()\n # Maintain backwards compatibility - it used to be General Waste and now it is General waste\n type = cells[1].text.title()\n entries.append(\n Collection(\n date=date,\n t=type,\n icon=ICON_MAP.get(type),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py"}]} | 1,465 | 596 |
gh_patches_debug_2252 | rasdani/github-patches | git_diff | fonttools__fonttools-337 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
I find the font's line height is bigger than original font.
I have tried pyftsubset with command line option --no-recalc-bounds ,
but the generated subfont's line height is still bigger than original font.
I used html font-face render font.
@font-face {
font-family: 'freetype';
src: url('font.ttf') format('truetype');
}
the font file is microsoft chinese liti.ttf.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/ttLib/tables/_v_h_e_a.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2 from fontTools.misc.py23 import *
3 from fontTools.misc import sstruct
4 from fontTools.misc.textTools import safeEval
5 from . import DefaultTable
6
7 vheaFormat = """
8 > # big endian
9 tableVersion: 16.16F
10 ascent: h
11 descent: h
12 lineGap: h
13 advanceHeightMax: H
14 minTopSideBearing: h
15 minBottomSideBearing: h
16 yMaxExtent: h
17 caretSlopeRise: h
18 caretSlopeRun: h
19 reserved0: h
20 reserved1: h
21 reserved2: h
22 reserved3: h
23 reserved4: h
24 metricDataFormat: h
25 numberOfVMetrics: H
26 """
27
28 class table__v_h_e_a(DefaultTable.DefaultTable):
29
30 # Note: Keep in sync with table__h_h_e_a
31
32 dependencies = ['vmtx', 'glyf']
33
34 def decompile(self, data, ttFont):
35 sstruct.unpack(vheaFormat, data, self)
36
37 def compile(self, ttFont):
38 self.recalc(ttFont)
39 return sstruct.pack(vheaFormat, self)
40
41 def recalc(self, ttFont):
42 vtmxTable = ttFont['vmtx']
43 if 'glyf' in ttFont:
44 glyfTable = ttFont['glyf']
45 INFINITY = 100000
46 advanceHeightMax = 0
47 minTopSideBearing = +INFINITY # arbitrary big number
48 minBottomSideBearing = +INFINITY # arbitrary big number
49 yMaxExtent = -INFINITY # arbitrary big negative number
50
51 for name in ttFont.getGlyphOrder():
52 height, tsb = vtmxTable[name]
53 advanceHeightMax = max(advanceHeightMax, height)
54 g = glyfTable[name]
55 if g.numberOfContours == 0:
56 continue
57 if g.numberOfContours < 0 and not hasattr(g, "yMax"):
58 # Composite glyph without extents set.
59 # Calculate those.
60 g.recalcBounds(glyfTable)
61 minTopSideBearing = min(minTopSideBearing, tsb)
62 bsb = height - tsb - (g.yMax - g.yMin)
63 minBottomSideBearing = min(minBottomSideBearing, bsb)
64 extent = tsb + (g.yMax - g.yMin)
65 yMaxExtent = max(yMaxExtent, extent)
66
67 if yMaxExtent == -INFINITY:
68 # No glyph has outlines.
69 minTopSideBearing = 0
70 minBottomSideBearing = 0
71 yMaxExtent = 0
72
73 self.advanceHeightMax = advanceHeightMax
74 self.minTopSideBearing = minTopSideBearing
75 self.minBottomSideBearing = minBottomSideBearing
76 self.yMaxExtent = yMaxExtent
77 else:
78 # XXX CFF recalc...
79 pass
80
81 def toXML(self, writer, ttFont):
82 formatstring, names, fixes = sstruct.getformat(vheaFormat)
83 for name in names:
84 value = getattr(self, name)
85 writer.simpletag(name, value=value)
86 writer.newline()
87
88 def fromXML(self, name, attrs, content, ttFont):
89 setattr(self, name, safeEval(attrs["value"]))
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Lib/fontTools/ttLib/tables/_v_h_e_a.py b/Lib/fontTools/ttLib/tables/_v_h_e_a.py
--- a/Lib/fontTools/ttLib/tables/_v_h_e_a.py
+++ b/Lib/fontTools/ttLib/tables/_v_h_e_a.py
@@ -35,7 +35,8 @@
sstruct.unpack(vheaFormat, data, self)
def compile(self, ttFont):
- self.recalc(ttFont)
+ if ttFont.isLoaded('glyf') and ttFont.recalcBBoxes:
+ self.recalc(ttFont)
return sstruct.pack(vheaFormat, self)
def recalc(self, ttFont):
| {"golden_diff": "diff --git a/Lib/fontTools/ttLib/tables/_v_h_e_a.py b/Lib/fontTools/ttLib/tables/_v_h_e_a.py\n--- a/Lib/fontTools/ttLib/tables/_v_h_e_a.py\n+++ b/Lib/fontTools/ttLib/tables/_v_h_e_a.py\n@@ -35,7 +35,8 @@\n \t\tsstruct.unpack(vheaFormat, data, self)\n \n \tdef compile(self, ttFont):\n-\t\tself.recalc(ttFont)\n+\t\tif ttFont.isLoaded('glyf') and ttFont.recalcBBoxes:\n+\t\t\tself.recalc(ttFont)\n \t\treturn sstruct.pack(vheaFormat, self)\n \n \tdef recalc(self, ttFont):\n", "issue": "I find the font's line height is bigger than original font.\n I have tried pyftsubset with command line option --no-recalc-bounds ,\nbut the generated subfont's line height is still bigger than original font.\n\nI used html font-face render font.\n@font-face {\n font-family: 'freetype';\n src: url('font.ttf') format('truetype');\n }\n\nthe font file is microsoft chinese liti.ttf.\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc import sstruct\nfrom fontTools.misc.textTools import safeEval\nfrom . import DefaultTable\n\nvheaFormat = \"\"\"\n\t\t>\t# big endian\n\t\ttableVersion:\t\t16.16F\n\t\tascent:\t\t\th\n\t\tdescent:\t\th\n\t\tlineGap:\t\th\n\t\tadvanceHeightMax:\tH\n\t\tminTopSideBearing:\th\n\t\tminBottomSideBearing:\th\n\t\tyMaxExtent:\t\th\n\t\tcaretSlopeRise:\t\th\n\t\tcaretSlopeRun:\t\th\n\t\treserved0:\t\th\n\t\treserved1:\t\th\n\t\treserved2:\t\th\n\t\treserved3:\t\th\n\t\treserved4:\t\th\n\t\tmetricDataFormat:\th\n\t\tnumberOfVMetrics:\tH\n\"\"\"\n\nclass table__v_h_e_a(DefaultTable.DefaultTable):\n\n\t# Note: Keep in sync with table__h_h_e_a\n\n\tdependencies = ['vmtx', 'glyf']\n\n\tdef decompile(self, data, ttFont):\n\t\tsstruct.unpack(vheaFormat, data, self)\n\n\tdef compile(self, ttFont):\n\t\tself.recalc(ttFont)\n\t\treturn sstruct.pack(vheaFormat, self)\n\n\tdef recalc(self, ttFont):\n\t\tvtmxTable = ttFont['vmtx']\n\t\tif 'glyf' in ttFont:\n\t\t\tglyfTable = ttFont['glyf']\n\t\t\tINFINITY = 100000\n\t\t\tadvanceHeightMax = 0\n\t\t\tminTopSideBearing = +INFINITY # arbitrary big number\n\t\t\tminBottomSideBearing = +INFINITY # arbitrary big number\n\t\t\tyMaxExtent = -INFINITY # arbitrary big negative number\n\n\t\t\tfor name in ttFont.getGlyphOrder():\n\t\t\t\theight, tsb = vtmxTable[name]\n\t\t\t\tadvanceHeightMax = max(advanceHeightMax, height)\n\t\t\t\tg = glyfTable[name]\n\t\t\t\tif g.numberOfContours == 0:\n\t\t\t\t\tcontinue\n\t\t\t\tif g.numberOfContours < 0 and not hasattr(g, \"yMax\"):\n\t\t\t\t\t# Composite glyph without extents set.\n\t\t\t\t\t# Calculate those.\n\t\t\t\t\tg.recalcBounds(glyfTable)\n\t\t\t\tminTopSideBearing = min(minTopSideBearing, tsb)\n\t\t\t\tbsb = height - tsb - (g.yMax - g.yMin)\n\t\t\t\tminBottomSideBearing = min(minBottomSideBearing, bsb)\n\t\t\t\textent = tsb + (g.yMax - g.yMin)\n\t\t\t\tyMaxExtent = max(yMaxExtent, extent)\n\n\t\t\tif yMaxExtent == -INFINITY:\n\t\t\t\t# No glyph has outlines.\n\t\t\t\tminTopSideBearing = 0\n\t\t\t\tminBottomSideBearing = 0\n\t\t\t\tyMaxExtent = 0\n\n\t\t\tself.advanceHeightMax = advanceHeightMax\n\t\t\tself.minTopSideBearing = minTopSideBearing\n\t\t\tself.minBottomSideBearing = minBottomSideBearing\n\t\t\tself.yMaxExtent = yMaxExtent\n\t\telse:\n\t\t\t# XXX CFF recalc...\n\t\t\tpass\n\n\tdef toXML(self, writer, ttFont):\n\t\tformatstring, names, fixes = sstruct.getformat(vheaFormat)\n\t\tfor name in names:\n\t\t\tvalue = getattr(self, name)\n\t\t\twriter.simpletag(name, 
value=value)\n\t\t\twriter.newline()\n\n\tdef fromXML(self, name, attrs, content, ttFont):\n\t\tsetattr(self, name, safeEval(attrs[\"value\"]))\n", "path": "Lib/fontTools/ttLib/tables/_v_h_e_a.py"}], "after_files": [{"content": "from __future__ import print_function, division, absolute_import\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc import sstruct\nfrom fontTools.misc.textTools import safeEval\nfrom . import DefaultTable\n\nvheaFormat = \"\"\"\n\t\t>\t# big endian\n\t\ttableVersion:\t\t16.16F\n\t\tascent:\t\t\th\n\t\tdescent:\t\th\n\t\tlineGap:\t\th\n\t\tadvanceHeightMax:\tH\n\t\tminTopSideBearing:\th\n\t\tminBottomSideBearing:\th\n\t\tyMaxExtent:\t\th\n\t\tcaretSlopeRise:\t\th\n\t\tcaretSlopeRun:\t\th\n\t\treserved0:\t\th\n\t\treserved1:\t\th\n\t\treserved2:\t\th\n\t\treserved3:\t\th\n\t\treserved4:\t\th\n\t\tmetricDataFormat:\th\n\t\tnumberOfVMetrics:\tH\n\"\"\"\n\nclass table__v_h_e_a(DefaultTable.DefaultTable):\n\n\t# Note: Keep in sync with table__h_h_e_a\n\n\tdependencies = ['vmtx', 'glyf']\n\n\tdef decompile(self, data, ttFont):\n\t\tsstruct.unpack(vheaFormat, data, self)\n\n\tdef compile(self, ttFont):\n\t\tif ttFont.isLoaded('glyf') and ttFont.recalcBBoxes:\n\t\t\tself.recalc(ttFont)\n\t\treturn sstruct.pack(vheaFormat, self)\n\n\tdef recalc(self, ttFont):\n\t\tvtmxTable = ttFont['vmtx']\n\t\tif 'glyf' in ttFont:\n\t\t\tglyfTable = ttFont['glyf']\n\t\t\tINFINITY = 100000\n\t\t\tadvanceHeightMax = 0\n\t\t\tminTopSideBearing = +INFINITY # arbitrary big number\n\t\t\tminBottomSideBearing = +INFINITY # arbitrary big number\n\t\t\tyMaxExtent = -INFINITY # arbitrary big negative number\n\n\t\t\tfor name in ttFont.getGlyphOrder():\n\t\t\t\theight, tsb = vtmxTable[name]\n\t\t\t\tadvanceHeightMax = max(advanceHeightMax, height)\n\t\t\t\tg = glyfTable[name]\n\t\t\t\tif g.numberOfContours == 0:\n\t\t\t\t\tcontinue\n\t\t\t\tif g.numberOfContours < 0 and not hasattr(g, \"yMax\"):\n\t\t\t\t\t# Composite glyph without extents set.\n\t\t\t\t\t# Calculate those.\n\t\t\t\t\tg.recalcBounds(glyfTable)\n\t\t\t\tminTopSideBearing = min(minTopSideBearing, tsb)\n\t\t\t\tbsb = height - tsb - (g.yMax - g.yMin)\n\t\t\t\tminBottomSideBearing = min(minBottomSideBearing, bsb)\n\t\t\t\textent = tsb + (g.yMax - g.yMin)\n\t\t\t\tyMaxExtent = max(yMaxExtent, extent)\n\n\t\t\tif yMaxExtent == -INFINITY:\n\t\t\t\t# No glyph has outlines.\n\t\t\t\tminTopSideBearing = 0\n\t\t\t\tminBottomSideBearing = 0\n\t\t\t\tyMaxExtent = 0\n\n\t\t\tself.advanceHeightMax = advanceHeightMax\n\t\t\tself.minTopSideBearing = minTopSideBearing\n\t\t\tself.minBottomSideBearing = minBottomSideBearing\n\t\t\tself.yMaxExtent = yMaxExtent\n\t\telse:\n\t\t\t# XXX CFF recalc...\n\t\t\tpass\n\n\tdef toXML(self, writer, ttFont):\n\t\tformatstring, names, fixes = sstruct.getformat(vheaFormat)\n\t\tfor name in names:\n\t\t\tvalue = getattr(self, name)\n\t\t\twriter.simpletag(name, value=value)\n\t\t\twriter.newline()\n\n\tdef fromXML(self, name, attrs, content, ttFont):\n\t\tsetattr(self, name, safeEval(attrs[\"value\"]))\n", "path": "Lib/fontTools/ttLib/tables/_v_h_e_a.py"}]} | 1,316 | 161 |
gh_patches_debug_22699 | rasdani/github-patches | git_diff | svthalia__concrexit-3592 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prevent full disk
### Describe the bug
Sometimes the server's storage gets full because, for some reason, filepond uploads aren't being deleted. Today this caused the server to crash (the full disk broke Redis). We should prevent this from happening in multiple ways:
- Make old uploads get deleted. It would be nice to find out why the uploads aren't being deleted already, but we should also periodically remove old files from the media volume.
- Maybe limit the volume size so that its filling up does not affect the rest of the server. Docker doesn't really support that nicely, but we could create a separate volume on the host and bind-mount it.
### How to reproduce
<!-- Steps to reproduce the behaviour -->
1. Upload lots of albums to a docker deployment
2. See the media volume get larger.
### Expected behaviour
Uploads are cleaned up once they have been processed, and stale files are removed periodically (a sketch of such a task follows below).
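
A minimal sketch of the periodic cleanup described above (illustrative only — it assumes `TemporaryUpload` exposes an upload timestamp; the field name `uploaded` is an assumption here):

```python
from datetime import timedelta

from celery import shared_task
from django.utils import timezone
from django_drf_filepond.models import TemporaryUpload


@shared_task
def clean_stale_filepond_uploads(max_age_hours: int = 24):
    """Delete filepond uploads that were never processed."""
    # `uploaded` is assumed to be the model's creation timestamp field.
    cutoff = timezone.now() - timedelta(hours=max_age_hours)
    for upload in TemporaryUpload.objects.filter(uploaded__lt=cutoff):
        upload.file.delete()  # remove the stored file
        upload.delete()       # remove the database row
```

Run from Celery beat, a task like this bounds how long orphaned uploads can accumulate even when the per-upload cleanup fails.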
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/photos/tasks.py`
Content:
```
1 from django.db import transaction
2 from django.dispatch import Signal
3
4 from celery import shared_task
5 from django_drf_filepond.models import TemporaryUpload
6 from django_filepond_widget.fields import FilePondFile
7
8 from photos.models import Album
9
10 from .services import extract_archive
11
12 album_uploaded = Signal()
13
14
15 @shared_task
16 def process_album_upload(archive_upload_id: str, album_id: int):
17 try:
18 album = Album.objects.get(id=album_id)
19 except Album.DoesNotExist:
20 return
21
22 archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file
23 try:
24 with transaction.atomic():
25 # We make the upload atomic separately, so we can keep using the db if it fails.
26 # See https://docs.djangoproject.com/en/4.2/topics/db/transactions/#handling-exceptions-within-postgresql-transactions.
27 extract_archive(album, archive)
28 album.is_processing = False
29 album.save()
30
31 # Send signal to notify that an album has been uploaded. This is used
32 # by facedetection, and possibly in the future to notify the uploader.
33 album_uploaded.send(sender=None, album=album)
34 finally:
35 if isinstance(archive, FilePondFile):
36 archive.remove()
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/photos/tasks.py b/website/photos/tasks.py
--- a/website/photos/tasks.py
+++ b/website/photos/tasks.py
@@ -3,7 +3,6 @@
from celery import shared_task
from django_drf_filepond.models import TemporaryUpload
-from django_filepond_widget.fields import FilePondFile
from photos.models import Album
@@ -19,7 +18,8 @@
except Album.DoesNotExist:
return
- archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file
+ upload = TemporaryUpload.objects.get(upload_id=archive_upload_id)
+ archive = upload.file
try:
with transaction.atomic():
# We make the upload atomic separately, so we can keep using the db if it fails.
@@ -32,5 +32,5 @@
# by facedetection, and possibly in the future to notify the uploader.
album_uploaded.send(sender=None, album=album)
finally:
- if isinstance(archive, FilePondFile):
- archive.remove()
+ archive.delete()
+ upload.delete()
| {"golden_diff": "diff --git a/website/photos/tasks.py b/website/photos/tasks.py\n--- a/website/photos/tasks.py\n+++ b/website/photos/tasks.py\n@@ -3,7 +3,6 @@\n \n from celery import shared_task\n from django_drf_filepond.models import TemporaryUpload\n-from django_filepond_widget.fields import FilePondFile\n \n from photos.models import Album\n \n@@ -19,7 +18,8 @@\n except Album.DoesNotExist:\n return\n \n- archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file\n+ upload = TemporaryUpload.objects.get(upload_id=archive_upload_id)\n+ archive = upload.file\n try:\n with transaction.atomic():\n # We make the upload atomic separately, so we can keep using the db if it fails.\n@@ -32,5 +32,5 @@\n # by facedetection, and possibly in the future to notify the uploader.\n album_uploaded.send(sender=None, album=album)\n finally:\n- if isinstance(archive, FilePondFile):\n- archive.remove()\n+ archive.delete()\n+ upload.delete()\n", "issue": "Prevent full disk\n### Describe the bug\r\nSometimes the server's storage gets full, because for some reason filepond uploads aren't being deleted. Today this caused the server to crash (because the full server disk broke redis). We should prevent this from happening in multiple ways:\r\n\r\n- Make old uploads be deleted. Would be nice to find out why the uploads aren't being deleted already. But we should also (additionally) periodically remove old files from the media volume.\r\n- Maybe limit the volume size such that it getting full does not influence the rest of the server. But docker doesn't really support that nicely. We could make a separate volume for it on the host and bind-mount it I guess.\r\n\r\n### How to reproduce\r\n<!-- Steps to reproduce the behaviour -->\r\n1. Upload lots of albums to a docker deployment\r\n2. See the media volume get larger.\r\n\r\n### Expected behaviour\r\nStuff is cleaned up once it's processed and periodically.\r\n\r\n\r\n\n", "before_files": [{"content": "from django.db import transaction\nfrom django.dispatch import Signal\n\nfrom celery import shared_task\nfrom django_drf_filepond.models import TemporaryUpload\nfrom django_filepond_widget.fields import FilePondFile\n\nfrom photos.models import Album\n\nfrom .services import extract_archive\n\nalbum_uploaded = Signal()\n\n\n@shared_task\ndef process_album_upload(archive_upload_id: str, album_id: int):\n try:\n album = Album.objects.get(id=album_id)\n except Album.DoesNotExist:\n return\n\n archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file\n try:\n with transaction.atomic():\n # We make the upload atomic separately, so we can keep using the db if it fails.\n # See https://docs.djangoproject.com/en/4.2/topics/db/transactions/#handling-exceptions-within-postgresql-transactions.\n extract_archive(album, archive)\n album.is_processing = False\n album.save()\n\n # Send signal to notify that an album has been uploaded. 
This is used\n # by facedetection, and possibly in the future to notify the uploader.\n album_uploaded.send(sender=None, album=album)\n finally:\n if isinstance(archive, FilePondFile):\n archive.remove()\n", "path": "website/photos/tasks.py"}], "after_files": [{"content": "from django.db import transaction\nfrom django.dispatch import Signal\n\nfrom celery import shared_task\nfrom django_drf_filepond.models import TemporaryUpload\n\nfrom photos.models import Album\n\nfrom .services import extract_archive\n\nalbum_uploaded = Signal()\n\n\n@shared_task\ndef process_album_upload(archive_upload_id: str, album_id: int):\n try:\n album = Album.objects.get(id=album_id)\n except Album.DoesNotExist:\n return\n\n upload = TemporaryUpload.objects.get(upload_id=archive_upload_id)\n archive = upload.file\n try:\n with transaction.atomic():\n # We make the upload atomic separately, so we can keep using the db if it fails.\n # See https://docs.djangoproject.com/en/4.2/topics/db/transactions/#handling-exceptions-within-postgresql-transactions.\n extract_archive(album, archive)\n album.is_processing = False\n album.save()\n\n # Send signal to notify that an album has been uploaded. This is used\n # by facedetection, and possibly in the future to notify the uploader.\n album_uploaded.send(sender=None, album=album)\n finally:\n archive.delete()\n upload.delete()\n", "path": "website/photos/tasks.py"}]} | 775 | 234 |
gh_patches_debug_16164 | rasdani/github-patches | git_diff | mozilla__bugbug-1631 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make spawn_pipeline not depend on the order of tasks in the yaml file
Currently, if a task is defined in the yaml file before its dependencies, the spawn_pipeline script fails with:
```
Traceback (most recent call last):
File "/code/spawn_pipeline.py", line 132, in <module>
main()
File "/code/spawn_pipeline.py", line 110, in main
new_dependencies.append(id_mapping[dependency])
KeyError: 'regressor-finder'
```
So things like https://github.com/mozilla/bugbug/commit/aaa67b3b0a1db7530cbf88df644aff076fcd2e4e are needed.
We should make the spawn_pipeline script not depend on the order of definition of tasks in the yaml file.
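
The order-independent approach is essentially a two-pass resolution: assign every task an ID first, then resolve dependencies. A minimal sketch (with `new_slug()` standing in for `taskcluster.utils.slugId()` and `tasks` being the rendered task list):

```python
# Pass 1: map every internal ID to a generated one, regardless of
# the order in which tasks appear in the YAML file.
id_mapping = {task["ID"]: new_slug() for task in tasks}

# Pass 2: dependencies can now be resolved even when they point at
# tasks defined later in the file.
for task in tasks:
    task["dependencies"] = [id_mapping[dep] for dep in task.get("dependencies", [])]
```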
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `infra/spawn_pipeline.py`
Content:
```
1 #!/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright 2019 Mozilla
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 """
19 This script triggers the data pipeline for the bugbug project
20 """
21
22 import argparse
23 import os
24 import sys
25
26 import jsone
27 import requests.packages.urllib3
28 import taskcluster
29 import yaml
30
31 requests.packages.urllib3.disable_warnings()
32
33 TASKCLUSTER_DEFAULT_URL = "https://community-tc.services.mozilla.com"
34
35
36 def get_taskcluster_options():
37 """
38 Helper to get the Taskcluster setup options
39 according to current environment (local or Taskcluster)
40 """
41 options = taskcluster.optionsFromEnvironment()
42 proxy_url = os.environ.get("TASKCLUSTER_PROXY_URL")
43
44 if proxy_url is not None:
45 # Always use proxy url when available
46 options["rootUrl"] = proxy_url
47
48 if "rootUrl" not in options:
49 # Always have a value in root url
50 options["rootUrl"] = TASKCLUSTER_DEFAULT_URL
51
52 return options
53
54
55 def main():
56 parser = argparse.ArgumentParser(description="Spawn tasks for bugbug data pipeline")
57 parser.add_argument("data_pipeline_json")
58
59 args = parser.parse_args()
60 decision_task_id = os.environ.get("TASK_ID")
61 options = get_taskcluster_options()
62 add_self = False
63 if decision_task_id:
64 add_self = True
65 task_group_id = decision_task_id
66 else:
67 task_group_id = taskcluster.utils.slugId()
68 keys = {"taskGroupId": task_group_id}
69
70 id_mapping = {}
71
72 # First pass, do the template rendering and dependencies resolution
73 tasks = []
74
75 with open(args.data_pipeline_json) as pipeline_file:
76 raw_tasks = yaml.safe_load(pipeline_file.read())
77
78 version = os.getenv("TAG", "latest")
79 context = {"version": version}
80 rendered = jsone.render(raw_tasks, context)
81
82 for task in rendered["tasks"]:
83 # We need to generate new unique task ids for taskcluster to be happy
84 # but need to identify dependencies across tasks. So we create a
85 # mapping between an internal ID and the generate ID
86
87 task_id = taskcluster.utils.slugId()
88 task_internal_id = task.pop("ID")
89
90 if task_internal_id in id_mapping:
91 raise ValueError(f"Conflicting IDs {task_internal_id}")
92
93 id_mapping[task_internal_id] = task_id
94
95 for key, value in keys.items():
96 task[key] = value
97
98 task_payload = task["payload"]
99
100 if "env" in task_payload and task_payload["env"]:
101 task_payload["env"]["TAG"] = version
102 else:
103 task_payload["env"] = {
104 "TAG": version,
105 }
106
107 # Process the dependencies
108 new_dependencies = []
109 for dependency in task.get("dependencies", []):
110 new_dependencies.append(id_mapping[dependency])
111
112 if add_self:
113 new_dependencies.append(decision_task_id)
114
115 task["dependencies"] = new_dependencies
116
117 tasks.append((task_id, task))
118
119 # Now sends them
120 queue = taskcluster.Queue(options)
121 try:
122 for task_id, task_payload in tasks:
123 queue.createTask(task_id, task_payload)
124
125 print(f"https://community-tc.services.mozilla.com/tasks/groups/{task_group_id}")
126 except taskcluster.exceptions.TaskclusterAuthFailure as e:
127 print(f"TaskclusterAuthFailure: {e.body}", file=sys.stderr)
128 raise
129
130
131 if __name__ == "__main__":
132 main()
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/infra/spawn_pipeline.py b/infra/spawn_pipeline.py
--- a/infra/spawn_pipeline.py
+++ b/infra/spawn_pipeline.py
@@ -85,13 +85,19 @@
# mapping between an internal ID and the generate ID
task_id = taskcluster.utils.slugId()
- task_internal_id = task.pop("ID")
+ task_internal_id = task["ID"]
if task_internal_id in id_mapping:
raise ValueError(f"Conflicting IDs {task_internal_id}")
+ # Store each task ID in the id_mapping dictionary before processing dependencies.
+ # This way, tasks can be defined in any order.
id_mapping[task_internal_id] = task_id
+ for task in rendered["tasks"]:
+ task_internal_id = task.pop("ID")
+ task_id = id_mapping[task_internal_id]
+
for key, value in keys.items():
task[key] = value
| {"golden_diff": "diff --git a/infra/spawn_pipeline.py b/infra/spawn_pipeline.py\n--- a/infra/spawn_pipeline.py\n+++ b/infra/spawn_pipeline.py\n@@ -85,13 +85,19 @@\n # mapping between an internal ID and the generate ID\n \n task_id = taskcluster.utils.slugId()\n- task_internal_id = task.pop(\"ID\")\n+ task_internal_id = task[\"ID\"]\n \n if task_internal_id in id_mapping:\n raise ValueError(f\"Conflicting IDs {task_internal_id}\")\n \n+ # Store each task ID in the id_mapping dictionary before processing dependencies.\n+ # This way, tasks can be defined in any order.\n id_mapping[task_internal_id] = task_id\n \n+ for task in rendered[\"tasks\"]:\n+ task_internal_id = task.pop(\"ID\")\n+ task_id = id_mapping[task_internal_id]\n+\n for key, value in keys.items():\n task[key] = value\n", "issue": "Make spawn_pipeline not depend on the order of tasks in the yaml file\nCurrently, if a task is defined in the yaml file before its dependencies, the spawn_pipeline script fails with:\r\n```\r\nTraceback (most recent call last):\r\n File \"/code/spawn_pipeline.py\", line 132, in <module>\r\n main()\r\n File \"/code/spawn_pipeline.py\", line 110, in main\r\n new_dependencies.append(id_mapping[dependency])\r\nKeyError: 'regressor-finder'\r\n```\r\n\r\nSo things like https://github.com/mozilla/bugbug/commit/aaa67b3b0a1db7530cbf88df644aff076fcd2e4e are needed.\r\n\r\nWe should make the spawn_pipeline script not depend on the order of definition of tasks in the yaml file.\n", "before_files": [{"content": "#!/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Mozilla\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis script triggers the data pipeline for the bugbug project\n\"\"\"\n\nimport argparse\nimport os\nimport sys\n\nimport jsone\nimport requests.packages.urllib3\nimport taskcluster\nimport yaml\n\nrequests.packages.urllib3.disable_warnings()\n\nTASKCLUSTER_DEFAULT_URL = \"https://community-tc.services.mozilla.com\"\n\n\ndef get_taskcluster_options():\n \"\"\"\n Helper to get the Taskcluster setup options\n according to current environment (local or Taskcluster)\n \"\"\"\n options = taskcluster.optionsFromEnvironment()\n proxy_url = os.environ.get(\"TASKCLUSTER_PROXY_URL\")\n\n if proxy_url is not None:\n # Always use proxy url when available\n options[\"rootUrl\"] = proxy_url\n\n if \"rootUrl\" not in options:\n # Always have a value in root url\n options[\"rootUrl\"] = TASKCLUSTER_DEFAULT_URL\n\n return options\n\n\ndef main():\n parser = argparse.ArgumentParser(description=\"Spawn tasks for bugbug data pipeline\")\n parser.add_argument(\"data_pipeline_json\")\n\n args = parser.parse_args()\n decision_task_id = os.environ.get(\"TASK_ID\")\n options = get_taskcluster_options()\n add_self = False\n if decision_task_id:\n add_self = True\n task_group_id = decision_task_id\n else:\n task_group_id = taskcluster.utils.slugId()\n keys = {\"taskGroupId\": task_group_id}\n\n id_mapping = {}\n\n # First pass, do the template rendering and dependencies resolution\n tasks = []\n\n with 
open(args.data_pipeline_json) as pipeline_file:\n raw_tasks = yaml.safe_load(pipeline_file.read())\n\n version = os.getenv(\"TAG\", \"latest\")\n context = {\"version\": version}\n rendered = jsone.render(raw_tasks, context)\n\n for task in rendered[\"tasks\"]:\n # We need to generate new unique task ids for taskcluster to be happy\n # but need to identify dependencies across tasks. So we create a\n # mapping between an internal ID and the generate ID\n\n task_id = taskcluster.utils.slugId()\n task_internal_id = task.pop(\"ID\")\n\n if task_internal_id in id_mapping:\n raise ValueError(f\"Conflicting IDs {task_internal_id}\")\n\n id_mapping[task_internal_id] = task_id\n\n for key, value in keys.items():\n task[key] = value\n\n task_payload = task[\"payload\"]\n\n if \"env\" in task_payload and task_payload[\"env\"]:\n task_payload[\"env\"][\"TAG\"] = version\n else:\n task_payload[\"env\"] = {\n \"TAG\": version,\n }\n\n # Process the dependencies\n new_dependencies = []\n for dependency in task.get(\"dependencies\", []):\n new_dependencies.append(id_mapping[dependency])\n\n if add_self:\n new_dependencies.append(decision_task_id)\n\n task[\"dependencies\"] = new_dependencies\n\n tasks.append((task_id, task))\n\n # Now sends them\n queue = taskcluster.Queue(options)\n try:\n for task_id, task_payload in tasks:\n queue.createTask(task_id, task_payload)\n\n print(f\"https://community-tc.services.mozilla.com/tasks/groups/{task_group_id}\")\n except taskcluster.exceptions.TaskclusterAuthFailure as e:\n print(f\"TaskclusterAuthFailure: {e.body}\", file=sys.stderr)\n raise\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "infra/spawn_pipeline.py"}], "after_files": [{"content": "#!/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Mozilla\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis script triggers the data pipeline for the bugbug project\n\"\"\"\n\nimport argparse\nimport os\nimport sys\n\nimport jsone\nimport requests.packages.urllib3\nimport taskcluster\nimport yaml\n\nrequests.packages.urllib3.disable_warnings()\n\nTASKCLUSTER_DEFAULT_URL = \"https://community-tc.services.mozilla.com\"\n\n\ndef get_taskcluster_options():\n \"\"\"\n Helper to get the Taskcluster setup options\n according to current environment (local or Taskcluster)\n \"\"\"\n options = taskcluster.optionsFromEnvironment()\n proxy_url = os.environ.get(\"TASKCLUSTER_PROXY_URL\")\n\n if proxy_url is not None:\n # Always use proxy url when available\n options[\"rootUrl\"] = proxy_url\n\n if \"rootUrl\" not in options:\n # Always have a value in root url\n options[\"rootUrl\"] = TASKCLUSTER_DEFAULT_URL\n\n return options\n\n\ndef main():\n parser = argparse.ArgumentParser(description=\"Spawn tasks for bugbug data pipeline\")\n parser.add_argument(\"data_pipeline_json\")\n\n args = parser.parse_args()\n decision_task_id = os.environ.get(\"TASK_ID\")\n options = get_taskcluster_options()\n add_self = False\n if decision_task_id:\n add_self = True\n task_group_id = decision_task_id\n else:\n 
task_group_id = taskcluster.utils.slugId()\n keys = {\"taskGroupId\": task_group_id}\n\n id_mapping = {}\n\n # First pass, do the template rendering and dependencies resolution\n tasks = []\n\n with open(args.data_pipeline_json) as pipeline_file:\n raw_tasks = yaml.safe_load(pipeline_file.read())\n\n version = os.getenv(\"TAG\", \"latest\")\n context = {\"version\": version}\n rendered = jsone.render(raw_tasks, context)\n\n for task in rendered[\"tasks\"]:\n # We need to generate new unique task ids for taskcluster to be happy\n # but need to identify dependencies across tasks. So we create a\n # mapping between an internal ID and the generate ID\n\n task_id = taskcluster.utils.slugId()\n task_internal_id = task[\"ID\"]\n\n if task_internal_id in id_mapping:\n raise ValueError(f\"Conflicting IDs {task_internal_id}\")\n\n # Store each task ID in the id_mapping dictionary before processing dependencies.\n # This way, tasks can be defined in any order.\n id_mapping[task_internal_id] = task_id\n\n for task in rendered[\"tasks\"]:\n task_internal_id = task.pop(\"ID\")\n task_id = id_mapping[task_internal_id]\n\n for key, value in keys.items():\n task[key] = value\n\n task_payload = task[\"payload\"]\n\n if \"env\" in task_payload and task_payload[\"env\"]:\n task_payload[\"env\"][\"TAG\"] = version\n else:\n task_payload[\"env\"] = {\n \"TAG\": version,\n }\n\n # Process the dependencies\n new_dependencies = []\n for dependency in task.get(\"dependencies\", []):\n new_dependencies.append(id_mapping[dependency])\n\n if add_self:\n new_dependencies.append(decision_task_id)\n\n task[\"dependencies\"] = new_dependencies\n\n tasks.append((task_id, task))\n\n # Now sends them\n queue = taskcluster.Queue(options)\n try:\n for task_id, task_payload in tasks:\n queue.createTask(task_id, task_payload)\n\n print(f\"https://community-tc.services.mozilla.com/tasks/groups/{task_group_id}\")\n except taskcluster.exceptions.TaskclusterAuthFailure as e:\n print(f\"TaskclusterAuthFailure: {e.body}\", file=sys.stderr)\n raise\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "infra/spawn_pipeline.py"}]} | 1,607 | 208 |
gh_patches_debug_11927 | rasdani/github-patches | git_diff | pytorch__text-280 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError in Python 2.7
https://github.com/pytorch/text/blob/a2795e5731d1b7c0298a1b5087bb8142e1c39d0b/torchtext/datasets/imdb.py#L32
In Python 2.7, this raises `TypeError: 'encoding' is an invalid keyword argument for this function`.
I replaced `open` with `io.open` to fix it.
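
For reference, `io.open` accepts the `encoding` keyword on both interpreters (on Python 3 it is simply an alias of the builtin `open`), so the same code runs everywhere:

```python
import io

fname = 'some_review.txt'  # illustrative path to one of the dataset files

with io.open(fname, 'r', encoding='utf-8') as f:
    text = f.readline()
```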
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchtext/datasets/imdb.py`
Content:
```
1 import os
2 import glob
3
4 from .. import data
5
6
7 class IMDB(data.Dataset):
8
9 urls = ['http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz']
10 name = 'imdb'
11 dirname = 'aclImdb'
12
13 @staticmethod
14 def sort_key(ex):
15 return len(ex.text)
16
17 def __init__(self, path, text_field, label_field, **kwargs):
18 """Create an IMDB dataset instance given a path and fields.
19
20 Arguments:
21 path: Path to the dataset's highest level directory
22 text_field: The field that will be used for text data.
23 label_field: The field that will be used for label data.
24 Remaining keyword arguments: Passed to the constructor of
25 data.Dataset.
26 """
27 fields = [('text', text_field), ('label', label_field)]
28 examples = []
29
30 for label in ['pos', 'neg']:
31 for fname in glob.iglob(os.path.join(path, label, '*.txt')):
32 with open(fname, 'r', encoding="utf-8") as f:
33 text = f.readline()
34 examples.append(data.Example.fromlist([text, label], fields))
35
36 super(IMDB, self).__init__(examples, fields, **kwargs)
37
38 @classmethod
39 def splits(cls, text_field, label_field, root='.data',
40 train='train', test='test', **kwargs):
41 """Create dataset objects for splits of the IMDB dataset.
42
43 Arguments:
44 text_field: The field that will be used for the sentence.
45 label_field: The field that will be used for label data.
46 root: Root dataset storage directory. Default is '.data'.
47 train: The directory that contains the training examples
48 test: The directory that contains the test examples
49 Remaining keyword arguments: Passed to the splits method of
50 Dataset.
51 """
52 return super(IMDB, cls).splits(
53 root=root, text_field=text_field, label_field=label_field,
54 train=train, validation=None, test=test, **kwargs)
55
56 @classmethod
57 def iters(cls, batch_size=32, device=0, root='.data', vectors=None, **kwargs):
58 """Creater iterator objects for splits of the IMDB dataset.
59
60 Arguments:
61 batch_size: Batch_size
62 device: Device to create batches on. Use - 1 for CPU and None for
63 the currently active GPU device.
64 root: The root directory that contains the imdb dataset subdirectory
65 vectors: one of the available pretrained vectors or a list with each
66 element one of the available pretrained vectors (see Vocab.load_vectors)
67
68 Remaining keyword arguments: Passed to the splits method.
69 """
70 TEXT = data.Field()
71 LABEL = data.Field(sequential=False)
72
73 train, test = cls.splits(TEXT, LABEL, root=root, **kwargs)
74
75 TEXT.build_vocab(train, vectors=vectors)
76 LABEL.build_vocab(train)
77
78 return data.BucketIterator.splits(
79 (train, test), batch_size=batch_size, device=device)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchtext/datasets/imdb.py b/torchtext/datasets/imdb.py
--- a/torchtext/datasets/imdb.py
+++ b/torchtext/datasets/imdb.py
@@ -1,5 +1,6 @@
import os
import glob
+import io
from .. import data
@@ -29,7 +30,7 @@
for label in ['pos', 'neg']:
for fname in glob.iglob(os.path.join(path, label, '*.txt')):
- with open(fname, 'r', encoding="utf-8") as f:
+ with io.open(fname, 'r', encoding="utf-8") as f:
text = f.readline()
examples.append(data.Example.fromlist([text, label], fields))
| {"golden_diff": "diff --git a/torchtext/datasets/imdb.py b/torchtext/datasets/imdb.py\n--- a/torchtext/datasets/imdb.py\n+++ b/torchtext/datasets/imdb.py\n@@ -1,5 +1,6 @@\n import os\n import glob\n+import io\n \n from .. import data\n \n@@ -29,7 +30,7 @@\n \n for label in ['pos', 'neg']:\n for fname in glob.iglob(os.path.join(path, label, '*.txt')):\n- with open(fname, 'r', encoding=\"utf-8\") as f:\n+ with io.open(fname, 'r', encoding=\"utf-8\") as f:\n text = f.readline()\n examples.append(data.Example.fromlist([text, label], fields))\n", "issue": "TypeError in Python 2.7\nhttps://github.com/pytorch/text/blob/a2795e5731d1b7c0298a1b5087bb8142e1c39d0b/torchtext/datasets/imdb.py#L32\r\n\r\nIn python 2.7, it will report that `TypeError: 'encoding' is an invalid keyword argument for this function`.\r\n\r\nI replace `open` with `io.open` to fix it.\n", "before_files": [{"content": "import os\nimport glob\n\nfrom .. import data\n\n\nclass IMDB(data.Dataset):\n\n urls = ['http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz']\n name = 'imdb'\n dirname = 'aclImdb'\n\n @staticmethod\n def sort_key(ex):\n return len(ex.text)\n\n def __init__(self, path, text_field, label_field, **kwargs):\n \"\"\"Create an IMDB dataset instance given a path and fields.\n\n Arguments:\n path: Path to the dataset's highest level directory\n text_field: The field that will be used for text data.\n label_field: The field that will be used for label data.\n Remaining keyword arguments: Passed to the constructor of\n data.Dataset.\n \"\"\"\n fields = [('text', text_field), ('label', label_field)]\n examples = []\n\n for label in ['pos', 'neg']:\n for fname in glob.iglob(os.path.join(path, label, '*.txt')):\n with open(fname, 'r', encoding=\"utf-8\") as f:\n text = f.readline()\n examples.append(data.Example.fromlist([text, label], fields))\n\n super(IMDB, self).__init__(examples, fields, **kwargs)\n\n @classmethod\n def splits(cls, text_field, label_field, root='.data',\n train='train', test='test', **kwargs):\n \"\"\"Create dataset objects for splits of the IMDB dataset.\n\n Arguments:\n text_field: The field that will be used for the sentence.\n label_field: The field that will be used for label data.\n root: Root dataset storage directory. Default is '.data'.\n train: The directory that contains the training examples\n test: The directory that contains the test examples\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n return super(IMDB, cls).splits(\n root=root, text_field=text_field, label_field=label_field,\n train=train, validation=None, test=test, **kwargs)\n\n @classmethod\n def iters(cls, batch_size=32, device=0, root='.data', vectors=None, **kwargs):\n \"\"\"Creater iterator objects for splits of the IMDB dataset.\n\n Arguments:\n batch_size: Batch_size\n device: Device to create batches on. 
Use - 1 for CPU and None for\n the currently active GPU device.\n root: The root directory that contains the imdb dataset subdirectory\n vectors: one of the available pretrained vectors or a list with each\n element one of the available pretrained vectors (see Vocab.load_vectors)\n\n Remaining keyword arguments: Passed to the splits method.\n \"\"\"\n TEXT = data.Field()\n LABEL = data.Field(sequential=False)\n\n train, test = cls.splits(TEXT, LABEL, root=root, **kwargs)\n\n TEXT.build_vocab(train, vectors=vectors)\n LABEL.build_vocab(train)\n\n return data.BucketIterator.splits(\n (train, test), batch_size=batch_size, device=device)\n", "path": "torchtext/datasets/imdb.py"}], "after_files": [{"content": "import os\nimport glob\nimport io\n\nfrom .. import data\n\n\nclass IMDB(data.Dataset):\n\n urls = ['http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz']\n name = 'imdb'\n dirname = 'aclImdb'\n\n @staticmethod\n def sort_key(ex):\n return len(ex.text)\n\n def __init__(self, path, text_field, label_field, **kwargs):\n \"\"\"Create an IMDB dataset instance given a path and fields.\n\n Arguments:\n path: Path to the dataset's highest level directory\n text_field: The field that will be used for text data.\n label_field: The field that will be used for label data.\n Remaining keyword arguments: Passed to the constructor of\n data.Dataset.\n \"\"\"\n fields = [('text', text_field), ('label', label_field)]\n examples = []\n\n for label in ['pos', 'neg']:\n for fname in glob.iglob(os.path.join(path, label, '*.txt')):\n with io.open(fname, 'r', encoding=\"utf-8\") as f:\n text = f.readline()\n examples.append(data.Example.fromlist([text, label], fields))\n\n super(IMDB, self).__init__(examples, fields, **kwargs)\n\n @classmethod\n def splits(cls, text_field, label_field, root='.data',\n train='train', test='test', **kwargs):\n \"\"\"Create dataset objects for splits of the IMDB dataset.\n\n Arguments:\n text_field: The field that will be used for the sentence.\n label_field: The field that will be used for label data.\n root: Root dataset storage directory. Default is '.data'.\n train: The directory that contains the training examples\n test: The directory that contains the test examples\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n return super(IMDB, cls).splits(\n root=root, text_field=text_field, label_field=label_field,\n train=train, validation=None, test=test, **kwargs)\n\n @classmethod\n def iters(cls, batch_size=32, device=0, root='.data', vectors=None, **kwargs):\n \"\"\"Creater iterator objects for splits of the IMDB dataset.\n\n Arguments:\n batch_size: Batch_size\n device: Device to create batches on. Use - 1 for CPU and None for\n the currently active GPU device.\n root: The root directory that contains the imdb dataset subdirectory\n vectors: one of the available pretrained vectors or a list with each\n element one of the available pretrained vectors (see Vocab.load_vectors)\n\n Remaining keyword arguments: Passed to the splits method.\n \"\"\"\n TEXT = data.Field()\n LABEL = data.Field(sequential=False)\n\n train, test = cls.splits(TEXT, LABEL, root=root, **kwargs)\n\n TEXT.build_vocab(train, vectors=vectors)\n LABEL.build_vocab(train)\n\n return data.BucketIterator.splits(\n (train, test), batch_size=batch_size, device=device)\n", "path": "torchtext/datasets/imdb.py"}]} | 1,179 | 168 |
gh_patches_debug_12119 | rasdani/github-patches | git_diff | sanic-org__sanic-647 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
What happened to static.py?
Last Friday everything was fine and my static file test passed.
Today, when I `pip install sanic==0.5.1`,
it raises a 404 error.
When I `pip install sanic==0.5.0`,
everything works again.
It seems the code below has a problem:
if not file_path.startswith(root_path):
raise FileNotFound('File not found',
path=file_or_directory,
relative_url=file_uri)
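
The check fails whenever `file_path` has been absolutized but `root_path` has not — exactly the case when the static directory was registered with a relative path. An illustrative session (assuming the process was started from `/srv/app`):

```python
from os import path

root_path = 'static'                           # directory as registered
file_path = path.abspath('static/app.js')     # -> '/srv/app/static/app.js'

file_path.startswith(root_path)                # False -> spurious 404 in 0.5.1
file_path.startswith(path.abspath(root_path))  # True  -> behaviour of the fixed check
```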
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/static.py`
Content:
```
1 from mimetypes import guess_type
2 from os import path
3 from re import sub
4 from time import strftime, gmtime
5 from urllib.parse import unquote
6
7 from aiofiles.os import stat
8
9 from sanic.exceptions import (
10 ContentRangeError,
11 FileNotFound,
12 HeaderNotFound,
13 InvalidUsage,
14 )
15 from sanic.handlers import ContentRangeHandler
16 from sanic.response import file, HTTPResponse
17
18
19 def register(app, uri, file_or_directory, pattern,
20 use_modified_since, use_content_range):
21 # TODO: Though sanic is not a file server, I feel like we should at least
22 # make a good effort here. Modified-since is nice, but we could
23 # also look into etags, expires, and caching
24 """
25 Register a static directory handler with Sanic by adding a route to the
26 router and registering a handler.
27
28 :param app: Sanic
29 :param file_or_directory: File or directory path to serve from
30 :param uri: URL to serve from
31 :param pattern: regular expression used to match files in the URL
32 :param use_modified_since: If true, send file modified time, and return
33 not modified if the browser's matches the
34 server's
35 :param use_content_range: If true, process header for range requests
36 and sends the file part that is requested
37 """
38 # If we're not trying to match a file directly,
39 # serve from the folder
40 if not path.isfile(file_or_directory):
41 uri += '<file_uri:' + pattern + '>'
42
43 async def _handler(request, file_uri=None):
44 # Using this to determine if the URL is trying to break out of the path
45 # served. os.path.realpath seems to be very slow
46 if file_uri and '../' in file_uri:
47 raise InvalidUsage("Invalid URL")
48 # Merge served directory and requested file if provided
49 # Strip all / that in the beginning of the URL to help prevent python
50 # from herping a derp and treating the uri as an absolute path
51 root_path = file_path = file_or_directory
52 if file_uri:
53 file_path = path.join(
54 file_or_directory, sub('^[/]*', '', file_uri))
55
56 # URL decode the path sent by the browser otherwise we won't be able to
57 # match filenames which got encoded (filenames with spaces etc)
58 file_path = path.abspath(unquote(file_path))
59 if not file_path.startswith(root_path):
60 raise FileNotFound('File not found',
61 path=file_or_directory,
62 relative_url=file_uri)
63 try:
64 headers = {}
65 # Check if the client has been sent this file before
66 # and it has not been modified since
67 stats = None
68 if use_modified_since:
69 stats = await stat(file_path)
70 modified_since = strftime(
71 '%a, %d %b %Y %H:%M:%S GMT', gmtime(stats.st_mtime))
72 if request.headers.get('If-Modified-Since') == modified_since:
73 return HTTPResponse(status=304)
74 headers['Last-Modified'] = modified_since
75 _range = None
76 if use_content_range:
77 _range = None
78 if not stats:
79 stats = await stat(file_path)
80 headers['Accept-Ranges'] = 'bytes'
81 headers['Content-Length'] = str(stats.st_size)
82 if request.method != 'HEAD':
83 try:
84 _range = ContentRangeHandler(request, stats)
85 except HeaderNotFound:
86 pass
87 else:
88 del headers['Content-Length']
89 for key, value in _range.headers.items():
90 headers[key] = value
91 if request.method == 'HEAD':
92 return HTTPResponse(
93 headers=headers,
94 content_type=guess_type(file_path)[0] or 'text/plain')
95 else:
96 return await file(file_path, headers=headers, _range=_range)
97 except ContentRangeError:
98 raise
99 except Exception:
100 raise FileNotFound('File not found',
101 path=file_or_directory,
102 relative_url=file_uri)
103
104 app.route(uri, methods=['GET', 'HEAD'])(_handler)
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/static.py b/sanic/static.py
--- a/sanic/static.py
+++ b/sanic/static.py
@@ -56,7 +56,7 @@
# URL decode the path sent by the browser otherwise we won't be able to
# match filenames which got encoded (filenames with spaces etc)
file_path = path.abspath(unquote(file_path))
- if not file_path.startswith(root_path):
+ if not file_path.startswith(path.abspath(unquote(root_path))):
raise FileNotFound('File not found',
path=file_or_directory,
relative_url=file_uri)
| {"golden_diff": "diff --git a/sanic/static.py b/sanic/static.py\n--- a/sanic/static.py\n+++ b/sanic/static.py\n@@ -56,7 +56,7 @@\n # URL decode the path sent by the browser otherwise we won't be able to\n # match filenames which got encoded (filenames with spaces etc)\n file_path = path.abspath(unquote(file_path))\n- if not file_path.startswith(root_path):\n+ if not file_path.startswith(path.abspath(unquote(root_path))):\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n", "issue": "what have done to static.py?\nOn last Friday,everything is ok,my static file test works fine.\r\n\r\nToday,when I pip install sanic==0.5.1\r\nIt raise 404 error.\r\n\r\nwhen I pip install sanic==0.5.0\r\neverything is ok again.\r\n\r\nseems like the code blow has some problem?\r\nif not file_path.startswith(root_path):\r\n raise FileNotFound('File not found',\r\n path=file_or_directory,\r\n relative_url=file_uri)\n", "before_files": [{"content": "from mimetypes import guess_type\nfrom os import path\nfrom re import sub\nfrom time import strftime, gmtime\nfrom urllib.parse import unquote\n\nfrom aiofiles.os import stat\n\nfrom sanic.exceptions import (\n ContentRangeError,\n FileNotFound,\n HeaderNotFound,\n InvalidUsage,\n)\nfrom sanic.handlers import ContentRangeHandler\nfrom sanic.response import file, HTTPResponse\n\n\ndef register(app, uri, file_or_directory, pattern,\n use_modified_since, use_content_range):\n # TODO: Though sanic is not a file server, I feel like we should at least\n # make a good effort here. Modified-since is nice, but we could\n # also look into etags, expires, and caching\n \"\"\"\n Register a static directory handler with Sanic by adding a route to the\n router and registering a handler.\n\n :param app: Sanic\n :param file_or_directory: File or directory path to serve from\n :param uri: URL to serve from\n :param pattern: regular expression used to match files in the URL\n :param use_modified_since: If true, send file modified time, and return\n not modified if the browser's matches the\n server's\n :param use_content_range: If true, process header for range requests\n and sends the file part that is requested\n \"\"\"\n # If we're not trying to match a file directly,\n # serve from the folder\n if not path.isfile(file_or_directory):\n uri += '<file_uri:' + pattern + '>'\n\n async def _handler(request, file_uri=None):\n # Using this to determine if the URL is trying to break out of the path\n # served. 
os.path.realpath seems to be very slow\n if file_uri and '../' in file_uri:\n raise InvalidUsage(\"Invalid URL\")\n # Merge served directory and requested file if provided\n # Strip all / that in the beginning of the URL to help prevent python\n # from herping a derp and treating the uri as an absolute path\n root_path = file_path = file_or_directory\n if file_uri:\n file_path = path.join(\n file_or_directory, sub('^[/]*', '', file_uri))\n\n # URL decode the path sent by the browser otherwise we won't be able to\n # match filenames which got encoded (filenames with spaces etc)\n file_path = path.abspath(unquote(file_path))\n if not file_path.startswith(root_path):\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n try:\n headers = {}\n # Check if the client has been sent this file before\n # and it has not been modified since\n stats = None\n if use_modified_since:\n stats = await stat(file_path)\n modified_since = strftime(\n '%a, %d %b %Y %H:%M:%S GMT', gmtime(stats.st_mtime))\n if request.headers.get('If-Modified-Since') == modified_since:\n return HTTPResponse(status=304)\n headers['Last-Modified'] = modified_since\n _range = None\n if use_content_range:\n _range = None\n if not stats:\n stats = await stat(file_path)\n headers['Accept-Ranges'] = 'bytes'\n headers['Content-Length'] = str(stats.st_size)\n if request.method != 'HEAD':\n try:\n _range = ContentRangeHandler(request, stats)\n except HeaderNotFound:\n pass\n else:\n del headers['Content-Length']\n for key, value in _range.headers.items():\n headers[key] = value\n if request.method == 'HEAD':\n return HTTPResponse(\n headers=headers,\n content_type=guess_type(file_path)[0] or 'text/plain')\n else:\n return await file(file_path, headers=headers, _range=_range)\n except ContentRangeError:\n raise\n except Exception:\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n\n app.route(uri, methods=['GET', 'HEAD'])(_handler)\n", "path": "sanic/static.py"}], "after_files": [{"content": "from mimetypes import guess_type\nfrom os import path\nfrom re import sub\nfrom time import strftime, gmtime\nfrom urllib.parse import unquote\n\nfrom aiofiles.os import stat\n\nfrom sanic.exceptions import (\n ContentRangeError,\n FileNotFound,\n HeaderNotFound,\n InvalidUsage,\n)\nfrom sanic.handlers import ContentRangeHandler\nfrom sanic.response import file, HTTPResponse\n\n\ndef register(app, uri, file_or_directory, pattern,\n use_modified_since, use_content_range):\n # TODO: Though sanic is not a file server, I feel like we should at least\n # make a good effort here. 
Modified-since is nice, but we could\n # also look into etags, expires, and caching\n \"\"\"\n Register a static directory handler with Sanic by adding a route to the\n router and registering a handler.\n\n :param app: Sanic\n :param file_or_directory: File or directory path to serve from\n :param uri: URL to serve from\n :param pattern: regular expression used to match files in the URL\n :param use_modified_since: If true, send file modified time, and return\n not modified if the browser's matches the\n server's\n :param use_content_range: If true, process header for range requests\n and sends the file part that is requested\n \"\"\"\n # If we're not trying to match a file directly,\n # serve from the folder\n if not path.isfile(file_or_directory):\n uri += '<file_uri:' + pattern + '>'\n\n async def _handler(request, file_uri=None):\n # Using this to determine if the URL is trying to break out of the path\n # served. os.path.realpath seems to be very slow\n if file_uri and '../' in file_uri:\n raise InvalidUsage(\"Invalid URL\")\n # Merge served directory and requested file if provided\n # Strip all / that in the beginning of the URL to help prevent python\n # from herping a derp and treating the uri as an absolute path\n root_path = file_path = file_or_directory\n if file_uri:\n file_path = path.join(\n file_or_directory, sub('^[/]*', '', file_uri))\n\n # URL decode the path sent by the browser otherwise we won't be able to\n # match filenames which got encoded (filenames with spaces etc)\n file_path = path.abspath(unquote(file_path))\n if not file_path.startswith(path.abspath(unquote(root_path))):\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n try:\n headers = {}\n # Check if the client has been sent this file before\n # and it has not been modified since\n stats = None\n if use_modified_since:\n stats = await stat(file_path)\n modified_since = strftime(\n '%a, %d %b %Y %H:%M:%S GMT', gmtime(stats.st_mtime))\n if request.headers.get('If-Modified-Since') == modified_since:\n return HTTPResponse(status=304)\n headers['Last-Modified'] = modified_since\n _range = None\n if use_content_range:\n _range = None\n if not stats:\n stats = await stat(file_path)\n headers['Accept-Ranges'] = 'bytes'\n headers['Content-Length'] = str(stats.st_size)\n if request.method != 'HEAD':\n try:\n _range = ContentRangeHandler(request, stats)\n except HeaderNotFound:\n pass\n else:\n del headers['Content-Length']\n for key, value in _range.headers.items():\n headers[key] = value\n if request.method == 'HEAD':\n return HTTPResponse(\n headers=headers,\n content_type=guess_type(file_path)[0] or 'text/plain')\n else:\n return await file(file_path, headers=headers, _range=_range)\n except ContentRangeError:\n raise\n except Exception:\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n\n app.route(uri, methods=['GET', 'HEAD'])(_handler)\n", "path": "sanic/static.py"}]} | 1,452 | 128 |
gh_patches_debug_49285 | rasdani/github-patches | git_diff | ansible__awx-12242 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Websocket not working at non-root path
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Summary
The changes from #11342 and #652 are incomplete.
### AWX version
21.0.0
### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
Deploy AWX with custom `ingress_path: /awx`
### Expected results
websocket should work
### Actual results
`2022-05-17 08:46:41,031 ERROR [-] daphne.ws_protocol [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect
--- <exception caught here> ---
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__
]
2022-05-17 08:46:41,031 ERROR [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect
--- <exception caught here> ---
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__`
### Additional information
It seems that issue is in https://github.com/ansible/awx/blob/48b016802c517ff04d1cff4c43e64f17bb77a7a8/awx/main/routing.py
```
websocket_urlpatterns = [
re_path(r'websocket/$', consumers.EventConsumer),
re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer),
]
```
From https://docs.djangoproject.com/en/4.0/ref/urls/:

> When a route ends with $ the whole requested URL, matching against path_info, must match the regular expression pattern (re.fullmatch() is used).
Replacing with
```
websocket_urlpatterns = [
re_path(r'websocket/', consumers.EventConsumer),
re_path(r'websocket/broadcast/', consumers.BroadcastConsumer),
]
```
solves the issue
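
Per the quoted documentation, a route ending in `$` is checked with `re.fullmatch()` against the whole path, while one without it falls back to `re.search()`. A quick illustration of why only the latter survives an ingress prefix:

```python
import re

path = 'awx/websocket/'  # path as seen behind a non-root ingress prefix

re.fullmatch(r'websocket/$', path)  # None  -> "No route found for path ..."
re.search(r'websocket/', path)      # match -> the route resolves
```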
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awx/main/routing.py`
Content:
```
1 import redis
2 import logging
3
4 from django.conf import settings
5 from django.urls import re_path
6
7 from channels.auth import AuthMiddlewareStack
8 from channels.routing import ProtocolTypeRouter, URLRouter
9
10 from . import consumers
11
12
13 logger = logging.getLogger('awx.main.routing')
14
15
16 class AWXProtocolTypeRouter(ProtocolTypeRouter):
17 def __init__(self, *args, **kwargs):
18 try:
19 r = redis.Redis.from_url(settings.BROKER_URL)
20 for k in r.scan_iter('asgi:*', 500):
21 logger.debug(f"cleaning up Redis key {k}")
22 r.delete(k)
23 except redis.exceptions.RedisError as e:
24 logger.warning("encountered an error communicating with redis.")
25 raise e
26 super().__init__(*args, **kwargs)
27
28
29 websocket_urlpatterns = [
30 re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
31 re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),
32 ]
33
34 application = AWXProtocolTypeRouter(
35 {
36 'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
37 }
38 )
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awx/main/routing.py b/awx/main/routing.py
--- a/awx/main/routing.py
+++ b/awx/main/routing.py
@@ -27,8 +27,8 @@
websocket_urlpatterns = [
- re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
- re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),
+ re_path(r'websocket/', consumers.EventConsumer.as_asgi()),
+ re_path(r'websocket/broadcast/', consumers.BroadcastConsumer.as_asgi()),
]
application = AWXProtocolTypeRouter(
| {"golden_diff": "diff --git a/awx/main/routing.py b/awx/main/routing.py\n--- a/awx/main/routing.py\n+++ b/awx/main/routing.py\n@@ -27,8 +27,8 @@\n \n \n websocket_urlpatterns = [\n- re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),\n- re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),\n+ re_path(r'websocket/', consumers.EventConsumer.as_asgi()),\n+ re_path(r'websocket/broadcast/', consumers.BroadcastConsumer.as_asgi()),\n ]\n \n application = AWXProtocolTypeRouter(\n", "issue": "Websocket not working at non-root path\n### Please confirm the following\n\n- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).\n- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.\n- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.\n\n### Summary\n\nChanges from #11342 and #652 are not full\n\n### AWX version\n\n21.0.0\n\n### Select the relevant components\n\n- [X] UI\n- [ ] API\n- [ ] Docs\n\n### Installation method\n\nkubernetes\n\n### Modifications\n\nno\n\n### Ansible version\n\n_No response_\n\n### Operating system\n\n_No response_\n\n### Web browser\n\n_No response_\n\n### Steps to reproduce\n\nDeploy AWX with custom `ingress_path: /awx`\n\n### Expected results\n\nwebsocket should work\n\n### Actual results\n\n`2022-05-17 08:46:41,031 ERROR [-] daphne.ws_protocol [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect\r\n--- <exception caught here> ---\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__\r\n]\r\n2022-05-17 08:46:41,031 ERROR [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect\r\n--- <exception caught here> 
---\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__`\n\n### Additional information\n\nIt seems that issue is in https://github.com/ansible/awx/blob/48b016802c517ff04d1cff4c43e64f17bb77a7a8/awx/main/routing.py\r\n\r\n```\r\nwebsocket_urlpatterns = [\r\n re_path(r'websocket/$', consumers.EventConsumer),\r\n re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer),\r\n]\r\n```\r\nFrom https://docs.djangoproject.com/en/4.0/ref/urls/:\r\n\r\nWhen a route ends with $ the whole requested URL, matching against path_info, must match the regular expression pattern (re.fullmatch() is used).\r\n\r\nReplacing with\r\n\r\n```\r\nwebsocket_urlpatterns = [\r\n re_path(r'websocket/', consumers.EventConsumer),\r\n re_path(r'websocket/broadcast/', consumers.BroadcastConsumer),\r\n]\r\n```\r\n\r\nsolves the issue\n", "before_files": [{"content": "import redis\nimport logging\n\nfrom django.conf import settings\nfrom django.urls import re_path\n\nfrom channels.auth import AuthMiddlewareStack\nfrom channels.routing import ProtocolTypeRouter, URLRouter\n\nfrom . import consumers\n\n\nlogger = logging.getLogger('awx.main.routing')\n\n\nclass AWXProtocolTypeRouter(ProtocolTypeRouter):\n def __init__(self, *args, **kwargs):\n try:\n r = redis.Redis.from_url(settings.BROKER_URL)\n for k in r.scan_iter('asgi:*', 500):\n logger.debug(f\"cleaning up Redis key {k}\")\n r.delete(k)\n except redis.exceptions.RedisError as e:\n logger.warning(\"encountered an error communicating with redis.\")\n raise e\n super().__init__(*args, **kwargs)\n\n\nwebsocket_urlpatterns = [\n re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),\n re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),\n]\n\napplication = AWXProtocolTypeRouter(\n {\n 'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),\n }\n)\n", "path": "awx/main/routing.py"}], "after_files": [{"content": "import redis\nimport logging\n\nfrom django.conf import settings\nfrom django.urls import re_path\n\nfrom channels.auth import AuthMiddlewareStack\nfrom channels.routing import ProtocolTypeRouter, URLRouter\n\nfrom . 
import consumers\n\n\nlogger = logging.getLogger('awx.main.routing')\n\n\nclass AWXProtocolTypeRouter(ProtocolTypeRouter):\n def __init__(self, *args, **kwargs):\n try:\n r = redis.Redis.from_url(settings.BROKER_URL)\n for k in r.scan_iter('asgi:*', 500):\n logger.debug(f\"cleaning up Redis key {k}\")\n r.delete(k)\n except redis.exceptions.RedisError as e:\n logger.warning(\"encountered an error communicating with redis.\")\n raise e\n super().__init__(*args, **kwargs)\n\n\nwebsocket_urlpatterns = [\n re_path(r'websocket/', consumers.EventConsumer.as_asgi()),\n re_path(r'websocket/broadcast/', consumers.BroadcastConsumer.as_asgi()),\n]\n\napplication = AWXProtocolTypeRouter(\n {\n 'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),\n }\n)\n", "path": "awx/main/routing.py"}]} | 1,911 | 132 |
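To illustrate the dispatch rule the issue above quotes from the Django docs — a route ending in `$` is matched with `re.fullmatch()` against the whole `path_info`, while a route without it only needs a substring match — here is a minimal plain-`re` sketch (the real Django/channels matching code differs in detail; this only mimics the documented behavior):

```python
import re

def route_matches(pattern: str, path: str) -> bool:
    # Documented re_path rule: a trailing '$' forces a full match of
    # the entire path; otherwise a substring search is sufficient.
    if pattern.endswith("$"):
        return re.fullmatch(pattern, path) is not None
    return re.search(pattern, path) is not None

path = "awx/websocket/"  # AWX mounted under a non-root ingress_path
print(route_matches(r"websocket/$", path))  # False -> "No route found for path"
print(route_matches(r"websocket/", path))   # True  -> behavior after the patch
```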
gh_patches_debug_38158 | rasdani/github-patches | git_diff | Flexget__Flexget-171 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Input plugin "imdb_list" currently failing to fetch lists behind authentication
Message: `There was an error during imdb_list input (Unable to get imdb list: 404 Client Error: Not Found), using cache instead.`

Same issue as http://flexget.com/ticket/2313, but even with the most recent fix applied it still fails.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/input/imdb_list.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 import logging
3 import csv
4 import re
5 from cgi import parse_header
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils import requests
10 from flexget.utils.imdb import make_url
11 from flexget.utils.cached_input import cached
12 from flexget.utils.tools import decode_html
13 from flexget.entry import Entry
14 from flexget.utils.soup import get_soup
15
16 log = logging.getLogger('imdb_list')
17
18 USER_ID_RE = r'^ur\d{7,8}$'
19
20
21 class ImdbList(object):
22 """"Creates an entry for each movie in your imdb list."""
23
24 schema = {
25 'type': 'object',
26 'properties': {
27 'user_id': {
28 'type': 'string',
29 'pattern': USER_ID_RE,
30 'error_pattern': 'user_id must be in the form urXXXXXXX'
31 },
32 'username': {'type': 'string'},
33 'password': {'type': 'string'},
34 'list': {'type': 'string'}
35 },
36 'required': ['list'],
37 'additionalProperties': False
38 }
39
40 @cached('imdb_list', persist='2 hours')
41 def on_task_input(self, task, config):
42 sess = requests.Session()
43 if config.get('username') and config.get('password'):
44
45 log.verbose('Logging in ...')
46
47 # Log in to imdb with our handler
48 params = {'login': config['username'], 'password': config['password']}
49 try:
50 # First get the login page so we can get the hidden input value
51 soup = get_soup(sess.get('https://secure.imdb.com/register-imdb/login').content)
52
53 # Fix for bs4 bug. see #2313 and github#118
54 auxsoup = soup.find('div', id='nb20').next_sibling.next_sibling
55 tag = auxsoup.find('input', attrs={'name': '49e6c'})
56 if tag:
57 params['49e6c'] = tag['value']
58 else:
59 log.warning('Unable to find required info for imdb login, maybe their login method has changed.')
60 # Now we do the actual login with appropriate parameters
61 r = sess.post('https://secure.imdb.com/register-imdb/login', data=params, raise_status=False)
62 except requests.RequestException as e:
63 raise plugin.PluginError('Unable to login to imdb: %s' % e.message)
64
65 # IMDb redirects us upon a successful login.
66 # removed - doesn't happen always?
67 # if r.status_code != 302:
68 # log.warning('It appears logging in to IMDb was unsuccessful.')
69
70 # try to automatically figure out user_id from watchlist redirect url
71 if not 'user_id' in config:
72 log.verbose('Getting user_id ...')
73 try:
74 response = sess.get('http://www.imdb.com/list/watchlist')
75 except requests.RequestException as e:
76 log.error('Error retrieving user ID from imdb: %s' % e.message)
77 user_id = ''
78 else:
79 log.debug('redirected to %s' % response.url)
80 user_id = response.url.split('/')[-2]
81 if re.match(USER_ID_RE, user_id):
82 config['user_id'] = user_id
83 else:
84 raise plugin.PluginError('Couldn\'t figure out user_id, please configure it manually.')
85
86 if not 'user_id' in config:
87 raise plugin.PluginError('Configuration option `user_id` required.')
88
89 log.verbose('Retrieving list %s ...' % config['list'])
90
91 # Get the imdb list in csv format
92 try:
93 url = 'http://www.imdb.com/list/export'
94 params = {'list_id': config['list'], 'author_id': config['user_id']}
95 log.debug('Requesting %s' % url)
96 opener = sess.get(url, params=params)
97 mime_type = parse_header(opener.headers['content-type'])[0]
98 log.debug('mime_type: %s' % mime_type)
99 if mime_type != 'text/csv':
100 raise plugin.PluginError('Didn\'t get CSV export as response. Probably specified list `%s` '
101 'does not exist.' % config['list'])
102 csv_rows = csv.reader(opener.iter_lines())
103 except requests.RequestException as e:
104 raise plugin.PluginError('Unable to get imdb list: %s' % e.message)
105
106 # Create an Entry for each movie in the list
107 entries = []
108 for row in csv_rows:
109 if not row or row[0] == 'position':
110 # Don't use blank rows or the headings row
111 continue
112 try:
113 title = decode_html(row[5]).decode('utf-8')
114 entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))
115 except IndexError:
116 log.critical('IndexError! Unable to handle row: %s' % row)
117 return entries
118
119
120 @event('plugin.register')
121 def register_plugin():
122 plugin.register(ImdbList, 'imdb_list', api_ver=2)
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/plugins/input/imdb_list.py b/flexget/plugins/input/imdb_list.py
--- a/flexget/plugins/input/imdb_list.py
+++ b/flexget/plugins/input/imdb_list.py
@@ -1,13 +1,13 @@
from __future__ import unicode_literals, division, absolute_import
import logging
-import csv
+import feedparser
import re
from cgi import parse_header
from flexget import plugin
from flexget.event import event
from flexget.utils import requests
-from flexget.utils.imdb import make_url
+from flexget.utils.imdb import make_url, extract_id
from flexget.utils.cached_input import cached
from flexget.utils.tools import decode_html
from flexget.entry import Entry
@@ -88,32 +88,27 @@
log.verbose('Retrieving list %s ...' % config['list'])
- # Get the imdb list in csv format
+ # Get the imdb list in RSS format
try:
- url = 'http://www.imdb.com/list/export'
- params = {'list_id': config['list'], 'author_id': config['user_id']}
+ if config['list'] in ['watchlist', 'ratings', 'checkins']:
+ url = 'http://rss.imdb.com/user/%s/%s' % (config['user_id'], config['list'])
+ else:
+ url = 'http://rss.imdb.com/list/%s' % config['list']
log.debug('Requesting %s' % url)
- opener = sess.get(url, params=params)
- mime_type = parse_header(opener.headers['content-type'])[0]
- log.debug('mime_type: %s' % mime_type)
- if mime_type != 'text/csv':
- raise plugin.PluginError('Didn\'t get CSV export as response. Probably specified list `%s` '
- 'does not exist.' % config['list'])
- csv_rows = csv.reader(opener.iter_lines())
+ try:
+ rss = feedparser.parse(url)
+ except LookupError as e:
+ raise plugin.PluginError('Failed to parse RSS feed for list `%s` correctly: %s' % (config['list'], e))
except requests.RequestException as e:
raise plugin.PluginError('Unable to get imdb list: %s' % e.message)
# Create an Entry for each movie in the list
entries = []
- for row in csv_rows:
- if not row or row[0] == 'position':
- # Don't use blank rows or the headings row
- continue
+ for entry in rss.entries:
try:
- title = decode_html(row[5]).decode('utf-8')
- entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))
+ entries.append(Entry(title=entry.title, url=entry.link, imdb_id=extract_id(entry.link), imdb_name=entry.title))
except IndexError:
- log.critical('IndexError! Unable to handle row: %s' % row)
+ log.critical('IndexError! Unable to handle RSS entry: %s' % entry)
return entries
| {"golden_diff": "diff --git a/flexget/plugins/input/imdb_list.py b/flexget/plugins/input/imdb_list.py\n--- a/flexget/plugins/input/imdb_list.py\n+++ b/flexget/plugins/input/imdb_list.py\n@@ -1,13 +1,13 @@\n from __future__ import unicode_literals, division, absolute_import\n import logging\n-import csv\n+import feedparser\n import re\n from cgi import parse_header\n \n from flexget import plugin\n from flexget.event import event\n from flexget.utils import requests\n-from flexget.utils.imdb import make_url\n+from flexget.utils.imdb import make_url, extract_id\n from flexget.utils.cached_input import cached\n from flexget.utils.tools import decode_html\n from flexget.entry import Entry\n@@ -88,32 +88,27 @@\n \n log.verbose('Retrieving list %s ...' % config['list'])\n \n- # Get the imdb list in csv format\n+ # Get the imdb list in RSS format\n try:\n- url = 'http://www.imdb.com/list/export'\n- params = {'list_id': config['list'], 'author_id': config['user_id']}\n+ if config['list'] in ['watchlist', 'ratings', 'checkins']:\n+ url = 'http://rss.imdb.com/user/%s/%s' % (config['user_id'], config['list'])\n+ else:\n+ url = 'http://rss.imdb.com/list/%s' % config['list']\n log.debug('Requesting %s' % url)\n- opener = sess.get(url, params=params)\n- mime_type = parse_header(opener.headers['content-type'])[0]\n- log.debug('mime_type: %s' % mime_type)\n- if mime_type != 'text/csv':\n- raise plugin.PluginError('Didn\\'t get CSV export as response. Probably specified list `%s` '\n- 'does not exist.' % config['list'])\n- csv_rows = csv.reader(opener.iter_lines())\n+ try:\n+ rss = feedparser.parse(url)\n+ except LookupError as e:\n+ raise plugin.PluginError('Failed to parse RSS feed for list `%s` correctly: %s' % (config['list'], e))\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to get imdb list: %s' % e.message)\n \n # Create an Entry for each movie in the list\n entries = []\n- for row in csv_rows:\n- if not row or row[0] == 'position':\n- # Don't use blank rows or the headings row\n- continue\n+ for entry in rss.entries:\n try:\n- title = decode_html(row[5]).decode('utf-8')\n- entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))\n+ entries.append(Entry(title=entry.title, url=entry.link, imdb_id=extract_id(entry.link), imdb_name=entry.title))\n except IndexError:\n- log.critical('IndexError! Unable to handle row: %s' % row)\n+ log.critical('IndexError! 
Unable to handle RSS entry: %s' % entry)\n return entries\n", "issue": "Input plugin \"imdb_list\" currently failing to fetch lists behind authentication\nMessage: `There was an error during imdb_list input (Unable to get imdb list: 404 Client Error: Not Found), using cache instead.\"`\n\nSame issue as, http://flexget.com/ticket/2313 but even with the most recent fix applied it still fails.\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nimport logging\nimport csv\nimport re\nfrom cgi import parse_header\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils import requests\nfrom flexget.utils.imdb import make_url\nfrom flexget.utils.cached_input import cached\nfrom flexget.utils.tools import decode_html\nfrom flexget.entry import Entry\nfrom flexget.utils.soup import get_soup\n\nlog = logging.getLogger('imdb_list')\n\nUSER_ID_RE = r'^ur\\d{7,8}$'\n\n\nclass ImdbList(object):\n \"\"\"\"Creates an entry for each movie in your imdb list.\"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'user_id': {\n 'type': 'string',\n 'pattern': USER_ID_RE,\n 'error_pattern': 'user_id must be in the form urXXXXXXX'\n },\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'list': {'type': 'string'}\n },\n 'required': ['list'],\n 'additionalProperties': False\n }\n\n @cached('imdb_list', persist='2 hours')\n def on_task_input(self, task, config):\n sess = requests.Session()\n if config.get('username') and config.get('password'):\n\n log.verbose('Logging in ...')\n\n # Log in to imdb with our handler\n params = {'login': config['username'], 'password': config['password']}\n try:\n # First get the login page so we can get the hidden input value\n soup = get_soup(sess.get('https://secure.imdb.com/register-imdb/login').content)\n\n # Fix for bs4 bug. see #2313 and github#118\n auxsoup = soup.find('div', id='nb20').next_sibling.next_sibling\n tag = auxsoup.find('input', attrs={'name': '49e6c'})\n if tag:\n params['49e6c'] = tag['value']\n else:\n log.warning('Unable to find required info for imdb login, maybe their login method has changed.')\n # Now we do the actual login with appropriate parameters\n r = sess.post('https://secure.imdb.com/register-imdb/login', data=params, raise_status=False)\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to login to imdb: %s' % e.message)\n\n # IMDb redirects us upon a successful login.\n # removed - doesn't happen always?\n # if r.status_code != 302:\n # log.warning('It appears logging in to IMDb was unsuccessful.')\n\n # try to automatically figure out user_id from watchlist redirect url\n if not 'user_id' in config:\n log.verbose('Getting user_id ...')\n try:\n response = sess.get('http://www.imdb.com/list/watchlist')\n except requests.RequestException as e:\n log.error('Error retrieving user ID from imdb: %s' % e.message)\n user_id = ''\n else:\n log.debug('redirected to %s' % response.url)\n user_id = response.url.split('/')[-2]\n if re.match(USER_ID_RE, user_id):\n config['user_id'] = user_id\n else:\n raise plugin.PluginError('Couldn\\'t figure out user_id, please configure it manually.')\n\n if not 'user_id' in config:\n raise plugin.PluginError('Configuration option `user_id` required.')\n\n log.verbose('Retrieving list %s ...' 
% config['list'])\n\n # Get the imdb list in csv format\n try:\n url = 'http://www.imdb.com/list/export'\n params = {'list_id': config['list'], 'author_id': config['user_id']}\n log.debug('Requesting %s' % url)\n opener = sess.get(url, params=params)\n mime_type = parse_header(opener.headers['content-type'])[0]\n log.debug('mime_type: %s' % mime_type)\n if mime_type != 'text/csv':\n raise plugin.PluginError('Didn\\'t get CSV export as response. Probably specified list `%s` '\n 'does not exist.' % config['list'])\n csv_rows = csv.reader(opener.iter_lines())\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to get imdb list: %s' % e.message)\n\n # Create an Entry for each movie in the list\n entries = []\n for row in csv_rows:\n if not row or row[0] == 'position':\n # Don't use blank rows or the headings row\n continue\n try:\n title = decode_html(row[5]).decode('utf-8')\n entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))\n except IndexError:\n log.critical('IndexError! Unable to handle row: %s' % row)\n return entries\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(ImdbList, 'imdb_list', api_ver=2)\n", "path": "flexget/plugins/input/imdb_list.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nimport logging\nimport feedparser\nimport re\nfrom cgi import parse_header\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils import requests\nfrom flexget.utils.imdb import make_url, extract_id\nfrom flexget.utils.cached_input import cached\nfrom flexget.utils.tools import decode_html\nfrom flexget.entry import Entry\nfrom flexget.utils.soup import get_soup\n\nlog = logging.getLogger('imdb_list')\n\nUSER_ID_RE = r'^ur\\d{7,8}$'\n\n\nclass ImdbList(object):\n \"\"\"\"Creates an entry for each movie in your imdb list.\"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'user_id': {\n 'type': 'string',\n 'pattern': USER_ID_RE,\n 'error_pattern': 'user_id must be in the form urXXXXXXX'\n },\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'list': {'type': 'string'}\n },\n 'required': ['list'],\n 'additionalProperties': False\n }\n\n @cached('imdb_list', persist='2 hours')\n def on_task_input(self, task, config):\n sess = requests.Session()\n if config.get('username') and config.get('password'):\n\n log.verbose('Logging in ...')\n\n # Log in to imdb with our handler\n params = {'login': config['username'], 'password': config['password']}\n try:\n # First get the login page so we can get the hidden input value\n soup = get_soup(sess.get('https://secure.imdb.com/register-imdb/login').content)\n\n # Fix for bs4 bug. 
see #2313 and github#118\n auxsoup = soup.find('div', id='nb20').next_sibling.next_sibling\n tag = auxsoup.find('input', attrs={'name': '49e6c'})\n if tag:\n params['49e6c'] = tag['value']\n else:\n log.warning('Unable to find required info for imdb login, maybe their login method has changed.')\n # Now we do the actual login with appropriate parameters\n r = sess.post('https://secure.imdb.com/register-imdb/login', data=params, raise_status=False)\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to login to imdb: %s' % e.message)\n\n # IMDb redirects us upon a successful login.\n # removed - doesn't happen always?\n # if r.status_code != 302:\n # log.warning('It appears logging in to IMDb was unsuccessful.')\n\n # try to automatically figure out user_id from watchlist redirect url\n if not 'user_id' in config:\n log.verbose('Getting user_id ...')\n try:\n response = sess.get('http://www.imdb.com/list/watchlist')\n except requests.RequestException as e:\n log.error('Error retrieving user ID from imdb: %s' % e.message)\n user_id = ''\n else:\n log.debug('redirected to %s' % response.url)\n user_id = response.url.split('/')[-2]\n if re.match(USER_ID_RE, user_id):\n config['user_id'] = user_id\n else:\n raise plugin.PluginError('Couldn\\'t figure out user_id, please configure it manually.')\n\n if not 'user_id' in config:\n raise plugin.PluginError('Configuration option `user_id` required.')\n\n log.verbose('Retrieving list %s ...' % config['list'])\n\n # Get the imdb list in RSS format\n try:\n if config['list'] in ['watchlist', 'ratings', 'checkins']:\n url = 'http://rss.imdb.com/user/%s/%s' % (config['user_id'], config['list'])\n else:\n url = 'http://rss.imdb.com/list/%s' % config['list']\n log.debug('Requesting %s' % url)\n try:\n rss = feedparser.parse(url)\n except LookupError as e:\n raise plugin.PluginError('Failed to parse RSS feed for list `%s` correctly: %s' % (config['list'], e))\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to get imdb list: %s' % e.message)\n\n # Create an Entry for each movie in the list\n entries = []\n for entry in rss.entries:\n try:\n entries.append(Entry(title=entry.title, url=entry.link, imdb_id=extract_id(entry.link), imdb_name=entry.title))\n except IndexError:\n log.critical('IndexError! Unable to handle RSS entry: %s' % entry)\n return entries\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(ImdbList, 'imdb_list', api_ver=2)\n", "path": "flexget/plugins/input/imdb_list.py"}]} | 1,708 | 704 |
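The patch swaps IMDb's retired CSV export for per-list RSS feeds parsed with `feedparser`. A rough standalone sketch of the patched code path — the `rss.imdb.com` endpoints may no longer exist, and the `tt`-id regex is a stand-in for flexget's `extract_id`, so treat this purely as an illustration of the new flow:

```python
import re
import feedparser

def imdb_list_entries(user_id, list_name):
    # watchlist/ratings/checkins live under /user/, named lists under /list/
    if list_name in ("watchlist", "ratings", "checkins"):
        url = "http://rss.imdb.com/user/%s/%s" % (user_id, list_name)
    else:
        url = "http://rss.imdb.com/list/%s" % list_name
    rss = feedparser.parse(url)
    for entry in rss.entries:
        # stand-in for flexget.utils.imdb.extract_id
        match = re.search(r"tt\d{7,8}", entry.link)
        yield entry.title, entry.link, match.group(0) if match else None
```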
gh_patches_debug_36150 | rasdani/github-patches | git_diff | prowler-cloud__prowler-2736 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ecr_repositories_scan_vulnerabilities_in_latest_image: Configure level
### New feature motivation
Hi, is it possible to configure the severity level from which the test should fail?
AWS tags some findings as medium, which I might want to ignore, but of course I don't want to mute critical findings for the image.
### Solution Proposed
none
### Describe alternatives you've considered
none
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py`
Content:
```
1 from prowler.lib.check.models import Check, Check_Report_AWS
2 from prowler.providers.aws.services.ecr.ecr_client import ecr_client
3
4
5 class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
6 def execute(self):
7 findings = []
8 for registry in ecr_client.registries.values():
9 for repository in registry.repositories:
10 # First check if the repository has images
11 if len(repository.images_details) > 0:
12 # We only want to check the latest image pushed
13 image = repository.images_details[-1]
14
15 report = Check_Report_AWS(self.metadata())
16 report.region = repository.region
17 report.resource_id = repository.name
18 report.resource_arn = repository.arn
19 report.resource_tags = repository.tags
20 report.status = "PASS"
21 report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned without findings."
22 if not image.scan_findings_status:
23 report.status = "FAIL"
24 report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} without a scan."
25 elif image.scan_findings_status == "FAILED":
26 report.status = "FAIL"
27 report.status_extended = (
28 f"ECR repository {repository.name} with scan status FAILED."
29 )
30 elif image.scan_findings_status != "FAILED":
31 if image.scan_findings_severity_count and (
32 image.scan_findings_severity_count.critical
33 or image.scan_findings_severity_count.high
34 or image.scan_findings_severity_count.medium
35 ):
36 report.status = "FAIL"
37 report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}, MEDIUM->{image.scan_findings_severity_count.medium}."
38
39 findings.append(report)
40
41 return findings
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py
--- a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py
+++ b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py
@@ -5,6 +5,12 @@
class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
def execute(self):
findings = []
+
+ # Get minimun severity to report
+ minimum_severity = ecr_client.audit_config.get(
+ "ecr_repository_vulnerability_minimum_severity", "MEDIUM"
+ )
+
for registry in ecr_client.registries.values():
for repository in registry.repositories:
# First check if the repository has images
@@ -27,8 +33,23 @@
report.status_extended = (
f"ECR repository {repository.name} with scan status FAILED."
)
- elif image.scan_findings_status != "FAILED":
- if image.scan_findings_severity_count and (
+ elif (
+ image.scan_findings_status != "FAILED"
+ and image.scan_findings_severity_count
+ ):
+ if (
+ minimum_severity == "CRITICAL"
+ and image.scan_findings_severity_count.critical
+ ):
+ report.status = "FAIL"
+ report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}."
+ elif minimum_severity == "HIGH" and (
+ image.scan_findings_severity_count.critical
+ or image.scan_findings_severity_count.high
+ ):
+ report.status = "FAIL"
+ report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}."
+ elif minimum_severity == "MEDIUM" and (
image.scan_findings_severity_count.critical
or image.scan_findings_severity_count.high
or image.scan_findings_severity_count.medium
| {"golden_diff": "diff --git a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py\n--- a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py\n+++ b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py\n@@ -5,6 +5,12 @@\n class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):\n def execute(self):\n findings = []\n+\n+ # Get minimun severity to report\n+ minimum_severity = ecr_client.audit_config.get(\n+ \"ecr_repository_vulnerability_minimum_severity\", \"MEDIUM\"\n+ )\n+\n for registry in ecr_client.registries.values():\n for repository in registry.repositories:\n # First check if the repository has images\n@@ -27,8 +33,23 @@\n report.status_extended = (\n f\"ECR repository {repository.name} with scan status FAILED.\"\n )\n- elif image.scan_findings_status != \"FAILED\":\n- if image.scan_findings_severity_count and (\n+ elif (\n+ image.scan_findings_status != \"FAILED\"\n+ and image.scan_findings_severity_count\n+ ):\n+ if (\n+ minimum_severity == \"CRITICAL\"\n+ and image.scan_findings_severity_count.critical\n+ ):\n+ report.status = \"FAIL\"\n+ report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}.\"\n+ elif minimum_severity == \"HIGH\" and (\n+ image.scan_findings_severity_count.critical\n+ or image.scan_findings_severity_count.high\n+ ):\n+ report.status = \"FAIL\"\n+ report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}.\"\n+ elif minimum_severity == \"MEDIUM\" and (\n image.scan_findings_severity_count.critical\n or image.scan_findings_severity_count.high\n or image.scan_findings_severity_count.medium\n", "issue": "ecr_repositories_scan_vulnerabilities_in_latest_image: Configure level\n### New feature motivation\n\nHi, is it possible to configure the level from which the test shall fail?\r\nAWS tags some findings as medium which I might want to ignore, but of course I don't want to mute critical findings for the image.\n\n### Solution Proposed\n\nnone\n\n### Describe alternatives you've considered\n\nnone\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "from prowler.lib.check.models import Check, Check_Report_AWS\nfrom prowler.providers.aws.services.ecr.ecr_client import ecr_client\n\n\nclass ecr_repositories_scan_vulnerabilities_in_latest_image(Check):\n def execute(self):\n findings = []\n for registry in ecr_client.registries.values():\n for repository in registry.repositories:\n # First check if the repository has images\n if len(repository.images_details) > 0:\n # We only want to check the latest image pushed\n image = repository.images_details[-1]\n\n report = Check_Report_AWS(self.metadata())\n report.region = repository.region\n report.resource_id = repository.name\n report.resource_arn = repository.arn\n report.resource_tags = repository.tags\n report.status = \"PASS\"\n report.status_extended = f\"ECR repository {repository.name} 
has imageTag {image.latest_tag} scanned without findings.\"\n if not image.scan_findings_status:\n report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} without a scan.\"\n elif image.scan_findings_status == \"FAILED\":\n report.status = \"FAIL\"\n report.status_extended = (\n f\"ECR repository {repository.name} with scan status FAILED.\"\n )\n elif image.scan_findings_status != \"FAILED\":\n if image.scan_findings_severity_count and (\n image.scan_findings_severity_count.critical\n or image.scan_findings_severity_count.high\n or image.scan_findings_severity_count.medium\n ):\n report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}, MEDIUM->{image.scan_findings_severity_count.medium}.\"\n\n findings.append(report)\n\n return findings\n", "path": "prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py"}], "after_files": [{"content": "from prowler.lib.check.models import Check, Check_Report_AWS\nfrom prowler.providers.aws.services.ecr.ecr_client import ecr_client\n\n\nclass ecr_repositories_scan_vulnerabilities_in_latest_image(Check):\n def execute(self):\n findings = []\n\n # Get minimun severity to report\n minimum_severity = ecr_client.audit_config.get(\n \"ecr_repository_vulnerability_minimum_severity\", \"MEDIUM\"\n )\n\n for registry in ecr_client.registries.values():\n for repository in registry.repositories:\n # First check if the repository has images\n if len(repository.images_details) > 0:\n # We only want to check the latest image pushed\n image = repository.images_details[-1]\n\n report = Check_Report_AWS(self.metadata())\n report.region = repository.region\n report.resource_id = repository.name\n report.resource_arn = repository.arn\n report.resource_tags = repository.tags\n report.status = \"PASS\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned without findings.\"\n if not image.scan_findings_status:\n report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} without a scan.\"\n elif image.scan_findings_status == \"FAILED\":\n report.status = \"FAIL\"\n report.status_extended = (\n f\"ECR repository {repository.name} with scan status FAILED.\"\n )\n elif (\n image.scan_findings_status != \"FAILED\"\n and image.scan_findings_severity_count\n ):\n if (\n minimum_severity == \"CRITICAL\"\n and image.scan_findings_severity_count.critical\n ):\n report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}.\"\n elif minimum_severity == \"HIGH\" and (\n image.scan_findings_severity_count.critical\n or image.scan_findings_severity_count.high\n ):\n report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}.\"\n elif minimum_severity == \"MEDIUM\" and (\n image.scan_findings_severity_count.critical\n or image.scan_findings_severity_count.high\n or image.scan_findings_severity_count.medium\n ):\n 
report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}, MEDIUM->{image.scan_findings_severity_count.medium}.\"\n\n findings.append(report)\n\n return findings\n", "path": "prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py"}]} | 861 | 559 |
gh_patches_debug_13964 | rasdani/github-patches | git_diff | azavea__raster-vision-701 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Predict zero for nodata pixels on semantic segmentation
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rastervision/task/semantic_segmentation.py`
Content:
```
1 from typing import List
2 import logging
3
4 import numpy as np
5
6 from .task import Task
7 from rastervision.core.box import Box
8 from rastervision.data.scene import Scene
9 from rastervision.data.label import SemanticSegmentationLabels
10
11 log = logging.getLogger(__name__)
12
13
14 def get_random_sample_train_windows(label_store, chip_size, class_map, extent,
15 chip_options, filter_windows):
16 prob = chip_options.negative_survival_probability
17 target_count_threshold = chip_options.target_count_threshold
18 target_classes = chip_options.target_classes
19 chips_per_scene = chip_options.chips_per_scene
20
21 if not target_classes:
22 all_class_ids = [item.id for item in class_map.get_items()]
23 target_classes = all_class_ids
24
25 windows = []
26 attempts = 0
27 while (attempts < chips_per_scene):
28 candidate_window = extent.make_random_square(chip_size)
29 if not filter_windows([candidate_window]):
30 continue
31 attempts = attempts + 1
32
33 if (prob >= 1.0):
34 windows.append(candidate_window)
35 elif attempts == chips_per_scene and len(windows) == 0:
36 windows.append(candidate_window)
37 else:
38 good = label_store.enough_target_pixels(
39 candidate_window, target_count_threshold, target_classes)
40 if good or (np.random.rand() < prob):
41 windows.append(candidate_window)
42
43 return windows
44
45
46 class SemanticSegmentation(Task):
47 """Task-derived type that implements the semantic segmentation task."""
48
49 def get_train_windows(self, scene: Scene) -> List[Box]:
50 """Get training windows covering a scene.
51
52 Args:
53 scene: The scene over-which windows are to be generated.
54
55 Returns:
56 A list of windows, list(Box)
57
58 """
59
60 def filter_windows(windows):
61 if scene.aoi_polygons:
62 windows = Box.filter_by_aoi(windows, scene.aoi_polygons)
63 return windows
64
65 raster_source = scene.raster_source
66 extent = raster_source.get_extent()
67 label_store = scene.ground_truth_label_source
68 chip_size = self.config.chip_size
69
70 chip_options = self.config.chip_options
71
72 if chip_options.window_method == 'random_sample':
73 return get_random_sample_train_windows(
74 label_store, chip_size, self.config.class_map, extent,
75 chip_options, filter_windows)
76 elif chip_options.window_method == 'sliding':
77 stride = chip_options.stride
78 if stride is None:
79 stride = chip_size / 2
80
81 return list(
82 filter_windows((extent.get_windows(chip_size, stride))))
83
84 def get_train_labels(self, window: Box, scene: Scene) -> np.ndarray:
85 """Get the training labels for the given window in the given scene.
86
87 Args:
88 window: The window over-which the labels are to be
89 retrieved.
90 scene: The scene from-which the window of labels is to be
91 extracted.
92
93 Returns:
94 An appropriately-shaped 2d np.ndarray with the labels
95 encoded as packed pixels.
96
97 """
98 label_store = scene.ground_truth_label_source
99 return label_store.get_labels(window)
100
101 def get_predict_windows(self, extent: Box) -> List[Box]:
102 """Get windows over-which predictions will be calculated.
103
104 Args:
105 extent: The overall extent of the area.
106
107 Returns:
108 An sequence of windows.
109
110 """
111 chip_size = self.config.chip_size
112 return extent.get_windows(chip_size, chip_size)
113
114 def post_process_predictions(self, labels, scene):
115 return labels
116
117 def save_debug_predict_image(self, scene, debug_dir_uri):
118 # TODO implement this
119 pass
120
121 def predict_scene(self, scene, tmp_dir):
122 """Predict on a single scene, and return the labels."""
123 log.info('Making predictions for scene')
124 raster_source = scene.raster_source
125 windows = self.get_predict_windows(raster_source.get_extent())
126
127 def label_fn(window):
128 chip = raster_source.get_chip(window)
129 if np.any(chip):
130 chip = raster_source.get_chip(window)
131 labels = self.backend.predict([chip], [window], tmp_dir)
132 label_arr = labels.get_label_arr(window)
133 else:
134 label_arr = np.zeros((window.get_height(), window.get_width()))
135 print('.', end='', flush=True)
136 return label_arr
137
138 return SemanticSegmentationLabels(windows, label_fn)
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rastervision/task/semantic_segmentation.py b/rastervision/task/semantic_segmentation.py
--- a/rastervision/task/semantic_segmentation.py
+++ b/rastervision/task/semantic_segmentation.py
@@ -126,12 +126,12 @@
def label_fn(window):
chip = raster_source.get_chip(window)
- if np.any(chip):
- chip = raster_source.get_chip(window)
- labels = self.backend.predict([chip], [window], tmp_dir)
- label_arr = labels.get_label_arr(window)
- else:
- label_arr = np.zeros((window.get_height(), window.get_width()))
+ labels = self.backend.predict([chip], [window], tmp_dir)
+ label_arr = labels.get_label_arr(window)
+
+ # Set NODATA pixels in imagery to predicted value of 0 (ie. ignore)
+ label_arr[np.sum(chip, axis=2) == 0] = 0
+
print('.', end='', flush=True)
return label_arr
| {"golden_diff": "diff --git a/rastervision/task/semantic_segmentation.py b/rastervision/task/semantic_segmentation.py\n--- a/rastervision/task/semantic_segmentation.py\n+++ b/rastervision/task/semantic_segmentation.py\n@@ -126,12 +126,12 @@\n \n def label_fn(window):\n chip = raster_source.get_chip(window)\n- if np.any(chip):\n- chip = raster_source.get_chip(window)\n- labels = self.backend.predict([chip], [window], tmp_dir)\n- label_arr = labels.get_label_arr(window)\n- else:\n- label_arr = np.zeros((window.get_height(), window.get_width()))\n+ labels = self.backend.predict([chip], [window], tmp_dir)\n+ label_arr = labels.get_label_arr(window)\n+\n+ # Set NODATA pixels in imagery to predicted value of 0 (ie. ignore)\n+ label_arr[np.sum(chip, axis=2) == 0] = 0\n+\n print('.', end='', flush=True)\n return label_arr\n", "issue": "Predict zero for nodata pixels on semantic segmentation\n\n", "before_files": [{"content": "from typing import List\nimport logging\n\nimport numpy as np\n\nfrom .task import Task\nfrom rastervision.core.box import Box\nfrom rastervision.data.scene import Scene\nfrom rastervision.data.label import SemanticSegmentationLabels\n\nlog = logging.getLogger(__name__)\n\n\ndef get_random_sample_train_windows(label_store, chip_size, class_map, extent,\n chip_options, filter_windows):\n prob = chip_options.negative_survival_probability\n target_count_threshold = chip_options.target_count_threshold\n target_classes = chip_options.target_classes\n chips_per_scene = chip_options.chips_per_scene\n\n if not target_classes:\n all_class_ids = [item.id for item in class_map.get_items()]\n target_classes = all_class_ids\n\n windows = []\n attempts = 0\n while (attempts < chips_per_scene):\n candidate_window = extent.make_random_square(chip_size)\n if not filter_windows([candidate_window]):\n continue\n attempts = attempts + 1\n\n if (prob >= 1.0):\n windows.append(candidate_window)\n elif attempts == chips_per_scene and len(windows) == 0:\n windows.append(candidate_window)\n else:\n good = label_store.enough_target_pixels(\n candidate_window, target_count_threshold, target_classes)\n if good or (np.random.rand() < prob):\n windows.append(candidate_window)\n\n return windows\n\n\nclass SemanticSegmentation(Task):\n \"\"\"Task-derived type that implements the semantic segmentation task.\"\"\"\n\n def get_train_windows(self, scene: Scene) -> List[Box]:\n \"\"\"Get training windows covering a scene.\n\n Args:\n scene: The scene over-which windows are to be generated.\n\n Returns:\n A list of windows, list(Box)\n\n \"\"\"\n\n def filter_windows(windows):\n if scene.aoi_polygons:\n windows = Box.filter_by_aoi(windows, scene.aoi_polygons)\n return windows\n\n raster_source = scene.raster_source\n extent = raster_source.get_extent()\n label_store = scene.ground_truth_label_source\n chip_size = self.config.chip_size\n\n chip_options = self.config.chip_options\n\n if chip_options.window_method == 'random_sample':\n return get_random_sample_train_windows(\n label_store, chip_size, self.config.class_map, extent,\n chip_options, filter_windows)\n elif chip_options.window_method == 'sliding':\n stride = chip_options.stride\n if stride is None:\n stride = chip_size / 2\n\n return list(\n filter_windows((extent.get_windows(chip_size, stride))))\n\n def get_train_labels(self, window: Box, scene: Scene) -> np.ndarray:\n \"\"\"Get the training labels for the given window in the given scene.\n\n Args:\n window: The window over-which the labels are to be\n retrieved.\n scene: The scene from-which the 
window of labels is to be\n extracted.\n\n Returns:\n An appropriately-shaped 2d np.ndarray with the labels\n encoded as packed pixels.\n\n \"\"\"\n label_store = scene.ground_truth_label_source\n return label_store.get_labels(window)\n\n def get_predict_windows(self, extent: Box) -> List[Box]:\n \"\"\"Get windows over-which predictions will be calculated.\n\n Args:\n extent: The overall extent of the area.\n\n Returns:\n An sequence of windows.\n\n \"\"\"\n chip_size = self.config.chip_size\n return extent.get_windows(chip_size, chip_size)\n\n def post_process_predictions(self, labels, scene):\n return labels\n\n def save_debug_predict_image(self, scene, debug_dir_uri):\n # TODO implement this\n pass\n\n def predict_scene(self, scene, tmp_dir):\n \"\"\"Predict on a single scene, and return the labels.\"\"\"\n log.info('Making predictions for scene')\n raster_source = scene.raster_source\n windows = self.get_predict_windows(raster_source.get_extent())\n\n def label_fn(window):\n chip = raster_source.get_chip(window)\n if np.any(chip):\n chip = raster_source.get_chip(window)\n labels = self.backend.predict([chip], [window], tmp_dir)\n label_arr = labels.get_label_arr(window)\n else:\n label_arr = np.zeros((window.get_height(), window.get_width()))\n print('.', end='', flush=True)\n return label_arr\n\n return SemanticSegmentationLabels(windows, label_fn)\n", "path": "rastervision/task/semantic_segmentation.py"}], "after_files": [{"content": "from typing import List\nimport logging\n\nimport numpy as np\n\nfrom .task import Task\nfrom rastervision.core.box import Box\nfrom rastervision.data.scene import Scene\nfrom rastervision.data.label import SemanticSegmentationLabels\n\nlog = logging.getLogger(__name__)\n\n\ndef get_random_sample_train_windows(label_store, chip_size, class_map, extent,\n chip_options, filter_windows):\n prob = chip_options.negative_survival_probability\n target_count_threshold = chip_options.target_count_threshold\n target_classes = chip_options.target_classes\n chips_per_scene = chip_options.chips_per_scene\n\n if not target_classes:\n all_class_ids = [item.id for item in class_map.get_items()]\n target_classes = all_class_ids\n\n windows = []\n attempts = 0\n while (attempts < chips_per_scene):\n candidate_window = extent.make_random_square(chip_size)\n if not filter_windows([candidate_window]):\n continue\n attempts = attempts + 1\n\n if (prob >= 1.0):\n windows.append(candidate_window)\n elif attempts == chips_per_scene and len(windows) == 0:\n windows.append(candidate_window)\n else:\n good = label_store.enough_target_pixels(\n candidate_window, target_count_threshold, target_classes)\n if good or (np.random.rand() < prob):\n windows.append(candidate_window)\n\n return windows\n\n\nclass SemanticSegmentation(Task):\n \"\"\"Task-derived type that implements the semantic segmentation task.\"\"\"\n\n def get_train_windows(self, scene: Scene) -> List[Box]:\n \"\"\"Get training windows covering a scene.\n\n Args:\n scene: The scene over-which windows are to be generated.\n\n Returns:\n A list of windows, list(Box)\n\n \"\"\"\n\n def filter_windows(windows):\n if scene.aoi_polygons:\n windows = Box.filter_by_aoi(windows, scene.aoi_polygons)\n return windows\n\n raster_source = scene.raster_source\n extent = raster_source.get_extent()\n label_store = scene.ground_truth_label_source\n chip_size = self.config.chip_size\n\n chip_options = self.config.chip_options\n\n if chip_options.window_method == 'random_sample':\n return get_random_sample_train_windows(\n label_store, 
chip_size, self.config.class_map, extent,\n chip_options, filter_windows)\n elif chip_options.window_method == 'sliding':\n stride = chip_options.stride\n if stride is None:\n stride = chip_size / 2\n\n return list(\n filter_windows((extent.get_windows(chip_size, stride))))\n\n def get_train_labels(self, window: Box, scene: Scene) -> np.ndarray:\n \"\"\"Get the training labels for the given window in the given scene.\n\n Args:\n window: The window over-which the labels are to be\n retrieved.\n scene: The scene from-which the window of labels is to be\n extracted.\n\n Returns:\n An appropriately-shaped 2d np.ndarray with the labels\n encoded as packed pixels.\n\n \"\"\"\n label_store = scene.ground_truth_label_source\n return label_store.get_labels(window)\n\n def get_predict_windows(self, extent: Box) -> List[Box]:\n \"\"\"Get windows over-which predictions will be calculated.\n\n Args:\n extent: The overall extent of the area.\n\n Returns:\n An sequence of windows.\n\n \"\"\"\n chip_size = self.config.chip_size\n return extent.get_windows(chip_size, chip_size)\n\n def post_process_predictions(self, labels, scene):\n return labels\n\n def save_debug_predict_image(self, scene, debug_dir_uri):\n # TODO implement this\n pass\n\n def predict_scene(self, scene, tmp_dir):\n \"\"\"Predict on a single scene, and return the labels.\"\"\"\n log.info('Making predictions for scene')\n raster_source = scene.raster_source\n windows = self.get_predict_windows(raster_source.get_extent())\n\n def label_fn(window):\n chip = raster_source.get_chip(window)\n labels = self.backend.predict([chip], [window], tmp_dir)\n label_arr = labels.get_label_arr(window)\n\n # Set NODATA pixels in imagery to predicted value of 0 (ie. ignore)\n label_arr[np.sum(chip, axis=2) == 0] = 0\n\n print('.', end='', flush=True)\n return label_arr\n\n return SemanticSegmentationLabels(windows, label_fn)\n", "path": "rastervision/task/semantic_segmentation.py"}]} | 1,538 | 231 |
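The one-line core of this fix is a NumPy masking idiom: a NODATA pixel is zero in every band, so summing over the channel axis yields a 2-D boolean mask that can force those predictions to the ignore class. A self-contained illustration, with random data standing in for a real chip and prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
chip = rng.integers(1, 255, size=(4, 4, 3))   # H x W x bands, all non-zero
chip[0, :, :] = 0                             # make the first row NODATA
label_arr = rng.integers(1, 5, size=(4, 4))   # fake predictions, classes 1-4

# Same masking as the patch: predict 0 (ignore) wherever every band is 0.
label_arr[np.sum(chip, axis=2) == 0] = 0
print(label_arr[0])  # [0 0 0 0]
```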